id | url | title | text | topic | section | sublist
|---|---|---|---|---|---|---|
366810 | https://en.wikipedia.org/wiki/Knot%20%28unit%29 | Knot (unit) | The knot is a unit of speed equal to one nautical mile per hour, exactly 1.852 km/h (approximately 1.151 mph or 0.514 m/s). The ISO standard symbol for the knot is kn. The same symbol is preferred by the Institute of Electrical and Electronics Engineers (IEEE), while kt is also common, especially in aviation, where it is the form recommended by the International Civil Aviation Organization (ICAO). The knot is a non-SI unit. The knot is used in meteorology, and in maritime and air navigation. A vessel travelling at 1 knot along a meridian travels approximately one minute of geographic latitude in one hour.
Definitions
1 international knot =
1 nautical mile per hour (by definition),
1.852 kilometres per hour (exactly),
0.514 metres per second (approximately),
1.151 miles per hour (approximately),
1.688 feet per second (approximately),
20.25 inches per second (approximately).
The length of the internationally agreed nautical mile is exactly 1,852 m. The US adopted the international definition in 1954, having previously used the US nautical mile (1,853.248 m). The UK adopted the international nautical mile definition in 1970, having previously used the UK Admiralty nautical mile (6,080 ft, about 1,853.18 m).
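The conversions above follow directly from the exact 1.852 km/h definition. The short Python sketch below is illustrative only; it is not part of the article, and the constant and function names are arbitrary choices:

```python
# Illustrative sketch: converting knots to other units from the exact
# definition 1 kn = 1.852 km/h. Names are arbitrary, not from the article.
KMH_PER_KNOT = 1.852            # exact, by international definition
KM_PER_STATUTE_MILE = 1.609344  # exact

def knots_to_kmh(kn: float) -> float:
    return kn * KMH_PER_KNOT

def knots_to_ms(kn: float) -> float:
    return knots_to_kmh(kn) * 1000.0 / 3600.0       # ~0.514 m/s per knot

def knots_to_mph(kn: float) -> float:
    return knots_to_kmh(kn) / KM_PER_STATUTE_MILE   # ~1.151 mph per knot

print(knots_to_kmh(1.0), knots_to_ms(1.0), knots_to_mph(1.0))
# 1.852 0.5144... 1.1507...
```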
Usage
The speeds of vessels relative to the fluids in which they travel (boat speeds and air speeds) can be measured in knots. If so, for consistency, the speeds of navigational fluids (ocean currents, tidal streams, river currents and wind speeds) are also measured in knots. Thus, speed over the ground (SOG; ground speed (GS) in aircraft) and rate of progress towards a distant point ("velocity made good", VMG) can also be given in knots. Since 1979, the International Civil Aviation Organization has listed the knot as permitted for temporary use in aviation, but no end date for the temporary period has been agreed.
Origin
Until the mid-19th century, vessel speed at sea was measured using a chip log. This consisted of a wooden panel, attached by line to a reel, and weighted on one edge to float perpendicularly to the water surface and thus present substantial resistance to the water moving around it. The chip log was cast over the stern of the moving vessel and the line allowed to pay out. Knots tied at a distance of 47 feet 3 inches (14.4 m) from each other passed through a sailor's fingers, while another sailor used a 30-second sand-glass (28-second sand-glass is the currently accepted timing) to time the operation. The knot count would be reported and used in the sailing master's dead reckoning and navigation. This method gives a value for the knot of 20.25 in/s, or 1.85166 km/h. The difference from the modern definition is less than 0.02%.
Derivation of knot spacing:
1,852 m per hour is about 0.514 m per second, so in 28 seconds that is approximately 14.4 metres per knot.
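The same arithmetic can be checked numerically. The snippet below is an illustrative sketch assuming the 28-second glass and the traditional 47 ft 3 in spacing; it is not taken from the article:

```python
# Illustrative check of the chip-log figures (28-second glass assumed).
NAUTICAL_MILE_M = 1852.0
GLASS_SECONDS = 28.0

spacing_m = NAUTICAL_MILE_M * GLASS_SECONDS / 3600.0
print(round(spacing_m, 2))            # ~14.4 m between knots on the log line

spacing_ft = 47.25                    # traditional 47 ft 3 in spacing
implied_kmh = spacing_ft * 0.3048 / GLASS_SECONDS * 3.6
print(round(implied_kmh, 5))          # ~1.85166 km/h, within ~0.02% of 1.852
```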
Modern use
Although the unit knot does not fit within the SI system, its retention for nautical and aviation use is important because the length of a nautical mile, upon which the knot is based, is closely related to the longitude/latitude geographic coordinate system. As a result, nautical miles and knots are convenient units to use when navigating an aircraft or ship.
On a standard nautical chart using Mercator projection, the horizontal (East–West) scale varies with latitude. On a chart of the North Atlantic, the scale varies by a factor of two from Florida to Greenland. A single graphic scale, of the sort on many maps, would therefore be useless on such a chart. Since the length of a nautical mile, for practical purposes, is equivalent to about a minute of latitude, a distance in nautical miles on a chart can easily be measured by using dividers and the latitude scales on the sides of the chart. Recent British Admiralty charts have a latitude scale down the middle to make this even easier.
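The scale variation follows from the Mercator projection itself, in which the east-west stretch grows roughly as the secant of latitude. The snippet below is an illustrative sketch using rough example latitudes for Florida and Greenland (values chosen for illustration, not taken from the article):

```python
# Illustrative sketch: Mercator east-west stretch grows as 1/cos(latitude),
# so a single graphic scale cannot serve a chart spanning many latitudes.
import math

def mercator_stretch(lat_deg: float) -> float:
    return 1.0 / math.cos(math.radians(lat_deg))

florida_lat, greenland_lat = 25.0, 60.0   # rough example latitudes
ratio = mercator_stretch(greenland_lat) / mercator_stretch(florida_lat)
print(round(ratio, 2))                    # ~1.8, i.e. roughly a factor of two
```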
Speed is sometimes incorrectly expressed as "knots per hour", which would mean "nautical miles per hour per hour" and thus would refer to acceleration.
Aeronautical terms
Prior to 1969, airworthiness standards for civil aircraft in the United States Federal Aviation Regulations specified that distances were to be in statute miles, and speeds in miles per hour. In 1969, these standards were progressively amended to specify that distances were to be in nautical miles, and speeds in knots.
The following abbreviations are used to distinguish between various measurements of airspeed:
TAS is "knots true airspeed", the airspeed of an aircraft relative to undisturbed air
KIAS is "knots indicated airspeed", the speed shown on an aircraft's pitot-static airspeed indicator
CAS is "knots calibrated airspeed", the indicated airspeed corrected for position error and instrument error
EAS is "knots equivalent airspeed", the calibrated airspeed corrected for adiabatic compressible flow for the particular altitude
The indicated airspeed is close to the true airspeed only at sea level in standard conditions and at low speeds. At 11,000 m (36,000 ft), an indicated airspeed of 300 kn may correspond to a true airspeed of 500 kn in standard conditions.
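A rough sense of why indicated and true airspeed diverge with altitude can be sketched with the International Standard Atmosphere density ratio and the approximation TAS ≈ EAS / sqrt(sigma). This ignores compressibility and instrument/position error (both significant at 300 kn), so the numbers only show the trend; the code is an illustration under those assumptions, not the article's method:

```python
# Rough sketch: true airspeed from equivalent airspeed using the ISA
# density ratio. Compressibility is ignored, so this only shows the trend.
import math

def isa_density_ratio(alt_m: float) -> float:
    # Tropospheric ISA (valid up to ~11 km): sigma = (T/T0)**(g/(L*R) - 1)
    T0, L, g, R = 288.15, 0.0065, 9.80665, 287.053
    T = T0 - L * alt_m
    return (T / T0) ** (g / (L * R) - 1.0)

def true_airspeed_kn(eas_kn: float, alt_m: float) -> float:
    return eas_kn / math.sqrt(isa_density_ratio(alt_m))

print(round(true_airspeed_kn(300.0, 11000.0)))   # roughly 550 kn near the tropopause
```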
| Physical sciences | Velocity | null |
367399 | https://en.wikipedia.org/wiki/Safrole | Safrole | Safrole is an organic compound with the formula CH2O2C6H3CH2CH=CH2. It is a colorless oily liquid, although impure samples can appear yellow. A member of the phenylpropanoid family of natural products, it is found in sassafras plants, among others. Small amounts are found in a wide variety of plants, where it functions as a natural antifeedant. Ocotea pretiosa, which grows in Brazil, and Sassafras albidum, which grows in eastern North America, are the main natural sources of safrole. It has a characteristic "sweet-shop" aroma.
It is a precursor in the synthesis of the insecticide synergist piperonyl butoxide, the fragrance piperonal via isosafrole, and the empathogenic/entactogenic substance MDMA.
History
Safrole was obtained from a number of plants, but especially from the sassafras tree (Sassafras albidum), which is native to North America, and from Japanese star anise (Illicium anisatum, called shikimi in Japan). In 1844, the French chemist Édouard Saint-Èvre (1817–1879) determined safrole's empirical formula. In 1869, the French chemists Édouard Grimaux (1835–1900) and J. Ruotte investigated and named safrole. They observed its reaction with bromine, suggesting the presence of an allyl group. By 1884, the German chemist Theodor Poleck (1821–1906) suggested that safrole was a derivative of benzene, to which two oxygen atoms were joined as epoxides (cyclic ethers).
In 1885, the Dutch chemist Johann Frederik Eijkman (1851–1915) investigated shikimol, the essential oil that is obtained from Japanese star anise, and he found that, upon oxidation, shikimol formed piperonylic acid, whose basic structure had been determined in 1871 by the German chemist Wilhelm Rudolph Fittig (1835–1910) and his student, the American chemist Ira Remsen (1846–1927). Thus Eijkman inferred the correct basic structure for shikimol. He also noted that shikimol and safrole had the same empirical formula and had other similar properties, and thus he suggested that they were probably identical. In 1886, Poleck showed that upon oxidation, safrole also formed piperonylic acid, and thus shikimol and safrole were indeed identical. It remained to be determined whether the molecule's C3H5 group was a propenyl group (R−CH=CH−CH3) or an allyl group (R−CH2−CH=CH2). In 1888, the German chemist Julius Wilhelm Brühl (1850–1911) determined that the C3H5 group was an allyl group.
Natural occurrence
Safrole is the principal component of brown camphor oil made from Ocotea pretiosa, a plant growing in Brazil, and sassafras oil made from Sassafras albidum.
In the United States, commercially available culinary sassafras oil is usually devoid of safrole due to a rule passed by the U.S. FDA in 1960.
Safrole can be obtained through natural extraction from Sassafras albidum and Ocotea cymbarum. Sassafras oil, for example, is obtained by steam distillation of the root bark of the sassafras tree. The resulting steam-distilled product contains about 90% safrole by weight. The oil is dried by mixing it with a small amount of anhydrous calcium chloride. After filtering off the calcium chloride, the oil is distilled at 100 °C under reduced pressure, or frozen to crystallize the safrole out. This technique works with other oils in which safrole is present as well.
Safrole is typically extracted from the root-bark or the fruit of Sassafras albidum (native to eastern North America) in the form of sassafras oil, or from Ocotea odorifera, a Brazilian species. Safrole is also present in certain essential oils and in brown camphor oil, which is present in small amounts in many plants. Safrole can be found in anise, nutmeg, cinnamon, and black pepper. Safrole can be detected in undiluted liquid beverages and pharmaceutical preparations by high-performance liquid chromatography.
Applications
Safrole is a member of the methylenedioxybenzene group, of which many compounds are used as insecticide synergists; for example, safrole is used as a precursor in the synthesis of the insecticide piperonyl butoxide. Safrole is also used as a precursor in the synthesis of the drug ecstasy (MDMA, 3,4-methylenedioxymethamphetamine). Before safrole was banned by the US FDA in 1960 for use in food, it was used as a food flavor for its characteristic 'candy-shop' aroma. It was used as an additive in root beer, chewing gum, toothpaste, soaps, and certain pharmaceutical preparations.
Safrole exhibits antibiotic and anti-angiogenic functions.
Synthesis
It can be synthesized from catechol first by conversion to methylenedioxybenzene, which is brominated and coupled with allyl bromide.
Safrole is a versatile precursor to many compounds. Examples are N-acylarylhydrazones, isosters, aryl-sulfonamide derivatives, acidic sulfonylhydrazone derivatives, benzothiazine derivatives, and many more.
Isosafrole
Isosafrole is produced synthetically from safrole. It is not found in nature. Isosafrole comes in two forms, trans-isosafrole and cis-isosafrole. Isosafrole is used as a precursor for the psychoactive drug MDMA (ecstasy). When safrole is metabolized, several metabolites can be identified. Some of these metabolites have been shown to exhibit toxicological effects, such as 1′-hydroxysafrole and 3′-hydroxysafrole in rats. Further metabolites of safrole that have been found in the urine of both rats and humans include 1,2-dihydroxy-4-allylbenzene or 1(2)-methoxy-2(1)hydroxy-4-allylbenzene.
Metabolism
Safrole can undergo many forms of metabolism. The two major routes are the oxidation of the allyl side chain and the oxidation of the methylenedioxy group. The oxidation of the allyl side chain is mediated by a cytochrome P450 complex, which transforms safrole into 1′-hydroxysafrole. The newly formed 1′-hydroxysafrole undergoes a phase II drug metabolism reaction with a sulfotransferase enzyme to create 1′-sulfoxysafrole, which can form DNA adducts. A different oxidation pathway of the allyl side chain can form safrole epoxide. So far, this has only been found in rats and guinea pigs. The epoxide is a minor metabolite owing to its slow formation and further metabolism. An epoxide hydratase enzyme acts on the epoxide to form a dihydrodiol, which can be excreted in urine.
The metabolism of safrole through the oxidation of the methylenedioxy group proceeds via the cleavage of that group. This results in two major metabolites: allylcatechol and its isomer, propenylcatechol. Eugenol is a minor metabolite of safrole in humans, mice, and rats. The intact allyl side chain of allylcatechol may then be oxidized to yield 2′,3′-epoxypropylcatechol. This can serve as a substrate for an epoxide hydratase enzyme, which hydrates the 2′,3′-epoxypropylcatechol to 2′,3′-dihydroxypropylcatechol. This new compound can be oxidized to form propionic acid (PPA), which is a substance that is related to an increase in oxidative stress and glutathione S-transferase activity. PPA also causes a decrease in glutathione and glutathione peroxidase activity. The epoxide of allylcatechol may also be generated from the cleavage of the methylenedioxy group of the safrole epoxide. The cleavage of the methylenedioxy ring and the metabolism of the allyl group involve hepatic microsomal mixed-function oxidases.
Toxicity
Toxicological studies have shown that safrole is a weak hepatocarcinogen at higher doses in rats and mice. Safrole requires metabolic activation before exhibiting toxicological effects. Metabolic conversion of the allyl group in safrole is able to produce intermediates which are directly capable of binding covalently with DNA and proteins. Metabolism of the methylenedioxy group to a carbene allows the molecule to form ligand complexes with cytochrome P450 and P448. The formation of this complex leads to lower amounts of available free cytochrome P450. Safrole can also directly bind to cytochrome P450, leading to competitive inhibition. These two mechanisms result in lowered mixed function oxidase activity.
Furthermore, because of the altered structural and functional properties of cytochrome P450, loss of ribosomes which are attached to the endoplasmatic reticulum through cytochrome P450 may occur. The allyl group thus directly contributes to mutagenicity, while the methylenedioxy group is associated with changes in the cytochrome P450 system and epigenetic aspects of carcinogenicity. In rats, safrole and related compounds produced both benign and malignant tumors after intake through the mouth. Changes in the liver are also observed through the enlargement of liver cells and cell death.
In the United States, it was once widely used as a food additive in root beer, sassafras tea, and other common goods, but was banned for human consumption by the FDA after studies in the 1960s suggested that safrole was carcinogenic, causing permanent liver damage in rats; food products sold there purporting to contain sassafras instead contain a safrole-free sassafras extract. Safrole is also banned for use in soap and perfumes by the International Fragrance Association.
According to a 1977 study of the metabolites of safrole in both rats and humans, two carcinogenic metabolites of safrole found in the urine of rats, 1′-hydroxysafrole and 3′-hydroxyisosafrole, were not found in human urine. The European Commission on Health and Consumer Protection assumes safrole to be genotoxic and carcinogenic. It occurs naturally in a variety of spices, such as cinnamon, nutmeg, and black pepper, and herbs such as basil. In that role, safrole, like many naturally occurring compounds, may have a small but measurable ability to induce cancer in rodents. Despite this, the effects in humans were estimated by the Lawrence Berkeley National Laboratory to be similar to risks posed by breathing indoor air or drinking municipally supplied water.
Adverse effects
Besides being a hepatocarcinogen, safrole exhibits further adverse effects in that it will induce the formation of hepatic lipid hydroperoxides. Safrole also inhibits the defensive function of neutrophils against bacteria. In addition to the inhibition of the defensive function of neutrophils, it has also been discovered that safrole interferes with the formation of superoxides by neutrophils. Furthermore, safrole oxide, a metabolite of safrole, has a negative effect on the central nervous system. Safrole oxide inhibits the expression of integrin β4/SOD, which leads to apoptosis of the nerve cells.
Use in MDMA manufacture
Safrole is listed as a Table I precursor under the United Nations Convention Against Illicit Traffic in Narcotic Drugs and Psychotropic Substances.
Due to their role in the manufacture of MDMA, safrole, isosafrole, and piperonal are Category I precursors under regulation no. 273/2004 of the European Community. In the United States, safrole is currently a List I chemical.
The root bark of American sassafras contains a low percentage of steam-volatile oil, which is typically 75% safrole. Attempts to refine safrole from sassafras bark in mass quantities are generally not economically viable due to low yield and high effort. However, smaller quantities can be extracted quite easily via steam distillation (about 10% of dry sassafras root bark by mass, or about 2% of fresh bark). Demand for safrole is causing rapid and illicit harvesting of the Cinnamomum parthenoxylon tree in Southeast Asia, in particular the Cardamom Mountains in Cambodia. However, it is not clear what proportion of illicitly harvested safrole is going toward MDMA production, as over 90% of the global safrole supply is used to manufacture pesticides, fragrances, and other chemicals. Sustainable harvesting of safrole is possible from leaves and stems of certain plants, including the roots of camphor seedlings.
| Physical sciences | Phenylpropanoids | Chemistry |
367461 | https://en.wikipedia.org/wiki/Pholcidae | Pholcidae | The Pholcidae are a family of araneomorph spiders. The family contains more than 1,800 individual species of pholcids, including those commonly known as cellar spider, daddy long-legs spider, carpenter spider, daddy long-legger, vibrating spider, gyrating spider, long daddy, and angel spider. The family, first described by Carl Ludwig Koch in 1850, is divided into 94 genera.
The common name "daddy long-legs" is used for several species, especially Pholcus phalangioides, but is also the common name for several other arthropod groups, including harvestmen and crane flies.
Appearance
Pholcids have extremely long and thin legs with flexible tarsi. They can be distinguished from other long-legged spiders by the eye arrangement: Pholcidae have two groups of three eyes each, and there may be a pair of small eyes in between them. Most have this middle pair present for a total of eight eyes, but some genera (e.g. Modisimus, Spermophora, Spermophorides) lack this pair and have a total of six eyes. The body is often whitish or grey in colour. Harvestmen (Opiliones), which share the name "daddy longlegs", also have long and thin legs but have only one pair of eyes, and their body appears to be a single segment.
Like other spiders, pholcids have two body segments, the prosoma and opisthosoma. The prosoma may be evenly domed (e.g. Pholcus, Micropholcus) or have a furrow or pit in the middle. The opisthosoma may be long and cylindrical (e.g. Pholcus, Holocnemus), long and pointed dorso-posteriorly (e.g. Crossopriza) or short (e.g. Micropholcus).
There is variation in size, ranging from just over 1 millimetre (Spermophorides lascars) to 11 mm (Artema atlanta) in body length.
Habitat
Pholcids are found on every continent except Antarctica. Pholcids hang inverted in their messy and irregular-shaped webs. These webs are constructed in dark and damp recesses such as in caves, under rocks and loose bark, and in abandoned mammal burrows. In areas of human habitation, pholcids construct webs in undisturbed areas in buildings such as high corners, attics and cellars, hence the common name "cellar spider".
Behavior
Trapping
The web of pholcids has no adhesive properties and instead relies on its irregular structure to trap prey. When pholcid spiders detect prey within their webs, they quickly envelop it with silk-like material. The prey may be eaten immediately or stored for later. When finished feeding, they clean the web by unhooking the remains of the prey and letting the carcass drop from the web. They are not aggressive towards humans.
Threat response
Some species of Pholcidae exhibit a threat response when disturbed by a touch to the web or entangled large prey. The arachnid responds by vibrating rapidly in a gyrating motion in its web, which may sometimes fall into a circular rhythm. It may oscillate in tune with the elasticity of the web causing an oscillation larger than the motion of the spider's legs. While other species of spider exhibit this behaviour, such behavior by the Pholcidae species has led to these spiders sometimes being called "vibrating spiders". There are several proposed reasons for this threat response. The movement may make it difficult for a predator to locate or strike the spider, or may be a signal to an assumed rival to leave. Vibrating may also increase the chances of capturing insects that have just brushed their web and are still hovering nearby, or further entangle prey that may have otherwise been able to free itself. If the spider continues to be disturbed it will retreat into a corner or drop from its web and escape.
Diet
Although they do eat insects, certain species of these spiders invade webs of other spiders to eat the host, the eggs, or the prey. In some cases the spider vibrates the web of other spiders, mimicking the struggle of trapped prey to lure the host closer. Pholcids prey on Tegenaria funnel weaver spiders, and are known to attack and eat redback spiders, huntsman spiders and house spiders.
Pholcids may be beneficial to humans living in regions with dense hobo spider populations as predation on Tegenaria may keep populations in check. They have also been observed to feed on the spider Steatoda nobilis in countries like Ireland and England.
Gait
Pholcus phalangioides often uses an alternating tetrapod gait (first right leg, then second left leg, then third right leg, etc.), which is commonly found in many spider species. However, frequent variations from this pattern have been documented during observations of the spiders' movements.
Misconceptions
There is an urban legend that daddy long-legs spiders have the most potent venom of any spider but that their fangs are either too small or too weak to puncture human skin; the same legend is also repeated of the harvestman and crane fly, also known as daddy long-legs in some regions. This is not true for any of the three. Pholcidae are indeed capable of biting humans, but their venom is not medically significant, and neither harvestmen nor crane flies have any venom or fangs to speak of. Indeed, pholcid spiders do have a short fang structure (called uncate due to its "hooked" shape). Brown recluse spiders also have an uncate fang structure, but are able to deliver medically significant bites.
Possible explanations include: pholcid venom is not toxic to humans; pholcid uncate are smaller than those of brown recluse; or there is a musculature difference between the two arachnids, with recluses, being hunting spiders, possessing stronger muscles for fang penetration. According to Rick Vetter of the University of California, Riverside, the daddy long-legs spider has never harmed a human, and there is no evidence that they are dangerous to humans.
The legend may result from the fact that the daddy long-legs spider preys upon deadly venomous spiders, such as the redback, and other members of the true widow genus Latrodectus. To the extent that such arachnological information was known to the general public, it was perhaps thought that if the daddy long-legs spider could kill a spider capable of delivering fatal bites to humans, then it must be more venomous, and the uncate fangs were regarded as prohibiting it from killing people. In reality, it is able to cast lengths of silk onto its prey, incapacitating them from a safe distance.
Mythbusters experiment
During 2004, the Discovery Channel television show MythBusters tested the daddy long-legs venom myth in episode 13, "Buried in Concrete". Hosts Jamie Hyneman and Adam Savage first established that the spider's venom was not as toxic as other venoms, after being told about an experiment whereby mice were injected with venom from both a daddy long-legs and a black widow, with the black widow venom producing a much stronger reaction. After measuring the spider's fangs at approximately 0.25 mm, Adam Savage inserted his hand into a container with several daddy-long-legs, and reported that he felt a bite which produced a mild, short-lived burning sensation. The bite did in fact penetrate his skin, but did not cause any notable harm. Additionally, recent research has shown that pholcid venom is relatively weak in its effects on insects.
Genera
The World Spider Catalog accepts the following genera:
Aetana Huber, 2005 – Asia, Fiji
Anansus Huber, 2007 – Africa
Anopsicus Chamberlin & Ivie, 1938 – Mexico, Ecuador, Caribbean, Central America
Apokayana Huber, 2018 – Malaysia, Indonesia
Arenita Huber & Carvalho, 2019 – Brazil
Arnapa Huber, 2019 – Indonesia, Papua New Guinea
Artema Walckenaer, 1837 – Asia, Africa
Aucana Huber, 2000 – Chile
Aymaria Huber, 2000 – South America
Belisana Thorell, 1898 – Asia, Oceania
Blancoa Huber, 2000 – Venezuela
Buitinga Huber, 2003 – Africa
Calapnita Simon, 1892 – Asia
Canaima Huber, 2000 – Trinidad, Venezuela
Cantikus Huber, 2018 – Asia
Carapoia González-Sponga, 1998 – South America
Cenemus Saaristo, 2001 – Seychelles
Chibchea Huber, 2000 – South America
Chisosa Huber, 2000 – Mexico, Aruba, United States
Ciboneya Pérez, 2001 – Cuba
Coryssocnemis Simon, 1893 – Trinidad, South America, Mexico, Central America
Crossopriza Simon, 1893 – Asia, Africa, United States, Venezuela, Germany, Australia
Enetea Huber, 2000 – Bolivia
Galapa Huber, 2000 – Ecuador
Gertschiola Brignoli, 1981 – Argentina
Giloloa Huber, 2019 – Indonesia
Guaranita Huber, 2000 – Argentina, Brazil
Hantu Huber, 2016 – Indonesia
Holocneminus Berland, 1942 – Asia, Samoa
Holocnemus Simon, 1873 – Spain, Italy, Portugal
Hoplopholcus Kulczyński, 1908 – Asia, Greece
Ibotyporanga Mello-Leitão, 1944 – Brazil
Ixchela Huber, 2000 – Mexico, Central America
Kairona Huber & Carvalho, 2019 – Brazil
Kambiwa Huber, 2000 – Brazil
Kelabita Huber, 2018 – Indonesia, Malaysia
Khorata Huber, 2005 – Asia
Kintaqa Huber, 2018 – Thailand, Malaysia
Leptopholcus Simon, 1893 – Asia, Africa
Litoporus Simon, 1893 – South America
Magana Huber, 2019 – Oman
Mecolaesthus Simon, 1893 – Caribbean, South America
Meraha Huber, 2018 – Asia
Mesabolivar González-Sponga, 1998 – South America, Trinidad
Metagonia Simon, 1893 – North America, South America, Central America, Caribbean
Micromerys Bradley, 1877 – Papua New Guinea, Australia
Micropholcus Deeleman-Reinhold & Prinsen, 1987 – Morocco, Caribbean, Europe, Asia, Australia
Modisimus Simon, 1893 – North America, Central America, Caribbean, Germany, Seychelles, Asia, Australia, South America
Muruta Huber, 2018 – Malaysia
Nerudia Huber, 2000 – Chile, Argentina
Ninetis Simon, 1890 – Africa, Yemen
Nipisa Huber, 2018 – Asia
Nita Huber & El-Hennawy, 2007 – Egypt, Iran, Uzbekistan
Nyikoa Huber, 2007 – Central Africa
Ossinissa Dimitrov & Ribera, 2005 – Canary Is.
Otavaloa Huber, 2000 – South America
Paiwana Huber, 2018 – Taiwan
Panjange Deeleman-Reinhold & Deeleman, 1983 – Asia, Oceania
Papiamenta Huber, 2000 – Curaçao
Paramicromerys Millot, 1946 – Madagascar
Pehrforsskalia Deeleman-Reinhold & van Harten, 2001 – Africa, Asia
Pemona Huber, 2019 – Venezuela
Pholcophora Banks, 1896 – United States, Canada, Mexico
Pholcus Walckenaer, 1805 – Asia, Europe, Africa, United States, Oceania
Physocyclus Simon, 1893 – North America, South America, Czech Republic, Asia, Australia, Central America
Pinoquio Huber & Carvalho, 2019 – Brazil
Pisaboa Huber, 2000 – Peru, Venezuela, Bolivia
Pomboa Huber, 2000 – Colombia
Pribumia Huber, 2018 – Asia
Priscula Simon, 1893 – South America
Psilochorus Simon, 1893 – North America, South America, Asia, New Zealand
Quamtana Huber, 2003 – Africa
Queliceria González-Sponga, 2003 – Venezuela
Saciperere Huber & Carvalho, 2019 – Brazil
Savarna Huber, 2005 – Thailand, Malaysia, Indonesia
Smeringopina Kraus, 1957 – Africa
Smeringopus Simon, 1890 – Africa, Asia, Australia
Spermophora Hentz, 1841 – Africa, Asia, Oceania, Germany, Brazil, United States
Spermophorides Wunderlich, 1992 – Africa, Europe
Stenosfemuraia González-Sponga, 1998 – Venezuela
Stygopholcus Absolon & Kratochvíl, 1932 – Croatia, Greece, Montenegro
Systenita Simon, 1893 – Venezuela
Tainonia Huber, 2000 – Hispaniola
Teranga Huber, 2018 – Indonesia, Philippines
Tibetia Zhang, Zhu & Song, 2006 – Tibet
Tissahamia Huber, 2018 – Asia
Tolteca Huber, 2000 – Mexico
Trichocyclus Simon, 1908 – Australia
Tupigea Huber, 2000 – Brazil
Uthina Simon, 1893 – Asia, Seychelles
Wanniyala Huber & Benjamin, 2005 – Sri Lanka
Waunana Huber, 2000 – Colombia, Ecuador, Panama
Wugigarra Huber, 2001 – Australia
Zatavua Huber, 2003 – Madagascar
| Biology and health sciences | Spiders | Animals |
367492 | https://en.wikipedia.org/wiki/Trace%20fossil | Trace fossil | A trace fossil, also known as an ichnofossil (from Greek ikhnos, "trace, track"), is a fossil record of biological activity by lifeforms but not the preserved remains of the organism itself. Trace fossils contrast with body fossils, which are the fossilized remains of parts of organisms' bodies, usually altered by later chemical activity or by mineralization. The study of such trace fossils is ichnology, the work of ichnologists.
Trace fossils may consist of physical impressions made on or in the substrate by an organism. For example, burrows, borings (bioerosion), urolites (erosion caused by evacuation of liquid wastes), footprints, feeding marks, and root cavities may all be trace fossils.
The term in its broadest sense also includes the remains of other organic material produced by an organism; for example coprolites (fossilized droppings) or chemical markers (sedimentological structures produced by biological means; for example, the formation of stromatolites). However, most sedimentary structures (for example those produced by empty shells rolling along the sea floor) are not produced through the behaviour of an organism and thus are not considered trace fossils.
The study of traces – ichnology – divides into paleoichnology, or the study of trace fossils, and neoichnology, the study of modern traces. Ichnological science offers many challenges, as most traces reflect the behaviour – not the biological affinity – of their makers. Accordingly, researchers classify trace fossils into form genera based on their appearance and on the implied behaviour, or ethology, of their makers.
Occurrence
Traces are better known in their fossilized form than in modern sediments. This makes it difficult to interpret some fossils by comparing them with modern traces, even though they may be extant or even common. The main difficulties in accessing extant burrows stem from finding them in consolidated sediment, and being able to access those formed in deeper water.
Trace fossils are best preserved in sandstones, as both the grain size and the depositional facies contribute to better preservation. They may also be found in shales and limestones.
Classification
Trace fossils are generally difficult or impossible to assign to a specific maker. Only in very rare occasions are the makers found in association with their tracks. Further, entirely different organisms may produce identical tracks. Therefore, conventional taxonomy is not applicable, and a comprehensive form of taxonomy has been erected. At the highest level of the classification, five behavioral modes are recognized:
Domichnia, dwelling structures reflecting the life position of the organisms that created them;
Fodinichnia, three-dimensional structures left by animals which eat their way through sediment, such as deposit feeders;
Pascichnia, feeding traces left by grazers on the surface of a soft sediment or a mineral substrate;
Cubichnia, resting traces, in the form of an impression left by an organism on a soft sediment;
Repichnia, surface traces of creeping and crawling.
Fossils are further classified into form genera, a few of which are even subdivided to a "species" level. Classification is based on shape, form, and implied behavioural mode.
To keep body and trace fossils nomenclatorially separate, ichnospecies are erected for trace fossils. Ichnotaxa are classified somewhat differently in zoological nomenclature than taxa based on body fossils (see trace fossil classification for more information). Examples include:
Late Cambrian trace fossils from intertidal settings include Protichnites and Climactichnites, amongst others
Mesozoic dinosaur footprints including ichnogenera such as Grallator, Atreipus, and Anomoepus
Triassic to Recent termite mounds, which can encompass several square kilometers of sediment
Information provided by ichnofossils
Trace fossils are important paleoecological and paleoenvironmental indicators, because they are preserved in situ, or in the life position of the organism that made them. Because identical fossils can be created by a range of different organisms, trace fossils can only reliably inform us of two things: the consistency of the sediment at the time of its deposition, and the energy level of the depositional environment. Attempts to deduce such traits as whether a deposit is marine or non-marine have been made, but shown to be unreliable.
Paleoecology
Trace fossils provide us with indirect evidence of life in the past, such as the footprints, tracks, burrows, borings, and feces left behind by animals, rather than the preserved remains of the body of the actual animal itself. Unlike most other fossils, which are produced only after the death of the organism concerned, trace fossils provide us with a record of the activity of an organism during its lifetime. Unlike body fossils, which can be transported far away from where an individual organism lived, trace fossils record the type of environment an animal actually inhabited and thus can provide a more accurate palaeoecological sample than body fossils.
Trace fossils are formed by organisms performing the functions of their everyday life, such as walking, crawling, burrowing, boring, or feeding. Tetrapod footprints, worm trails and the burrows made by clams and arthropods are all trace fossils.
Perhaps the most spectacular trace fossils are the huge, three-toed footprints produced by dinosaurs and related archosaurs. These imprints give scientists clues as to how these animals lived. Although the skeletons of dinosaurs can be reconstructed, only their fossilized footprints can determine exactly how they stood and walked. Such tracks can tell much about the gait of the animal which made them, what its stride was, and whether the front limbs touched the ground or not.
However, most trace fossils are rather less conspicuous, such as the trails made by segmented worms or nematodes. Some of these worm castings are the only fossil record we have of these soft-bodied creatures.
Paleoenvironment
Fossil footprints made by tetrapod vertebrates are difficult to identify to a particular species of animal, but they can provide valuable information such as the speed, weight, and behavior of the organism that made them. Such trace fossils are formed when amphibians, reptiles, mammals, or birds walked across soft (probably wet) mud or sand which later hardened sufficiently to retain the impressions before the next layer of sediment was deposited. Some fossils can even provide details of how wet the sand was when they were being produced, and hence allow estimation of paleo-wind directions.
Assemblages of trace fossils occur at certain water depths, and can also reflect the salinity and turbidity of the water column.
Stratigraphic correlation
Some trace fossils can be used as local index fossils, to date the rocks in which they are found, such as the burrow Arenicolites franconicus which occurs only in a layer of the Triassic Muschelkalk epoch, throughout wide areas in southern Germany.
The base of the Cambrian period is defined by the first appearance of the trace fossil Treptichnus pedum.
Trace fossils have a further utility, as many appear before the organism thought to create them, extending their stratigraphic range.
Ichnofacies
Ichnofacies are assemblages of individual trace fossils that occur repeatedly in time and space. Palaeontologist Adolf Seilacher pioneered the concept of ichnofacies, whereby geologists infer the state of a sedimentary system at its time of deposition by noting the fossils in association with one another. The principal ichnofacies recognized in the literature are Skolithos, Cruziana, Zoophycos, Nereites, Glossifungites, Scoyenia, Trypanites, Teredolites, and Psilonichus. These assemblages are not random. In fact, the assortment of fossils preserved are primarily constrained by the environmental conditions in which the trace-making organisms dwelt. Water depth, salinity, hardness of the substrate, dissolved oxygen, and many other environmental conditions control which organisms can inhabit particular areas. Therefore, by documenting and researching changes in ichnofacies, scientists can interpret changes in environment. For example, ichnological studies have been utilized across mass extinction boundaries, such as the Cretaceous–Paleogene mass extinction, to aid in understanding environmental factors involved in mass extinction events.
Inherent bias
Most trace fossils are known from marine deposits. Essentially, there are two types of traces, either exogenic ones, which are made on the surface of the sediment (such as tracks) or endogenic ones, which are made within the layers of sediment (such as burrows).
Surface trails on sediment in shallow marine environments stand less chance of fossilization because they are subjected to wave and current action. Conditions in quiet, deep-water environments tend to be more favorable for preserving fine trace structures.
Most trace fossils are usually readily identified by reference to similar phenomena in modern environments. However, the structures made by organisms in recent sediment have only been studied in a limited range of environments, mostly in coastal areas, including tidal flats.
Evolution
The earliest complex trace fossils, not including microbial traces such as stromatolites, date to . This is far too early for them to have an animal origin, and they are thought to have been formed by amoebae.
Putative "burrows" dating as far back as may have been made by animals which fed on the undersides of microbial mats, which would have shielded them from a chemically unpleasant ocean; however their uneven width and tapering ends make a biological origin so difficult to defend that even the original author no longer believes they are authentic.
The first widely accepted evidence of burrowing dates to the Ediacaran (Vendian) period. During this period, traces and burrows are basically horizontal, on or just below the seafloor surface. Such traces must have been made by motile organisms with heads, which would probably have been bilaterian animals. The traces observed imply simple behaviour, and point to organisms feeding above the surface and burrowing for protection from predators. Contrary to the widely circulated opinion that Ediacaran burrows are only horizontal, the vertical burrows Skolithos are also known. The producers of the Skolithos declinatus burrows from the Vendian (Ediacaran) beds in Russia have not been identified; they might have been filter feeders subsisting on nutrients from suspension. The density of these burrows reaches up to 245 burrows per square decimetre. Some Ediacaran trace fossils have been found directly associated with body fossils: Yorgia and Dickinsonia are often found at the end of long trails of trace fossils matching their shape. The feeding was performed mechanically, the ventral side of the body of these organisms supposedly being covered with cilia. The potentially mollusc-related Kimberella is associated with scratch marks, perhaps formed by a radula, and further traces appear to imply active crawling or burrowing activity.
As the Cambrian got underway, new forms of trace fossil appeared, including vertical burrows (e.g. Diplocraterion) and traces normally attributed to arthropods. These represent a "widening of the behavioural repertoire", both in terms of abundance and complexity.
Trace fossils are a particularly significant source of data from this period because they represent a data source that is not directly connected to the presence of easily fossilized hard parts, which are rare during the Cambrian. Whilst exact assignment of trace fossils to their makers is difficult, the trace fossil record seems to indicate that at the very least, large, bottom-dwelling, bilaterally symmetrical organisms were rapidly diversifying during the early Cambrian.
Further, less rapid diversification occurred since, and many traces have been converged upon independently by unrelated groups of organisms.
Trace fossils also provide our earliest evidence of animal life on land. Evidence of the first animals that appear to have been fully terrestrial dates to the Cambro-Ordovician and is in the form of trackways. Trackways from the Ordovician Tumblagooda sandstone allow the behaviour of other terrestrial organisms to be determined. The trackway Protichnites represents traces from an amphibious or terrestrial arthropod going back to the Cambrian.
Common ichnogenera
Anoigmaichnus is a bioclaustration. It occurs in Ordovician bryozoans. Apertures of Anoigmaichnus are elevated above their hosts' growth surfaces, forming short chimney-like structures.
Arachnostega is the name given to the irregular, branching burrows in the sediment fill of shells. They are visible on the surface of steinkerns. Their traces are known from the Cambrian period onwards.
Asteriacites is the name given to the five-rayed fossils found in rocks and they record the resting place of starfish on the sea floor. Asteriacites are found in European and American rocks, from the Ordovician period onwards, and are numerous in rocks from the Jurassic period of Germany.
Burrinjuckia is a bioclaustration. Burrinjuckia includes outgrowths of the brachiopod's secondary shell with a hollow interior in the mantle cavity of a brachiopod.
Chondrites (not to be confused with stony meteorites of the same name) are small branching burrows of the same diameter, which superficially resemble the roots of a plant. The most likely candidate for having constructed these burrows is a nematode (roundworm). Chondrites are found in marine sediments from the Cambrian period of the Paleozoic onwards. They are especially common in sediments which were deposited in reduced-oxygen environments.
Climactichnites is the name given to surface trails and burrows that consist of a series of chevron-shaped raised cross bars that are usually flanked on either side by a parallel ridge. They somewhat resemble tire tracks, and are larger than most of the other trace fossils made by invertebrates. The trails were produced on sandy tidal flats during Cambrian time. While the identity of the animal is still conjectural, it may have been a large slug-like animal – its trails produced as it crawled over and processed the wet sand to obtain food.
Cruziana are excavation trace marks made on the sea floor which have a two-lobed structure with a central groove. The lobes are covered with scratch marks made by the legs of the excavating organism, usually a trilobite or allied arthropod. Cruziana are most common in marine sediments formed during the Paleozoic era, particularly in rocks from the Cambrian and Ordovician periods. Over 30 ichnospecies of Cruziana have been identified. | Biology and health sciences | Paleontology | Biology |
367503 | https://en.wikipedia.org/wiki/Concretion | Concretion | A concretion is a hard and compact mass formed by the precipitation of mineral cement within the spaces between particles, and is found in sedimentary rock or soil. Concretions are often ovoid or spherical in shape, although irregular shapes also occur. The word concretion is borrowed from Latin concretio, itself derived from concrescere ("to grow together"), from con- ("together") and crescere ("to grow").
Concretions form within layers of sedimentary strata that have already been deposited. They usually form early in the burial history of the sediment, before the rest of the sediment is hardened into rock. This concretionary cement often makes the concretion harder and more resistant to weathering than the host stratum.
There is an important distinction to draw between concretions and nodules. Concretions are formed from mineral precipitation around some kind of nucleus while a nodule is a replacement body.
Descriptions dating from the 18th century attest to the fact that concretions have long been regarded as geological curiosities. Because of the variety of unusual shapes, sizes and compositions, concretions have been interpreted to be dinosaur eggs, animal and plant fossils (called pseudofossils), extraterrestrial debris or human artifacts.
Origins
Detailed studies have demonstrated that concretions form after sediments are buried but before the sediment is fully lithified during diagenesis. They typically form when a mineral precipitates and cements sediment around a nucleus, which is often organic, such as a leaf, tooth, piece of shell or fossil. For this reason, fossil collectors commonly break open concretions in their search for fossil animal and plant specimens. Some of the most unusual concretion nuclei are World War II military shells, bombs, and shrapnel, which are found inside siderite concretions found in an English coastal salt marsh.
Depending on the environmental conditions present at the time of their formation, concretions can be created by either concentric or pervasive growth. In concentric growth, the concretion grows as successive layers of mineral precipitate around a central core. This process results in roughly spherical concretions that grow with time. In the case of pervasive growth, cementation of the host sediments, by infilling of its pore space by precipitated minerals, occurs simultaneously throughout the volume of the area, which in time becomes a concretion. Concretions are often exposed at the surface by subsequent erosion that removes the weaker, uncemented material.
Appearance
Concretions vary in shape, hardness and size, ranging from objects that require a magnifying lens to be clearly visible to huge bodies three meters in diameter and weighing several thousand pounds. The giant, red concretions occurring in Theodore Roosevelt National Park, in North Dakota, are almost three meters (10 ft) in diameter. Large spheroidal concretions have been found eroding out of the Qasr el Sagha Formation within the Faiyum depression of Egypt. Concretions occur in a wide variety of shapes, including spheres, disks, tubes, and grape-like or soap bubble-like aggregates.
Composition
Concretions are commonly composed of a mineral present as a minor component of the host rock. For example, concretions in sandstones or shales are commonly formed of a carbonate mineral such as calcite; those in limestones are commonly an amorphous or microcrystalline form of silica such as chert, flint, or jasper; while those in black shale may be composed of pyrite. Other minerals that form concretions include iron oxides or hydroxides (such as goethite and hematite), dolomite, siderite, ankerite, marcasite, barite, and gypsum.
Although concretions often consist of a single dominant mineral, other minerals can be present depending on the environmental conditions that created them. For example, carbonate concretions, which form in response to the reduction of sulfates by bacteria, often contain minor percentages of pyrite. Other concretions, which formed as a result of microbial sulfate reduction, consist of a mixture of calcite, barite, and pyrite.
Occurrence
Concretions are found in a variety of rocks, but are particularly common in shales, siltstones, and sandstones. They often outwardly resemble fossils or rocks that look as if they do not belong to the stratum in which they were found. Occasionally, concretions contain a fossil, either as its nucleus or as a component that has been incorporated during its growth but concretions are not fossils themselves. They appear in nodular patches, concentrated along bedding planes, or protruding from weathered cliffsides.
Small hematite concretions or Martian spherules have been observed by the Opportunity rover in the Eagle Crater on Mars.
Types of concretion
Concretions vary considerably in their compositions, shapes, sizes and modes of origin.
Septarian concretions
Septarian concretions (or septarian nodules) are carbonate-rich concretions containing angular cavities or cracks (septaria, from the Latin for "partition, separating element", referring to the cracks or cavities separating polygonal blocks of hardened material). Septarian nodules are characteristically found in carbonate-rich mudrock. They typically show an internal structure of polyhedral blocks (the matrix) separated by mineral-filled radiating cracks (the septaria) which taper towards the rim of the concretion. The radiating cracks sometimes intersect a second set of concentric cracks. However, the cracks can be highly variable in shape and volume, as well as the degree of shrinkage they indicate. The matrix is typically composed of argillaceous carbonate, such as clay ironstone, while the crack filling is usually calcite. The calcite often contains significant iron (ferroan calcite) and may have inclusions of pyrite and clay minerals. The brown calcite common in septaria may also be colored by organic compounds produced by bacterial decay of organic matter in the original sediments.
Septarian concretions are found in many kinds of mudstone, including lacustrine siltstones such as the Beaufort Group of northwest Mozambique, but are most commonly found in marine shales, such as the Staffin Shale Formation of Skye, the Kimmeridge Clay of England, or the Mancos Group of North America.
It is commonly thought that concretions grew incrementally from the inside outwards. Chemical and textural zoning in many concretions are consistent with this concentric model of formation. However, the evidence is ambiguous, and many or most concretions may have formed by pervasive cementation of the entire volume of the concretion at the same time. For example, if the porosity after early cementation varies across the concretion, then later cementation filling this porosity would produce compositional zoning even with uniform pore water composition. Whether the initial cementation was concentric or pervasive, there is considerable evidence that it occurred quickly and at shallow depth of burial. In many cases, there is clear evidence that the initial concretion formed around some kind of organic nucleus.
The origin of the carbonate-rich septaria is still debated. One possibility is that dehydration hardens the outer shell of the concretion while causing the interior matrix to shrink until it cracks. Shrinkage of a still-wet matrix may also take place through syneresis, in which the particles of colloidal material in the interior of the concretion become gradually more tightly bound while expelling water. Another possibility is that early cementation reduces the permeability of the concretion, trapping pore fluids and creating excess pore pressure during continued burial. This could crack the interior even at shallow burial depths. A more speculative theory is that the septaria form by brittle fracturing resulting from earthquakes. Regardless of the mechanism of crack formation, the septaria, like the concretion itself, likely form at a relatively shallow depth of burial. Geologically young concretions of the Errol Beds of Scotland show texture consistent with formation from flocculated sediments containing organic matter, whose decay left tiny gas bubbles (30 to 35 microns in diameter) and a soap of calcium salts of fatty acids. The conversion of these fatty acids to calcium carbonate may have promoted shrinkage and fracture of the matrix.
One model for the formation of septarian concretions in the Staffin Shales suggests that the concretions started as semirigid masses of flocculated clay. The individual colloidal clay particles were bound by extracellular polymeric substances or EPS produced by colonizing bacteria. The decay of these substances, together with syneresis of the host mud, produced stresses that fractured the interiors of the concretions while still at shallow burial depth. This was possible only with the bacterial colonization and the right sedimentation rate. Additional fractures formed during subsequent episodes of shallow burial (during the Cretaceous) or uplift (during the Paleogene). Water derived from rain and snow (meteoric water) later infiltrated the beds and deposited ferroan calcite in the cracks.
Septarian concretions often record a complex history of formation that provides geologists with information on early diagenesis, the initial stages of the formation of sedimentary rock from unconsolidated sediments. Most concretions appear to have formed at depths of burial where sulfate-reducing microorganisms are active. This corresponds to shallow burial depths, and is characterized by generation of carbon dioxide, increased alkalinity and precipitation of calcium carbonate. However, there is some evidence that formation continues well into the methanogenic zone beneath the sulfate reduction zone.
A spectacular example of boulder-sized septarian concretions is the Moeraki Boulders. These concretions are found eroding out of Paleocene mudstone of the Moeraki Formation exposed along the coast near Moeraki, South Island, New Zealand. They are composed of calcite-cemented mud with septarian veins of calcite and rare late-stage quartz and ferrous dolomite. The much smaller septarian concretions found in the Kimmeridge Clay exposed in cliffs along the Wessex coast of England are more typical examples of septarian concretions.
Cannonball concretions
Cannonball concretions are large spherical concretions that resemble cannonballs. These are found along the Cannonball River within Morton and Sioux Counties, North Dakota. They were created by early cementation of sand and silt by calcite. Similar large cannonball concretions are found associated with sandstone outcrops of the Frontier Formation in northeast Utah and central Wyoming; they formed by the early cementation of sand by calcite. Somewhat weathered and eroded giant cannonball concretions occur in abundance at "Rock City" in Ottawa County, Kansas. Large spherical boulders are also found along Koekohe beach near Moeraki on the east coast of the South Island of New Zealand. The Moeraki Boulders, Ward Beach boulders and Koutu Boulders of New Zealand are examples of septarian concretions which are also cannonball concretions. Large spherical rocks found on the shore of Lake Huron near Kettle Point, Ontario, and locally known as "kettles", are typical cannonball concretions. Cannonball concretions have also been reported from Van Mijenfjorden, Spitsbergen; near Haines Junction, Yukon Territory, Canada; Jameson Land, East Greenland; near Mecevici, Ozimici, and Zavidovici in Bosnia-Herzegovina; and in Alaska, in Captain Cook State Park on the Kenai Peninsula along the Cook Inlet beach and on Kodiak Island northeast of Fossil Beach. This type of concretion is also found in Romania, where they are known as trovants.
Hiatus concretions
Hiatus concretions are distinguished by their stratigraphic history of exhumation, exposure and reburial. They are found where submarine erosion has concentrated early diagenetic concretions as lag surfaces by washing away surrounding fine-grained sediments. Their significance for stratigraphy, sedimentology and paleontology was first noted by Voigt who referred to them as Hiatus-Konkretionen. "Hiatus" refers to the break in sedimentation that allowed this erosion and exposure. They are found throughout the fossil record but are most common during periods in which calcite sea conditions prevailed, such as the Ordovician, Jurassic and Cretaceous. Most are formed from the cemented infillings of burrow systems in siliciclastic or carbonate sediments.
A distinctive feature of hiatus concretions separating them from other types is that they were often encrusted by marine organisms including bryozoans, echinoderms and tube worms in the Paleozoic and bryozoans, oysters and tube worms in the Mesozoic and Cenozoic. Hiatus concretions are also often significantly bored by worms and bivalves.
Elongate concretions
Elongate concretions form parallel to sedimentary strata and have been studied extensively due to the inferred influence of phreatic (saturated) zone groundwater flow direction on the orientation of the axis of elongation. In addition to providing information about the orientation of past fluid flow in the host rock, elongate concretions can provide insight into local permeability trends (i.e., permeability correlation structure), variation in groundwater velocity, and the types of geological features that influence flow.
Elongate concretions are well known in the Kimmeridge Clay formation of northwest Europe. In outcrops, where they have acquired the name "doggers", they are typically only a few meters across, but in the subsurface they can be seen to penetrate up to tens of meters of along-hole dimension. Unlike limestone beds, however, it is impossible to consistently correlate them between even closely spaced wells.
Moqui Marbles
Moqui Marbles, also called Moqui balls or "Moki marbles", are iron oxide concretions which can be found eroding in great abundance out of outcrops of the Navajo Sandstone within south-central and southeastern Utah. These concretions range in shape from spheres to discs, buttons, spiked balls, cylindrical forms, and other odd shapes. They range from pea-size to baseball-size.
The concretions were created by the precipitation of iron, which was dissolved in groundwater. The iron was originally present as a thin film of iron oxide surrounding sand grains in the Navajo Sandstone. Groundwater containing methane or petroleum from underlying rock beds reacted with the iron oxide, converting it to soluble reduced iron. When the iron-bearing groundwater came into contact with more oxygen-rich groundwater, the reduced iron was converted back to insoluble iron oxide, which formed the concretions. It is possible that reduced iron first formed siderite concretions that were subsequently oxidized. Iron-oxidizing bacteria may have played a role.
Kansas pop rocks
Kansas pop rocks are concretions of either iron sulfide, i.e. pyrite and marcasite, or in some cases jarosite, which are found in outcrops of the Smoky Hill Chalk Member of the Niobrara Formation within Gove County, Kansas. They are typically associated with thin layers of altered volcanic ash, called bentonite, that occur within the chalk of the Smoky Hill Chalk Member. A few of these concretions enclose, at least in part, large flattened valves of inoceramid bivalves. These concretions range in size from a few millimeters to as much as in length and in thickness. Most of these concretions are oblate spheroids. Other "pop rocks" are small polycuboid pyrite concretions, which are as much as in diameter. These concretions are called "pop rocks" because they explode if thrown in a fire. Also, when they are either cut or hammered, they produce sparks and a burning sulfur smell. Contrary to what has been published on the Internet, none of the iron sulfide concretions found in the Smoky Hill Chalk Member were created either by the replacement of fossils or by metamorphic processes. In fact, metamorphic rocks are completely absent from the Smoky Hill Chalk Member. Instead, all of these iron sulfide concretions were created by the precipitation of iron sulfides within anoxic marine calcareous ooze after it had accumulated and before it had lithified into chalk.
Iron sulfide concretions, such as the Kansas pop rocks, consisting of pyrite or marcasite, are nonmagnetic. On the other hand, iron sulfide concretions that are either composed of or contain pyrrhotite or smythite will be magnetic to varying degrees. Prolonged heating of either a pyrite or marcasite concretion will convert portions of either mineral into pyrrhotite, causing the concretion to become slightly magnetic.
Claystones, clay dogs, and fairy stones
Disc concretions composed of calcium carbonate are often found eroding out of exposures of interlaminated silt and clay in varved, proglacial lake deposits. For example, great numbers of strikingly symmetrical concretions have been found eroding out of outcrops of Quaternary proglacial lake sediments along, and in the gravels of, the Connecticut River and its tributaries in Massachusetts and Vermont. Depending on their specific source, these concretions take a wide variety of forms, including disc shapes, crescent shapes, watch shapes, cylindrical or club shapes, botryoidal masses, and animal-like forms. They can vary in length from to over and often exhibit concentric grooves on their surfaces. In the Connecticut River Valley, these concretions are often called "claystones" because the concretions are harder than the clay enclosing them. In local brickyards, they were called "clay-dogs", either because of their animal-like forms or because the concretions were nuisances in molding bricks. Similar disc-shaped calcium carbonate concretions have also been found in the Harricana River valley in the Abitibi-Témiscamingue administrative region of Quebec, and in Östergötland county, Sweden. In Scandinavia, they are known as "marlekor" ("fairy stones").
Gogottes
Gogottes are sandstone concretions found in Oligocene (roughly 30-million-year-old) sediments near Fontainebleau, France. Gogottes have fetched high prices at auction due to their sculpture-like quality.
| Physical sciences | Sedimentary rocks | Earth science |
367867 | https://en.wikipedia.org/wiki/Citron | Citron | The citron (Citrus medica), historically cedrate, is a large fragrant citrus fruit with a thick rind. It is said to resemble a 'huge, rough lemon'. It is one of the original citrus fruits from which all other citrus types developed through natural hybrid speciation or artificial hybridization. Though citron cultivars take on a wide variety of physical forms, they are all closely related genetically. It is used in Asian and Mediterranean cuisine, traditional medicines, perfume, and religious rituals and offerings. Hybrids of citrons with other citrus are commercially more prominent, notably lemons and many limes.
Etymology
The fruit's English name "citron" derives ultimately from Latin, citrus, which is also the origin of the genus name.
Other languages
A source of confusion is that 'citron' in French and English are false friends, as the French word 'citron' refers to what in English is a lemon, whereas the French word for the citron is 'cédrat'. Indeed, into the 16th century, the English term citron included the lemon and perhaps the lime as well. Other languages that use variants of citron to refer to the lemon include Armenian, Czech, Dutch, Finnish, German, Estonian, Latvian, Lithuanian, Hungarian, Esperanto, Polish and the Scandinavian languages.
In Italian it is known as cedro, the same name also used for the coniferous cedar tree. Similarly, in Latin, citrus, or thyine wood, referred to the wood of a North African cypress, Tetraclinis articulata.
In Indo-Iranian languages, it is called turunj, as against naranj ('bitter orange'). Both names were borrowed into Arabic and introduced into Spain and Portugal after their occupation by Muslims in AD 711, whence the latter became the source of the name orange through rebracketing (and the former of 'toronja' and 'toranja', which today describe the grapefruit in Spanish and Portuguese respectively).
Dutch merchants seasonally import, for baked goods, a thick, light-green, commercially candied half peel from Indonesia and other countries (from Citrus medica variety 'Macrocarpa'; its Indonesian name is also the word for love). The fruit can reach a mass of 2.5 kilograms; its bitter taste is removed by salt treatment before processing into confectionery.
In Hebrew it is called an etrog (); in Yiddish, it is pronounced "esrog" or "esreg". The citron plays an important role in the harvest holiday of Sukkot paired with lulavim (fronds of the date palm).
Origin and distribution
The citron is an old and original citrus species.
There is molecular evidence that most cultivated citrus species arose by hybridization of a small number of ancestral types: the citron, pomelo, mandarin and, to a lesser extent, papedas and kumquat. The citron is usually fertilized by self-pollination, which results in its displaying a high degree of genetic homozygosity. It has usually served as the male (pollen) parent of citrus hybrids rather than the female one.
Archaeological evidence for citrus fruits has been limited, as neither seeds nor pollen are likely to be routinely recovered in archaeology. The citron is thought to have been native to the southeastern foothills of the Himalayas, and it was known in the Mediterranean world by the 4th century BC, when Theophrastus mentioned the "Median apple". Despite that designation, an adaptation of the old name in classical Greek sources ("Median pome"), the fruit was not indigenous to Media; on its way to the Mediterranean basin the citron was mostly cultivated along the southern Caspian coast (Mazandaran and Gilan), and it was later grown in different areas around the Mediterranean, as described by Erich Isaac. Many accounts credit Alexander the Great and his armies, as they campaigned through Iran and what is today Pakistan, with spreading the citron westward to European countries such as Greece and Italy (Biology of Citrus).
Antiquity
Leviticus mentions the "fruit of the beautiful ('hadar') tree" as being required for ritual use during the Feast of Tabernacles (Lev. 23:40). According to Jewish Rabbinical tradition, the "fruit of the tree hadar" refers to the citron. The Mishnah, in tractate Sukkah, deals with halakhic aspects of the citron.
The Egyptologist and archaeologist Victor Loret said he had identified it depicted on the walls of the botanical garden at the Karnak Temple, which dates back to the time of Thutmosis III, approximately 3,500 years ago. Citron was also cultivated in Sumer as early as the 3rd millennium BC.
The citron has been cultivated since ancient times, predating the cultivation of other citrus species.
Theophrastus
The following description of the citron was given by Theophrastus:
In the east and south there are special plants ... i.e. in Media and Persia there are many types of fruit, among them a fruit called the Median or Persian apple. The tree has a leaf similar to and almost identical with that of the andrachn (Arbutus andrachne L.), but has thorns like those of the apios (the wild pear, Pyrus amygdaliformis Vill.) or the firethorn (Cotoneaster pyracantha Spach.), except that they are white, smooth, sharp and strong. The fruit is not eaten, but is very fragrant, as is also the leaf of the tree; and if the fruit is put among clothes, it keeps them from being moth-eaten. It is also useful when one has drunk deadly poison, for when it is administered in wine it upsets the stomach and brings up the poison. It is also useful to improve the breath, for if one boils the inner part of the fruit in a dish or squeezes it into the mouth in some other medium, it makes the breath more pleasant.
The seed is removed from the fruit and sown in the spring in carefully tilled beds, and it is watered every fourth or fifth day. As soon as the plant is strong it is transplanted, also in the spring, to a soft, well-watered site, where the soil is not very fine, for it prefers such places.
And it bears its fruit at all seasons, for when some have been gathered, the flower of the others is on the tree and it is ripening others. Of the flowers, as I have said, those that have a sort of distaff [meaning the pistil] projecting from the middle are fertile, while those that do not have this are sterile. It is also sown, like date palms, in pots punctured with holes.
This tree, as has been remarked, grows in Media and Persia.
Pliny the Elder
Citron was also described by Pliny the Elder, who called it nata Assyria malus. The following is from his book Natural History:
There is another tree also with the same name of "citrus", which bears a fruit that is held by some persons in particular dislike for its smell and remarkable bitterness; while, on the other hand, there are some who esteem it very highly. This tree is used as an ornament to houses; it requires, however, no further description.
The citron tree, called the Assyrian, and by some the Median or Persian apple, is an antidote against poisons. The leaf is similar to that of the arbute, except that it has small prickles running across it. As to the fruit, it is never eaten, but it is remarkable for its extremely powerful smell, which is the case, also, with the leaves; indeed, the odour is so strong, that it will penetrate clothes, when they are once impregnated with it, and hence it is very useful in repelling the attacks of noxious insects.
The tree bears fruit at all seasons of the year; while some is falling off, other fruit is ripening, and other, again, just bursting into birth. Various nations have attempted to naturalize this tree among them, for the sake of its medica or Persian properties, by planting it in pots of clay, with holes drilled in them, for the purpose of introducing the air to the roots; and I would here remark, once for all, that it is as well to remember that the best plan is to pack all slips of trees that have to be carried to any distance, as close together as they can possibly be placed.
It has been found, however, that this tree will grow nowhere except in Persia. It is this fruit, the pips of which, as we have already mentioned, the Parthian grandees employ in seasoning their ragouts, as being peculiarly conducive to the sweetening of the breath. We find no other tree very highly commended that is produced in Media.
Citrons, either the pulp of them or the pips, are taken in wine as an antidote to poisons. A decoction of citrons, or the juice extracted from them, is used as a gargle to impart sweetness to the breath. The pips of this fruit are recommended for pregnant women to chew when affected with qualmishness. Citrons are good, also, for a weak stomach, but it is not easy to eat them except with vinegar.
Medieval authors
Ibn al-'Awwam's 12th-century agricultural encyclopedia, Book on Agriculture, contains an article on citron tree cultivation in Spain.
Description and variation
Fruit
The citron fruit is usually ovate or oblong, narrowing towards the stylar end. However, the citron's fruit shape is highly variable, due to the large quantity of albedo, which forms independently according to the fruits' position on the tree, twig orientation, and many other factors. The rind is leathery, furrowed, and adherent. The inner portion is thick, white and hard; the outer is uniformly thin and very fragrant. The pulp is usually acidic, but also can be sweet, and some varieties are entirely pulpless.
Most citron varieties contain a large number of monoembryonic seeds. The seeds are white with dark inner coats and red-purplish chalazal spots for the acidic varieties, and colorless for the sweet ones. Some citron varieties have persistent styles which do not fall off after fecundation. Those are usually preferred for ritual etrog use in Judaism.
Some citrons have medium-sized oil bubbles at the outer surface, medially distant to each other. Some varieties are ribbed and faintly warted on the outer surface. A fingered citron variety is commonly called Buddha's hand.
The color varies from green, when unripe, to a yellow-orange when overripe. The citron does not fall off the tree and can reach 8–10 pounds (4–5 kg) if not picked before fully mature (The Search for the Authentic Citron: Historic and Genetic Analysis, HortScience 40(7):1963–1968, 2005). However, the fruit should be picked before the winter, as the branches might bend or break to the ground, and this may expose the tree to numerous fungal diseases.
Despite the wide variety of forms taken on by the fruit, citrons are all closely related genetically, representing a single species. Genetic analysis divides the known cultivars into three clusters: a Mediterranean cluster thought to have originated in India, and two clusters predominantly found in China, one representing the fingered citrons, and another consisting of non-fingered varieties.
Plant
Citrus medica is a slow-growing shrub or small tree that reaches a height of about . It has irregular straggling branches and stiff twigs and long spines at the leaf axils. The evergreen leaves are green and lemon-scented with slightly serrate edges, ovate-lanceolate or ovate elliptic 2.5 to 7.0 inches long. Petioles are usually wingless or with minor wings. The clustered flowers of the acidic varieties are purplish tinted from outside, but the sweet ones are white-yellowish.
The citron tree is very vigorous with almost no dormancy, blooming several times a year, and is therefore fragile and extremely sensitive to frost.
Varieties and hybrids
The acidic varieties include the Florentine and Diamante citron from Italy, the Greek citron and the Balady citron from Israel. The sweet varieties include the Corsican and Moroccan citrons. The pulpless varieties also include some fingered varieties and the Yemenite citron.
There are also a number of citron hybrids; for example, ponderosa lemon, the lumia and rhobs el Arsa are known citron hybrids. Some claim that even the Florentine citron is not pure citron, but a citron hybrid.
Uses
Culinary
While the lemon and orange are primarily peeled to consume their pulpy and juicy segments, the citron's pulp is dry, containing a small quantity of juice, if any. The main content of a citron fruit is its thick white rind, which adheres to the segments and cannot easily be separated from them. The citron gets halved and depulped, then its rind (the thicker the better) is cut into pieces. Those are cooked in sugar syrup and used as a spoon sweet known in Greek as "kítro glykó" (κίτρο γλυκό), or diced and candied with sugar and used as a confection in cakes. In Italy, a soft drink called "Cedrata" is made from the fruit.
In Samoa a refreshing drink called "vai tipolo" is made from squeezed juice. It is also added to a raw fish dish called "oka" and to a variation of palusami or luáu.
Citron is a regularly used item in Asian cuisine.
Today the citron is also used for the fragrance or zest of its flavedo, but the most important part is still the inner rind (known as pith or albedo), which is a fairly important article in international trade and is widely employed in the food industry as succade, as it is known when it is candied in sugar.
The dozens of varieties of citron are collectively known as Lebu in Bangladesh, West Bengal, where it is the primary citrus fruit.
In Iran the citron's thick white rind is used to make jam; in Pakistan the fruit is used to make jam but is also pickled; in South Indian cuisine, some varieties of citron (collectively referred to as "Narthangai" in Tamil and "Heralikayi" in Kannada) are widely used in pickles and preserves. In Karnataka, heralikayi (citron) is used to make lemon rice. In Kutch, Gujarat, it is used to make pickle, wherein entire slices of fruits are salted, dried and mixed with jaggery and spices to make sweet spicy pickle. In the United States, citron is an important ingredient in holiday fruitcakes.
Folk medicine
From ancient through medieval times, the citron was used mainly for supposed medical purposes to combat seasickness, scurvy and other disorders. The essential oil of the flavedo (the outermost, pigmented layer of rind) was also regarded as an antibiotic.
The juice of the citron has a high content of vitamin C; dietary fiber (pectin) can be extracted from the thick albedo of the citron.
Religious
In Judaism
The citron (the word for which in Hebrew is etrog) is used by Jews for a religious ritual during the Jewish harvest holiday of Sukkot, the Feast of Tabernacles; therefore, it is considered to be a Jewish symbol, one found on various Hebrew antiques and archaeological findings.
In Buddhism
A variety of citron native to China has sections that separate into finger-like parts and is used as an offering in Buddhist temples.
In Hinduism
In Nepal, the citron () is worshipped during the Bhai Tika ceremony during Tihar. The worship is thought to stem from the belief that it is a favorite of Yama, Hindu god of death, and his sister Yami.
Perfumery
For many centuries, citron's fragrant essential oil (oil of cedrate) has been used in perfumery, the same oil that was used medicinally for its antibiotic properties. Its major constituent is limonene.
| Biology and health sciences | Citrus fruits | Plants |
368227 | https://en.wikipedia.org/wiki/Epicyclic%20gearing | Epicyclic gearing | An epicyclic gear train (also known as a planetary gearset) is a gear reduction assembly consisting of two gears mounted so that the center of one gear (the "planet") revolves around the center of the other (the "sun"). A carrier connects the centers of the two gears and rotates, to carry the planet gear(s) around the sun gear. The planet and sun gears mesh so that their pitch circles roll without slip. If the sun gear is held fixed, then a point on the pitch circle of the planet gear traces an epicycloid curve.
An epicyclic gear train can be assembled so the planet gear rolls on the inside of the pitch circle of an outer gear ring, or ring gear, sometimes called an annulus gear. Such an assembly of a planet engaging both a sun gear and a ring gear is called a planetary gear train. By choosing to hold one component or another (the planetary carrier, the ring gear, or the sun gear) stationary, three different gear ratios can be realized.
Overview
Epicyclic gearing or planetary gearing is a gear system consisting of one or more outer, or planet, gears or pinions, revolving about a central sun gear or sun wheel. Typically, the planet gears are mounted on a movable arm or carrier, which itself may rotate relative to the sun gear. Epicyclic gearing systems also incorporate the use of an outer ring gear or annulus, which meshes with the planet gears. Planetary gears (or epicyclic gears) are typically classified as simple or compound planetary gears. Simple planetary gears have one sun, one ring, one carrier, and one planet set. Compound planetary gears involve one or more of the following three types of structures: meshed-planet (at least two planets are in mesh with each other in each planet train), stepped-planet (there is a shaft connection between two planets in each planet train), and multi-stage structures (the system contains two or more planet sets). Compared to simple planetary gears, compound planetary gears have the advantages of larger reduction ratio, higher torque-to-weight ratio, and more flexible configurations.
The axes of all gears are usually parallel, but for special cases like pencil sharpeners and differentials, they can be placed at an angle, introducing elements of bevel gear (see below). Further, the sun, planet carrier and ring axes are usually coaxial.
Epicyclic gearing is also available which consists of a sun, a carrier, and two planets which mesh with each other. One planet meshes with the sun gear, while the second planet meshes with the ring gear. For this case, when the carrier is fixed, the ring gear rotates in the same direction as the sun gear, thus providing a reversal in direction compared to standard epicyclic gearing.
History
Around 500 BC, the Greeks invented the idea of epicycles, circles travelling on circular orbits. With this theory Claudius Ptolemy in the Almagest in 148 AD was able to approximate planetary paths observed crossing the sky. The Antikythera Mechanism, circa 80 BC, had gearing which was able to closely match the Moon's elliptical path through the heavens, and even to correct for the nine-year precession of that path. (The Greeks interpreted the motion they saw, not as elliptical, but rather as epicyclic motion.)
In the 2nd century AD treatise The Mathematical Syntaxis (a.k.a. Almagest), Claudius Ptolemy used rotating deferent and epicycles that form epicyclic gear trains to predict the motions of the planets. Accurate predictions of the movement of the Sun, Moon, and the five planets, Mercury, Venus, Mars, Jupiter, and Saturn, across the sky assumed that each followed a trajectory traced by a point on the planet gear of an epicyclic gear train. This curve is called an epitrochoid.
Epicyclic gearing was used in the Antikythera Mechanism, circa 80 BC, to adjust the displayed position of the Moon for the ellipticity of its orbit, and even for its orbital apsidal precession. Two facing gears were rotated around slightly different centers; one drove the other, not with meshed teeth but with a pin inserted into a slot on the second. As the slot drove the second gear, the radius of driving would change, thus invoking a speeding up and slowing down of the driven gear in each revolution.
Richard of Wallingford, an English abbot of St. Albans monastery, later described epicyclic gearing for an astronomical clock in the 14th century. In 1588, Italian military engineer Agostino Ramelli invented the bookwheel, a vertically revolving bookstand containing epicyclic gearing with two levels of planetary gears to maintain proper orientation of the books.
French mathematician and engineer Desargues designed and constructed the first mill with epicycloidal teeth.
Requirements for non-interference
In order that the planet gear teeth mesh properly with both the sun and ring gears, assuming equally spaced planet gears, the following equation must be satisfied:
(Ns + Nr) / P = k
where
Ns, Nr are the number of teeth of the sun gear and the ring gear, respectively and
P is the number of planet gears in the assembly and
k is a whole number
If one is to create an asymmetric carrier frame with non-equiangular planet gears, say to create some kind of mechanical vibration in the system, the tooth counts must be chosen so that the above equation holds for the "imaginary gears". For example, in the case where a carrier frame is intended to contain planet gears spaced at 0°, 50°, 120°, and 230°, one is to calculate as if there were actually 36 planet gears (spaced every 10°), rather than the four real ones, as the sketch below illustrates.
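As a rough illustration of this divisibility rule, the following Python sketch checks whether a given tooth pairing supports equally spaced planets, and derives the "imaginary" planet count for a non-equiangular layout from the greatest common divisor of the placement angles, reproducing the 36-gear example above. The function names and the 24/72-tooth pairing are illustrative choices, not values from the source.

```python
from math import gcd

def equal_spacing_ok(sun_teeth: int, ring_teeth: int, n_planets: int) -> bool:
    """Equally spaced planets mesh with both sun and ring only if
    (sun_teeth + ring_teeth) is divisible by the number of planets."""
    return (sun_teeth + ring_teeth) % n_planets == 0

def imaginary_planet_count(angles_deg: list[int]) -> int:
    """For non-equiangular placement, treat the carrier as if it held one
    'imaginary' planet per smallest common angular step (360 / gcd of angles)."""
    step = gcd(*angles_deg, 360)
    return 360 // step

# Illustrative tooth counts: 24-tooth sun, 72-tooth ring, 4 planets.
print(equal_spacing_ok(24, 72, 4))                 # True: (24 + 72) is divisible by 4
# Planets at 0, 50, 120 and 230 degrees behave like 36 imaginary planets (10-degree steps),
# so (sun_teeth + ring_teeth) would have to be divisible by 36 instead of 4.
print(imaginary_planet_count([0, 50, 120, 230]))   # 36
print(equal_spacing_ok(24, 72, 36))                # False: this pairing would not assemble
```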
Gear speed ratios of conventional epicyclic gearing
The gear ratio of an epicyclic gearing system is somewhat non-intuitive, particularly because there are several ways in which an input rotation can be converted into an output rotation. The four basic components of the epicyclic gear are:
Sun gear: The central gear
Carrier frame: Holds one or more planetary gear(s) symmetrically and separated, all meshed with the sun gear
Planet gear(s): Usually two to four peripheral gears, all of the same size, that mesh between the sun gear and the ring gear
Ring gear, Moon gear, Annulus gear, or Annular gear: An outer ring with inward-facing teeth that mesh with the planetary gear(s)
The overall gear ratio of a simple planetary gearset can be calculated using the following two equations, representing the sun-planet and planet-ring interactions respectively:
Ns ωs + Np ωp = (Ns + Np) ωc
Nr ωr − Np ωp = (Nr − Np) ωc
where
ωr, ωs, ωp and ωc are the angular velocities of the ring gear, sun gear, planetary gears, and carrier frame respectively, and Nr, Ns and Np are the number of teeth of the ring gear, the sun gear, and each planet gear respectively.
from which we can derive the following:
(Nr + Ns) ωc = Nr ωr + Ns ωs
and
(ωs − ωc) / (ωr − ωc) = −Nr / Ns
These hold only if the gears assemble on a common centre line, which for equal tooth modules requires
Nr = Ns + 2 Np
In many epicyclic gearing systems, one of these three basic components is held stationary (hence set for whichever gear is stationary); one of the two remaining components is an input, providing power to the system, while the last component is an output, receiving power from the system. The ratio of input rotation to output rotation is dependent upon the number of teeth in each of the gears, and upon which component is held stationary.
Alternatively, in the special case where the number of teeth on each gear meets the relationship the equation can be re-written as the following:
where
is the sun-to-planet gear ratio.
These relationships can be used to analyze any epicyclic system, including those, such as hybrid vehicle transmissions, where two of the components are used as inputs with the third providing output relative to the two inputs.
In one arrangement, the planetary carrier (green in the diagram above) is held stationary, and the sun gear (yellow) is used as input. In that case, the planetary gears simply rotate about their own axes (i.e., spin) at a rate determined by the number of teeth in each gear. If the sun gear has Ns teeth, and each planet gear has Np teeth, then the ratio is equal to −Ns/Np. For instance, if the sun gear has 24 teeth, and each planet has 16 teeth, then the ratio is −24/16, or −3/2; this means that one clockwise turn of the sun gear produces 1.5 counterclockwise turns of each of the planet gear(s) about its axis.
Rotation of the planet gears can in turn drive the ring gear (not depicted in diagram), at a speed corresponding to the gear ratios: If the ring gear has Nr teeth, then the ring will rotate by Np/Nr turns for each turn of the planet gears. For instance, if the ring gear has 64 teeth, and the planets 16 teeth, one clockwise turn of a planet gear results in 16/64, or 1/4, clockwise turns of the ring gear. Extending this case from the one above:
One turn of the sun gear results in Ns/Np turns of the planets (in the opposite direction)
One turn of a planet gear results in Np/Nr turns of the ring gear (in the same direction)
So, with the planetary carrier locked, one turn of the sun gear results in Ns/Nr turns of the ring gear, in the opposite direction to the sun.
The ring gear may also be held fixed, with input provided to the planetary gear carrier; output rotation is then produced from the sun gear. This configuration will produce an increase in gear ratio, equal to 1 + Nr/Ns.
If the ring gear is held stationary and the sun gear is used as the input, the planet carrier will be the output. The gear ratio in this case will be 1/(1 + Nr/Ns), which may also be written as Ns/(Ns + Nr). This is the lowest gear ratio attainable with an epicyclic gear train. This type of gearing is sometimes used in tractors and construction equipment to provide high torque to the drive wheels.
In bicycle hub gears, the sun is usually stationary, being keyed to the axle or even machined directly onto it. The planetary gear carrier is used as input. In this case the gear ratio is simply given by 1 + Ns/Nr; the number of teeth in the planet gear is irrelevant. The three locked-component configurations are compared in the sketch below.
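The three cases above reduce to simple tooth-count arithmetic. This Python sketch encodes those relations and reproduces the worked numbers from the text (24-tooth sun, 16-tooth planets, 64-tooth ring); the function names are illustrative, and a negative ratio means rotation in the opposite sense to the input.

```python
def carrier_locked_ratio(sun, planet, ring):
    """Carrier held fixed: one turn of the sun turns each planet by -sun/planet
    (opposite direction) and the ring by -sun/ring."""
    planet_turns = -sun / planet
    ring_turns = -sun / ring
    return planet_turns, ring_turns

def ring_locked_ratio(sun, ring):
    """Ring held fixed, sun as input, carrier as output: reduction of 1 + ring/sun."""
    return 1 + ring / sun

def sun_locked_ratio(sun, ring):
    """Sun held fixed, carrier as input, ring as output: ratio 1 + sun/ring,
    independent of the planet tooth count (the bicycle-hub case)."""
    return 1 + sun / ring

# Worked example from the text: 24-tooth sun, 16-tooth planets, 64-tooth ring.
print(carrier_locked_ratio(24, 16, 64))  # (-1.5, -0.375)
print(ring_locked_ratio(24, 64))         # ~3.67 sun turns per carrier turn
print(sun_locked_ratio(24, 64))          # 1.375 ring turns per carrier turn
```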
Accelerations of standard epicyclic gearing
From the above formulae, we can also derive the accelerations of the sun, ring and carrier; differentiating the velocity relation gives
(Nr + Ns) αc = Nr αr + Ns αs
from which any one of the angular accelerations αs, αr and αc can be found when the other two are known.
Torque ratios of standard epicyclic gearing
In epicyclic gears, two speeds must be known in order to determine the third speed. However, in a steady state condition, only one torque must be known in order to determine the other two torques. The equations which determine torque are:
Tr = (Nr / Ns) Ts
Tc = −(1 + Nr / Ns) Ts
where Tr is the torque of the ring (annulus), Ts the torque of the sun, and Tc the torque of the carrier. For all three, these are the torques applied to the mechanism (input torques). Output torques have the reverse sign of input torques. These torque ratios can be derived using the law of conservation of energy. Applied to a single stage this equation is expressed as:
Ts ωs + Tr ωr + Tc ωc = 0
In the cases where gears are accelerating, or to account for friction, these equations must be modified.
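A minimal numeric check of this steady-state bookkeeping is sketched below, assuming an ideal (lossless) stage and the standard tooth-count torque split; the tooth counts, the 10 N·m input torque and the function name are illustrative, not from the source. The last line verifies that the applied torques do no net work for a kinematically consistent speed set.

```python
def steady_state_torques(sun_torque, sun_teeth, ring_teeth):
    """For an ideal (lossless) planetary stage, return the torques applied to the
    ring and the carrier given the torque applied to the sun.  All three applied
    torques sum to zero, and so does the power for any consistent speed set."""
    ring_torque = sun_torque * ring_teeth / sun_teeth
    carrier_torque = -(sun_torque + ring_torque)
    return ring_torque, carrier_torque

sun_t = 10.0                                   # N*m applied to the sun (illustrative)
ring_t, carrier_t = steady_state_torques(sun_t, sun_teeth=24, ring_teeth=64)
print(ring_t, carrier_t)                       # ~26.67, ~-36.67

# Power check for one consistent speed set (rad/s): ring held fixed, sun driven.
w_sun, w_ring = 100.0, 0.0
w_carrier = w_sun * 24 / (24 + 64)             # from Ns*ws + Nr*wr = (Ns + Nr)*wc
print(sun_t * w_sun + ring_t * w_ring + carrier_t * w_carrier)   # ~0 (energy balance)
```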
Fixed carrier train ratio
A convenient approach to determine the various speed ratios available in a planetary gear train begins by considering the speed ratio of the gear train when the carrier is held fixed. This is known as the fixed carrier train ratio.
In the case of a simple planetary gear train formed by a carrier supporting a planet gear engaged with a sun and ring gear, the fixed carrier train ratio is computed as the speed ratio of the gear train formed by the sun, planet and ring gears on the fixed carrier. This is given by R = ωr/ωs (with the carrier fixed) = −Ns/Nr.
In this calculation the planet gear is an idler gear.
The fundamental formula of the planetary gear train with a rotating carrier is obtained by recognizing that this formula remains true if the angular velocities of the sun, planet and ring gears are computed relative to the carrier angular velocity. This becomes
(ωr − ωc) = R (ωs − ωc)
This formula provides a simple way to determine the speed ratios for the simple planetary gear train under different conditions:
1. The carrier is held fixed, ωc=0: ωr/ωs = R = −Ns/Nr
2. The ring gear is held fixed, ωr=0: ωs/ωc = 1 − 1/R = 1 + Nr/Ns
3. The sun gear is held fixed, ωs=0: ωr/ωc = 1 − R = 1 + Ns/Nr
Each of the speed ratios available to a simple planetary gear train can be obtained by using band brakes to hold and release the carrier, sun or ring gears as needed. This provides the basic structure for an automatic transmission.
Spur gear differential
A spur gear differential is constructed from two identical coaxial epicyclic gear trains assembled with a single carrier such that their planet gears are engaged. This forms a planetary gear train with a fixed carrier train ratio R = −1.
In this case, the fundamental formula for the planetary gear train yields
ωr − ωc = −(ωs − ωc)
or
ωc = (ωr + ωs) / 2
Thus, the angular velocity of the carrier of a spur gear differential is the average of the angular velocities of the sun and ring gears.
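A small sketch of this averaging relation, with illustrative speeds that are not from the source: holding one output of the differential forces the other to turn at twice the carrier speed.

```python
def missing_output_speed(carrier_speed, known_output_speed):
    """The carrier speed is the average of the two output speeds
    (w_c = (w_1 + w_2) / 2), so the unknown output is 2*w_c - w_known."""
    return 2 * carrier_speed - known_output_speed

# Carrier (input) at 100 rad/s.
print(missing_output_speed(100.0, 100.0))  # 100.0: both outputs match the carrier
print(missing_output_speed(100.0, 0.0))    # 200.0: holding one output doubles the other
```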
In discussing the spur gear differential, the use of the term ring gear is a convenient way to distinguish the sun gears of the two epicyclic gear trains. Ring gears are normally fixed in most applications as this arrangement will have a good reduction capacity. The second sun gear serves the same purpose as the ring gear of a simple planetary gear train but clearly does not have the internal gear mate that is typical of a ring gear.
Gear ratio of reversed epicyclic gearing
Some epicyclic gear trains employ two planetary gears which mesh with each other. One of these planets meshes with the sun gear, the other planet meshes with the ring gear. This results in different ratios being generated by the planetary train and also causes the sun gear to rotate in the same direction as the ring gear when the planet carrier is stationary. The fundamental equation becomes:
where
which results in:
when the carrier is locked,
when the sun is locked,
when the ring gear is locked.
Compound planetary gears
"Compound planetary gear" is a general concept and it refers to any planetary gears involving one or more of the following three types of structures: meshed-planet (there are at least two or more planets in mesh with each other in each planet train), stepped-planet (there exists a shaft connection between two planets in each planet train), and multi-stage structures (the system contains two or more planet sets).
Some designs use "stepped-planet" gears, which have two differently sized gears on either end of a common shaft. The small end engages the sun, while the large end engages the ring gear. This may be necessary to achieve smaller step changes in gear ratio when the overall package size is limited. Compound planets have "timing marks" (or a "relative gear mesh phase", in technical terms). The assembly conditions of compound planetary gears are more restrictive than those of simple planetary gears, and they must be assembled in the correct initial orientation relative to each other, or their teeth will not simultaneously engage the sun and ring gear at opposite ends of the planet, leading to very rough running and short life. In 2015, a traction-based variant of the "stepped-planet" design was developed at the Delft University of Technology, which relies on compression of the stepped planet elements to achieve torque transmission. The use of traction elements eliminates the need for "timing marks" as well as the restrictive assembly conditions typically found. Compound planetary gears can easily achieve a larger transmission ratio with equal or smaller volume. For example, compound planets with teeth in a 2:1 ratio meshing with a 50T ring gear would give the same effect as a 100T ring gear, but with half the actual diameter.
More planet and sun gear units can be placed in series in the same housing (where the output shaft of the first stage becomes the input shaft of the next stage) providing a larger (or smaller) gear ratio. This is the way most automatic transmissions work. In some cases multiple stages may even share the same ring gear which can be extended down the length of the transmission, or even be a structural part of the casing of smaller gearboxes.
During World War II, a special variation of epicyclic gearing was developed for portable radar gear, where a very high reduction ratio in a small package was needed. This had two outer ring gears, each half the thickness of the other gears. One of these two ring gears was held fixed and had one tooth fewer than did the other. Therefore, several turns of the "sun" gear made the "planet" gears complete a single revolution, which in turn made the rotating ring gear rotate by a single tooth like a cycloidal drive.
Power splitting
More than one member of a system can serve as an output. As an example, the input is connected to the ring gear, the sun gear is connected to the output and the planet carrier is connected to the output through a torque converter. Idler gears are used between sun gear and the planets to cause the sun gear to rotate in the same direction as the ring gear when the planet carrier is stationary. At low input speed, because of the load on the output, the sun will be stationary and the planet carrier will rotate in the direction of the ring gear. Given a high enough load, the turbine of the torque converter will remain stationary, the energy will be dissipated and the torque converter pump will slip. If the input speed is increased to overcome the load the converter turbine will turn the output shaft. Because the torque converter itself is a load on the planet carrier, a force will be exerted on the sun gear. Both the planet carrier and the sun gear extract energy from the system and apply it to the output shaft.
Advantages
Planetary gear trains provide high power density in comparison to standard parallel axis gear trains. They provide a reduction in volume, multiple kinematic combinations, purely torsional reactions, and coaxial shafting. Disadvantages include high bearing loads, constant lubrication requirements, inaccessibility, and design complexity.
The efficiency loss in a planetary gear train is typically about 3% per stage. This type of efficiency ensures that a high proportion (about 97%) of the energy being input is transmitted through the gearbox, rather than being wasted on mechanical losses inside the gearbox.
The load in a planetary gear train is shared among multiple planets; therefore, torque capability is greatly increased. The more planets in the system, the greater the load ability and the higher the torque density.
The planetary gear train also provides stability due to an even distribution of mass and increased rotational stiffness. Torque applied radially onto the gears of a planetary gear train is transferred radially by the gear, without lateral pressure on the gear teeth.
In a typical application, the drive power connects to the sun gear. The sun gear then drives the planetary gears assembled with the external gear ring to operate. The whole set of planetary gear system revolves on its own axis and along the external gear ring where the output shaft connected to the planetary carrier achieves the goal of speed reduction. A higher reduction ratio can be achieved by doubling the multiple staged gears and planetary gears which can operate within the same ring gear.
The method of motion of a planetary gear structure is different from traditional parallel gears. Traditional gears rely on a small number of contact points between two gears to transfer the driving force. In this case, all the loading is concentrated on a few contacting surfaces, making the gears wear quickly and sometimes crack. But the planetary speed reducer has multiple gear contacting surfaces with a larger area that can distribute the loading evenly around the central axis. Multiple gear surfaces share the load, including any instantaneous impact loading, evenly, which makes them more resistant to damage from higher torque. The housing and bearing parts are also less likely to be damaged from high loading as only the planet carrier bearings experience significant lateral force from the transmission of torque, radial forces oppose each other and are balanced, and axial forces only arise when using helical gears.
3D printing
Planetary gears have become popular in the maker community, due to their inherent high torque capabilities and compactness/efficiency. Especially within 3D printing, they can be used to rapidly prototype a gear box, to then be manufactured with machining technologies later.
A geared-down motor must turn farther and faster in order to produce the same output movement in the 3D printer which is advantageous if it is not outweighed by the slower movement speed. If the stepper motor has to turn farther then it also has to take more steps to move the printer a given distance; therefore, the geared-down stepper motor has a smaller minimum step-size than the same stepper motor without a gearbox. While down-gearing improves precision in unidirectional motion, it adds backlash to the system and so reduces its absolute positioning accuracy.
Since herringbone gears are easy to 3D print, it has become very popular to 3D print a moving herringbone planetary gear system for teaching children how gears work. An advantage of herringbone gears is that they don't fall out of the ring and don't need a mounting plate, allowing the moving parts to be clearly seen.
Gallery
| Technology | Mechanisms | null |
368328 | https://en.wikipedia.org/wiki/Electrostatics | Electrostatics | Electrostatics is a branch of physics that studies slow-moving or stationary electric charges.
Since classical times, it has been known that some materials, such as amber, attract lightweight particles after rubbing. The Greek word for amber, (), was thus the root of the word electricity. Electrostatic phenomena arise from the forces that electric charges exert on each other. Such forces are described by Coulomb's law.
There are many examples of electrostatic phenomena, from those as simple as the attraction of plastic wrap to one's hand after it is removed from a package, to the apparently spontaneous explosion of grain silos, the damage of electronic components during manufacturing, and photocopier and laser printer operation.
The electrostatic model accurately predicts electrical phenomena in "classical" cases where the velocities are low and the system is macroscopic so no quantum effects are involved. It also plays a role in quantum mechanics, where additional terms also need to be included.
Coulomb's law
Coulomb's law states that:
The magnitude of the electrostatic force between two point charges is directly proportional to the product of the magnitudes of the charges and inversely proportional to the square of the distance between them. The force is along the straight line joining them. If the two charges have the same sign, the electrostatic force between them is repulsive; if they have different signs, the force between them is attractive.
If r is the distance (in meters) between two charges, then the force between two point charges q1 and q2 is:
F = q1 q2 / (4πε0 r²)
where ε0 ≈ 8.854×10⁻¹² F⋅m⁻¹ is the vacuum permittivity.
The SI unit of ε0 is equivalently A²⋅s⁴⋅kg⁻¹⋅m⁻³ or C²⋅N⁻¹⋅m⁻² or F⋅m⁻¹.
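As a quick numerical illustration of this relation (the charge values and separation below are arbitrary examples, not from the source), a short Python sketch:

```python
import math

EPSILON_0 = 8.854e-12                      # vacuum permittivity, F/m
COULOMB_K = 1 / (4 * math.pi * EPSILON_0)  # ~8.99e9 N*m^2/C^2

def coulomb_force(q1, q2, r):
    """Signed magnitude of the electrostatic force between two point charges,
    in newtons: positive = repulsive (same signs), negative = attractive."""
    return COULOMB_K * q1 * q2 / r ** 2

# Two +1 microcoulomb charges 10 cm apart repel with roughly 0.9 N.
print(coulomb_force(1e-6, 1e-6, 0.10))
```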
Electric field
The electric field, E, in units of newtons per coulomb or volts per meter, is a vector field that can be defined everywhere, except at the location of point charges (where it diverges to infinity). It is defined as the electrostatic force on a hypothetical small test charge at the point due to Coulomb's law, divided by the magnitude of the test charge.
Electric field lines are useful for visualizing the electric field. Field lines begin on positive charge and terminate on negative charge. They are parallel to the direction of the electric field at each point, and the density of these field lines is a measure of the magnitude of the electric field at any given point.
A collection of particles of charge qi, located at points ri (called source points), generates the electric field at r (called the field point) of:
E(r) = (1 / (4πε0)) Σi qi (r − ri) / |r − ri|³
where r − ri is the displacement vector from a source point to the field point, and (r − ri) / |r − ri| is the unit vector of the displacement that indicates the direction of the field due to the source at point ri. For a single point charge q at the origin, the magnitude of this electric field is
E = q / (4πε0 r²)
and points away from that charge if it is positive. The fact that the force (and hence the field) can be calculated by summing over all the contributions due to individual source particles is an example of the superposition principle. The electric field produced by a distribution of charges is given by the volume charge density ρ(r′) and can be obtained by converting this sum into a triple integral:
E(r) = (1 / (4πε0)) ∫ ρ(r′) (r − r′) / |r − r′|³ d³r′
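The superposition sum above translates directly into code. The following sketch evaluates the field of a small set of point charges at a single field point; the dipole-like example charges and positions are illustrative assumptions, not from the source.

```python
import math

EPSILON_0 = 8.854e-12
K = 1 / (4 * math.pi * EPSILON_0)

def e_field(field_point, charges):
    """Electric field (V/m) at field_point due to a list of (q, source_point)
    pairs, by superposition: E = K * sum_i q_i * (r - r_i) / |r - r_i|^3."""
    ex = ey = ez = 0.0
    x, y, z = field_point
    for q, (sx, sy, sz) in charges:
        rx, ry, rz = x - sx, y - sy, z - sz
        r3 = math.dist(field_point, (sx, sy, sz)) ** 3
        ex += K * q * rx / r3
        ey += K * q * ry / r3
        ez += K * q * rz / r3
    return ex, ey, ez

# A +1 nC and a -1 nC charge 2 mm apart (a small dipole); field 1 cm away on the axis.
print(e_field((0.01, 0.0, 0.0), [(1e-9, (0.001, 0.0, 0.0)),
                                 (-1e-9, (-0.001, 0.0, 0.0))]))
```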
Gauss's law
Gauss's law states that "the total electric flux through any closed surface in free space of any shape drawn in an electric field is proportional to the total electric charge enclosed by the surface." Many numerical problems can be solved by considering a Gaussian surface around a body. Mathematically, Gauss's law takes the form of an integral equation:
∮S E · dA = Q_enclosed / ε0 = (1 / ε0) ∫V ρ dV
where dV is a volume element. If the charge is distributed over a surface or along a line, replace ρ dV by σ dA or λ dℓ. The divergence theorem allows Gauss's law to be written in differential form:
∇ · E = ρ / ε0
where ∇ · is the divergence operator.
Poisson and Laplace equations
The definition of electrostatic potential, combined with the differential form of Gauss's law (above), provides a relationship between the potential Φ and the charge density ρ:
∇²Φ = −ρ / ε0
This relationship is a form of Poisson's equation. In the absence of unpaired electric charge, the equation becomes Laplace's equation:
∇²Φ = 0
Electrostatic approximation
If the electric field in a system can be assumed to result from static charges, that is, a system that exhibits no significant time-varying magnetic fields, the system is justifiably analyzed using only the principles of electrostatics. This is called the "electrostatic approximation".
The validity of the electrostatic approximation rests on the assumption that the electric field is irrotational:
∇ × E = 0
From Faraday's law, this assumption implies the absence or near-absence of time-varying magnetic fields:
∂B/∂t ≈ 0
In other words, electrostatics does not require the absence of magnetic fields or electric currents. Rather, if magnetic fields or electric currents do exist, they must not change with time, or in the worst-case, they must change with time only very slowly. In some problems, both electrostatics and magnetostatics may be required for accurate predictions, but the coupling between the two can still be ignored. Electrostatics and magnetostatics can both be seen as non-relativistic Galilean limits for electromagnetism. In addition, conventional electrostatics ignore quantum effects which have to be added for a complete description.
Electrostatic potential
As the electric field is irrotational, it is possible to express the electric field as the gradient of a scalar function, Φ, called the electrostatic potential (also known as the voltage). An electric field, E, points from regions of high electric potential to regions of low electric potential, expressed mathematically as
E = −∇Φ
The gradient theorem can be used to establish that the electrostatic potential is the amount of work per unit charge required to move a charge from point a to point b, with the following line integral:
Φ(b) − Φ(a) = −∫ from a to b of E · dℓ
From these equations, we see that the electric potential is constant in any region for which the electric field vanishes (such as occurs inside a conducting object).
Electrostatic energy
A test particle's potential energy, U, can be calculated from a line integral of the work, qE · dℓ. We integrate from a point at infinity, and assume a collection of particles of charge qi are already situated at the points ri. This potential energy (in joules) is:
U = q Φ(r) = (q / (4πε0)) Σi qi / |r − ri|
where |r − ri| is the distance of each charge qi from the test charge q, which is situated at the point r, and Φ(r) is the electric potential that would be at r if the test charge were not present. If only two charges are present, the potential energy is q1 q2 / (4πε0 r). The total electric potential energy due to a collection of N charges is calculated by assembling these particles one at a time:
U = (1/2) Σi qi Φi
where the following sum, from j = 1 to N, excludes i = j:
Φi = (1 / (4πε0)) Σ(j ≠ i) qj / rij
This electric potential, Φi, is what would be measured at ri if the charge qi were missing. This formula obviously excludes the (infinite) energy that would be required to assemble each point charge from a disperse cloud of charge. The sum over charges can be converted into an integral over charge density using the prescription Σi qi (…) → ∫ ρ(r) (…) d³r:
U = (1/2) ∫ ρ(r) Φ(r) d³r = (ε0/2) ∫ |E|² d³r
This second expression for electrostatic energy uses the fact that the electric field is the negative gradient of the electric potential, as well as vector calculus identities in a way that resembles integration by parts. These two integrals for electric field energy seem to indicate two mutually exclusive formulas for electrostatic energy density, namely ε0|E|²/2 and ρΦ/2; they yield equal values for the total electrostatic energy only if both are integrated over all space.
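For the discrete pairwise form of the energy given above, a short sketch; the triangular arrangement of three 1 µC charges is an illustrative example, not from the source.

```python
import math
from itertools import combinations

EPSILON_0 = 8.854e-12
K = 1 / (4 * math.pi * EPSILON_0)

def assembly_energy(charges):
    """Total electrostatic potential energy (joules) of point charges given as
    (q, (x, y, z)) pairs: U = K * sum over distinct pairs of q_i*q_j/r_ij,
    equivalent to (1/2) * sum_i q_i * Phi_i with Phi_i omitting charge i itself."""
    return sum(K * q1 * q2 / math.dist(p1, p2)
               for (q1, p1), (q2, p2) in combinations(charges, 2))

# Three +1 uC charges on an equilateral triangle with 1 m sides:
# three identical pairs, each contributing about 9 mJ.
tri = [(1e-6, (0.0, 0.0, 0.0)),
       (1e-6, (1.0, 0.0, 0.0)),
       (1e-6, (0.5, math.sqrt(3) / 2, 0.0))]
print(assembly_energy(tri))   # ~0.027 J
```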
Electrostatic pressure
On a conductor, a surface charge will experience a force in the presence of an electric field. This force is the average of the discontinuous electric field at the surface charge. This average in terms of the field just outside the surface amounts to:
P = (ε0 / 2) E²
This pressure tends to draw the conductor into the field, regardless of the sign of the surface charge.
| Physical sciences | Electrostatics | null |
368389 | https://en.wikipedia.org/wiki/Ultrafiltration | Ultrafiltration | Ultrafiltration (UF) is a variety of membrane filtration in which forces such as pressure or concentration gradients lead to a separation through a semipermeable membrane. Suspended solids and solutes of high molecular weight are retained in the so-called retentate, while water and low molecular weight solutes pass through the membrane in the permeate (filtrate). This separation process is used in industry and research for purifying and concentrating macromolecular (103–106 Da) solutions, especially protein solutions.
Ultrafiltration is not fundamentally different from microfiltration: both separate based on size exclusion or particle capture. It is fundamentally different from membrane gas separation, which separates based on different amounts of absorption and different rates of diffusion. Ultrafiltration membranes are defined by the molecular weight cut-off (MWCO) of the membrane used. Ultrafiltration is applied in cross-flow or dead-end mode.
Applications
Industries such as chemical and pharmaceutical manufacturing, food and beverage processing, and waste water treatment, employ ultrafiltration in order to recycle flow or add value to later products. Blood dialysis also utilizes ultrafiltration.
Drinking water
Ultrafiltration can be used for the removal of particulates and macromolecules from raw water to produce potable water. It has been used to either replace existing secondary (coagulation, flocculation, sedimentation) and tertiary filtration (sand filtration and chlorination) systems employed in water treatment plants or as standalone systems in isolated regions with growing populations. When treating water with high suspended solids, UF is often integrated into the process, utilising primary (screening, flotation, filtration) and some secondary treatments as pre-treatment stages. UF processes are currently preferred over traditional treatment methods for the following reasons:
No chemicals required (aside from cleaning)
Constant product quality regardless of feed quality
Compact plant size
Capable of exceeding regulatory standards of water quality, achieving 90–100% pathogen removal
UF processes are currently limited by the high cost incurred due to membrane fouling and replacement. Additional pretreatment of feed water is required to prevent excessive damage to the membrane units.
In many cases UF is used for pre filtration in reverse osmosis (RO) plants to protect the RO membranes.
Protein concentration
UF is used extensively in the dairy industry; particularly in the processing of cheese whey to obtain whey protein concentrate (WPC) and lactose-rich permeate. In a single stage, a UF process is able to concentrate the whey 10–30 times the feed.
The original alternative to membrane filtration of whey was using steam heating followed by drum drying or spray drying. The product of these methods had limited applications due to its granulated texture and insolubility. Existing methods also had inconsistent product composition, high capital and operating costs and due to the excessive heat used in drying would often denature some of the proteins.
Compared to traditional methods, UF processes used for this application:
Are more energy efficient
Have consistent product quality, 35–80% protein product depending on operating conditions
Do not denature proteins as they use moderate operating conditions
The potential for fouling is widely discussed, being identified as a significant contributor to decline in productivity. Cheese whey contains high concentrations of calcium phosphate which can potentially lead to scale deposits on the membrane surface. As a result, substantial pretreatment must be implemented to balance pH and temperature of the feed to maintain solubility of calcium salts.
Other applications
Filtration of effluent from paper pulp mill
Cheese manufacture, see ultrafiltered milk
Removal of some bacteria from milk
Process and waste water treatment
Enzyme recovery
Fruit juice concentration and clarification
Dialysis and other blood treatments
Desalting and solvent-exchange of proteins (via diafiltration)
Laboratory grade manufacturing
Radiocarbon dating of bone collagen
Recovery of electrodeposition paints
Treatment of oil and latex emulsions
Recovery of lignin compounds in spent pulping liquors
Principles
The basic operating principle of ultrafiltration uses a pressure induced separation of solutes from a solvent through a semi permeable membrane. The relationship between the applied pressure on the solution to be separated and the flux through the membrane is most commonly described by the Darcy equation:
J = TMP / (μ · Rt)
where J is the flux (flow rate per membrane area), TMP is the transmembrane pressure (pressure difference between feed and permeate stream), μ is the solvent viscosity, and Rt is the total resistance (sum of membrane and fouling resistance).
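A minimal sketch of this flux relation follows, with illustrative operating values; the 2 bar TMP, water-like viscosity and membrane resistance are assumptions, not data from the source.

```python
def permeate_flux(tmp_pa, viscosity_pa_s, total_resistance_per_m):
    """Darcy-type flux through a UF membrane, in m^3 per m^2 of membrane per
    second (i.e. m/s): J = TMP / (mu * R_total)."""
    return tmp_pa / (viscosity_pa_s * total_resistance_per_m)

# Illustrative numbers: 2 bar TMP, water at ~20 C, total resistance 5e12 1/m.
j = permeate_flux(tmp_pa=2.0e5, viscosity_pa_s=1.0e-3, total_resistance_per_m=5.0e12)
print(j)                    # 4e-5 m/s
print(j * 1000 * 3600)      # ~144 litres per m^2 per hour
```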
Membrane fouling
Concentration polarization
When filtration occurs, the local concentration of rejected material at the membrane surface increases and can become saturated. In UF, the increased ion concentration can develop an osmotic pressure on the feed side of the membrane. This reduces the effective TMP of the system, therefore reducing the permeation rate. The growth of the concentrated layer at the membrane wall decreases the permeate flux, because the added resistance reduces the driving force for solvent to transport through the membrane surface. Concentration polarization (CP) affects almost all available membrane separation processes. In RO, the solutes retained at the membrane layer result in a higher osmotic pressure in comparison to the bulk stream concentration, so higher pressures are required to overcome this osmotic pressure. Concentration polarization plays a more dominant role in ultrafiltration than in microfiltration because of the smaller membrane pore size. Concentration polarization differs from fouling as it has no lasting effects on the membrane itself and can be reversed by relieving the TMP. It does, however, have a significant effect on many types of fouling. A rough illustration of the osmotic penalty is sketched below.
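To make the osmotic penalty concrete, the sketch below subtracts a van 't Hoff estimate of the osmotic pressure difference from the applied TMP before applying the Darcy-type flux relation from the previous sketch. The van 't Hoff approximation, the complete solute retention implied by the wall and permeate concentrations, and all numerical values are illustrative assumptions, not from the source.

```python
R_GAS = 8.314          # J/(mol*K)

def vant_hoff_osmotic_pressure(molar_conc_mol_m3, temp_k):
    """Van 't Hoff estimate of osmotic pressure (Pa) for a dilute solution."""
    return molar_conc_mol_m3 * R_GAS * temp_k

def effective_flux(tmp_pa, wall_conc_mol_m3, permeate_conc_mol_m3,
                   temp_k, viscosity_pa_s, resistance_per_m):
    """Flux (m/s) with the driving pressure reduced by the osmotic pressure
    difference across the membrane caused by the concentrated boundary layer."""
    delta_pi = vant_hoff_osmotic_pressure(wall_conc_mol_m3 - permeate_conc_mol_m3, temp_k)
    return (tmp_pa - delta_pi) / (viscosity_pa_s * resistance_per_m)

# Same illustrative membrane as in the previous sketch, but with the wall
# concentration polarized to 20 mol/m^3 above the permeate: the osmotic penalty
# (~50 kPa) removes about a quarter of the available TMP.
print(effective_flux(2.0e5, 20.0, 0.0, 298.0, 1.0e-3, 5.0e12))
```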
Types of fouling
Types of foulants
Foulants of UF membranes fall into the following four categories:
biological substances
macromolecules
particulates
ions
Particulate deposition
The following models describe the mechanisms of particulate deposition on the membrane surface and in the pores:
Standard blocking: macromolecules are uniformly deposited on pore walls
Complete blocking: membrane pore is completely sealed by a macromolecule
Cake formation: accumulated particles or macromolecules form a fouling layer on the membrane surface, in UF this is also known as a gel layer
Intermediate blocking: when macromolecules deposit into pores or onto already blocked pores, contributing to cake formation
Scaling
As a result of concentration polarization at the membrane surface, increased ion concentrations may exceed solubility thresholds and precipitate on the membrane surface. These inorganic salt deposits can block pores causing flux decline, membrane degradation and loss of production. The formation of scale is highly dependent on factors affecting both solubility and concentration polarization including pH, temperature, flow velocity and permeation rate.
Biofouling
Microorganisms will adhere to the membrane surface forming a gel layer – known as biofilm. The film increases the resistance to flow, acting as an additional barrier to permeation. In spiral-wound modules, blockages formed by biofilm can lead to uneven flow distribution and thus increase the effects of concentration polarization.
Membrane arrangements
Depending on the shape and material of the membrane, different modules can be used for ultrafiltration process. Commercially available designs in ultrafiltration modules vary according to the required hydrodynamic and economic constraints as well as the mechanical stability of the system under particular operating pressures. The main modules used in industry include:
Tubular modules
The tubular module design uses polymeric membranes cast on the inside of plastic or porous paper components with diameters typically in the range of 5–25 mm with lengths from 0.6–6.4 m. Multiple tubes are housed in a PVC or steel shell. The feed of the module is passed through the tubes, accommodating radial transfer of permeate to the shell side. This design allows for easy cleaning however the main drawback is its low permeability, high volume hold-up within the membrane and low packing density.
Hollow fibre
This design is conceptually similar to the tubular module with a shell and tube arrangement. A single module can consist of 50 to thousands of hollow fibres, which are therefore self-supporting, unlike the tubular design. The diameter of each fibre ranges from 0.2–3 mm, with the feed flowing in the tube and the product permeate collected radially on the outside. An advantage of self-supporting membranes is the ease with which they can be cleaned, owing to their ability to be backflushed. Replacement costs, however, are high, as one faulty fibre requires the whole bundle to be replaced. Because the fibres are of small diameter, this design is also prone to blockage.
Spiral-wound modules
Spiral-wound modules are composed of flat membrane sheets separated by a thin meshed spacer material, which serves as a porous plastic screen support. The sheets are rolled around a central perforated tube and fitted into a tubular steel pressure-vessel casing. The feed solution passes over the membrane surface and the permeate spirals into the central collection tube. Spiral-wound modules are a compact and cheap alternative in ultrafiltration design, offer a high volumetric throughput and can also be easily cleaned. They are, however, limited by their thin channels: feed solutions with suspended solids can partially block the membrane pores.
Plate and frame
This design uses membranes placed on flat plates separated by a mesh-like material. The feed is passed through the system and the permeate is separated and collected from the edge of the plate. Channel length can range from 10–60 cm and channel height from 0.5–1.0 mm. This module provides low volume hold-up, relatively easy replacement of the membrane and, because of the low channel height unique to this particular design, the ability to feed viscous solutions.
Process characteristics
The process characteristics of a UF system are highly dependent on the type of membrane used and its application. Manufacturers' specifications of the membrane tend to limit the process to the following typical specifications:
Process design considerations
When designing a new membrane separation facility or considering its integration into an existing plant, there are many factors which must be considered. For most applications a heuristic approach can be applied to determine many of these characteristics to simplify the design process. Some design areas include:
Pre-treatment
Treatment of feed prior to the membrane is essential to prevent damage to the membrane and minimize the effects of fouling which greatly reduce the efficiency of the separation. Types of pre-treatment are often dependent on the type of feed and its quality. For example, in wastewater treatment, household waste and other particulates are screened. Other types of pre-treatment common to many UF processes include pH balancing and coagulation. Appropriate sequencing of each pre-treatment phase is crucial in preventing damage to subsequent stages. Pre-treatment can even be employed simply using dosing points.
Membrane specifications
Material
Most UF membranes use polymer materials (polysulfone, polypropylene, cellulose acetate, polylactic acid); ceramic membranes, however, are used for high-temperature applications.
Pore size
A general rule for choice of pore size in a UF system is to use a membrane with a pore size one tenth that of the particle size to be separated. This limits the number of smaller particles entering the pores and adsorbing to the pore surface. Instead they block the entrance to the pores allowing simple adjustments of cross-flow velocity to dislodge them.
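A minimal sketch of this rule of thumb follows; the particle size used is an arbitrary example, not a value from the article.

```python
# Illustrative sketch: rule-of-thumb pore-size selection (pore ~ 1/10 of the
# smallest particle to be retained). The particle size here is an arbitrary example.
def recommended_pore_size_um(smallest_particle_um, ratio=0.1):
    return smallest_particle_um * ratio

print(recommended_pore_size_um(0.5))  # 0.05 um pore for 0.5 um particles
```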
Operation strategy
Flowtype
UF systems can either operate with cross-flow or dead-end flow. In dead-end filtration the flow of the feed solution is perpendicular to the membrane surface. On the other hand, in cross flow systems the flow passes parallel to the membrane surface. Dead-end configurations are more suited to batch processes with low suspended solids as solids accumulate at the membrane surface therefore requiring frequent backflushes and cleaning to maintain high flux. Cross-flow configurations are preferred in continuous operations as solids are continuously flushed from the membrane surface resulting in a thinner cake layer and lower resistance to permeation.
Flow velocity
Flow velocity is especially critical for hard water or liquids containing suspensions in preventing excessive fouling. Higher cross-flow velocities can be used to enhance the sweeping effect across the membrane surface therefore preventing deposition of macromolecules and colloidal material and reducing the effects of concentration polarization. Expensive pumps are however required to achieve these conditions.
Flow temperature
To avoid excessive damage to the membrane, it is recommended to operate a plant at the temperature specified by the membrane manufacturer. In some instances however temperatures beyond the recommended region are required to minimise the effects of fouling. Economic analysis of the process is required to find a compromise between the increased cost of membrane replacement and productivity of the separation.
Pressure
Pressure drops over multi-stage separation can result in a drastic decline in flux performance in the latter stages of the process. This can be improved using booster pumps to increase the TMP in the final stages. This will incur a greater capital and energy cost which will be offset by the improved productivity of the process. With a multi-stage operation, retentate streams from each stage are recycled through the previous stage to improve their separation efficiency.
Multi-stage, multi-module
Multiple stages in series can be applied to achieve higher purity permeate streams. Due to the modular nature of membrane processes, multiple modules can be arranged in parallel to treat greater volumes.
Post-treatment
Post-treatment of the product streams is dependent on the composition of the permeate and retentate and its end-use or government regulation. In cases such as milk separation both streams (milk and whey) can be collected and made into useful products. Additional drying of the retentate will produce whey powder. In the paper mill industry, the retentate (non-biodegradable organic material) is incinerated to recover energy and the permeate (purified water) is discharged into waterways. It is essential that the permeate water be pH-balanced and cooled before discharge, to avoid altering the pH of waterways and causing thermal pollution.
Cleaning
Cleaning of the membrane is done regularly to prevent the accumulation of foulants and reverse the degrading effects of fouling on permeability and selectivity.
Regular backwashing is often conducted every 10 min for some processes to remove cake layers formed on the membrane surface. By pressurising the permeate stream and forcing it back through the membrane, accumulated particles can be dislodged, improving the flux of the process. Backwashing is limited in its ability to remove more complex forms of fouling such as biofouling, scaling or adsorption to pore walls.
These types of foulants require chemical cleaning to be removed. The common types of chemicals used for cleaning are:
Acidic solutions for the control of inorganic scale deposits
Alkali solutions for removal of organic compounds
Biocides or disinfection such as chlorine or peroxide when bio-fouling is evident
When designing a cleaning protocol it is essential to consider:
Cleaning time – Adequate time must be allowed for chemicals to interact with foulants and permeate into the membrane pores. However, if the process is extended beyond its optimum duration it can lead to denaturation of the membrane and deposition of removed foulants. The complete cleaning cycle including rinses between stages may take as long as 2 hours to complete.
Aggressiveness of chemical treatment – With a high degree of fouling it may be necessary to employ aggressive cleaning solutions to remove fouling material. However, in some applications this may not be suitable if the membrane material is sensitive, leading to enhanced membrane ageing.
Disposal of cleaning effluent – The release of some chemicals into wastewater systems may be prohibited or regulated therefore this must be considered. For example, the use of phosphoric acid may result in high levels of phosphates entering water ways and must be monitored and controlled to prevent eutrophication.
Summary of common types of fouling and their respective chemical treatments
New developments
In order to increase the life-cycle of membrane filtration systems, energy efficient membranes are being developed in membrane bioreactor systems. Technology has been introduced which allows the power required to aerate the membrane for cleaning to be reduced whilst still maintaining a high flux level. Mechanical cleaning processes have also been adopted using granulates as an alternative to conventional forms of cleaning; this reduces energy consumption and also reduces the area required for filtration tanks.
Membrane properties have also been enhanced to reduce fouling tendencies by modifying surface properties. This can be noted in the biotechnology industry where membrane surfaces have been altered in order to reduce the amount of protein binding. Ultrafiltration modules have also been improved to allow for more membrane for a given area without increasing its risk of fouling by designing more efficient module internals.
Current pre-treatment for seawater desalination uses ultrafiltration modules that have been designed to withstand high temperatures and pressures while occupying a smaller footprint. Each module vessel is self-supporting and resistant to corrosion, and allows easy removal and replacement of the module without the cost of replacing the vessel itself.
| Physical sciences | Other separations | Chemistry |
368390 | https://en.wikipedia.org/wiki/Microfiltration | Microfiltration | Microfiltration is a type of physical filtration process where a contaminated fluid is passed through a special pore-sized membrane filter to separate microorganisms and suspended particles from process liquid. It is commonly used in conjunction with various other separation processes such as ultrafiltration and reverse osmosis to provide a product stream which is free of undesired contaminants.
General principles
Microfiltration usually serves as a pre-treatment for other separation processes such as ultrafiltration, and a post-treatment for granular media filtration. The typical particle size used for microfiltration ranges from about 0.1 to 10 μm. In terms of approximate molecular weight, these membranes can separate macromolecules generally less than 100,000 g/mol. The filters used in the microfiltration process are specially designed to prevent particles such as sediment, algae, protozoa or large bacteria from passing through. Smaller species such as water (H2O), monovalent ions such as sodium (Na+) or chloride (Cl−), dissolved or natural organic matter, and small colloids and viruses will still be able to pass through the filter.
The suspension is passed through at a relatively high velocity of around 1–3 m/s and at low to moderate pressures (around 100–400 kPa), parallel or tangential to the semi-permeable membrane in a sheet or tubular form. A pump is commonly fitted onto the processing equipment to allow the liquid to pass through the membrane filter. There are two pump configurations: pressure-driven or vacuum-driven. A differential or regular pressure gauge is commonly attached to measure the pressure drop between the outlet and inlet streams. See Figure 1 for a general setup.
The most abundant use of microfiltration membranes is in the water, beverage and bio-processing industries (see below). The exit process stream after treatment using a micro-filter has a recovery rate which generally ranges from about 90 to 98%.
Range of applications
Water treatment
Perhaps the most prominent use of microfiltration membranes pertains to the treatment of potable water supplies. The membranes are a key step in the primary disinfection of the uptake water stream. Such a stream might contain pathogens such as the protozoa Cryptosporidium and Giardia lamblia, which are responsible for numerous disease outbreaks. Both species show considerable resistance to traditional disinfectants such as chlorine. The use of MF membranes presents a physical means of separation (a barrier) as opposed to a chemical alternative. In that sense, both filtration and disinfection take place in a single step, negating the extra cost of chemical dosage and the corresponding equipment (needed for handling and storage).
Similarly, the MF membranes are used in secondary wastewater effluents to remove turbidity but also to provide treatment for disinfection. At this stage, coagulants (iron or aluminum) may potentially be added to precipitate species such as phosphorus and arsenic which would otherwise have been soluble.
Sterilization
Another crucial application of MF membranes lies in the cold sterilisation of beverages and pharmaceuticals. Historically, heat was used to sterilise beverages such as juice, wine and beer in particular; however, a perceptible loss of flavour was evident upon heating. Similarly, pharmaceuticals have been shown to lose their effectiveness upon heating. MF membranes are employed in these industries to remove bacteria and other undesired suspended matter from liquids, a procedure termed 'cold sterilisation', which avoids the use of heat.
Petroleum refining
Furthermore, microfiltration membranes are finding increasing use in areas such as petroleum refining, in which the removal of particulates from flue gases is of particular concern. The key challenges for this technology are the ability of the membrane modules to withstand high temperatures (i.e. maintain stability), and a design that provides very thin sheeting (thickness < 2000 angstroms) to facilitate an increased flux. In addition, the modules must have a low fouling profile and, most importantly, be available at low cost for the system to be financially viable.
Dairy processing
Aside from the above applications, MF membranes have found dynamic use in major areas within the dairy industry, particularly for milk and whey processing. The MF membranes aid in the removal of bacteria and the associated spores from milk, by rejecting the harmful species from passing through. This is also a precursor for pasteurisation, allowing for an extended shelf-life of the product. However, the most promising technique for MF membranes in this field pertains to the separation of casein from whey proteins (i.e. serum milk proteins). This results in two product streams both of which are highly relied on by consumers; a casein-rich concentrate stream used for cheese making, and a whey/serum protein stream which is further processed (using ultrafiltration) to make whey protein concentrate. The whey protein stream undergoes further filtration to remove fat in order to achieve higher protein content in the final WPC (Whey Protein Concentrate) and WPI (Whey Protein Isolate) powders.
Other applications
Other common applications utilising microfiltration as a major separation process include
Clarification and purification of cell broths where macromolecules are to be separated from other large molecules, proteins, or cell debris.
Other biochemical and bio-processing applications such as clarification of dextrose.
Production of Paints and Adhesives.
Characteristics of main process
Membrane filtration processes can be distinguished by three major characteristics: driving force, retentate stream and permeate streams. The microfiltration process is pressure driven with suspended particles and water as retentate and dissolved solutes plus water as permeate. The use of hydraulic pressure accelerates the separation process by increasing the flow rate (flux) of the liquid stream but does not affect the chemical composition of the species in the retentate and product streams.
A major characteristic that limits the performance of microfiltration, or any membrane technology, is a process known as fouling. Fouling describes the deposition and accumulation of feed components, such as suspended particles, impermeable dissolved solutes or even permeable solutes, on the membrane surface and/or within the pores of the membrane. Fouling of the membrane during filtration decreases the flux and thus the overall efficiency of the operation. It is indicated when the pressure drop increases beyond a certain point, and it occurs even when operating parameters are constant (pressure, flow rate, temperature and concentration). Fouling is mostly irreversible, although a portion of the fouling layer can be reversed by cleaning for short periods of time.
Membrane configurations
Microfiltration membranes can generally operate in one of two configurations.
Cross-flow filtration: the fluid is passed tangentially with respect to the membrane. Part of the feed stream passes through the membrane and is collected as treated permeate, while the remainder continues along the membrane surface and carries rejected material away. Cross-flow filtration is understood to be a unit operation rather than a process. Refer to Figure 2 for a general schematic of the process.
Dead-end filtration: all of the process fluid flows through the membrane and all particles larger than the pore size are stopped at its surface. All of the feed water is treated at once, subject to cake formation. This process is mostly used for batch or semi-continuous filtration of dilute solutions. Refer to Figure 3 for a general schematic of this process.
Process and equipment design
The major issues that influence the selection of the membrane include
Site-specific issues
Capacity and demand of the facility.
Percentage recovery and rejection.
Fluid characteristics (viscosity, turbidity, density)
Quality of the fluid to be treated
Pre-treatment processes
Membrane specific issues
Cost of material procurement and manufacture
Operating temperature
Trans-membrane pressure
Membrane flux
Handling fluid characteristics (viscosity, turbidity, density)
Monitoring and maintenance of the system
Cleaning and treatment
Disposal of process residuals
Process design variables
Operation and control of all processes in the system
Materials of construction
Equipment and instrumentation (controllers, sensors) and their cost.
Fundamental design heuristics
A few important design heuristics and their assessment are discussed below:
When treating raw contaminated fluids, hard sharp materials can wear and tear the porous cavities in the micro-filter, rendering it ineffective. Liquids must be subjected to pre-treatment before passage through the micro-filter. This may be achieved by a variation of macro separation processes such as screening, or granular media filtration.
When undertaking cleaning regimes the membrane must not dry out once it has been contacted by the process stream. Thorough water rinsing of the membrane modules, pipelines, pumps and other unit connections should be carried out until the end water appears clean.
Microfiltration modules are typically set to operate at pressures of 100 to 400 kPa. Such pressures allow removal of materials such as sand, silts and clays, and also bacteria and protozoa.
When the membrane modules are being used for the first time, i.e. during plant start-up, conditions need to be well devised. Generally a slow-start is required when the feed is introduced into the modules, since even slight perturbations above the critical flux will result in irreversible fouling.
Like any other membranes, microfiltration membranes are prone to fouling. (See Figure 4 below) It is therefore necessary that regular maintenance be carried out to prolong the life of the membrane module.
Routine 'backwashing' is used to achieve this. Depending on the specific application of the membrane, backwashing is carried out for short durations (typically 3 to 180 s) at moderately frequent intervals (5 min to several hours). Turbulent flow conditions, with Reynolds numbers greater than 2100 and ideally between 3000 and 5000, should be used. This should not, however, be confused with 'backflushing', a more rigorous and thorough cleaning technique commonly practised in cases of particulate and colloidal fouling.
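As a quick check of the turbulence guideline above, the sketch below computes a Reynolds number for flow in a circular channel; the fluid properties, velocity and channel diameter are assumed example values rather than figures from the article.

```python
# Illustrative sketch (assumed values): check whether a backwash flow is in the
# recommended turbulent range (Re roughly 3000-5000) for a tubular/fibre channel.
def reynolds_number(density_kg_m3, velocity_m_s, diameter_m, viscosity_pa_s):
    """Re = rho * v * d / mu for flow in a circular channel."""
    return density_kg_m3 * velocity_m_s * diameter_m / viscosity_pa_s

# Assumed example: water at ~20 degC flowing at 2 m/s through a 2 mm channel.
re = reynolds_number(998.0, 2.0, 2e-3, 1.0e-3)
print(re, 3000 <= re <= 5000)  # ~3992 -> within the suggested 3000-5000 window
```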
When major cleaning is needed to remove entrained particles, a CIP (Clean In Place) technique is used. Cleaning agents/detergents, such as sodium hypochlorite, citric acid, caustic soda or even special enzymes are typically used for this purpose. The concentration of these chemicals is dependent on the type of the membrane (its sensitivity to strong chemicals), but also the type of matter (e.g. scaling due to the presence of calcium ions) to be removed.
Another way to increase the lifespan of the membrane is to design two microfiltration membranes in series. The first filter would be used for pre-treatment of the liquid, with larger particles and deposits captured on the cartridge. The second filter would act as an extra "check" for particles able to pass through the first membrane, as well as providing screening for particles at the lower end of the size range.
Design economics
The cost to design and manufacture a membrane per unit of area is about 20% lower than in the early 1990s and, in a general sense, is constantly declining. Microfiltration systems are advantageous compared with conventional systems because they do not require expensive extraneous equipment such as flocculants, chemical dosing, flash mixers, and settling and filter basins. However, the cost of replacing capital equipment (membrane cartridge filters etc.) might still be relatively high, as the equipment may be manufactured specifically for the application. Using the design heuristics and general plant design principles (mentioned above), the membrane life-span can be increased to reduce these costs.
Some general tips for reducing operating costs through more intelligent process control systems and efficient plant design are listed below:
Running plants at reduced fluxes or pressures at low load periods (winter)
Taking plant systems off-line for short periods when the feed conditions are extreme.
A short shutdown period (approximately 1 hour) during the first flush of a river after rainfall (in water treatment applications) to reduce cleaning costs in the initial period.
The use of more cost-effective cleaning chemicals where suitable (e.g. sulphuric acid instead of citric or phosphoric acids).
The use of a flexible control design system. Operators are able to manipulate variables and setpoints to achieve maximum cost savings.
Table 1 (below) expresses an indicative guide of membrane filtration capital and operating costs per unit of flow.
Table 1 Approximate Costing of Membrane Filtration per unit of flow
Note:
Capital Costs are based on dollars per gallon of the treatment plant capacity
Design flow is measured in millions of gallons per day.
Membrane Costs only (No Pre-Treatment or Post-Treatment equipment considered in this table)
Operating and Annual costs, are based on dollars per thousand gallons treated.
All prices are in US dollars as of 2009 and are not adjusted for inflation.
Process equipment
Membrane materials
The materials which constitute the membranes used in microfiltration systems may be either organic or inorganic depending upon the contaminants that are desired to be removed, or the type of application.
Organic membranes are made using a diverse range of polymers including cellulose acetate (CA), polysulfone, polyvinylidene fluoride, polyethersulfone and polyamide. These are most commonly used due to their flexibility, and chemical properties.
Inorganic membranes are usually composed of sintered metal or porous alumina. They are able to be designed in various shapes, with a range of average pore sizes and permeability.
Membrane structures
General Membrane structures for microfiltration include
Screen filters (Particles and matter which are of the same size or larger than the screen openings are retained by the process and are collected on the surface of the screen)
Depth filters (Matter and particles are embedded within the constrictions within the filter media, the filter surface contains larger particles, smaller particles are captured in a narrower and deeper section of the filter media.)
Membrane modules
Plate and frame (flat sheet)
Membrane modules for dead-end flow microfiltration are mainly plate-and-frame configurations. They possess a flat, thin-film composite sheet in which the plate is asymmetric: a thin selective skin is supported on a thicker layer that has larger pores. These systems are compact and possess a sturdy design. Compared to cross-flow filtration, plate-and-frame configurations have a reduced capital expenditure; however, the operating costs will be higher. Plate-and-frame modules are most applicable to smaller and simpler-scale (laboratory) applications that filter dilute solutions.
Spiral-wound
This particular design is used for cross-flow filtration. The design involves a pleated membrane folded around a perforated permeate core, akin to a spiral, that is usually placed within a pressure vessel. This configuration is preferred when the solutions handled are heavily concentrated and in conditions of high temperature and extreme pH. It is generally used in larger-scale industrial applications of microfiltration.
Hollow fiber
This design involves bundling several hundred to several thousand hollow fibre membranes in a tubular filter housing. Feed water is delivered into the membrane module; it passes through the outside surface of the hollow fibres and the filtered water exits through the centre of the fibres. With flux rates in excess of 75 gallons per square foot per day, this design can be used for large-scale facilities.
Fundamental design equations
As separation is achieved by sieving, the principal mechanism of transfer for microfiltration through micro porous membranes is bulk flow.
Generally, due to the small diameter of the pores, the flow within the process is laminar (Reynolds number < 2100). The flow velocity of the fluid moving through the pores can thus be determined by the Hagen–Poiseuille equation, the simplest form of which assumes a parabolic velocity profile.
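A minimal sketch of the Hagen–Poiseuille estimate of laminar flow through a single cylindrical pore follows; the pressure drop, pore size, membrane thickness and viscosity are assumed example values.

```python
# Illustrative sketch (assumed values): mean velocity of laminar flow through a
# cylindrical membrane pore, using the Hagen-Poiseuille relation
#   v_mean = dP * d^2 / (32 * mu * L)
def mean_pore_velocity(delta_p_pa, pore_diameter_m, viscosity_pa_s, pore_length_m):
    return delta_p_pa * pore_diameter_m**2 / (32.0 * viscosity_pa_s * pore_length_m)

# Assumed example: 200 kPa across a 0.2 um pore in a 100 um thick membrane, water-like viscosity.
v = mean_pore_velocity(200e3, 0.2e-6, 1e-3, 100e-6)
print(v)  # ~2.5e-3 m/s; with d ~ 0.2 um this gives Re << 2100, i.e. laminar flow
```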
Transmembrane Pressure (TMP)
The transmembrane pressure (TMP) is defined as the mean of the applied pressure from the feed to the concentrate side of the membrane, minus the pressure of the permeate:
$\mathrm{TMP} = \frac{P_F + P_C}{2} - P_P,$
where
$P_F$ is the pressure on the feed side,
$P_C$ is the pressure of the concentrate,
$P_P$ is the pressure of the permeate.
This is applied mainly to dead-end filtration and is indicative of whether a system is fouled sufficiently to warrant replacement.
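A minimal sketch of the TMP definition above; the gauge pressures used are assumed example values.

```python
# Illustrative sketch (assumed values): transmembrane pressure from feed,
# concentrate and permeate gauge pressures, following the definition above.
def transmembrane_pressure(p_feed_kpa, p_concentrate_kpa, p_permeate_kpa):
    """TMP = (P_F + P_C)/2 - P_P."""
    return (p_feed_kpa + p_concentrate_kpa) / 2.0 - p_permeate_kpa

# Assumed example gauge readings in kPa.
print(transmembrane_pressure(250.0, 200.0, 20.0))  # 205.0 kPa
```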
Permeate Flux
The permeate flux in microfiltration is given by the following relation, based on Darcy's law:
$J = \frac{\Delta P}{\mu \left(R_m + R_c\right)},$
where
$R_m$ = permeate membrane flow resistance (m−1)
$R_c$ = permeate cake resistance (m−1)
μ = permeate viscosity (kg m−1 s−1)
∆P = pressure drop between the cake and the membrane
The cake resistance is given by:
$R_c = r \cdot \frac{V_s}{A_M},$
where
r = specific cake resistance (m−2)
Vs = volume of cake (m3)
AM = area of membrane (m2)
For micron-sized particles the specific cake resistance is roughly given by the Carman–Kozeny relation
$r \approx \frac{180\left(1 - \varepsilon\right)^2}{\varepsilon^3 d_s^2},$
where
ε = porosity of cake (unitless)
d_s = mean particle diameter (m)
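A minimal numerical sketch of the resistance-in-series flux relation above; the helper names and all input values are illustrative assumptions rather than figures from the article.

```python
# Illustrative sketch (assumed values): permeate flux from the resistance-in-series
# relation J = dP / (mu * (R_m + R_c)), with the cake resistance R_c = r * V_s / A_M.
def cake_resistance(r_specific_m2, cake_volume_m3, membrane_area_m2):
    return r_specific_m2 * cake_volume_m3 / membrane_area_m2   # units: m^-1

def permeate_flux(delta_p_pa, viscosity_pa_s, r_membrane_m1, r_cake_m1):
    return delta_p_pa / (viscosity_pa_s * (r_membrane_m1 + r_cake_m1))  # m^3 m^-2 s^-1

# Assumed example: 200 kPa, water-like viscosity, typical-order resistances.
r_c = cake_resistance(1e15, 1e-4, 1.0)      # 1e11 m^-1 of cake resistance
j = permeate_flux(200e3, 1e-3, 1e12, r_c)
print(r_c, j)  # flux falls as the cake builds up and R_c grows
```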
Rigorous design equations
To give a better indication regarding the exact determination of the extent of the cake formation, one-dimensional quantitative models have been formulated to determine factors such as
Complete Blocking (pores are sealed by particles with a radius larger than the initial radius of the pore)
Standard Blocking
Sublayer Formation
Cake Formation
See | Physical sciences | Other separations | Chemistry |
368684 | https://en.wikipedia.org/wiki/Moment%20%28mathematics%29 | Moment (mathematics) | In mathematics, the moments of a function are certain quantitative measures related to the shape of the function's graph. If the function represents mass density, then the zeroth moment is the total mass, the first moment (normalized by total mass) is the center of mass, and the second moment is the moment of inertia. If the function is a probability distribution, then the first moment is the expected value, the second central moment is the variance, the third standardized moment is the skewness, and the fourth standardized moment is the kurtosis.
For a distribution of mass or probability on a bounded interval, the collection of all the moments (of all orders, from 0 to ∞) uniquely determines the distribution (Hausdorff moment problem). The same is not true on unbounded intervals (Hamburger moment problem).
In the mid-nineteenth century, Pafnuty Chebyshev became the first person to think systematically in terms of the moments of random variables.
Significance of the moments
The n-th raw moment (i.e., moment about zero) of a random variable X with density function f(x) is defined by
$\mu'_n = \langle x^n \rangle = \int x^n f(x)\,\mathrm{d}x.$
The n-th moment of a real-valued continuous random variable with density function f(x) about a value c is the integral
$\mu_n = \int_{-\infty}^{\infty} (x - c)^n f(x)\,\mathrm{d}x.$
It is possible to define moments for random variables in a more general fashion than moments for real-valued functions — see moments in metric spaces. The moment of a function, without further explanation, usually refers to the above expression with c = 0.
For the second and higher moments, the central moment (moments about the mean, with c being the mean) are usually used rather than the moments about zero, because they provide clearer information about the distribution's shape.
Other moments may also be defined. For example, the n-th inverse moment about zero is $\operatorname{E}\left[X^{-n}\right]$ and the n-th logarithmic moment about zero is $\operatorname{E}\left[(\ln X)^n\right]$.
The n-th moment about zero of a probability density function f(x) is the expected value of $X^n$ and is called a raw moment or crude moment. The moments about its mean μ are called central moments; these describe the shape of the function, independently of translation.
If f is a probability density function, then the value of the integral above is called the n-th moment of the probability distribution. More generally, if F is a cumulative probability distribution function of any probability distribution, which may not have a density function, then the n-th moment of the probability distribution is given by the Riemann–Stieltjes integral
$\mu'_n = \operatorname{E}\left[X^n\right] = \int_{-\infty}^{\infty} x^n \,\mathrm{d}F(x),$
where X is a random variable that has this cumulative distribution F, and $\operatorname{E}$ is the expectation operator or mean.
When
$\operatorname{E}\left[\,|X^n|\,\right] = \int_{-\infty}^{\infty} |x^n| \,\mathrm{d}F(x) = \infty,$
the moment is said not to exist. If the n-th moment about any point exists, so does the (n − 1)-th moment (and thus, all lower-order moments) about every point.
The zeroth moment of any probability density function is 1, since the area under any probability density function must be equal to one.
Standardized moments
The normalised n-th central moment or standardised moment is the n-th central moment divided by $\sigma^n$; the normalised n-th central moment of the random variable X is
$\frac{\mu_n}{\sigma^n} = \frac{\operatorname{E}\left[(X - \mu)^n\right]}{\sigma^n}.$
These normalised central moments are dimensionless quantities, which represent the distribution independently of any linear change of scale.
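A minimal numerical sketch of raw, central and standardized sample moments, estimated from a simulated normal sample; the sample parameters are arbitrary and the values printed are sample-based estimates, not exact population quantities.

```python
# Illustrative sketch: raw, central and standardized sample moments of a dataset,
# following the definitions above (sample-based estimates, not exact population values).
import random
import statistics

def raw_moment(xs, n):
    return sum(x**n for x in xs) / len(xs)

def central_moment(xs, n):
    m = statistics.fmean(xs)
    return sum((x - m)**n for x in xs) / len(xs)

def standardized_moment(xs, n):
    sigma = central_moment(xs, 2) ** 0.5
    return central_moment(xs, n) / sigma**n

random.seed(0)
sample = [random.gauss(5.0, 2.0) for _ in range(100_000)]
print(raw_moment(sample, 1))          # ~5  (mean)
print(central_moment(sample, 2))      # ~4  (variance)
print(standardized_moment(sample, 3)) # ~0  (skewness of a normal distribution)
print(standardized_moment(sample, 4)) # ~3  (kurtosis of a normal distribution)
```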
Notable moments
Mean
The first raw moment is the mean, usually denoted $\mu \equiv \operatorname{E}[X]$.
Variance
The second central moment is the variance. The positive square root of the variance is the standard deviation $\sigma \equiv \left(\operatorname{E}\left[(X - \mu)^2\right]\right)^{1/2}$.
Skewness
The third central moment is the measure of the lopsidedness of the distribution; any symmetric distribution will have a third central moment, if defined, of zero. The normalised third central moment is called the skewness, often denoted γ. A distribution that is skewed to the left (the tail of the distribution is longer on the left) will have a negative skewness. A distribution that is skewed to the right (the tail of the distribution is longer on the right) will have a positive skewness.
For distributions that are not too different from the normal distribution, the median will be somewhere near μ − γσ/6; the mode about μ − γσ/2.
Kurtosis
The fourth central moment is a measure of the heaviness of the tail of the distribution. Since it is the expectation of a fourth power, the fourth central moment, where defined, is always nonnegative; and except for a point distribution, it is always strictly positive. The fourth central moment of a normal distribution is $3\sigma^4$.
The kurtosis is defined to be the standardized fourth central moment. (Equivalently, as in the next section, excess kurtosis is the fourth cumulant divided by the square of the second cumulant.) If a distribution has heavy tails, the kurtosis will be high (sometimes called leptokurtic); conversely, light-tailed distributions (for example, bounded distributions such as the uniform) have low kurtosis (sometimes called platykurtic).
The kurtosis can be positive without limit, but must be greater than or equal to $\gamma^2 + 1$, where γ is the skewness; equality only holds for binary distributions. For unbounded skew distributions not too far from normal, the kurtosis tends to be somewhere in the area of $\gamma^2$ and $2\gamma^2$.
The inequality can be proven by considering
$\operatorname{E}\left[(T^2 - aT - 1)^2\right],$
where $T = (X - \mu)/\sigma$. This is the expectation of a square, so it is non-negative for all a; however it is also a quadratic polynomial in a. Its discriminant must be non-positive, which gives the required relationship.
Higher moments
High-order moments are moments beyond 4th-order moments.
As with variance, skewness, and kurtosis, these are higher-order statistics, involving non-linear combinations of the data, and can be used for description or estimation of further shape parameters. The higher the moment, the harder it is to estimate, in the sense that larger samples are required in order to obtain estimates of similar quality. This is due to the excess degrees of freedom consumed by the higher orders. Further, they can be subtle to interpret, often being most easily understood in terms of lower order moments – compare the higher-order derivatives of jerk and jounce in physics. For example, just as the 4th-order moment (kurtosis) can be interpreted as "relative importance of tails as compared to shoulders in contribution to dispersion" (for a given amount of dispersion, higher kurtosis corresponds to thicker tails, while lower kurtosis corresponds to broader shoulders), the 5th-order moment can be interpreted as measuring "relative importance of tails as compared to center (mode and shoulders) in contribution to skewness" (for a given amount of skewness, higher 5th moment corresponds to higher skewness in the tail portions and little skewness of mode, while lower 5th moment corresponds to more skewness in shoulders).
Mixed moments
Mixed moments are moments involving multiple variables.
The value $\operatorname{E}\left[X^k\right]$ is called the moment of order k (moments are also defined for non-integral k). The moments of the joint distribution of random variables $X_1, \ldots, X_n$ are defined similarly. For any integers $k_i \ge 0$, the mathematical expectation $\operatorname{E}\left[X_1^{k_1} \cdots X_n^{k_n}\right]$ is called a mixed moment of order k (where $k = k_1 + \cdots + k_n$), and $\operatorname{E}\left[(X_1 - \operatorname{E}[X_1])^{k_1} \cdots (X_n - \operatorname{E}[X_n])^{k_n}\right]$ is called a central mixed moment of order k. The mixed moment $\operatorname{E}\left[(X_1 - \operatorname{E}[X_1])(X_2 - \operatorname{E}[X_2])\right]$ is called the covariance and is one of the basic characteristics of dependency between random variables.
Some examples are covariance, coskewness and cokurtosis. While there is a unique covariance, there are multiple co-skewnesses and co-kurtoses.
Properties of moments
Transformation of center
Since
$(x - b)^n = (x - a + a - b)^n = \sum_{i=0}^{n} \binom{n}{i} (x - a)^i (a - b)^{n - i},$
where $\binom{n}{i}$ is the binomial coefficient, it follows that the moments about b can be calculated from the moments about a by:
$\operatorname{E}\left[(x - b)^n\right] = \sum_{i=0}^{n} \binom{n}{i} \operatorname{E}\left[(x - a)^i\right] (a - b)^{n - i}.$
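A minimal numerical check of the change-of-centre identity above on a simulated sample; the sample and the choice of a, b and n are arbitrary.

```python
# Illustrative sketch: numerically check the change-of-center identity above,
# E[(X - b)^n] = sum_i C(n, i) * E[(X - a)^i] * (a - b)^(n - i), on a random sample.
import random
from math import comb

random.seed(1)
xs = [random.gauss(2.0, 1.5) for _ in range(50_000)]

def moment_about(xs, c, n):
    return sum((x - c)**n for x in xs) / len(xs)

a, b, n = 0.0, 3.0, 4
direct = moment_about(xs, b, n)
via_a = sum(comb(n, i) * moment_about(xs, a, i) * (a - b)**(n - i) for i in range(n + 1))
print(direct, via_a)  # the two values agree (up to floating-point rounding)
```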
The moment of a convolution of function
The raw moment of a convolution $h(t) = (f * g)(t) = \int_{-\infty}^{\infty} f(\tau)\, g(t - \tau)\,\mathrm{d}\tau$ reads
$\mu_n[h] = \sum_{i=0}^{n} \binom{n}{i} \mu_i[f]\, \mu_{n - i}[g],$
where $\mu_n[\,\cdot\,]$ denotes the n-th moment of the function given in the brackets. This identity follows from the convolution theorem for moment generating functions and applying the chain rule for differentiating a product.
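Because the convolution of two probability densities is the density of the sum of two independent random variables, the identity above can be checked numerically by sampling; the distributions used below are arbitrary examples.

```python
# Illustrative sketch: for probability densities, f*g is the density of X + Y with
# X, Y independent, so the convolution identity above can be checked by sampling.
import random
from math import comb

random.seed(2)
xs = [random.expovariate(1.0) for _ in range(200_000)]    # sample with moments of f
ys = [random.gauss(1.0, 0.5) for _ in range(200_000)]     # sample with moments of g

def raw_moment(zs, n):
    return sum(z**n for z in zs) / len(zs)

n = 3
lhs = raw_moment([x + y for x, y in zip(xs, ys)], n)       # moment of the convolution
rhs = sum(comb(n, i) * raw_moment(xs, i) * raw_moment(ys, n - i) for i in range(n + 1))
print(lhs, rhs)  # agree up to sampling error
```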
Cumulants
The first raw moment and the second and third unnormalized central moments are additive in the sense that if X and Y are independent random variables then
(These can also hold for variables that satisfy weaker conditions than independence. The first always holds; if the second holds, the variables are called uncorrelated).
In fact, these are the first three cumulants and all cumulants share this additivity property.
Sample moments
For all k, the k-th raw moment of a population can be estimated using the k-th raw sample moment
$\frac{1}{n} \sum_{i=1}^{n} X_i^k$
applied to a sample $X_1, X_2, \ldots, X_n$ drawn from the population.
It can be shown that the expected value of the raw sample moment is equal to the k-th raw moment of the population, if that moment exists, for any sample size n. It is thus an unbiased estimator. This contrasts with the situation for central moments, whose computation uses up a degree of freedom by using the sample mean. So for example an unbiased estimate of the population variance (the second central moment) is given by
$s^2 = \frac{1}{n - 1} \sum_{i=1}^{n} \left(X_i - \bar{X}\right)^2,$
in which the previous denominator n has been replaced by the degrees of freedom n − 1, and in which $\bar{X}$ refers to the sample mean. This estimate of the population moment is greater than the unadjusted observed sample moment by a factor of $\frac{n}{n-1}$, and it is referred to as the "adjusted sample variance" or sometimes simply the "sample variance".
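A minimal sketch contrasting the n-denominator and (n − 1)-denominator variance estimates described above, on a simulated sample; the sample parameters are arbitrary.

```python
# Illustrative sketch: raw sample moments and the n-1 (unbiased) variance estimate
# described above, compared with the naive n-denominator version.
import random
import statistics

random.seed(3)
sample = [random.gauss(10.0, 3.0) for _ in range(1_000)]

def raw_sample_moment(xs, k):
    return sum(x**k for x in xs) / len(xs)

mean = raw_sample_moment(sample, 1)
biased_var = sum((x - mean)**2 for x in sample) / len(sample)            # divides by n
unbiased_var = sum((x - mean)**2 for x in sample) / (len(sample) - 1)    # divides by n - 1
print(mean, biased_var, unbiased_var)
print(abs(unbiased_var - statistics.variance(sample)) < 1e-6)  # statistics.variance also uses n - 1
```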
Problem of moments
Problems of determining a probability distribution from its sequence of moments are called the problem of moments. Such problems were first discussed by P.L. Chebyshev (1874) in connection with research on limit theorems. In order that the probability distribution of a random variable X be uniquely defined by its moments $\alpha_k = \operatorname{E}\left[X^k\right]$ it is sufficient, for example, that Carleman's condition be satisfied:
$\sum_{k=1}^{\infty} \frac{1}{\alpha_{2k}^{1/(2k)}} = \infty.$
A similar result even holds for moments of random vectors. The problem of moments seeks characterizations of sequences that are sequences of moments of some function f, all moments of which are finite, and for each integer let
where is finite. Then there is a sequence that weakly converges to a distribution function having as its moments. If the moments determine uniquely, then the sequence weakly converges to .
Partial moments
Partial moments are sometimes referred to as "one-sided moments." The n-th order lower and upper partial moments with respect to a reference point r may be expressed as
$\mu_n^-(r) = \int_{-\infty}^{r} (r - x)^n f(x)\,\mathrm{d}x,$
$\mu_n^+(r) = \int_{r}^{\infty} (x - r)^n f(x)\,\mathrm{d}x.$
If the integral does not converge, the partial moment does not exist.
Partial moments are normalized by being raised to the power 1/n. The upside potential ratio may be expressed as a ratio of a first-order upper partial moment to a normalized second-order lower partial moment.
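A minimal sketch of sample-based lower and upper partial moments and the upside potential ratio described above; the simulated return series and the reference point are assumed examples.

```python
# Illustrative sketch: sample estimates of lower/upper partial moments about a
# reference point r, and the upside potential ratio mentioned above.
import random

random.seed(4)
returns = [random.gauss(0.01, 0.05) for _ in range(10_000)]   # assumed return series
r = 0.0                                                       # reference (target) return

def lower_partial_moment(xs, r, n):
    return sum(max(r - x, 0.0)**n for x in xs) / len(xs)

def upper_partial_moment(xs, r, n):
    return sum(max(x - r, 0.0)**n for x in xs) / len(xs)

# Upside potential ratio: first-order upper partial moment over the
# normalized (square-rooted) second-order lower partial moment.
upr = upper_partial_moment(returns, r, 1) / lower_partial_moment(returns, r, 2) ** 0.5
print(upr)
```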
Central moments in metric spaces
Let (M, d) be a metric space, and let B(M) be the Borel σ-algebra on M, the σ-algebra generated by the d-open subsets of M. (For technical reasons, it is also convenient to assume that M is a separable space with respect to the metric d.) Let $1 \le p \le \infty$.
The p-th central moment of a measure μ on the measurable space (M, B(M)) about a given point $x_0 \in M$ is defined to be
$\int_{M} d(x, x_0)^p \,\mathrm{d}\mu(x).$
μ is said to have finite p-th central moment if the p-th central moment of μ about x0 is finite for some $x_0 \in M$.
This terminology for measures carries over to random variables in the usual way: if $(\Omega, \Sigma, \mathbb{P})$ is a probability space and $X : \Omega \to M$ is a random variable, then the p-th central moment of X about $x_0 \in M$ is defined to be
$\int_{M} d(x, x_0)^p \,\mathrm{d}\left(X_{*}\mathbb{P}\right)(x) = \int_{\Omega} d\left(X(\omega), x_0\right)^p \,\mathrm{d}\mathbb{P}(\omega),$
and X has finite p-th central moment if the p-th central moment of X about x0 is finite for some $x_0 \in M$.
| Mathematics | Probability | null |
17684209 | https://en.wikipedia.org/wiki/Pleurosauridae | Pleurosauridae | Pleurosauridae is an extinct family of sphenodontian reptiles, known from the Jurassic of Europe. Members of the family had long-snake like bodies with reduced limbs that were adapted for aquatic life in marine environments. It contains two genera, Palaeopleurosaurus, which is known from the Early Jurassic (Toarcian) Posidonia Shale of Germany, as well as Pleurosaurus from the Late Jurassic of Germany and France. Paleopleurosaurus is more primitive than the later Pleurosaurus, with a skull similar to those of other sphenodontians, while that of Pleurosaurus is highly modified relative to other sphenodontians. They likely swam via anguilliform locomotion. Vadasaurus and Derasmosaurus from the Late Jurassic and Early Cretaceous of Europe have been placed as part of this family in some studies, but lack the body elongation that typifies the other two genera.
| Biology and health sciences | Rhynchocephalia | Animals |
184331 | https://en.wikipedia.org/wiki/Petalite | Petalite | Petalite, also known as castorite, is a lithium aluminum phyllosilicate mineral LiAlSi4O10, crystallizing in the monoclinic system. Petalite occurs as colorless, pink, grey, yellow, yellow grey, to white tabular crystals and columnar masses. It occurs in lithium-bearing pegmatites with spodumene, lepidolite, and tourmaline. Petalite is an important ore of lithium, and is converted to spodumene and quartz by heating to ~500 °C and under 3 kbar of pressure in the presence of a dense hydrous alkali borosilicate fluid with a minor carbonate component. Petalite (and secondary spodumene formed from it) is lower in iron than primary spodumene, making it a more useful source of lithium in, e.g., the production of glass. The colorless varieties are often used as gemstones.
Discovery and occurrence
Petalite was discovered in 1800, by Brazilian naturalist and statesman Jose Bonifacio de Andrada e Silva. Type locality: Utö Island, Haninge, Stockholm, Sweden. The name is derived from the Greek word petalon, which means leaf, alluding to its perfect cleavage.
Economic deposits of petalite are found near Kalgoorlie, Western Australia; Aracuai, Minas Gerais, Brazil; Karibib, Namibia; Manitoba, Canada; and Bikita, Zimbabwe.
The first important economic application for petalite was as a raw material for the glass-ceramic cooking ware CorningWare. It has been used as a raw material for ceramic glazes.
| Physical sciences | Silicate minerals | Earth science |
184342 | https://en.wikipedia.org/wiki/Naphtha | Naphtha | Naphtha (, recorded as less common or nonstandard in all dictionaries: ) is a flammable liquid hydrocarbon mixture. Generally, it is a fraction of crude oil, but it can also be produced from natural-gas condensates, petroleum distillates, and the fractional distillation of coal tar and peat. In some industries and regions, the name naphtha refers to crude oil or refined petroleum products such as kerosene or diesel fuel.
Naphtha is also known as Shellite in Australia.
Etymology
The word naphtha comes from Latin through Ancient Greek (), derived from Middle Persian naft ("wet", "naphtha"), the latter meaning of which was an assimilation from the Akkadian 𒉌𒆳𒊏 (see Semitic relatives such as Arabic ["petroleum"], Syriac naftā, and Hebrew , meaning petroleum).
Antiquity
The book of II Maccabees (2nd cent. BC) tells how a "thick water" was put on a sacrifice at the time of Nehemiah and when the sun shone it caught fire. It adds that "those around Nehemiah termed this 'Nephthar', which means Purification, but it is called Nephthaei by the many." This same substance is mentioned in the Mishnah as one of the generally permitted oils for lamps on Shabbat, although Rabbi Tarfon permits only olive oil (Mishnah Shabbat 2).
In Ancient Greek, it was used to refer to any sort of petroleum or pitch. The Greek word designates one of the materials used to stoke the fiery furnace in the Song of the Three Children (possibly 1st or 2nd cent. BC). The translation of Charles Brenton renders this as "rosin".
The naphtha of antiquity is explained to be a "highly flammable light fraction of petroleum, an extremely volatile, strong-smelling, gaseous liquid, common in oil deposits of the Near East"; it was a chief ingredient in incendiary devices described by Latin authors of the Roman period.
Modern period
Since the 19th century, solvent naphtha has denoted a product (xylene or trimethylbenzenes) derived by fractional distillation from petroleum; these mineral spirits, also known as "Stoddard Solvent", were originally the main active ingredient in Fels Naptha laundry soap. The naphtha in Fels Naptha was later removed as a cancer risk.
The usage of the term "naphtha" during this time typically implies petroleum naphtha, a colorless liquid with a similar odor to gasoline. However, "coal tar naphtha", a reddish brown liquid that is a mixture of hydrocarbons (toluene, xylene, and cumene, etc.), could also be intended in some contexts.
Petroleum
In older usage, "naphtha" simply meant crude oil, but this usage is now obsolete in English. There are a number of cognates to the word in different modern languages, typically signifying "petroleum" or "crude oil".
The Ukrainian and Belarusian word нафта (nafta), Lithuanian, Latvian and Estonian "nafta" and the Persian () mean "crude oil". The Russian word (neft') means "crude oil", but нафта (nafta) is a synonym of ligroin. Also, in Albania, Bosnia and Herzegovina, Bulgaria, Croatia, Finland, Italy, Serbia, Slovenia, Macedonia nafta (нафта in Cyrillic) is colloquially used to indicate diesel fuel and crude oil. In the Czech Republic and Slovakia, nafta was historically used for both diesel fuel and crude oil, but its use for crude oil is now obsolete and it generally indicates diesel fuel. In Bulgarian, nafta means diesel fuel, while neft, as well as petrol (петрол in Cyrillic), means crude oil. Nafta is also used in everyday parlance in Argentina, Paraguay and Uruguay to refer to gasoline/petrol. In Poland, the word means kerosene, and colloquially crude oil (technical name for crude oil is , also colloquially used for diesel fuel as ). In Flemish, the word naft(e) is used colloquially for gasoline.
Types
Various qualifiers have been added to the term "naphtha" by different sources in an effort to make it more specific:
One source distinguishes by boiling point:
Another source which differentiates light and heavy comments on the hydrocarbon structure, but offers a less precise dividing line:
Both of these are useful definitions, but they are incompatible with one another and the latter does not provide for mixes containing both six and seven carbon atoms per molecule. These terms are also sufficiently broad that they are not widely useful.
"Petroleum naphtha", which contains both heavy and light naphtha, typically constitutes 15-30% of crude oil by weight.
Uses
Heavy crude oil dilution
Naphtha is used to dilute heavy crude oil to reduce its viscosity and enable/facilitate transport; undiluted heavy crude cannot normally be transported by pipeline, and may also be difficult to pump onto oil tankers. Other common dilutants include natural-gas condensate and light crude. However, naphtha is a particularly efficient dilutant and can be recycled from diluted heavy crude after transport and processing. The importance of oil dilutants has increased as global production of lighter crude oils has fallen and shifted to exploitation of heavier reserves.
Fuel
Light naphtha is used as a fuel in some commercial applications. One notable example is wick-based cigarette lighters, such as the Zippo, which draw "lighter fluid"—naphtha—into a wick from a reservoir to be ignited using the flint and wheel.
It is also a fuel for camping stoves and oil lanterns, known as "white gas", where naphtha's low boiling point makes it easy to ignite. Naphtha is sometimes preferred over kerosene as it clogs fuel lines less. The outdoor equipment manufacturer MSR published a list of tradenames and translations to help outdoor enthusiasts obtain the correct products in various countries.
Naphtha was also historically used as a fuel in some small launch boats where steam technology was impractical; most were built to circumvent safety laws relating to traditional steam launches.
As an internal combustion engine fuel, petroleum naphtha has seen very little use and suffers from lower efficiency and low octane ratings, typically 40 to 70 RON. It can be used to run unmodified diesel engines, though it has a longer ignition delay than diesel. Naphtha tends to be noisy in combustion due to the high rate of pressure rise. There is a possibility of using naphtha as a low-octane base fuel in an octane-on-demand concept, with the engine drawing a high-octane mix only when needed. Naphtha benefits from lower emissions in refining: fuel energy losses from "well-to-tank" are 13%, lower than the 22% losses for petroleum.
Plastics
Naphtha is a crucial component in the production of plastics.
Health and safety considerations
The safety data sheets (SDSs) from various naphtha vendors indicate that, as a flammable mixture of hydrocarbons, naphtha presents various hazards: flammability, carcinogenicity, and skin and airway irritation, among others.
Humans can be exposed to naphtha in the workplace by inhalation, ingestion, dermal contact, and eye contact. The US Occupational Safety and Health Administration (OSHA) has set the permissible exposure limit for naphtha in the workplace at 100 ppm (400 mg/m3) over an 8-hour workday. The US National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 100 ppm (400 mg/m3) over an 8-hour workday. At levels of 1000 ppm, ten times the permissible exposure limit, naphtha is immediately dangerous to life and health.
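For context on the ppm and mg/m3 figures above, the sketch below applies the usual ppm-to-mg/m3 conversion at 25 °C; since naphtha is a mixture, the molecular weight used is an assumed representative value chosen to reproduce the quoted 100 ppm ≈ 400 mg/m3 equivalence.

```python
# Illustrative sketch (assumed molecular weight): convert a vapour concentration
# from ppm to mg/m^3 using mg/m^3 = ppm * MW / 24.45 (molar volume ~24.45 L/mol at 25 degC, 1 atm).
def ppm_to_mg_per_m3(ppm, molecular_weight_g_mol):
    return ppm * molecular_weight_g_mol / 24.45

# Naphtha is a mixture; ~98 g/mol is an assumed representative value that
# roughly reproduces the 100 ppm ~= 400 mg/m^3 figure quoted above.
print(ppm_to_mg_per_m3(100, 98.0))   # ~400 mg/m^3
print(ppm_to_mg_per_m3(1000, 98.0))  # ~4000 mg/m^3 at the IDLH level
```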
| Physical sciences | Hydrocarbons | Chemistry |
184393 | https://en.wikipedia.org/wiki/Thrush%20%28bird%29 | Thrush (bird) | The thrushes are a passerine bird family, Turdidae, with a worldwide distribution. The family was once much larger before biologists reclassified the former subfamily Saxicolinae, which includes the chats and European robins, as Old World flycatchers. Thrushes are small to medium-sized ground living birds that feed on insects, other invertebrates, and fruit. Some unrelated species around the world have been named after thrushes due to their similarity to birds in this family.
Characteristics
Thrushes are plump, soft-plumaged, small to medium-sized birds that inhabit wooded areas and often feed on the ground. The smallest thrush may be the shortwings, which have ambiguous alliances with both thrushes and Old World flycatchers. The lesser shortwing averages . The largest thrush is the great thrush at and ; the larger, commonly recognized blue whistling thrush is an Old World flycatcher. The Amami thrush might, however, grow larger than the great thrush. Most species are grey or brown in colour, often with speckled underparts.
They are insectivorous, but most species also eat worms, land snails, and fruit (usually berries). Many species are permanently resident in warm climates, while others migrate to higher latitudes during the summer, often over considerable distances.
Thrushes build cup-shaped nests, sometimes lining them with mud. They lay two to five speckled eggs, sometimes laying two or more clutches per year. Both parents help raise the young. In almost all cases, the nest is placed on a branch; the only exceptions are the three species of bluebird, which nest in holes.
Ecology
Turdidae species spread the seeds of plants, contributing to the dispersal of many species and the recovery of ecosystems.
Plants have limited seed dispersal mobility away from the parent plant and consequently rely upon a variety of dispersal vectors to transport their propagules, including both abiotic and biotic vectors. Seeds can be dispersed away from the parent plant individually or collectively, as well as dispersed in both space and time.
Many bats and birds rely heavily on fruits for their diet, including birds in the families Cotingidae, Columbidae, Trogonidae, Turdidae, and Ramphastidae. While eating fruit, these animals swallow seeds and then later regurgitate them or pass them in their faeces. Such ornithochory has been a major mechanism of seed dispersal across ocean barriers.
Other seeds may stick to the feet or feathers of birds and in this way may travel long distances. Seeds of grasses, spores of algae, and the eggs of molluscs and other invertebrates commonly establish in remote areas after long journeys of this sort. The Turdidae have a great ecological importance because some populations migrate long distances and disperse the seeds of endangered plant species at new sites, helping to eliminate inbreeding and increasing the genetic diversity of local flora.
Taxonomy
The family Turdidae was introduced (as Turdinia) by the French polymath Constantine Samuel Rafinesque in 1815. The taxonomic treatment of this large family has varied significantly in recent years. Traditionally, the Turdidae included the small Old World species, like the nightingale and European robin in the subfamily Saxicolinae, but most authorities now place this group in the Old World flycatcher family Muscicapidae. Molecular phylogenetic analysis has shown that the family Turdidae is a member of the superfamily Muscicapoidea and is sister to the family Muscicapidae. The two families diverged in the Miocene around 17 million years ago.
The family formerly included more species. At the time of the publication of the third edition of Howard and Moore Complete Checklist of the Birds of the World in 2003, the genera Myophonus, Alethe, Brachypteryx and Heinrichia were included in Turdidae. Subsequent molecular phylogenetic studies have shown that the species in these four genera are more closely related to species in the family Muscicapidae. As a consequence, these four genera are now placed in Muscicapidae. In contrast, the genus Cochoa which had previously been placed in Muscicapidae, was shown to belong in Turdidae.
Genera
The family contains 191 species, which are divided into 17 genera:
Grandala – grandala
Sialia – bluebirds (3 species)
Stizorhina – rufous thrushes (2 species)
Neocossyphus – ant thrushes (2 species)
Pinarornis – boulder chat
Myadestes – solitaires (12 species, including one recently extinct)
Chlamydochaera – fruithunter
Cochoa – cochoas (4 species)
Ixoreus – varied thrush
Ridgwayia – Aztec thrush
Cichlopsis – rufous-brown solitaire
Entomodestes – solitaires (2 species)
Hylocichla – wood thrush
Catharus – typical American thrushes and nightingale-thrushes (13 species)
Zoothera – Asian thrushes (21 species, including one recently extinct)
Geokichla – (21 species)
Turdus – true thrushes (104 species, including two recently extinct)
Cooking
The thrush is one of the many kinds of small bird that have in the past been trapped and eaten in much of Europe; the practice is now rare. Among traditional ways of cooking thrush were with polenta or grilled on a skewer, in Italy; with juniper berries in Belgium; and made into a pâté or terrine. The French cook and cookery writer Marie-Antoine Carême recommended cooking thrushes in crépinettes and serving with sauce Périgueux.
| Biology and health sciences | Passerida | null |
184414 | https://en.wikipedia.org/wiki/Spoonbill | Spoonbill | Spoonbills are a genus, Platalea, of large, long-legged wading birds. The spoonbills have a global distribution, being found on every continent except Antarctica. The genus name Platalea derives from Ancient Greek and means "broad", referring to the distinctive shape of the bill. Six species are recognised, which although usually placed in a single genus have sometimes been split into three genera.
All spoonbills have large, flat, spatulate bills and feed by wading through shallow water, sweeping the partly opened bill from side to side. The moment any small aquatic creature touches the inside of the bill—an insect, crustacean, or tiny fish—it is snapped shut. Spoonbills generally prefer fresh water to salt but are found in both environments. They need to feed many hours each day.
Taxonomy
The genus Platalea was introduced by the Swedish naturalist Carl Linnaeus in 1758 in the tenth edition of his Systema Naturae. The genus name is Latin for "spoonbill" and is derived from the Ancient Greek platea meaning "broad", referring to the distinctive shape of the bill. The type species was designated as the Eurasian spoonbill (Platalea leucorodia) by George Robert Gray in 1840.
They have traditionally been thought to form one of two subfamilies, Plataleinae, in the family Threskiornithidae, which also includes the ibises (Threskiornithinae). Molecular studies, including a 2013 study, have suggested instead that they form a clade within the family with several cosmopolitan ibis genera, separate from another clade of New World ibises.
A 2010 study of mitochondrial DNA of the spoonbills by Chesser and colleagues found that the roseate and yellow-billed spoonbills were each other's closest relative, and the two were descended from an early offshoot from the ancestors of the other four spoonbill species. They felt the genetic evidence meant it was equally valid to consider all six to be classified within the genus Platalea or alternatively for two of the species to be placed in monotypic genera named as Platibis and Ajaja. However, as the six species were so similar morphologically, keeping them within the one genus made more sense.
Description
Spoonbills are most easily distinguished from ibises in the shape of their bill, which is long and flat and wider at the end. The nostrils are located near the base of the bill so that the bird can breathe while the bill is submerged in water. The eyes are positioned to provide spoonbills with binocular vision, although, when foraging, tactile senses are important too. Like ibises, spoonbills have bare patches of skin around the bill and eyes.
Breeding
Spoonbills are monogamous, but, so far as is known, only for one season at a time. Most species nest in trees or reed beds, often with ibises or herons. The male gathers nesting material—mostly sticks and reeds, sometimes taken from an old nest—the female weaves it into a large, shallow bowl or platform which varies in its shape and structural integrity according to species.
The female lays a clutch of about three smooth, oval, white eggs and both parents incubate; chicks hatch one at a time rather than all together. The newly hatched young are blind and cannot care for themselves immediately; both parents feed them by partial regurgitation. Chicks' bills are short and straight, and only gain the characteristic spoonbill shape as they mature. Feeding continues for a few weeks after the family leaves the nest. The primary cause of brood failure appears not to be predation but starvation.
Species and distribution
The six species of spoonbill are distributed over much of the world.
| Biology and health sciences | Pelecanimorphae | Animals |
184527 | https://en.wikipedia.org/wiki/Uncrewed%20spacecraft | Uncrewed spacecraft | Uncrewed spacecraft or robotic spacecraft are spacecraft without people on board. Uncrewed spacecraft may have varying levels of autonomy from human input, such as remote control, or remote guidance. They may also be autonomous, in which case they have a pre-programmed list of operations that will be executed unless otherwise instructed. A robotic spacecraft for scientific measurements is often called a space probe or space observatory.
Many space missions are more suited to telerobotic rather than crewed operation, due to lower cost and risk factors. In addition, some planetary destinations such as Venus or the vicinity of Jupiter are too hostile for human survival, given current technology. Outer planets such as Saturn, Uranus, and Neptune are too distant to reach with current crewed spaceflight technology, so telerobotic probes are the only way to explore them. Telerobotics also allows exploration of regions that are vulnerable to contamination by Earth micro-organisms since spacecraft can be sterilized. Humans can not be sterilized in the same way as a spaceship, as they coexist with numerous micro-organisms, and these micro-organisms are also hard to contain within a spaceship or spacesuit.
The first uncrewed space mission was Sputnik, launched October 4, 1957 to orbit the Earth. Nearly all satellites, landers and rovers are robotic spacecraft. Not every uncrewed spacecraft is a robotic spacecraft; for example, a reflector ball is a non-robotic uncrewed spacecraft. Space missions where other animals but no humans are on-board are called uncrewed missions.
Many habitable spacecraft also have varying levels of robotic features. For example, the space stations Salyut 7 and Mir, and the International Space Station module Zarya, were capable of remote guided station-keeping and docking maneuvers with both resupply craft and new modules. Uncrewed resupply spacecraft are increasingly used for crewed space stations.
History
The first robotic spacecraft was launched by the Soviet Union (USSR) on 22 July 1951, a suborbital flight carrying two dogs Dezik and Tsygan. Four other such flights were made through the fall of 1951.
The first artificial satellite, Sputnik 1, was put into Earth orbit by the USSR on 4 October 1957. On 3 November 1957, the USSR orbited Sputnik 2. Weighing , Sputnik 2 carried the first animal into orbit, the dog Laika. Since the satellite was not designed to detach from its launch vehicle's upper stage, the total mass in orbit was .
In a close race with the Soviets, the United States launched its first artificial satellite, Explorer 1, into orbit on 31 January 1958. Explorer I was an long by diameter cylinder weighing , compared to Sputnik 1, a sphere which weighed . Explorer 1 carried sensors which confirmed the existence of the Van Allen belts, a major scientific discovery at the time, while Sputnik 1 carried no scientific sensors. On 17 March 1958, the US orbited its second satellite, Vanguard 1, which was about the size of a grapefruit, and which remains in orbit .
The first attempted lunar probe was the Luna E-1 No.1, launched on 23 September 1958. Attempts to reach the Moon repeatedly failed until 4 January 1959, when Luna 1 flew past the Moon and then entered orbit around the Sun.
The success of these early missions began a race between the US and the USSR to outdo each other with increasingly ambitious probes. Mariner 2 was the first probe to study another planet, revealing Venus' extremely hot temperature to scientists in 1962, while the Soviet Venera 4 was the first atmospheric probe to study Venus. Mariner 4's 1965 Mars flyby snapped the first images of its cratered surface, to which the Soviets responded a few months later with images from the Moon's surface taken by Luna 9. In 1967, America's Surveyor 3 gathered information about the Moon's surface that would prove crucial to the Apollo 11 mission that landed humans on the Moon two years later.
The first interstellar probe was Voyager 1, launched 5 September 1977. It entered interstellar space on 25 August 2012, followed by its twin Voyager 2 on 5 November 2018.
Nine other countries have successfully launched satellites using their own launch vehicles: France (1965), Japan and China (1970), the United Kingdom (1971), India (1980), Israel (1988), Iran (2009), North Korea (2012), and South Korea (2022).
Design
In spacecraft design, the United States Air Force considers a vehicle to consist of the mission payload and the bus (or platform). The bus provides physical structure, thermal control, electrical power, attitude control and telemetry, tracking and commanding.
JPL divides the "flight system" of a spacecraft into subsystems. These include:
Structure
The physical backbone structure, which
provides overall mechanical integrity of the spacecraft
ensures spacecraft components are supported and can withstand launch loads
Data handling
This is sometimes referred to as the command and data subsystem. It is often responsible for:
command sequence storage
maintaining the spacecraft clock
collecting and reporting spacecraft telemetry data (e.g. spacecraft health)
collecting and reporting mission data (e.g. photographic images)
Attitude determination and control
This system is mainly responsible for maintaining the spacecraft's correct orientation in space (attitude) despite external disturbances such as gravity-gradient effects, magnetic-field torques, solar radiation and aerodynamic drag; in addition, it may be required to reposition movable parts, such as antennas and solar arrays.
Entry, descent, and landing
Integrated sensing incorporates an image transformation algorithm to interpret the immediate imagery of the landing terrain, perform real-time detection and avoidance of terrain hazards that may impede safe landing, and increase the accuracy of landing at a desired site of interest using landmark localization techniques. Integrated sensing completes these tasks by relying on pre-recorded information and cameras to determine the spacecraft's position and whether any corrections are needed (localization). The cameras are also used to detect possible hazards, whether increased fuel consumption or a physical hazard such as a poor landing spot in a crater or on a cliff side that would make landing far from ideal (hazard assessment).
Landing on hazardous terrain
In planetary exploration missions involving robotic spacecraft, there are three key parts in the processes of landing on the surface of the planet to ensure a safe and successful landing. This process includes an entry into the planetary gravity field and atmosphere, a descent through that atmosphere towards an intended/targeted region of scientific value, and a safe landing that guarantees the integrity of the instrumentation on the craft is preserved. While the robotic spacecraft is going through those parts, it must also be capable of estimating its position compared to the surface in order to ensure reliable control of itself and its ability to maneuver well. The robotic spacecraft must also efficiently perform hazard assessment and trajectory adjustments in real time to avoid hazards. To achieve this, the robotic spacecraft requires accurate knowledge of where the spacecraft is located relative to the surface (localization), what may pose as hazards from the terrain (hazard assessment), and where the spacecraft should presently be headed (hazard avoidance). Without the capability for operations for localization, hazard assessment, and avoidance, the robotic spacecraft becomes unsafe and can easily enter dangerous situations such as surface collisions, undesirable fuel consumption levels, and/or unsafe maneuvers.
Telecommunications
Components in the telecommunications subsystem include radio antennas, transmitters and receivers. These may be used to communicate with ground stations on Earth, or with other spacecraft.
Electrical power
The supply of electric power on spacecraft generally come from photovoltaic (solar) cells or from a radioisotope thermoelectric generator. Other components of the subsystem include batteries for storing power and distribution circuitry that connects components to the power sources.
Temperature control and protection from the environment
Spacecraft are often protected from temperature fluctuations with insulation. Some spacecraft use mirrors and sunshades for additional protection from solar heating. They also often need shielding from micrometeoroids and orbital debris.
Propulsion
Spacecraft propulsion is a method that allows a spacecraft to travel through space by generating thrust to push it forward. There is no single universally used propulsion system: monopropellant, bipropellant, ion propulsion and other systems are all in use, each generating thrust in a slightly different way and each with its own advantages and disadvantages. Most spacecraft propulsion today, however, is based on rocket engines. The general idea behind rocket engines is that when an oxidizer meets the fuel source, there is an explosive release of energy and heat at high speed, which propels the spacecraft forward. This follows from Newton's third law: "to every action there is an equal and opposite reaction." As energy and heat are released from the back of the spacecraft, gas particles are pushed out, allowing the spacecraft to move forward. The main reason rocket engines are used today is that rockets are the most powerful form of propulsion available.
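The relationship between expelled propellant and the velocity change a spacecraft gains can be illustrated with the standard Tsiolkovsky rocket equation, which is not given in this article but follows from the same action–reaction principle. This is a minimal sketch; the exhaust velocity and mass ratio below are illustrative assumptions, not figures for any particular spacecraft.

```python
import math

def delta_v(exhaust_velocity_m_s: float, wet_mass_kg: float, dry_mass_kg: float) -> float:
    """Tsiolkovsky rocket equation: dv = v_e * ln(m_wet / m_dry)."""
    return exhaust_velocity_m_s * math.log(wet_mass_kg / dry_mass_kg)

# Illustrative, assumed values: an engine with ~3,100 m/s effective exhaust
# velocity and a spacecraft that is 75% propellant by mass.
print(f"delta-v ≈ {delta_v(3100.0, 1000.0, 250.0):.0f} m/s")  # ~4,300 m/s
```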
Monopropellant
For a propulsion system to work, there are usually an oxidizer line and a fuel line, which is how the spacecraft's propulsion is controlled. In a monopropellant system, however, there is no need for an oxidizer line; only the fuel line is required, because the oxidizer is chemically bonded into the fuel molecule itself. For the propulsion to remain controllable, combustion of the fuel occurs only in the presence of a catalyst. This is quite advantageous, making the rocket engine lighter and cheaper, easy to control, and more reliable. The drawback is that the chemical is very dangerous to manufacture, store, and transport.
Bipropellant
A bipropellant propulsion system is a rocket engine that uses liquid propellants: both the oxidizer and the fuel are in the liquid state. This system can be designed so that it requires no ignition system; the two liquids spontaneously combust as soon as they come into contact, producing the thrust that pushes the spacecraft forward. The main benefit of this technology is that the liquids have relatively high density, which allows the propellant tank volume to be small, increasing space efficiency. The downside is the same as for the monopropellant system: the chemicals are very dangerous to manufacture, store, and transport.
Ion
An ion propulsion system is a type of engine that generates thrust by electron bombardment or the acceleration of ions. By firing high-energy electrons at a (neutrally charged) propellant atom, it strips electrons from the atom, leaving it positively charged. The positively charged ions are guided through a positively charged grid containing thousands of precisely aligned holes held at high voltage, and are then accelerated through a negatively charged accelerator grid that further increases their speed, up to . The momentum of these positively charged ions provides the thrust to propel the spacecraft forward. The advantage of this kind of propulsion is that it is extremely efficient at maintaining constant velocity, which is needed for deep-space travel; however, the amount of thrust produced is extremely low, and it requires a great deal of electrical power to operate.
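As a rough illustration of why ion thrust is low despite very high exhaust speeds, thrust can be estimated as mass flow rate times exhaust velocity. The mass flow and exhaust speed below are assumed order-of-magnitude values, not specifications of any actual thruster.

```python
def ion_thrust_newtons(mass_flow_kg_s: float, exhaust_velocity_m_s: float) -> float:
    """Thrust from the momentum flux of the expelled ions: F = mdot * v_e."""
    return mass_flow_kg_s * exhaust_velocity_m_s

# Assumed values: a few milligrams of propellant per second leaving at ~30 km/s.
mdot = 3.0e-6        # kg/s
v_e = 30_000.0       # m/s
print(f"thrust ≈ {ion_thrust_newtons(mdot, v_e) * 1000:.0f} mN")  # ~90 mN
```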
Mechanical devices
Mechanical components often need to be moved for deployment after launch or prior to landing. In addition to the use of motors, many one-time movements are controlled by pyrotechnic devices.
Robotic vs. uncrewed spacecraft
A robotic spacecraft is a system specifically designed for a particular hostile environment, and because of this specialization it can vary greatly in complexity and capability. An uncrewed spacecraft, by contrast, is simply a spacecraft without personnel or crew, operated automatically (proceeding with an action without human intervention) or by remote control (with human intervention). The term 'uncrewed spacecraft' does not imply that the spacecraft is robotic.
Control
Robotic spacecraft use telemetry to radio back to Earth acquired data and vehicle status information. Although generally referred to as "remotely controlled" or "telerobotic", the earliest orbital spacecraft – such as Sputnik 1 and Explorer 1 – did not receive control signals from Earth. Soon after these first spacecraft, command systems were developed to allow remote control from the ground. Increased autonomy is important for distant probes where the light travel time prevents rapid decision and control from Earth. Newer probes such as Cassini–Huygens and the Mars Exploration Rovers are highly autonomous and use on-board computers to operate independently for extended periods of time.
Space probes and observatories
A space probe is a robotic spacecraft that does not orbit Earth, but instead explores further into outer space. Space probes have different sets of scientific instruments onboard. A space probe may approach the Moon; travel through interplanetary space; flyby, orbit, or land on other planetary bodies; or enter interstellar space. Space probes send collected data to Earth. Space probes can be orbiters, landers, and rovers. Space probes can also gather materials from their targets and return them to Earth.
Once a probe has left the vicinity of Earth, its trajectory will likely take it along an orbit around the Sun similar to the Earth's orbit. To reach another planet, the simplest practical method is a Hohmann transfer orbit. More complex techniques, such as gravitational slingshots, can be more fuel-efficient, though they may require the probe to spend more time in transit. Some high Delta-V missions (such as those with high inclination changes) can only be performed, within the limits of modern propulsion, using gravitational slingshots. A technique using very little propulsion, but requiring a considerable amount of time, is to follow a trajectory on the Interplanetary Transport Network.
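A minimal sketch of the Hohmann transfer mentioned above, computing the two burns needed to move between circular, coplanar orbits around the Sun. The gravitational parameter and the orbital radii are rounded, assumed values for an Earth-to-Mars-like transfer, used only for illustration.

```python
import math

MU_SUN = 1.327e20          # Sun's gravitational parameter, m^3/s^2 (rounded)
R_EARTH = 1.496e11         # ~1 AU, m (assumed circular orbit)
R_MARS = 2.279e11          # ~1.52 AU, m (assumed circular orbit)

def hohmann_delta_vs(mu: float, r1: float, r2: float) -> tuple[float, float]:
    """Delta-v for the departure and arrival burns of a Hohmann transfer."""
    v1 = math.sqrt(mu / r1)                       # circular speed at r1
    v2 = math.sqrt(mu / r2)                       # circular speed at r2
    a = (r1 + r2) / 2                             # semi-major axis of the transfer ellipse
    v_peri = math.sqrt(mu * (2 / r1 - 1 / a))     # transfer-orbit speed at r1
    v_apo = math.sqrt(mu * (2 / r2 - 1 / a))      # transfer-orbit speed at r2
    return v_peri - v1, v2 - v_apo

dv1, dv2 = hohmann_delta_vs(MU_SUN, R_EARTH, R_MARS)
print(f"burn 1 ≈ {dv1:.0f} m/s, burn 2 ≈ {dv2:.0f} m/s")  # roughly 2,900 and 2,600 m/s
```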
A space telescope or space observatory is a telescope in outer space used to observe astronomical objects. Space telescopes avoid the filtering and distortion of electromagnetic radiation which they observe, and avoid light pollution which ground-based observatories encounter. They are divided into two types: satellites which map the entire sky (astronomical survey), and satellites which focus on selected astronomical objects or parts of the sky and beyond. Space telescopes are distinct from Earth imaging satellites, which point toward Earth for satellite imaging, applied for weather analysis, espionage, and other types of information gathering.
Cargo spacecraft
Cargo or resupply spacecraft are robotic vehicles designed to transport supplies, such as food, propellant, and equipment, to space stations. This distinguishes them from space probes, which are primarily focused on scientific exploration.
Automated cargo spacecraft have been servicing space stations since 1978, supporting missions like Salyut 6, Salyut 7, Mir, the International Space Station (ISS), and the Tiangong space station.
Currently, the ISS relies on three types of cargo spacecraft: the Russian Progress, along with the American Cargo Dragon 2, and Cygnus. China's Tiangong space station is solely supplied by the Tianzhou.
The American Dream Chaser and Japanese HTV-X are under development for future use with the ISS. The European Automated Transfer Vehicle was previously used between 2008 and 2015.
| Technology | Basics_6 | null |
184540 | https://en.wikipedia.org/wiki/GABA | GABA | GABA (gamma-aminobutyric acid, γ-aminobutyric acid) is the chief inhibitory neurotransmitter in the developmentally mature mammalian central nervous system. Its principal role is reducing neuronal excitability throughout the nervous system.
GABA is sold as a dietary supplement in many countries. It has traditionally been thought that exogenous GABA (i.e., taken as a supplement) does not cross the blood–brain barrier, but more recent research (2010s) in rats suggests that the question remains unclear.
The carboxylate form of GABA is γ-aminobutyrate.
Function
Neurotransmitter
Two general classes of GABA receptor are known:
GABAA in which the receptor is part of a ligand-gated ion channel complex
GABAB metabotropic receptors, which are G protein-coupled receptors that open or close ion channels via intermediaries (G proteins)
Neurons that produce GABA as their output are called GABAergic neurons, and have chiefly inhibitory action at receptors in the adult vertebrate. Medium spiny cells are a typical example of inhibitory central nervous system GABAergic cells. In contrast, GABA exhibits both excitatory and inhibitory actions in insects, mediating muscle activation at synapses between nerves and muscle cells, and also the stimulation of certain glands. In mammals, some GABAergic neurons, such as chandelier cells, are also able to excite their glutamatergic counterparts. In addition to fast-acting phasic inhibition, small amounts of extracellular GABA can induce slow timescale tonic inhibition on neurons.
GABAA receptors are ligand-activated chloride channels: when activated by GABA, they allow the flow of chloride ions across the membrane of the cell. Whether this chloride flow is depolarizing (makes the voltage across the cell's membrane less negative), shunting (has no effect on the cell's membrane potential), or inhibitory/hyperpolarizing (makes the cell's membrane more negative) depends on the direction of the flow of chloride. When net chloride flows out of the cell, GABA is depolarising; when chloride flows into the cell, GABA is inhibitory or hyperpolarizing. When the net flow of chloride is close to zero, the action of GABA is shunting. Shunting inhibition has no direct effect on the membrane potential of the cell; however, it reduces the effect of any coincident synaptic input by reducing the electrical resistance of the cell's membrane. Shunting inhibition can "override" the excitatory effect of depolarising GABA, resulting in overall inhibition even if the membrane potential becomes less negative. It was thought that a developmental switch in the molecular machinery controlling the concentration of chloride inside the cell changes the functional role of GABA between neonatal and adult stages. As the brain develops into adulthood, GABA's role changes from excitatory to inhibitory.
Brain development
GABA is an inhibitory transmitter in the mature brain; its actions were thought to be primarily excitatory in the developing brain. The gradient of chloride was reported to be reversed in immature neurons, with its reversal potential higher than the resting membrane potential of the cell; activation of a GABA-A receptor thus leads to efflux of Cl− ions from the cell (that is, a depolarizing current). The differential gradient of chloride in immature neurons was shown to be primarily due to the higher concentration of NKCC1 co-transporters relative to KCC2 co-transporters in immature cells. GABAergic interneurons mature faster in the hippocampus and the GABA machinery appears earlier than glutamatergic transmission. Thus, GABA is considered the major excitatory neurotransmitter in many regions of the brain before the maturation of glutamatergic synapses.
In the developmental stages preceding the formation of synaptic contacts, GABA is synthesized by neurons and acts both as an autocrine (acting on the same cell) and paracrine (acting on nearby cells) signalling mediator. The ganglionic eminences also contribute greatly to building up the GABAergic cortical cell population.
GABA regulates the proliferation of neural progenitor cells, their migration and differentiation, the elongation of neurites, and the formation of synapses.
GABA also regulates the growth of embryonic and neural stem cells. GABA can influence the development of neural progenitor cells via brain-derived neurotrophic factor (BDNF) expression. GABA activates the GABAA receptor, causing cell cycle arrest in the S-phase, limiting growth.
Beyond the nervous system
Besides the nervous system, GABA is also produced at relatively high levels in the insulin-producing beta cells (β-cells) of the pancreas. The β-cells secrete GABA along with insulin and the GABA binds to GABA receptors on the neighboring islet alpha cells (α-cells) and inhibits them from secreting glucagon (which would counteract insulin's effects).
GABA can promote the replication and survival of β-cells and also promote the conversion of α-cells to β-cells, which may lead to new treatments for diabetes.
Alongside GABAergic mechanisms, GABA has also been detected in other peripheral tissues including intestines, stomach, fallopian tubes, uterus, ovaries, testicles, kidneys, urinary bladder, the lungs and liver, albeit at much lower levels than in neurons or β-cells.
Experiments on mice have shown that hypothyroidism induced by fluoride poisoning can be halted by administering GABA. The test also found that the thyroid recovered naturally without further assistance after the fluoride had been expelled by the GABA.
Immune cells express receptors for GABA and administration of GABA can suppress inflammatory immune responses and promote "regulatory" immune responses, such that GABA administration has been shown to inhibit autoimmune diseases in several animal models.
In 2018, GABA was shown to regulate the secretion of a greater number of cytokines. In the plasma of T1D patients, levels of 26 cytokines are increased, and of those, 16 are inhibited by GABA in cell assays.
In 2007, an excitatory GABAergic system was described in the airway epithelium. The system is activated by exposure to allergens and may participate in the mechanisms of asthma. GABAergic systems have also been found in the testis and in the eye lens.
Structure and conformation
GABA is found mostly as a zwitterion (i.e., with the carboxyl group deprotonated and the amino group protonated). Its conformation depends on its environment. In the gas phase, a highly folded conformation is strongly favored due to the electrostatic attraction between the two functional groups. The stabilization is about 50 kcal/mol, according to quantum chemistry calculations. In the solid state, an extended conformation is found, with a trans conformation at the amino end and a gauche conformation at the carboxyl end. This is due to the packing interactions with the neighboring molecules. In solution, five different conformations, some folded and some extended, are found as a result of solvation effects. The conformational flexibility of GABA is important for its biological function, as it has been found to bind to different receptors with different conformations. Many GABA analogues with pharmaceutical applications have more rigid structures in order to control the binding better.
History
GABA was first synthesized in 1883, and was initially known only as a plant and microbe metabolic product.
In 1950, Washington University School of Medicine researchers Eugene Roberts and Sam Frankel used newly-developed techniques of chromatography to analyze protein-free extracts of mammalian brain and discovered that GABA is produced from the glutamic acid and accumulates in the mammalian central nervous system.
There was not much further research into the substance until seven years later, when Canadian researchers identified GABA as the mysterious component (termed Factor I by its discoverers in 1954) of brain and spinal cord extracts that inhibited crayfish neurons.
By 1959, it was shown that at an inhibitory synapse on crayfish muscle fibers GABA acts like stimulation of the inhibitory nerve. Both inhibition by nerve stimulation and by applied GABA are blocked by picrotoxin.
Biosynthesis
GABA is primarily synthesized from glutamate via the enzyme glutamate decarboxylase (GAD) with pyridoxal phosphate (the active form of vitamin B6) as a cofactor. This process converts glutamate (the principal excitatory neurotransmitter) into GABA (the principal inhibitory neurotransmitter).
GABA can also be synthesized from putrescine by diamine oxidase and aldehyde dehydrogenase.
Historically it was thought that exogenous GABA did not penetrate the blood–brain barrier, but more current research describes the notion as being unclear pending further research.
Metabolism
GABA transaminase enzymes catalyze the conversion of 4-aminobutanoic acid (GABA) and 2-oxoglutarate (α-ketoglutarate) into succinic semialdehyde and glutamate. Succinic semialdehyde is then oxidized into succinic acid by succinic semialdehyde dehydrogenase and as such enters the citric acid cycle as a usable source of energy.
Pharmacology
Drugs that act as allosteric modulators of GABA receptors (known as GABA analogues or GABAergic drugs), or increase the available amount of GABA, typically have relaxing, anti-anxiety, and anti-convulsive effects (with equivalent efficacy to lamotrigine based on studies of mice). Many of the substances below are known to cause anterograde amnesia and retrograde amnesia.
In general, GABA does not cross the blood–brain barrier, although certain areas of the brain that have no effective blood–brain barrier, such as the periventricular nucleus, can be reached by drugs such as systemically injected GABA. At least one study suggests that orally administered GABA increases the amount of human growth hormone (HGH). GABA directly injected into the brain has been reported to have both stimulatory and inhibitory effects on the production of growth hormone, depending on the physiology of the individual. Consequently, considering the potential biphasic effects of GABA on growth hormone production, as well as other safety concerns, its usage is not recommended during pregnancy and lactation.
GABA enhances the catabolism of serotonin into N-acetylserotonin (the precursor of melatonin) in rats. It is thus suspected that GABA is involved in the synthesis of melatonin and thus might exert regulatory effects on sleep and reproductive functions.
Chemistry
Although in chemical terms, GABA is an amino acid (as it has both a primary amine and a carboxylic acid functional group), it is rarely referred to as such in the professional, scientific, or medical community. By convention the term "amino acid", when used without a qualifier, refers specifically to an alpha amino acid. GABA is not an alpha amino acid, meaning the amino group is not attached to the alpha carbon. Nor is it incorporated into proteins as are many alpha-amino acids.
GABAergic drugs
GABAA receptor ligands are shown in the following table.
GABAergic pro-drugs include chloral hydrate, which is metabolised to trichloroethanol, which then acts via the GABAA receptor.
The plant kava contains GABAergic compounds, including kavain, dihydrokavain, methysticin, dihydromethysticin and yangonin.
Other GABAergic modulators include:
GABAB receptor ligands.
Agonists: baclofen, propofol, GHB, phenibut.
Antagonists: phaclofen, saclofen.
GABA reuptake inhibitors: deramciclane, hyperforin, tiagabine.
GABA transaminase inhibitors: gabaculine, phenelzine, valproate, vigabatrin, lemon balm (Melissa officinalis).
GABA analogues: pregabalin, gabapentin, picamilon, progabide
4-Amino-1-butanol is a biochemical precursor of GABA and can be converted into GABA by the actions of aldehyde reductase (ALR) and aldehyde dehydrogenase (ALDH) with γ-aminobutyraldehyde (GABAL) as a metabolic intermediate.
In plants
GABA is also found in plants. It is the most abundant amino acid in the apoplast of tomatoes. Evidence also suggests a role in cell signalling in plants.
| Biology and health sciences | Neurotransmitters | Biology |
184726 | https://en.wikipedia.org/wiki/Heat%20transfer | Heat transfer | Heat transfer is a discipline of thermal engineering that concerns the generation, use, conversion, and exchange of thermal energy (heat) between physical systems. Heat transfer is classified into various mechanisms, such as thermal conduction, thermal convection, thermal radiation, and transfer of energy by phase changes. Engineers also consider the transfer of mass of differing chemical species (mass transfer in the form of advection), either cold or hot, to achieve heat transfer. While these mechanisms have distinct characteristics, they often occur simultaneously in the same system.
Heat conduction, also called diffusion, is the direct microscopic exchanges of kinetic energy of particles (such as molecules) or quasiparticles (such as lattice waves) through the boundary between two systems. When an object is at a different temperature from another body or its surroundings, heat flows so that the body and the surroundings reach the same temperature, at which point they are in thermal equilibrium. Such spontaneous heat transfer always occurs from a region of high temperature to another region of lower temperature, as described in the second law of thermodynamics.
Heat convection occurs when the bulk flow of a fluid (gas or liquid) carries its heat through the fluid. All convective processes also move heat partly by diffusion, as well. The flow of fluid may be forced by external processes, or sometimes (in gravitational fields) by buoyancy forces caused when thermal energy expands the fluid (for example in a fire plume), thus influencing its own transfer. The latter process is often called "natural convection". The former process is often called "forced convection." In this case, the fluid is forced to flow by use of a pump, fan, or other mechanical means.
Thermal radiation occurs through a vacuum or any transparent medium (solid or fluid or gas). It is the transfer of energy by means of photons or electromagnetic waves governed by the same laws.
Overview
Heat transfer is the energy exchanged between materials (solid/liquid/gas) as a result of a temperature difference. The thermodynamic free energy is the amount of work that a thermodynamic system can perform. Enthalpy is a thermodynamic potential, designated by the letter "H", that is the sum of the internal energy of the system (U) plus the product of pressure (P) and volume (V). Joule is a unit to quantify energy, work, or the amount of heat.
Heat transfer is a process function (or path function), as opposed to functions of state; therefore, the amount of heat transferred in a thermodynamic process that changes the state of a system depends on how that process occurs, not only the net difference between the initial and final states of the process.
Thermodynamic and mechanical heat transfer is calculated with the heat transfer coefficient, the proportionality between the heat flux and the thermodynamic driving force for the flow of heat. Heat flux is a quantitative, vectorial representation of heat flow through a surface.
In engineering contexts, the term heat is taken as synonymous with thermal energy. This usage has its origin in the historical interpretation of heat as a fluid (caloric) that can be transferred by various causes, and that is also common in the language of laymen and everyday life.
The transport equations for thermal energy (Fourier's law), mechanical momentum (Newton's law for fluids), and mass transfer (Fick's laws of diffusion) are similar, and analogies among these three transport processes have been developed to facilitate the prediction of conversion from any one to the others.
Thermal engineering concerns the generation, use, conversion, storage, and exchange of heat transfer. As such, heat transfer is involved in almost every sector of the economy. Heat transfer is classified into various mechanisms, such as thermal conduction, thermal convection, thermal radiation, and transfer of energy by phase changes.
Mechanisms
The fundamental modes of heat transfer are:
Advection
Advection is the transport mechanism of a fluid from one location to another, and is dependent on motion and momentum of that fluid.
Conduction or diffusion
The transfer of energy between objects that are in physical contact. Thermal conductivity is the property of a material to conduct heat and is evaluated primarily in terms of Fourier's law for heat conduction.
Convection
The transfer of energy between an object and its environment, due to fluid motion. The average temperature is a reference for evaluating properties related to convective heat transfer.
Radiation
The transfer of energy by the emission of electromagnetic radiation.
Advection
By transferring matter, energy—including thermal energy—is moved by the physical transfer of a hot or cold object from one place to another. This can be as simple as placing hot water in a bottle and heating a bed, or the movement of an iceberg in changing ocean currents. A practical example is thermal hydraulics. This can be described by the formula

ϕq = v ρ cp ΔT

where
ϕq is heat flux (W/m2),
ρ is density (kg/m3),
cp is heat capacity at constant pressure (J/(kg·K)),
ΔT is the difference in temperature (K),
v is velocity (m/s).
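A minimal numerical sketch of the advection formula above; the water-like property values are assumed for illustration only.

```python
def advective_heat_flux(velocity_m_s: float, density_kg_m3: float,
                        cp_j_kg_k: float, delta_t_k: float) -> float:
    """Advective heat flux: phi_q = v * rho * c_p * dT, in W/m^2."""
    return velocity_m_s * density_kg_m3 * cp_j_kg_k * delta_t_k

# Assumed, water-like values: 0.5 m/s flow, 1000 kg/m^3, 4186 J/(kg*K), 10 K warmer than surroundings.
print(f"{advective_heat_flux(0.5, 1000.0, 4186.0, 10.0):.2e} W/m^2")  # ~2.1e7 W/m^2
```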
Conduction
On a microscopic scale, heat conduction occurs as hot, rapidly moving or vibrating atoms and molecules interact with neighboring atoms and molecules, transferring some of their energy (heat) to these neighboring particles. In other words, heat is transferred by conduction when adjacent atoms vibrate against one another, or as electrons move from one atom to another. Conduction is the most significant means of heat transfer within a solid or between solid objects in thermal contact. Fluids—especially gases—are less conductive. Thermal contact conductance is the study of heat conduction between solid bodies in contact. The process of heat transfer from one place to another place without the movement of particles is called conduction, such as when placing a hand on a cold glass of water—heat is conducted from the warm skin to the cold glass, but if the hand is held a few inches from the glass, little conduction would occur since air is a poor conductor of heat. Steady-state conduction is an idealized model of conduction that happens when the temperature difference driving the conduction is constant so that after a time, the spatial distribution of temperatures in the conducting object does not change any further (see Fourier's law). In steady state conduction, the amount of heat entering a section is equal to amount of heat coming out, since the temperature change (a measure of heat energy) is zero. An example of steady state conduction is the heat flow through walls of a warm house on a cold day—inside the house is maintained at a high temperature and, outside, the temperature stays low, so the transfer of heat per unit time stays near a constant rate determined by the insulation in the wall and the spatial distribution of temperature in the walls will be approximately constant over time.
Transient conduction (see Heat equation) occurs when the temperature within an object changes as a function of time. Analysis of transient systems is more complex, and analytic solutions of the heat equation are only valid for idealized model systems. Practical applications are generally investigated using numerical methods, approximation techniques, or empirical study.
Convection
The flow of fluid may be forced by external processes, or sometimes (in gravitational fields) by buoyancy forces caused when thermal energy expands the fluid (for example in a fire plume), thus influencing its own transfer. The latter process is often called "natural convection". All convective processes also move heat partly by diffusion, as well. Another form of convection is forced convection. In this case, the fluid is forced to flow by using a pump, fan, or other mechanical means.
Convective heat transfer, or simply, convection, is the transfer of heat from one place to another by the movement of fluids, a process that is essentially the transfer of heat via mass transfer. The bulk motion of fluid enhances heat transfer in many physical situations, such as between a solid surface and the fluid. Convection is usually the dominant form of heat transfer in liquids and gases. Although sometimes discussed as a third method of heat transfer, convection is usually used to describe the combined effects of heat conduction within the fluid (diffusion) and heat transference by bulk fluid flow streaming. The process of transport by fluid streaming is known as advection, but pure advection is a term that is generally associated only with mass transport in fluids, such as advection of pebbles in a river. In the case of heat transfer in fluids, where transport by advection in a fluid is always also accompanied by transport via heat diffusion (also known as heat conduction) the process of heat convection is understood to refer to the sum of heat transport by advection and diffusion/conduction.
Free, or natural, convection occurs when bulk fluid motions (streams and currents) are caused by buoyancy forces that result from density variations due to variations of temperature in the fluid. Forced convection is a term used when the streams and currents in the fluid are induced by external means—such as fans, stirrers, and pumps—creating an artificially induced convection current.
Convection-cooling
Convective cooling is sometimes described as Newton's law of cooling: the rate of heat loss of a body is proportional to the temperature difference between the body and its surroundings. In symbolic form, the rate of heat loss is Q = h A (T − Tenv), where h is the heat transfer coefficient, A is the exposed surface area, T is the surface temperature of the body, and Tenv is the temperature of the environment.
However, by definition, the validity of Newton's law of cooling requires that the rate of heat loss from convection be a linear function of ("proportional to") the temperature difference that drives heat transfer, and in convective cooling this is sometimes not the case. In general, convection is not linearly dependent on temperature gradients, and in some cases is strongly nonlinear. In these cases, Newton's law does not apply.
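A small sketch of Newton's law of cooling in the linear regime it describes; the heat transfer coefficient, area, and temperatures below are assumed illustrative values.

```python
def convective_heat_loss_w(h_w_m2_k: float, area_m2: float,
                           t_surface_k: float, t_ambient_k: float) -> float:
    """Newton's law of cooling: Q = h * A * (T_surface - T_ambient)."""
    return h_w_m2_k * area_m2 * (t_surface_k - t_ambient_k)

# Assumed values: h = 10 W/(m^2*K) (still air), 1.5 m^2 of surface, 20 K above ambient.
print(f"{convective_heat_loss_w(10.0, 1.5, 310.0, 290.0):.0f} W")  # 300 W
```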
Convection vs. conduction
In a body of fluid that is heated from underneath its container, conduction, and convection can be considered to compete for dominance. If heat conduction is too great, fluid moving down by convection is heated by conduction so fast that its downward movement will be stopped due to its buoyancy, while fluid moving up by convection is cooled by conduction so fast that its driving buoyancy will diminish. On the other hand, if heat conduction is very low, a large temperature gradient may be formed and convection might be very strong.
The Rayleigh number (Ra) is the product of the Grashof (Gr) and Prandtl (Pr) numbers. It is a measure that determines the relative strength of conduction and convection:

Ra = Gr·Pr = g Δρ L³ / (μ α) = g β ΔT L³ / (ν α)

where
g is the acceleration due to gravity,
ρ is the density, with Δρ being the density difference between the lower and upper ends,
μ is the dynamic viscosity,
α is the thermal diffusivity,
β is the volume thermal expansivity (sometimes denoted α elsewhere),
T is the temperature,
ν is the kinematic viscosity, and
L is the characteristic length.
The Rayleigh number can be understood as the ratio between the rate of heat transfer by convection to the rate of heat transfer by conduction; or, equivalently, the ratio between the corresponding timescales (i.e. conduction timescale divided by convection timescale), up to a numerical factor. This can be seen as follows, where all calculations are up to numerical factors depending on the geometry of the system.
The buoyancy force driving the convection is roughly Δρ g L³, so the corresponding pressure is roughly Δρ g L. In steady state, this is canceled by the shear stress due to viscosity, and therefore roughly equals μ V / L = μ / Tconv, where V is the typical fluid velocity due to convection and Tconv the order of its timescale. The conduction timescale, on the other hand, is of the order of Tcond = L² / α.
Convection occurs when the Rayleigh number is above 1,000–2,000.
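A minimal sketch computing the Rayleigh number from the second form above and comparing it with the onset threshold quoted in the text; the fluid properties are assumed, roughly air-like values.

```python
def rayleigh_number(g: float, beta: float, delta_t: float, length: float,
                    nu: float, alpha: float) -> float:
    """Ra = g * beta * dT * L^3 / (nu * alpha)."""
    return g * beta * delta_t * length**3 / (nu * alpha)

# Assumed, roughly air-like properties at room temperature, 5 K across a 5 cm layer.
ra = rayleigh_number(g=9.81, beta=3.4e-3, delta_t=5.0, length=0.05,
                     nu=1.5e-5, alpha=2.1e-5)
print(f"Ra ≈ {ra:.2e}; convection expected: {ra > 2000}")  # ~6.6e4, True
```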
Radiation
Radiative heat transfer is the transfer of energy via thermal radiation, i.e., electromagnetic waves. It occurs across vacuum or any transparent medium (solid or fluid or gas). Thermal radiation is emitted by all objects at temperatures above absolute zero, due to random movements of atoms and molecules in matter. Since these atoms and molecules are composed of charged particles (protons and electrons), their movement results in the emission of electromagnetic radiation which carries away energy. Radiation is typically only important in engineering applications for very hot objects, or for objects with a large temperature difference.
When the objects and the distances separating them are large in size compared to the wavelength of thermal radiation, the rate of transfer of radiant energy is best described by the Stefan–Boltzmann equation. For an object in vacuum, the equation is:

ϕq = ε σ T⁴

For radiative transfer between two objects, the equation is as follows:

ϕq = ε σ F (Ta⁴ − Tb⁴)

where
ϕq is the heat flux,
ε is the emissivity (unity for a black body),
σ is the Stefan–Boltzmann constant,
F is the view factor between the two surfaces a and b, and
Ta and Tb are the absolute temperatures (in kelvins or degrees Rankine) of the two objects.
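A short sketch evaluating the two-surface form above; the emissivity, view factor, and temperatures are assumed illustrative values.

```python
SIGMA = 5.670e-8  # Stefan–Boltzmann constant, W/(m^2*K^4)

def radiative_flux(emissivity: float, view_factor: float,
                   t_a_k: float, t_b_k: float) -> float:
    """phi_q = eps * sigma * F * (T_a^4 - T_b^4), in W/m^2."""
    return emissivity * SIGMA * view_factor * (t_a_k**4 - t_b_k**4)

# Assumed values: a grey surface (eps = 0.9) at 500 K facing a 300 K surface with F = 1.
print(f"{radiative_flux(0.9, 1.0, 500.0, 300.0):.0f} W/m^2")  # ~2,800 W/m^2
```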
The blackbody limit established by the Stefan-Boltzmann equation can be exceeded when the objects exchanging thermal radiation or the distances separating them are comparable in scale or smaller than the dominant thermal wavelength. The study of these cases is called near-field radiative heat transfer.
Radiation from the sun, or solar radiation, can be harvested for heat and power. Unlike conductive and convective forms of heat transfer, thermal radiation arriving within a narrow angle, i.e. coming from a source much smaller than its distance, can be concentrated in a small spot by using reflecting mirrors, which is exploited in concentrating solar power generation or a burning glass. For example, the sunlight reflected from mirrors heats the PS10 solar power tower and during the day it can heat water to .
The temperature reachable at the target is limited by the temperature of the hot source of radiation (the T⁴ law lets the reverse flow of radiation back to the source rise). The Sun, with a surface temperature of somewhat above 4000 K, allows roughly 3000 °C (about 3273 K) to be reached at a small probe in the focal spot of the large concave concentrating mirror of the Mont-Louis Solar Furnace in France.
Phase transition
Phase transition, or phase change, takes place in a thermodynamic system from one phase or state of matter to another by heat transfer. Examples of phase changes are the melting of ice or the boiling of water.
The Mason equation explains the growth of a water droplet based on the effects of heat transport on evaporation and condensation.
Phase transitions involve the four fundamental states of matter:
Solid – Deposition, freezing, and solid-to-solid transformation.
Liquid – Condensation and melting / fusion.
Gas – Boiling / evaporation, recombination/ deionization, and sublimation.
Plasma – Ionization.
Boiling
The boiling point of a substance is the temperature at which the vapor pressure of the liquid equals the pressure surrounding the liquid and the liquid evaporates resulting in an abrupt change in vapor volume.
In a closed system, saturation temperature and boiling point mean the same thing. The saturation temperature is the temperature for a corresponding saturation pressure at which a liquid boils into its vapor phase. The liquid can be said to be saturated with thermal energy. Any addition of thermal energy results in a phase transition.
At standard atmospheric pressure and low temperatures, no boiling occurs and the heat transfer rate is controlled by the usual single-phase mechanisms. As the surface temperature is increased, local boiling occurs and vapor bubbles nucleate, grow into the surrounding cooler fluid, and collapse. This is sub-cooled nucleate boiling, and is a very efficient heat transfer mechanism. At high bubble generation rates, the bubbles begin to interfere and the heat flux no longer increases rapidly with surface temperature (this is the departure from nucleate boiling, or DNB).
At similar standard atmospheric pressure and high temperatures, the hydrodynamically quieter regime of film boiling is reached. Heat fluxes across the stable vapor layers are low but rise slowly with temperature. Any contact between the fluid and the surface that may be seen probably leads to the extremely rapid nucleation of a fresh vapor layer ("spontaneous nucleation"). At higher temperatures still, a maximum in the heat flux is reached (the critical heat flux, or CHF).
The Leidenfrost Effect demonstrates how nucleate boiling slows heat transfer due to gas bubbles on the heater's surface. As mentioned, gas-phase thermal conductivity is much lower than liquid-phase thermal conductivity, so the outcome is a kind of "gas thermal barrier".
Condensation
Condensation occurs when a vapor is cooled and changes its phase to a liquid. During condensation, the latent heat of vaporization must be released. The amount of heat is the same as that absorbed during vaporization at the same fluid pressure.
There are several types of condensation:
Homogeneous condensation, as during the formation of fog.
Condensation in direct contact with subcooled liquid.
Condensation on direct contact with a cooling wall of a heat exchanger: This is the most common mode used in industry: Dropwise condensation is difficult to sustain reliably; therefore, industrial equipment is normally designed to operate in filmwise condensation mode.
Melting
Melting is a thermal process that results in the phase transition of a substance from a solid to a liquid. The internal energy of a substance is increased, typically through heat or pressure, resulting in a rise of its temperature to the melting point, at which the ordering of ionic or molecular entities in the solid breaks down to a less ordered state and the solid liquefies. Molten substances generally have reduced viscosity with elevated temperature; an exception to this maxim is the element sulfur, whose viscosity increases to a point due to polymerization and then decreases with higher temperatures in its molten state.
Modeling approaches
Heat transfer can be modeled in various ways.
Heat equation
The heat equation is an important partial differential equation that describes the distribution of heat (or temperature variation) in a given region over time. In some cases, exact solutions of the equation are available; in other cases the equation must be solved numerically using computational methods such as DEM-based models for thermal/reacting particulate systems (as critically reviewed by Peng et al.).
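In one dimension the heat equation is usually written ∂T/∂t = α ∂²T/∂x². Below is a minimal explicit finite-difference sketch of a numerical solution; the rod length, diffusivity, and boundary temperatures are assumed for illustration, and this is a toy scheme rather than any of the specific methods cited above.

```python
# Explicit (FTCS) finite-difference sketch for the 1-D heat equation dT/dt = alpha * d2T/dx2.
alpha = 1e-4          # thermal diffusivity, m^2/s (assumed)
nx, length = 21, 1.0  # grid points and rod length in metres (assumed)
dx = length / (nx - 1)
dt = 0.4 * dx * dx / alpha   # time step chosen below the stability limit dx^2 / (2 * alpha)

temps = [300.0] * nx                 # initial temperature along the rod, K
temps[0], temps[-1] = 400.0, 300.0   # fixed boundary temperatures (assumed)

for _ in range(2000):                # march forward in time
    new = temps[:]
    for i in range(1, nx - 1):
        new[i] = temps[i] + alpha * dt / dx**2 * (temps[i+1] - 2*temps[i] + temps[i-1])
    temps = new

print(" ".join(f"{t:.0f}" for t in temps[::5]))  # sampled temperatures along the rod
```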
Lumped system analysis
Lumped system analysis often reduces the complexity of the equations to one first-order linear differential equation, in which case heating and cooling are described by a simple exponential solution, often referred to as Newton's law of cooling.
System analysis by the lumped capacitance model is a common approximation in transient conduction that may be used whenever heat conduction within an object is much faster than heat conduction across the boundary of the object. This is a method of approximation that reduces one aspect of the transient conduction system—that within the object—to an equivalent steady-state system. That is, the method assumes that the temperature within the object is completely uniform, although its value may change over time.
In this method, the ratio of the conductive heat resistance within the object to the convective heat transfer resistance across the object's boundary, known as the Biot number, is calculated. For small Biot numbers, the approximation of spatially uniform temperature within the object can be used: it can be presumed that heat transferred into the object has time to uniformly distribute itself, due to the lower resistance to doing so, as compared with the resistance to heat entering the object.
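A small sketch of the lumped-capacitance check described above and the resulting exponential cooling; the sphere size, material properties, and heat transfer coefficient are assumed illustrative values.

```python
import math

# Assumed values: a small metal sphere cooling in still air.
h = 25.0                  # convective coefficient, W/(m^2*K)
k = 200.0                 # thermal conductivity of the solid, W/(m*K)
rho, cp = 2700.0, 900.0   # density, kg/m^3, and specific heat, J/(kg*K)
radius = 0.01             # m
volume = 4 / 3 * math.pi * radius**3
area = 4 * math.pi * radius**2
lc = volume / area        # characteristic length V/A

biot = h * lc / k
print(f"Biot number ≈ {biot:.4f} (lumped model reasonable if much less than 1)")

# Newton's-law (exponential) cooling under the lumped approximation.
tau = rho * cp * volume / (h * area)   # time constant, s
t_env, t0, t = 293.0, 393.0, 120.0     # ambient, initial temperature, elapsed time (assumed)
temp = t_env + (t0 - t_env) * math.exp(-t / tau)
print(f"temperature after {t:.0f} s ≈ {temp:.1f} K")
```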
Climate models
Climate models study the radiant heat transfer by using quantitative methods to simulate the interactions of the atmosphere, oceans, land surface, and ice.
Engineering
Heat transfer has broad application to the functioning of numerous devices and systems. Heat-transfer principles may be used to preserve, increase, or decrease temperature in a wide variety of circumstances. Heat transfer methods are used in numerous disciplines, such as automotive engineering, thermal management of electronic devices and systems, climate control, insulation, materials processing, chemical engineering and power station engineering.
Insulation, radiance and resistance
Thermal insulators are materials specifically designed to reduce the flow of heat by limiting conduction, convection, or both. Thermal resistance is a heat property that measures how much an object or material resists heat flow (heat per unit time) for a given temperature difference.
Radiance, or spectral radiance, is a measure of the quantity of radiation that passes through or is emitted. Radiant barriers are materials that reflect radiation, and therefore reduce the flow of heat from radiation sources. Good insulators are not necessarily good radiant barriers, and vice versa. Metal, for instance, is an excellent reflector and a poor insulator.
The effectiveness of a radiant barrier is indicated by its reflectivity, which is the fraction of radiation reflected. A material with a high reflectivity (at a given wavelength) has a low emissivity (at that same wavelength), and vice versa. At any specific wavelength, reflectivity=1 - emissivity. An ideal radiant barrier would have a reflectivity of 1, and would therefore reflect 100 percent of incoming radiation. Vacuum flasks, or Dewars, are silvered to approach this ideal. In the vacuum of space, satellites use multi-layer insulation, which consists of many layers of aluminized (shiny) Mylar to greatly reduce radiation heat transfer and control satellite temperature.
Devices
A heat engine is a system that performs the conversion of a flow of thermal energy (heat) to mechanical energy to perform mechanical work.
A thermocouple is a temperature-measuring device and a widely used type of temperature sensor for measurement and control, and can also be used to convert heat into electric power.
A thermoelectric cooler is a solid-state electronic device that pumps (transfers) heat from one side of the device to the other when an electric current is passed through it. It is based on the Peltier effect.
A thermal diode or thermal rectifier is a device that causes heat to flow preferentially in one direction.
Heat exchangers
A heat exchanger is used for more efficient heat transfer or to dissipate heat. Heat exchangers are widely used in refrigeration, air conditioning, space heating, power generation, and chemical processing. One common example of a heat exchanger is a car's radiator, in which the hot coolant fluid is cooled by the flow of air over the radiator's surface.
Common types of heat exchanger flows include parallel flow, counter flow, and cross flow. In parallel flow, both fluids move in the same direction while transferring heat; in counter flow, the fluids move in opposite directions; and in cross flow, the fluids move at right angles to each other. Common types of heat exchangers include shell and tube, double pipe, extruded finned pipe, spiral fin pipe, u-tube, and stacked plate. Each type has certain advantages and disadvantages over other types.
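As an illustration of how the flow arrangement enters a sizing calculation, the sketch below computes the log mean temperature difference (LMTD) for a counter-flow case, a standard heat-exchanger quantity not detailed in this article; the inlet and outlet temperatures are assumed.

```python
import math

def lmtd(dt_1: float, dt_2: float) -> float:
    """Log mean temperature difference between the two ends of an exchanger."""
    if math.isclose(dt_1, dt_2):
        return dt_1
    return (dt_1 - dt_2) / math.log(dt_1 / dt_2)

# Assumed counter-flow case: hot stream 90 -> 60 C, cold stream 20 -> 40 C.
dt_hot_end = 90.0 - 40.0   # hot inlet vs cold outlet
dt_cold_end = 60.0 - 20.0  # hot outlet vs cold inlet
print(f"LMTD ≈ {lmtd(dt_hot_end, dt_cold_end):.1f} K")  # ~44.8 K
# The heat duty then follows from Q = U * A * LMTD for a given U and area.
```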
A heat sink is a component that transfers heat generated within a solid material to a fluid medium, such as air or a liquid. Examples of heat sinks are the heat exchangers used in refrigeration and air conditioning systems or the radiator in a car. A heat pipe is another heat-transfer device that combines thermal conductivity and phase transition to efficiently transfer heat between two solid interfaces.
Applications
Architecture
Efficient energy use is the goal to reduce the amount of energy required in heating or cooling. In architecture, condensation and air currents can cause cosmetic or structural damage. An energy audit can help to assess the implementation of recommended corrective procedures. For instance, insulation improvements, air sealing of structural leaks, or the addition of energy-efficient windows and doors.
Smart meter is a device that records electric energy consumption in intervals.
Thermal transmittance is the rate of transfer of heat through a structure divided by the difference in temperature across the structure. It is expressed in watts per square meter per kelvin, or W/(m2K). Well-insulated parts of a building have a low thermal transmittance, whereas poorly-insulated parts of a building have a high thermal transmittance.
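A minimal sketch applying the thermal transmittance (U-value) definition above to estimate heat flow through a wall; the transmittance, area, and temperatures are assumed illustrative values.

```python
def heat_loss_w(u_value_w_m2_k: float, area_m2: float,
                t_inside_k: float, t_outside_k: float) -> float:
    """Heat flow through a structure: Q = U * A * (T_inside - T_outside)."""
    return u_value_w_m2_k * area_m2 * (t_inside_k - t_outside_k)

# Assumed values: a 12 m^2 wall with U = 0.3 W/(m^2*K) and a 20 K indoor-outdoor difference.
print(f"{heat_loss_w(0.3, 12.0, 293.0, 273.0):.0f} W")  # 72 W
```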
Thermostat is a device to monitor and control temperature.
Climate engineering
Climate engineering consists of carbon dioxide removal and solar radiation management. Since the amount of carbon dioxide determines the radiative balance of Earth's atmosphere, carbon dioxide removal techniques can be applied to reduce the radiative forcing. Solar radiation management is the attempt to absorb less solar radiation to offset the effects of greenhouse gases.
An alternative method is passive daytime radiative cooling, which enhances terrestrial heat flow to outer space through the infrared window (8–13 μm). Rather than merely blocking solar radiation, this method increases outgoing longwave infrared (LWIR) thermal radiation heat transfer with the extremely cold temperature of outer space (~2.7 K) to lower ambient temperatures while requiring zero energy input.
Greenhouse effect
The greenhouse effect is a process by which thermal radiation from a planetary surface is absorbed by atmospheric greenhouse gases and clouds, and is re-radiated in all directions, resulting in a reduction in the amount of thermal radiation reaching space relative to what would reach space in the absence of absorbing materials. This reduction in outgoing radiation leads to a rise in the temperature of the surface and troposphere until the rate of outgoing radiation again equals the rate at which heat arrives from the Sun.
Heat transfer in the human body
The principles of heat transfer in engineering systems can be applied to the human body to determine how the body transfers heat. Heat is produced in the body by the continuous metabolism of nutrients which provides energy for the systems of the body. The human body must maintain a consistent internal temperature to maintain healthy bodily functions. Therefore, excess heat must be dissipated from the body to keep it from overheating. When a person engages in elevated levels of physical activity, the body requires additional fuel which increases the metabolic rate and the rate of heat production. The body must then use additional methods to remove the additional heat produced to keep the internal temperature at a healthy level.
Heat transfer by convection is driven by the movement of fluids over the surface of the body. This convective fluid can be either a liquid or a gas. For heat transfer from the outer surface of the body, the convection mechanism is dependent on the surface area of the body, the velocity of the air, and the temperature gradient between the surface of the skin and the ambient air. The normal temperature of the body is approximately 37 °C. Heat transfer occurs more readily when the temperature of the surroundings is significantly less than the normal body temperature. This concept explains why a person feels cold when not enough covering is worn when exposed to a cold environment. Clothing can be considered an insulator which provides thermal resistance to heat flow over the covered portion of the body. This thermal resistance causes the temperature on the surface of the clothing to be less than the temperature on the surface of the skin. This smaller temperature gradient between the surface temperature and the ambient temperature will cause a lower rate of heat transfer than if the skin were not covered.
To ensure that one portion of the body is not significantly hotter than another portion, heat must be distributed evenly through the bodily tissues. Blood flowing through blood vessels acts as a convective fluid and helps to prevent any buildup of excess heat inside the tissues of the body. This flow of blood through the vessels can be modeled as pipe flow in an engineering system. The heat carried by the blood is determined by the temperature of the surrounding tissue, the diameter of the blood vessel, the thickness (viscosity) of the fluid, the velocity of the flow, and the heat transfer coefficient of the blood. The velocity, blood vessel diameter, and fluid viscosity can all be related to the Reynolds number, a dimensionless number used in fluid mechanics to characterize the flow of fluids.
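As an illustration of the Reynolds number just mentioned, the short sketch below uses rough, assumed values for blood flowing in a large vessel (the density, velocity, diameter and viscosity are illustrative numbers, not data from this article):

```python
# Illustrative sketch only: Reynolds number Re = rho * v * D / mu for pipe-like flow.

def reynolds_number(density, velocity, diameter, viscosity):
    """Return the dimensionless Reynolds number (all inputs in SI units)."""
    return density * velocity * diameter / viscosity

# Rough, assumed values for blood in a large vessel such as the aorta:
rho = 1060.0   # kg/m^3, blood density
v = 0.3        # m/s, mean flow velocity
d = 0.025      # m, vessel diameter
mu = 3.5e-3    # Pa*s, dynamic viscosity
print(f"Re = {reynolds_number(rho, v, d, mu):.0f}")  # about 2270, near the laminar limit
```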
Latent heat loss, also known as evaporative heat loss, accounts for a large fraction of heat loss from the body. When the core temperature of the body increases, the body triggers sweat glands in the skin to bring additional moisture to the surface of the skin. The liquid is then transformed into vapor, which removes heat from the surface of the body. The rate of evaporative heat loss is directly related to the vapor pressure at the skin surface and the amount of moisture present on the skin. Therefore, the maximum rate of heat transfer will occur when the skin is completely wet. The body continuously loses water by evaporation, but the most significant amount of heat loss occurs during periods of increased physical activity.
Cooling techniques
Evaporative cooling
Evaporative cooling happens when water vapor is added to the surrounding air. The energy needed to evaporate the water is taken from the air in the form of sensible heat and converted into latent heat, while the air remains at a constant enthalpy. Latent heat describes the amount of heat that is needed to evaporate the liquid; this heat comes from the liquid itself and the surrounding gas and surfaces. The greater the difference between the two temperatures (the dry-bulb and the wet-bulb temperature), the greater the evaporative cooling effect. When the temperatures are the same, no net evaporation of water in the air occurs; thus, there is no cooling effect.
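A minimal sketch of this energy balance, under the assumption of a constant latent heat of vaporization and a constant specific heat of air (the 2 g of water and 1 kg of air are made-up example quantities):

```python
# Illustrative sketch only: sensible temperature drop of an air parcel when a small
# mass of water evaporates into it at (approximately) constant enthalpy.

H_FG = 2.45e6    # J/kg, latent heat of vaporization of water near room temperature
CP_AIR = 1006.0  # J/(kg*K), specific heat of dry air at constant pressure

def temperature_drop(water_evaporated_kg, air_mass_kg):
    """Approximate sensible cooling of the air parcel, in kelvin."""
    return water_evaporated_kg * H_FG / (air_mass_kg * CP_AIR)

# Evaporating 2 g of water into 1 kg of air:
print(f"dT = {temperature_drop(0.002, 1.0):.1f} K")  # about 4.9 K of cooling
```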
Laser cooling
In quantum physics, laser cooling is used to cool atomic and molecular samples to temperatures near absolute zero (−273.15 °C, −459.67 °F) in order to observe unique quantum effects that can only occur at such low temperatures.
Doppler cooling is the most common method of laser cooling.
Sympathetic cooling is a process in which particles of one type cool particles of another type. Typically, atomic ions that can be directly laser-cooled are used to cool nearby ions or atoms. This technique allows the cooling of ions and atoms that cannot be laser-cooled directly.
Magnetic cooling
Magnetic evaporative cooling is a process for lowering the temperature of a group of atoms after they have been pre-cooled by methods such as laser cooling. Magnetic refrigeration cools below 0.3 K by making use of the magnetocaloric effect.
Radiative cooling
Radiative cooling is the process by which a body loses heat by radiation. Outgoing energy is an important effect in the Earth's energy budget. In the case of the Earth-atmosphere system, it refers to the process by which long-wave (infrared) radiation is emitted to balance the absorption of short-wave (visible) energy from the Sun. The thermosphere (top of atmosphere) cools to space primarily by infrared energy radiated by carbon dioxide (CO2) at 15 μm and by nitric oxide (NO) at 5.3 μm. Convective transport of heat and evaporative transport of latent heat both remove heat from the surface and redistribute it in the atmosphere.
Thermal energy storage
Thermal energy storage includes technologies for collecting and storing energy for later use. It may be employed to balance energy demand between day and nighttime. The thermal reservoir may be maintained at a temperature above or below that of the ambient environment. Applications include space heating, domestic or process hot water systems, or generating electricity.
History
Newton's law of cooling
In 1701, Isaac Newton anonymously published an article in Philosophical Transactions noting (in modern terms) that the rate of temperature change of a body is proportional to the difference in temperatures ("degrees of heat") between the body and its surroundings. The phrase "temperature change" was later replaced with "heat loss", and the relationship was named Newton's law of cooling. In general, the law is valid only if the temperature difference is small and the heat transfer mechanism remains the same.
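In modern notation the law is usually written as a first-order differential equation; the rendering below is the standard textbook form (not a quotation from Newton), where T0 is the initial temperature of the body, T_env the temperature of the surroundings, and k a positive constant fixed by the heat transfer mechanism:

```latex
% Standard modern form of Newton's law of cooling and its solution:
\frac{\mathrm{d}T}{\mathrm{d}t} = -k\,\bigl(T(t) - T_{\mathrm{env}}\bigr)
\quad\Longrightarrow\quad
T(t) = T_{\mathrm{env}} + \bigl(T_{0} - T_{\mathrm{env}}\bigr)\,e^{-kt}
```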
Thermal conduction
In heat conduction, the law is valid only if the thermal conductivity of the warmer body is independent of temperature. The thermal conductivity of most materials is only weakly dependent on temperature, so in general the law holds true.
Thermal convection
In convective heat transfer, the law is valid for forced air or pumped fluid cooling, where the properties of the fluid do not vary strongly with temperature, but it is only approximately true for buoyancy-driven convection, where the velocity of the flow increases with temperature difference.
Thermal radiation
In the case of heat transfer by thermal radiation, Newton's law of cooling holds only for very small temperature differences.
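The restriction to small differences can be made explicit by linearising the Stefan–Boltzmann law, a standard textbook step rather than material from this article (ε is the emissivity, σ the Stefan–Boltzmann constant, A the radiating area and ΔT the small temperature excess over the surroundings):

```latex
% Radiative loss from a body at T = T_env + \Delta T, expanded to first order in \Delta T:
P = \varepsilon\sigma A\left(T^{4} - T_{\mathrm{env}}^{4}\right)
  = \varepsilon\sigma A\left(\left(T_{\mathrm{env}} + \Delta T\right)^{4} - T_{\mathrm{env}}^{4}\right)
  \approx 4\,\varepsilon\sigma A\,T_{\mathrm{env}}^{3}\,\Delta T
% Only for \Delta T \ll T_env is the loss proportional to the temperature difference,
% which is why Newton's law holds for small differences only.
```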
Thermal conductivity of different metals
In a 1780 letter to Benjamin Franklin, Dutch-born British scientist Jan Ingenhousz relates an experiment which enabled him to rank seven different metals according to their thermal conductivities.
Benjamin Thompson's experiments on heat transfer
During the years 1784 – 1798, the British physicist Benjamin Thompson (Count Rumford) lived in Bavaria, reorganizing the Bavarian army for the Prince-elector Charles Theodore among other official and charitable duties. The Elector gave Thompson access to the facilities of the Electoral Academy of Sciences in Mannheim. During his years in Mannheim and later in Munich, Thompson made a large number of discoveries and inventions related to heat.
Conductivity experiments
"New Experiments upon Heat"
In 1785 Thompson performed a series of thermal conductivity experiments, which he describes in great detail in the Philosophical Transactions article "New Experiments upon Heat" from 1786. The fact that good electrical conductors are often also good heat conductors and vice versa must have been well known at the time, for Thompson mentions it in passing. He intended to measure the relative conductivities of mercury, water, moist air, "common air" (dry air at normal atmospheric pressure), dry air at various degrees of rarefaction, and a "Torricellian vacuum".
For these experiments, Thompson employed a thermometer inside a large, closed glass tube. Under the circumstances described, heat may—unbeknownst to Thompson—have been transferred more by radiation than by conduction.
After the experiments, Thompson was surprised to observe that a vacuum was a significantly poorer heat conductor than air "which of itself is reckoned among the worst", but that there was only a very small difference between common air and rarefied air. He also noted the great difference between dry air and moist air, and the great benefit this affords.
Temperature vs. sensible heat
Thompson concluded with some comments on the important difference between temperature and sensible heat.
Coining of the term "convection"
In the 1830s, in The Bridgewater Treatises, the term convection is attested in a scientific sense. In treatise VIII by William Prout, in the book on chemistry, it says:
This motion of heat takes place in three ways, which a common fire-place very well illustrates. If, for instance, we place a thermometer directly before a fire, it soon begins to rise, indicating an increase of temperature. In this case the heat has made its way through the space between the fire and the thermometer, by the process termed radiation. If we place a second thermometer in contact with any part of the grate, and away from the direct influence of the fire, we shall find that this thermometer also denotes an increase of temperature; but here the heat must have travelled through the metal of the grate, by what is termed conduction. Lastly, a third thermometer placed in the chimney, away from the direct influence of the fire, will also indicate a considerable increase of temperature; in this case a portion of the air, passing through and near the fire, has become heated, and has carried up the chimney the temperature acquired from the fire. There is at present no single term in our language employed to denote this third mode of the propagation of heat; but we venture to propose for that purpose, the term convection, [in footnote: [Latin] Convectio, a carrying or conveying] which not only expresses the leading fact, but also accords very well with the two other terms.
Later, in the same treatise VIII, in the book on meteorology, the concept of convection is also applied to "the process by which heat is communicated through water".
| Physical sciences | Thermodynamics | null |
184729 | https://en.wikipedia.org/wiki/Sanitation | Sanitation | Sanitation refers to public health conditions related to clean drinking water and treatment and disposal of human excreta and sewage. Preventing human contact with feces is part of sanitation, as is hand washing with soap. Sanitation systems aim to protect human health by providing a clean environment that will stop the transmission of disease, especially through the fecal–oral route. For example, diarrhea, a main cause of malnutrition and stunted growth in children, can be reduced through adequate sanitation. There are many other diseases which are easily transmitted in communities that have low levels of sanitation, such as ascariasis (a type of intestinal worm infection or helminthiasis), cholera, hepatitis, polio, schistosomiasis, and trachoma, to name just a few.
A range of sanitation technologies and approaches exists. Some examples are community-led total sanitation, container-based sanitation, ecological sanitation, emergency sanitation, environmental sanitation, onsite sanitation and sustainable sanitation. A sanitation system includes the capture, storage, transport, treatment and disposal or reuse of human excreta and wastewater. Reuse activities within the sanitation system may focus on the nutrients, water, energy or organic matter contained in excreta and wastewater. This is referred to as the "sanitation value chain" or "sanitation economy". The people responsible for cleaning, maintaining, operating, or emptying a sanitation technology at any step of the sanitation chain are called "sanitation workers".
Several sanitation "levels" are being used to compare sanitation service levels within countries or across countries. The sanitation ladder defined by the Joint Monitoring Programme in 2016 starts at open defecation and moves upwards using the terms "unimproved", "limited", "basic", with the highest level being "safely managed". This is particularly applicable to developing countries.
The Human Right to Water and Sanitation was recognized by the United Nations (UN) General Assembly in 2010. Sanitation is a global development priority and the subject of Sustainable Development Goal 6. The estimate in 2017 by JMP states that 4.5 billion people currently do not have safely managed sanitation. Lack of access to sanitation has an impact not only on public health but also on human dignity and personal safety.
Definitions
There are some variations on the use of the term "sanitation" between countries and organizations. The World Health Organization defines the term "sanitation" as follows:
Sanitation includes all four of these technical and non-technical systems: Excreta management systems, wastewater management systems (included here are wastewater treatment plants), solid waste management systems as well as drainage systems for rainwater, also called stormwater drainage. However, many in the WASH sector only include excreta management in their definition of sanitation.
Another example of what is included in sanitation is found in the handbook by Sphere on "Humanitarian Charter and Minimum Standards in Humanitarian Response" which describes minimum standards in four "key response sectors" in humanitarian response situations. One of them is "Water Supply, Sanitation and Hygiene Promotion" (WASH) and it includes the following areas: Hygiene promotion, water supply, excreta management, vector control, solid waste management and WASH in disease outbreaks and healthcare settings.
Hygiene promotion is seen by many as an integral part of sanitation. The Water Supply and Sanitation Collaborative Council defines sanitation as "The collection, transport, treatment and disposal or reuse of human excreta, domestic wastewater and solid waste, and associated hygiene promotion."
Despite the fact that sanitation includes wastewater treatment, the two terms are often used side by side as "sanitation and wastewater management".
Another definition is in the DFID guidance manual on water supply and sanitation programmes from 1998:
Sanitation can include personal sanitation and public hygiene. Personal sanitation work can include handling menstrual waste, cleaning household toilets, and managing household garbage. Public sanitation work can involve garbage collection, transfer and treatment (municipal solid waste management), cleaning drains, streets, schools, trains, public spaces, community toilets and public toilets, sewers, operating sewage treatment plants, etc. Workers who provide these services for other people are called sanitation workers.
Purposes
The overall purposes of sanitation are to provide a healthy living environment for everyone, to protect the natural resources (such as surface water, groundwater, soil), and to provide safety, security and dignity for people when they defecate or urinate.
The Human Right to Water and Sanitation was recognized by the United Nations (UN) General Assembly in 2010. It has been recognized in international law through human rights treaties, declarations and other standards. It is derived from the human right to an adequate standard of living.
Effective sanitation systems provide barriers between excreta and humans in such a way as to break the disease transmission cycle (for example in the case of fecal-borne diseases). This aspect is visualised with the F-diagram where all major routes of fecal-oral disease transmission begin with the letter F: feces, fingers, flies, fields, fluids, food.
Sanitation infrastructure has to be adapted to several specific contexts including consumers' expectations and local resources available.
Sanitation technologies may involve centralized civil engineering structures like sewer systems, sewage treatment, surface runoff treatment and solid waste landfills. These structures are designed to treat wastewater and municipal solid waste. Sanitation technologies may also take the form of relatively simple onsite sanitation systems. This can in some cases consist of a simple pit latrine or other type of non-flush toilet for the excreta management part.
Providing sanitation to people requires attention to the entire system, not just focusing on technical aspects such as the toilet, fecal sludge management or the wastewater treatment plant. The "sanitation chain" involves the experience of the user, excreta and wastewater collection methods, transporting and treatment of waste, and reuse or disposal. All need to be thoroughly considered.
Economic impacts
The benefits to society of managing human excreta are considerable, for public health as well as for the environment. As a rough estimate: For every US$1 spent on sanitation, the return to society is US$5.50.
For developing countries, the economic costs of inadequate sanitation are a huge concern. For example, according to a World Bank study, economic losses to the Indian economy due to inadequate sanitation are equivalent to 6.4% of its GDP. Most of these losses are due to premature mortality, time lost in accessing facilities, loss of productivity, and additional costs for healthcare, among others. Inadequate sanitation also leads to losses of potential tourism revenue. This study also found that the impacts are disproportionately higher for the poor, women and children. Availability of a toilet at home, on the other hand, contributes positively to the economic well-being of women, as it is associated with increased literacy and participation in the labor force.
Types and concepts (for excreta management)
The term sanitation is connected with various descriptors or adjectives to signify certain types of sanitation systems (which may deal only with human excreta management or with the entire sanitation system, i.e. also greywater, stormwater and solid waste management) – in alphabetical order:
Basic sanitation
In 2017, JMP defined a new term: "basic sanitation service". This is defined as the use of improved sanitation facilities that are not shared with other households. A lower level of service is now called "limited sanitation service" which refers to use of improved sanitation facilities that are shared between two or more households.
Container-based sanitation
Community-based sanitation
Community-based sanitation is related to decentralized wastewater treatment (DEWATS).
Community-led total sanitation
Dry sanitation
The term "dry sanitation" is not in widespread use and is not very well defined. It usually refers to a system that uses a type of dry toilet and no sewers to transport excreta. Often when people speak of "dry sanitation" they mean a sanitation system that uses urine-diverting dry toilet (UDDTs).
Ecological sanitation
Emergency sanitation
Environmental sanitation
Environmental sanitation encompasses the control of environmental factors that are connected to disease transmission. Subsets of this category are solid waste management, water and wastewater treatment, industrial waste treatment and noise pollution control. The World Health Organization (WHO) has defined environmental sanitation as the control of all those factors in the physical environment which exercise a harmful effect on human physical development, health and survival. One of the primary functions of environmental sanitation is to protect public health.
Fecal sludge management
Improved and unimproved sanitation
Lack of sanitation
Lack of sanitation refers to the absence of sanitation. In practical terms it usually means lack of toilets or lack of hygienic toilets that anybody would want to use voluntarily. The result of lack of sanitation is usually open defecation (and open urination, but this is of less concern) with associated serious public health issues. It was estimated that, as of 2015, 2.4 billion people still lacked improved sanitation facilities and 660 million people lacked access to safe drinking water.
Onsite sanitation or non-sewered sanitation system
Onsite sanitation (or on-site sanitation) is defined as "a sanitation system in which excreta and wastewater are collected and stored or treated on the plot where they are generated". Another term that is used for the same system is non-sewered sanitation systems (NSSS), which are prevalent in many countries. NSSS play a vital role in the safe management of fecal sludge, accounting for approximately half of all existing sanitation provisions. The degree of treatment may be variable, from none to advanced. Examples are pit latrines (no treatment) and septic tanks (primary treatment of wastewater). On-site sanitation systems are often connected to fecal sludge management (FSM) systems where the fecal sludge that is generated onsite is treated at an offsite location. Wastewater (sewage) is only generated when piped water supply is available within the buildings or close to them.
A related term is a decentralized wastewater system which refers in particular to the wastewater part of on-site sanitation. Similarly, an onsite sewage facility can treat the wastewater generated locally.
The global methane emissions from NSSS in 2020 were estimated at 377 Mt CO2e per year, or 4.7% of global anthropogenic methane emissions, which is comparable to the greenhouse gas emissions from wastewater treatment plants. This means that GHG emissions from NSSS are a non-negligible source.
Safely managed sanitation
Safely managed sanitation is the highest level of household sanitation envisioned by the Sustainable Development Goal Number 6. It is measured under the Sustainable Development Goal 6.2, Indicator 6.2.1, as the "Proportion of population using (a) safely managed sanitation services and (b) a hand-washing facility with soap and water". The 2017 baseline estimate by JMP is that 4.5 billion people do not have safely managed sanitation.
Safely managed sanitation is defined as an improved sanitation facility which is not shared with other households, and where the excreta produced is either treated and disposed in situ, stored temporarily and then emptied and transported to treatment off-site, or transported through a sewer with wastewater and then treated off-site. In other words, safely managed sanitation is a basic sanitation service where in addition excreta are safely disposed of in situ or transported and treated offsite.
Sustainable sanitation
Other types, concepts and systems
Wastewater management
Wastewater management consists of collection, wastewater treatment (be it municipal or industrial wastewater), disposal or reuse of treated wastewater. The latter is also referred to as water reclamation.
Sanitation systems in urban areas of developed countries usually consist of the collection of wastewater in gravity-driven sewers and its treatment in wastewater treatment plants, followed by reuse or disposal in rivers, lakes or the sea.
In developing countries most wastewater is still discharged untreated into the environment. Alternatives to centralized sewer systems include onsite sanitation, decentralized wastewater systems, and dry toilets connected to fecal sludge management.
Stormwater drainage
Sewers are either combined with storm drains or separated from them as sanitary sewers. Combined sewers are usually found in the central, older parts of urban areas. Heavy rainfall and inadequate maintenance can lead to combined sewer overflows or sanitary sewer overflows, i.e., more or less diluted raw sewage being discharged into the environment. Industries often discharge wastewater into municipal sewers, which can complicate wastewater treatment unless industries pre-treat their discharges.
Solid waste disposal
Disposal of solid waste is most commonly conducted in landfills, but incineration, recycling, composting and conversion to biofuels are also avenues. In the case of landfills, advanced countries typically have rigid protocols for daily cover with topsoil, whereas underdeveloped countries customarily rely upon less stringent protocols. The importance of daily cover lies in the reduction of vector contact and spreading of pathogens. Daily cover also minimizes odor emissions and reduces windblown litter. Likewise, developed countries typically have requirements for perimeter sealing of the landfill with clay-type soils to minimize migration of leachate that could contaminate groundwater (and hence jeopardize some drinking water supplies).
For incineration options, the release of air pollutants, including certain toxic components, is an attendant adverse outcome. Recycling and biofuel conversion are the sustainable options that generally have superior lifecycle costs, particularly when total ecological consequences are considered. Composting value will ultimately be limited by the market demand for compost product.
Food safety
Sanitation within the food industry means the adequate treatment of food-contact surfaces by a process that is effective in destroying vegetative cells of microorganisms of public health significance, and in substantially reducing numbers of other undesirable microorganisms, but without adversely affecting the food or its safety for the consumer (U.S. Food and Drug Administration, Code of Federal Regulations, 21CFR110, USA). Sanitation Standard Operating Procedures are mandatory for food industries in the United States. Similarly, in Japan, food hygiene has to be achieved through compliance with food sanitation law.
In the food and biopharmaceutical industries, the term "sanitary equipment" means equipment that is fully cleanable using clean-in-place (CIP) and sterilization-in-place (SIP) procedures: that is, fully drainable of cleaning solutions and other liquids. The design should have a minimum amount of deadleg, or areas where the turbulence during cleaning is insufficient to remove product deposits. In general, to improve cleanability, this equipment is made from stainless steel 316L (an alloy containing small amounts of molybdenum). The surface is usually electropolished to an effective surface roughness of less than 0.5 micrometre to reduce the possibility of bacterial adhesion.
Hygiene promotion
In many settings, provision of sanitation facilities alone does not guarantee good health of the population. Studies have suggested that hygiene practices have as great an impact on sanitation-related diseases as the actual provision of sanitation facilities. Hygiene promotion is therefore an important part of sanitation and is usually key in maintaining good health.
Hygiene promotion is a planned approach of enabling people to act and change their behavior in order to reduce and/or prevent incidences of water, sanitation and hygiene (WASH) related diseases. It usually involves a participatory approach of engaging people to take responsibility for WASH services and infrastructure, including their operation and maintenance. The three key elements of promoting hygiene are: mutual sharing of information and knowledge, the mobilization of affected communities, and the provision of essential material and facilities.
Health aspects
Environmental aspects
Indicator organisms
When analyzing environmental samples, various types of indicator organisms are used to check for fecal pollution of the sample. Commonly used indicators for bacteriological water analysis include the bacterium Escherichia coli (abbreviated as E. coli) and non-specific fecal coliforms. With regards to samples of soil, sewage sludge, biosolids or fecal matter from dry toilets, helminth eggs are a commonly used indicator. With helminth egg analysis, eggs are extracted from the sample, after which a viability test is done to distinguish between viable and non-viable eggs. The viable fraction of the helminth eggs in the sample is then counted.
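As a small illustration of the counting step just described (the egg counts and the sample mass below are made-up example numbers, not data from any study), the viable fraction and eggs-per-gram can be summarized as follows:

```python
# Illustrative sketch only: summarizing a helminth egg count for a sample.

def egg_summary(viable_eggs, nonviable_eggs, sample_mass_g):
    """Return eggs per gram of sample and the viable fraction of recovered eggs."""
    total = viable_eggs + nonviable_eggs
    return {
        "eggs_per_gram": total / sample_mass_g,
        "viable_fraction": viable_eggs / total if total else 0.0,
    }

# Example: 12 viable and 28 non-viable eggs recovered from a 4 g sludge sample.
print(egg_summary(12, 28, 4.0))  # {'eggs_per_gram': 10.0, 'viable_fraction': 0.3}
```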
Climate change
Global mechanisms
Sustainable Development Goal Number 6
In the year 2016, the Sustainable Development Goals replaced the Millennium Development Goals. Sanitation is a global development priority and is included in Sustainable Development Goal 6 (SDG 6). The target is "clean water and sanitation for all" by 2030. It was estimated that 660 million people still lacked access to safe drinking water as of 2015.
Since the COVID-19 pandemic in 2020, the fight for clean water and sanitation has become more important than ever. Handwashing is one of the most common prevention methods for the coronavirus, yet two out of five people do not have access to a hand-washing station.
Millennium Development Goal Number 7 until 2015
The United Nations, during the Millennium Summit in New York in 2000 and the 2002 World Summit on Sustainable Development in Johannesburg, developed the Millennium Development Goals (MDGs) aimed at poverty eradication and sustainable development. The specific sanitation goal for the year 2015 was to reduce by half the number of people who had no access to potable water and sanitation in the baseline year of 1990. As the JMP and the United Nations Development Programme (UNDP) Human Development Report in 2006 showed, progress in meeting the MDG sanitation target was slow, with a large gap between the target coverage and the reality at the time. In December 2006, the United Nations General Assembly declared 2008 "The International Year of Sanitation", in recognition of the slow progress being made towards the MDGs sanitation target. The year aimed to develop awareness and more actions to meet the target.
There are numerous reasons for this gap. A major one is that sanitation is rarely given the political attention received by other topics, despite its key importance. Sanitation is not high on the international development agenda, and projects such as those relating to water supply are emphasised instead.
The Joint Monitoring Programme for Water Supply and Sanitation of WHO and UNICEF (JMP) has been publishing reports of updated estimates every two years on the use of various types of drinking-water sources and sanitation facilities at the national, regional and global levels. The JMP report for 2015 stated that:
Between 1990 and 2015, open defecation rates have decreased from 38% to 25% globally. Just under one billion people (946 million) still practise open defecation worldwide in 2015.
82% of the global urban population, and 51% of the rural population, were using improved sanitation facilities in 2015, as per the JMP definition of "improved sanitation".
Initiatives to promote sanitation
In 2011 the Bill & Melinda Gates Foundation launched the "Reinvent the Toilet Challenge" to promote safer, more effective ways to treat human waste. The program is aimed at developing technologies that might help bridge the global sanitation gap (for example the Omni Processor, or technology for fecal sludge management). In 2015, the Bill & Melinda Gates Foundation published their "Water, sanitation, and hygiene strategy portfolio update and overview" called "Building demand for sanitation".
The latest innovations in the field of public health sanitation, currently in the testing phase, comprise the use of locally produced alcohol-based hand rub, novel latrine improvements, and container-based sanitation. The Centers for Disease Control and Prevention (CDC), the national public health agency of the United States, has recognized these three initiatives.
Capacity development
Capacity development is regarded as an important mechanism to achieve progress in the sanitation sector. For example, in India the Sanitation Capacity Building platform (SCBP) was designed to "support and build the capacity of town/cities to plan and implement decentralized sanitation solutions" with funding by the Bill & Melinda Gates Foundation from 2015 to 2022. Results from this project showed that capacity development best happens on the job and in a learning organization culture. In a government capacity development initiative, it is critical to have an enabling policy and program funding to translate capacity development input into program and infrastructure outputs. Capacity development aims to empower staff and institutions, develop a learning strategy, learning content and training modules, as well as strengthened partnerships and institutions of learning. The Capacity Development Effectiveness Ladder Framework (CDEL) identifies five critical steps for capacity development interventions: Developing original learning content, partnerships for learning and outreach, learning strategy, visioning change and designing solutions, contribution to capacity development discourse.
Costs
A study was carried out in 2018 to compare the lifecycle costs of full sanitation chain systems in developing cities of Africa and Asia. It found that conventional sewer systems are in most cases the most expensive sanitation options, followed, in order of cost, by sanitation systems comprising septic tanks, ventilated improved pit latrines (VIP), urine diversion dry toilets and pour-flush pit latrines. The main determinants of urban sanitation financial costs include: Type of technology, labour, material and utility cost, density, topography, level of service provided by the sanitation system, soil condition, energy cost and others (distance to wastewater treatment facility, climate, end-use of treatment products, business models, water table height).
Some grassroots organizations have trialled community-managed toilet blocks whose construction and maintenance costs can be covered by households. One study of Mumbai informal settlements found that US$1.58 per adult would be sufficient for construction, and less than US$1/household/month would be sufficient for maintenance.
History
Major human settlements could initially develop only where fresh surface water was plentiful, such as near rivers or natural springs. Throughout history people have devised systems to get water into their communities and households, and to dispose (and later also treat) wastewater. The focus of sewage treatment at that time was on conveying raw sewage to a natural body of water, e.g. a river or ocean, where it would be diluted and dissipated.
Sanitation in the Indus Valley Civilization in Asia is an example of public water supply and sanitation during the Bronze Age (3300–1300 BCE). Sanitation in ancient Rome was also quite extensive; these systems consisted of stone and wooden drains to collect and remove wastewater from populated areas—see for instance the Cloaca Maxima into the River Tiber in Rome. The first sewers of ancient Rome were built between 800 and 735 BCE.
By country
Society and culture
There is a vast number of professions that are involved in the field of sanitation, for example on the technical and operations side: sanitation workers, waste collectors, sanitary engineers.
| Technology | Health, fitness, and medicine | null |
184774 | https://en.wikipedia.org/wiki/Amanita%20phalloides | Amanita phalloides | Amanita phalloides (), commonly known as the death cap, is a deadly poisonous basidiomycete fungus and mushroom, one of many in the genus Amanita. Originating in Europe but later introduced to other parts of the world since the late twentieth century, A. phalloides forms ectomycorrhizas with various broadleaved trees. In some cases, the death cap has been introduced to new regions with the cultivation of non-native species of oak, chestnut, and pine. The large fruiting bodies (mushrooms) appear in summer and autumn; the caps are generally greenish in colour with a white stipe and gills. The cap colour is variable, including white forms, and is thus not a reliable identifier.
These toxic mushrooms resemble several edible species (most notably Caesar's mushroom and the straw mushroom) commonly consumed by humans, increasing the risk of accidental poisoning. Amatoxins, the class of toxins found in these mushrooms, are thermostable: they resist changes due to heat, so their toxic effects are not reduced by cooking.
Amanita phalloides is the most poisonous of all known mushrooms. It is estimated that as little as half a mushroom contains enough toxin to kill an adult human. It is also the deadliest mushroom worldwide, responsible for 90% of mushroom-related fatalities every year. It has been involved in the majority of human deaths from mushroom poisoning, possibly including Roman Emperor Claudius in AD 54 and Holy Roman Emperor Charles VI in 1740. It has also been the subject of much research and many of its biologically active agents have been isolated. The principal toxic constituent is α-amanitin, which causes liver and kidney failure.
Taxonomy
The death cap is named in Latin as such in the correspondence between the English physician Thomas Browne and Christopher Merrett. Also, it was described by French botanist Sébastien Vaillant in 1727, who gave a succinct phrase name "Fungus phalloides, annulatus, sordide virescens, et patulus"—a recognizable name for the fungus today. Though the scientific name phalloides means "phallus-shaped", it is unclear whether it is named for its resemblance to a literal phallus or the stinkhorn mushrooms Phallus.
In 1821, Elias Magnus Fries described it as Agaricus phalloides, but included all white amanitas within its description. Finally, in 1833, Johann Heinrich Friedrich Link settled on the name Amanita phalloides, after Persoon had named it Amanita viridis 30 years earlier. Although Louis Secretan's use of the name A. phalloides predates Link's, it has been rejected for nomenclatural purposes because Secretan's works did not use binomial nomenclature consistently; some taxonomists have, however, disagreed with this opinion.
Amanita phalloides is the type species of Amanita section Phalloideae, a group that contains all of the deadly poisonous Amanita species thus far identified. Most notable of these are the species known as destroying angels, namely A. virosa, A. bisporigera and A. ocreata, as well as the fool's mushroom (A. verna). The term "destroying angel" has been applied to A. phalloides at times, but "death cap" is by far the most common vernacular name used in English. Other common names also listed include "stinking amanita" and "deadly amanita".
A rarely appearing, all-white form was initially described as A. phalloides f. alba by Max Britzelmayr, though its status has been unclear. It is often found growing amid normally colored death caps. It has been described, in 2004, as a distinct variety and includes what was termed A. verna var. tarda. The true A. verna fruits in spring and turns yellow with KOH solution, whereas A. phalloides never does.
Description
The death cap has a large and imposing epigeous (aboveground) fruiting body (basidiocarp), usually with a pileus (cap) from across, initially rounded and hemispherical, but flattening with age. The color of the cap can be pale-green, yellowish-green, olive-green, bronze, or (in one form) white; it is often paler toward the margins, which can have darker streaks; it is also often paler after rain. The cap surface is sticky when wet and easily peeled—a troublesome feature, as that is allegedly a feature of edible fungi. The remains of the partial veil are seen as a skirtlike, floppy annulus usually about below the cap. The crowded white lamellae (gills) are free. The stipe is white with a scattering of grayish-olive scales and is long and thick, with a swollen, ragged, sac-like white volva (base). As the volva, which may be hidden by leaf litter, is a distinctive and diagnostic feature, it is important to remove some debris to check for it. Spores: 7-12 x 6-9 μm. Smooth, ellipsoid, amyloid.
The smell has been described as initially faint and honey-sweet, but strengthening over time to become overpowering, sickly-sweet and objectionable. Young specimens first emerge from the ground resembling a white egg covered by a universal veil, which then breaks, leaving the volva as a remnant. The spore print is white, a common feature of Amanita. The transparent spores are globular to egg-shaped, measure 8–10 μm (0.3–0.4 mil) long, and stain blue with iodine. The gills, in contrast, stain pallid lilac or pink with concentrated sulfuric acid.
Biochemistry
The species is now known to contain two main groups of toxins, both multicyclic (ring-shaped) peptides, spread throughout the mushroom tissue: the amatoxins and the phallotoxins. Another toxin, phallolysin, has shown some hemolytic (red blood cell–destroying) activity in vitro. An unrelated compound, antamanide, has also been isolated.
Amatoxins consist of at least eight compounds with a similar structure, that of eight amino-acid rings; they were isolated in 1941 by Heinrich O. Wieland and Rudolf Hallermayer of the University of Munich. Of the amatoxins, α-amanitin is the chief component and, along with β-amanitin, is likely responsible for the toxic effects. Their major toxic mechanism is the inhibition of RNA polymerase II, a vital enzyme in the synthesis of messenger RNA (mRNA), microRNA, and small nuclear RNA (snRNA). Without mRNA, essential protein synthesis and hence cell metabolism grind to a halt and the cell dies. The liver is the principal organ affected, as it is the first organ encountered after absorption in the gastrointestinal tract, though other organs, especially the kidneys, are susceptible. The RNA polymerase of Amanita phalloides is insensitive to the effects of amatoxins, so the mushroom does not poison itself.
The phallotoxins consist of at least seven compounds, all of which have seven similar peptide rings. Phalloidin was isolated in 1937 by Feodor Lynen, Heinrich Wieland's student and son-in-law, and Ulrich Wieland of the University of Munich. Though phallotoxins are highly toxic to liver cells, they have since been found to add little to the death cap's toxicity, as they are not absorbed through the gut. Furthermore, phalloidin is also found in the edible (and sought-after) blusher (A. rubescens). Another group of minor active peptides are the virotoxins, which consist of six similar monocyclic heptapeptides. Like the phallotoxins, they do not induce any acute toxicity after ingestion in humans.
The genome of the death cap has been sequenced.
Similarity to edible species
A. phalloides is similar to the edible paddy straw mushroom (Volvariella volvacea) and A. princeps, commonly known as "white Caesar".
Some may mistake juvenile death caps for edible puffballs or mature specimens for other edible Amanita species, such as A. lanei, so some authorities recommend avoiding the collecting of Amanita species for the table altogether. The white form of A. phalloides may be mistaken for edible species of Agaricus, especially the young fruitbodies whose unexpanded caps conceal the telltale white gills; all mature species of Agaricus have dark-colored gills.
In Europe, other similarly green-capped species collected by mushroom hunters include various green-hued brittlegills of the genus Russula and the formerly popular Tricholoma equestre, now regarded as hazardous owing to a series of restaurant poisonings in France. Brittlegills, such as Russula heterophylla, R. aeruginea, and R. virescens, can be distinguished by their brittle flesh and the lack of both volva and ring. Other similar species include A. subjunquillea in eastern Asia and A. arocheae, which ranges from Andean Colombia north at least as far as central Mexico, both of which are also poisonous.
Distribution and habitat
The death cap is native to Europe, where it is widespread. It is found from the southern coastal regions of Scandinavia in the north, to Ireland in the west, east to Poland and western Russia, and south throughout the Balkans, in Greece, Italy, Spain, and Portugal in the Mediterranean basin, and in Morocco and Algeria in north Africa. In west Asia, it has been reported from forests of northern Iran. There are records from further east in Asia but these have yet to be confirmed as A. phalloides.
By the end of the 19th century, Charles Horton Peck had reported A. phalloides in North America. In 1918, samples from the eastern United States were identified as being a distinct though similar species, A. brunnescens, by George Francis Atkinson of Cornell University. By the 1970s, it had become clear that A. phalloides does occur in the United States, apparently having been introduced from Europe alongside chestnuts, with populations on the West and East Coasts. A 2006 historical review concluded the East Coast populations were inadvertently introduced, likely on the roots of other purposely imported plants such as chestnuts. The origins of the West Coast populations remained unclear, due to scant historical records, but a 2009 genetic study provided strong evidence for the introduced status of the fungus on the west coast of North America. Observations of various collections of A. phalloides, from conifers rather than native forests, have led to the hypothesis that the species was introduced to North America multiple times. It is hypothesized that the various introductions led to multiple genotypes which are adapted to either oaks or conifers.
A. phalloides was conveyed to new countries across the Southern Hemisphere with the importation of hardwoods and conifers in the late twentieth century. Introduced oaks appear to have been the vector to Australia and South America; populations under oaks have been recorded from Melbourne, Canberra (where two people died in January 2012, of four who were poisoned) and Adelaide, and have further been observed by citizen scientists in Beechworth, Sydney and Albury.
It has been recorded under other introduced trees in Argentina. Pine plantations are associated with the fungus in Tanzania and South Africa, and it has also been found under oaks and poplars in Chile and Uruguay.
A number of deaths in India have been attributed to it.
Ecology
It is ectomycorrhizally associated with several tree species and is symbiotic with them. In Europe, these include hardwood and, less frequently, conifer species. It appears most commonly under oaks, but also under beeches, chestnuts, horse-chestnuts, birches, filberts, hornbeams, pines, and spruces. In other areas, A. phalloides may also be associated with these trees or with only some species and not others. In coastal California, for example, A. phalloides is associated with coast live oak, but not with the various coastal pine species, such as Monterey pine. In countries where it has been introduced, it has been restricted to those exotic trees with which it would associate in its natural range. There is, however, evidence of A. phalloides associating with hemlock and with genera of the Myrtaceae: Eucalyptus in Tanzania and Algeria, and Leptospermum and Kunzea in New Zealand, suggesting that the species may have invasive potential. It may have also been anthropogenically introduced to the island of Cyprus, where it has been documented to fruit within Corylus avellana plantations.
Toxicity
As the common name suggests, the fungus is highly toxic, and is responsible for the majority of fatal mushroom poisonings worldwide. Its biochemistry has been researched intensively for decades, and as little as half a cap of this mushroom is estimated to be enough to kill a human. On average, one person dies a year in North America from death cap ingestion. The toxins of the death cap mushroom primarily target the liver, but other organs, such as the kidneys, are also affected. Symptoms of death cap mushroom toxicity usually occur 6 to 12 hours after ingestion. Symptoms of ingestion of the death cap mushroom may include nausea and vomiting, which are then followed by jaundice, seizures, and coma, which may lead to death. The mortality rate of ingestion of the death cap mushroom is believed to be around 10–30%.
Some authorities strongly advise against putting suspected death caps in the same basket with fungi collected for the table and to avoid even touching them. Furthermore, the toxicity is not reduced by cooking, freezing, or drying.
Poisoning incidents usually result from errors in identification. Recent cases highlight the issue of the similarity of A. phalloides to the edible paddy straw mushroom (Volvariella volvacea), with East- and Southeast-Asian immigrants in Australia and the West Coast of the U.S. falling victim. In an episode in Oregon, four members of a Korean family required liver transplants. Many North American incidents of death cap poisoning have occurred among Laotian and Hmong immigrants, since it is easily confused with A. princeps ("white Caesar"), a popular mushroom in their native countries. Of the 9 people poisoned in Australia's Canberra region between 1988 and 2011, three were from Laos and two were from China. In January 2012, four people were accidentally poisoned when death caps (reportedly misidentified as straw mushrooms, which are popular in Chinese and other Asian dishes) were served for dinner in Canberra; all the victims required hospital treatment and two of them died, with a third requiring a liver transplant.
Signs and symptoms
Death caps have been reported to taste pleasant. This, coupled with the delay in the appearance of symptoms—during which time internal organs are being severely, sometimes irreparably, damaged—makes them particularly dangerous. Initially, symptoms are gastrointestinal in nature and include colicky abdominal pain, with watery diarrhea, nausea, and vomiting, which may lead to dehydration if left untreated, and, in severe cases, hypotension, tachycardia, hypoglycemia, and acid–base disturbances. These first symptoms resolve two to three days after the ingestion. A more serious deterioration signifying liver involvement may then occur—jaundice, diarrhea, delirium, seizures, and coma due to fulminant liver failure and attendant hepatic encephalopathy caused by the accumulation of normally liver-removed substances in the blood. Kidney failure (either secondary to severe hepatitis or caused by direct toxic kidney damage) and coagulopathy may appear during this stage. Life-threatening complications include increased intracranial pressure, intracranial bleeding, pancreatic inflammation, acute kidney failure, and cardiac arrest. Death generally occurs six to sixteen days after the poisoning.
After up to 24 hours have passed, the symptoms often seem to disappear, and the person might feel fine for up to 72 hours. Symptoms of liver and kidney damage start three to six days after the mushrooms were eaten, with a considerable increase in transaminases.
Mushroom poisoning is more common in Europe than in North America. Up to the mid-20th century, the mortality rate was around 60–70%, but this has been greatly reduced with advances in medical care. A review of death cap poisoning throughout Europe from 1971 to 1980 found the overall mortality rate to be 22.4% (51.3% in children under ten and 16.5% in those older than ten). This was revised to around 10–15% in surveys reviewed in 1995.
Treatment
Consumption of the death cap is a medical emergency requiring hospitalization. The four main categories of therapy for poisoning are preliminary medical care, supportive measures, specific treatments, and liver transplantation.
Preliminary care consists of gastric decontamination with either activated carbon or gastric lavage; due to the delay between ingestion and the first symptoms of poisoning, it is common for patients to arrive for treatment many hours after ingestion, potentially reducing the efficacy of these interventions. Supportive measures are directed towards treating the dehydration which results from fluid loss during the gastrointestinal phase of intoxication and correction of metabolic acidosis, hypoglycemia, electrolyte imbalances, and impaired coagulation.
No definitive antidote is available, but some specific treatments have been shown to improve survivability. High-dose continuous intravenous penicillin G has been reported to be of benefit, though the exact mechanism is unknown, and trials with cephalosporins show promise. Some evidence indicates intravenous silibinin, an extract from the blessed milk thistle (Silybum marianum), may be beneficial in reducing the effects of death cap poisoning. A long-term clinical trial of intravenous silibinin began in the US in 2010. Silibinin prevents the uptake of amatoxins by liver cells, thereby protecting undamaged liver tissue; it also stimulates DNA-dependent RNA polymerases, leading to an increase in RNA synthesis. According to one report based on a treatment of 60 patients with silibinin, patients who started the drug within 96 hours of ingesting the mushroom and who still had intact kidney function all survived. As of February 2014 supporting research has not yet been published.
SLCO1B3 has been identified as the human hepatic uptake transporter for amatoxins; moreover, substrates and inhibitors of that protein—among others rifampicin, penicillin, silibinin, antamanide, paclitaxel, ciclosporin and prednisolone—may be useful for the treatment of human amatoxin poisoning.
N-acetylcysteine has shown promise in combination with other therapies. Animal studies indicate the amatoxins deplete hepatic glutathione; N-acetylcysteine serves as a glutathione precursor and may therefore prevent reduced glutathione levels and subsequent liver damage. None of the antidotes used have undergone prospective, randomized clinical trials, and only anecdotal support is available. Silibinin and N-acetylcysteine appear to be the therapies with the most potential benefit. Repeated doses of activated carbon may be helpful by absorbing any toxins returned to the gastrointestinal tract following enterohepatic circulation. Other methods of enhancing the elimination of the toxins have been trialed; techniques such as hemodialysis, hemoperfusion, plasmapheresis, and peritoneal dialysis have occasionally yielded success, but overall do not appear to improve outcome.
In patients developing liver failure, a liver transplant is often the only option to prevent death. Liver transplants have become a well-established option in amatoxin poisoning. This is a complicated issue, however, as transplants themselves may have significant complications and mortality; patients require long-term immunosuppression to maintain the transplant. That being the case, criteria such as onset of symptoms, prothrombin time (PT), serum bilirubin, and presence of encephalopathy have been reassessed for determining at what point a transplant becomes necessary for survival. Evidence suggests that, although survival rates have improved with modern medical treatment, in patients with moderate to severe poisoning up to half of those who did recover suffered permanent liver damage. A follow-up study has shown most survivors recover completely without any sequelae if treated within 36 hours of mushroom ingestion.
Notable victims
Several historical figures may have died from A. phalloides poisoning (or other similar, toxic Amanita species). These were either accidental poisonings or assassination plots. Alleged victims of this kind of poisoning include Roman Emperor Claudius, Pope Clement VII, the Russian tsaritsa Natalia Naryshkina, and Holy Roman Emperor Charles VI.
R. Gordon Wasson recounted the details of these deaths, noting the likelihood of Amanita poisoning. In the case of Clement VII, the illness that led to his death lasted five months, making the case inconsistent with amatoxin poisoning. Natalya Naryshkina is said to have consumed a large quantity of pickled mushrooms prior to her death. It is unclear whether the mushrooms themselves were poisonous or if she succumbed to food poisoning.
Charles VI experienced indigestion after eating a dish of sautéed mushrooms. This led to an illness from which he died 10 days later—symptomatology consistent with amatoxin poisoning. His death led to the War of the Austrian Succession. Noted Voltaire, "this mushroom dish has changed the destiny of Europe."
The case of Claudius's poisoning is more complex. Claudius was known to have been very fond of eating Caesar's mushroom. Following his death, many sources have attributed it to his being fed a meal of death caps instead of Caesar's mushrooms. Ancient authors, such as Tacitus and Suetonius, are unanimous about poison having been added to the mushroom dish, rather than the dish having been prepared from poisonous mushrooms. Wasson speculated the poison used to kill Claudius was derived from death caps, with a fatal dose of an unknown poison (possibly a variety of nightshade) being administered later during his illness. Other historians have speculated that Claudius may have died of natural causes.
In July 2023, four people in Leongatha, Australia were taken to hospital after consuming a Beef Wellington suspected to have contained A. phalloides. Three of the four guests subsequently died, and one survived, later receiving a liver transplant. The woman who cooked the meal, Erin Patterson, was charged with murder in November 2023.
| Biology and health sciences | Poisonous fungi | null |
184826 | https://en.wikipedia.org/wiki/Common%20blackbird | Common blackbird | The common blackbird (Turdus merula) is a species of true thrush. It is also called the Eurasian blackbird (especially in North America, to distinguish it from the unrelated New World blackbirds), or simply the blackbird where this does not lead to confusion with a local species. It breeds in Europe, western Asia, and North Africa, and has been introduced to Australia and New Zealand. It has a number of subspecies across its large range; a few former Asian subspecies are now widely treated as separate species. Depending on latitude, the common blackbird may be resident, partially migratory, or fully migratory.
The adult male of the common blackbird (Turdus merula merula, the nominate subspecies), which is found throughout most of Europe, is all black except for a yellow eye-ring and bill and has a rich, melodious song; the adult female and juvenile have mainly dark brown plumage. This species breeds in woods and gardens, building a neat, cup-shaped nest, bound together with mud. It is omnivorous, eating a wide range of insects, earthworms, berries, and fruits.
Both sexes are territorial on the breeding grounds, with distinctive threat displays, but are more gregarious during migration and in wintering areas. Pairs stay in their territory throughout the year where the climate is sufficiently temperate. This common and conspicuous species has given rise to a number of literary and cultural references, frequently related to its song.
Taxonomy and systematics
The common blackbird was described by Carl Linnaeus in his landmark 1758 10th edition of Systema Naturae as Turdus merula (characterised as T. ater, rostro palpebrisque fulvis). The binomial name derives from two Latin words, turdus, "thrush", and merula, "blackbird", the latter giving rise to its French name, merle, and its Scots name, merl.
About 65 species of medium to large thrushes are in the genus Turdus, characterised by rounded heads, longish, pointed wings, and usually melodious songs. Although two European thrushes, the song thrush and mistle thrush, are early offshoots from the Eurasian lineage of Turdus thrushes after they spread north from Africa, the blackbird is descended from ancestors that had colonised the Canary Islands from Africa and subsequently reached Europe from there. It is close in evolutionary terms to the island thrush (T. poliocephalus) of Southeast Asia and islands in the southwest Pacific, which probably diverged from T. merula stock fairly recently.
It may not immediately be clear why the name "blackbird", first recorded in 1486, was applied to this species, but not to one of the various other common black English birds, such as the carrion crow, raven, rook, or jackdaw. However, in Old English, and in modern English up to about the 18th century, "bird" was used only for smaller or young birds, and larger ones such as crows were called "fowl". At that time, the blackbird was therefore the only widespread and conspicuous "black bird" in the British Isles. Until about the 17th century, another name for the species was ouzel, ousel or wosel (from Old English osle, cf. German Amsel). Another variant occurs in Act 3 of Shakespeare's A Midsummer Night's Dream, where Bottom refers to "The Woosell cocke, so blacke of hew, With Orenge-tawny bill". The ouzel usage survived later in poetry, and still occurs as the name of the closely related ring ouzel (Turdus torquatus), and in water ouzel, an alternative name for the unrelated but superficially similar white-throated dipper (Cinclus cinclus).
Five related Asian Turdus thrushes—the white-collared blackbird (T. albocinctus), the grey-winged blackbird (T. boulboul), the Indian blackbird (T. simillimus), the Tibetan blackbird (T. maximus), and the Chinese blackbird (T. mandarinus)—are also named blackbirds; the latter three species were formerly treated as conspecific with the common blackbird. In addition, the Somali thrush (T. (olivaceus) ludoviciae) is alternatively known as the Somali blackbird.
The icterid family of the New World is sometimes called the blackbird family because of some species' superficial resemblance to the common blackbird and other Old World thrushes, but they are not evolutionarily close, being related to the New World warblers and tanagers. The term is often limited to smaller species with mostly or entirely black plumage, at least in the breeding male, notably the cowbirds, the grackles, and for around 20 species with "blackbird" in the name, such as the red-winged blackbird and the melodious blackbird.
Subspecies
As would be expected for a widespread passerine bird species, several geographical subspecies are recognised. The treatment of subspecies in this article follows Clement et al. (2000).
T. m. merula, the nominate subspecies, breeds commonly throughout much of Europe from Iceland, the Faroes and the British Isles east to the Ural Mountains and north to about 70°N, where it is fairly scarce. A small population breeds in the Nile Valley. Birds from the north of the range winter throughout Europe and around the Mediterranean, including Cyprus and North Africa. The introduced birds in Australia and New Zealand are of the nominate race.
T. m. azorensis is a small race which breeds in the Azores. The male is darker and glossier than merula.
T. m. cabrerae, named for Ángel Cabrera, the Spanish zoologist, resembles azorensis and breeds in Madeira and the western Canary Islands.
T. m. mauritanicus, another small dark subspecies with a glossy black male plumage, breeds in central and northern Morocco, coastal Algeria and northern Tunisia.
T. m. aterrimus breeds in Hungary, south and east to southern Greece, Crete, northern Turkey and northern Iran. It winters in southern Turkey, northern Egypt, Iraq and southern Iran. It is smaller than merula with a duller male and paler female plumage.
T. m. syriacus breeds on the Mediterranean coast of southern Turkey south to Jordan, Israel and the northern Sinai. It is mostly resident, but part of the population moves southwest or west to winter in the Jordan Valley and in the Nile Delta of northern Egypt south to about Cairo. Both sexes of this subspecies are darker and greyer than the equivalent merula plumages.
T. m. intermedius is an Asian race breeding from Central Russia to Tajikistan, western and northeastern Afghanistan, and eastern China. Many birds are resident, but some are altitudinal migrants and occur in southern Afghanistan and southern Iraq in winter. This is a large subspecies, with a sooty-black male and a blackish-brown female.
The Central Asian subspecies, the relatively large intermedius, also differs in structure and voice, and may represent a distinct species. Alternatively, it has been suggested that it should be considered a subspecies of T. maximus, but it differs in structure, voice and the appearance of the eye-ring.
Similar species
In Europe, the common blackbird can be confused with the paler-winged first-winter ring ouzel (Turdus torquatus) or the superficially similar common starling (Sturnus vulgaris). A number of similar Turdus thrushes exist far outside the range of the common blackbird, for example the South American Chiguanco thrush (Turdus chiguanco). The Indian blackbird (Turdus simillimus), the Tibetan blackbird (Turdus maximus), and the Chinese blackbird (Turdus mandarinus) were formerly treated as subspecies of the common blackbird.
Description
The common blackbird of the nominate subspecies T. m. merula is in length, has a long tail, and weighs . The adult male has glossy black plumage, blackish-brown legs, a yellow eye-ring and an orange-yellow bill. The bill darkens somewhat in winter. The adult female is sooty-brown with a dull yellowish-brownish bill, a brownish-white throat and some weak mottling on the breast. The juvenile is similar to the female, but has pale spots on the upperparts, and the very young juvenile also has a speckled breast. Young birds vary in the shade of brown, with darker birds presumably males. The first year male resembles the adult male, but has a dark bill and weaker eye ring, and its folded wing is brown, rather than black like the body plumage.
Distribution and habitat
The common blackbird breeds in temperate Eurasia, North Africa, the Canary Islands, and South Asia. It has been introduced to Australia and New Zealand. Populations are sedentary in the south and west of the range, although northern birds migrate south as far as northern Africa and tropical Asia in winter. Urban males are more likely to overwinter in cooler climes than rural males, an adaptation made feasible by the warmer microclimate and relatively abundant food that allow the birds to establish territories and start reproducing earlier in the year. Recoveries of blackbirds ringed on the Isle of May show that these birds commonly migrate from southern Norway (or from as far north as Trondheim) to Scotland, and some onwards to Ireland. Scottish-ringed birds have also been recovered in England, Belgium, the Netherlands, Denmark, and Sweden. Female blackbirds in Scotland and the north of England migrate more (to Ireland) in winter than do the males.
Common over most of its range in woodland, the common blackbird has a preference for deciduous trees with dense undergrowth. However, gardens provide the best breeding habitat with up to 7.3 pairs per hectare (nearly three pairs per acre), with woodland typically holding about a tenth of that density, and open and very built-up habitats even less. They are often replaced by the related ring ouzel in areas of higher altitude. The common blackbird also lives in parks, gardens and hedgerows.
The common blackbird occurs at elevations of up to in Europe, in North Africa, and at in peninsular India and Sri Lanka, but the large Himalayan subspecies range much higher, with T. m. maximus breeding at and remaining above even in winter.
This widespread species has occurred as a vagrant in many locations in Eurasia outside its normal range, but records from North America are normally considered to involve escapees, including, for example, the 1971 bird in Quebec. However, a 1994 record from Bonavista, Newfoundland, has been accepted as a genuine wild bird, and the species is therefore on the North American list.
Behaviour and ecology
The male common blackbird defends its breeding territory, chasing away other males or utilising a "bow and run" threat display. This consists of a short run, the head first being raised and then bowed with the tail dipped simultaneously. If a fight between male blackbirds does occur, it is usually short and the intruder is soon chased away. The female blackbird is also aggressive in the spring when it competes with other females for a good nesting territory, and although fights are less frequent, they tend to be more violent.
The bill's appearance is important in the interactions of the common blackbird. The territory-holding male responds more aggressively towards models with orange bills than to those with yellow bills, and reacts least to the brown bill colour typical of the first-year male. The female is, however, relatively indifferent to bill colour, but responds instead to shinier bills.
As long as winter food is available, both the male and female will remain in the territory throughout the year, although occupying different areas. Migrants are more gregarious, travelling in small flocks and feeding in loose groups in the wintering grounds. The flight of migrating birds comprises bursts of rapid wing beats interspersed with level or diving movement, and differs from both the normal fast agile flight of this species and the more dipping action of larger thrushes.
Breeding
The male common blackbird attracts the female with a courtship display which consists of oblique runs combined with head-bowing movements, an open beak, and a "strangled" low song. The female remains motionless until she raises her head and tail to permit copulation. This species is monogamous, and the established pair will usually stay together as long as they both survive. Pair separation rates of up to 20% have been noted following poor breeding. Although the species is socially monogamous, there have been studies showing as much as 17% extra-pair paternity.
The nominate T. merula may commence breeding in March, but eastern and Indian races are a month or more later, and the introduced New Zealand birds start nesting in August (late winter). The breeding pair prospect for a suitable nest site in a creeper or bush, favouring evergreen or thorny species such as ivy, holly, hawthorn, honeysuckle or pyracantha. Sometimes the birds will nest in sheds or outbuildings where a ledge or cavity is used. The cup-shaped nest is made with grasses, leaves and other vegetation, bound together with mud. It is built by the female alone. She lays three to five (usually four) bluish-green eggs marked with reddish-brown blotches, heaviest at the larger end; the eggs of nominate T. merula are in size and weigh , of which 6% is shell. Eggs of birds of the southern Indian races are paler than those from the northern subcontinent and Europe.
The female incubates for 12–14 days before the altricial chicks are hatched naked and blind. Fledging takes another 10–19 (average 13.6) days, with both parents feeding the young and removing faecal sacs. The nest is often ill-concealed compared with those of other species, and many breeding attempts fail due to predation. The young are fed by the parents for up to three weeks after leaving the nest, and will follow the adults begging for food. If the female starts another nest, the male alone will feed the fledged young. Second broods are common, with the female reusing the same nest if the brood was successful, and three broods may be raised in the south of the common blackbird's range.
A common blackbird has an average life expectancy of 2.4 years, and, based on data from bird ringing, the oldest recorded age is 21 years and 10 months.
Songs and calls
In its native Northern Hemisphere range, the first-year male common blackbird of the nominate race may start singing as early as late January in fine weather in order to establish a territory, followed in late March by the adult male. The male's song is a varied and melodious low-pitched fluted warble, given from trees, rooftops or other elevated perches mainly in the period from March to June, sometimes into the beginning of July. It has a number of other calls, including an aggressive seee, a pook-pook-pook alarm for terrestrial predators like cats, and various chink and chook, chook vocalisations. The territorial male invariably gives chink-chink calls in the evening in an attempt (usually unsuccessful) to deter other blackbirds from roosting in its territory overnight. During the northern winter, blackbirds can be heard quietly singing to themselves, so much so that September and October are the only months in which the song cannot be heard. Like other passerine birds, it has a thin high seee alarm call for threats from birds of prey since the sound is rapidly attenuated in vegetation, making the source difficult to locate.
At least two subspecies, T. m. merula and T. m. nigropileus, will mimic other species of birds, cats, humans or alarms, but this is usually quiet and hard to detect.
Feeding
The common blackbird is omnivorous, eating a wide range of insects, earthworms, seeds and berries. It feeds mainly on the ground, running and hopping with a start-stop-start progress. It pulls earthworms from the soil, usually finding them by sight, but sometimes by hearing, and roots through leaf litter for other invertebrates. Small amphibians, lizards and (on rare occasions) small mammals are occasionally hunted. This species will also perch in bushes to take berries and collect caterpillars and other active insects. Animal prey predominates, and is particularly important during the breeding season, with windfall apples and berries taken more in the autumn and winter. The nature of the fruit taken depends on what is locally available, and frequently includes exotics in gardens.
Natural threats
Near human habitation the main predator of the common blackbird is the domestic cat, with newly fledged young especially vulnerable. Foxes and predatory birds, such as the sparrowhawk and other accipiters, also take this species when the opportunity arises. However, there is little direct evidence to show that either predation of the adult blackbirds or loss of the eggs and chicks to corvids, such as the European magpie or Eurasian jay, decrease population numbers.
This species is occasionally a host of parasitic cuckoos, such as the common cuckoo (Cuculus canorus), but this is minimal because the common blackbird recognizes the adult of the parasitic species and its non-mimetic eggs. In the UK, only three nests of 59,770 examined (0.005%) contained cuckoo eggs. The introduced merula blackbird in New Zealand, where the cuckoo does not occur, has, over the past 130 years, lost the ability to recognize the adult common cuckoo but still rejects non-mimetic eggs.
As with other passerine birds, parasites are common. Intestinal parasites were found in 88% of common blackbirds, most frequently Isospora and Capillaria species, and more than 80% had haematozoan parasites (Leucocytozoon, Plasmodium, Haemoproteus and Trypanosoma species).
Common blackbirds spend much of their time looking for food on the ground where they can become infested with ticks, which are external parasites that most commonly attach to the head of a blackbird. In France, 74% of rural blackbirds were found to be infested with Ixodes ticks, whereas, only 2% of blackbirds living in urban habitats were infested. This is partly because it is more difficult for ticks to find another host on lawns and gardens in urban areas than in uncultivated rural areas, and partly because ticks are likely to be commoner in rural areas, where a variety of tick hosts, such as foxes, deer and boar, are more numerous. Although ixodid ticks can transmit pathogenic viruses and bacteria, and are known to transmit Borrelia bacteria to birds, there is no evidence that this affects the fitness of blackbirds except when they are exhausted and run down after migration.
The common blackbird is one of a number of species which has unihemispheric slow-wave sleep. One hemisphere of the brain is effectively asleep, while a low-voltage EEG, characteristic of wakefulness, is present in the other. The benefit of this is that the bird can rest in areas of high predation or during long migratory flights, but still retain a degree of alertness.
Status and conservation
The common blackbird has an extensive range, estimated at , and a large population, including an estimated 79 to 160 million individuals in Europe alone. The species is not believed to approach the thresholds for the population decline criterion of the IUCN Red List (i.e., declining more than 30% in ten years or three generations), and is therefore evaluated as least concern. In the western Palearctic, populations are generally stable or increasing, but there have been local declines, especially on farmland, which may be due to agricultural policies that encouraged farmers to remove hedgerows (which provide nesting places), and to drain damp grassland and increase the use of pesticides, both of which could have reduced the availability of invertebrate food.
The common blackbird was introduced to Australia by a bird dealer visiting Melbourne in early 1857, and its range has expanded from its initial foothold in Melbourne and Adelaide to include all of southeastern Australia, including Tasmania and the Bass Strait islands. The introduced population in Australia is considered a pest because it damages a variety of soft fruits in orchards, parks and gardens, including berries, cherries, stone fruit and grapes. It is thought to spread weeds, such as blackberry, and may compete with native birds for food and nesting sites.
The introduced common blackbird is, together with the native silvereye (Zosterops lateralis), the most widely distributed avian seed disperser in New Zealand. Introduced there along with the song thrush (Turdus philomelos) in 1862, it has spread throughout the country up to an elevation of , as well as outlying islands such as the Campbell and Kermadecs. It eats a wide range of native and exotic fruit, and makes a major contribution to the development of communities of naturalised woody weeds. These communities provide fruit more suited to non-endemic native birds and naturalised birds than to endemic birds.
The numbers of blackbirds in Europe have been significantly reduced by the Usutu virus which is spread by mosquitoes. This was detected in Italy in 1996 and has since spread to other countries including Germany and the UK.
In popular culture
The common blackbird was seen as a sacred though destructive bird in Classical Greek folklore, and was said to die if it consumed pomegranates. Like many other small birds, it has in the past been trapped in rural areas at its night roosts as an easily available addition to the diet, and in medieval times the practice of placing live birds under a pie crust just before serving may have been the origin of the familiar nursery rhyme:
Sing a song of sixpence,
A pocket full of rye;
Four and twenty blackbirds baked in a pie!
When the pie was opened the birds began to sing,
Oh, wasn't that a dainty dish to set before the king?
The common blackbird's melodious, distinctive song is mentioned in the poem Adlestrop by Edward Thomas;
And for that minute a blackbird sang
Close by, and round him, mistier,
Farther and farther, all the birds
Of Oxfordshire and Gloucestershire.
In the English Christmas carol "The Twelve Days of Christmas", the line commonly sung today as "four calling birds" is believed to have originally been written in the 18th century as "four colly birds", an archaism meaning "black as coal" that was a popular English nickname for the common blackbird.
The common blackbird, unlike many black creatures, is not normally seen as a symbol of bad luck, but R. S. Thomas wrote that there is "a suggestion of dark Places about it", and it symbolised resignation in the 17th century tragic play The Duchess of Malfi; an alternate connotation is vigilance, the bird's clear cry warning of danger.
The common blackbird is the national bird of Sweden, which has a breeding population of 1–2 million pairs, and was featured on a 30 öre Christmas postage stamp in 1970; it has also featured on a number of other stamps issued by European and Asian countries, including a 1966 4d British stamp and a 1998 Irish 30p stamp. This bird—arguably—also gives rise to the Serbian name for Kosovo (and Metohija), which is the possessive adjectival form of Serbian ("blackbird") as in Kosovo Polje ("Blackbird Field").
French composer Olivier Messiaen transcribed the songs of male blackbirds; these melodies have commonly appeared throughout his œuvre. The most notable instance of this is the 1952 chamber miniature Le merle noir, a piece for flute and piano.
A common blackbird can be heard singing on the Beatles song "Blackbird" as a symbol of the civil rights movement.
| Biology and health sciences | Passerida | Animals |
184842 | https://en.wikipedia.org/wiki/Triggerfish | Triggerfish | Triggerfish are about 40 species of often brightly colored marine ray-finned fishes belonging to the family Balistidae. Often marked by lines and spots, they inhabit tropical and subtropical oceans throughout the world, with the greatest species richness in the Indo-Pacific. Most are found in relatively shallow, coastal habitats, especially at coral reefs, but a few, such as the oceanic triggerfish (Canthidermis maculata), are pelagic. While several species from this family are popular in the marine aquarium trade, they are often notoriously ill-tempered.
Taxonomy
The triggerfish family, Balistidae, was first proposed in 1810 by the French polymath Constantine Samuel Rafinesque. The closest relatives of the triggerfishes are the filefishes belonging to the family Monacanthidae, and these two families are sometimes classified together in the suborder Balistoidei, for example in the 5th edition of Fishes of the World. Other authorities, however, also include the families Aracanidae and Ostraciidae within the suborder Balistoidei.
Etymology
Triggerfish have both a common name and a scientific name that refer to the first spine of the dorsal fin being locked in place by the erection of the shorter second trigger spine, and unlocked by depressing the second spine. In the scientific name of the type genus Balistes this is taken directly from the Italian pesca ballista, the "crossbow fish", ballista originally being a machine for throwing arrows.
Anatomy and appearance
The largest member of the family, the stone triggerfish (Pseudobalistes naufragium) reaches , but most species have a maximum length between .
Triggerfish have an oval-shaped, highly compressed body. The head is large, terminating in a small but strong-jawed mouth with teeth adapted for crushing shells. The eyes are small, set far back from the mouth, at the top of the head. The anterior dorsal fin is reduced to a set of three spines. The first spine is stout and by far the longest. All three are normally retracted into a groove. Characteristic of the order Tetraodontiformes, the anal and posterior dorsal fins are capable of undulating from side to side to provide slow movement and comprise their primary mode of propulsion. The sickle-shaped caudal fin is used only to escape predators.
The two pelvic fins are overlaid by skin for most of their length and fused to form a single spine, terminated by very short rays, their only external evidence. Gill plates (opercula), although present, are also not visible, overlaid by the tough skin, covered with rough, rhomboid scales that form a stout armor on their bodies. The only gill opening is a vertical slit, directly above the pectoral fins. This peculiar covering of the gill plates is shared with other members of the Tetraodontiformes. Each jaw contains a row of four teeth on either side, while the upper jaw contains an additional set of six plate-like pharyngeal teeth.
As a protection against predators, triggerfish can erect the first two dorsal spines: The first (anterior) spine is locked in place by erection of the short second spine, and can be unlocked only by depressing the second, "trigger" spine, hence the family name "triggerfish".
With the exception of a few species from the genus Xanthichthys, the sexes of all species in this family are similar in appearance.
Genera and species
Behavior
The anatomy of the triggerfish reflects its typical diet of slow-moving, bottom dwelling crustaceans, mollusks, sea urchins and other echinoderms, generally creatures with protective shells and spines. Many will also take small fishes and some, notably the members of the genus Melichthys, feed on algae. A few, for example the redtoothed triggerfish (Odonus niger), mainly feed on plankton. They are known to exhibit a high level of intelligence for a fish, and have the ability to learn from previous experiences.
Some triggerfish species can be quite aggressive when guarding their eggs. Both the Picasso (Rhinecanthus aculeatus) and titan triggerfish (Balistoides viridescens) viciously defend their nests against intruders, including scuba divers and snorkelers. Their territory extends in a cone from the nest toward the surface, so swimming upwards can put a diver further into the fishes' territory; a horizontal swim away from the nest site is best when confronted by an angry triggerfish. Unlike the relatively small Picasso triggerfish, the titan triggerfish poses a serious threat to inattentive divers due to its large size and powerful teeth.
Male territoriality
Triggerfish males migrate to their traditional spawning sites prior to mating and establish territories. Males of some species (e.g. Balistes carolinensis and Pseudobalistes flavimarginatus) build hollow nests within their territories. Triggerfish males are fierce in guarding their territories as having a territory is essential for reproduction. A male's territory is used for spawning and parental care. Most male territories are located over a sandy sea bottom or on a rocky reef. A single territory usually includes more than one female, and the male mates with all of the females residing in or visiting his territory (polygyny). In Hachijojima, Izu Islands, Japan, one male crosshatch triggerfish (Xanthichthys mento) has up to three females in his territory at the same time, and mates with them in pairs. Each male red-toothed triggerfish (Odonus niger) mates with more than 10 females in his territory on the same day. Yellow margin triggerfish (Pseudobalistes flavimarginatus) also exhibit polygyny.
Spawning and biparental care
Triggerfish spawning is timed in relation to lunar cycles, tides, and time of changeover of tides. In relation to lunar cycles, eggs are observed 2–6 days before the full moon and 3–5 days before the new moon. In relation to tides, spawning happens 1–5 days before the spring tide. In relation to timing of tides, eggs are observed on days when high tides take place around sunset.
Male and female triggerfish perform certain prespawning behaviors: blowing and touching. A male and female blow water on the sandy bottom (usually in the same spot at the same time) and set up their egg site. They touch their abdomens on the bottom as if they are spawning. During actual spawning, eggs are laid on the sandy sea bottom (triggerfish are demersal spawners despite their large size). Eggs are scattered and attached to sand particles. Triggerfish eggs are usually very small (diameter of 0.5–0.6 mm) and are easily spread by waves. After spawning, both the male and female participate in caring for the fertilized eggs (biparental egg care). A female triggerfish stays near the spawning ground, around 5 m off the bottom, and guards the eggs within her territory against intruders. Some common intruders include Parupeneus multifasciatus, Zanclus cornutus, Prionurus scalprum, and conspecifics. Besides guarding, females roll, fan, and blow water on eggs to provide oxygen to the embryos, thereby inducing hatching. This behavior of female triggerfish is called "tending", and males rarely perform this behavior. A male triggerfish stays farther above the eggs and guards all the females and eggs in his territory. Males exhibit aggressive behaviors against conspecific males near the boundaries of their territories.
Mating systems
In crosshatch triggerfish (Xanthichthys mento) and yellow margin triggerfish (Pseudobalistes flavimarginatus), eggs are spawned in the morning and they hatch after the sunset on the same day. After hatching of embryos, the female crosshatch triggerfish leaves the male's territory. This mating system is an example of male-territory-visiting polygamy. Triggerfishes exhibit other types of mating systems, as well, such as a nonterritorial-female (NTF) polygyny and territorial-female (TF) polygyny. In NTF polygyny, nonterritorial females stay in the male's territory and reproduce. In TF polygyny, a female owns territory within a male's territory and will spawn in her territory.
Life history
Triggerfish lay their demersal eggs in a small hole dug in the sea bottom. Off Florida, juveniles of some species of triggerfishes are found in floating Sargassum, where they feed on the small shrimp, crabs, and mollusks found there.
Edibility
Some species of triggerfish, such as the titan triggerfish, may be ciguatoxic and should be avoided. Others, however, such as the grey triggerfish (Balistes capriscus), are edible.
Gallery
| Biology and health sciences | Acanthomorpha | Animals |
184873 | https://en.wikipedia.org/wiki/Potassium%20ferricyanide | Potassium ferricyanide | Potassium ferricyanide is the chemical compound with the formula K3[Fe(CN)6]. This bright red salt contains the octahedrally coordinated [Fe(CN)6]3− ion. It is soluble in water and its solution shows some green-yellow fluorescence. It was discovered in 1822 by Leopold Gmelin.
Preparation
Potassium ferricyanide is manufactured by passing chlorine through a solution of potassium ferrocyanide. Potassium ferricyanide separates from the solution:
2 K4[Fe(CN)6] + Cl2 → 2 K3[Fe(CN)6] + 2 KCl
Structure
Like other metal cyanides, solid potassium ferricyanide has a complicated polymeric structure. The polymer consists of octahedral [Fe(CN)6]3− centers crosslinked with K+ ions that are bound to the CN ligands. The K+---NCFe linkages break when the solid is dissolved in water.
Applications
The compound is also used to harden iron and steel, in electroplating, dyeing wool, as a laboratory reagent, and as a mild oxidizing agent in organic chemistry.
Photography
Blueprint, cyanotype, toner
The compound has widespread use in blueprint drawing and in photography (the cyanotype process). Several photographic print toning processes involve the use of potassium ferricyanide. It is often used as a mild bleach at a concentration of 10 g/L to reduce film or print density.
Bleaching
Potassium ferricyanide was used as an oxidizing agent to remove silver from color negatives and positives during processing, a process called bleaching. Because potassium ferricyanide bleaches are environmentally unfriendly, short-lived, and capable of releasing hydrogen cyanide gas if mixed with high concentrations and volumes of acid, bleaches using ferric EDTA have been used in color processing since the 1972 introduction of the Kodak C-41 process. In color lithography, potassium ferricyanide is used to reduce the size of color dots without reducing their number, as a kind of manual color correction called dot etching.
Farmer's reducer
Ferricyanide is also used in black-and-white photography with sodium thiosulfate (hypo) to reduce the density of a negative or gelatin silver print where the mixture is known as Farmer's reducer; this can help offset problems from overexposure of the negative, or brighten the highlights in the print.
Reagent in organic synthesis
Potassium ferricyanide is used as an oxidant in organic chemistry. It is an oxidant for catalyst regeneration in Sharpless dihydroxylations.
Sensors and indicators
Potassium ferricyanide is also one of two compounds present in ferroxyl indicator solution (along with phenolphthalein) that turns blue (Prussian blue) in the presence of Fe2+ ions, and which can therefore be used to detect metal oxidation that will lead to rust. It is possible to calculate the number of moles of Fe2+ ions by using a colorimeter, because of the very intense color of Prussian blue.
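The conversion from the colorimeter reading to an amount of Fe2+ follows the Beer–Lambert law, A = εlc. The following is a minimal sketch of that arithmetic only; the function name and every numerical value (molar absorptivity, absorbance, path length, sample volume) are assumed placeholders, not measured or literature data.

```python
# Minimal sketch: estimating moles of Fe2+ from a colorimeter reading via the
# Beer-Lambert law (A = epsilon * l * c). All numbers here are placeholders.

def moles_fe2_from_absorbance(absorbance, molar_absorptivity, path_length_cm, volume_L):
    """Assumes a 1:1 relationship between Fe2+ and the Prussian blue chromophore."""
    concentration_mol_per_L = absorbance / (molar_absorptivity * path_length_cm)
    return concentration_mol_per_L * volume_L

# Hypothetical example: A = 0.45, epsilon = 1.0e4 L mol^-1 cm^-1, 1 cm cuvette, 25 mL sample.
print(moles_fe2_from_absorbance(0.45, 1.0e4, 1.0, 0.025))  # about 1.1e-6 mol
```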
In physiology experiments, potassium ferricyanide provides a means of increasing a solution's redox potential (E°' ~ 436 mV at pH 7). As such, it can oxidize reduced cytochrome c (E°' ~ 247 mV at pH 7) in isolated mitochondria. Sodium dithionite is usually used as a reducing chemical in such experiments (E°' ~ −420 mV at pH 7).
Potassium ferricyanide is used to determine the ferric reducing power of a sample (extract, chemical compound, etc.). Such a measurement is used to determine the antioxidant properties of a sample.
Potassium ferricyanide is a component of amperometric biosensors as an electron transfer agent replacing an enzyme's natural electron transfer agent such as oxygen as with the enzyme glucose oxidase. It is an ingredient in commercially available blood glucose meters for use by diabetics.
Other
Potassium ferricyanide is combined with potassium hydroxide (or sodium hydroxide as a substitute) and water to formulate Murakami's etchant. This etchant is used by metallographers to provide contrast between binder and carbide phases in cemented carbides.
Prussian blue
Prussian blue, the deep blue pigment in blue printing, is generated by the reaction of K3[Fe(CN)6] with ferrous (Fe2+) ions as well as K4[Fe(CN)6] with ferric salts.
In histology, potassium ferricyanide is used to detect ferrous iron in biological tissue. Potassium ferricyanide reacts with ferrous iron in acidic solution to produce the insoluble blue pigment, commonly referred to as Turnbull's blue or Prussian blue. To detect ferric (Fe3+) iron, potassium ferrocyanide is used instead in the Perls' Prussian blue staining method. The material formed in the Turnbull's blue reaction and the compound formed in the Prussian blue reaction are the same.
Safety
Potassium ferricyanide has low toxicity, its main hazard being that it is a mild irritant to the eyes and skin. However, under very strongly acidic conditions, highly toxic hydrogen cyanide gas is evolved, according to the equation:
6 H+ + [Fe(CN)6]3− → 6 HCN + Fe3+
For example, it will react with diluted sulfuric acid under heating forming potassium sulfate, ferric sulfate and hydrogen cyanide.
2 K3[Fe(CN)6] + 6 H2SO4 → 3 K2SO4 + Fe2(SO4)3 + 12 HCN
This does not occur with concentrated sulfuric acid, as hydrolysis to formic acid followed by dehydration to carbon monoxide takes place instead.
2 K3[Fe(CN)6] + 12 H2SO4 + 12 H2O → 3 K2SO4 + 6 (NH4)2SO4 + Fe2(SO4)3 + 12 CO
| Physical sciences | Cyanide salts | Chemistry |
184881 | https://en.wikipedia.org/wiki/Reducing%20agent | Reducing agent | In chemistry, a reducing agent (also known as a reductant, reducer, or electron donor) is a chemical species that "donates" an electron to an electron recipient (called the oxidizing agent, oxidant, oxidizer, or electron acceptor).
Examples of substances that are common reducing agents include hydrogen, carbon monoxide, the alkali metals, formic acid, oxalic acid, and sulfite compounds.
In their pre-reaction states, reducers have extra electrons (that is, they are by themselves reduced) and oxidizers lack electrons (that is, they are by themselves oxidized). This is commonly expressed in terms of their oxidation states. An agent's oxidation state describes its degree of loss of electrons, where the higher the oxidation state then the fewer electrons it has. So initially, prior to the reaction, a reducing agent is typically in one of its lower possible oxidation states; its oxidation state increases during the reaction while that of the oxidizer decreases.
Thus in a redox reaction, the agent whose oxidation state increases, that "loses/donates electrons", that "is oxidized", and that "reduces" is called the reducer or reducing agent, while the agent whose oxidation state decreases, that "gains/accepts/receives electrons", that "is reduced", and that "oxidizes" is called the oxidizer or oxidizing agent.
For example, consider the overall reaction for aerobic cellular respiration:
C6H12O6 + 6 O2 → 6 CO2 + 6 H2O
The oxygen (O2) is being reduced, so it is the oxidizing agent. The glucose (C6H12O6) is being oxidized, so it is the reducing agent.
Characteristics
Consider the following reaction:
2 [Fe(CN)6]4− + Cl2 → 2 [Fe(CN)6]3− + 2 Cl−
The reducing agent in this reaction is ferrocyanide ([Fe(CN)6]4−). It donates an electron, becoming oxidized to ferricyanide ([Fe(CN)6]3−). Simultaneously, that electron is received by the oxidizer chlorine (Cl2), which is reduced to chloride (Cl−).
Strong reducing agents easily lose (or donate) electrons. An atom with a relatively large atomic radius tends to be a better reductant. In such species, the distance from the nucleus to the valence electrons is so long that these electrons are not strongly attracted. These elements tend to be strong reducing agents. Good reducing agents tend to consist of atoms with a low electronegativity, which is the ability of an atom or molecule to attract bonding electrons, and species with relatively small ionization energies serve as good reducing agents too.
The measure of a material's ability to reduce is known as its reduction potential. The table below shows a few reduction potentials, which can be changed to oxidation potentials by reversing the sign. Reducing agents can be ranked by increasing strength by ranking their reduction potentials. Reducers donate electrons to (that is, "reduce") oxidizing agents, which are said to "be reduced by" the reducer. The reducing agent is stronger when it has a more negative reduction potential and weaker when it has a more positive reduction potential. The more positive the reduction potential the greater the species' affinity for electrons and tendency to be reduced (that is, to receive electrons). The following table provides the reduction potentials of the indicated reducing agent at 25 °C. For example, among sodium (Na), chromium (Cr), cuprous (Cu+) and chloride (Cl−), it is Na that is the strongest reducing agent while Cl− is the weakest; said differently, Na+ is the weakest oxidizing agent in this list while Cl is the strongest.
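Since the reduction-potential table itself is not reproduced here, the short sketch below illustrates the ranking just described using approximate textbook standard potentials (in volts); the exact figures are assumptions for illustration and should be checked against a data table for real work.

```python
# Ranking reducing agents by standard reduction potential: the more negative
# the potential, the stronger the reducing agent. E° values are approximate.
approx_standard_potentials_V = {
    "Li+ / Li":   -3.04,
    "Na+ / Na":   -2.71,
    "Cr3+ / Cr":  -0.74,
    "2 H+ / H2":   0.00,
    "Cu2+ / Cu+": +0.15,
    "Cl2 / Cl-":  +1.36,
}

# Sorted from strongest reducer (most negative) to weakest (most positive),
# matching the text: Na is a far stronger reducer than Cl-.
for couple, e0 in sorted(approx_standard_potentials_V.items(), key=lambda kv: kv[1]):
    print(f"{couple:11s} E° = {e0:+.2f} V")
```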
Common reducing agents include metals potassium, calcium, barium, sodium and magnesium, and also compounds that contain the hydride H− ion, those being NaH, LiH, LiAlH4 and CaH2.
Some elements and compounds can act as either reducing or oxidizing agents. Hydrogen gas is a reducing agent when it reacts with non-metals and an oxidizing agent when it reacts with metals.
2 Li(s) + H2(g) → 2 LiH(s)
Hydrogen (whose reduction potential is 0.0) acts as an oxidizing agent because it accepts an electron donation from the reducing agent lithium (whose reduction potential is -3.04), which causes Li to be oxidized and hydrogen to be reduced.
H2(g) + F2(g) → 2 HF(g)
Hydrogen acts as a reducing agent because it donates its electrons to fluorine, which allows fluorine to be reduced.
Importance
Reducing agents and oxidizing agents are the ones responsible for corrosion, which is the "degradation of metals as a result of electrochemical activity". Corrosion requires an anode and cathode to take place. The anode is an element that loses electrons (reducing agent), thus oxidation always occurs in the anode, and the cathode is an element that gains electrons (oxidizing agent), thus reduction always occurs in the cathode. Corrosion occurs whenever there's a difference in oxidation potential. When this is present, the anode metal begins deteriorating, given there is an electrical connection and the presence of an electrolyte.
Examples of redox reaction
Historically, reduction referred to the removal of oxygen from a compound, hence the name 'reduction'. An example of this phenomenon occurred during the Great Oxidation Event, in which biologically-produced molecular oxygen (dioxygen (O2), an oxidizer and electron recipient) was added to the early Earth's atmosphere, which was originally a weakly reducing atmosphere containing reducing gases like methane (CH4) and carbon monoxide (CO) (along with other electron donors) and practically no oxygen, because any that was produced would react with these or other reducers (particularly with iron dissolved in sea water), resulting in their oxidation.
By using water as a reducing agent, aquatic photosynthesizing cyanobacteria produced this molecular oxygen as a waste product. This initially oxidized the ocean's dissolved ferrous iron (Fe(II) − meaning iron in its +2 oxidation state) to form insoluble ferric iron oxides such as Iron(III) oxide (Fe(II) lost an electron to the oxidizer and became Fe(III) − meaning iron in its +3 oxidation state) that precipitated down to the ocean floor to form banded iron formations, thereby removing the oxygen (and the iron). The rate of production of oxygen eventually exceeded the availability of reducing materials that removed oxygen, which ultimately led Earth to gain a strongly oxidizing atmosphere containing abundant oxygen (like the modern atmosphere). The modern sense of donating electrons is a generalization of this idea, acknowledging that other components can play a similar chemical role to oxygen.
The formation of iron(III) oxide:
4 Fe + 3 O2 → 4 Fe3+ + 6 O2− → 2 Fe2O3
In the above equation, the Iron (Fe) has an oxidation number of 0 before and 3+ after the reaction. For oxygen (O) the oxidation number began as 0 and decreased to 2−. These changes can be viewed as two "half-reactions" that occur concurrently:
Oxidation half reaction: Fe0 → Fe3+ + 3e−
Reduction half reaction: O2 + 4e− → 2 O2−
Iron (Fe) has been oxidized because the oxidation number increased. Iron is the reducing agent because it gave electrons to the oxygen (O2).
Oxygen (O2) has been reduced because the oxidation number has decreased and is the oxidizing agent because it took electrons from iron (Fe).
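As a quick arithmetic check (not part of the source text), the two half-reactions combine into the overall equation once they are scaled so that the electrons released equal the electrons consumed:

```latex
% Scale the half-reactions so that 12 electrons are released and 12 consumed.
\begin{align*}
4 \times (\mathrm{Fe} \to \mathrm{Fe}^{3+} + 3e^-) &:\quad 4\,\mathrm{Fe} \to 4\,\mathrm{Fe}^{3+} + 12\,e^- \\
3 \times (\mathrm{O_2} + 4e^- \to 2\,\mathrm{O}^{2-}) &:\quad 3\,\mathrm{O_2} + 12\,e^- \to 6\,\mathrm{O}^{2-} \\
\text{sum:} &\quad 4\,\mathrm{Fe} + 3\,\mathrm{O_2} \to 4\,\mathrm{Fe}^{3+} + 6\,\mathrm{O}^{2-} \to 2\,\mathrm{Fe_2O_3}
\end{align*}
```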
Common reducing agents
Lithium aluminium hydride (LiAlH4), a very strong reducing agent
Red-Al (NaAlH2(OCH2CH2OCH3)2), a safer and more stable alternative to lithium aluminum hydride
Hydrogen, with or without a suitable catalyst, e.g. a Lindlar catalyst
Sodium amalgam (Na(Hg))
Sodium-lead alloy (Na + Pb)
Zinc amalgam (Zn(Hg)) (reagent for Clemmensen reduction)
Diborane
Sodium borohydride (NaBH4)
Ferrous compounds that contain the Fe2+ ion, such as iron(II) sulfate
Stannous compounds that contain the Sn2+ ion, such as tin(II) chloride
Sulfur dioxide (sometimes also used as an oxidizing agent), Sulfite compounds
Dithionites, e.g. Na2S2O4
Thiosulfates, e.g. Na2S2O3 (mainly in analytical chemistry)
Iodides, such as potassium iodide (KI) (mainly in analytical chemistry)
Hydrogen peroxide (H2O2), mostly an oxidant but can occasionally act as a reducing agent, typically in analytical chemistry
Hydrazine (Wolff-Kishner reduction)
Diisobutylaluminium hydride (DIBAL-H)
Oxalic acid (C2H2O4)
Formic acid (HCOOH)
Ascorbic acid (C6H8O6)
Reducing sugars, such as erythrose, see Aldose
Phosphites, hypophosphites, and phosphorous acid
Dithiothreitol (DTT) – used in biochemistry labs to avoid SS-bonds
Carbon monoxide (CO)
Cyanides in hydrometallurgical processes
Carbon (C)
Tris-2-carboxyethylphosphine hydrochloride (TCEP)
| Physical sciences | Redox reactions | Chemistry |
184882 | https://en.wikipedia.org/wiki/Oxidizing%20agent | Oxidizing agent | An oxidizing agent (also known as an oxidant, oxidizer, electron recipient, or electron acceptor) is a substance in a redox chemical reaction that gains or "accepts"/"receives" an electron from a reducing agent (called the reductant, reducer, or electron donor). In other words, an oxidizer is any substance that oxidizes another substance. The oxidation state, which describes the degree of loss of electrons, of the oxidizer decreases while that of the reductant increases; this is expressed by saying that oxidizers "undergo reduction" and "are reduced" while reducers "undergo oxidation" and "are oxidized".
Common oxidizing agents are oxygen, hydrogen peroxide, and the halogens.
In one sense, an oxidizing agent is a chemical species that undergoes a chemical reaction in which it gains one or more electrons. In that sense, it is one component in an oxidation–reduction (redox) reaction. In the second sense, an oxidizing agent is a chemical species that transfers electronegative atoms, usually oxygen, to a substrate. Combustion, many explosives, and organic redox reactions involve atom-transfer reactions.
Electron acceptors
Electron acceptors participate in electron-transfer reactions. In this context, the oxidizing agent is called an electron acceptor and the reducing agent is called an electron donor. A classic oxidizing agent is the ferrocenium ion [Fe(C5H5)2]+, which accepts an electron to form ferrocene, Fe(C5H5)2. One of the strongest acceptors commercially available is "Magic blue", the radical cation derived from N(C6H4-4-Br)3.
Extensive tabulations of ranking the electron accepting properties of various reagents (redox potentials) are available, see Standard electrode potential (data page).
Atom-transfer reagents
In more common usage, an oxidizing agent transfers oxygen atoms to a substrate. In this context, the oxidizing agent can be called an oxygenation reagent or oxygen-atom transfer (OAT) agent. Examples include MnO4− (permanganate), CrO42− (chromate), OsO4 (osmium tetroxide), and especially ClO4− (perchlorate). Notice that these species are all oxides.
In some cases, these oxides can also serve as electron acceptors, as illustrated by the conversion of MnO4− to MnO42−, i.e. permanganate to manganate.
Common oxidizing agents
Oxygen (O2)
Ozone (O3)
Hydrogen peroxide (H2O2) and other inorganic peroxides, Fenton's reagent
Fluorine (F2), chlorine (Cl2), and other halogens
Nitric acid (HNO3) and nitrate compounds such as potassium nitrate (KNO3), the oxidizer in black powder
Potassium chlorate (KClO3)
Peroxydisulfuric acid (H2S2O8)
Peroxymonosulfuric acid (H2SO5)
Hypochlorite, chlorite, chlorate, perchlorate, and other analogous halogen oxyanions
Fluorides of chlorine, bromine, and iodine
Hexavalent chromium compounds such as chromic and dichromic acids and chromium trioxide, pyridinium chlorochromate (PCC), and chromate/dichromate compounds such as Sodium dichromate (Na2Cr2O7)
Permanganate compounds such as potassium permanganate (KMnO4)
Sodium perborate
Nitrous oxide (N2O), Nitrogen dioxide/Dinitrogen tetroxide (NO2 / N2O4)
Sodium bismuthate (NaBiO3)
Cerium (IV) compounds such as ceric ammonium nitrate and ceric sulfate
Lead dioxide (PbO2)
Dangerous materials definition
The dangerous goods definition of an oxidizing agent is a substance that can cause or contribute to the combustion of other material. By this definition some materials that are classified as oxidizing agents by analytical chemists are not classified as oxidizing agents in a dangerous materials sense. An example is potassium dichromate, which does not pass the dangerous goods test of an oxidizing agent.
The U.S. Department of Transportation defines oxidizing agents specifically. There are two definitions for oxidizing agents governed under DOT regulations. These two are Class 5; Division 5.1(a)1 and Class 5; Division 5.1(a)2. Division 5.1 "means a material that may, generally by yielding oxygen, cause or enhance the combustion of other materials." Division 5.1(a)1 of the DOT code applies to solid oxidizers "if, when tested in accordance with the UN Manual of Tests and Criteria (IBR, see § 171.7 of this subchapter), its mean burning time is less than or equal to the burning time of a 3:7 potassium bromate/cellulose mixture." 5.1(a)2 of the DOT code applies to liquid oxidizers "if, when tested in accordance with the UN Manual of Tests and Criteria, it spontaneously ignites or its mean time for a pressure rise from 690 kPa to 2070 kPa gauge is less than the time of a 1:1 nitric acid (65 percent)/cellulose mixture."
Common oxidizing agents and their products
| Physical sciences | Redox reactions | Chemistry |
184897 | https://en.wikipedia.org/wiki/Reagent | Reagent | In chemistry, a reagent ( ) or analytical reagent is a substance or compound added to a system to cause a chemical reaction, or test if one occurs. The terms reactant and reagent are often used interchangeably, but reactant specifies a substance consumed in the course of a chemical reaction. Solvents, though involved in the reaction mechanism, are usually not called reactants. Similarly, catalysts are not consumed by the reaction, so they are not reactants. In biochemistry, especially in connection with enzyme-catalyzed reactions, the reactants are commonly called substrates.
Definitions
Organic chemistry
In organic chemistry, the term "reagent" denotes a chemical ingredient (a compound or mixture, typically of inorganic or small organic molecules) introduced to cause the desired transformation of an organic substance. Examples include the Collins reagent, Fenton's reagent, and Grignard reagents.
Analytical chemistry
In analytical chemistry, a reagent is a compound or mixture used to detect the presence or absence of another substance, e.g. by a color change, or to measure the concentration of a substance, e.g. by colorimetry. Examples include Fehling's reagent, Millon's reagent, and Tollens' reagent.
Commercial or laboratory preparations
In commercial or laboratory preparations, reagent-grade designates chemical substances meeting standards of purity that ensure the scientific precision and reliability of chemical analysis, chemical reactions or physical testing. Purity standards for reagents are set by organizations such as ASTM International or the American Chemical Society. For instance, reagent-quality water must have very low levels of impurities such as sodium and chloride ions, silica, and bacteria, as well as a very high electrical resistivity. Laboratory products which are less pure, but still useful and economical for undemanding work, may be designated as technical, practical, or crude grade to distinguish them from reagent versions.
Biology
In the field of biology, the biotechnology revolution in the 1980s grew from the development of reagents that could be used to identify and manipulate the chemical matter in and on cells. These reagents included antibodies (polyclonal and monoclonal), oligomers, all sorts of model organisms and immortalised cell lines, reagents and methods for molecular cloning and DNA replication, and many others.
Tool compounds
Tool compounds are an important class of reagent in biology. They are small molecules or biochemicals like siRNA or antibodies that are known to affect a given biomolecule—for example a drug target—but are unlikely to be useful as drugs themselves, and are often starting points in the drug discovery process.
However, many natural substances are hits in almost any assay in which they are tested, and therefore not useful as tool compounds. Medicinal chemists class them instead as pan-assay interference compounds. One example is curcumin.
| Physical sciences | Reaction | Chemistry |
184898 | https://en.wikipedia.org/wiki/Ramsey%27s%20theorem | Ramsey's theorem | In combinatorics, Ramsey's theorem, in one of its graph-theoretic forms, states that one will find monochromatic cliques in any edge labelling (with finitely many colours) of a sufficiently large complete graph. To demonstrate the theorem for two colours (say, blue and red), let r and s be any two positive integers. Ramsey's theorem states that there exists a least positive integer R(r, s) for which every blue-red edge colouring of the complete graph on R(r, s) vertices contains a blue clique on r vertices or a red clique on s vertices. (Here R(r, s) signifies an integer that depends on both r and s.)
Ramsey's theorem is a foundational result in combinatorics. The first version of this result was proved by Frank Ramsey. This initiated the combinatorial theory now called Ramsey theory, that seeks regularity amid disorder: general conditions for the existence of substructures with regular properties. In this application it is a question of the existence of monochromatic subsets, that is, subsets of connected edges of just one colour.
An extension of this theorem applies to any finite number of colours, rather than just two. More precisely, the theorem states that for any given number of colours, c, and any given integers n_1, ..., n_c, there is a number, R(n_1, ..., n_c), such that if the edges of a complete graph of order R(n_1, ..., n_c) are coloured with c different colours, then for some i between 1 and c, it must contain a complete subgraph of order n_i whose edges are all colour i. The special case above has c = 2 (and n_1 = r and n_2 = s).
Examples
R(3, 3) = 6
Suppose the edges of a complete graph on 6 vertices are coloured red and blue. Pick a vertex, v. There are 5 edges incident to v and so (by the pigeonhole principle) at least 3 of them must be the same colour. Without loss of generality we can assume at least 3 of these edges, connecting the vertex, v, to vertices, r, s and t, are blue. (If not, exchange red and blue in what follows.) If any of the edges, (rs), (rt), (st), are also blue then we have an entirely blue triangle. If not, then those three edges are all red and we have an entirely red triangle. Since this argument works for any colouring, any K6 contains a monochromatic K3, and therefore R(3, 3) ≤ 6. The popular version of this is called the theorem on friends and strangers.
An alternative proof works by double counting. It goes as follows: Count the number of ordered triples of vertices, x, y, z, such that the edge, (xy), is red and the edge, (yz), is blue. Firstly, any given vertex will be the middle of either 0 × 5 = 0 (all edges from the vertex are the same colour), 1 × 4 = 4 (four are the same colour, one is the other colour), or 2 × 3 = 6 (three are the same colour, two are the other colour) such triples. Therefore, there are at most 6 × 6 = 36 such triples. Secondly, for any non-monochromatic triangle (xyz), there exist precisely two such triples. Therefore, there are at most 18 non-monochromatic triangles. Therefore, at least 2 of the 20 triangles in the K6 are monochromatic.
Conversely, it is possible to 2-colour a K5 without creating any monochromatic K3, showing that R(3, 3) > 5. The unique colouring is shown to the right. Thus R(3, 3) = 6.
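Both facts established above (every 2-colouring of K6 contains a monochromatic triangle, while K5 admits a colouring with none) are small enough to confirm by exhaustive search. The sketch below is only an illustrative check; the function names are ours, not standard notation.

```python
# Brute-force check that R(3,3) = 6: every 2-colouring of K6 contains a
# monochromatic triangle, but K5 has a colouring (the pentagon/pentagram one)
# that does not.
from itertools import combinations, product

def has_mono_triangle(n, colouring):
    """colouring maps each 2-element frozenset of vertices to 0 (red) or 1 (blue)."""
    return any(
        colouring[frozenset((a, b))] == colouring[frozenset((b, c))] == colouring[frozenset((a, c))]
        for a, b, c in combinations(range(n), 3)
    )

def every_colouring_has_mono_triangle(n):
    edges = [frozenset(e) for e in combinations(range(n), 2)]
    return all(
        has_mono_triangle(n, dict(zip(edges, colours)))
        for colours in product((0, 1), repeat=len(edges))
    )

print(every_colouring_has_mono_triangle(6))  # True:  R(3, 3) <= 6
print(every_colouring_has_mono_triangle(5))  # False: R(3, 3) > 5
```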
The task of proving that R(3, 3) ≤ 6 was one of the problems of the William Lowell Putnam Mathematical Competition in 1953, as well as in the Hungarian Math Olympiad in 1947.
A multicolour example: R(3, 3, 3) = 17
A multicolour Ramsey number is a Ramsey number using 3 or more colours. There are (up to symmetries) only two non-trivial multicolour Ramsey numbers for which the exact value is known, namely R(3, 3, 3) = 17 and R(3, 3, 4) = 30.
Suppose that we have an edge colouring of a complete graph using 3 colours, red, green and blue. Suppose further that the edge colouring has no monochromatic triangles. Select a vertex v. Consider the set of vertices that have a red edge to the vertex v. This is called the red neighbourhood of v. The red neighbourhood of v cannot contain any red edges, since otherwise there would be a red triangle consisting of the two endpoints of that red edge and the vertex v. Thus, the induced edge colouring on the red neighbourhood of v has edges coloured with only two colours, namely green and blue. Since R(3, 3) = 6, the red neighbourhood of v can contain at most 5 vertices. Similarly, the green and blue neighbourhoods of v can contain at most 5 vertices each. Since every vertex, except for v itself, is in one of the red, green or blue neighbourhoods of v, the entire complete graph can have at most 1 + 5 + 5 + 5 = 16 vertices. Thus, we have R(3, 3, 3) ≤ 17.
To see that R(3, 3, 3) ≥ 17, it suffices to draw an edge colouring on the complete graph on 16 vertices with 3 colours that avoids monochromatic triangles. It turns out that there are exactly two such colourings on K16, the so-called untwisted and twisted colourings. Both colourings are shown in the figures to the right, with the untwisted colouring on the left, and the twisted colouring on the right.
If we select any colour of either the untwisted or twisted colouring on K_16, and consider the graph whose edges are precisely those edges that have the specified colour, we will get the Clebsch graph.
It is known that there are exactly two edge colourings with 3 colours on K_15 that avoid monochromatic triangles, which can be constructed by deleting any vertex from the untwisted and twisted colourings on K_16, respectively.
It is also known that there are exactly 115 edge colourings with 3 colours on K_14 that avoid monochromatic triangles, provided that we consider edge colourings that differ by a permutation of the colours as being the same.
Proof
2-colour case
The theorem for the 2-colour case can be proved by induction on r + s. It is clear from the definition that for all n, R(n, 2) = R(2, n) = n. This starts the induction. We prove that R(r, s) exists by finding an explicit bound for it. By the inductive hypothesis R(r − 1, s) and R(r, s − 1) exist.
Lemma 1. R(r, s) ≤ R(r − 1, s) + R(r, s − 1).
Proof. Consider a complete graph on R(r − 1, s) + R(r, s − 1) vertices whose edges are coloured with two colours. Pick a vertex v from the graph, and partition the remaining vertices into two sets M and N, such that for every vertex w, w is in M if edge (vw) is blue, and w is in N if (vw) is red. Because the graph has R(r − 1, s) + R(r, s − 1) = |M| + |N| + 1 vertices, it follows that either |M| ≥ R(r − 1, s) or |N| ≥ R(r, s − 1). In the former case, if M has a red K_s then so does the original graph and we are finished. Otherwise M has a blue K_(r−1) and so M ∪ {v} has a blue K_r by the definition of M. The latter case is analogous. Thus the claim is true and we have completed the proof for 2 colours.
In this 2-colour case, if R(r − 1, s) and R(r, s − 1) are both even, the induction inequality can be strengthened to: R(r, s) ≤ R(r − 1, s) + R(r, s − 1) − 1.
Proof. Suppose p = R(r − 1, s) and q = R(r, s − 1) are both even. Let t = p + q − 1 and consider a two-coloured graph of t vertices. If d_i is the degree of the i-th vertex in the blue subgraph, then by the handshaking lemma, the sum of the d_i is even. Given that t is odd, there must be an even d_i. Assume without loss of generality that d_1 is even, and that M and N are the vertices incident to vertex 1 in the blue and red subgraphs, respectively. Then both |M| = d_1 and |N| = t − 1 − d_1 are even. By the pigeonhole principle, either |M| ≥ p − 1 or |N| ≥ q. Since |M| is even and p − 1 is odd, the first inequality can be strengthened, so either |M| ≥ p or |N| ≥ q. Suppose |M| ≥ p = R(r − 1, s). Then either the subgraph induced by M has a red K_s and the proof is complete, or it has a blue K_(r−1) which along with vertex 1 makes a blue K_r. The case |N| ≥ q = R(r, s − 1) is treated similarly.
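For illustration, iterating the inequality of Lemma 1 from the base cases R(r, 2) = r and R(2, s) = s gives concrete upper bounds and reproduces the binomial-coefficient bound discussed under Asymptotics below. A minimal Python sketch of that computation (the helper name is arbitrary):

from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def lemma1_bound(r, s):
    # Upper bound on R(r, s) obtained by iterating Lemma 1,
    # R(r, s) <= R(r - 1, s) + R(r, s - 1), with bases R(r, 2) = r and R(2, s) = s.
    if r == 2:
        return s
    if s == 2:
        return r
    return lemma1_bound(r - 1, s) + lemma1_bound(r, s - 1)

for r, s in [(3, 3), (4, 4), (5, 5)]:
    # The iterated bound coincides with the binomial coefficient C(r + s - 2, r - 1).
    print((r, s), lemma1_bound(r, s), comb(r + s - 2, r - 1))  # 6, 20, 70 in both columns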
Case of more colours
Lemma 2. If c > 2, then R(n_1, …, n_c) ≤ R(n_1, …, n_(c−2), R(n_(c−1), n_c)).
Proof. Consider a complete graph of R(n_1, …, n_(c−2), R(n_(c−1), n_c)) vertices and colour its edges with c colours. Now 'go colour-blind' and pretend that colours c − 1 and c are the same colour. Thus the graph is now (c − 1)-coloured. Due to the definition of R(n_1, …, n_(c−2), R(n_(c−1), n_c)), such a graph contains either a K_(n_i) mono-chromatically coloured with colour i for some 1 ≤ i ≤ c − 2 or a K_(R(n_(c−1), n_c)) coloured in the 'blurred colour'. In the former case we are finished. In the latter case, we recover our sight again and see from the definition of R(n_(c−1), n_c) we must have either a (c − 1)-monochrome K_(n_(c−1)) or a c-monochrome K_(n_c). In either case the proof is complete.
Lemma 1 implies that any R(r, s) is finite. The right hand side of the inequality in Lemma 2 expresses a Ramsey number for c colours in terms of Ramsey numbers for fewer colours. Therefore, any R(n_1, …, n_c) is finite for any number of colours. This proves the theorem.
Ramsey numbers
The numbers R(r, s) in Ramsey's theorem (and their extensions to more than two colours) are known as Ramsey numbers. The Ramsey number R(m, n) gives the solution to the party problem, which asks the minimum number of guests, R(m, n), that must be invited so that at least m will know each other or at least n will not know each other. In the language of graph theory, the Ramsey number is the minimum number of vertices, v = R(m, n), such that all undirected simple graphs of order v contain a clique of order m, or an independent set of order n. Ramsey's theorem states that such a number exists for all m and n.
By symmetry, it is true that R(m, n) = R(n, m). An upper bound for R(r, s) can be extracted from the proof of the theorem, and other arguments give lower bounds. (The first exponential lower bound was obtained by Paul Erdős using the probabilistic method.) However, there is a vast gap between the tightest lower bounds and the tightest upper bounds. There are also very few numbers r and s for which we know the exact value of R(r, s).
Computing a lower bound L for R(r, s) usually requires exhibiting a blue/red colouring of the graph K_(L−1) with no blue K_r subgraph and no red K_s subgraph. Such a counterexample is called a Ramsey graph. Brendan McKay maintains a list of known Ramsey graphs. Upper bounds are often considerably more difficult to establish: one either has to check all possible colourings to confirm the absence of a counterexample, or to present a mathematical argument for its absence.
Computational complexity
A sophisticated computer program does not need to look at all colourings individually in order to eliminate all of them; nevertheless it is a very difficult computational task that existing software can only manage on small sizes. Each complete graph K_n has n(n − 1)/2 edges, so there would be a total of c^(n(n − 1)/2) graphs to search through (for c colours) if brute force is used. Therefore, the complexity for searching all possible graphs (via brute force) is O(c^(n^2)) for c colourings and at most n nodes.
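As a rough illustration of the growth of this search space, the raw count of colourings can be computed directly (the values of n below are chosen only for illustration):

def number_of_colourings(n, c=2):
    # K_n has n(n - 1)/2 edges, and each edge independently receives one of c colours.
    return c ** (n * (n - 1) // 2)

for n in (6, 17, 18, 43):
    # Already astronomically large for n = 43, the order relevant to the conjecture
    # R(5, 5) = 43 mentioned below.
    print(n, number_of_colourings(n))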
The situation is unlikely to improve with the advent of quantum computers. One of the best-known searching algorithms for unstructured datasets exhibits only a quadratic speedup (c.f. Grover's algorithm) relative to classical computers, so that the computation time is still exponential in the number of nodes.
Known values
As described above, R(3, 3) = 6. It is easy to prove that R(4, 2) = 4, and, more generally, that R(s, 2) = s for all s: a graph on s − 1 nodes with all edges coloured red serves as a counterexample and proves that R(s, 2) ≥ s; among colourings of a graph on s nodes, the colouring with all edges coloured red contains an s-node red subgraph, and all other colourings contain a 2-node blue subgraph (that is, a pair of nodes connected with a blue edge.)
Using induction inequalities and the handshaking lemma, it can be concluded that R(4, 3) ≤ R(4, 2) + R(3, 3) − 1 = 9, and therefore R(4, 4) ≤ R(4, 3) + R(3, 4) ≤ 18. There are only two (4, 4, 16) graphs (that is, 2-colourings of a complete graph on 16 nodes without 4-node red or blue complete subgraphs) among 6.4 × 10^22 different 2-colourings of 16-node graphs, and only one (4, 4, 17) graph (the Paley graph of order 17) among 2.46 × 10^26 colourings. It follows that R(4, 4) = 18.
The fact that R(4, 5) = 25 was first established by Brendan McKay and Stanisław Radziszowski in 1995.
The exact value of R(5, 5) is unknown, although it is known to lie between 43 (Geoffrey Exoo (1989)) and 46 (Angeltveit and McKay (2024)), inclusive.
In 1997, McKay, Radziszowski and Exoo employed computer-assisted graph generation methods to conjecture that R(5, 5) = 43. They were able to construct exactly 656 (5, 5, 42) graphs, arriving at the same set of graphs through different routes. None of the 656 graphs can be extended to a (5, 5, 43) graph.
For R(r, s) with r, s > 5, only weak bounds are available. Lower bounds for R(6, 6) and R(8, 8) have not been improved since 1965 and 1972, respectively.
The values of R(r, s) with r, s ≤ 10 are shown in the table below. Where the exact value is unknown, the table lists the best known bounds. The values of R(r, s) with r, s < 3 are given by R(1, s) = 1 and R(2, s) = s for all values of s.
The standard survey on the development of Ramsey number research is the Dynamic Survey 1 of the Electronic Journal of Combinatorics, by Stanisław Radziszowski, which is periodically updated. Where not cited otherwise, entries in the table below are taken from the June 2024 edition. (Note there is a trivial symmetry across the diagonal since R(r, s) = R(s, r).)
Asymptotics
The inequality of Lemma 1, R(r, s) ≤ R(r − 1, s) + R(r, s − 1), may be applied inductively to prove that R(r, s) ≤ C(r + s − 2, r − 1), where C(·, ·) denotes a binomial coefficient.
In particular, this result, due to Erdős and Szekeres, implies that when r = s, R(s, s) ≤ (1 + o(1)) · 4^(s−1) / √(πs).
An exponential lower bound, R(s, s) ≥ (1 + o(1)) · (s / (√2 e)) · 2^(s/2),
was given by Erdős in 1947 and was instrumental in his introduction of the probabilistic method. There is a huge gap between these two bounds: for example, for s = 10, this gives 101 ≤ R(10, 10) ≤ 48,620. Nevertheless, the exponential growth factors of either bound were not improved for a long time, and for the lower bound it still stands at √2. There is no known explicit construction producing an exponential lower bound. The best known lower and upper bounds for diagonal Ramsey numbers are R(s, s) ≥ (1 + o(1)) · (√2 s / e) · 2^(s/2) and R(s, s) ≤ s^(−c log s / log log s) · 4^s,
due to Spencer and Conlon, respectively; a 2023 preprint by Campos, Griffiths, Morris and Sahasrabudhe claims to have made exponential progress using an algorithmic construction relying on a graph structure called a "book", improving the upper bound to R(s, s) ≤ (4 − ε)^s for a small explicit constant ε > 0, where it is believed these parameters could be optimized, in particular the constant ε.
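Erdős's 1947 lower-bound argument can be evaluated numerically: colour each edge of K_n red or blue independently with probability 1/2; the expected number of monochromatic copies of K_s is C(n, s) · 2^(1 − C(s, 2)), and whenever this expectation is below 1 some colouring contains no monochromatic K_s, so R(s, s) > n. A minimal Python sketch of this union-bound search (the function name and the simple incremental search are choices made for the example):

from math import comb

def erdos_lower_bound(s):
    # Largest n for which comb(n, s) * 2**(1 - comb(s, 2)) < 1, so that a random
    # 2-colouring of K_n has, in expectation, fewer than one monochromatic K_s,
    # and hence R(s, s) > n.
    n = s
    while comb(n + 1, s) * 2 ** (1 - comb(s, 2)) < 1:
        n += 1
    return n

for s in (5, 10, 20):
    # Grows roughly like 2**(s/2); s = 10 returns 100, i.e. R(10, 10) >= 101.
    print(s, erdos_lower_bound(s))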
For the off-diagonal Ramsey numbers R(3, t), it is known that they are of order t^2 / log t; this may be stated equivalently as saying that the smallest possible independence number in an n-vertex triangle-free graph is Θ(√(n log n)).
The upper bound for R(3, t) is given by Ajtai, Komlós, and Szemerédi, the lower bound was obtained originally by Kim, and the implicit constant was improved independently by Fiz Pontiveros, Griffiths and Morris, and Bohman and Keevash, by analysing the triangle-free process. Furthermore, studying the more general "H-free process" has set the best known asymptotic lower bounds for general off-diagonal Ramsey numbers R(s, t).
For R(4, t) the bounds become c_1 t^(5/2) / log^2 t ≤ R(4, t) ≤ c_2 t^3 / log^2 t, but a 2023 preprint has improved the lower bound to order t^3 / log^4 t, which settles a question of Erdős, who offered 250 dollars for a proof that the lower limit has the form c t^3 / log^d t.
Formal verification of Ramsey numbers
The Ramsey number R(3, 8) has been formally verified to be 28. This verification was achieved using a combination of Boolean satisfiability (SAT) solving and computer algebra systems (CAS). The proof was generated automatically using the SAT+CAS approach, marking the first certifiable proof of R(3, 8) = 28. The verification process required 96 hours of computation on a high-performance processor, producing a 30 GB DRAT (Deletion Resolution Asymmetric Tautology) file. This file was independently verified using the DRAT-trim proof checker in 63 hours.
The Ramsey number R(4, 5) has been formally verified to be 25. The original proof, developed by McKay and Radziszowski in 1995, combined high-level mathematical arguments with computational steps and used multiple independent implementations to reduce the possibility of programming errors. This new formal proof was carried out using the HOL4 interactive theorem prover, limiting the potential for errors to the HOL4 kernel. Rather than directly verifying the original algorithms, the authors utilized HOL4's interface to the MiniSat SAT solver to formally prove key gluing lemmas.
Induced Ramsey
There is a less well-known yet interesting analogue of Ramsey's theorem for induced subgraphs. Roughly speaking, instead of finding a monochromatic subgraph, we are now required to find a monochromatic induced subgraph. In this variant, it is no longer sufficient to restrict our focus to complete graphs, since the existence of a complete subgraph does not imply the existence of an induced subgraph. The qualitative statement of the theorem in the next section was first proven independently by Erdős, Hajnal and Pósa, Deuber and Rödl in the 1970s. Since then, there has been much research in obtaining good bounds for induced Ramsey numbers.
Statement
Let H be a graph on k vertices. Then, there exists a graph G such that any coloring of the edges of G using two colors contains a monochromatic induced copy of H (i.e. an induced subgraph of G such that it is isomorphic to H and its edges are monochromatic). The smallest possible number of vertices of G is the induced Ramsey number r_ind(H).
Sometimes, we also consider the asymmetric version of the problem. We define r_ind(X, Y) to be the smallest possible number of vertices of a graph G such that every coloring of the edges of G using only red or blue contains a red induced subgraph of X or a blue induced subgraph of Y.
History and bounds
Similar to Ramsey's theorem, it is unclear a priori whether induced Ramsey numbers exist for every graph H. In the early 1970s, Erdős, Hajnal and Pósa, Deuber and Rödl independently proved that this is the case. However, the original proofs gave terrible bounds (e.g. towers of twos) on the induced Ramsey numbers. It is interesting to ask if better bounds can be achieved. In 1974, Paul Erdős conjectured that there exists a constant c such that every graph H on k vertices satisfies r_ind(H) ≤ 2^(ck). If this conjecture is true, it would be optimal up to the constant c because the complete graph achieves a lower bound of this form (in fact, it's the same as Ramsey numbers). However, this conjecture is still open as of now.
In 1984, Erdős and Hajnal claimed that they proved the bound r_ind(H) ≤ 2^(2^(k^(1 + o(1)))).
However, that was still far from the exponential bound conjectured by Erdős. It was not until 1998 when a major breakthrough was achieved by Kohayakawa, Prömel and Rödl, who proved the first almost-exponential bound of r_ind(H) ≤ 2^(c k (log k)^2) for some constant c. Their approach was to consider a suitable random graph constructed on projective planes and show that it has the desired properties with nonzero probability. The idea of using random graphs on projective planes has also previously been used in studying Ramsey properties with respect to vertex colorings and the induced Ramsey problem on bounded degree graphs.
Kohayakawa, Prömel and Rödl's bound remained the best general bound for a decade. In 2008, Fox and Sudakov provided an explicit construction for induced Ramsey numbers with the same bound. In fact, they showed that every (n, d, λ)-graph with small λ and suitable d contains an induced monochromatic copy of any graph on k vertices in any coloring of its edges in two colors. In particular, for some constant c, the Paley graph on n ≥ 2^(c k log^2 k) vertices is such that all of its edge colorings in two colors contain an induced monochromatic copy of every k-vertex graph.
In 2010, Conlon, Fox and Sudakov were able to improve the bound to r_ind(H) ≤ 2^(c k log k), which remains the current best upper bound for general induced Ramsey numbers. Similar to the previous work in 2008, they showed that every (n, d, λ)-graph with small λ and edge density 1/2 contains an induced monochromatic copy of every graph on k vertices in any edge coloring in two colors. Currently, Erdős's conjecture that r_ind(H) ≤ 2^(ck) remains open and is one of the important problems in extremal graph theory.
For lower bounds, not much is known in general except for the fact that induced Ramsey numbers must be at least the corresponding Ramsey numbers. Some lower bounds have been obtained for some special cases (see Special Cases).
Special cases
While the general bounds for the induced Ramsey numbers are exponential in the size of the graph, the behaviour is much different on special classes of graphs (in particular, sparse ones). Many of these classes have induced Ramsey numbers polynomial in the number of vertices.
If H is a cycle, path or star on k vertices, it is known that r_ind(H) is linear in k.
If H is a tree on k vertices, it is known that r_ind(H) is bounded by a polynomial in k. It is also known that r_ind(H) is superlinear (i.e. r_ind(H) = ω(k)). Note that this is in contrast to the usual Ramsey numbers, where the Burr–Erdős conjecture (now proven) tells us that the Ramsey number of H is linear in k (since trees are 1-degenerate).
For graphs H with k vertices and bounded degree Δ, it was conjectured that r_ind(H) ≤ k^(c(Δ)), for some constant c(Δ) depending only on Δ. This result was first proven by Łuczak and Rödl in 1996, with c(Δ) growing as a tower of twos whose height depends on Δ. More reasonable bounds for c(Δ) were obtained since then. In 2013, Conlon, Fox and Zhao showed using a counting lemma for sparse pseudorandom graphs that r_ind(H) is at most polynomial in k with exponent linear in Δ, where the exponent is best possible up to constant factors.
Generalizations
Similar to Ramsey numbers, we can generalize the notion of induced Ramsey numbers to hypergraphs and multicolor settings.
More colors
We can also generalize the induced Ramsey's theorem to a multicolor setting. For graphs H_1, …, H_r, define r_ind(H_1, …, H_r) to be the minimum number of vertices in a graph G such that any coloring of the edges of G into r colors contains an induced subgraph isomorphic to H_i where all edges are colored in the i-th color, for some 1 ≤ i ≤ r. Let r_ind(H; q) := r_ind(H, …, H) (q copies of H).
It is possible to derive a bound on r_ind(H; q) which is approximately a tower of twos of height roughly log q by iteratively applying the bound on the two-color case. The current best known bound is due to Fox and Sudakov, which achieves r_ind(H; q) ≤ 2^(c k^3), where k is the number of vertices of H and c is a constant depending only on q.
Hypergraphs
We can extend the definition of induced Ramsey numbers to -uniform hypergraphs by simply changing the word graph in the statement to hypergraph. Furthermore, we can define the multicolor version of induced Ramsey numbers in the same way as the previous subsection.
Let H be a k-uniform hypergraph with n vertices. Define the tower function t_r(x) by letting t_1(x) = x and, for i ≥ 1, t_(i+1)(x) = 2^(t_i(x)). Using the hypergraph container method, Conlon, Dellamonica, La Fleur, Rödl and Schacht were able to show that for k ≥ 3, r_ind(H) ≤ t_k(cn) for some constant c depending only on k and the number of colors. In particular, this result mirrors the best known bound for the usual Ramsey number when k = 3.
Extensions of the theorem
Infinite graphs
A further result, also commonly called Ramsey's theorem, applies to infinite graphs. In a context where finite graphs are also being discussed it is often called the "Infinite Ramsey theorem". As intuition provided by the pictorial representation of a graph is diminished when moving from finite to infinite graphs, theorems in this area are usually phrased in set-theoretic terminology.
Theorem. Let X be some infinite set and colour the elements of X^(n) (the subsets of X of size n) in c different colours. Then there exists some infinite subset M of X such that the size-n subsets of M all have the same colour.
Proof: The proof is by induction on n, the size of the subsets. For n = 1, the statement is equivalent to saying that if you split an infinite set into a finite number of sets, then one of them is infinite. This is evident. Assuming the theorem is true for n ≤ r, we prove it for n = r + 1. Given a c-colouring of the (r + 1)-element subsets of X, let a_0 be an element of X and let Y = X \ {a_0}. We then induce a c-colouring of the r-element subsets of Y, by just adding a_0 to each r-element subset (to get an (r + 1)-element subset of X). By the induction hypothesis, there exists an infinite subset Y_1 of Y such that every r-element subset of Y_1 is coloured the same colour in the induced colouring. Thus there is an element a_0 and an infinite subset Y_1 such that all the (r + 1)-element subsets of X consisting of a_0 and r elements of Y_1 have the same colour. By the same argument, there is an element a_1 in Y_1 and an infinite subset Y_2 of Y_1 with the same properties. Inductively, we obtain a sequence {a_0, a_1, a_2, …} such that the colour of each (r + 1)-element subset (a_(i(1)), a_(i(2)), …, a_(i(r + 1))) with i(1) < i(2) < … < i(r + 1) depends only on the value of i(1). Further, there are infinitely many values of i(1) such that this colour will be the same. Take these a_(i(1))'s to get the desired monochromatic set.
A stronger but unbalanced infinite form of Ramsey's theorem for graphs, the Erdős–Dushnik–Miller theorem, states that every infinite graph contains either a countably infinite independent set, or an infinite clique of the same cardinality as the original graph.
Infinite version implies the finite
It is possible to deduce the finite Ramsey theorem from the infinite version by a proof by contradiction. Suppose the finite Ramsey theorem is false. Then there exist integers c, n, T such that for every integer k, there exists a c-colouring of the n-element subsets of {1, 2, …, k} without a monochromatic set of size T. Let C_k denote the c-colourings of the n-element subsets of {1, 2, …, k} without a monochromatic set of size T.
For any k, the restriction of a colouring in C_(k+1) to the n-element subsets of {1, 2, …, k} (by ignoring the colour of all sets containing k + 1) is a colouring in C_k. Define C_k^1 to be the colourings in C_k which are restrictions of colourings in C_(k+1). Since C_(k+1) is not empty, neither is C_k^1.
Similarly, the restriction of any colouring in C_(k+1)^1 is in C_k^1, allowing one to define C_k^2 as the set of all such restrictions, a non-empty set. Continuing so, define C_k^m for all integers m, k.
Now, for any integer k, C_k ⊇ C_k^1 ⊇ C_k^2 ⊇ …, and each set is non-empty. Furthermore, C_k is finite, since there are only finitely many colourings of the n-element subsets of {1, 2, …, k} with c colours. It follows that the intersection of all of these sets is non-empty, and let D_k = C_k ∩ C_k^1 ∩ C_k^2 ∩ ….
Then every colouring in D_k is the restriction of a colouring in D_(k+1). Therefore, by unrestricting a colouring in D_k to a colouring in D_(k+1), and continuing doing so, one constructs a colouring of the n-element subsets of the set of all natural numbers without any monochromatic set of size T. This contradicts the infinite Ramsey theorem.
If a suitable topological viewpoint is taken, this argument becomes a standard compactness argument showing that the infinite version of the theorem implies the finite version.
Hypergraphs
The theorem can also be extended to hypergraphs. An m-hypergraph is a graph whose "edges" are sets of m vertices – in a normal graph an edge is a set of 2 vertices. The full statement of Ramsey's theorem for hypergraphs is that for any integers m and c, and any integers n_1, …, n_c, there is an integer R(n_1, …, n_c; m) such that if the hyperedges of a complete m-hypergraph of order R(n_1, …, n_c; m) are coloured with c different colours, then for some i between 1 and c, the hypergraph must contain a complete sub-m-hypergraph of order n_i whose hyperedges are all colour i. This theorem is usually proved by induction on m, the 'hyper-ness' of the graph. The base case for the proof is m = 2, which is exactly the theorem above.
For m = 3 we know the exact value of one non-trivial Ramsey number, namely R(4, 4; 3) = 13. This fact was established by Brendan McKay and Stanisław Radziszowski in 1991. Additionally, lower bounds are known for several other small cases, such as R(4, 5; 3), R(4, 6; 3) and R(5, 5; 3).
Directed graphs
It is also possible to define Ramsey numbers for directed graphs; these were introduced by Erdős and Moser (1964). Let R(n) be the smallest number Q such that any complete graph with singly directed arcs (also called a "tournament") and with at least Q nodes contains an acyclic (also called "transitive") n-node subtournament.
This is the directed-graph analogue of what (above) has been called R(n, n; 2), the smallest number Z such that any 2-colouring of the edges of a complete undirected graph with at least Z nodes contains a monochromatic complete graph on n nodes. (The directed analogue of the two possible arc colours is the two directions of the arcs, the analogue of "monochromatic" is "all arc-arrows point the same way"; i.e., "acyclic.")
We have R(0) = 0, R(1) = 1, R(2) = 2, R(3) = 4, R(4) = 8, R(5) = 14, R(6) = 28, and 34 ≤ R(7) ≤ 47.
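The first few of these values reflect the upper bound R(n) ≤ 2^(n − 1): pick any vertex of a tournament; either its out-neighbourhood or its in-neighbourhood contains at least half of the remaining vertices, and recursing into the larger side builds a transitive subtournament of at least floor(log2 N) + 1 vertices from N vertices. A minimal Python sketch of that construction (the input format, a dict mapping each vertex to the set of vertices it beats, is an arbitrary choice for the example):

def transitive_subtournament(beats):
    # beats[v] is the set of vertices that v beats; every pair must be decided one way.
    vertices = list(beats)
    if not vertices:
        return []
    v = vertices[0]
    out = [u for u in vertices[1:] if u in beats[v]]      # vertices that v beats
    inn = [u for u in vertices[1:] if u not in beats[v]]  # vertices that beat v
    side = out if len(out) >= len(inn) else inn
    sub = transitive_subtournament({u: beats[u] & set(side) for u in side})
    # v beats everyone found in its out-neighbourhood and is beaten by everyone
    # found in its in-neighbourhood, so it extends the transitive order either way.
    return [v] + sub if side is out else sub + [v]

# Example: a 4-vertex tournament; the result lists vertices so that each beats all later ones.
t = {0: {1, 2}, 1: {2, 3}, 2: {3}, 3: {0}}
print(transitive_subtournament(t))  # [0, 1, 2]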
Uncountable cardinals
In terms of the partition calculus, Ramsey's theorem can be stated as ℵ_0 → (ℵ_0)^n_k for all finite n and k. Wacław Sierpiński showed that the Ramsey theorem does not extend to graphs of size ℵ_1 by showing that 2^(ℵ_0) ↛ (ℵ_1)^2_2. In particular, the continuum hypothesis implies that ℵ_1 ↛ (ℵ_1)^2_2. Stevo Todorčević showed that in fact in ZFC, ℵ_1 ↛ [ℵ_1]^2_(ℵ_1), a much stronger statement than ℵ_1 ↛ (ℵ_1)^2_2. Justin T. Moore has strengthened this result further. On the positive side, a Ramsey cardinal, κ, is a large cardinal axiomatically defined to satisfy the related formula: κ → (κ)^(<ω)_2. The existence of Ramsey cardinals cannot be proved in ZFC.
Relationship to the axiom of choice
In reverse mathematics, there is a significant difference in proof strength between the version of Ramsey's theorem for infinite graphs (the case n = 2) and for infinite multigraphs (the case n ≥ 3). The multigraph version of the theorem is equivalent in strength to the arithmetical comprehension axiom, making it part of the subsystem ACA0 of second-order arithmetic, one of the big five subsystems in reverse mathematics. In contrast, by a theorem of David Seetapun, the graph version of the theorem is weaker than ACA0, and (combining Seetapun's result with others) it does not fall into one of the big five subsystems. Over ZF, however, the graph version implies the classical Kőnig's lemma, whereas the converse implication does not hold, since Kőnig's lemma is equivalent to countable choice from finite sets in this setting.
| Mathematics | Combinatorics | null |
184904 | https://en.wikipedia.org/wiki/Photonics | Photonics | Photonics is a branch of optics that involves the application of generation, detection, and manipulation of light in the form of photons through emission, transmission, modulation, signal processing, switching, amplification, and sensing.
Photonics is closely related to quantum electronics: quantum electronics deals with the theory, while photonics deals with the engineering applications. Though covering all of light's technical applications over the whole spectrum, most photonic applications are in the range of visible and near-infrared light.
The term photonics developed as an outgrowth of the first practical semiconductor light emitters invented in the early 1960s and optical fibers developed in the 1970s.
History
The word 'Photonics' is derived from the Greek word "phos" meaning light (which has genitive case "photos" and in compound words the root "photo-" is used); it appeared in the late 1960s to describe a research field whose goal was to use light to perform functions that traditionally fell within the typical domain of electronics, such as telecommunications, information processing, etc.
An early instance of the word was in a December 1954 letter from John W. Campbell to Gotthard Gunther: "Incidentally, I’ve decided to invent a new science — photonics. It bears the same relationship to Optics that electronics does to electrical engineering. Photonics, like electronics, will deal with the individual units; optics and EE deal with the group-phenomena! And note that you can do things with electronics that are impossible in electrical engineering!" Photonics as a field began with the invention of the maser and laser in 1958 to 1960. Other developments followed: the laser diode in the 1970s, optical fibers for transmitting information, and the erbium-doped fiber amplifier. These inventions formed the basis for the telecommunications revolution of the late 20th century and provided the infrastructure for the Internet.
Though coined earlier, the term photonics came into common use in the 1980s as fiber-optic data transmission was adopted by telecommunications network operators. At that time, the term was used widely at Bell Laboratories. Its use was confirmed when the IEEE Lasers and Electro-Optics Society established an archival journal named Photonics Technology Letters at the end of the 1980s.
During the period leading up to the dot-com crash circa 2001, photonics was a field focused largely on optical telecommunications. However, photonics covers a huge range of science and technology applications, including laser manufacturing, biological and chemical sensing, medical diagnostics and therapy, display technology, and optical computing. Further growth of photonics is likely if current silicon photonics developments are successful.
Relationship to other fields
Classical optics
Photonics is closely related to optics. Classical optics long preceded the discovery that light is quantized, when Albert Einstein famously explained the photoelectric effect in 1905. Optics tools include the refracting lens, the reflecting mirror, and various optical components and instruments developed throughout the 15th to 19th centuries. Key tenets of classical optics, such as Huygens Principle, developed in the 17th century, Maxwell's Equations and the wave equations, developed in the 19th, do not depend on quantum properties of light.
Modern optics
Photonics is related to quantum optics, optomechanics, electro-optics, optoelectronics and quantum electronics. However, each area has slightly different connotations by scientific and government communities and in the marketplace. Quantum optics often connotes fundamental research, whereas photonics is used to connote applied research and development.
The term photonics more specifically connotes:
The particle properties of light,
The potential of creating signal processing device technologies using photons,
The practical application of optics, and
An analogy to electronics.
The term optoelectronics connotes devices or circuits that comprise both electrical and optical functions, i.e., a thin-film semiconductor device. The term electro-optics came into earlier use and specifically encompasses nonlinear electrical-optical interactions applied, e.g., as bulk crystal modulators such as the Pockels cell, but also includes advanced imaging sensors.
An important aspect of the modern definition of photonics is that the boundaries of the field are not universally agreed upon. According to a source on optics.org, when the publisher of Journal of Optics: A Pure and Applied Physics queried its editorial board about streamlining the journal's name, the responses revealed significant differences in how the terms "optics" and "photonics" describe the subject area, with some descriptions proposing that "photonics embraces optics". In practice, as the field evolves, "modern optics" and photonics are increasingly used interchangeably in scientific jargon.
Emerging fields
Photonics also relates to the emerging science of quantum information and quantum optics. Other emerging fields include:
Optoacoustics or photoacoustic imaging, in which laser energy delivered into biological tissues is absorbed and converted into heat, leading to ultrasonic emission.
Optomechanics, which involves the study of the interaction between light and mechanical vibrations of mesoscopic or macroscopic objects;
Optomics, in which devices integrate both photonic and atomic devices for applications such as precision timekeeping, navigation, and metrology;
Plasmonics, which studies the interaction between light and plasmons in dielectric and metallic structures. Plasmons are the quantizations of plasma oscillations; when coupled to an electromagnetic wave, they manifest as surface plasmon polaritons or localized surface plasmons.
Polaritonics, which differs from photonics in that the fundamental information carrier is a polariton. Polaritons are a mixture of photons and phonons, and operate in the range of frequencies from 300 gigahertz to approximately 10 terahertz.
Programmable photonics, which studies the development of photonic circuits that can be reprogrammed to implement different functions in the same fashion as an electronic FPGA
Applications
Applications of photonics are ubiquitous. Included are all areas from everyday life to the most advanced science, e.g. light detection, telecommunications, information processing, photovoltaics, photonic computing, lighting, metrology, spectroscopy, holography, medicine (surgery, vision correction, endoscopy, health monitoring), biophotonics, military technology, laser material processing, art diagnostics (involving infrared reflectography, X-rays, ultraviolet fluorescence, XRF), agriculture, and robotics.
Just as applications of electronics have expanded dramatically since the first transistor was invented in 1948, the unique applications of photonics continue to emerge. Economically important applications for semiconductor photonic devices include optical data recording, fiber optic telecommunications, laser printing (based on xerography), displays, and optical pumping of high-power lasers. The potential applications of photonics are virtually unlimited and include chemical synthesis, medical diagnostics, on-chip data communication, sensors, laser defense, and fusion energy, to name several interesting additional examples.
Consumer equipment: barcode scanner, printer, CD/DVD/Blu-ray devices, remote control devices
Telecommunications: fiber-optic communications, optical down converter to microwave
Renewable Energy: Solar power systems
Medicine: correction of poor eyesight, laser surgery, surgical endoscopy, tattoo removal
Industrial manufacturing: the use of lasers for welding, drilling, cutting, and various methods of surface modification
Construction: laser leveling, laser rangefinding, smart structures
Aviation: photonic gyroscopes lacking mobile parts
Military: IR sensors, command and control, navigation, search and rescue, mine laying and detection
Entertainment: laser shows, beam effects, holographic art
Information processing
Passive daytime radiative cooling
Sensors: LIDAR, sensors for consumer electronics
Metrology: time and frequency measurements, rangefinding
Photonic computing: clock distribution and communication between computers, printed circuit boards, or within optoelectronic integrated circuits; in the future: quantum computing
Microphotonics and nanophotonics usually includes photonic crystals and solid state devices.
Overview of photonics research
The science of photonics includes investigation of the emission, transmission, amplification, detection, and modulation of light.
Light sources
Photonics commonly uses semiconductor-based light sources, such as light-emitting diodes (LEDs), superluminescent diodes, and lasers. Other light sources include single photon sources, fluorescent lamps, cathode-ray tubes (CRTs), and plasma screens. Note that while CRTs, plasma screens, and organic light-emitting diode displays generate their own light, liquid crystal displays (LCDs) like TFT screens require a backlight of either cold cathode fluorescent lamps or, more often today, LEDs.
Characteristic for research on semiconductor light sources is the frequent use of III-V semiconductors instead of the classical semiconductors like silicon and germanium. This is due to the special properties of III-V semiconductors that allow for the implementation of light emitting devices. Examples for material systems used are gallium arsenide (GaAs) and aluminium gallium arsenide (AlGaAs) or other compound semiconductors. They are also used in conjunction with silicon to produce hybrid silicon lasers.
Transmission media
Light can be transmitted through any transparent medium. Glass fiber or plastic optical fiber can be used to guide the light along a desired path. In optical communications optical fibers allow for transmission distances of more than 100 km without amplification depending on the bit rate and modulation format used for transmission. A very advanced research topic within photonics is the investigation and fabrication of special structures and "materials" with engineered optical properties. These include photonic crystals, photonic crystal fibers and metamaterials.
Amplifiers
Optical amplifiers are used to amplify an optical signal. Optical amplifiers used in optical communications are erbium-doped fiber amplifiers, semiconductor optical amplifiers, Raman amplifiers and optical parametric amplifiers. A very advanced research topic on optical amplifiers is the research on quantum dot semiconductor optical amplifiers.
Detection
Photodetectors detect light. Photodetectors range from very fast photodiodes for communications applications over medium speed charge coupled devices (CCDs) for digital cameras to very slow solar cells that are used for energy harvesting from sunlight. There are also many other photodetectors based on thermal, chemical, quantum, photoelectric and other effects.
Modulation
Modulation of a light source is used to encode information on a light source. Modulation can be achieved by the light source directly. One of the simplest examples is to use a flashlight to send Morse code. Another method is to take the light from a light source and modulate it in an external optical modulator.
An additional topic covered by modulation research is the modulation format. On-off keying has been the commonly used modulation format in optical communications. In the last years more advanced modulation formats like phase-shift keying or even orthogonal frequency-division multiplexing have been investigated to counteract effects like dispersion that degrade the quality of the transmitted signal.
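As a rough illustration of the simplest case, the following Python sketch encodes a bit sequence with on-off keying as a sampled intensity waveform and recovers it by block-averaging and thresholding; the sample rate, intensity levels and threshold here are arbitrary choices for the example rather than properties of any real transmitter:

import numpy as np

def ook_modulate(bits, samples_per_bit=8, high=1.0, low=0.0):
    # On-off keying: each bit becomes a block of constant optical intensity,
    # "light on" for 1 and "light off" for 0.
    levels = np.where(np.asarray(bits, dtype=int) == 1, high, low)
    return np.repeat(levels, samples_per_bit)

def ook_demodulate(waveform, samples_per_bit=8, threshold=0.5):
    # Average each block and compare against a threshold to recover the bits.
    blocks = np.reshape(waveform, (-1, samples_per_bit))
    return (blocks.mean(axis=1) > threshold).astype(int).tolist()

tx = ook_modulate([1, 0, 1, 1, 0])
print(ook_demodulate(tx))  # [1, 0, 1, 1, 0]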
Photonic systems
Photonics also includes research on photonic systems. This term is often used for optical communication systems. This area of research focuses on the implementation of photonic systems like high speed photonic networks. This also includes research on optical regenerators, which improve optical signal quality.
Photonic integrated circuits
Photonic integrated circuits (PICs) are optically active integrated semiconductor photonic devices. The leading commercial application of PICs are optical transceivers for data center optical networks. PICs fabricated on III-V indium phosphide semiconductor wafer substrates were the first to achieve commercial success; PICs based on silicon wafer substrates are now also a commercialized technology.
Key Applications for Integrated Photonics include:
Data Center Interconnects: Data centers continue to grow in scale as companies and institutions store and process more information in the cloud. With the increase in data center compute, the demands on data center networks correspondingly increase. Optical cables can support greater lane bandwidth at longer transmission distances than copper cables. For short-reach distances and up to 40 Gbit/s data transmission rates, non-integrated approaches such as vertical-cavity surface-emitting lasers can be used for optical transceivers on multi-mode optical fiber networks. Beyond this range and bandwidth, photonic integrated circuits are key to enable high-performance, low-cost optical transceivers.
Analog RF Signal Applications: Using the GHz precision signal processing of photonic integrated circuits, radiofrequency (RF) signals can be manipulated with high fidelity to add or drop multiple channels of radio, spread across an ultra-broadband frequency range. In addition, photonic integrated circuits can remove background noise from an RF signal with unprecedented precision, which will increase the signal to noise performance and make possible new benchmarks in low power performance. Taken together, this high precision processing enables us to now pack large amounts of information into ultra-long-distance radio communications.
Sensors: Photons can also be used to detect and differentiate the optical properties of materials. They can identify chemical or biochemical gases from air pollution, organic produce, and contaminants in the water. They can also be used to detect abnormalities in the blood, such as low glucose levels, and measure biometrics such as pulse rate. Photonic integrated circuits are being designed as comprehensive and ubiquitous sensors with glass/silicon, and embedded via high-volume production in various mobile devices. Mobile platform sensors are enabling us to more directly engage with practices that better protect the environment, monitor food supply and keep us healthy.
LIDAR and other phased array imaging: Arrays of PICs can take advantage of phase delays in the light reflected from objects with three-dimensional shapes to reconstruct 3D images, and Light Imaging, Detection and Ranging (LIDAR) with laser light can offer a complement to radar by providing precision imaging (with 3D information) at close distances. This new form of machine vision is having an immediate application in driverless cars to reduce collisions, and in biomedical imaging. Phased arrays can also be used for free-space communications and novel display technologies. Current versions of LIDAR predominantly rely on moving parts, making them large, slow, low resolution, costly, and prone to mechanical vibration and premature failure. Integrated photonics can realize LIDAR within a footprint the size of a postage stamp, scan without moving parts, and be produced in high volume at low cost.
Biophotonics
Biophotonics employs tools from the field of photonics to the study of biology. Biophotonics mainly focuses on improving medical diagnostic abilities (for example for cancer or infectious diseases) but can also be used for environmental or other applications. The main advantages of this approach are speed of analysis, non-invasive diagnostics, and the ability to work in-situ.
| Physical sciences | Optics | Physics |
185041 | https://en.wikipedia.org/wiki/MSN | MSN | MSN is a web portal and related collection of Internet services and apps provided by Microsoft. The main webpage provides news, weather, sports, finance and other content curated from hundreds of different sources that Microsoft has partnered with. MSN is based in the United States and offers international versions of its portal for dozens of countries around the world; its dedicated app is currently available for Android and iOS systems.
MSN originally launched on August 24, 1995, alongside the release of Windows 95, as a subscription-based dial-up online service called The Microsoft Network; this later became an Internet service provider named MSN Dial-up. At the same time, the company launched a new web portal named Microsoft Internet Start and set it as the first default home page of Internet Explorer, its web browser. In 1998, Microsoft renamed and moved this web portal to the domain name www.msn.com, where it has remained since. Microsoft subsequently used the 'MSN' brand name for a wide variety of products and services over the years, notably Hotmail (later Outlook.com), Messenger (which was once synonymous with 'MSN' in Internet slang), and its web search engine, which is now Bing, and several other rebranded and discontinued services. In 2014, Microsoft reworked and relaunched the MSN website and suite of apps offered.
History
Microsoft Internet Start
From 1995 to 1998, the MSN.com domain was used by Microsoft primarily to promote MSN as an online service and Internet service provider. At the time, MSN.com also offered a custom start page and an Internet tutorial, but Microsoft's major web portal was known as "Microsoft Internet Start", and was located at home.microsoft.com.
Internet Start served as the default home page for Internet Explorer and offered basic information such as news, weather, sports, stocks, entertainment reports, links to other websites on the Internet, articles by Microsoft staff members, and software updates for Windows. Microsoft's original news website (now NBCNews.com) which launched in 1996, was also tied closely to the Internet Start portal.
MSN.com
In 1998, the largely underutilized 'MSN.com' domain name was combined with Microsoft Internet Start and reinvented as both a web portal and as the brand for a family of sites produced inside Microsoft's Interactive Media Group. The new website put MSN in direct competition with sites such as Yahoo!, Excite, and Go Network. Because the new format opened up MSN's content to the world for free, the Internet service provider and subscription service were renamed to MSN Internet Access at that time. (That service eventually became known as MSN Dial-up.)
The relaunched MSN.com contained a whole family of sites, including original content, channels that were carried over from 'web shows' that were part of Microsoft's MSN 2.0 experiment with its Internet service provider in 1996–97, and new features that were rapidly added. MSN.com became the successor to the default Internet Explorer start page, as all of the previous 'Microsoft Internet Start' website was merged with MSN.com.
Some of the original websites that Microsoft launched during that era remain active in some form today. Microsoft Investor, a business news and investments service that was once produced in conjunction with CNBC, is now MSN Money; CarPoint, an automobile comparison and shopping service, is now MSN Autos; and the Internet Gaming Zone, a website offering online casual games, is now MSN Games. Other websites since divested by Microsoft include the travel website Expedia, the online magazine Slate, and the local event and city search website Sidewalk.com.
In the late 1990s, Microsoft collaborated with many other service providers, as well as other Microsoft departments, to expand the range of MSN's services. Some examples include MSN adCenter, MSN Shopping (affiliated with eBay, PriceGrabber and Shopping.com), and the Encarta encyclopedia with various levels of access to information.
Since then, MSN.com has remained a popular destination, launching many new services and content sites. MSN's Hotmail and Messenger services were promoted from the MSN.com portal, which provided a central place for all of MSN's content. MSN Search (now Bing), a dedicated search engine, launched in 1999. The single sign-in service for Microsoft's online services, Microsoft Passport (now Microsoft account), also launched across all MSN services in 1999. The MSN.com portal and related group of services under the 'MSN' umbrella remained largely the same in the early 2000s.
The sports section of the MSN portal was ESPN.com from 2001 to 2004, and FoxSports.com from 2004 to 2014. MSN had an exclusive partnership with MSNBC.com for news content from 1996 until 2012, when Microsoft sold its remaining stake in msnbc.com to NBCUniversal and the website was renamed NBCNews.com. Since then, MSN has launched 'MSN News', an in-house news operation.
As of May 2005, MSN.com was the second most visited portal website in the United States with a share of 23.2 percent, behind Yahoo! which held a majority.
MSN released a preview of an updated home page and logo on November 3, 2009. It was originally expected to be widely available to over 100 million U.S. customers by early 2010. MSN rolled out the newer logo, together with a redesign of the overall website, on December 25, 2009.
In 2012, MSN announced on its blog that it would be unveiling a new version of the MSN.com home page on October 26, exclusively for Windows 8, saying that the new version would be "clean, simple, and built for touch". Microsoft said it would be more app-like due to the speed of Internet Explorer 10. More new features included 'Flip Ahead', which allowed users to swipe from one article to the next. MSN for Windows 8 also had new deals with the AP and Reuters.
Rebranding of services
Many of MSN's services were reorganized in 2005 and 2006 under a new brand name that Microsoft championed at the time, Windows Live. This move was part of Microsoft's strategy to improve its online offerings using the Windows brand name. The company also overhauled its online software and services due to increasing competition from rivals such as Yahoo! and Google. The new name was introduced one service at a time. The group of Windows Live services used Web 2.0 technology to offer features and functionality through a web browser that were traditionally only available through dedicated software programs.
Some of the MSN services affected by the rebranding included MSN Hotmail, which became Windows Live Hotmail (now Outlook.com); MSN Messenger, which became Windows Live Messenger (now integrated into Skype); MSN Search, which became Live Search (now known as Bing); MSN Virtual Earth, which became Live Search Maps (now Bing Maps); MSN Spaces, which became Windows Live Spaces; MSN Alerts, which became Windows Live Alerts; and MSN Groups, which became Windows Live Groups. Some other services, such as MSN Direct, remained a part of the MSN family without transitioning to Windows Live.
Following the launch of Windows Live, the MSN brand took on a different focus. MSN became primarily an online content provider of news, entertainment, and common interest topics through its web portal, MSN.com, while Windows Live provided most of Microsoft's online software and services. In 2012, Microsoft began to phase out the Windows Live brand, referring to each service separately by its individual brand name without any 'Windows' prefix or association.
Subsequent redesign
Microsoft launched a completely rewritten and redesigned MSN website, making use of the company's modern design language, on September 30, 2014. The new MSN portal features a new version of the logo that follows a style similar to other current Microsoft products. The website no longer offers original content, instead employing editors to repurpose existing content from partners at popular and trusted organizations. Much of the existing content on MSN was eliminated as the website was simplified into a new home page and categories, some of which have corresponding apps:
News: The latest news headlines and articles from a variety of hand-picked sources. Synced with the News app.
Weather: Current weather conditions, forecasts, maps, news, and traffic. Synced with the Weather app.
Entertainment: TV, movies, music, and celebrity news, as well as theater showtimes, tickets, and TV listings. Based on the former Bing Entertainment service. Also includes the MSN Games website for online casual games.
Sports: Up-to-the-minute scores, standings, and headlines from leagues worldwide. Synced with the Sports app.
Money: Stock market tickers and watchlists, personal finance, real estate, investments, currency converter, and more. Synced with the Money app.
Lifestyle: Headlines, features, and other content related to style, home & garden, family, smart living, relationships, and horoscopes.
Health & Fitness: Tools and information about weight loss, strength, exercise, nutrition, medicine, and more.
Food & Drink: Recipes, cooking tips, news from chefs, cocktails, and shopping lists.
Travel: Destinations, trip ideas, hotel search, flight search, flight status, and arrivals and departures. Previously based on Farecast.
Autos: Research and buying advice, auto-related news, information for enthusiasts, and coverage of auto shows worldwide.
Video: Trending and viral videos, comedy and pop culture, and videos from other MSN categories. Integrates with video search from Bing Videos.
The top of the home page provides access to Microsoft services Bing, Outlook.com, Skype, Office Online, OneNote, OneDrive, Bing Maps, and Groove Music, as well as popular social media services Facebook and Twitter. Signing into MSN with a Microsoft account allows for personalized content to appear and to be synchronized across devices on the website and in the corresponding apps. The redesign of the website led to the closure of MSN's longtime personalized home page service 'My MSN', which was made up of customized RSS feeds, as the new website no longer supports user-specified RSS content. However, it added some customizability, allowing each category on the home page to be reordered or hidden.
With the 2014 relaunch, MSN now supports responsive design and eliminates the need for a separate mobile website. The redesign of MSN proved positive and helped increase traffic with an additional 10 million daily visitors after two months.
In 2022, Microsoft began phasing out MSN for Microsoft Start, with news pages being moved to Start, and ads for the website appearing on the homepage. This was reversed in November 2024, with the Microsoft Start page redirecting back to MSN.
Microsoft brought back the MSN app in November 2024.
Apps
The MSN web-based apps provide users with information from sources that publish to MSN.
Microsoft launched these apps along with the 2014 redesign of the MSN web portal, rebranding many of the Bing apps that originally shipped with Windows and Windows Phone. News, Weather, Sports, Money, and Travel first shipped with Windows 8, while Health & Fitness and Food & Drink first appeared in Windows 8.1. In December 2014, the apps became available across all the other major mobile device platforms as well: iOS, Android, and Fire OS.
The apps allow users a reasonable amount of freedom to decide which sources provide information. Each app has its own color code that is used on the live tile and internally. Originally, each app brought a unified experience with the MSN website and synchronized preferences across devices.
There are currently four apps in the suite: Start (previously News), Weather, Sports, and Money. In July 2015, Microsoft announced the discontinuation of the Food & Drink, Health & Fitness, and Travel apps on all platforms, and that they would not be bundled with Windows 10; those three apps are no longer offered.
After Microsoft's acquisition of Nokia's mobile phone division, Microsoft also started bundling MSN services with its Nokia-branded feature phones, though the only supported model was the Nokia 215. In addition to these apps, Microsoft developed a separate set of mobile apps specifically for MSN China.
Microsoft Start
Microsoft Start (previously named Microsoft News) was a news aggregator and service that featured news headlines and articles chosen by editors. The app includes sections for top stories, U.S., world, money, technology, entertainment, opinion, sports, and crime, along with other miscellaneous stories. It allows users to set their own personalized favorite topics and sources, receive notifications of breaking news through alerts, filter preferred news sources, and alter font sizes to make articles easier to read.
Originally, Start included an RSS feed, but that capability was removed; Microsoft currently only allows users to subscribe to specified news sources, thereby curating news. Start uses the chaseable live tile feature introduced in the Windows 10 Anniversary Update. If a user clicks on the Microsoft News Start menu tile when a particular story is shown, the user will see a link to that story at the top of the app when it launches.
Weather
MSN Weather (originally named Bing Weather) shows weather from a user's current location or any other location worldwide, and it allows users to define their favorite places, which will synchronize back to Microsoft Start and across devices. Users can pin Weather tiles to the Start menu to see local weather conditions from multiple locations at a glance. It also offers satellite maps and has information regarding ski resorts. The app receives its weather conditions and forecasts from a variety of sources internationally. Weather uses weather conditions as the background, making it the only app that does not have a light/dark switch in Windows 10. Weather is not available for iOS; however, it comes preinstalled on the Nokia 215 phone from Microsoft Mobile that runs Series 30+; it is currently the only feature phone to have the app built-in.
Money
MSN Money (originally MoneyCentral, then MSN Moneycentral, before being rebranded as MSN Money in the mid-2000s - prior to being relaunched as a spin-off of Bing Finance) allows users to set up lists of publicly listed companies to watch, follow certain corporations and receive stock updates, get the latest headlines regarding international markets, view real-time trading figures with a 30-minute delay, track their own personal finances, calculate mortgages, get information on bonds and other financial assets, and convert currency.
Esports Hub
MSN Esports (often referred to as MSN Esports Hub) is a Bing AI-curated webpage for the growing esports industry. Users can watch integrated streams from YouTube or Twitch. Microsoft's "Watch For" AI, an algorithm originally developed for Microsoft's Mixer streaming service, uses computer vision on livestreams to alert the viewer to significant moments; this algorithm is implemented in the MSN Esports Hub. Users can also check the calendar for dates of upcoming e-sport events and tournaments or the news for updates on games and their tournaments. After the creation of the MSN Esports Hub, Microsoft acquired Smash.gg, an e-sport tournament platform.
Discontinued apps
Food & Drink
MSN Food & Drink (originally named Bing Food & Drink) is a discontinued recipe app that offers news related to foods and drinks, a personal shopping list that synchronizes across devices and the web, and a wine encyclopedia that contains information on over 1.5 million bottles of wine, over 3.3 million tasting notes, and hundreds of cocktail recipes. Users can control the app hands-free, add their own recipes from physical cookbooks or personal recipes by snapping a photo, add notes to recipes, and sort the recipes into collections. The app also collects information from famous chefs and lists them according to their style of cuisine.
Health & Fitness
MSN Health & Fitness (originally named Bing Health & Fitness) allowed users to track their intake of calories, look up nutritional information for hundreds of thousands of different foods, use a built-in GPS tracker, view step-by-step workouts and exercises with images and videos, check symptoms for various health conditions, and synchronize their health data to third-party devices such as activity trackers. MSN Health & Fitness formerly connected data with the Microsoft HealthVault, but it started using a Microsoft account with MSN's own cloud service to synchronize data when it was rebranded from Bing to MSN. The app is not related in any way to Microsoft's Xbox Fitness or Microsoft Health (the companion app for the Microsoft Band), despite being similar in function.
Sports
MSN Sports (originally named Bing Sports) displayed various sports scores and standings from hundreds of leagues around the world, as well as aggregated sports-related articles and news headlines. Sports also allowed the user to view slideshows and photo galleries, look up information about individual players and fantasy leagues, and set and track their favorite teams by selecting various topics from the hamburger menu. It also powered various predictive features within Microsoft's Cortana virtual assistant.
It was discontinued on July 20, 2021, in favor of the web portal.
Travel
MSN Travel (originally named Bing Travel) was a travel search engine that allows users to book hotels and flights, aggregates travel-related headlines, and offers detailed information about thousands of travel destinations. Data in the app is powered by various travel websites, including Expedia, formerly owned by Microsoft. Other features include finding information on local restaurants, viewing pictures (including panoramas) and historical data about destinations, and reading reviews by previous travelers. If the user is signed in, Cortana can track flights and get hotel information through the app. MSN Travel was the only app in the suite that was exclusive to Windows. The app was discontinued in September 2015 but can still be accessed via the web.
Previously, in 2008, Microsoft had acquired Farecast, a website in the computer reservations system industry that offered predictions regarding the best time to purchase airline tickets. Farecast was founded in 2003 and had collected over 175 billion airfare observations by 2007. Farecast's team of data miners used these airfare observations to build algorithms to predict future airfare price movements. Microsoft integrated it as part of its Live Search group of tools in May 2008 as Live Search Farecast; Microsoft rebranded it as Bing Travel on June 3, 2009, as part of its efforts to create a new search identity. In 2009, there were allegations that Bing Travel had copied its layouts from Kayak.com; Microsoft denied the allegations. By January 2013, Bing Travel results were powered by Kayak.com. As of January 2014, the fare prediction feature had been removed. In May 2015, Microsoft rebranded the service as MSN Travel. In August 2015, MSN Travel flight search pages changed from being powered by Kayak.com to its competitor Skyscanner.
Older mobile apps
Microsoft first offered content from its MSN web portal on mobile devices in the early 2000s, through a service called Pocket MSN (in line with its Pocket PC products of the era) and later renamed MSN Mobile. The original MSN Mobile software was preloaded on many cell phones and PDAs, and usually provided access to legacy MSN services like blogs (MSN Spaces), email (Hotmail), instant messaging (MSN Messenger), and web search (now called Bing). Some wireless carriers charged a premium to access it. As many former MSN properties were spun off to Bing, Windows Live, and other successors in the late 2000s, the Microsoft Mobile Services division took over the development of mobile apps related to those services.
In the meantime, Microsoft's MSN apps took on a more content-related focus, as did the web portal itself. Previous versions of MSN apps that were bundled with Windows Mobile and early versions of Windows Phone, as well as MSN apps for Android and iOS devices in the early 2010s, were primarily repositories for news articles found on MSN.com. Other earlier MSN mobile apps included versions of MSN Weather and MSN Money for Windows Mobile 6.5, MSN Money Stocks, and a men's magazine called 'MSN OnIt' for Windows Phone 7.
International
Microsoft's world headquarters is in the United States, so the main MSN website is based there. However, MSN has offered various international versions of its portal since its inception in 1995 for dozens of countries around the world. A list of international MSN affiliates is available at MSN Worldwide.
Following the redesign and relaunch of the MSN web portal in 2014, most international MSN websites share the same layout as the U.S. version and are largely indistinguishable from it, aside from their content. There were two exceptions: ninemsn, a longtime partnership between Microsoft and the Nine Network in Australia that launched in 1997 (Microsoft sold its stake in the venture in 2013 and ended its co-branding with Nine in 2016); and MSN China, an entirely customized version of MSN for China (Microsoft discontinued the portal in 2016, replacing it with a page that links to a number of other Chinese websites).
| Technology | Portals/Platform sites | null |
185152 | https://en.wikipedia.org/wiki/Chat%20%28bird%29 | Chat (bird) | Chats (formerly sometimes known as "chat-thrushes") are a group of small Old World insectivorous birds formerly classified as members of the thrush family (Turdidae), but following genetic DNA analysis, are now considered to belong to the Old World flycatcher family (Muscicapidae).
The name is normally applied to the more robust ground-feeding flycatchers found in Europe and Asia; most northern species are strong migrants. There are many genera, and these birds make up most of the subfamily Saxicolinae.
Other songbirds called "chats" are:
Australian chats, genera Ashbyia and Epthianura of the honeyeater family (Meliphagidae). They belong to a more ancient lineage than Saxicolinae.
American chats, genus Granatellus of the cardinal family (Cardinalidae), formerly placed in the wood-warbler family. They belong to a more modern lineage than Saxicolinae.
Yellow-breasted chat (Icteria virens), an enigmatic North American songbird tentatively placed in the wood-warbler family (Parulidae); its true relationships are unresolved.
Species in taxonomic order
Subfamily Saxicolinae
Genus Tarsiger – bush-robins
Red-flanked bluetail or orange-flanked bush-robin, Tarsiger cyanurus
Golden bush robin, Tarsiger chrysaeus
White-browed bush robin, Tarsiger indicus
Rufous-breasted bush robin, Tarsiger hyperythrus
Collared bush robin, Tarsiger johnstoniae
Genus Luscinia (4 species)
Genus Calliope (4 species)
Genus Larvivora (8 species)
Genus Erithacus – European robin
Genus Irania – white-throated robin
Genus Saxicola – bushchats and stonechats (c. 15 species)
Genus Pogonocichla
White-starred robin, Pogonocichla stellata
Genus Swynnertonia
Swynnerton's robin, Swynnertonia swynnertoni
Genus Stiphrornis – forest robins (1–5 species, depending on taxonomy)
Genus Xenocopsychus
Angola cave chat, Xenocopsychus ansorgei
Genus Saxicoloides – Indian robin (formerly)
Genus Cinclidium
Blue-fronted robin, Cinclidium frontale
Genus Myiomela
White-tailed robin, Myiomela leucura
Javan blue robin, Myiomela diana
Sumatran blue robin, Myiomela sumatrana
Genus Grandala
Grandala, Grandala coelicolor
Genus Namibornis
Herero chat, Namibornis herero
Genus Emarginata
Sickle-winged chat, Emarginata sinuata
Karoo chat, Emarginata schlegelii
Tractrac chat, Emarginata tractrac
Genus Oenanthe
Familiar chat, Oenanthe familiaris
Brown-tailed rock chat, Oenanthe scotocerca
Brown rock chat, Oenanthe fusca
Sombre rock chat, Oenanthe dubia
Blackstart, Oenanthe melanura
Moorland chat, Oenanthe sordida
Genus Myrmecocichla
Sooty chat (Myrmecocichla nigra)
Anteater chat (Myrmecocichla aethiops)
Congo moor chat (Myrmecocichla tholloni)
Ant-eating chat (Myrmecocichla formicivora)
Rüppell's black chat (Myrmecocichla melaena)
Mountain chat (Myrmecocichla monticola)
Arnot's chat (Myrmecocichla arnoti)
Ruaha chat (Myrmecocichla collaris)
Genus Thamnolaea – cliff chats
Mocking cliff chat, Thamnolaea cinnamomeiventris
White-winged cliff chat, Thamnolaea semirufa
Genus Pinarornis
Boulder chat, Pinarornis plumosus
Saxicolinae genera not usually called "chats" are:
Genus Sheppardia – akalats (9 species)
Genus Cossyphicula – white-bellied robin-chat – may belong in Cossypha
Genus Cossypha – robin-chats (15 species, excluding the white-bellied robin-chat)
Genus Cichladusa – palm-thrushes (3 species)
Genus Cercotrichas – scrub-robins or bush-chats (10 species)
Genus Myophonus, whistling thrushes
Genus Copsychus – magpie-robins or shamas (12 species)
Genus Phoenicurus – true redstarts (11 species)
Genus Enicurus – forktails (7 species)
Genus Cochoa – cochoas (4 species)
Genus Brachypteryx – (10 species)
Genus Heinrichia – great shortwing
Genus Leonardina – Bagobo babbler
Genus Oenanthe – wheatears (some 20 species)
Aberrant redstarts, possibly belonging in this subfamily:
Genus Chaimarrornis – white-capped redstart
Genus Rhyacornis (2 species)
| Biology and health sciences | Passerida | Animals |
185153 | https://en.wikipedia.org/wiki/Isostasy | Isostasy | Isostasy (Greek ísos 'equal', stásis 'standstill') or isostatic equilibrium is the state of gravitational equilibrium between Earth's crust (or lithosphere) and mantle such that the crust "floats" at an elevation that depends on its thickness and density. This concept is invoked to explain how different topographic heights can exist at Earth's surface. Although originally defined in terms of continental crust and mantle, it has subsequently been interpreted in terms of lithosphere and asthenosphere, particularly with respect to oceanic island volcanoes, such as the Hawaiian Islands.
Although Earth is a dynamic system that responds to loads in many different ways, isostasy describes the important limiting case in which crust and mantle are in static equilibrium. Certain areas (such as the Himalayas and other convergent margins) are not in isostatic equilibrium and are not well described by isostatic models.
The general term isostasy was coined in 1882 by the American geologist Clarence Dutton.
History of the concept
In the 17th and 18th centuries, French geodesists (for example, Jean Picard) attempted to determine the shape of the Earth (the geoid) by measuring the length of a degree of latitude at different latitudes (arc measurement). A party working in Ecuador was aware that its plumb lines, used to determine the vertical direction, would be deflected by the gravitational attraction of the nearby Andes Mountains. However, the deflection was less than expected, which was attributed to the mountains having low-density roots that compensated for the mass of the mountains. In other words, the low-density mountain roots provided the buoyancy to support the weight of the mountains above the surrounding terrain. Similar observations in the 19th century by British surveyors in India showed that this was a widespread phenomenon in mountainous areas. It was later found that the difference between the measured local gravitational field and what was expected for the altitude and local terrain (the Bouguer anomaly) is positive over ocean basins and negative over high continental areas. This shows that the low elevation of ocean basins and high elevation of continents is also compensated at depth.
The American geologist Clarence Dutton used the word 'isostasy' in 1889 to describe this general phenomenon. However, two hypotheses to explain the phenomenon had by then already been proposed, in 1855, one by George Airy and the other by John Henry Pratt. The Airy hypothesis was later refined by the Finnish geodesist Veikko Aleksanteri Heiskanen and the Pratt hypothesis by the American geodesist John Fillmore Hayford.
Both the Airy-Heiskanen and Pratt-Hayford hypotheses assume that isostasy reflects a local hydrostatic balance. A third hypothesis, lithospheric flexure, takes into account the rigidity of the Earth's outer shell, the lithosphere. Lithospheric flexure was first invoked in the late 19th century to explain the shorelines uplifted in Scandinavia following the melting of continental glaciers at the end of the last glaciation. It was likewise used by American geologist G. K. Gilbert to explain the uplifted shorelines of Lake Bonneville. The concept was further developed in the 1950s by the Dutch geodesist Vening Meinesz.
Models
Three principal models of isostasy are used:
The Airy–Heiskanen model – where different topographic heights are accommodated by changes in crustal thickness, in which the crust has a constant density
The Pratt–Hayford model – where different topographic heights are accommodated by lateral changes in rock density.
The Vening Meinesz, or flexural isostasy model – where the lithosphere acts as an elastic plate and its inherent rigidity distributes local topographic loads over a broad region by bending.
Airy and Pratt isostasy are statements of buoyancy, but flexural isostasy is a statement of buoyancy when deflecting a sheet of finite elastic strength. In other words, the Airy and Pratt models are purely hydrostatic, taking no account of material strength, while flexural isostasy takes into account elastic forces from the deformation of the rigid crust. These elastic forces can transmit buoyant forces across a large region of deformation to a more concentrated load.
Perfect isostatic equilibrium is possible only if mantle material is at rest. However, thermal convection is present in the mantle. This introduces viscous forces that are not accounted for in the static theory of isostasy. The isostatic anomaly or IA is defined as the Bouguer anomaly minus the gravity anomaly due to the subsurface compensation, and is a measure of the local departure from isostatic equilibrium.
At the center of a level plateau, it is approximately equal to the free air anomaly. Models such as deep dynamic isostasy (DDI) include such viscous forces and are applicable to a dynamic mantle and lithosphere. Measurements of the rate of isostatic rebound (the return to isostatic equilibrium following a change in crust loading) provide information on the viscosity of the upper mantle.
Airy
The basis of the model is Pascal's law, and particularly its consequence that, within a fluid in static equilibrium, the hydrostatic pressure is the same on every point at the same elevation (surface of hydrostatic compensation):
h1⋅ρ1 = h2⋅ρ2 = h3⋅ρ3 = ... hn⋅ρn
For the simplified picture shown, the depth of the mountain belt roots (b1) is calculated as follows:
b1 = h1⋅ρc / (ρm − ρc)
where ρm is the density of the mantle (ca. 3,300 kg m−3) and ρc is the density of the crust (ca. 2,750 kg m−3). Thus, generally:
b1 ≅ 5⋅h1
In the case of negative topography (a marine basin), the balancing of lithospheric columns gives:
b2 = h2⋅(ρc − ρw) / (ρm − ρc)
where ρm is the density of the mantle (ca. 3,300 kg m−3), ρc is the density of the crust (ca. 2,750 kg m−3) and ρw is the density of the water (ca. 1,000 kg m−3). Thus, generally:
b2 ≅ 3.2⋅h2
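As a rough numerical check of these ratios, the following minimal Python sketch (the function names and example heights are illustrative assumptions, not part of any standard isostasy package) evaluates the Airy root and anti-root depths using the density values quoted above:

# Airy isostasy: crustal root beneath a mountain and anti-root beneath a basin
RHO_MANTLE = 3300.0  # density of the mantle, kg/m^3
RHO_CRUST = 2750.0   # density of the crust, kg/m^3
RHO_WATER = 1000.0   # density of water, kg/m^3

def airy_root(h1_km):
    """Depth b1 of the crustal root beneath topography of height h1 (km)."""
    return h1_km * RHO_CRUST / (RHO_MANTLE - RHO_CRUST)

def airy_antiroot(h2_km):
    """Crustal thinning b2 beneath a marine basin of depth h2 (km)."""
    return h2_km * (RHO_CRUST - RHO_WATER) / (RHO_MANTLE - RHO_CRUST)

print(airy_root(8.0))      # 40.0 km of root for 8 km of topography (factor of ~5)
print(airy_antiroot(5.0))  # ~15.9 km of thinning for a 5 km deep basin (factor of ~3.2)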
Pratt
For the simplified model shown, the new density ρ1 is given by:
ρ1 = ρc⋅c / (h1 + c)
where h1 is the height of the mountain and c the thickness of the crust.
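For a sense of scale, a brief Python sketch of the Pratt relation follows; the reference crustal thickness of 35 km and the 2 km topographic height are illustrative assumptions, not values given in the text:

# Pratt isostasy: reduced column density that compensates elevated topography
RHO_CRUST = 2750.0  # reference crustal density, kg/m^3

def pratt_density(h1_km, c_km):
    """Density of a column of height h1 + c that balances a reference column of thickness c."""
    return RHO_CRUST * c_km / (h1_km + c_km)

print(pratt_density(2.0, 35.0))  # ~2601 kg/m^3 beneath 2 km of topography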
Vening Meinesz / flexural
This hypothesis was suggested to explain how large topographic loads such as seamounts (e.g. Hawaiian Islands) could be compensated by regional rather than local displacement of the lithosphere. This is the more general solution for lithospheric flexure, as it approaches the locally compensated models above as the load becomes much larger than a flexural wavelength or the flexural rigidity of the lithosphere approaches zero.
For example, the vertical displacement z of a region of ocean crust would be described by the differential equation
D⋅d⁴z/dx⁴ + (ρm − ρw)⋅g⋅z = P(x)
where ρm and ρw are the densities of the asthenosphere and ocean water, g is the acceleration due to gravity, and P(x) is the load on the ocean crust. The parameter D is the flexural rigidity, defined as
D = E⋅Tc³ / [12⋅(1 − ν²)]
where E is Young's modulus, ν is Poisson's ratio, and Tc is the thickness of the lithosphere. Solutions to this equation have a characteristic wave number
κ = [(ρm − ρw)⋅g / (4⋅D)]^(1/4)
As the rigid layer becomes weaker, κ approaches infinity, and the behavior approaches the pure hydrostatic balance of the Airy-Heiskanen hypothesis.
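A brief numerical sketch of these flexural quantities follows. The values of Young's modulus, Poisson's ratio and the elastic thickness are assumptions chosen only for illustration; none of them are specified in the text above:

import math

E = 100e9             # Young's modulus, Pa (assumed)
NU = 0.25             # Poisson's ratio (assumed)
T_LITHOSPHERE = 25e3  # elastic thickness of the lithosphere, m (assumed)
G = 9.81              # gravitational acceleration, m/s^2
RHO_M = 3300.0        # asthenosphere density, kg/m^3
RHO_W = 1000.0        # ocean water density, kg/m^3

# Flexural rigidity D and characteristic wave number kappa from the formulas above
D = E * T_LITHOSPHERE**3 / (12.0 * (1.0 - NU**2))
kappa = ((RHO_M - RHO_W) * G / (4.0 * D)) ** 0.25

print(f"flexural rigidity D ~ {D:.2e} N m")                          # ~1.4e23 N m
print(f"flexural wavelength ~ {2 * math.pi / kappa / 1e3:.0f} km")   # a few hundred km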
Depth of compensation
The depth of compensation (also known as the compensation level, compensation depth, or level of compensation) is the depth below which the pressure is identical across any horizontal surface. In stable regions, it lies in the deep crust, but in active regions, it may lie below the base of the lithosphere. In the Pratt model, it is the depth below which all rock has the same density; above this depth, density is lower where topographic elevation is greater.
Implications
Deposition and erosion
When large amounts of sediment are deposited on a particular region, the immense weight of the new sediment may cause the crust below to sink. Similarly, when large amounts of material are eroded away from a region, the land may rise to compensate. Therefore, as a mountain range is eroded, the (reduced) range rebounds upwards (to a certain extent) to be eroded further. Some of the rock strata now visible at the ground surface may have spent much of their history at great depths below the surface buried under other strata, to be eventually exposed as those other strata eroded away and the lower layers rebounded upwards.
An analogy may be made with an iceberg, which always floats with a certain proportion of its mass below the surface of the water. If snow falls to the top of the iceberg, the iceberg will sink lower in the water. If a layer of ice melts off the top of the iceberg, the remaining iceberg will rise. Similarly, Earth's lithosphere "floats" in the asthenosphere.
Continental collisions
When continents collide, the continental crust may thicken at their edges in the collision. It is also very common for one of the plates to be underthrust beneath the other plate. The result is that the crust in the collision zone becomes as much as thick, versus for average continental crust. As noted above, the Airy hypothesis predicts that the resulting mountain roots will be about five times deeper than the height of the mountains, or 32 km versus 8 km. In other words, most of the thickened crust moves downwards rather than up, just as most of an iceberg is below the surface of the water.
However, convergent plate margins are tectonically highly active, and their surface features are partially supported by dynamic horizontal stresses, so that they are not in complete isostatic equilibrium. These regions show the highest isostatic anomalies on the Earth's surface.
Mid-ocean ridges
Mid-ocean ridges are explained by the Pratt hypothesis as overlying regions of unusually low density in the upper mantle. This reflects thermal expansion from the higher temperatures present under the ridges.
Basin and Range
In the Basin and Range Province of western North America, the isostatic anomaly is small except near the Pacific coast, indicating that the region is generally near isostatic equilibrium. However, the depth to the base of the crust does not strongly correlate with the height of the terrain. This provides evidence (via the Pratt hypothesis) that the upper mantle in this region is inhomogeneous, with significant lateral variations in density.
Ice sheets
The formation of ice sheets can cause Earth's surface to sink. Conversely, isostatic post-glacial rebound is observed in areas once covered by ice sheets that have now melted, such as around the Baltic Sea and Hudson Bay. As the ice retreats, the load on the lithosphere and asthenosphere is reduced and they rebound back towards their equilibrium levels. In this way, it is possible to find former sea cliffs and associated wave-cut platforms hundreds of metres above present-day sea level. The rebound movements are so slow that the uplift caused by the ending of the last glacial period is still continuing.
In addition to the vertical movement of the land and sea, isostatic adjustment of the Earth also involves horizontal movements. It can cause changes in Earth's gravitational field and rotation rate, polar wander, and earthquakes.
Lithosphere-asthenosphere boundary
The hypothesis of isostasy is often used to determine the position of the lithosphere-asthenosphere boundary (LAB).
| Physical sciences | Geophysics | Earth science |
185234 | https://en.wikipedia.org/wiki/Darter | Darter | The darters, anhingas, or snakebirds are mainly tropical waterbirds in the family Anhingidae, which contains a single genus, Anhinga. There are four living species, three of which are very common and widespread while the fourth is rarer and classified as near-threatened by the IUCN. The term snakebird is usually used without any additions to signify whichever of the completely allopatric species occurs in any one region. It refers to their long thin neck, which has a snake-like appearance when they swim with their bodies submerged, or when mated pairs twist it during their bonding displays. "Darter" is used with a geographical term when referring to particular species. It alludes to their manner of procuring food, as they impale fishes with their thin, pointed beak. The American darter (A. anhinga) is more commonly known as the anhinga. It is sometimes called "water turkey" in the southern United States; though the anhinga is quite unrelated to the wild turkey, they are both large, blackish birds with long tails that are sometimes hunted for food.
Description
Anhingidae are large birds with sexually dimorphic plumage. They measure about in length, with a wingspan around , and weigh some . The males have black and dark-brown plumage, a short erectile crest on the nape and a larger bill than the female. The females have much paler plumage, especially on the neck and underparts, and are a bit larger overall. Both have grey stippling on long scapulars and upper wing coverts. The sharply pointed bill has serrated edges, a desmognathous palate and no external nostrils. The darters have completely webbed feet, and their legs are short and set far back on the body.
There is no eclipse plumage, but the bare parts vary in color around the year. During breeding, however, their small gular sac changes from pink or yellow to black, and the bare facial skin, otherwise yellow or yellow-green, turns turquoise. The iris changes in color between yellow, red or brown seasonally. The young hatch naked, but soon grow white or tan down.
Darter vocalizations include a clicking or rattling when flying or perching. In the nesting colonies, adults communicate with croaks, grunts or rattles. During breeding, adults sometimes give a caw or sighing or hissing calls. Nestlings communicate with squealing or squawking calls.
Distribution and ecology
Darters are mostly tropical in distribution, ranging into subtropical and barely into warm temperate regions. They typically inhabit fresh water lakes, rivers, marshes, swamps, and are less often found along the seashore in brackish estuaries, bays, lagoons and mangrove. Most are sedentary and do not migrate; the populations in the coolest parts of the range may migrate however. Their preferred mode of flight is soaring and gliding; in flapping flight they are rather cumbersome. On dry land, darters walk with a high-stepped gait, wings often spread for balance, just like pelicans do. They tend to gather in flocks – sometimes up to about 100 birds – and frequently associate with storks, herons or ibises, but are highly territorial on the nest: despite being a colonial nester, breeding pairs – especially males – will stab at any other bird that ventures within reach of their long neck and bill. The Oriental darter (A. melanogaster sensu stricto) is a Near Threatened species. Habitat destruction along with other human interferences (such as egg collection and pesticide overuse) are the main reasons for declining darter populations.
Diet
Darters feed mainly on mid-sized fish; far more rarely, they eat other aquatic vertebrates and large invertebrates of comparable size. These birds are foot-propelled divers which quietly stalk and ambush their prey; then they use their sharply pointed bill to impale the food animal. They do not dive deep but make use of their low buoyancy made possible by wettable plumage, small air sacs and denser bones. On the underside of the cervical vertebrae 5–7 is a keel, which allows for muscles to attach to form a hinge-like mechanism that can project the neck, head and bill forward like a throwing spear. After they have stabbed the prey, they return to the surface where they toss their food into the air and catch it again, so that they can swallow it head-first. Like cormorants, they have a vestigial preen gland and their plumage gets wet during diving. To dry their feathers after diving, darters move to a safe location and spread their wings. Darters go through a synchronous moult of all their primaries and secondaries making them temporarily flightless, although it is possible that some individuals go through incomplete moults.
Predation
Predators of darters are mainly large carnivorous birds, including passerines like the Australian raven (Corvus coronoides) and house crow (Corvus splendens), and birds of prey such as marsh harriers (Circus aeruginosus complex) or Pallas's fish eagle (Haliaeetus leucoryphus). Predation by Crocodylus crocodiles has also been noted. But many would-be predators know better than to try to catch a darter. The long neck and pointed bill in combination with the "darting" mechanism make the birds dangerous even to larger carnivorous mammals, and they will actually move toward an intruder to attack rather than defending passively or fleeing.
Breeding
They usually breed in colonies, occasionally mixed with cormorants or herons. The darters pair bond monogamously at least for a breeding season. There are many different types of displays used for mating. Males display to attract females by raising (but not stretching) their wings to wave them in an alternating fashion, bowing and snapping the bill, or giving twigs to potential mates. To strengthen the pair bond, partners rub their bills or wave, point upwards or bow their necks in unison. When one partner comes to relieve the other at the nest, males and females use the same display the male employs during courtship; during changeovers, the birds may also "yawn" at each other.
Breeding is seasonal (peaking in March/April) at the northern end of their range; elsewhere they can be found breeding all year round. The nests are made of twigs and lined with leaves; they are built in trees or reeds, usually near water. Typically, the male gathers nesting material and brings it to the female, which does most of the actual construction work. Nest construction takes only a few days (about three at most), and the pairs copulate at the nest site. The clutch size is two to six eggs (usually about four) which have a pale green color. The eggs are laid within 24–48 hours and incubated for 25 to 30 days, starting after the first has been laid; they hatch asynchronously. To provide warmth to the eggs, the parents will cover them with their large webbed feet, because like their relatives they lack a brood patch. The last young to hatch will usually starve in years with little food available. Bi-parental care is given and the young are considered altricial. They are fed by regurgitation of partly digested food when young, switching to entire food items as they grow older. After fledging, the young are fed for about two more weeks while they learn to hunt for themselves.
These birds reach sexual maturity by about two years, and generally live to around nine years. The maximum possible lifespan of darters seems to be about sixteen years.
Darter eggs are edible and considered delicious by some; they are locally collected by humans as food. The adults are also eaten occasionally, as they are rather meaty birds (comparable to a domestic duck); like other fish-eating birds such as cormorants or seaducks they do not taste particularly good though. Darter eggs and nestlings are also collected in a few places to raise the young. Sometimes this is done for food, but some nomads in Assam and Bengal train tame darters to be employed as in cormorant fishing. With an increasing number of nomads settling down in recent decades, this cultural heritage is in danger of being lost. On the other hand, as evidenced by the etymology of "anhinga" detailed below, the Tupi seem to have considered the anhinga a kind of bird of ill omen.
Systematics and evolution
The genus Anhinga was introduced by the French zoologist Mathurin Jacques Brisson in 1760, with the anhinga or American darter (Anhinga anhinga) as the type species. Anhinga is derived from the Tupi ajíŋa (also transcribed áyinga or ayingá), which in local mythology refers to a malevolent demonic forest spirit; it is often translated as "devil bird". The name changed to anhingá or anhangá as it was transferred to the Tupi–Portuguese Língua Geral. However, in its first documented use as an English term in 1818, it referred to an Old World darter. Ever since, it has also been used for the modern genus Anhinga as a whole.
This family is very closely related to the other families in the suborder Sulae, i.e. the Phalacrocoracidae (cormorants and shags) and the Sulidae (gannets and boobies). Cormorants and darters are extremely similar as regards their body and leg skeletons and may be sister taxa. In fact, several darter fossils were initially believed to be cormorants or shags (see below). Some earlier authors included the darters in the Phalacrocoracidae as subfamily Anhingina, but this is nowadays generally considered overlumping. However, as this agrees quite well with the fossil evidence, some unite the Anhingidae and Phalacrocoracidae in a superfamily Phalacrocoracoidea.
The Sulae are also united by their characteristic display behavior, which agrees with the phylogeny as laid out by anatomical and DNA sequence data. While the darters' lack of many display behaviors is shared with gannets (and that of a few with cormorants), these are all symplesiomorphies that are absent in frigatebirds, tropicbirds and pelicans also. Like cormorants but unlike other birds, darters use their hyoid bone to stretch the gular sac in display. Whether the pointing display of mates is another synapomorphy of darters and cormorants that was dropped again in some of the latter, or whether it evolved independently in darters and those cormorants that do it, is not clear. The male raised-wing display seems to be a synapomorphy of the Sulae; like almost all cormorants and shags but unlike almost all gannets and boobies, darters keep their wrists bent as they lift the wings in display, but their alternating wing-waving, which they also show before take-off, is unique. That they often balance with their outstretched wings during walking is probably an autapomorphy of darters, necessitated by their being plumper than the other Sulae.
The Sulae were traditionally included in the Pelecaniformes, then a paraphyletic group of "higher waterbirds". The supposed traits uniting them, like all-webbed toes and a bare gular sac, are now known to be convergent, and pelicans are apparently closer relatives of storks than of the Sulae. Hence, the Sulae and the frigatebirds – and some prehistoric relatives – are increasingly separated as the Suliformes, which is sometimes dubbed "Phalacrocoraciformes".
Living species
There are four living species of darters recognized, all in the genus Anhinga, although the Old World ones were often lumped together as subspecies of A. melanogaster. They may form a superspecies with regard to the more distinct anhinga:
Extinct "darters" from Mauritius and Australia known only from bones were described as Anhinga nana ("Mauritian darter") and Anhinga parva. But these are actually misidentified bones of the long-tailed cormorant (Microcarbo/Phalacrocorax africanus) and the little pied cormorant (M./P. melanoleucos), respectively. In the former case, however, the remains are larger than those of the geographically closest extant population of long-tailed cormorants on Madagascar: they thus might belong to an extinct subspecies (Mauritian cormorant), which would have to be called Microcarbo africanus nanus (or Phalacrocorax a. nanus) – quite ironically, as the Latin term nanus means dwarf. The Late Pleistocene Anhinga laticeps is not specifically distinct from the Australasian darter; it might have been a large paleosubspecies of the last ice age.
Fossil record
The fossil record of the Anhingidae is rather dense, but very apomorphic already and appears to be lacking its base. The other families placed in the Phalacrocoraciformes sequentially appear throughout the Eocene, the most distinct – frigatebirds – being known since almost 50 Ma (million years ago) and probably of Paleocene origin. With fossil gannets being known since the mid-Eocene (c. 40 Ma) and fossil cormorants appearing soon thereafter, the origin of the darters as a distinct lineage was presumably around 50–40 Ma, maybe a bit earlier.
Fossil Anhingidae are known since the Early Miocene; a number of prehistoric darters similar to those still alive have been described, as well as some more distinct genera now extinct. The diversity was highest in South America, and thus it is likely that the family originated there. Some of the genera which ultimately became extinct were very large, and a tendency to become flightless has been noted in prehistoric darters. Their distinctness has been doubted, but this was due to the supposed "Anhinga" fraileyi being rather similar to Macranhinga, rather than due to them resembling the living species:
Meganhinga Alvarenga, 1995 (Early Miocene of Chile)
"Paranavis" (Middle/Late Miocene of Paraná, Argentina) – a nomen nudum
Macranhinga Noriega, 1992 (Middle/Late Miocene – Late Miocene/Early Pliocene of SC South America) – may include "Anhinga" fraileyi
Giganhinga Rinderknecht & Noriega, 2002 (Late Pliocene/Early Pleistocene of Uruguay)
Anhinga
Prehistoric members of Anhinga were presumably distributed in similar climates as today, ranging into Europe in the hotter and wetter Miocene. With their considerable stamina and continent-wide distribution abilities (as evidenced by the anhinga and the Old World superspecies), the smaller lineage has survived for over 20 Ma. As evidenced by the fossil species' biogeography centered around the equator, with the younger species ranging eastwards out of the Americas, the Hadley cell seems to have been the main driver of the genus' success and survival:
Anhinga walterbolesi Worthy, 2012 (Late Oligocene to Early Miocene of central Australia)
Anhinga subvolans (Brodkorb, 1956) (Early Miocene of Thomas Farm, US) – formerly in Phalacrocorax
Anhinga cf. grandis (Middle Miocene of Colombia –? Late Pliocene of SC South America)
Anhinga sp. (Sajóvölgyi Middle Miocene of Mátraszőlős, Hungary) – A. pannonica?
"Anhinga" fraileyi Campbell, 1996 (Late Miocene –? Early Pliocene of SC South America) – may belong in Macranhinga
Anhinga pannonica Lambrecht, 1916 (Late Miocene of C Europe ?and Tunisia, East Africa, Pakistan and Thailand –? Sahabi Early Pliocene of Libya)
Anhinga minuta Alvarenga & Guilherme, 2003 (Solimões Late Miocene/Early Pliocene of SC South America)
Anhinga grandis Martin & Mengel, 1975 (Late Miocene –? Late Pliocene of US)
Anhinga malagurala Mackness, 1995 (Allingham Early Pliocene of Charters Towers, Australia)
Anhinga sp. (Early Pliocene of Bone Valley, US) – A. beckeri?
Anhinga hadarensis Brodkorb & Mourer-Chauviré, 1982 (Late Pliocene/Early Pleistocene of E Africa)
Anhinga beckeri Emslie, 1998 (Early – Late Pleistocene of SE US)
Protoplotus, a small Paleogene phalacrocoraciform from Sumatra, was in old times considered a primitive darter. However, it is also placed in its own family (Protoplotidae) and might be a basal member of the Sulae and/or close to the common ancestor of cormorants and darters.
Citations
General and cited sources
AnAge [2009]: Anhinga longevity data. Retrieved 2009-Sep-09.
Answers.com [2009]: darter. In: Columbia Electronic Encyclopedia (6th ed.). Columbia University Press. Retrieved 2009-Sep-09.
Christidis, Les & Boles, Walter E. (2008): Systematics and Taxonomy of Australian Birds. CSIRO Publishing, Collingwood, Victoria, Australia.
Cione, Alberto Luis; de las Mercedes Azpelicueta, María; Bond, Mariano; Carlini, Alfredo A.; Casciotta, Jorge R.; Cozzuol, Mario Alberto; de la Fuente, Marcelo; Gasparini, Zulma; Goin, Francisco J.; Noriega, Jorge; Scillatoyané, Gustavo J.; Soibelzon, Leopoldo; Tonni, Eduardo Pedro; Verzi, Diego & Guiomar Vucetich, María (2000): Miocene vertebrates from Entre Ríos province, eastern Argentina. [English with Spanish abstract] In: Aceñolaza, F.G. & Herbst, R. (eds.): El Neógeno de Argentina. INSUGEO Serie Correlación Geológica 14: 191–237.
Jobling, James A. (1991): A Dictionary of Scientific Bird Names. Oxford University Press, Oxford, UK.
Mayr, Gerald (2009): Paleogene Fossil Birds. Springer-Verlag, Heidelberg & New York.
Merriam-Webster (MW) [2009]: Online English Dictionary – Anhinga. Retrieved 2009-Sep-09.
Mlíkovský, Jirí (2002): Cenozoic Birds of the World (Part 1: Europe). Ninox Press, Prague.
Myers, P.; Espinosa, R.; Parr, C.S.; Jones, T.; Hammond, G.S. & Dewey, T.A. [2009]: Animal Diversity Web – Anhingidae. Retrieved 2009-Sep-09.
Noriega, Jorge Ignacio (1994): Las Aves del "Mesopotamiense" de la provincia de Entre Ríos, Argentina ["The birds of the 'Mesopotamian' of Entre Ríos Province, Argentina"]. Doctoral thesis, Universidad Nacional de La Plata [in Spanish]. PDF abstract
External links
Anhingidae
Extant Burdigalian first appearances
Taxa named by Ludwig Reichenbach | Biology and health sciences | Pelecanimorphae | null |
185239 | https://en.wikipedia.org/wiki/Thermal%20radiation | Thermal radiation | Thermal radiation is electromagnetic radiation emitted by the thermal motion of particles in matter. All matter with a temperature greater than absolute zero emits thermal radiation. The emission of energy arises from a combination of electronic, molecular, and lattice oscillations in a material. Kinetic energy is converted to electromagnetism due to charge-acceleration or dipole oscillation. At room temperature, most of the emission is in the infrared (IR) spectrum, though above around 525 °C (977 °F) enough of it becomes visible for the matter to visibly glow. This visible glow is called incandescence. Thermal radiation is one of the fundamental mechanisms of heat transfer, along with conduction and convection.
The primary method by which the Sun transfers heat to the Earth is thermal radiation. This energy is partially absorbed and scattered in the atmosphere, the latter process being the reason why the sky is visibly blue. Much of the Sun's radiation transmits through the atmosphere to the surface where it is either absorbed or reflected.
Thermal radiation can be used to detect objects or phenomena normally invisible to the human eye. Thermographic cameras create an image by sensing infrared radiation. These images can represent the temperature gradient of a scene and are commonly used to locate objects at a higher temperature than their surroundings. In a dark environment where visible light is at low levels, infrared images can be used to locate animals or people due to their body temperature. Cosmic microwave background radiation is another example of thermal radiation.
Blackbody radiation is a concept used to analyze thermal radiation in idealized systems. This model applies if a radiating object meets the physical characteristics of a black body in thermodynamic equilibrium. Planck's law describes the spectrum of blackbody radiation, and relates the radiative heat flux from a body to its temperature. Wien's displacement law determines the most likely frequency of the emitted radiation, and the Stefan–Boltzmann law gives the radiant intensity. Where blackbody radiation is not an accurate approximation, emission and absorption can be modeled using quantum electrodynamics (QED).
Overview
Thermal radiation is the emission of electromagnetic waves from all matter that has a temperature greater than absolute zero. Thermal radiation reflects the conversion of thermal energy into electromagnetic energy. Thermal energy is the kinetic energy of random movements of atoms and molecules in matter. It is present in all matter of nonzero temperature. These atoms and molecules are composed of charged particles, i.e., protons and electrons. The kinetic interactions among matter particles result in charge acceleration and dipole oscillation. This results in the electrodynamic generation of coupled electric and magnetic fields, resulting in the emission of photons, radiating energy away from the body. Electromagnetic radiation, including visible light, will propagate indefinitely in vacuum.
The characteristics of thermal radiation depend on various properties of the surface from which it is emanating, including its temperature and its spectral emissivity, as expressed by Kirchhoff's law. The radiation is not monochromatic, i.e., it does not consist of only a single frequency, but comprises a continuous spectrum of photon energies, its characteristic spectrum. If the radiating body and its surface are in thermodynamic equilibrium and the surface has perfect absorptivity at all wavelengths, it is characterized as a black body. A black body is also a perfect emitter. The radiation of such perfect emitters is called black-body radiation. The ratio of any body's emission relative to that of a black body is the body's emissivity, so a black body has an emissivity of one.
Absorptivity, reflectivity, and emissivity of all bodies are dependent on the wavelength of the radiation. Due to reciprocity, absorptivity and emissivity for any particular wavelength are equal at equilibrium – a good absorber is necessarily a good emitter, and a poor absorber is a poor emitter. The temperature determines the wavelength distribution of the electromagnetic radiation.
The distribution of power that a black body emits with varying frequency is described by Planck's law. At any given temperature, there is a frequency fmax at which the power emitted is a maximum. Wien's displacement law, and the fact that the frequency is inversely proportional to the wavelength, indicates that the peak frequency fmax is proportional to the absolute temperature T of the black body. The photosphere of the sun, at a temperature of approximately 6000 K, emits radiation principally in the (human-)visible portion of the electromagnetic spectrum. Earth's atmosphere is partly transparent to visible light, and the light reaching the surface is absorbed or reflected. Earth's surface emits the absorbed radiation, approximating the behavior of a black body at 300 K with spectral peak at fmax. At these lower frequencies, the atmosphere is largely opaque and radiation from Earth's surface is absorbed or scattered by the atmosphere. Though about 10% of this radiation escapes into space, most is absorbed and then re-emitted by atmospheric gases. It is this spectral selectivity of the atmosphere that is responsible for the planetary greenhouse effect, contributing to global warming and climate change in general (but also critically contributing to climate stability when the composition and properties of the atmosphere are not changing).
History
Ancient Greece
Burning glasses are known to date back to about 700 BC. One of the first accurate mentions of burning glasses appears in Aristophanes's comedy, The Clouds, written in 423 BC. According to the Archimedes' heat ray anecdote, Archimedes is purported to have developed mirrors to concentrate heat rays in order to burn attacking Roman ships during the Siege of Syracuse (c. 213–212 BC), but no sources from the time have been confirmed. Catoptrics is a book attributed to Euclid on how to focus light in order to produce heat, but the book might have been written in 300 AD.
Renaissance
During the Renaissance, Santorio Santorio came up with one of the earliest thermoscopes. In 1612 he published his results on the heating effects from the Sun, and his attempts to measure heat from the Moon.
Earlier, in 1589, Giambattista della Porta reported on the heat felt on his face, emitted by a remote candle and facilitated by a concave metallic mirror. He also reported the cooling felt from a solid ice block. Della Porta's experiment would be replicated many times with increasing accuracy. It was replicated by astronomers Giovanni Antonio Magini and Christopher Heydon in 1603, and supplied instructions for Rudolf II, Holy Roman Emperor who performed it in 1611. In 1660, della Porta's experiment was updated by the Accademia del Cimento using a thermometer invented by Ferdinand II, Grand Duke of Tuscany.
Enlightenment
In 1761, Benjamin Franklin wrote a letter describing his experiments on the relationship between color and heat absorption. He found that darker color clothes got hotter when exposed to sunlight than lighter color clothes. One experiment he performed consisted of placing square pieces of cloth of various colors out in the snow on a sunny day. He waited some time and then measured that the black pieces sank furthest into the snow of all the colors, indicating that they got the hottest and melted the most snow.
Caloric theory
Antoine Lavoisier considered that radiation of heat was concerned with the condition of the surface of a physical body rather than the material of which it was composed. Lavoisier described a poor radiator as a substance with a polished or smooth surface, whose molecules lay closely bound together in a plane, creating a surface layer of caloric fluid that insulated the release of the caloric within. He described a good radiator as a substance with a rough surface, in which only a small proportion of molecules held caloric within a given plane, allowing for greater escape from within. Count Rumford would later cite this explanation of caloric movement as insufficient to explain the radiation of cold, which became a point of contention for the theory as a whole.
In his first memoir, Augustin-Jean Fresnel responded to a view he extracted from a French translation of Isaac Newton's Optics. He says that Newton imagined particles of light traversing space uninhibited by the caloric medium filling it, and refutes this view (never actually held by Newton) by saying that a body under illumination would increase indefinitely in heat.
In Marc-Auguste Pictet's famous experiment of 1790, it was reported that a thermometer detected a lower temperature when a set of mirrors were used to focus "frigorific rays" from a cold object.
In 1791, Pierre Prevost, a colleague of Pictet, introduced the concept of radiative equilibrium, wherein all objects both radiate and absorb heat. When an object is cooler than its surroundings, it absorbs more heat than it emits, causing its temperature to increase until it reaches equilibrium. Even at equilibrium, it continues to radiate heat, balancing absorption and emission.
The discovery of infrared radiation is ascribed to astronomer William Herschel. Herschel published his results in 1800 before the Royal Society of London. Herschel used a prism to refract light from the sun and detected the calorific rays, beyond the red part of the spectrum, by an increase in the temperature recorded on a thermometer in that region.
Electromagnetic theory
At the end of the 19th century it was shown that the transmission of light or of radiant heat was allowed by the propagation of electromagnetic waves. Television and radio broadcasting waves are types of electromagnetic waves with specific wavelengths. All electromagnetic waves travel at the same speed; therefore, shorter wavelengths are associated with high frequencies. All bodies generate and receive electromagnetic waves at the expense of heat exchange.
In 1860, Gustav Kirchhoff published a mathematical description of thermal equilibrium (i.e. Kirchhoff's law of thermal radiation). By 1884 the emissive power of a perfect blackbody was inferred by Josef Stefan using John Tyndall's experimental measurements, and derived by Ludwig Boltzmann from fundamental statistical principles. This relation is known as the Stefan–Boltzmann law.
Quantum theory
The microscopic theory of radiation is best known as the quantum theory and was first offered by Max Planck in 1900. According to this theory, energy emitted by a radiator is not continuous but is in the form of quanta. Planck noted that energy was emitted in quanta related to the frequency of vibration, similarly to the wave theory. The energy E of an electromagnetic wave in vacuum is found by the expression E = hf, where h is the Planck constant and f is its frequency.
Bodies at higher temperatures emit radiation at higher frequencies with an increasing energy per quantum. While the propagation of electromagnetic waves of all wavelengths is often referred to as "radiation", thermal radiation is often constrained to the visible and infrared regions. For engineering purposes, it may be stated that thermal radiation is a form of electromagnetic radiation which varies with the nature of a surface and its temperature.
Radiation waves may travel in unusual patterns compared to conduction heat flow. Radiation allows waves to travel from a heated body through a cold non-absorbing or partially absorbing medium and reach a warmer body again. An example is the case of the radiation waves that travel from the Sun to the Earth.
Characteristics
Frequency
Thermal radiation emitted by a body at any temperature consists of a wide range of frequencies. The frequency distribution is given by Planck's law of black-body radiation for an idealized emitter as shown in the diagram at top.
The dominant frequency (or color) range of the emitted radiation shifts to higher frequencies as the temperature of the emitter increases. For example, a red hot object radiates mainly in the long wavelengths (red and orange) of the visible band. If it is heated further, it also begins to emit discernible amounts of green and blue light, and the spread of frequencies in the entire visible range causes it to appear white to the human eye; it is white hot. Even at a white-hot temperature of 2000 K, 99% of the energy of the radiation is still in the infrared. This is determined by Wien's displacement law. In the diagram the peak value for each curve moves to the left as the temperature increases.
Relationship to temperature
The total radiation intensity of a black body rises as the fourth power of the absolute temperature, as expressed by the Stefan–Boltzmann law. A kitchen oven, at a temperature about double room temperature on the absolute temperature scale (600 K vs. 300 K) radiates 16 times as much power per unit area. An object at the temperature of the filament in an incandescent light bulb—roughly 3000 K, or 10 times room temperature—radiates 10,000 times as much energy per unit area.
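The scaling in the two examples above follows directly from the fourth-power law; a minimal Python check (the helper name is only illustrative):

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def blackbody_flux(temperature_k):
    """Total radiant power emitted per unit area by a black body."""
    return SIGMA * temperature_k**4

print(blackbody_flux(600.0) / blackbody_flux(300.0))   # 16.0 (kitchen oven vs. room temperature)
print(blackbody_flux(3000.0) / blackbody_flux(300.0))  # 10000.0 (incandescent filament vs. room temperature)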
As for photon statistics, thermal light obeys Super-Poissonian statistics.
Appearance
When the temperature of a body is high enough, its thermal radiation spectrum becomes strong enough in the visible range to visibly glow. The visible component of thermal radiation is sometimes called incandescence,
though this term can also refer to thermal radiation in general. The term derives from the Latin verb incandescere, 'to glow white'.
In practice, virtually all solid or liquid substances start to glow around 525 °C (977 °F), with a mildly dull red color, whether or not a chemical reaction takes place that produces light as a result of an exothermic process. This limit is called the Draper point. The incandescence does not vanish below that temperature, but it is too weak in the visible spectrum to be perceptible.
Reciprocity
The rate of electromagnetic radiation emitted by a body at a given frequency is proportional to the rate that the body absorbs radiation at that frequency, a property known as reciprocity. Thus, a surface that absorbs more red light thermally radiates more red light. This principle applies to all properties of the wave, including wavelength (color), direction, polarization, and even coherence. It is therefore possible to have thermal radiation which is polarized, coherent, and directional; though polarized and coherent sources are fairly rare in nature.
Fundamental principles
Thermal radiation is one of the three principal mechanisms of heat transfer. It entails the emission of a spectrum of electromagnetic radiation due to an object's temperature. Other mechanisms are convection and conduction.
Electromagnetic waves
Thermal radiation is characteristically different from conduction and convection in that it does not require a medium and, in fact, it reaches maximum efficiency in a vacuum. Thermal radiation is a type of electromagnetic radiation which is often modeled by the propagation of waves. These waves have the standard wave properties of frequency f and wavelength λ, which are related by the equation
c = λ⋅f
where c is the speed of light in the medium.
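As a small illustration of this relation, the following Python sketch converts an arbitrarily chosen frequency to its vacuum wavelength:

C = 2.998e8  # speed of light in vacuum, m/s

def wavelength_m(frequency_hz):
    """Wavelength corresponding to a given frequency, assuming propagation in vacuum."""
    return C / frequency_hz

print(wavelength_m(3.0e13))  # ~1.0e-5 m, i.e. about 10 micrometres (mid-infrared)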
Irradiation
Thermal irradiation is the rate at which radiation is incident upon a surface per unit area. It is measured in watts per square meter. Irradiation can either be reflected, absorbed, or transmitted. The components of irradiation can then be characterized by the equation
α + ρ + τ = 1
where α, ρ and τ represent the absorptivity, reflectivity and transmissivity, respectively. These components are a function of the wavelength of the electromagnetic wave as well as the material properties of the medium.
Absorptivity and emissivity
The spectral absorption αν is equal to the emissivity εν; this relation is known as Kirchhoff's law of thermal radiation. An object is called a black body if this holds for all frequencies, and the following formula applies:
αν = εν = 1
If objects appear white (reflective in the visual spectrum), they are not necessarily equally reflective (and thus non-emissive) in the thermal infrared – see the diagram at the left. Most household radiators are painted white, which is sensible given that they are not hot enough to radiate any significant amount of heat, and are not designed as thermal radiators at all – instead, they are actually convectors, and painting them matt black would make little difference to their efficacy. Acrylic and urethane based white paints have 93% blackbody radiation efficiency at room temperature (meaning the term "black body" does not always correspond to the visually perceived color of an object). These materials that do not follow the "black color = high emissivity/absorptivity" caveat will most likely have functional spectral emissivity/absorptivity dependence.
Only truly gray systems (relative equivalent emissivity/absorptivity and no directional transmissivity dependence in all control volume bodies considered) can achieve reasonable steady-state heat flux estimates through the Stefan-Boltzmann law. Encountering this "ideally calculable" situation is almost impossible (although common engineering procedures surrender the dependency of these unknown variables and "assume" this to be the case). Optimistically, these "gray" approximations will get close to real solutions, as most divergence from Stefan-Boltzmann solutions is very small (especially in most standard temperature and pressure lab controlled environments).
Reflectivity
Reflectivity deviates from the other properties in that it is bidirectional in nature. In other words, this property depends on the direction of the incident of radiation as well as the direction of the reflection. Therefore, the reflected rays of a radiation spectrum incident on a real surface in a specified direction forms an irregular shape that is not easily predictable. In practice, surfaces are often assumed to reflect either in a perfectly specular or a diffuse manner. In a specular reflection, the angles of reflection and incidence are equal. In diffuse reflection, radiation is reflected equally in all directions. Reflection from smooth and polished surfaces can be assumed to be specular reflection, whereas reflection from rough surfaces approximates diffuse reflection. In radiation analysis a surface is defined as smooth if the height of the surface roughness is much smaller relative to the wavelength of the incident radiation.
Transmissivity
A medium that experiences no transmission (τ = 0) is opaque, in which case absorptivity and reflectivity sum to unity:
α + ρ = 1
Radiation intensity
Radiation emitted from a surface can propagate in any direction from the surface. Irradiation can also be incident upon a surface from any direction. The amount of irradiation on a surface is therefore dependent on the relative orientation of both the emitter and the receiver. The parameter radiation intensity, I, is used to quantify how much radiation makes it from one surface to another.
Radiation intensity is often modeled using a spherical coordinate system.
Emissive power
Emissive power is the rate at which radiation is emitted per unit area. It is a measure of heat flux. The total emissive power from a surface is denoted as E and can be determined by
E = ∫ I⋅cos θ dΩ
where the integration is taken over the hemisphere of solid angles Ω (in units of steradians) above the surface and I is the total intensity.
The total emissive power can also be found by integrating the spectral emissive power over all possible wavelengths. This is calculated as
E = ∫ Eλ dλ
where Eλ is the spectral emissive power and λ represents wavelength, with the integral taken from λ = 0 to ∞.
The spectral emissive power can also be determined from the spectral intensity, as follows:
Eλ = π⋅Iλ
where both spectral emissive power and emissive intensity are functions of wavelength (this form holds for a diffuse emitter).
Blackbody radiation
A "black body" is a body which has the property of allowing all incident rays to enter without surface reflection and not allowing them to leave again.
Blackbodies are idealized surfaces that act as the perfect absorber and emitter. They serve as the standard against which real surfaces are compared when characterizing thermal radiation. A blackbody is defined by three characteristics:
A blackbody absorbs all incident radiation, regardless of wavelength and direction.
No surface can emit more energy than a blackbody for a given temperature and wavelength.
A blackbody is a diffuse emitter.
The Planck distribution
The spectral intensity of a blackbody, Iλ,b, was first determined by Max Planck. It is given by Planck's law per unit wavelength as:
Iλ,b(λ, T) = 2hc² / { λ⁵⋅[exp(hc / (λ⋅kB⋅T)) − 1] }
where h is the Planck constant, c is the speed of light in vacuum, kB is the Boltzmann constant and T is the absolute temperature. This formula mathematically follows from calculation of the spectral distribution of energy in a quantized electromagnetic field which is in complete thermal equilibrium with the radiating object. Planck's law shows that radiative energy increases with temperature, and explains why the peak of an emission spectrum shifts to shorter wavelengths at higher temperatures. It can also be found that energy emitted at shorter wavelengths increases more rapidly with temperature relative to longer wavelengths.
The equation is derived as an infinite sum over all possible frequencies in a hemispherical region. The energy, E = hν, of each photon is multiplied by the number of states available at that frequency, and the probability that each of those states will be occupied.
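A minimal Python sketch of Planck's law as written above; the constant values are standard rounded figures, and the 5800 K / 500 nm evaluation is just an illustrative choice.

```python
import math

h   = 6.626e-34   # Planck constant, J·s
c   = 2.998e8     # speed of light in vacuum, m/s
k_B = 1.381e-23   # Boltzmann constant, J/K

def planck_spectral_intensity(wavelength, T):
    """Blackbody spectral intensity I_λ,b(λ, T) in W·m⁻²·sr⁻¹ per metre of wavelength."""
    return (2 * h * c**2) / (wavelength**5 *
                             (math.exp(h * c / (wavelength * k_B * T)) - 1))

# Example: a roughly Sun-temperature surface (≈ 5800 K) evaluated at 500 nm
print(planck_spectral_intensity(500e-9, 5800))
```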
Stefan-Boltzmann law
The Planck distribution can be used to find the spectral emissive power of a blackbody, which, being a diffuse emitter, is E_λ,b(λ, T) = π I_λ,b(λ, T).
The total emissive power of a blackbody is then calculated as E_b = ∫₀^∞ E_λ,b(λ, T) dλ. The solution of the above integral yields a remarkably elegant equation for the total emissive power of a blackbody, the Stefan–Boltzmann law, which is given as E_b = σT⁴, where σ ≈ 5.67 × 10⁻⁸ W·m⁻²·K⁻⁴ is the Stefan–Boltzmann constant.
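The following Python sketch (illustrative only, using an assumed example temperature) integrates the spectral emissive power numerically over 0.1–1000 µm and compares the result with σT⁴; the two agree to within the accuracy of the crude integration.

```python
import math

h, c, k_B = 6.626e-34, 2.998e8, 1.381e-23
sigma = 5.670e-8                      # Stefan–Boltzmann constant, W·m⁻²·K⁻⁴

def spectral_emissive_power(lam, T):
    # E_λ,b = π · I_λ,b for a blackbody (a diffuse emitter)
    return math.pi * 2 * h * c**2 / (lam**5 * (math.exp(h * c / (lam * k_B * T)) - 1))

T = 1000.0                            # example temperature, K
N = 20_000                            # midpoint rule on a log-spaced wavelength grid
total = 0.0
for i in range(N):
    x = -7 + 4 * (i + 0.5) / N        # λ runs from 1e-7 m to 1e-3 m
    lam = 10.0**x
    total += spectral_emissive_power(lam, T) * lam * math.log(10) * 4 / N

print(total, sigma * T**4)            # both ≈ 5.67e4 W/m²
```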
Wien's displacement law
The wavelength λ_max for which the emission intensity is highest is given by Wien's displacement law as λ_max T = b ≈ 2898 µm·K, where b is the Wien displacement constant.
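A short illustration of the displacement law; the three temperatures are assumed example values (roughly room temperature, an incandescent filament, and the solar surface).

```python
b = 2.898e-3                         # Wien displacement constant, m·K

for T in (300, 3000, 5800):          # assumed example temperatures, K
    lam_max = b / T                  # wavelength of peak emission, m
    print(T, "K ->", round(lam_max * 1e6, 2), "µm")
# ≈ 9.66 µm at 300 K (far infrared), ≈ 0.97 µm at 3000 K, ≈ 0.50 µm at 5800 K (visible)
```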
Constants
Definitions of constants used in the above equations:
h, the Planck constant: approximately 6.626 × 10⁻³⁴ J·s
c, the speed of light in vacuum: approximately 2.998 × 10⁸ m/s
k_B, the Boltzmann constant: approximately 1.381 × 10⁻²³ J/K
σ, the Stefan–Boltzmann constant: approximately 5.670 × 10⁻⁸ W·m⁻²·K⁻⁴
b, the Wien displacement constant: approximately 2.898 × 10⁻³ m·K
Variables
Definitions of variables, with example values:
λ: wavelength (visible light spans roughly 0.4 to 0.7 µm)
ν: frequency
T: absolute temperature of the emitting surface (about 5800 K for the surface of the Sun)
θ, φ: zenith and azimuthal angles of the direction of emission or incidence
ω: solid angle, in steradians
I: radiation intensity
E: emissive power, in W/m²
A: area of the radiating surface
Emission from non-black surfaces
For surfaces which are not black bodies, one has to consider the (generally frequency dependent) emissivity factor ε(ν). This factor has to be multiplied with the radiation spectrum formula before integration. If it is taken as a constant, the resulting formula for the power output can be written in a way that contains ε as a factor: P = εσAT⁴, where A is the area of the radiating surface and T its absolute temperature.
This type of theoretical model, with frequency-independent emissivity lower than that of a perfect black body, is often known as a gray body. For frequency-dependent emissivity, the solution for the integrated power depends on the functional form of the dependence, and in general there is no simple expression for it. Practically speaking, if the emissivity of the body is roughly constant around the peak emission wavelength, the gray body model tends to work fairly well, since the weight of the curve around the peak emission tends to dominate the integral.
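A minimal sketch of the constant-emissivity (gray body) power formula above; the emissivity, area and temperature are assumed example values.

```python
sigma = 5.670e-8                      # Stefan–Boltzmann constant, W·m⁻²·K⁻⁴

def gray_body_power(emissivity, area, T):
    """Total power radiated by a gray body with frequency-independent emissivity."""
    return emissivity * sigma * area * T**4

# Example: a 0.5 m² surface with an assumed emissivity of 0.8 at 600 K
print(gray_body_power(0.8, 0.5, 600.0))   # ≈ 2.9 kW
```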
Heat transfer between surfaces
Calculation of radiative heat transfer between groups of objects, including a 'cavity' or 'surroundings' requires solution of a set of simultaneous equations using the radiosity method. In these calculations, the geometrical configuration of the problem is distilled to a set of numbers called view factors, which give the proportion of radiation leaving any given surface that hits another specific surface. These calculations are important in the fields of solar thermal energy, boiler and furnace design and raytraced computer graphics.
The net radiative heat transfer from one surface to another is the radiation leaving the first surface for the other minus that arriving from the second surface.
Formulas for radiative heat transfer can be derived for more particular or more elaborate physical arrangements, such as between parallel plates, concentric spheres and the internal surfaces of a cylinder.
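For the parallel-plate case mentioned above, the standard radiosity-network result for two large, parallel, gray and diffuse plates (view factor of 1) is q = σ(T₁⁴ − T₂⁴) / (1/ε₁ + 1/ε₂ − 1). A small Python sketch with assumed temperatures and emissivities:

```python
sigma = 5.670e-8                      # Stefan–Boltzmann constant, W·m⁻²·K⁻⁴

def parallel_plate_flux(T1, T2, eps1, eps2):
    """Net radiative flux (W/m²) between two large parallel gray, diffuse plates."""
    return sigma * (T1**4 - T2**4) / (1 / eps1 + 1 / eps2 - 1)

# Assumed example: plates at 500 K and 300 K, each with emissivity 0.8
print(parallel_plate_flux(500.0, 300.0, 0.8, 0.8))   # ≈ 2.1 kW/m²
```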
Applications
Thermal radiation is an important factor of many engineering applications, especially for those dealing with high temperatures.
Solar energy
Sunlight is the incandescence of the "white hot" surface of the Sun. Electromagnetic radiation from the sun has a peak wavelength of about 550 nm, and can be harvested to generate heat or electricity.
Thermal radiation can be concentrated on a tiny spot via reflecting mirrors, which concentrating solar power takes advantage of. Instead of mirrors, Fresnel lenses can also be used to concentrate radiant energy. Either method can be used to quickly vaporize water into steam using sunlight. For example, the sunlight reflected from mirrors heats the PS10 Solar Power Plant, and during the day it can heat water to .
A selective surface can be used when energy is being extracted from the sun. Selective surfaces are surfaces tuned to maximize the amount of energy they absorb from the sun's radiation while minimizing the amount of energy they lose to their own thermal radiation. Selective surfaces can also be used on solar collectors.
Incandescent light bulbs
The incandescent light bulb creates light by heating a filament to a temperature at which it emits significant visible thermal radiation. For a tungsten filament at a typical temperature of 3000 K, only a small fraction of the emitted radiation is visible, and the majority is infrared light. This infrared light does not help a person see, but still transfers heat to the environment, making incandescent lights relatively inefficient as a light source.
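The "small fraction" can be estimated by integrating the blackbody spectrum over an assumed visible band (here 400 to 700 nm) and dividing by σT⁴; the following Python sketch is illustrative, not a precise photometric calculation.

```python
import math

h, c, k_B, sigma = 6.626e-34, 2.998e8, 1.381e-23, 5.670e-8

def E_lam(lam, T):
    """Blackbody spectral emissive power, W/m² per metre of wavelength."""
    return math.pi * 2 * h * c**2 / (lam**5 * (math.exp(h * c / (lam * k_B * T)) - 1))

def band_power(lam1, lam2, T, n=20_000):
    dlam = (lam2 - lam1) / n
    return sum(E_lam(lam1 + (i + 0.5) * dlam, T) * dlam for i in range(n))

T = 3000.0                                    # typical filament temperature, K
visible = band_power(400e-9, 700e-9, T)       # assumed visible band
print(visible / (sigma * T**4))               # ≈ 0.08: most of the output is infrared
```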
If the filament could be made hotter, efficiency would increase; however, there are currently no materials able to withstand such temperatures which would be appropriate for use in lamps.
More efficient light sources, such as fluorescent lamps and LEDs, do not function by incandescence.
Thermal comfort
Thermal radiation plays a crucial role in human comfort, influencing perceived temperature sensation. Various technologies have been developed to enhance thermal comfort, including personal heating and cooling devices.
The mean radiant temperature is a metric used to quantify the exchange of radiant heat between a human and their surrounding environment.
Personal heating
Radiant personal heaters are devices designed to increase a user's perceived temperature by converting energy into infrared radiation. They are typically either gas-powered or electric. In domestic and commercial applications, gas-powered radiant heaters can produce a higher heat flux than electric heaters, which are limited by the amount of current that can be drawn through a circuit breaker.
Personal cooling
Personalized cooling technology is an example of an application where optical spectral selectivity can be beneficial. Conventional personal cooling is typically achieved through heat conduction and convection. However, the human body is a very efficient emitter of infrared radiation, which provides an additional cooling mechanism. Most conventional fabrics are opaque to infrared radiation and block thermal emission from the body to the environment. Fabrics for personalized cooling applications have been proposed that enable infrared transmission to directly pass through clothing, while being opaque at visible wavelengths, allowing the wearer to remain cooler.
Windows
Low-emissivity windows in houses are a more complicated technology, since they must have low emissivity at thermal wavelengths while remaining transparent to visible light. To reduce the heat transfer from a surface, such as a glass window, a clear reflective film with a low-emissivity coating can be placed on the interior of the surface. "Low-emittance (low-E) coatings are microscopically thin, virtually invisible, metal or metallic oxide layers deposited on a window or skylight glazing surface primarily to reduce the U-factor by suppressing radiative heat flow". Adding this coating limits the amount of radiation that leaves the window, thus increasing the amount of heat that is retained inside.
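A rough sketch of why the coating matters, treating the room as large black surroundings and using assumed emissivity values (around 0.84 for uncoated glass, around 0.05 for a low-E coating) and assumed temperatures:

```python
sigma = 5.670e-8                      # Stefan–Boltzmann constant, W·m⁻²·K⁻⁴

def net_radiative_exchange(eps_pane, T_pane, T_room):
    """Net radiative flux (W/m²) from a warm room to a cooler pane of emissivity eps_pane,
    with the room treated as large, black surroundings (small-surface approximation)."""
    return eps_pane * sigma * (T_room**4 - T_pane**4)

T_pane, T_room = 283.0, 293.0         # assumed: 10 °C inner pane facing a 20 °C room
print(net_radiative_exchange(0.84, T_pane, T_room))   # uncoated glass: ≈ 45 W/m²
print(net_radiative_exchange(0.05, T_pane, T_room))   # low-E coating:  ≈ 3 W/m²
```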
Spacecraft
Shiny metal surfaces have low emissivities both in the visible wavelengths and in the far infrared. Such surfaces can be used to reduce heat transfer in both directions; an example of this is the multi-layer insulation used to insulate spacecraft.
Since any electromagnetic radiation, including thermal radiation, conveys momentum as well as energy, thermal radiation also induces very small forces on the radiating or absorbing objects. Normally these forces are negligible, but they must be taken into account when considering spacecraft navigation. The Pioneer anomaly, where the motion of the craft slightly deviated from that expected from gravity alone, was eventually tracked down to asymmetric thermal radiation from the spacecraft. Similarly, the orbits of asteroids are perturbed since the asteroid absorbs solar radiation on the side facing the Sun, but then re-emits the energy at a different angle as the rotation of the asteroid carries the warm surface out of the Sun's view (the YORP effect).
Nanostructures
Nanostructures with spectrally selective thermal emittance properties offer numerous technological applications for energy generation and efficiency, e.g., for daytime radiative cooling of photovoltaic cells and buildings. These applications require high emittance in the frequency range corresponding to the atmospheric transparency window in the 8 to 13 micron wavelength range. A selective emitter radiating strongly in this range is thus exposed to the clear sky, enabling the use of outer space as a very low-temperature heat sink.
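To illustrate why the 8 to 13 µm band matters for radiative cooling, the sketch below estimates the share of a 300 K blackbody's emission that falls inside that window (assumed ambient-temperature emitter; simple midpoint integration):

```python
import math

h, c, k_B, sigma = 6.626e-34, 2.998e8, 1.381e-23, 5.670e-8

def E_lam(lam, T):
    return math.pi * 2 * h * c**2 / (lam**5 * (math.exp(h * c / (lam * k_B * T)) - 1))

T = 300.0                                     # assumed temperature of the emitting surface, K
lam1, lam2, n = 8e-6, 13e-6, 20_000           # atmospheric transparency window
dlam = (lam2 - lam1) / n
in_window = sum(E_lam(lam1 + (i + 0.5) * dlam, T) * dlam for i in range(n))

print(in_window / (sigma * T**4))             # ≈ 0.3 of all 300 K emission falls in the window
```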
Health and safety
Metabolic temperature regulation
In a practical, room-temperature setting, humans lose considerable energy due to infrared thermal radiation in addition to that lost by conduction to air (aided by concurrent convection, or other air movement like drafts). The heat energy lost is partially regained by absorbing heat radiation from walls or other surroundings. Human skin has an emissivity of very close to 1.0. A human, having roughly 2 m² of surface area and a temperature of about 307 K, continuously radiates approximately 1000 W. If people are indoors, surrounded by surfaces at 296 K, they receive back about 900 W from the walls, ceiling, and other surroundings, resulting in a net loss of about 100 W. These estimates are highly dependent on extrinsic variables, such as wearing clothes.
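The figures in the paragraph above follow directly from the Stefan–Boltzmann law; the short sketch below reproduces them with the stated emissivity, area and temperatures.

```python
sigma = 5.670e-8                      # Stefan–Boltzmann constant, W·m⁻²·K⁻⁴

area, emissivity = 2.0, 1.0           # values quoted in the text
T_skin, T_surroundings = 307.0, 296.0 # K

emitted  = emissivity * sigma * area * T_skin**4          # ≈ 1000 W radiated by the body
absorbed = emissivity * sigma * area * T_surroundings**4  # ≈ 900 W received back
print(emitted, absorbed, emitted - absorbed)              # net radiative loss on the order of 100 W
```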
Lighter colors and also whites and metallic substances absorb less of the illuminating light, and as a result heat up less. However, color makes little difference in the heat transfer between an object at everyday temperatures and its surroundings. This is because the dominant emitted wavelengths are not in the visible spectrum, but rather infrared. Emissivities at those wavelengths are largely unrelated to visual emissivities (visible colors); in the far infra-red, most objects have high emissivities. Thus, except in sunlight, the color of clothing makes little difference as regards warmth; likewise, paint color of houses makes little difference to warmth except when the painted part is sunlit.
Burns
Thermal radiation is a phenomenon that can burn skin and ignite flammable materials. The time until damage occurs from exposure to thermal radiation is a function of the rate at which heat is delivered: the higher the radiative heat flux, the shorter the time before burns or ignition occur.
Near-field radiative heat transfer
At distances on the scale of the wavelength of a radiated electromagnetic wave or smaller, Planck's law is not accurate. For objects this small and close together, the quantum tunneling of electromagnetic waves has a significant impact on the rate of radiation.
A more sophisticated framework involving electromagnetic theory must be used for smaller distances from the thermal source or surface. For example, although far-field thermal radiation at distances from surfaces of more than one wavelength is generally not coherent to any extent, near-field thermal radiation (i.e., radiation at distances of a fraction of various radiation wavelengths) may exhibit a degree of both temporal and spatial coherence.
Planck's law of thermal radiation has been challenged in recent decades by predictions and successful demonstrations of the radiative heat transfer between objects separated by nanoscale gaps that deviate significantly from the law's predictions. This deviation is especially strong (up to several orders of magnitude) when the emitter and absorber support surface polariton modes that can couple through the gap separating cold and hot objects. However, to take advantage of the surface-polariton-mediated near-field radiative heat transfer, the two objects need to be separated by ultra-narrow gaps on the order of microns or even nanometers. This limitation significantly complicates practical device designs.
Another way to modify the object thermal emission spectrum is by reducing the dimensionality of the emitter itself. This approach builds upon the concept of confining electrons in quantum wells, wires and dots, and tailors thermal emission by engineering confined photon states in two- and three-dimensional potential traps, including wells, wires, and dots. Such spatial confinement concentrates photon states and enhances thermal emission at select frequencies. To achieve the required level of photon confinement, the dimensions of the radiating objects should be on the order of or below the thermal wavelength predicted by Planck's law. Most importantly, the emission spectrum of thermal wells, wires and dots deviates from Planck's law predictions not only in the near field, but also in the far field, which significantly expands the range of their applications.
| Physical sciences | Thermodynamics | Physics |
185259 | https://en.wikipedia.org/wiki/Germ%20theory%20of%20disease | Germ theory of disease | The germ theory of disease is the currently accepted scientific theory for many diseases. It states that microorganisms known as pathogens or "germs" can cause disease. These small organisms, which are too small to be seen without magnification, invade humans, other animals, and other living hosts. Their growth and reproduction within their hosts can cause disease. "Germ" refers to not just a bacterium but to any type of microorganism, such as protists or fungi, or other pathogens that can cause disease, such as viruses, prions, or viroids. Diseases caused by pathogens are called infectious diseases. Even when a pathogen is the principal cause of a disease, environmental and hereditary factors often influence the severity of the disease, and whether a potential host individual becomes infected when exposed to the pathogen. Pathogens are disease-carrying agents that can pass from one individual to another, both in humans and animals. Infectious diseases are caused by biological agents such as pathogenic microorganisms (viruses, bacteria, and fungi) as well as parasites.
Basic forms of germ theory were proposed by Girolamo Fracastoro in 1546, and expanded upon by Marcus von Plenciz in 1762. However, such views were held in disdain in Europe, where Galen's miasma theory remained dominant among scientists and doctors.
By the early 19th century, the first vaccine, smallpox vaccination, was commonplace in Europe, though doctors were unaware of how it worked or how to extend the principle to other diseases. A transitional period began in the late 1850s with the work of Louis Pasteur. This work was later extended by Robert Koch in the 1880s. By the end of that decade, the miasma theory was struggling to compete with the germ theory of disease. Viruses were initially discovered in the 1890s. Eventually, a "golden era" of bacteriology ensued, during which the germ theory quickly led to the identification of the actual organisms that cause many diseases.
Miasma theory
The miasma theory was the predominant theory of disease transmission before the germ theory took hold towards the end of the 19th century; it is no longer accepted as a correct explanation for disease by the scientific community. It held that diseases such as cholera, chlamydia infection, or the Black Death were caused by a miasma (μίασμα, Ancient Greek: "pollution"), a noxious form of "bad air" emanating from rotting organic matter. Miasma was considered to be a poisonous vapor or mist filled with particles from decomposed matter (miasmata) that was identifiable by its foul smell. The theory posited that diseases were the product of environmental factors such as contaminated water, foul air, and poor hygienic conditions. Such infections, according to the theory, were not passed between individuals but would affect those within a locale that gave rise to such vapors.
Development of germ theory
Greece and Rome
In Antiquity, the Greek historian Thucydides ( – ) was the first person to write, in his account of the plague of Athens, that diseases could spread from an infected person to others.
One theory of the spread of contagious diseases that were not spread by direct contact was that they were spread by spore-like "seeds" (Latin: semina) that were present in and dispersible through the air. In his poem, De rerum natura (On the Nature of Things, ), the Roman poet Lucretius ( – ) stated that the world contained various "seeds", some of which could sicken a person if they were inhaled or ingested.
The Roman statesman Marcus Terentius Varro (116–27 BC) wrote, in his Rerum rusticarum libri III (Three Books on Agriculture, 36 BC): "Precautions must also be taken in the neighborhood of swamps... because there are bred certain minute creatures which cannot be seen by the eyes, which float in the air and enter the body through the mouth and nose and there cause serious diseases."
The Greek physician Galen (AD 129 – ) speculated in his On Initial Causes () that some patients might have "seeds of fever". In his On the Different Types of Fever (), Galen speculated that plagues were spread by "certain seeds of plague", which were present in the air. And in his Epidemics (), Galen explained that patients might relapse during recovery from fever because some "seed of the disease" lurked in their bodies, which would cause a recurrence of the disease if the patients did not follow a physician's therapeutic regimen.
The Middle Ages
A hybrid form of miasma and contagion theory was proposed by Persian physician Ibn Sina (known as Avicenna in Europe) in The Canon of Medicine (1025). He mentioned that people can transmit disease to others by breath, noted contagion with tuberculosis, and discussed the transmission of disease through water and dirt.
During the early Middle Ages, Isidore of Seville (–636) mentioned "plague-bearing seeds" (pestifera semina) in his On the Nature of Things (). Later in 1345, Tommaso del Garbo (–1370) of Bologna, Italy mentioned Galen's "seeds of plague" in his work Commentaria non-parum utilia in libros Galeni (Helpful commentaries on the books of Galen).
The 16th century Reformer Martin Luther appears to have had some idea of the contagion theory, commenting, "I have survived three plagues and visited several people who had two plague spots which I touched. But it did not hurt me, thank God. Afterwards when I returned home, I took up Margaret," (born 1534), "who was then a baby, and put my unwashed hands on her face, because I had forgotten; otherwise I should not have done it, which would have been tempting God." In 1546, Italian physician Girolamo Fracastoro published De Contagione et Contagiosis Morbis (On Contagion and Contagious Diseases), a set of three books covering the nature of contagious diseases, categorization of major pathogens, and theories on preventing and treating these conditions. Fracastoro blamed "seeds of disease" that propagate through direct contact with an infected host, indirect contact with fomites, or through particles in the air.
The Early Modern Period
In 1668, Italian physician Francesco Redi published experimental evidence rejecting spontaneous generation, the theory that living creatures arise from nonliving matter. He observed that maggots only arose from rotting meat that was uncovered. When meat was left in jars covered by gauze, the maggots would instead appear on the gauze's surface, later understood as rotting meat's smell passing through the mesh to attract flies that laid eggs.
Microorganisms are said to have been first directly observed in the 1670s by Anton van Leeuwenhoek, an early pioneer in microbiology, considered "the Father of Microbiology". Leeuwenhoek is said to be the first to see and describe bacteria in 1674, yeast cells, the teeming life in a drop of water (such as algae), and the circulation of blood corpuscles in capillaries. The word "bacteria" didn't exist yet, so he called these microscopic living organisms "animalcules", meaning "little animals". Those "very little animalcules" he was able to isolate from different sources, such as rainwater, pond and well water, and the human mouth and intestine.
Yet German Jesuit priest and scholar Athanasius Kircher (or "Kirchner", as it is often spelled) may have observed such microorganisms prior to this. One of his books written in 1646 contains a chapter in Latin, which reads in translation: "Concerning the wonderful structure of things in nature, investigated by microscope...who would believe that vinegar and milk abound with an innumerable multitude of worms." Kircher defined the invisible organisms found in decaying bodies, meat, milk, and secretions as "worms." His studies with the microscope led him to the belief, which he was possibly the first to hold, that disease and putrefaction, or decay were caused by the presence of invisible living bodies, writing that "a number of things might be discovered in the blood of fever patients." When Rome was struck by the bubonic plague in 1656, Kircher investigated the blood of plague victims under the microscope. He noted the presence of "little worms" or "animalcules" in the blood and concluded that the disease was caused by microorganisms.
Kircher was the first to attribute infectious disease to a microscopic pathogen, inventing the germ theory of disease, which he outlined in his Scrutinium Physico-Medicum, published in Rome in 1658. Kircher's conclusion that disease was caused by microorganisms was correct, although it is likely that what he saw under the microscope were in fact red or white blood cells and not the plague agent itself. Kircher also proposed hygienic measures to prevent the spread of disease, such as isolation, quarantine, burning clothes worn by the infected, and wearing facemasks to prevent the inhalation of germs. It was Kircher who first proposed that living beings enter and exist in the blood.
In the 18th century, more proposals were made, but struggled to catch on. In 1700, physician Nicolas Andry argued that microorganisms he called "worms" were responsible for smallpox and other diseases. In 1720, Richard Bradley theorised that the plague and "all pestilential distempers" were caused by "poisonous insects", living creatures viewable only with the help of microscopes.
In 1762, the Austrian physician Marcus Antonius von Plenciz (1705–1786) published a book titled Opera medico-physica. It outlined a theory of contagion stating that specific animalcules in the soil and the air were responsible for causing specific diseases. Von Plenciz noted the distinction between diseases which are both epidemic and contagious (like measles and dysentery), and diseases which are contagious but not epidemic (like rabies and leprosy). The book cites Anton van Leeuwenhoek to show how ubiquitous such animalcules are and was unique for describing the presence of germs in ulcerating wounds. Ultimately, the theory espoused by von Plenciz was not accepted by the scientific community.
19th and 20th centuries
Agostino Bassi, Italy
During the early 19th century, driven by economic concerns over collapsing silk production, Italian entomologist Agostino Bassi researched a silkworm disease known as "muscardine" in French and "calcinaccio" or "mal del segno" in Italian, causing white fungal spots along the caterpillar. From 1835 to 1836, Bassi published his findings that fungal spores transmitted the disease between individuals. In recommending the rapid removal of diseased caterpillars and disinfection of their surfaces, Bassi outlined methods used in modern preventative healthcare. Italian naturalist Giuseppe Gabriel Balsamo-Crivelli named the causative fungal species after Bassi, currently classified as Beauveria bassiana.
Louis-Daniel Beauperthuy, France
In 1838 French specialist in tropical medicine Louis-Daniel Beauperthuy pioneered using microscopy in relation to diseases and independently developed a theory that all infectious diseases were due to parasitic infection with "animalcules" (microorganisms). With the help of his friend M. Adele de Rosseville, he presented his theory in a formal presentation before the French Academy of Sciences in Paris. By 1853, he was convinced that malaria and yellow fever were spread by mosquitos. He even identified the particular group of mosquitos that transmit yellow fever as the "domestic species" of "striped-legged mosquito", which can be recognised as Aedes aegypti, the actual vector. He published his theory in 1854 in the Gaceta Oficial de Cumana ("Official Gazette of Cumana"). His reports were assessed by an official commission, which discarded his mosquito theory.
Ignaz Semmelweis, Austria
Ignaz Semmelweis, a Hungarian obstetrician working at the Vienna General Hospital (Allgemeines Krankenhaus) in 1847, noticed the dramatically high maternal mortality from puerperal fever following births assisted by doctors and medical students. However, those attended by midwives were relatively safe. Investigating further, Semmelweis made the connection between puerperal fever and examinations of delivering women by doctors, and further realized that these physicians had usually come directly from autopsies. Asserting that puerperal fever was a contagious disease and that matter from autopsies was implicated in its spread, Semmelweis made doctors wash their hands with chlorinated lime water before examining pregnant women. He then documented a sudden reduction in the mortality rate from 18% to 2.2% over a period of a year. Despite this evidence, he and his theories were rejected by most of the contemporary medical establishment.
Gideon Mantell, UK
Gideon Mantell, the Sussex doctor more famous for discovering dinosaur fossils, spent time with his microscope, and speculated in his Thoughts on Animalcules (1850) that perhaps "many of the most serious maladies which afflict humanity, are produced by peculiar states of invisible animalcular life".
John Snow, UK
British physician John Snow is credited as a founder of modern epidemiology for studying the 1854 Broad Street cholera outbreak. Snow criticized the Italian anatomist Giovanni Maria Lancisi for his early 18th century writings that claimed swamp miasma spread malaria, rebutting that bad air from decomposing organisms was not present in all cases. In his 1849 pamphlet On the Mode of Communication of Cholera, Snow proposed that cholera spread through the fecal–oral route, replicating in human lower intestines.
In the book's second edition, published in 1855, Snow theorized that cholera was caused by cells smaller than human epithelial cells, leading to Robert Koch's 1884 confirmation of the bacterial species Vibrio cholerae as the causative agent. In recognizing a biological origin, Snow recommended boiling and filtering water, setting the precedent for modern boil-water advisory directives.
Through a statistical analysis tying cholera cases to specific water pumps associated with the Southwark and Vauxhall Waterworks Company, which supplied sewage-polluted water from the River Thames, Snow showed that areas supplied by this company experienced fourteen times as many deaths as residents using Lambeth Waterworks Company pumps that obtained water from the upriver, cleaner Seething Wells. While Snow received praise for convincing the Board of Guardians of St James's Parish to remove the handles of contaminated pumps, he noted that the outbreak's cases were already declining as scared residents fled the region.
Louis Pasteur, France
During the mid-19th century, French microbiologist Louis Pasteur showed that treating the female genital tract with boric acid killed the microorganisms causing postpartum infections while avoiding damage to mucous membranes.
Building on Redi's work, Pasteur disproved spontaneous generation by constructing swan-neck flasks containing nutrient broth. The broth fermented only when the curved neck was removed and the contents came into direct contact with air from the external environment, demonstrating that microorganisms must travel to a site in order to colonize it.
Similar to Bassi, Pasteur extended his research on germ theory by studying pébrine, a disease that causes brown spots on silkworms. While Swiss botanist Carl Nägeli discovered the fungal species Nosema bombycis in 1857, Pasteur applied the findings to recommend improved ventilation and screening of silkworm eggs, an early form of disease surveillance.
Robert Koch, Germany
In 1884, German bacteriologist Robert Koch published four criteria for establishing causality between specific microorganisms and diseases, now known as Koch's postulates:
The microorganism must be found in abundance in all organisms with the disease, but should not be found in healthy organisms.
The microorganism must be isolated from a diseased organism and grown in pure culture.
The cultured microorganism should cause disease when introduced into a healthy organism.
The microorganism must be re-isolated from the inoculated, diseased experimental host and identified as being identical to the original specific causative agent.
During his lifetime, Koch recognized that the postulates were not universally applicable; for example, asymptomatic carriers of cholera violate the first postulate. For the same reason, the third postulate specifies "should", rather than "must", because not all host organisms exposed to an infectious agent will acquire the infection, potentially due to differences in prior exposure to the pathogen. Limiting the second postulate, it was later discovered that viruses are obligate intracellular parasites and cannot be grown in pure culture, making that postulate impossible to fulfill for them. Similarly, pathogenic misfolded proteins known as prions spread only by transmitting their structure to other proteins, rather than by self-replicating.
While Koch's postulates retain historical importance for emphasizing that correlation does not imply causation, many pathogens are accepted as causative agents of specific diseases without fulfilling all of the criteria. In 1988, American microbiologist Stanley Falkow published a molecular version of Koch's postulates to establish correlation between microbial genes and virulence factors.
Joseph Lister, UK
After reading Pasteur's papers on bacterial fermentation, British surgeon Joseph Lister recognized that compound fractures, involving bones breaking through the skin, were more likely to become infected due to exposure to environmental microorganisms. He recognized that carbolic acid could be applied to the site of injury as an effective antiseptic.
| Biology and health sciences | Concepts | Health |
185266 | https://en.wikipedia.org/wiki/Fluphenazine | Fluphenazine | Fluphenazine, sold under the brand name Prolixin among others, is a high-potency typical antipsychotic medication. It is used in the treatment of chronic psychoses such as schizophrenia, and appears to be about equal in effectiveness to low-potency antipsychotics like chlorpromazine. It is given by mouth, injection into a muscle, or just under the skin. There is also a long acting injectable version that may last for up to four weeks. Fluphenazine decanoate, the depot injection form of fluphenazine, should not be used by people with severe depression.
Common side effects include movement problems, sleepiness, depression and increased weight. Serious side effects may include neuroleptic malignant syndrome, low white blood cell levels, and the potentially permanent movement disorder tardive dyskinesia. In older people with psychosis as a result of dementia it may increase the risk of dying. It may also increase prolactin levels which may result in milk production, enlarged breasts in males, impotence, and the absence of menstrual periods. It is unclear if it is safe for use in pregnancy.
Fluphenazine is a typical antipsychotic of the phenothiazine class. Its mechanism of action is not entirely clear but believed to be related to its ability to block dopamine receptors. In up to 40% of those on long term phenothiazines, liver function tests become mildly abnormal.
Fluphenazine came into use in 1959. The injectable form is on the World Health Organization's List of Essential Medicines. It is available as a generic medication. It was discontinued in Australia in 2017.
Medical use
A 2018 Cochrane review found that fluphenazine was an imperfect treatment and other inexpensive drugs less associated with side effects may be an equally effective choice for people with schizophrenia. Another 2018 Cochrane review found that there was limited evidence that newer atypical antipsychotics were more tolerable than fluphenazine. Intramuscular depot injection forms are available as both the decanoate and enanthate esters.
Side effects
Discontinuation
The British National Formulary recommends a gradual withdrawal when discontinuing antipsychotics to avoid acute withdrawal syndrome or rapid relapse. Symptoms of withdrawal commonly include nausea, vomiting, and loss of appetite. Other symptoms may include restlessness, increased sweating, and trouble sleeping. Less commonly there may be a feeling of the world spinning, numbness, or muscle pains. Symptoms generally resolve after a short period of time.
There is tentative evidence that discontinuation of antipsychotics can result in psychosis. It may also result in recurrence of the condition that is being treated. Rarely, tardive dyskinesia can occur when the medication is stopped.
Pharmacology
Pharmacodynamics
Fluphenazine acts primarily by blocking post-synaptic dopaminergic D2 receptors in the basal ganglia, cortical and limbic system. It also blocks α1 adrenergic receptors, muscarinic M1 receptors, and histaminergic H1 receptors.
Pharmacokinetics
History
Fluphenazine came into use in 1959.
Availability
The injectable form is on the World Health Organization's List of Essential Medicines. It is available as a generic medication. It was discontinued in Australia in 2017.
Veterinary
In horses, it is sometimes given by injection as an anxiety-relieving medication, though there are many negative common side effects and it is forbidden by many equestrian competition organizations.
| Biology and health sciences | Psychiatric drugs | Health |
185324 | https://en.wikipedia.org/wiki/Musket | Musket | A musket is a muzzle-loaded long gun that appeared as a smoothbore weapon in the early 16th century, at first as a heavier variant of the arquebus, capable of penetrating plate armour. By the mid-16th century, this type of musket gradually disappeared as the use of heavy armour declined, but musket continued as the generic term for smoothbore long guns until the mid-19th century. In turn, this style of musket was retired in the 19th century when rifled muskets (simply called rifles in modern terminology) using the Minié ball (invented by Claude-Étienne Minié in 1849) became common. The development of breech-loading firearms using self-contained cartridges (introduced by Casimir Lefaucheux in 1835) and the first reliable repeating rifles produced by Winchester Repeating Arms Company in 1860 also led to their demise. By the time that repeating rifles became common, they were known as simply "rifles", ending the era of the musket.
Etymology
According to the Online Etymology Dictionary, firearms were often named after animals, and the word musket derived from the French word mousquette, which is a male sparrowhawk. An alternative theory is that it derives from the 16th-century French mousquet, from the Italian moschetto, meaning the bolt of a crossbow. The Italian moschetto is a diminutive of mosca, a fly.
Terminology
The first recorded usage of the term "musket" or appeared in Europe in the year 1499. Evidence of the musket as a type of firearm does not appear until 1521 when it was used to describe a heavy arquebus capable of penetrating heavy armour. This version of the musket fell out of use after the mid-16th century with the decline of heavy armour; however, the term itself stuck around as a general descriptor for "shoulder arms" fire weapons into the 19th century. The differences between the arquebus and musket post-16th century are therefore not entirely clear, and the two have been used interchangeably on several occasions. According to historian David A. Parrot, the concept of the musket as a legitimate innovation is uncertain and may consist of nothing more than a name change.
Parts of a musket
Trigger guards began appearing in 1575.
Bayonets were attached to muskets in several parts of the world from the late 16th to 17th centuries.
Locks came in many different varieties. Early matchlock and wheel lock mechanisms were replaced by later flintlock mechanisms and finally percussion locks. In some parts of the world, such as China and Japan, the flintlock mechanism never caught on and they continued using matchlocks until the 19th century when percussion locks were introduced.
In the latter half of the 18th century, several improvements were added to the musket. In 1750, a detent was added to prevent the sear from catching in the half-cock notch. A roller bearing was introduced in 1770 to reduce friction and increase sparks. In 1780, waterproof pans were added.
The phrase "lock, stock, and barrel" refers to the three main parts of a musket.
Ammunition
Sixteenth- and 17th-century musketeers used bandoliers which held their pre-measured charges and lead balls.
The Minié ball, which despite its name was actually bullet-shaped and not ball-shaped, was developed in the 1840s. The Minié ball had an expanding skirt which was intended to be used with rifled barrels, leading to what was called the rifled musket, which came into widespread use in the mid-19th century. The Minié ball was small enough in diameter that it could be loaded as quickly as a round ball, even with a barrel that had been fouled with black powder residue after firing many shots, and the expanding skirt of the Minié ball meant that it would still form a tight fit with the barrel and impart a good spin into the round when fired. This gave the rifled musket an effective range of several hundred yards, which was a significant improvement over the smooth bore musket. For example, combat ranges of were achievable using the rifled muskets during the American Civil War.
Musketeers often used paper cartridges, which served a purpose similar to that of modern metallic cartridges in combining bullet and powder charge. A musket cartridge consisted of a pre-measured amount of black powder and ammunition such as a round ball, Nessler ball or Minié ball all wrapped up in paper. Cartridges would then be placed in a cartridge box, which would typically be worn on the musketeer's belt during a battle. Unlike a modern cartridge, this paper cartridge was not simply loaded into the weapon and fired. Instead, the musketeer would tear open the paper (usually with his teeth), pour some of the powder into the pan and the rest into the barrel, follow it with the ammunition (and the paper as wadding if not using a Minié ball), then use the ramrod as normal to push it all into the barrel. While not as fast as loading a modern cartridge, this method did significantly speed up the loading process since the pre-measured charges meant that the musketeer did not have to carefully measure out the black powder with every shot.
Accessories
Some ramrods were equipped with threaded ends, allowing different attachments to be used. One of the more common attachments was a ball screw or ball puller, which was a screw that could be screwed into the lead ball to remove it if it had become jammed in the barrel, similar to the way that a corkscrew is used to remove a wine cork. Another attachment was called a worm, which was used to clear debris from the barrel, such as paper wadding that had not been expelled. Some worm designs were sturdy enough that they could be used to remove stuck ammunition. The worm could also be used with a small piece of cloth for cleaning. A variation on the worm called the "screw and wiper" combined the typical design of a worm with a ball puller's screw.
History
Heavy arquebus
The heavy arquebus known as the musket appeared in Europe by 1521. In response to firearms, thicker armour was produced, from in the 15th century to in the late 16th century. Armour that was thick required nearly three times as much energy to penetrate as did armour that was only thick. During the siege of Parma in 1521, many Spanish soldiers reportedly used an "arquebus with rest", a weapon much larger and more powerful than the regular arquebus. However, at this point, long-barrelled, musket-calibre weapons had been in use as wall-defence weapons in Europe for almost a century.
The musketeers were the first infantry to give up armour entirely. Musketeers began to take cover behind walls or in sunken lanes and sometimes acted as skirmishers to take advantage of their ranged weapons. In England, the musket barrel was cut down from to around 1630. The number of musketeers relative to pikemen increased partly because they were now more mobile than pikemen.
Muskets of the 16th to 19th centuries were accurate enough to hit a target of in diameter at a distance of . At the same distance, musket bullets could penetrate a steel bib about thick, or a wooden shield about thick. The maximum range of the bullet was . The speed of the bullets was between , and the kinetic energy was .
Flintlock musket
The heavy musket went out of favour around the same time the snaphance flintlock was invented in Europe, in 1550. The snaphance was followed by the "true" flintlock in the late 17th century. While the heavy variant of the arquebus died out due to the decline of heavy armour, the term "musket" itself stuck around as a general term for 'shoulder arms' fireweapons, replacing "arquebus," and remained until the 1800s. The differences between the arquebus and musket post-16th century are therefore not entirely clear, and the two have been used interchangeably on several occasions. Flintlocks are not usually associated with arquebuses.
A variation of the musket known as the caliver, a standardized "calibre" (spelled "caliber" in the US), appeared in Europe around 1567–9. According to Jacob de Gheyn, the caliver was a smaller musket that did not require a fork rest. Benerson Little described it as a "light musket".
Asia
Matchlock firearms were used in India by 1500, in Đại Việt by 1516, and in Southeast Asia by 1540. According to a Burmese source from the late 15th century, King Minkhaung II would not dare attack the besieged town of Prome due to the defenders' use of cannon and small arms that were described as muskets, although these were probably early matchlock arquebuses or wall guns.
South Asia
The Portuguese may have introduced muskets to Sri Lanka during their conquest of the coastline and lowlands in 1505, as they regularly used short barrelled matchlocks during combat. However, P. E. P. Deraniyagala points out that the Sinhalese term for gun, 'bondikula', matches the Arabic term for gun, 'bunduk'. Also, certain technical aspects of the early Sri Lankan matchlock were similar to the matchlocks used in the Middle East, thus forming the generally accepted theory that the musket was not entirely new to the island by the time the Portuguese came. In any case, soon native Sri Lankan kingdoms, most notably the Kingdom of Sitawaka and the Kingdom of Kandy, manufactured hundreds of Lankan muskets, with a unique bifurcated stock, longer barrel and smaller calibre, which made it more efficient in directing and using the energy of the gunpowder. These were mastered by the Sri Lankan soldiers to the point where, according to the Portuguese chronicler, Queirós, they could "fire at night to put out a match" and "by day at 60 paces would sever a knife with four or five bullets" and "send as many on the same spot in the target."
Middle East
Despite initial reluctance, the Safavid Empire of Persia rapidly acquired the art of making and using handguns. A Venetian envoy, Vincenzo di Alessandri, in a report presented to the Council of Ten on 24 September 1572, observed:
Japan
During the Sengoku period of Japan, arquebuses were introduced by Portuguese merchantmen from the region of Alentejo in 1543 and by the 1560s were being mass-produced locally. By the end of the 16th century, the production of firearms in Japan reached enormous proportions, which allowed for a successful military operation in Korea during the Japanese invasions of Korea. Korean chief state councillor Yu Sŏngnyong noted the clear superiority of the Japanese musketeers over the Korean archers:
China
Arquebuses were imported by the Ming dynasty (1368–1644) at an uncertain point, but the Ming only began fielding matchlocks in 1548. The Chinese used the term "bird-gun" to refer to arquebuses and Turkish arquebuses may have reached China before Portuguese ones. In Zhao Shizhen's book of 1598 AD, the Shenqipu, there were illustrations of Ottoman Turkish musketeers with detailed illustrations of their muskets, alongside European musketeers with detailed illustrations of their muskets. There was also illustration and description of how the Chinese had adopted the Ottoman kneeling position in firing while using European-made muskets, though Zhao Shizhen described the Turkish muskets as being superior to the European muskets. The Wu Pei Chih (1621) later described Turkish muskets that used a rack and pinion mechanism, which was not known to have been used in any European or Chinese firearms at the time.
Korea
In Korea, the Joseon dynasty underwent a devastating war with the newly unified Japan that lasted from 1592 to 1598. The shock of this encounter spurred the court to undergo a process of military strengthening. One of the core elements of military strengthening was to adopt the musket. According to reformers, "In recent times in China they did not have muskets; they first learned about them from the Wokou pirates in Zhejiang Province. Qi Jiguang trained troops in their use for several years until they [muskets] became one of the skills of the Chinese, who subsequently used them to defeat the Japanese." By 1607 Korean musketeers had been trained in the fashion which Qi Jiguang prescribed, and a drill manual had been produced based on the Chinese leader's Jixiao Xinshu. Of the volley fire, the manual says that "every musketeer squad should either divide into two musketeers per layer or one and deliver fire in five volleys or in ten." Another Korean manual produced in 1649 describes a similar process: "When the enemy approaches to within a hundred paces, a signal gun is fired and a conch is blown, at which the soldiers stand. Then a gong is sounded, the conch stops blowing, and the heavenly swan [a double-reed horn] is sounded, at which the musketeers fire in concert, either all at once or in five volleys (齊放一次盡擧或分五擧)." This training method proved to be quite formidable in the 1619 Battle of Sarhu, in which 10,000 Korean musketeers managed to kill many Manchus before their allies surrendered. While Korea went on to lose both wars against the Manchu invasions of 1627 and 1636, their musketeers were well respected by Manchu leaders. It was the first Qing emperor Hong Taiji who wrote: "The Koreans are incapable on horseback but do not transgress the principles of the military arts. They excel at infantry fighting, especially in musketeer tactics."
Afterwards, the Qing dynasty requested Joseon to aid in their border conflict with Russia. In 1654, 370 Russians engaged a 1,000-man Qing-Joseon force at the mouth of the Songhua River and were defeated by Joseon musketeers. In 1658, five hundred Russians engaged a 1,400-strong Qing-Joseon force and were defeated again by Joseon musketeers. Under the Three Branch System, similar to the Spanish Tercio, Joseon organized their army under firearm troops (artillery and musketeers), archers, and pikemen or swordsmen. The percentage of firearms in the Joseon army rose dramatically as a result of the shorter training period for firearms. In addition, the sulphur mines discovered in Jinsan reduced the expense of producing gunpowder. Under the reign of Sukjong of Joseon (1700s), 76.4% of the local standing army in Chungcheong were musketeers. Under the reign of King Yeongjo, Yoon Pil-Un, Commander of the Sua-chung, improved on firearms with the Chunbochong (천보총), which had a greater range of fire than the existing ones. Its usage is thought to have been similar to the Afghan jezail or American long rifle.
Outside Eurasia
During the Musket Wars period in New Zealand, between 1805 and 1843, at least 500 conflicts took place between various Māori tribes—often using trade muskets in addition to traditional Māori weapons. The muskets were initially cheap Birmingham muskets designed for the use of coarse-grained black powder. Māori favoured the shorter-barrelled versions. Some tribes took advantage of runaway sailors and escaped convicts to expand their understanding of muskets. Early missionaries—one of whom was a trained gunsmith—refused to help Māori repair muskets. Later, common practice was to enlarge the percussion hole and to hold progressively smaller lead balls between the fingers so that muskets could fire several shots without having to remove fouling. Likewise, Māori resorted to thumping the butt of the musket on the ground to settle the ball instead of using a ramrod. Māori favoured the use of the double-barrelled shotgun (Tuparra – two barrel) during fighting, often using women to reload the weapons when fighting from a pā (fortified village or hillfort). They often resorted to using nails, stones or anything convenient as "shot". From the 1850s, Māori were able to obtain superior military-style muskets with greater range. One contemporary account was written by a Pākehā (European) who lived among Māori, spoke the language fluently, had a Māori wife and took part in many intertribal conflicts as a warrior.
Replacement by the rifle
The musket was a smoothbore firearm and lacked rifling grooves that would have spun the bullet in such a way as to increase its accuracy. The last contact with the musket barrel gives the ball a spin around an axis at right angles to the direction of flight. The aerodynamics result in the ball veering off in a random direction from the aiming point. The practice of rifling, putting grooves in the barrel of a weapon, causing the projectile to spin on the same axis as the line of flight, prevented this veering off from the aiming point. Rifles already existed in Europe by the late 15th century, but they were primarily used as sporting weapons and had little presence in warfare. The problem with rifles was the tendency for powder fouling to accumulate in the rifling, making the piece more difficult to load with each shot. Eventually, the weapon could not be loaded until the bore was wiped clean. For this reason, smoothbore muskets remained the primary firearm of most armies until the mid-19th century. It was not until 1611 that rifles started seeing some limited usage in warfare by Denmark. Around 1750, rifles began to be used by skirmishers of Frederick the Great, recruited in 1744 from a Jäger unit of game-keepers and foresters, but the rifle's slow rate of fire still restricted their usage.
The invention of the Minié ball in 1849 solved both major problems of muzzle-loading rifles. Rifled muskets of the mid-19th century, like the Springfield Model 1861 which dealt heavy casualties at the Battle of Four Lakes, were significantly more accurate, with the ability to hit a man-sized target at a distance of or more. The smoothbore musket generally allowed no more than with any accuracy.
The Crimean War (1853–1856) saw the first widespread use of the rifled musket for the common infantryman and by the time of the American Civil War (1861–1865) most infantry were equipped with the rifled musket. These were far more accurate than smoothbore muskets and had a far longer range, while preserving the musket's comparatively faster reloading rate. Their use led to a decline in the use of massed attacking formations, as these formations were too vulnerable to the accurate, long-range fire a rifle could produce. In particular, attacking troops were within range of the defenders for a longer period of time, and the defenders could also fire at them more quickly than before. As a result, while 18th-century attackers would only be within range of the defenders' weapons for the time it would take to fire a few shots, late-19th-century attackers might suffer dozens of volleys before they drew close to the defenders, with correspondingly high casualty rates. However, the use of massed attacks on fortified positions were not immediately replaced with new tactics, and as a result, major wars of the late 19th century and early 20th century tended to produce very high casualty figures.
Operation
Many soldiers preferred to reduce the standard musket reloading procedures to increase the speed of fire. This statement is from Thomas Anburey who served as a lieutenant in Burgoyne's army:
"Here I cannot help observing to you, whether it proceeded from an idea of self-preservation, or natural instinct, but the soldiers greatly improved the mode they were taught in, as to expedition. For as soon as they had primed their pieces and put the cartridge into the barrel, instead of ramming it down with their rods, they struck the butt end of the piece upon the ground, and bringing it to the present, fired it off". This practice was known as 'tap-loading'.
Tactics
Countermarch
As muskets became the default weapon of armies, the slow reloading time became an increasing problem. The difficulty of reloading—and thus the time needed to do it—was diminished by making the musket ball much smaller than the internal diameter of the barrel, so as the interior of the barrel became dirty from soot from previously fired rounds, the musket ball from the next shot could still be easily rammed. To keep the ball in place once the weapon was loaded, it would be partially wrapped in a small piece of cloth. However, the smaller ball could move within the barrel as the musket was fired, decreasing the accuracy of musket fire (it was complained that it took a man's weight in lead musket balls to kill him).
The development of volley fire—by the Ottomans, the Chinese, the Japanese, and the Dutch—made muskets more feasible for widespread adoption by the military. The volley fire technique transformed soldiers carrying firearms into organized firing squads with each row of soldiers firing in turn and reloading in a systematic fashion. Volley fire was implemented with cannons as early as 1388 by Ming artillerists, but volley fire with matchlocks was not implemented until 1526 when the Ottoman Janissaries used it during the Battle of Mohács. The matchlock volley fire technique was next seen in mid-16th-century China as pioneered by Qi Jiguang and in late-16th-century Japan. Qi Jiguang elaborates on his countermarch volley fire technique in the Jixiao Xinshu:
Frederick Lewis Taylor claims that a kneeling volley fire may have been employed by Prospero Colonna's arquebusiers as early as the Battle of Bicocca (1522). However, this has been called into question by Tonio Andrade who believes this is an over interpretation as well as mis-citation of a passage by Charles Oman suggesting that the Spanish arquebusiers kneeled to reload, when in fact Oman never made such a claim. This is contested by Idan Sherer, who quotes Paolo Giovio saying that the arquebusiers kneeled to reload so that the second line of arquebusiers could fire without endangering those in front of them.
European gunners might have implemented countermarch to some extent since at least 1579 when the Englishman Thomas Digges suggested that musketeers should, "after the old Romane manner make three or four several fronts, with convenient spaces for the first to retire and unite himselfe with the second, and both these if occasion so require, with the third; the shot [musketeers] having their convenient lanes continually during the fight to discharge their peces." The Spanish too displayed some awareness of the volley technique. Martín de Eguiluz described it in the military manual, Milicia, Discurso y Regla Militar, dating to 1586: "Start with three files of five soldiers each, separated one from the other by fifteen paces, and they should comport themselves not with fury but with calm skillfulness [con reposo diestramente] such that when the first file has finished shooting they make space for the next (which is coming up to shoot) without turning face, countermarching [contrapassando] to the left but showing the enemy only the side of their bodies, which is the narrowest of the body, and [taking their place at the rear] about one to three steps behind, with five or six pellets in their mouths, and two lighted matchlock fuses ... and they load [their pieces] promptly ... and return to shoot when it's their turn again." Most historians, including Geoffrey Parker, have ignored Eguiluz, and have erroneously attributed the invention of the countermarch to Maurice of Nassau, although the publication of the Milicia, Discurso y Regla Militar antedates Maurice's first letter on the subject by two years. Regardless, it is clear that the concept of volley fire had existed in Europe for quite some time during the 16th century, but it was in the Netherlands during the 1590s that the musketry volley really took off. The key to this development was William Louis, Count of Nassau-Dillenburg who in 1594 described the technique in a letter to his cousin:
Light infantry
In the 18th century, regular light infantry began to emerge. In contrast to the front-line infantry, they fought in loose formation, making use of natural cover and folds in the terrain, and they were better prepared to engage individual targets. These troops were intended to fight irregular enemy forces such as militia, guerrillas and native fighters. At the beginning of the 19th century, the number of light infantry increased dramatically: in the French army, light infantry accounted for 25% of the infantry, while the Russian Army formed 50 light infantry regiments and one company in each battalion, so that light infantry made up about 40% of its entire infantry.
Attack column
During the French Revolutionary Wars of the late 18th century, the French devised a new tactic: the colonne d'attaque, or attack column, consisting of anything from one regiment up to two brigades of infantry. Instead of advancing slowly across the battlefield in line formations, the French infantry were brought forward in such columns, preceded by masses of skirmishers who covered and masked their advance. The column would then normally deploy into line just before engaging the enemy with either fire or the bayonet. This gave the French Revolutionary and Napoleonic infantry a much greater degree of mobility than their Ancien Régime opponents, and it also allowed much closer cooperation of infantry with cavalry and artillery, which were free to move between the infantry columns rather than being trapped within a linear formation. The colonne d'attaque was subsequently adopted by all European armies during and after the Napoleonic Wars. While some British historians, such as Sir Charles Oman, have postulated that the standard French tactic was to charge enemy lines of infantry head on with these columns, relying on the morale effect of the huge column, and hence they were often beaten off by the devastating firepower of the redcoats, more recent research into the subject has shown that such occasions were far from the norm and that the French normally tried to deploy into line before combat as well.
| Technology | Projectile weapons | null |
185427 | https://en.wikipedia.org/wiki/Function%20%28mathematics%29 | Function (mathematics) | In mathematics, a function from a set to a set assigns to each element of exactly one element of . The set is called the domain of the function and the set is called the codomain of the function.
Functions were originally the idealization of how a varying quantity depends on another quantity. For example, the position of a planet is a function of time. Historically, the concept was elaborated with the infinitesimal calculus at the end of the 17th century, and, until the 19th century, the functions that were considered were differentiable (that is, they had a high degree of regularity). The concept of a function was formalized at the end of the 19th century in terms of set theory, and this greatly increased the possible applications of the concept.
A function is often denoted by a letter such as , or . The value of a function at an element of its domain (that is, the element of the codomain that is associated with ) is denoted by ; for example, the value of at is denoted by . Commonly, a specific function is defined by means of an expression depending on , such as in this case, some computation, called , may be needed for deducing the value of the function at a particular value; for example, if then
Given its domain and its codomain, a function is uniquely represented by the set of all pairs , called the graph of the function, a popular means of illustrating the function. When the domain and the codomain are sets of real numbers, each such pair may be thought of as the Cartesian coordinates of a point in the plane.
Functions are widely used in science, engineering, and in most fields of mathematics. It has been said that functions are "the central objects of investigation" in most fields of mathematics.
The concept of a function has evolved significantly over centuries, from its informal origins in ancient mathematics to its formalization in the 19th century. See History of the function concept for details.
Definition
A function from a set to a set is an assignment of one element of to each element of . The set is called the domain of the function and the set is called the codomain of the function.
If the element in is assigned to in by the function , one says that maps to , and this is commonly written In this notation, is the argument or variable of the function. A specific element of is a value of the variable, and the corresponding element of is the value of the function at , or the image of under the function.
A function , its domain , and its codomain are often specified by the notation One may write instead of , where the symbol (read 'maps to') is used to specify where a particular element in the domain is mapped to by . This allows the definition of a function without naming. For example, the square function is the function
The domain and codomain are not always explicitly given when a function is defined. In particular, it is common that one might only know, without some (possibly difficult) computation, that the domain of a specific function is contained in a larger set. For example, if is a real function, the determination of the domain of the function requires knowing the zeros of the function involved. This is one of the reasons for which, in mathematical analysis, a "function" may refer to a function having a proper subset of as a domain. For example, a "function from the reals to the reals" may refer to a real-valued function of a real variable whose domain is a proper subset of the real numbers, typically a subset that contains a non-empty open interval. Such a function is then called a partial function.
The range or image of a function is the set of the images of all elements in the domain.
A function on a set means a function from the domain , without specifying a codomain. However, some authors use it as shorthand for saying that the function is .
Formal definition
The above definition of a function is essentially that of the founders of calculus, Leibniz, Newton and Euler. However, it cannot be formalized, since there is no mathematical definition of an "assignment". It is only at the end of the 19th century that the first formal definition of a function could be provided, in terms of set theory. This set-theoretic definition is based on the fact that a function establishes a relation between the elements of the domain and some (possibly all) elements of the codomain. Mathematically, a binary relation between two sets and is a subset of the set of all ordered pairs such that and The set of all these pairs is called the Cartesian product of and and denoted Thus, the above definition may be formalized as follows.
A function with domain and codomain is a binary relation between and that satisfies the two following conditions:
For every in there exists in such that
If and then
This definition may be rewritten more formally, without referring explicitly to the concept of a relation, but using more notation (including set-builder notation):
A function is formed by three sets, the domain the codomain and the graph that satisfy the three following conditions.
Partial functions
Partial functions are defined similarly to ordinary functions, with the "total" condition removed. That is, a partial function from to is a binary relation between and such that, for every there is at most one in such that
Using functional notation, this means that, given either is in , or it is undefined.
The set of the elements of such that is defined and belongs to is called the domain of definition of the function. A partial function from to is thus an ordinary function that has as its domain a subset of called the domain of definition of the function. If the domain of definition equals , one often says that the partial function is a total function.
In several areas of mathematics the term "function" refers to partial functions rather than to ordinary functions. This is typically the case when functions may be specified in a way that makes it difficult or even impossible to determine their domain.
In calculus, a real-valued function of a real variable or real function is a partial function from the set of the real numbers to itself. Given a real function its multiplicative inverse is also a real function. The determination of the domain of definition of a multiplicative inverse of a (partial) function amounts to computing the zeros of the function, the values where the function is defined but its multiplicative inverse is not.
Similarly, a function of a complex variable is generally a partial function with a domain of definition included in the set of the complex numbers. The difficulty of determining the domain of definition of a complex function is illustrated by the multiplicative inverse of the Riemann zeta function: the determination of the domain of definition of the function is more or less equivalent to the proof or disproof of one of the major open problems in mathematics, the Riemann hypothesis.
In computability theory, a general recursive function is a partial function from the integers to the integers whose values can be computed by an algorithm (roughly speaking). The domain of definition of such a function is the set of inputs for which the algorithm does not run forever. A fundamental theorem of computability theory is that there cannot exist an algorithm that takes an arbitrary general recursive function as input and tests whether belongs to its domain of definition (see Halting problem).
Multivariate functions
A multivariate function, multivariable function, or function of several variables is a function that depends on several arguments. Such functions are commonly encountered. For example, the position of a car on a road is a function of the time travelled and its average speed.
Formally, a function of variables is a function whose domain is a set of -tuples.
For example, multiplication of integers is a function of two variables, or bivariate function, whose domain is the set of all ordered pairs (2-tuples) of integers, and whose codomain is the set of integers. The same is true for every binary operation. Commonly, an -tuple is denoted enclosed between parentheses, such as in When using functional notation, one usually omits the parentheses surrounding tuples, writing instead of
Given sets the set of all -tuples such that is called the Cartesian product of and denoted
Therefore, a multivariate function is a function that has a Cartesian product or a proper subset of a Cartesian product as a domain.
where the domain has the form
If all the are equal to the set of the real numbers or to the set of the complex numbers, one talks respectively of a function of several real variables or of a function of several complex variables.
Notation
There are various standard ways for denoting functions. The most commonly used notation is functional notation, which is the first notation described below.
Functional notation
The functional notation requires that a name is given to the function, which, in the case of an unspecified function, is often the letter . Then, the application of the function to an argument is denoted by its name followed by its argument (or, in the case of a multivariate function, its arguments) enclosed between parentheses, such as in
The argument between the parentheses may be a variable, often , that represents an arbitrary element of the domain of the function, a specific element of the domain ( in the above example), or an expression that can be evaluated to an element of the domain ( in the above example). The use of an unspecified variable between parentheses is useful for defining a function explicitly such as in "let ".
When the symbol denoting the function consists of several characters and no ambiguity may arise, the parentheses of functional notation might be omitted. For example, it is common to write instead of .
Functional notation was first used by Leonhard Euler in 1734. Some widely used functions are represented by a symbol consisting of several letters (usually two or three, generally an abbreviation of their name). In this case, a roman type is customarily used instead, such as "sin" for the sine function, in contrast to italic font for single-letter symbols.
The functional notation is often used colloquially for referring to a function and simultaneously naming its argument, such as in "let be a function". This is an abuse of notation that is useful for a simpler formulation.
Arrow notation
Arrow notation defines the rule of a function inline, without requiring a name to be given to the function. It uses the ↦ arrow symbol, pronounced "maps to". For example, is the function which takes a real number as input and outputs that number plus 1. Again, a domain and codomain of is implied.
The domain and codomain can also be explicitly stated, for example:
This defines a function from the integers to the integers that returns the square of its input.
As a common application of the arrow notation, suppose is a function in two variables, and we want to refer to a partially applied function produced by fixing the second argument to a given value without introducing a new function name. The map in question could be denoted using the arrow notation. The expression (read: "the map taking to of comma nought") represents this new function with just one argument, whereas the expression refers to the value of the function at that point.
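In programming terms this corresponds to anonymous functions and partial application. The sketch below is illustrative only (Python, with hypothetical names f, g and increment; none of them come from the source): it fixes the second argument of a two-variable function, mirroring the partially applied map described above.

```python
from functools import partial

# A hypothetical function of two variables, analogous to (x, t) |-> f(x, t).
def f(x, t):
    return x ** 2 + t

# Arrow-style anonymous function: x |-> x + 1.
increment = lambda x: x + 1

# Fix the second argument at t = 0, producing the one-argument map x |-> f(x, 0).
g = partial(f, t=0)

print(increment(4))   # 5
print(g(3))           # 9, i.e. f(3, 0)
print(f(3, 0))        # 9, the same value
```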
Index notation
Index notation may be used instead of functional notation. That is, instead of writing , one writes
This is typically the case for functions whose domain is the set of the natural numbers. Such a function is called a sequence, and, in this case the element is called the th element of the sequence.
The index notation can also be used for distinguishing some variables called parameters from the "true variables". In fact, parameters are specific variables that are considered as being fixed during the study of a problem. For example, the map (see above) would be denoted using index notation, if we define the collection of maps by the formula for all .
Dot notation
In the notation
the symbol does not represent any value; it is simply a placeholder, meaning that, if is replaced by any value on the left of the arrow, it should be replaced by the same value on the right of the arrow. Therefore, may be replaced by any symbol, often an interpunct "·". This may be useful for distinguishing the function from its value at .
For example, may stand for the function , and may stand for a function defined by an integral with variable upper bound: .
Specialized notations
There are other, specialized notations for functions in sub-disciplines of mathematics. For example, in linear algebra and functional analysis, linear forms and the vectors they act upon are denoted using a dual pair to show the underlying duality. This is similar to the use of bra–ket notation in quantum mechanics. In logic and the theory of computation, the function notation of lambda calculus is used to explicitly express the basic notions of function abstraction and application. In category theory and homological algebra, networks of functions are described in terms of how they and their compositions commute with each other using commutative diagrams that extend and generalize the arrow notation for functions described above.
Functions of more than one variable
In some cases the argument of a function may be an ordered pair of elements taken from some set or sets. For example, a function can be defined as mapping any pair of real numbers to the sum of their squares, . Such a function is commonly written as and referred to as "a function of two variables". Likewise one can have a function of three or more variables, with notations such as , .
Other terms
A function may also be called a map or a mapping, but some authors make a distinction between the term "map" and "function". For example, the term "map" is often reserved for a "function" with some sort of special structure (e.g. maps of manifolds). In particular map may be used in place of homomorphism for the sake of succinctness (e.g., linear map or map from to instead of group homomorphism from to ). Some authors reserve the word mapping for the case where the structure of the codomain belongs explicitly to the definition of the function.
Some authors, such as Serge Lang, use "function" only to refer to maps for which the codomain is a subset of the real or complex numbers, and use the term mapping for more general functions.
In the theory of dynamical systems, a map denotes an evolution function used to create discrete dynamical systems. | Mathematics | Mathematics: General | null |
185663 | https://en.wikipedia.org/wiki/Division%20by%20zero | Division by zero | In mathematics, division by zero, division where the divisor (denominator) is zero, is a unique and problematic special case. Using fraction notation, the general example can be written as , where is the dividend (numerator).
The usual definition of the quotient in elementary arithmetic is the number which yields the dividend when multiplied by the divisor. That is, is equivalent to By this definition, the quotient is nonsensical, as the product is always rather than some other number Following the ordinary rules of elementary algebra while allowing division by zero can create a mathematical fallacy, a subtle mistake leading to absurd results. To prevent this, the arithmetic of real numbers and more general numerical structures called fields leaves division by zero undefined, and situations where division by zero might occur must be treated with care. Since any number multiplied by zero is zero, the expression is also undefined.
Calculus studies the behavior of functions in the limit as their input tends to some value. When a real function can be expressed as a fraction whose denominator tends to zero, the output of the function becomes arbitrarily large, and is said to "tend to infinity", a type of mathematical singularity. For example, the reciprocal function, tends to infinity as tends to When both the numerator and the denominator tend to zero at the same input, the expression is said to take an indeterminate form, as the resulting limit depends on the specific functions forming the fraction and cannot be determined from their separate limits.
As an alternative to the common convention of working with fields such as the real numbers and leaving division by zero undefined, it is possible to define the result of division by zero in other ways, resulting in different number systems. For example, the quotient can be defined to equal zero; it can be defined to equal a new explicit point at infinity, sometimes denoted by the infinity symbol or it can be defined to result in signed infinity, with positive or negative sign depending on the sign of the dividend. In these number systems division by zero is no longer a special exception per se, but the point or points at infinity involve their own new types of exceptional behavior.
In computing, an error may result from an attempt to divide by zero. Depending on the context and the type of number involved, dividing by zero may evaluate to positive or negative infinity, return a special not-a-number value, or crash the program, among other possibilities.
Elementary arithmetic
The meaning of division
The division can be conceptually interpreted in several ways.
In quotitive division, the dividend is imagined to be split up into parts of size (the divisor), and the quotient is the number of resulting parts. For example, imagine ten slices of bread are to be made into sandwiches, each requiring two slices of bread. A total of five sandwiches can be made. Now imagine instead that zero slices of bread are required per sandwich (perhaps a lettuce wrap). Arbitrarily many such sandwiches can be made from ten slices of bread, as the bread is irrelevant.
The quotitive concept of division lends itself to calculation by repeated subtraction: dividing entails counting how many times the divisor can be subtracted before the dividend runs out. Because no finite number of subtractions of zero will ever exhaust a non-zero dividend, calculating division by zero in this way never terminates. Such an interminable division-by-zero algorithm is physically exhibited by some mechanical calculators.
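A minimal sketch (Python; the function name and the step guard are illustrative assumptions, not from the source) of the repeated-subtraction procedure and of why it cannot terminate when the divisor is zero:

```python
def quotitive_divide(dividend, divisor, max_steps=1_000_000):
    """Count how many times `divisor` can be subtracted from `dividend`.

    With divisor == 0 the remainder never shrinks, so the loop would run
    forever; `max_steps` is an illustrative guard, not part of the method.
    """
    count, remainder = 0, dividend
    while remainder > 0 and remainder >= divisor:
        if count >= max_steps:
            raise RuntimeError("repeated subtraction of 0 never terminates")
        remainder -= divisor
        count += 1
    return count, remainder

print(quotitive_divide(10, 2))   # (5, 0): five sandwiches, no bread left over
# quotitive_divide(10, 0) would exhaust the guard and raise instead of finishing.
```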
In partitive division, the dividend is imagined to be split into parts, and the quotient is the resulting size of each part. For example, imagine ten cookies are to be divided among two friends. Each friend will receive five cookies. Now imagine instead that the ten cookies are to be divided among zero friends. How many cookies will each friend receive? Since there are no friends, this is an absurdity.
In another interpretation, the quotient represents the ratio For example, a cake recipe might call for ten cups of flour and two cups of sugar, a ratio of or, proportionally, To scale this recipe to larger or smaller quantities of cake, a ratio of flour to sugar proportional to could be maintained, for instance one cup of flour and one-fifth cup of sugar, or fifty cups of flour and ten cups of sugar. Now imagine a sugar-free cake recipe calls for ten cups of flour and zero cups of sugar. The ratio or proportionally is perfectly sensible: it just means that the cake has no sugar. However, the question "How many parts flour for each part sugar?" still has no meaningful numerical answer.
A geometrical appearance of the division-as-ratio interpretation is the slope of a straight line in the Cartesian plane. The slope is defined to be the "rise" (change in vertical coordinate) divided by the "run" (change in horizontal coordinate) along the line. When this is written using the symmetrical ratio notation, a horizontal line has slope 0:1 and a vertical line has slope 1:0. However, if the slope is taken to be a single real number then a horizontal line has slope 0, while a vertical line has an undefined slope, since in real-number arithmetic the quotient 1/0 is undefined. The real-valued slope of a line through the origin is the vertical coordinate of its intersection with the vertical line at horizontal coordinate 1. A vertical line through the origin is parallel to this reference line, so the two have no intersection in the plane. Sometimes they are said to intersect at a point at infinity, and the ratio 1:0 is then represented by a new number, ∞ (see below). Vertical lines are sometimes said to have an "infinitely steep" slope.
Inverse of multiplication
Division is the inverse of multiplication, meaning that multiplying and then dividing by the same non-zero quantity, or vice versa, leaves an original quantity unchanged; for example, six multiplied by three and then divided by three is again six. Thus a division problem such as six divided by three can be solved by rewriting it as an equivalent equation involving multiplication, three times an unknown quantity equals six, where the unknown quantity represents the sought quotient, and then finding the value for which the statement is true; in this case the unknown quantity is two, because three times two is six, and therefore six divided by three is two.
An analogous problem involving division by zero, six divided by zero, requires determining an unknown quantity satisfying zero times the unknown equals six. However, any number multiplied by zero is zero rather than six, so there exists no number which can substitute for the unknown to make a true statement.
When the problem is changed to dividing zero by zero, the equivalent multiplicative statement is that zero times the unknown quantity equals zero; in this case any value can be substituted for the unknown quantity to yield a true statement, so there is no single number which can be assigned as the quotient of zero by zero.
Because of these difficulties, quotients where the divisor is zero are traditionally taken to be undefined, and division by zero is not allowed.
Fallacies
A compelling reason for not allowing division by zero is that allowing it leads to fallacies.
When working with numbers, it is easy to identify an illegal division by zero. For example:
From 0 × 1 = 0 and 0 × 2 = 0, one gets 0 × 1 = 0 × 2. Cancelling 0 from both sides yields 1 = 2, a false statement.
The fallacy here arises from the assumption that it is legitimate to cancel 0 like any other number, whereas, in fact, doing so is a form of division by 0.
Using algebra, it is possible to disguise a division by zero to obtain an invalid proof. For example:
This is essentially the same fallacious computation as the previous numerical version, but the division by zero was obfuscated because we wrote as .
Early attempts
The Brāhmasphuṭasiddhānta of Brahmagupta (c. 598–668) is the earliest text to treat zero as a number in its own right and to define operations involving zero. According to Brahmagupta,
A positive or negative number when divided by zero is a fraction with the zero as denominator. Zero divided by a negative or positive number is either zero or is expressed as a fraction with zero as numerator and the finite quantity as denominator. Zero divided by zero is zero.
In 830, in his book Ganita Sara Samgraha, Mahāvīra unsuccessfully tried to correct Brahmagupta's mistake: "A number remains unchanged when divided by zero."
Bhāskara II's Līlāvatī (12th century) proposed that division by zero results in an infinite quantity,
A quantity divided by zero becomes a fraction the denominator of which is zero. This fraction is termed an infinite quantity. In this quantity consisting of that which has zero for its divisor, there is no alteration, though many may be inserted or extracted; as no change takes place in the infinite and immutable God when worlds are created or destroyed, though numerous orders of beings are absorbed or put forth.
Historically, one of the earliest recorded references to the mathematical impossibility of assigning a value to is contained in Anglo-Irish philosopher George Berkeley's criticism of infinitesimal calculus in 1734 in The Analyst ("ghosts of departed quantities").
Calculus
Calculus studies the behavior of functions using the concept of a limit, the value to which a function's output tends as its input tends to some specific value. The notation means that the value of the function can be made arbitrarily close to by choosing sufficiently close to
In the case where the limit of the real function increases without bound as its input tends to the singular value, the function is not defined there, a type of mathematical singularity. Instead, the function is said to "tend to infinity", and its graph has a vertical asymptote at that input. While such a function is not formally defined at the singularity, and the infinity symbol in this case does not represent any specific real number, such limits are informally said to "equal infinity". If the value of the function decreases without bound, the function is said to "tend to negative infinity". In some cases a function tends to two different values as its input tends to the singular value from above and from below; such a function has two distinct one-sided limits.
A basic example of an infinite singularity is the reciprocal function, which tends to positive or negative infinity as tends to
In most cases, the limit of a quotient of functions is equal to the quotient of the limits of each function separately,
However, when a function is constructed by dividing two functions whose separate limits are both equal to 0, then the limit of the result cannot be determined from the separate limits, so it is said to take an indeterminate form, informally written 0/0. (Another indeterminate form, ∞/∞, results from dividing two functions whose limits both tend to infinity.) Such a limit may equal any real value, may tend to infinity, or may not converge at all, depending on the particular functions. When the separate limits of the numerator and denominator are both 0, the quotient takes the indeterminate form 0/0, but simplifying the quotient first can show that the limit nevertheless exists.
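The specific quotient used as the example in the source is not reproduced here; a standard illustration (an assumption, not the original example) is the following limit, where both numerator and denominator tend to 0 at x = 1 yet the simplified quotient has limit 2:

```latex
\lim_{x \to 1} \frac{x^{2} - 1}{x - 1}
  = \lim_{x \to 1} \frac{(x - 1)(x + 1)}{x - 1}
  = \lim_{x \to 1} (x + 1)
  = 2
```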
Alternative number systems
Extended real line
The affinely extended real numbers are obtained from the real numbers by adding two new numbers and read as "positive infinity" and "negative infinity" respectively, and representing points at infinity. With the addition of the concept of a "limit at infinity" can be made to work like a finite limit. When dealing with both positive and negative extended real numbers, the expression is usually left undefined. However, in contexts where only non-negative values are considered, it is often convenient to define .
Projectively extended real line
The set is the projectively extended real line, which is a one-point compactification of the real line. Here means an unsigned infinity or point at infinity, an infinite quantity that is neither positive nor negative. This quantity satisfies , which is necessary in this context. In this structure, can be defined for nonzero , and when is not . It is the natural way to view the range of the tangent function and cotangent functions of trigonometry: approaches the single point at infinity as approaches either or from either direction.
This definition leads to many interesting results. However, the resulting algebraic structure is not a field, and should not be expected to behave like one. For example, is undefined in this extension of the real line.
Riemann sphere
The subject of complex analysis applies the concepts of calculus in the complex numbers. Of major importance in this subject is the extended complex numbers the set of complex numbers with a single additional number appended, usually denoted by the infinity symbol and representing a point at infinity, which is defined to be contained in every exterior domain, making those its topological neighborhoods.
This can intuitively be thought of as wrapping up the infinite edges of the complex plane and pinning them together at the single point a one-point compactification, making the extended complex numbers topologically equivalent to a sphere. This equivalence can be extended to a metrical equivalence by mapping each complex number to a point on the sphere via inverse stereographic projection, with the resulting spherical distance applied as a new definition of distance between complex numbers; and in general the geometry of the sphere can be studied using complex arithmetic, and conversely complex arithmetic can be interpreted in terms of spherical geometry. As a consequence, the set of extended complex numbers is often called the Riemann sphere. The set is usually denoted by the symbol for the complex numbers decorated by an asterisk, overline, tilde, or circumflex, for example
In the extended complex numbers, for any nonzero complex number ordinary complex arithmetic is extended by the additional rules However, , , and are left undefined.
Higher mathematics
The four basic operations – addition, subtraction, multiplication and division – as applied to whole numbers (positive integers), with some restrictions, in elementary arithmetic are used as a framework to support the extension of the realm of numbers to which they apply. For instance, to make it possible to subtract any whole number from another, the realm of numbers must be expanded to the entire set of integers in order to incorporate the negative integers. Similarly, to support division of any integer by any other, the realm of numbers must expand to the rational numbers. During this gradual expansion of the number system, care is taken to ensure that the "extended operations", when applied to the older numbers, do not produce different results. Loosely speaking, since division by zero has no meaning (is undefined) in the whole number setting, this remains true as the setting expands to the real or even complex numbers.
As the realm of numbers to which these operations can be applied expands there are also changes in how the operations are viewed. For instance, in the realm of integers, subtraction is no longer considered a basic operation since it can be replaced by addition of signed numbers. Similarly, when the realm of numbers expands to include the rational numbers, division is replaced by multiplication by certain rational numbers. In keeping with this change of viewpoint, the question, "Why can't we divide by zero?", becomes "Why can't a rational number have a zero denominator?". Answering this revised question precisely requires close examination of the definition of rational numbers.
In the modern approach to constructing the field of real numbers, the rational numbers appear as an intermediate step in the development that is founded on set theory. First, the natural numbers (including zero) are established on an axiomatic basis such as Peano's axiom system and then this is expanded to the ring of integers. The next step is to define the rational numbers keeping in mind that this must be done using only the sets and operations that have already been established, namely, addition, multiplication and the integers. Starting with the set of ordered pairs (a, b) of integers with b ≠ 0, define a binary relation on this set by (a, b) ≃ (c, d) if and only if ad = bc. This relation is shown to be an equivalence relation and its equivalence classes are then defined to be the rational numbers. It is in the formal proof that this relation is an equivalence relation that the requirement that the second coordinate is not zero is needed (for verifying transitivity).
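As a hedged sketch (Python, with the illustrative helper name `related`; not from the source), one can check directly that the relation ad = bc stops being transitive once pairs with a zero second coordinate are admitted, which is why that case must be excluded:

```python
def related(p, q):
    """(a, b) ~ (c, d)  iff  a*d == b*c."""
    (a, b), (c, d) = p, q
    return a * d == b * c

# With nonzero second coordinates the relation behaves like equality of fractions.
print(related((1, 2), (2, 4)))   # True: 1/2 and 2/4 represent the same rational

# Admitting a zero second coordinate breaks transitivity:
p, q, r = (1, 0), (0, 0), (2, 3)
print(related(p, q), related(q, r), related(p, r))   # True True False
```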
Although division by zero cannot be sensibly defined with real numbers and integers, it is possible to consistently define it, or similar operations, in other mathematical structures.
Non-standard analysis
In the hyperreal numbers, division by zero is still impossible, but division by non-zero infinitesimals is possible. The same holds true in the surreal numbers.
Distribution theory
In distribution theory one can extend the function to a distribution on the whole space of real numbers (in effect by using Cauchy principal values). It does not, however, make sense to ask for a "value" of this distribution at x = 0; a sophisticated answer refers to the singular support of the distribution.
Linear algebra
In matrix algebra, square or rectangular blocks of numbers are manipulated as though they were numbers themselves: matrices can be added and multiplied, and in some cases, a version of division also exists. Dividing by a matrix means, more precisely, multiplying by its inverse. Not all matrices have inverses. For example, a matrix containing only zeros is not invertible.
One can define a pseudo-division, by setting a/b = ab+, in which b+ represents the pseudoinverse of b. It can be proven that if b−1 exists, then b+ = b−1. If b equals 0, then b+ = 0.
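A brief illustration (using NumPy's `numpy.linalg.pinv`, a library choice not mentioned in the source): the pseudoinverse agrees with the ordinary inverse when the latter exists, and the pseudoinverse of the zero matrix is the zero matrix, so the pseudo-division a/b = ab+ described above sends "division by zero" to zero.

```python
import numpy as np

b = np.array([[2.0, 0.0],
              [0.0, 4.0]])        # invertible matrix
zero = np.zeros((2, 2))           # the zero matrix, not invertible

# When b is invertible, the pseudoinverse coincides with the ordinary inverse.
print(np.allclose(np.linalg.pinv(b), np.linalg.inv(b)))   # True

# The pseudoinverse of the zero matrix is the zero matrix.
print(np.linalg.pinv(zero))                               # 2x2 zero matrix

# Pseudo-division a "/" b := a @ pinv(b); dividing by the zero matrix gives zero.
a = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(a @ np.linalg.pinv(zero))                           # 2x2 zero matrix
```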
Abstract algebra
In abstract algebra, the integers, the rational numbers, the real numbers, and the complex numbers can be abstracted to more general algebraic structures, such as a commutative ring, which is a mathematical structure where addition, subtraction, and multiplication behave as they do in the more familiar number systems, but division may not be defined. Adjoining multiplicative inverses to a commutative ring is called localization. However, the localization of every commutative ring at zero is the trivial ring, where 0 = 1, so nontrivial commutative rings do not have inverses at zero, and thus division by zero is undefined for nontrivial commutative rings.
Nevertheless, any number system that forms a commutative ring can be extended to a structure called a wheel in which division by zero is always possible. However, the resulting mathematical structure is no longer a commutative ring, as multiplication no longer distributes over addition. Furthermore, in a wheel, division of an element by itself no longer results in the multiplicative identity element , and if the original system was an integral domain, the multiplication in the wheel no longer results in a cancellative semigroup.
The concepts applied to standard arithmetic are similar to those in more general algebraic structures, such as rings and fields. In a field, every nonzero element is invertible under multiplication; as above, division poses problems only when attempting to divide by zero. This is likewise true in a skew field (which for this reason is called a division ring). However, in other rings, division by nonzero elements may also pose problems. Consider, for example, the ring Z/6Z of integers mod 6. The meaning of the expression 2/2 should be the solution x of the equation 2x = 2. But in the ring Z/6Z, 2 is a zero divisor. This equation has two distinct solutions, x = 1 and x = 4, so the expression 2/2 is undefined.
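A quick brute-force check (Python; illustrative only, not from the source) of the statement above, enumerating the residues modulo 6:

```python
n = 6
# Brute-force the solutions of 2*x = 2 in Z/6Z.
solutions = [x for x in range(n) if (2 * x) % n == 2]
print(solutions)        # [1, 4]: two distinct solutions, so "2/2" has no single value

# 2 is a zero divisor modulo 6: 2 * 3 is congruent to 0.
print((2 * 3) % n)      # 0
```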
In field theory, the expression a/b is only shorthand for the formal expression ab⁻¹, where b⁻¹ is the multiplicative inverse of b. Since the field axioms only guarantee the existence of such inverses for nonzero elements, this expression has no meaning when b is zero. Modern texts that define fields as a special type of ring include the axiom 0 ≠ 1 for fields (or its equivalent) so that the zero ring is excluded from being a field. In the zero ring, division by zero is possible, which shows that the other field axioms are not sufficient to exclude division by zero in a field.
Computer arithmetic
Floating-point arithmetic
In computing, most numerical calculations are done with floating-point arithmetic, which since the 1980s has been standardized by the IEEE 754 specification. In IEEE floating-point arithmetic, numbers are represented using a sign (positive or negative), a fixed-precision significand and an integer exponent. Numbers whose exponent is too large to represent instead "overflow" to positive or negative infinity (+∞ or −∞), while numbers whose exponent is too small to represent instead "underflow" to positive or negative zero (+0 or −0). A NaN (not a number) value represents undefined results.
In IEEE arithmetic, division of 0/0 or ∞/∞ results in NaN, but otherwise division always produces a well-defined result. Dividing any non-zero number by positive zero (+0) results in an infinity of the same sign as the dividend. Dividing any non-zero number by negative zero (−0) results in an infinity of the opposite sign as the dividend. This definition preserves the sign of the result in case of arithmetic underflow.
For example, using single-precision IEEE arithmetic, if x = −2⁻¹⁴⁹, then x/2 underflows to −0, and dividing 1 by this result produces 1/(x/2) = −∞. The exact result −2¹⁵⁰ is too large to represent as a single-precision number, so an infinity of the same sign is used instead to indicate overflow.
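A hedged sketch of these rules (NumPy is used here because plain Python float division raises an exception rather than returning IEEE infinities; the library choice and variable names are assumptions, not from the source):

```python
import numpy as np

with np.errstate(divide="ignore", invalid="ignore", under="ignore"):
    # Quotient rules for zero divisors (double precision):
    print(np.float64(1.0) / np.float64(0.0))     # inf:  same sign as the dividend
    print(np.float64(1.0) / np.float64(-0.0))    # -inf: negative zero flips the sign
    print(np.float64(0.0) / np.float64(0.0))     # nan:  0/0 is undefined

    # The single-precision underflow example from the text:
    x = np.float32(-2.0 ** -149)                 # smallest-magnitude negative float32
    half = x / np.float32(2.0)                   # underflows to -0.0
    print(half, np.float32(1.0) / half)          # -0.0 -inf
```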
Integer arithmetic
Integer division by zero is usually handled differently from floating point since there is no integer representation for the result. CPUs differ in behavior: for instance x86 processors trigger a hardware exception, while PowerPC processors silently generate an incorrect result for the division and continue, and ARM processors can either cause a hardware exception or return zero. Because of this inconsistency between platforms, the C and C++ programming languages consider the result of dividing by zero undefined behavior. In typical higher-level programming languages, such as Python, an exception is raised for attempted division by zero, which can be handled in another part of the program.
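For example, in Python the built-in exception is ZeroDivisionError; the small wrapper below (an illustrative sketch, with the hypothetical name `safe_divide`) catches it and substitutes a sentinel value:

```python
def safe_divide(a, b):
    """Return a / b, or None when the division is undefined."""
    try:
        return a / b
    except ZeroDivisionError:
        # CPython raises this for both integer and float division by zero.
        return None

print(safe_divide(10, 2))   # 5.0
print(safe_divide(10, 0))   # None
```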
In proof assistants
Many proof assistants, such as Coq and Lean, define 1/0 = 0. This is due to the requirement that all functions are total. Such a definition does not create contradictions, as further manipulations (such as cancelling out) still require that the divisor is non-zero.
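A small Lean 4 sketch of this convention for natural numbers (the lemma name `Nat.div_zero` is assumed to be the one provided by the standard library):

```lean
-- In Lean 4, natural-number division is total: n / 0 is defined to be 0.
#eval (1 : Nat) / 0                      -- 0

-- The corresponding equality, via the standard-library lemma (name assumed).
example : (1 : Nat) / 0 = 0 := Nat.div_zero 1
```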
Historical accidents
On 21 September 1997, a division by zero error in the "Remote Data Base Manager" aboard USS Yorktown (CG-48) brought down all the machines on the network, causing the ship's propulsion system to fail.
| Mathematics | Basics | null |
185684 | https://en.wikipedia.org/wiki/Galleon | Galleon | Galleons were large, multi-decked sailing ships developed in Spain and Portugal and first used as armed cargo carriers by Europeans from the 16th to 18th centuries during the Age of Sail and were the principal vessels drafted for use as warships until the Anglo-Dutch Wars of the mid-17th century. Galleons generally carried three or more masts with a lateen fore-and-aft rig on the rear masts, were carvel built with a prominent squared off raised stern, and used square-rigged sail plans on their fore-mast and main-masts.
Such ships played a major role in commerce in the sixteenth and seventeenth centuries and were often drafted into use as auxiliary naval war vessels—indeed, they were the mainstay of contending fleets through most of the 150 years of the Age of Exploration—before the Anglo-Dutch wars made purpose-built warships dominant at sea during the remainder of the Age of Sail.
Terminology
The word galleon has had differing meanings at different points in its history and in different regions. The term is thought to originate from gallioni (alternatively galeanni), Venetian oared vessels that were used in rivers in the fifteenth century. The galleons of the sixteenth and seventeenth centuries were fully developed sailing ships. This descriptive name was used notably in Spain, Portugal and Venice. However, inconsistency can be found, for example, in the use of "galleon" by the notaries who worked in the Basque shipbuilding region of northern Spain. Though most of the ships from this region were naos, some were galeones, but the two terms can be found being used as if they were interchangeable by some of the writers of the documents in the contemporary archives.
It is thought that the seamen of the Basque country of northern Spain were clear on the differences between a nao and a galeón, but what those distinguishing features were is not apparent to modern historians. A hypothesis has been put forward that the differences are more in the underwater hull shape, something which cannot be discerned in contemporary illustrations.
The terminological inconsistency of Basque-built ships continues into the present day. Archival research on the Red Bay wreck 24M has identified, with reasonable confidence, this ship to have been San Juan of Pasajes. She is described 26 times in six different contemporary documents with at least three different authors as a nao, and not once as a galeón. However, published archaeological work repeatedly refers to this ship as a galleon.
Outside of the Iberian peninsula, the term "galleon" was not often used. For instance, though English shipwrights certainly built galleon-type vessels, they simply referred to them as "ships". In present-day usage, these types are referred to as galleons, with the term "race-built galleon" being applied to those with lower upper-works. In Holland, a "pinnas" was a galleon-type ship and in the Baltic, "kravel" was used (a term connected with their carvel construction).
History
During the 16th century, a lowering of the carrack's forecastle and elongation of the hull gave the ocean-going ships an unprecedented level of stability in the water, and reduced wind resistance at the front, leading to a faster, more maneuverable vessel. The galleon differed from the carrack and other older types primarily by being longer, lower and narrower, with a square tuck stern instead of a round tuck, and by having a snout or head projecting forward from the bows below the level of the forecastle. While carracks could be very large for the time, with some Portuguese carracks over 1,000 tons, galleons were generally smaller, usually under 500 tons although some Manila galleons were to reach a displacement of 2,000 tons. With the introduction of the galleon in Portuguese India Armadas during the first quarter of the 16th century, carracks' armament was reduced as they became almost exclusively cargo ships (which is why the Portuguese carracks were pushed to such large sizes), leaving any fighting to be done to the galleons. One of the largest and most famous of Portuguese galleons was the São João Baptista (nicknamed Botafogo, "Spitfire"), a 1,000-ton galleon built in 1534, said to have carried 366 guns. Friar Manuel Homem says that this galleon mounted 366 bronze pieces of artillery, including the ones that garrisoned the high castles of stern and bow.
Carracks were usually lightly armed and used for transporting cargo in all the fleets of other Western European states, while galleons were stronger, more heavily armed, and also cheaper to build for the same displacement (five galleons could cost around the same as three carracks) and were therefore a much better investment for use as heavily armed cargo ships or warships. Galleons' design changed and improved through the application of various innovations, and they were particularly linked with the military capabilities of the Atlantic sea powers. It was the captains of the Spanish navy, Pedro Menéndez de Avilés and Álvaro de Bazán, who designed the definitive long and relatively narrow hulled galleon in the 1550s.
The galleon was powered entirely by wind, using sails carried on three or four masts, with a lateen sail continuing to be used on the last (usually third and fourth) masts. They were used in both military and trade applications, most famously in the Spanish treasure fleet, and the Manila galleons. While carracks played the leading role in early global explorations, galleons also played a part in the 16th and 17th centuries. In fact, galleons were so versatile that a single vessel might be refitted for wartime and peacetime roles several times during its lifespan. The galleon was the prototype of all square-rigged ships with three or more masts for over two and a half centuries, including the later full-rigged ship.
The principal warships of the opposing English and Spanish fleets in the 1588 confrontation of the Spanish Armada and in the 1589 confrontation of the English Armada were galleons, with the modified English race-built galleons developed by John Hawkins proving their great utility in combat, while the capacious Spanish galleons, designed primarily as transports, showed great endurance in the battles and in the long and stormy return home.
Construction
Galleons were constructed from oak (for the keel), pine (for the masts) and various hardwoods for hull and decking. Hulls were usually carvel-built. The expenses involved in galleon construction were enormous. Hundreds of expert tradesmen (including carpenters, pitch-melters, blacksmiths, coopers, shipwrights, etc.) worked for months before a galleon was seaworthy. To cover the expense, galleons were often funded by groups of wealthy businessmen who pooled resources for a new ship. Therefore, most galleons were originally consigned for trade, although those captured by rival states were usually put into military service.
The most common gun used aboard a galleon was the demi-culverin, although gun sizes up to demi-cannon were possible.
Because of the long periods often spent at sea and the poor conditions on board, many of the crew perished during the voyage; advanced rigging systems were therefore developed so that the vessel could be sailed home by an active sailing crew a fraction of the size of the one aboard at departure.
Distinguishing features
The most distinguishing features of the galleon include the long, prominent beak or beakhead, followed by a foremast and mainmast both noticeably taller than the single or double lateen-rigged mizzenmasts with their sloped lateen-rig yards, and, below those, the square quarter gallery at the stern. Galleons carried three masts on average; in larger galleons a fourth mast was added, usually another lateen-rigged mizzen, called the bonaventure mizzen.
The oldest English drawings
The oldest known scale drawings in England are in a manuscript called "Fragments of Ancient Shipwrightry" made in about 1586 by Mathew Baker, a master shipwright. This manuscript, held at the Pepysian Library, Magdalene College, Cambridge, provides an authentic reference for the size and shape of typical English galleons built during this period. Based on these plans, the Science Museum, London has built a 1:48 scale model ship that is an exemplar of galleons of this era.
Notable galleons
Adler von Lübeck, the largest ship of its day when launched in 1566.
Dainty, ship with which Sir Richard Hawkins sought to emulate the circumnavigation voyage of his cousin Francis Drake. She was captured by the Spanish in the action of Atacames Bay in 1594 and served in the Spanish Navy in the South American Pacific for several years.
Revenge, a galleon built in 1577, the flagship of Sir Francis Drake in the Battle of the Spanish Armada in 1588, was captured by a Spanish fleet off Flores in the Azores in 1591 and sank while being sailed back to Spain.
Triumph, the largest Elizabethan galleon; flagship of Sir Martin Frobisher in the Battle of the Spanish Armada.
Galeon Adalucia, a replica galleon built in Spain in 2014.
Golden Hind, the ship in which Sir Francis Drake circumnavigated the globe 1577–1580.
"La Galga", the Assateague Spanish galleon that was shipwrecked in 1794; according to legend, the ancestors of the now famous Chincoteague ponies swam ashore from its hold.
Nuestra Señora de la Concepción, a Spanish galleon, known to her crew as Cacafuego for her strong cannon. She was captured by Sir Francis Drake in 1578 and all her treasures were brought to England. She was holding treasures mined in one year by the Spanish in the Americas.
Padre Eterno, a Portuguese galleon launched in 1663. She was considered to be the biggest ship of her time, carrying 144 pieces of artillery with a displacement up to 2,000 tons.
San Juan Bautista (originally called Date Maru, 伊達丸 in Japanese). She crossed the Pacific Ocean from Japan to New Spain in 1614. She was of the Spanish galleon type, known in Japan as Nanban-Sen (南蛮船).
San Salvador, flagship vessel in Juan Rodríguez Cabrillo's 1542 exploration of present-day California in the United States.
Santa Luzia, a Portuguese galleon known for defeating a Dutch squadron single-handedly twice in 1650.
Santa Teresa, a Portuguese galleon, the flagship of Admiral Lope de Hoces at the Battle of the Downs, in 1639.
São João Baptista, nicknamed Botafogo, the most powerful warship in the world at the time when launched (1534) by the Portuguese; became famous during the Conquest of Tunis (1535), where it was commanded by Luís of Portugal, Duke of Beja.
São Martinho, a Portuguese galleon, the flagship of Duke of Medina Sidonia, commander-in-chief of the Spanish Armada.
Vasa, the only original galleon to be preserved. She sank in 1628 and was raised in 1961 for preservation as a museum ship.
Ark Raleigh was designed and built by Sir Walter Raleigh. She was later chosen by Lord Howard, admiral of the fleet to be the flagship of the English fleet in the fight against the Spanish Armada in 1588 and was summarily renamed Ark Royal.
San Pelayo, the large 906-ton galleon, which served as the flagship of Pedro Menéndez de Avilés during his expedition to establish St. Augustine, Florida in 1565. The vessel was so large it could not enter St. Augustine's harbor, so Menendez ordered it offloaded and sent it back to Hispaniola. At a later date her crew mutinied and sailed to Europe where the ship wrecked off the coast of Denmark.
The Manila galleons, Spanish trading ships that sailed once or twice per year across the Pacific Ocean between Manila in the Philippines and Acapulco in New Spain (now Mexico); (1565–1815).
| Technology | Naval warfare | null |
185763 | https://en.wikipedia.org/wiki/Norma%20%28constellation%29 | Norma (constellation) | Norma is a small constellation in the Southern Celestial Hemisphere between Ara and Lupus, one of twelve drawn up in the 18th century by French astronomer Nicolas-Louis de Lacaille and one of several depicting scientific instruments. Its name is Latin for normal, referring to a right angle, and is variously considered to represent a rule, a carpenter's square, a set square or a level. It remains one of the 88 modern constellations.
Four of Norma's brighter stars—Gamma, Delta, Epsilon and Eta—make up a square in the field of faint stars. Gamma2 Normae is the brightest star with an apparent magnitude of 4.0. Mu Normae is one of the most luminous stars known, with a luminosity between a quarter million and one million times that of the Sun. Four star systems are known to harbour planets. The Milky Way, particularly the Norma Arm of the galaxy, passes through Norma, and the constellation contains eight open clusters visible to observers with binoculars. The constellation also hosts Abell 3627, also called the Norma Cluster, one of the most massive galaxy clusters known.
History
Norma was introduced in 1751–52 by Nicolas-Louis de Lacaille with the French name l’Équerre et la Règle, "the Square and Rule", after he had observed and catalogued 10,000 southern stars during a two-year stay at the Cape of Good Hope. He devised 14 new constellations in uncharted regions of the Southern Celestial Hemisphere not visible from Europe. All but one honoured instruments that symbolised the Age of Enlightenment. Lacaille portrayed the constellations of Norma, Circinus and Triangulum Australe, respectively, as a set square and ruler, a compass, and a surveyor's level in a set of draughtsman instruments, in his 1756 map of the southern stars. The level was dangling from the apex of a triangle, leading some astronomers to conclude he was renaming l’Équerre et la Règle to "le Niveau", "the level". In any case, the constellation's name had been shortened and Latinised by Lacaille to Norma by 1763.
Characteristics
Norma is bordered by Scorpius to the north, Lupus to the northwest, Circinus to the west, Triangulum Australe to the south and Ara to the east. Covering 165.3 square degrees and 0.401% of the night sky, it ranks 74th of the 88 constellations in size. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "Nor". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of ten segments. In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between −42.27° and −60.44°. The whole constellation is visible to observers south of latitude 29°N.
Features
Stars
Lacaille charted and designated ten stars with the Bayer designations Alpha through to Mu in 1756; however, his Alpha Normae was transferred into Scorpius and left unnamed by Francis Baily, before being named N Scorpii by Benjamin Apthorp Gould, who felt its brightness warranted recognition. Though Beta Normae was depicted on his star chart, it was inadvertently left out of Lacaille's 1763 catalogue; it was likewise transferred to Scorpius by Baily and named H Scorpii by Gould. Norma's brightest star, Gamma2 Normae, is only of magnitude 4.0. Overall, there are 44 stars within the constellation's borders brighter than or equal to apparent magnitude 6.5.
The four main stars—Gamma, Delta, Epsilon and Eta—make up a square in this region of faint stars. Gamma1 and Gamma2 Normae are an optical double, and not a true binary star system. Located 129 ± 1 light-years away from Earth, Gamma2 Normae is a yellow giant of spectral type G8III around 2 to 2.5 times as massive as the Sun. It has swollen to a diameter 10 times that of the Sun and shines with 45 times the Sun's luminosity. It also is half of a close optical double, with a magnitude 10 companion star related by line of sight only. Gamma1 Normae is a yellow-white supergiant, located much further away at around 1500 light-years from Earth. Epsilon Normae is a spectroscopic binary, with two blue-white main sequence stars of almost equal mass and spectral type (B3V) orbiting each other every 3.26 days. There is a third star separated by 22 arcseconds, which has a magnitude of 7.5 and is likely a smaller B-type main sequence star of spectral type B9V. The system is 530 ± 20 light-years distant from Earth. Eta Normae is a yellow giant of spectral type G8III with an apparent magnitude of 4.65. It shines with a luminosity approximately 66 times that of the Sun.
Iota1 Normae is a multiple star system. The AB (mag 5.2 and 5.76) pair orbit each other with a period of 26.9 years; they are 2.77 and 2.71 times as massive as the Sun respectively. The pair are 128 ± 6 light-years distant from Earth. A third component is a yellow main sequence star of spectral type G8V with an apparent magnitude of 8.02.
Mu Normae is a remote blue supergiant of spectral type O9.7Iab, one of the most luminous stars known, but is partially obscured by distance and cosmic dust. Uncertainties regarding its distance leave open the possibility that Mu Normae could be between 250,000 and one million times as luminous and up to 60 times as massive as the Sun, though it is more likely to have around 500,000 times the Sun's luminosity and 40 times its mass. It is suspected of being an Alpha Cygni variable, with a magnitude range of 4.87–4.98. QU Normae is another hot blue-white star that is a variable, ranging from magnitude 5.27 to 5.41 over 4.8 days. Lying near Eta Normae is R Normae, a Mira variable. Its visual magnitude range is 6.5–13.9 and its average period is 507.5 days. Located halfway between Eta Normae and Gamma Circini is T Normae, another Mira variable. It ranges from magnitude 6.2 to 13.6, with a period of 244 days. S Normae is a well-known Cepheid variable with a magnitude range of 6.12–6.77 and a period of 9.75411 days. It is located at the centre of the open cluster NGC 6087. It is a yellow-white supergiant of spectral type F8-G0Ib that is 6.3 times as massive as the Sun. A binary, it has a 2.4 solar mass () companion that is a blue-white main sequence star of spectral type B9.5V. A binary system composed of two wolf-rayet stars, colloquially called Apep, has been identified as a possible progenitor of a long gamma-ray burst. Located around 8000 light-years distant, it would be the first such in the Milky Way.
IM Normae is one of only ten recurrent novae known in the Milky Way. It erupted in 1920 and 2002, reaching magnitude 8.5 from a baseline of 18.3. It was poorly monitored after the first eruption, so it may also have erupted in the intervening years. Norma hosts two faint R Coronae Borealis variable stars of magnitude 10—RT Normae and RZ Normae—rare degenerate stars, thought to have formed from the merger of two white dwarfs, that periodically fade by several magnitudes as they eject large amounts of carbon dust. A faint object of magnitude 16, QV Normae is a high-mass X-ray binary star system 15,000–20,000 light-years distant from Earth. It is composed of a neutron star orbiting a blue-white supergiant approximately 20 times as massive as the Sun. The stellar wind from the more massive star is drawn to the magnetic poles of the neutron star, forming an accretion column and producing X-rays. Located 19,000 light-years away, QX Normae is an active low-mass X-ray binary composed of a neutron star and a companion star that is smaller and cooler than the Sun. The neutron star is 1.74 ± 0.14 times as massive as the Sun, yet its radius is a mere 9.3 ± 1.0 km. 1E161348-5055 is a neutron star found at the centre of the supernova remnant RCW 103. A periodic X-ray source with a period of 6.67 hours, it is approximately 2000 years old and 10,000 light-years away from Earth. It is unusual in that it is spinning much too slowly for its young age, behaving instead like a multi-million-year-old star. SGR J1550-5418 is a soft gamma repeater (SGR)—a magnetar that is emitting gamma-ray flares, located some 30,000 light-years distant from Earth. The rotation period, of approximately 2.07 seconds, is the fastest yet observed for a magnetar. XTE J1550-564 is another X-ray binary, this time composed of a large black hole around 10 times as massive as the Sun and a cool orange donor star. The black hole is a microquasar, firing off jets of material most likely from its accretion disk.
Exoplanets
Four star systems are known to harbour planets. HD 330075 is a sunlike star around 164 light-years distant that is orbited by a hot Jupiter every 3.4 days. Announced in 2004, the planet was the first discovered by the HARPS spectrograph. HD 148156 is a star 168 ± 7 light-years distant. Slightly larger and hotter than the Sun, it was found to have a roughly Jupiter-sized planet with an orbital period of 2.8 years. HD 143361 is a binary star system composed of a sunlike star and a faint red dwarf separated by 30.9 AU. A planet roughly triple the mass of Jupiter orbits the brighter star every 1057 ± 20 days. HD 142415 is approximately 113 light-years distant and has a Jupiter-sized planet with an orbital period of around 386 days.
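As a rough illustration of how these orbital periods relate to orbital distances, the sketch below applies Kepler's third law under the simplifying assumption that each host star has about one solar mass; the periods come from the text above, the one-solar-mass assumption is ours, and the planet designations simply follow the usual "b" convention for a star's first known planet.

    # Kepler's third law in solar units: a^3 = M * P^2, with a in AU,
    # P in years and M in solar masses. Stellar masses are assumed to be
    # 1 solar mass for illustration; the periods are those quoted above.
    periods_days = {
        "HD 330075 b": 3.4,
        "HD 148156 b": 2.8 * 365.25,
        "HD 143361 b": 1057.0,
        "HD 142415 b": 386.0,
    }

    for planet, p_days in periods_days.items():
        p_years = p_days / 365.25
        a_au = (1.0 * p_years ** 2) ** (1.0 / 3.0)  # semi-major axis in AU
        print(f"{planet}: P = {p_years:.3f} yr, a = {a_au:.2f} AU")

Under these assumptions the hot Jupiter around HD 330075 orbits at only a few hundredths of an AU from its star, while the longer-period planets orbit at roughly one to two AU.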
Deep-sky objects
Due to its location on the Milky Way, this constellation contains many deep-sky objects such as star clusters, including eight open clusters visible through binoculars. NGC 6087 is the brightest of the open clusters in Norma with a magnitude of 5.4. It lies in the southeastern corner of the constellation between Alpha Centauri and Zeta Arae. Thought to be around 100 million years old, it is about 3300 light-years away and is around 14 light-years in diameter. Its brightest member is the Cepheid variable S Normae. A rich background star field makes it less distinct, though around 36 member stars are visible through a 10 cm telescope at 150x magnification. Located 0.4° north of Kappa Normae is NGC 6067, which has an integrated magnitude of 5.6, though it is indistinct as it lies in a rich star field. It is thought to be around 102 million years old and to contain 891 solar masses. Two Cepheid variables—QZ Normae and V340 Normae—have been identified as members of the cluster. Fainter open clusters include NGC 6134, with a combined magnitude of 7.2 and located 4000 light-years away from Earth, the spread-out NGC 6167 of magnitude 6.7, NGC 6115 near Gamma Normae, NGC 6031 and NGC 5999.
Located around 4900 light-years distant is Shapley 1 (or PK 329+02.1), a planetary nebula better known as the Fine-Ring Nebula. Although it appears ring-shaped, it is thought to actually be cylindrical and oriented directly towards Earth. Around 8700 years old, it lies about five degrees west-northwest of Gamma1 Normae. Its integrated magnitude is 13.6 and its mean surface brightness is 13.9. The central star is a white dwarf of magnitude 14.03. Mz 1 is a bipolar planetary nebula, thought to be an hourglass shape tilted at an angle to observers on Earth, some 3500 light-years distant. Mz 3—known as the Ant Nebula as it resembles an ant—has a complex appearance, with at least four outflow jets and two large lobes visible.
Approximately 200 million light-years from Earth with a redshift of 0.016 is Abell 3627; also called the Norma Cluster, it is one of the most massive galaxy clusters known to exist, at ten times the average cluster mass. Abell 3627 is thus theorized to be the Great Attractor, a massive object that is pulling the Local Group, the Virgo Supercluster, and the Hydra–Centaurus Supercluster towards its location at 600–1000 kilometres per second.
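The quoted distance can be checked roughly against the redshift using the low-redshift Hubble relation; the sketch below is a back-of-the-envelope illustration only, and the Hubble constant value of about 70 km/s/Mpc is an assumption on our part rather than a figure given in the text.

    # Distance from redshift via Hubble's law: v = c*z, d = v / H0.
    # H0 is assumed to be ~70 km/s/Mpc purely for illustration.
    c_km_s = 299_792.458        # speed of light, km/s
    H0 = 70.0                   # assumed Hubble constant, km/s per Mpc
    z = 0.016                   # redshift of Abell 3627 quoted above

    v = c_km_s * z              # recession velocity, km/s
    d_mpc = v / H0              # distance in megaparsecs
    d_mly = d_mpc * 3.2616      # 1 Mpc is about 3.26 million light-years
    print(f"v = {v:.0f} km/s, d = {d_mpc:.0f} Mpc, about {d_mly:.0f} million light-years")

This gives roughly 220 million light-years, broadly consistent with the approximately 200 million light-years quoted above.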
Meteor shower
The relatively weak meteor shower Gamma Normids (GNO), which is typically active from March 7 to 23, peaking on March 15, has its radiant near Gamma2 Normae.
Galactic Arm
The Norma Arm is a minor spiral arm of the Milky Way, named after Norma because it lies in the background of the constellation as seen from Earth.
| Physical sciences | Other | Astronomy |
185775 | https://en.wikipedia.org/wiki/Crucible | Crucible | A crucible is a container in which metals or other substances may be melted or subjected to very high temperatures. Although crucibles have historically tended to be made out of clay, they can be made from any material that withstands temperatures high enough to melt or otherwise alter its contents.
History
Typology and chronology
The form of the crucible has varied through time, with designs reflecting the process for which they are used, as well as regional variation. The earliest crucible forms derive from the sixth/fifth millennium B.C. in Eastern Europe and Iran.
Chalcolithic
Crucibles used for copper smelting were generally wide, shallow vessels made from clay that lacks refractory properties, similar to the types of clay used in other ceramics of the time. During the Chalcolithic period, crucibles were heated from the top by using blowpipes. Ceramic crucibles from this time had slight modifications to their designs, such as handles, knobs or pouring spouts, allowing them to be more easily handled and poured. Early examples of this practice can be seen in Feinan, Jordan. These crucibles have added handles to allow for better manipulation; however, due to the poor preservation of the crucibles there is no evidence of a pouring spout. The main purpose of the crucible during this period was to keep the ore in the area where the heat was concentrated, to separate it from impurities before shaping.
A crucible furnace dating to 2300–1900 BC for bronze casting has been found at a religious precinct of Kerma.
Iron Age
The use of crucibles in the Iron Age remained very similar to that of the Bronze Age, with copper and tin smelting continuing to be used to produce bronze, and the crucible designs remained the same as those of the Bronze Age.
The Roman period shows technical innovations, with crucibles for new methods used to produce new alloys. The smelting and melting process also changed with both the heating technique and the crucible design. The crucible changed into rounded or pointed bottom vessels with a more conical shape; these were heated from below, unlike prehistoric types which were irregular in shape and were heated from above. These designs gave greater stability within the charcoal. These crucibles in some cases have thinner walls and have more refractory properties.
During the Roman period a new process of metalworking started, cementation, used in the production of brass. This process involves the combination of a metal and a gas to produce an alloy. Brass is made by mixing solid copper metal with zinc oxide or carbonate, which comes in the form of calamine or smithsonite. This is heated to about 900 °C; the zinc oxide vaporizes into a gas, and the zinc gas bonds with the molten copper. This reaction has to take place in a part-closed or closed container, otherwise the zinc vapor would escape before it could react with the copper. Cementation crucibles, therefore, have a lid or cap which limits the amount of gas loss from the crucible. The crucible design is similar to that of the smelting and melting crucibles of the period and uses the same materials. The conical shape and small mouth allowed the lid to be added. Such small crucibles are seen in Colonia Ulpia Trajana (modern-day Xanten), Germany, where the crucibles are around 4 cm in size, though these are small examples. There are examples of larger vessels, such as cooking pots and amphorae, being used for cementation to process larger amounts of brass; since the reaction takes place at relatively low temperatures, lower-fired ceramics could be used. The ceramic vessels used are important, as the vessel must be able to lose gas through the walls, otherwise the pressure would break the vessel. Cementation vessels were mass-produced, because the crucibles had to be broken open to remove the brass once the reaction had finished, as in most cases the lid would have baked hard to the vessel or the brass might have adhered to the vessel walls.
Medieval period
Smelting and melting of copper and its alloys such as leaded bronze was done in crucibles similar to those of the Roman period which have thinner walls and flat bases to sit within the furnaces. The technology for this type of smelting started to change at the end of the Medieval period with the introduction of new tempering material for the ceramic crucibles. Some of these copper alloy crucibles were used in the making of bells. Bell foundry crucibles had to be larger at about 60 cm. These later medieval crucibles were a more mass-produced product.
The cementation process, which had been lost from the end of the Roman period until the early Medieval period, then continued in the same way for brass. Brass production increased during the medieval period due to a better understanding of the technology behind it. Furthermore, the process for carrying out cementation for brass did not change greatly until the 19th century.
However, during this period a vast and highly important technological innovation happened using the cementation process: the production of crucible steel. Steel production using iron and carbon works in a similar way to brass production, with the iron metal being mixed with carbon to produce steel. The first examples of cementation steel are wootz steel from India, where the crucibles were filled with good-quality low-carbon wrought iron and carbon in the form of organics such as leaves, wood, etc.; however, no charcoal was used within the crucible. These early crucibles would only produce a small amount of steel, as they would have to be broken once the process had finished.
By the late Medieval period, steel production had moved from India to modern-day Uzbekistan where new materials were being used in the production of steel crucibles, for example, Mullite crucibles were introduced. These were sandy clay crucibles which had been formed around a fabric tube. These crucibles were used in the same way as other cementation vessels but with a hole in the top of the vessel to allow pressure to escape.
Post-Medieval
At the end of the Medieval Era and into the Post-Medieval Era, new types of crucible designs and processes started. Smelting and melting crucible types became more limited in design and were produced by a few specialists. The main types used during the Post-Medieval period were the Hessian crucibles, which were made in the Hesse region in Germany. These are triangular vessels made on a wheel or within a mold using high-alumina clay and tempered with pure quartz sand. Furthermore, another specialized crucible made at the same time was the graphite crucible from southern Germany. These had a very similar design to that of the triangular crucibles from Hesse, but they also occur in conical forms. These crucibles were traded all across Europe and the New World.
The refining of methods during the Medieval and Post-Medieval periods led to the invention of the cupel, which resembles a small egg cup and is made of ceramic or bone ash, used to separate base metals from noble metals. This process is known as cupellation. Cupellation started long before the Post-Medieval period; however, the first vessels made specifically to carry out this process date to the 16th century. Another vessel used for the same process is a scorifier, which is similar to a cupel but slightly larger, and which removes the lead and leaves the noble metals behind. Cupels and scorifiers were mass-produced, as after each reduction the vessels would have absorbed all of the lead and become fully saturated. These vessels were also used in the process of metallurgical assay, where the noble metals are removed from a coin or a weight of metal to determine the amount of the noble metals within the object.
Modern-day uses
Crucibles are used in the laboratory to contain chemical compounds when they are heated to extremely high temperatures. Crucibles are available in several sizes and typically come with a correspondingly-sized lid. When heated over a flame, the crucible is often held inside a pipeclay triangle which itself is held on top of a tripod.
Crucibles and their covers are made of heat-resistant materials, usually porcelain, alumina or an inert metal. One of the earliest uses of platinum was to make crucibles. Ceramics such as alumina, zirconia, and especially magnesia will tolerate the highest temperatures. However, chemical reactions with the material in the crucible must be kept in mind; the emergence of melting point-lowering eutectic systems is an especially important consideration. More recently, metals such as nickel and zirconium have been used. The lids are typically loose-fitting in order to allow gases to escape during the heating of a sample inside. Crucibles and their lids can come in high form and low form shapes and in various sizes, but rather small 10 to 15 ml size porcelain crucibles are commonly used for gravimetric chemical analysis. These smaller crucibles and their covers made of porcelain are quite cheap when sold in large quantities to laboratories, and the crucibles are sometimes disposed of after use in precise quantitative chemical analysis. There is usually a large mark-up when they are sold individually in hobby shops.
In the area of chemical analysis, crucibles are used in quantitative gravimetric chemical analysis (analysis by measuring mass of an analyte or its derivative). A common use of crucibles is as follows. A residue or precipitate in a chemical analysis method can be collected or filtered from some sample or solution on special "ashless" filter paper. The crucible and lid to be used are pre-weighed very accurately on an analytical balance. After some possible washing and/or pre-drying of this collected residue, the residue on the filter paper can be placed in the crucible and fired (heated at very high temperature) until all the volatiles and moisture are driven out of the sample residue in the crucible. The "ashless" filter paper is completely burned up in this process. The crucible with the sample and lid is allowed to cool in a desiccator. The crucible and lid with the sample inside are weighed very accurately again only after they have completely cooled to room temperature (a higher temperature would cause air currents around the balance, giving inaccurate results). The mass of the empty, pre-weighed crucible and lid is subtracted from this result to yield the mass of the completely dried residue in the crucible.
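The arithmetic behind this procedure is simply a difference of weighings. The sketch below is a minimal illustration of that final step; all of the masses are invented for the example rather than taken from any real analysis.

    # Minimal sketch of the gravimetric arithmetic described above.
    # All numbers are invented for illustration only.
    mass_crucible_lid_empty = 23.4412   # g, pre-fired to constant mass and pre-weighed
    mass_after_firing = 23.6178         # g, crucible + lid + dried residue, fully cooled

    residue_mass = mass_after_firing - mass_crucible_lid_empty
    print(f"Mass of dried residue: {residue_mass:.4f} g")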
A crucible with a bottom perforated with small holes which are designed specifically for use in filtration, especially for gravimetric analysis as just described, is called a Gooch crucible after its inventor, Frank Austin Gooch.
For completely accurate results, the crucible is handled with clean tongs because fingerprints can add a weighable mass to the crucible. Porcelain crucibles are hygroscopic, i.e., they absorb a bit of weighable moisture from the air. For this reason, the porcelain crucible and lid are also pre-fired (pre-heated to high temperature) to constant mass before the pre-weighing. This determines the mass of the completely dry crucible and lid. At least two firings, coolings, and weighings resulting in exactly the same mass are needed to confirm the constant (completely dry) mass of the crucible and lid, and similarly again for the crucible, lid, and sample residue inside. Since the mass of every crucible and lid is different, the pre-firing/pre-weighing must be done for every new crucible/lid used. The desiccator contains desiccant to absorb moisture from the air inside, so the air inside will be completely dry.
| Technology | Metallurgy | null |
185848 | https://en.wikipedia.org/wiki/Propranolol | Propranolol | Propranolol is a medication of the beta blocker class. It is used to treat high blood pressure, some types of irregular heart rate, thyrotoxicosis, capillary hemangiomas, akathisia, performance anxiety, and essential tremors, as well as to prevent migraine headaches, and to prevent further heart problems in those with angina or previous heart attacks. It can be taken orally or by intravenous injection. The formulation that is taken orally comes in short-acting and long-acting versions. Propranolol appears in the blood after 30 minutes and has a maximum effect between 60 and 90 minutes when taken orally.
Common side effects include nausea, abdominal pain, and constipation. It may worsen the symptoms of asthma. Propranolol may cause harmful effects for the baby if taken during pregnancy; however, its use during breastfeeding is generally considered to be safe. It is a non-selective beta blocker which works by blocking β-adrenergic receptors.
Propranolol was patented in 1962 and approved for medical use in 1964. It is on the World Health Organization's List of Essential Medicines. Propranolol is available as a generic medication. In 2022, it was the 77th most commonly prescribed medication in the United States, with more than 8 million prescriptions.
Medical uses
Propranolol is used for treating various conditions, including:
Cardiovascular
Hypertension
Angina pectoris (with the exception of variant angina)
Myocardial infarction
Tachycardia (and other sympathetic nervous system symptoms, such as muscle tremor) associated with various conditions, including anxiety, panic, hyperthyroidism, and lithium therapy
Portal hypertension, to lower portal vein pressure
Prevention of esophageal variceal bleeding and ascites
Anxiety
Hypertrophic cardiomyopathy
While once a first-line treatment for hypertension, the role of beta blockers was downgraded in June 2006 in the United Kingdom to fourth-line, as they do not perform as well as other drugs, particularly in the elderly, and evidence is increasing that the most frequently used beta blockers at usual doses carry an unacceptable risk of provoking type 2 diabetes.
Propranolol is not recommended for the treatment of high blood pressure by the Eighth Joint National Committee (JNC 8) because a higher rate of the primary composite outcome of cardiovascular death, myocardial infarction, or stroke compared to an angiotensin receptor blocker was noted in one study.
Psychiatric
Propranolol is occasionally used to treat performance anxiety, although evidence to support its use in any anxiety disorder is poor. Its efficacy in managing panic disorder appears similar to benzodiazepines, while carrying lower risks for addiction or abuse. Although beta-blockers such as propranolol have been suggested to be beneficial in managing physical symptoms of anxiety, their efficacy in treating generalized anxiety disorder and panic disorder remains unestablished. Some experimentation has been conducted in other psychiatric areas:
Post-traumatic stress disorder (PTSD) and specific phobias
Aggressive behavior of patients with brain injuries
Treating the excessive drinking of fluids in psychogenic polydipsia
PTSD and phobias
Propranolol is being investigated as a potential treatment for PTSD. Propranolol works to inhibit the actions of norepinephrine (noradrenaline), a neurotransmitter that enhances memory consolidation. In one small study, individuals given propranolol immediately after trauma experienced fewer stress-related symptoms and lower rates of PTSD than respective control groups who did not receive the drug. Because memories and their emotional content are reconsolidated in the hours after they are recalled or re-experienced, propranolol can also diminish the emotional impact of already formed memories; for this reason, it is also being studied in the treatment of specific phobias, such as arachnophobia, dental fear, and social phobia. It has also been found to be helpful for some individuals with misophonia.
Ethical and legal questions have been raised surrounding the use of propranolol-based medications for use as a "memory damper", including altering memory-recalled evidence during an investigation, modifying the behavioral response to past (albeit traumatic) experiences, the regulation of these drugs, and others. However, Hall and Carter have argued that many such objections are "based on wildly exaggerated and unrealistic scenarios that ignore the limited action of propranolol in affecting memory, underplay the debilitating impact that PTSD has on those who suffer from it, and fail to acknowledge the extent to which drugs like alcohol are already used for this purpose".
Other uses
Essential tremor; evidence for its use in akathisia, however, is insufficient
Migraine and cluster headache prevention and in primary exertional headache
Hyperhidrosis (excessive sweating)
Infantile hemangioma
Glaucoma
Thyrotoxicosis by deiodinase inhibition
Propranolol may be used to treat severe infantile hemangiomas (IHs). This treatment shows promise as being superior to corticosteroids when treating IHs. Extensive clinical case evidence and a small controlled trial support its efficacy.
Contraindications
Propranolol may be contraindicated in people with:
Reversible airway diseases, particularly asthma or chronic obstructive pulmonary disease (COPD)
Slow heart rate (bradycardia) (<60 beats/minute)
Sick sinus syndrome
Atrioventricular block (second- or third-degree)
Shock
Severe low blood pressure
Adverse effects
Propranolol should be used with caution in people with:
Diabetes mellitus or hyperthyroidism, since signs and symptoms of hypoglycemia may be masked
Peripheral artery disease and Raynaud syndrome, which may be exacerbated
Phaeochromocytoma, as hypertension may be aggravated without prior alpha blocker therapy
Myasthenia gravis, which may be worsened
Other drugs with bradycardic effects
Pregnancy and lactation
Propranolol, like other beta-blockers, is classified as pregnancy category C in the United States and ADEC category C in Australia. β-blocking agents in general reduce perfusion of the placenta, which may lead to adverse outcomes for the neonate, including lung or heart complications, or premature birth. The newborn may experience additional adverse effects such as low blood sugar and a slower than normal heart rate.
Most β-blocking agents appear in the milk of lactating women. However, propranolol is highly bound to proteins in the bloodstream and is distributed into breast milk at very low levels. These low levels are not expected to pose any risk to the breastfeeding infant, and the American Academy of Pediatrics considers propranolol therapy "generally compatible with breastfeeding."
Overdose
In overdose, propranolol is associated with seizures. Cardiac arrest may occur in propranolol overdose due to sudden ventricular arrhythmias, or cardiogenic shock which may ultimately culminate in bradycardic PEA.
Interactions
Since beta blockers are known to relax the cardiac muscle and constrict the smooth muscle, beta-adrenergic antagonists, including propranolol, have an additive effect with other drugs that decrease blood pressure or decrease cardiac contractility or conductivity. Clinically significant interactions particularly occur with:
Verapamil
Epinephrine (adrenaline)
β2-adrenergic receptor agonists
Salbutamol (albuterol), levosalbutamol, formoterol, salmeterol, clenbuterol, others
Clonidine
Ergot alkaloids
Isoprenaline (isoproterenol)
Nonsteroidal anti-inflammatory drugs (NSAIDs)
Quinidine
Cimetidine
Lidocaine
Phenobarbital
Rifampicin
Fluvoxamine (slows down the metabolism of propranolol significantly, leading to increased blood levels of propranolol)
Pharmacology
Pharmacodynamics
Propranolol is classified as a competitive non-cardioselective sympatholytic beta blocker that crosses the blood–brain barrier. It is lipid soluble and also has sodium channel-blocking effects. Propranolol is a non-selective β-adrenergic receptor antagonist, or beta blocker; that is, it blocks the action of epinephrine (adrenaline) and norepinephrine (noradrenaline) at both β1- and β2-adrenergic receptors. It has little intrinsic sympathomimetic activity, but has strong membrane stabilizing activity (only at high blood concentrations, e.g. overdose). Propranolol can cross the blood-brain barrier and exert effects in the central nervous system in addition to its peripheral activity.
In addition to blockade of adrenergic receptors, propranolol has very weak inhibitory effects on the norepinephrine transporter and/or weakly stimulates norepinephrine release (i.e., the concentration of norepinephrine is increased in the synapse). Since propranolol blocks β-adrenoceptors, the increase in synaptic norepinephrine only results in α-adrenoceptor activation, with the α1-adrenoceptor being particularly important for effects observed in animal models. Therefore, it can be looked upon as a weak indirect α1-adrenoceptor agonist in addition to being a potent β-adrenoceptor antagonist. In addition to its effects on the adrenergic system, there is evidence indicating that propranolol may act as a weak antagonist of certain serotonin receptors, namely the 5-HT1A, 5-HT1B, and 5-HT2B receptors. The latter may be involved in the effectiveness of propranolol in the treatment of migraine at high doses.
Both enantiomers of propranolol have a local anesthetic (topical) effect, which is normally mediated by blockade of voltage-gated sodium channels. Studies have demonstrated propranolol's ability to block cardiac, neuronal, and skeletal voltage-gated sodium channels, accounting for its known membrane stabilizing effect and antiarrhythmic and other central nervous system effects.
Mechanism of action
Propranolol is a non-selective beta receptor antagonist, meaning that it has no preference for β1 or β2 receptors. It competes with sympathomimetic neurotransmitters for binding to receptors, which inhibits sympathetic stimulation of the heart. Blockage of neurotransmitter binding to β1 receptors on cardiac myocytes inhibits activation of adenylate cyclase, which in turn inhibits cAMP synthesis, leading to reduced protein kinase A (PKA) activation. This results in less calcium influx to cardiac myocytes through voltage-gated L-type calcium channels, meaning there is a decreased sympathetic effect on cardiac cells, resulting in antihypertensive effects including reduced heart rate and lower arterial blood pressure. Blockage of neurotransmitter binding to β2 receptors on smooth muscle cells promotes contraction, which can raise blood pressure.
Pharmacokinetics
Propranolol is rapidly and completely absorbed, with peak plasma levels achieved about 1–3 hours after ingestion. More than 90% of the drug is found bound to plasma protein in the blood. Coadministration with food appears to enhance bioavailability. Despite complete absorption, propranolol has a variable bioavailability due to extensive first-pass metabolism. Hepatic impairment therefore increases its bioavailability. Propranolol can be absorbed along the whole intestine, with the main absorption site being the colon, which means people who have lost their colon to surgery may absorb a relatively smaller proportion of the drug. The main metabolite 4-hydroxypropranolol, with a longer half-life (5.2–7.5 hours) than the parent compound (3–4 hours), is also pharmacologically active. Most of the metabolites are excreted in the urine.
Propranolol is a highly lipophilic drug, achieving high concentrations in the brain. The duration of action of a single oral dose is longer than the half-life and may be up to 12 hours if the single dose is high enough (e.g., 80 mg). Effective plasma concentrations are between 10 and 100 µg/L. Toxic levels are associated with plasma concentrations above 2000 µg/L.
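As a rough illustration of what the 3–4 hour half-life quoted above implies, the sketch below applies simple first-order (exponential) elimination to an arbitrary starting concentration. The single-compartment assumption and the starting value are ours, not the source's; real kinetics are complicated by first-pass metabolism and the active metabolite noted above.

    # First-order elimination using the 3-4 hour half-life quoted above.
    # Single-compartment kinetics and the starting value are assumptions.
    import math

    half_life_h = 3.5                  # midpoint of the 3-4 hour range
    k = math.log(2) / half_life_h      # elimination rate constant, per hour
    c0 = 100.0                         # arbitrary starting concentration (relative units)

    for t in (0, 2, 4, 8, 12):
        c = c0 * math.exp(-k * t)
        print(f"t = {t:2d} h: {c:5.1f}% of the initial concentration remains")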
History
Scottish scientist James W. Black developed propranolol in the 1960s. It was the first beta-blocker effectively used in the treatment of coronary artery disease and hypertension.
Newer, more cardio-selective beta blockers (such as bisoprolol, nebivolol, carvedilol, or metoprolol) are used preferentially in the treatment of hypertension.
Society and culture
In a 1987 study by the International Conference of Symphony and Opera Musicians, it was reported that 27% of interviewed members said they used beta blockers such as propranolol for musical performances. For about 10–16% of performers, their degree of stage fright is considered pathological. Propranolol is used by musicians, actors, and public speakers for its ability to treat anxiety symptoms activated by the sympathetic nervous system. It has also been used as a performance-enhancing drug in sports where high accuracy is required, including archery, shooting, golf, and snooker. In the 2008 Summer Olympics, 50-metre pistol silver medalist and 10-metre air pistol bronze medalist Kim Jong-su tested positive for propranolol and was stripped of his medals.
Brand names
Propranolol was first marketed under the brand name Inderal, manufactured by ICI Pharmaceuticals (now AstraZeneca), in 1965. "Inderal" is a quasi-anagram of "Alderlin", the trade name of pronethalol (which propranolol replaced); both names are an homage to Alderley Park, the ICI headquarters where the drugs were first developed.
Propranolol is also marketed under brand names Avlocardyl, Deralin, Dociton, Inderalici, InnoPran XL, Indoblok, Sumial, Anaprilin, and Bedranol SR (Sandoz). In India, it is marketed under brand names such as Ciplar and Ciplar LA by Cipla. Hemangeol, a 4.28 mg/mL solution of propranolol, is indicated for the treatment of proliferating infantile hemangioma.
| Biology and health sciences | Specific drugs | Health |
185868 | https://en.wikipedia.org/wiki/Wireless | Wireless | Wireless communication (or just wireless, when the context allows) is the transfer of information (telecommunication) between two or more points without the use of an electrical conductor, optical fiber or other continuous guided medium for the transfer. The most common wireless technologies use radio waves. With radio waves, intended distances can be short, such as a few meters for Bluetooth, or as far as millions of kilometers for deep-space radio communications. It encompasses various types of fixed, mobile, and portable applications, including two-way radios, cellular telephones, personal digital assistants (PDAs), and wireless networking. Other examples of applications of radio wireless technology include GPS units, garage door openers, wireless computer mouse, keyboards and headsets, headphones, radio receivers, satellite television, broadcast television and cordless telephones. Somewhat less common methods of achieving wireless communications involve other electromagnetic phenomena, such as light and magnetic or electric fields, or the use of sound.
The term wireless has been used twice in communications history, with slightly different meanings. It was initially used from about 1890 for the first radio transmitting and receiving technology, as in wireless telegraphy, until the new word radio replaced it around 1920. Radio sets in the UK and the English-speaking world that were not portable continued to be referred to as wireless sets into the 1960s. The term wireless was revived in the 1980s and 1990s mainly to distinguish digital devices that communicate without wires, such as the examples listed in the previous paragraph, from those that require wires or cables. This became its primary usage in the 2000s, due to the advent of technologies such as mobile broadband, Wi-Fi, and Bluetooth.
Wireless operations permit services, such as mobile and interplanetary communications, that are impossible or impractical to implement with the use of wires. The term is commonly used in the telecommunications industry to refer to telecommunications systems (e.g. radio transmitters and receivers, remote controls, etc.) that use some form of energy (e.g. radio waves and acoustic energy) to transfer information without the use of wires. Information is transferred in this manner over both short and long distances.
History
Photophone
The first wireless telephone conversation occurred in 1880 when Alexander Graham Bell and Charles Sumner Tainter invented the photophone, a telephone that sent audio over a beam of light. The photophone required sunlight to operate, and a clear line of sight between the transmitter and receiver, which greatly decreased the viability of the photophone in any practical use. It would be several decades before the photophone's principles found their first practical applications in military communications and later in fiber-optic communications.
Electric wireless technology
Early wireless
A number of wireless electrical signaling schemes including sending electric currents through water and the ground using electrostatic and electromagnetic induction were investigated for telegraphy in the late 19th century before practical radio systems became available. These included a patented induction system by Thomas Edison allowing a telegraph on a running train to connect with telegraph wires running parallel to the tracks, a William Preece induction telegraph system for sending messages across bodies of water, and several operational and proposed telegraphy and voice earth conduction systems.
The Edison system was used by stranded trains during the Great Blizzard of 1888 and earth conductive systems found limited use between trenches during World War I but these systems were never successful economically.
Radio waves
In 1894, Guglielmo Marconi began developing a wireless telegraph system using radio waves, which had been known of since Heinrich Hertz proved their existence in 1888, but which had been discounted as a communication format since they seemed, at the time, to be a short-range phenomenon. Marconi soon developed a system that was transmitting signals far beyond distances anyone could have predicted (due in part to the signals bouncing off the then unknown ionosphere). Marconi and Karl Ferdinand Braun were awarded the 1909 Nobel Prize for Physics for their contribution to this form of wireless telegraphy.
Millimetre wave communication was first investigated by Jagadish Chandra Bose during 1894–1896, when he reached an extremely high frequency of up to 60 GHz in his experiments. He also introduced the use of semiconductor junctions to detect radio waves, when he patented the radio crystal detector in 1901.
Wireless revolution
The wireless revolution began in the 1990s, with the advent of digital wireless networks leading to a social revolution, and a paradigm shift from wired to wireless technology, including the proliferation of commercial wireless technologies such as cell phones, mobile telephony, pagers, wireless computer networks, cellular networks, the wireless Internet, and laptop and handheld computers with wireless connections. The wireless revolution has been driven by advances in radio frequency (RF), microelectronics, and microwave engineering, and the transition from analog to digital RF technology, which enabled a substantial increase in voice traffic along with the delivery of digital data such as text messaging, images and streaming media.
Modes
Wireless communications can be via:
Radio
Radio and microwave communication carry information by modulating properties of electromagnetic waves transmitted through space. Specifically, the transmitter generates artificial electromagnetic waves by applying time-varying electric currents to its antenna. The waves travel away from the antenna until they eventually reach the antenna of a receiver, which induces an electric current in the receiving antenna. This current can be detected and demodulated to recreate the information sent by the transmitter.
Wireless optical
Free-space optical (long-range)
Free-space optical communication (FSO) is an optical communication technology that uses light propagating in free space to transmit wireless data for telecommunications or computer networking. "Free space" means the light beams travel through the open air or outer space. This contrasts with other communication technologies that use light beams traveling through transmission lines such as optical fiber or dielectric "light pipes".
The technology is useful where physical connections are impractical due to high costs or other considerations. For example, free space optical links are used in cities between office buildings that are not wired for networking, where the cost of running cable through the building and under the street would be prohibitive. Another widely used example is consumer IR devices such as remote controls and IrDA (Infrared Data Association) networking, which is used as an alternative to WiFi networking to allow laptops, PDAs, printers, and digital cameras to exchange data.
Sonic
Sonic, especially ultrasonic, short-range communication involves the transmission and reception of sound.
Electromagnetic induction
Electromagnetic induction only allows short-range communication and power transmission. It has been used in biomedical situations such as pacemakers, as well as for short-range RFID tags.
Services
Common examples of wireless equipment include:
Infrared and ultrasonic remote control devices
Professional LMR (Land Mobile Radio) and SMR (Specialized Mobile Radio) are typically used by business, industrial, and Public Safety entities.
Consumer Two-way radio including FRS Family Radio Service, GMRS (General Mobile Radio Service), and Citizens band ("CB") radios.
The Amateur Radio Service (Ham radio).
Consumer and professional Marine VHF radios.
Airband and radio navigation equipment used by aviators and air traffic control
Cellular telephones and pagers: provide connectivity for portable and mobile applications, both personal and business.
Global Positioning System (GPS): allows drivers of cars and trucks, captains of boats and ships, and pilots of aircraft to ascertain their location anywhere on earth.
Cordless computer peripherals: the cordless mouse is a common example; wireless headphones, keyboards, and printers can also be linked to a computer via wireless using technology such as Wireless USB or Bluetooth.
Cordless telephone sets: these are limited-range devices, not to be confused with cell phones.
Satellite television: broadcast from satellites in geostationary orbit. Typical services use direct broadcast satellite to provide multiple television channels to viewers.
Electromagnetic spectrum
AM and FM radios and other electronic devices make use of the electromagnetic spectrum. The frequencies of the radio spectrum that are available for use for communication are treated as a public resource and are regulated by organizations such as the American Federal Communications Commission, Ofcom in the United Kingdom, the international ITU-R or the European ETSI. Their regulations determine which frequency ranges can be used for what purpose and by whom. In the absence of such control or alternative arrangements such as a privatized electromagnetic spectrum, chaos might result if, for example, airlines did not have specific frequencies to work under and an amateur radio operator was interfering with a pilot's ability to land an aircraft. Wireless communication spans the spectrum from 9 kHz to 300 GHz.
Applications
Mobile telephones
One of the best-known examples of wireless technology is the mobile phone, also known as a cellular phone, with more than 6.6 billion mobile cellular subscriptions worldwide as of the end of 2010. These wireless phones use radio waves from signal-transmission towers to enable their users to make phone calls from many locations worldwide. They can be used within the range of the mobile telephone site used to house the equipment required to transmit and receive the radio signals from these instruments.
Data communications
Wireless data communications allow wireless networking between desktop computers, laptops, tablet computers, cell phones, and other related devices. The various available technologies differ in local availability, coverage range, and performance, and in some circumstances, users employ multiple connection types and switch between them using connection manager software or a mobile VPN to handle the multiple connections as a secure, single virtual network. Supporting technologies include:
Wi-Fi is a wireless local area network that enables portable computing devices to connect easily with other devices, peripherals, and the Internet. Standardized as IEEE 802.11 a, b, g, n, ac, ax, Wi-Fi has link speeds similar to older standards of wired Ethernet. Wi-Fi has become the de facto standard for access in private homes, within offices, and at public hotspots. Some businesses charge customers a monthly fee for service, while others have begun offering it free in an effort to increase the sales of their goods.
Cellular data service offers coverage within a range of 10-15 miles from the nearest cell site. Speeds have increased as technologies have evolved, from earlier technologies such as GSM, CDMA and GPRS, through 3G, to 4G networks such as W-CDMA, EDGE or CDMA2000. As of 2018, the proposed next generation is 5G.
Low-power wide-area networks (LPWAN) bridge the gap between Wi-Fi and Cellular for low-bitrate Internet of things (IoT) applications.
Mobile-satellite communications may be used where other wireless connections are unavailable, such as in largely rural areas or remote locations. Satellite communications are especially important for transportation, aviation, maritime and military use.
Wireless sensor networks are responsible for sensing noise, interference, and activity in data collection networks. This allows the detection of relevant quantities, the monitoring and collection of data, the formulation of clear user displays, and the performance of decision-making functions.
Wireless data communications are used to span a distance beyond the capabilities of typical cabling in point-to-point communication and point-to-multipoint communication, to provide a backup communications link in case of normal network failure, to link portable or temporary workstations, to overcome situations where normal cabling is difficult or financially impractical, or to remotely connect mobile users or networks.
Peripherals
Peripheral devices in computing can also be connected wirelessly, as part of a Wi-Fi network or directly via an optical or radio-frequency (RF) peripheral interface. Originally these units used bulky, highly local transceivers to mediate between a computer and a keyboard and mouse; however, more recent generations have used smaller, higher-performance devices. Radio-frequency interfaces, such as Bluetooth or Wireless USB, provide greater ranges of efficient use, usually up to 10 feet, but distance, physical obstacles, competing signals, and even human bodies can all degrade the signal quality. Concerns about the security of wireless keyboards arose at the end of 2007, when it was revealed that Microsoft's implementation of encryption in some of its 27 MHz models was highly insecure.
Energy transfer
Wireless energy transfer is a process whereby electrical energy is transmitted from a power source to an electrical load that does not have a built-in power source, without the use of interconnecting wires. There are two different fundamental methods for wireless energy transfer. Energy can be transferred using either far-field methods that involve beaming power/lasers, radio or microwave transmissions, or near-field using electromagnetic induction. Wireless energy transfer may be combined with wireless information transmission in what is known as Wireless Powered Communication. In 2015, researchers at the University of Washington demonstrated far-field energy transfer using Wi-Fi signals to power cameras.
Medical technologies
New wireless technologies, such as mobile body area networks (MBAN), have the capability to monitor blood pressure, heart rate, oxygen level, and body temperature. The MBAN works by sending low-powered wireless signals to receivers that feed into nursing stations or monitoring sites. This technology helps reduce the risks of infection and of intentional or unintentional disconnection that arise from wired connections.
Categories of implementations, devices, and standards
Cellular networks: 0G, 1G, 2G, 3G, 4G, 5G, 6G
Cordless telephony: DECT (Digital Enhanced Cordless Telecommunications)
Land Mobile Radio or Professional Mobile Radio: TETRA, P25, OpenSky, EDACS, DMR, dPMR
List of emerging technologies
Radio station in accordance with ITU RR (article 1.61)
Radiocommunication service in accordance with ITU RR (article 1.19)
Radio communication system
Short-range point-to-point communication: Wireless microphones, Remote controls, IrDA, RFID (Radio Frequency Identification), TransferJet, Wireless USB, DSRC (Dedicated Short Range Communications), EnOcean, Near Field Communication
Wireless sensor networks: Zigbee, EnOcean; Personal area networks, Bluetooth, TransferJet, Ultra-wideband (UWB from WiMedia Alliance).
Wireless networks: Wireless LAN (WLAN), (IEEE 802.11 branded as Wi-Fi and HiperLAN), Wireless Metropolitan Area Networks (WMAN) and (LMDS, WiMAX, and HiperMAN)
| Technology | Media and communication: Basics | null |
185887 | https://en.wikipedia.org/wiki/Respiratory%20failure | Respiratory failure | Respiratory failure results from inadequate gas exchange by the respiratory system, meaning that the arterial oxygen, carbon dioxide, or both cannot be kept at normal levels. A drop in the oxygen carried in the blood is known as hypoxemia; a rise in arterial carbon dioxide levels is called hypercapnia. Respiratory failure is classified as either Type 1 or Type 2, based on whether there is a high carbon dioxide level, and can be acute or chronic. In clinical trials, the definition of respiratory failure usually includes increased respiratory rate, abnormal blood gases (hypoxemia, hypercapnia, or both), and evidence of increased work of breathing. Respiratory failure causes an altered state of consciousness due to ischemia in the brain.
The typical partial pressure reference values are an oxygen PaO2 of more than 80 mmHg (11 kPa) and a carbon dioxide PaCO2 of less than 45 mmHg (6.0 kPa).
Cause
A variety of conditions can potentially result in respiratory failure. The etiologies of each type of respiratory failure (see below) may differ as well. Different types of conditions may cause respiratory failure:
Conditions that reduce the flow of air into and out of the lungs, including physical obstruction by foreign bodies or masses and reduced breathing due to drugs or changes to the chest.
Conditions that impair the lungs' blood supply. These include thromboembolic conditions and conditions that reduce the output of the right heart, such as right heart failure and some myocardial infarctions.
Conditions that limit the ability of the lung tissue to exchange oxygen and carbon dioxide between the blood and the air within the lungs. Any disease which can damage the lung tissue can fit into this category. The most common causes are (in no particular order) infections, interstitial lung disease, and pulmonary edema.
Types
Respiratory failure is generally organized into four types, distinguished by their characteristic blood gas abnormalities and the major causes of each.
Type 1
Type 1 respiratory failure is characterized by a low level of oxygen in the blood (hypoxemia; PaO2 < 60 mmHg) with a normal (normocapnia) or low (hypocapnia) level of carbon dioxide (PaCO2) in the blood.
The fundamental defect in type 1 respiratory failure is a failure of oxygenation characterized by:
PaO2: decreased (< 60 mmHg / 8.0 kPa)
PaCO2: normal or decreased (< 45 mmHg / 6.0 kPa)
PA-aO2 (alveolar–arterial oxygen gradient): increased
Type 1 respiratory failure is caused by conditions that affect oxygenation and therefore lead to lower-than-normal oxygen in the blood. These include:
Low ambient oxygen (e.g. at high altitude).
Ventilation–perfusion mismatch (parts of the lung receive oxygen but not enough blood to absorb it; e.g. pulmonary embolism, acute respiratory distress syndrome, chronic obstructive pulmonary disease, congestive heart failure).
Alveolar hypoventilation (decreased minute volume due to reduced respiratory muscle activity, e.g. in acute neuromuscular disease); this form can also cause type 2 respiratory failure if severe.
Diffusion problem (oxygen cannot enter the capillaries due to parenchymal disease, e.g. in pneumonia or ARDS).
Right-to-left shunt (oxygenated blood mixes with non-oxygenated blood from the venous system; e.g. arteriovenous malformation, complete atelectasis, severe pneumonia, severe pulmonary edema).
Type 2
Type 2 respiratory failure is characterized by hypoxemia (PaO2 < 8.0 kPa) or normal oxygenation, together with hypercapnia (PaCO2 > 6.0 kPa).
The basic defect in type 2 respiratory failure is characterized by:
PaO2: decreased (< 60 mmHg / 8.0 kPa) or normal
PaCO2: increased (> 45 mmHg / 6.0 kPa)
PA-aO2: normal
pH: < 7.35
Type 2 respiratory failure is caused by inadequate alveolar ventilation; both oxygen and carbon dioxide are affected. It is defined as a buildup of carbon dioxide (PaCO2) that has been generated by the body but cannot be eliminated. The underlying causes include:
Increased airways resistance (chronic obstructive pulmonary disease, asthma, suffocation)
Reduced breathing effort (drug effects, brain stem lesion, extreme obesity)
A decrease in the area of the lung available for gas exchange (such as in chronic bronchitis)
Neuromuscular problems (Guillain–Barré syndrome, motor neuron disease)
Deformed (kyphoscoliosis), rigid (ankylosing spondylitis), or flail chest.
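As a simplified summary of the type 1 versus type 2 distinction described above, the short sketch below encodes only the threshold values quoted in this section (PaO2 below 60 mmHg, PaCO2 above 45 mmHg); it is an illustration of the definitions, not a clinical decision tool.

    # Simplified illustration of the type 1 / type 2 definitions given above,
    # using only the quoted thresholds. Not a clinical decision aid.
    def classify(pao2_mmhg: float, paco2_mmhg: float) -> str:
        hypoxemia = pao2_mmhg < 60      # low blood oxygen
        hypercapnia = paco2_mmhg > 45   # raised blood carbon dioxide
        if hypercapnia:
            return "type 2 (hypercapnic) respiratory failure"
        if hypoxemia:
            return "type 1 (hypoxemic) respiratory failure"
        return "no respiratory failure by these thresholds"

    print(classify(pao2_mmhg=55, paco2_mmhg=38))   # type 1 pattern
    print(classify(pao2_mmhg=58, paco2_mmhg=60))   # type 2 pattern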
Type 3
Type 3 respiratory failure is a type of Type 1 respiratory failure, with decreased PaO2 (hypoxemia) and either normal or decreased PaCO2. However, because of its prevalence, it has been given its own category. Type 3 respiratory failure is often referred to as peri-operative respiratory failure, because it is distinguished by being a Type 1 respiratory failure that is specifically associated with an operation, procedure, or surgery.
The pathophysiology of type 3 respiratory failure often includes lung atelectasis, which is a term used to describe a collapsing of the functional units of the lung that allow for gas exchange. Because atelectasis occurs so commonly in the perioperative period, this form is also called perioperative respiratory failure. After general anesthesia, decreases in functional residual capacity leads to collapse of dependent lung units.
Type 4
Type 4 respiratory failure occurs when metabolic (oxygen) demands exceed what the cardiopulmonary system can provide. It often results from hypoperfusion of respiratory muscles as in patients in shock, such as cardiogenic shock or hypovolemic shock. Patients in shock often experience respiratory distress due to pulmonary edema (e.g., in cardiogenic shock). Lactic acidosis and anemia can also result in type 4 respiratory failure. However, type 1 and 2 are the most widely accepted.
Physical exam
Physical exam findings often found in patients with respiratory failure include findings indicative of impaired oxygenation (low blood oxygen level). These include, but are not limited to, the following:
Accessory muscle use in breathing or other signs of respiratory distress
Altered mental status (e.g., confusion, lethargy)
Clubbing of fingertips (see image right)
Peripheral cyanosis (e.g., bluish color on mucosal membranes or fingers and/or toes)
Tachypnea (faster breathing rate)
Pale conjunctiva
People with respiratory failure often exhibit other signs or symptoms that are associated with the underlying cause of their respiratory failure. For instance, if respiratory failure is caused by cardiogenic shock (decreased perfusion due to heart dysfunction), symptoms of heart dysfunction (e.g., pitting edema) are also expected.
Diagnosis
Arterial blood gas (ABG) assessment is considered the gold standard diagnostic test for establishing a diagnosis of respiratory failure. This is because ABG can be used to measure blood oxygen levels (PaO2), and respiratory failure (all types) is characterized by a low blood oxygen level.
Alternative or supporting diagnostic methods include the following:
Capnometry: measures the amount of carbon dioxide in exhaled air.
Pulse Oximetry: measures the fraction of hemoglobin saturated with oxygen (SpO2).
Imaging (e.g., ultrasonography, radiography) may be used to assist in the diagnostic workup. For example, it may be utilized to determine the etiology of a person's respiratory failure.
Treatment
Treatment of the underlying cause is required, if possible. The treatment of acute respiratory failure may involve medication such as bronchodilators (for airways disease), antibiotics (for infections), glucocorticoids (for numerous causes), diuretics (for pulmonary oedema), amongst others. Respiratory failure resulting from an overdose of opioids may be treated with the antidote naloxone. In contrast, most benzodiazepine overdose does not benefit from its antidote, flumazenil. Respiratory therapy/respiratory physiotherapy may be beneficial in some cases of respiratory failure.
Type 1 respiratory failure may require oxygen therapy to achieve adequate oxygen saturation. Lack of response to oxygen may indicate other modalities such as heated humidified high-flow therapy, continuous positive airway pressure or (if severe) endotracheal intubation and mechanical ventilation.
Type 2 respiratory failure often requires non-invasive ventilation (NIV) unless medical therapy can improve the situation. Mechanical ventilation is sometimes indicated immediately or otherwise if NIV fails. Respiratory stimulants such as doxapram are now rarely used.
There is tentative evidence that in those with respiratory failure identified before arrival in hospital, continuous positive airway pressure can be helpful when started before conveying to hospital.
Prognosis
Prognosis is highly variable and dependent on etiology and the availability of appropriate treatment and management. One in three hospitalized cases of acute respiratory failure is fatal.
| Biology and health sciences | Injury | null |
185901 | https://en.wikipedia.org/wiki/Subspecies | Subspecies | In biological classification, subspecies (plural: subspecies) is a rank below species, used for populations that live in different areas and vary in size, shape, or other physical characteristics (morphology), but that can successfully interbreed. Not all species have subspecies, but for those that do there must be at least two. Subspecies is abbreviated as subsp. or ssp. and the singular and plural forms are the same ("the subspecies is" or "the subspecies are").
In zoology, under the International Code of Zoological Nomenclature, the subspecies is the only taxonomic rank below that of species that can receive a name. In botany and mycology, under the International Code of Nomenclature for algae, fungi, and plants, other infraspecific ranks, such as variety, may be named. In bacteriology and virology, under standard bacterial nomenclature and virus nomenclature, there are recommendations but not strict requirements for recognizing other important infraspecific ranks.
A taxonomist decides whether to recognize a subspecies. A common criterion for recognizing two distinct populations as subspecies rather than full species is their ability to interbreed, even if some male offspring may be sterile. In the wild, subspecies do not interbreed due to geographic isolation or sexual selection. The differences between subspecies are usually less distinct than the differences between species.
Nomenclature
The scientific name of a species is a binomial or binomen, and comprises two Latin words, the first denoting the genus and the second denoting the species. The scientific name of a subspecies is formed slightly differently in the different nomenclature codes. In zoology, under the International Code of Zoological Nomenclature (ICZN), the scientific name of a subspecies is termed a trinomen, and comprises three words, namely the binomen followed by the name of the subspecies. For example, the binomen for the leopard is Panthera pardus. The trinomen Panthera pardus fusca denotes a subspecies, the Indian leopard. All components of the trinomen are written in italics.
In botany, subspecies is one of many ranks below that of species, such as variety, subvariety, form, and subform. To identify the rank, the subspecific name must be preceded by "subspecies" (which can be abbreviated to "subsp." or "ssp."), as in Schoenoplectus californicus subsp. tatora.
In bacteriology, the only rank below species that is regulated explicitly by the code of nomenclature is subspecies, but infrasubspecific taxa are extremely important in bacteriology; Appendix 10 of the code lays out some recommendations that are intended to encourage uniformity in describing such taxa. Names published before 1992 in the rank of variety are taken to be names of subspecies (see International Code of Nomenclature of Prokaryotes). As in botany, subspecies is conventionally abbreviated as "subsp.", and is used in the scientific name: Bacillus subtilis subsp. spizizenii.
Nominotypical subspecies and subspecies autonyms
In zoological nomenclature, when a species is split into subspecies, the originally described population is retained as the "nominotypical subspecies" or "nominate subspecies", which repeats the same name as the species. For example, Motacilla alba alba (often abbreviated M. a. alba) is the nominotypical subspecies of the white wagtail (Motacilla alba).
The subspecies name that repeats the species name is referred to in botanical nomenclature as the subspecies "autonym", and the subspecific taxon as the "autonymous subspecies".
Doubtful cases
When zoologists disagree over whether a certain population is a subspecies or a full species, the species name may be written in parentheses. Thus Larus (argentatus) smithsonianus means the American herring gull; the notation within the parentheses means that some consider it a subspecies of a larger herring gull species and therefore call it Larus argentatus smithsonianus, while others consider it a full species and therefore call it Larus smithsonianus (and the user of the notation is not taking a position).
Criteria
A subspecies is a taxonomic rank below species – the only such rank recognized in the zoological code, and one of three main ranks below species in the botanical code. When geographically separate populations of a species exhibit recognizable phenotypic differences, biologists may identify these as separate subspecies; a subspecies is a recognized local variant of a species. Botanists and mycologists have the choice of ranks lower than subspecies, such as variety (varietas) or form (forma), to recognize smaller differences between populations.
Monotypic and polytypic species
In biological terms, rather than in relation to nomenclature, a polytypic species has two or more genetically and phenotypically divergent subspecies, races, or more generally speaking, populations that differ from each other so that a separate description is warranted. These distinct groups do not interbreed because they are isolated from one another, but they can interbreed and have fertile offspring, e.g. in captivity. These subspecies, races, or populations are usually described and named by zoologists, botanists and microbiologists.
In a monotypic species, all populations exhibit the same genetic and phenotypical characteristics. Monotypic species can occur in several ways:
All members of the species are very similar and cannot be sensibly divided into biologically significant subcategories.
The individuals vary considerably, but the variation is essentially random and largely meaningless so far as genetic transmission of these variations is concerned.
The variation among individuals is noticeable and follows a pattern, but there are no clear dividing lines among separate groups: they fade imperceptibly into one another. Such clinal variation always indicates substantial gene flow among the apparently separate groups that make up the population(s). Populations that have a steady, substantial gene flow among them are likely to represent a monotypic species, even when a fair degree of genetic variation is obvious.
| Biology and health sciences | Taxonomic rank | Biology |
185982 | https://en.wikipedia.org/wiki/Home%20appliance | Home appliance | A home appliance, also referred to as a domestic appliance, an electric appliance or a household appliance, is a machine which assists in household functions such as cooking, cleaning and food preservation.
The domestic application attached to home appliance is tied to the definition of appliance as "an instrument or device designed for a particular use or function". Collins English Dictionary defines "home appliance" as: "devices or machines, usually electrical, that are in your home and which you use to do jobs such as cleaning or cooking". The broad usage allows for nearly any device intended for domestic use to be a home appliance, including consumer electronics as well as stoves, refrigerators, toasters and air conditioners.
The development of self-contained electric and gas-powered appliances, an American innovation, emerged in the early 20th century. This evolution is linked to the decline of full-time domestic servants and desire to reduce household chores, allowing for more leisure time. Early appliances included washing machines, water heaters, refrigerators, and sewing machines. The industry saw significant growth post-World War II, with the introduction of dishwashers and clothes dryers. By the 1980s, the appliance industry was booming, leading to mergers and antitrust legislation. The US National Appliance Energy Conservation Act of 1987 mandated a 25% reduction in energy consumption every five years. By the 1990s, five companies dominated over 90% of the market.
Major appliances, often called white goods, include items like refrigerators and washing machines, while small appliances encompass items such as toasters and coffee makers. Product design shifted in the 1960s, embracing new materials and colors. Consumer electronics, often referred to as brown goods, include items like TVs and computers. There is a growing trend towards home automation and internet-connected appliances. Recycling of home appliances involves dismantling and recovering materials.
History
While many appliances have existed for centuries, the self-contained electric or gas powered appliances are a uniquely American innovation that emerged in the early twentieth century. The development of these appliances is tied to the disappearance of full-time domestic servants and the desire to reduce the time-consuming activities in pursuit of more recreational time. In the early 1900s, electric and gas appliances included washing machines, water heaters, refrigerators, kettles and sewing machines. The invention of Earl Richardson's small electric clothes iron in 1903 gave a small initial boost to the home appliance industry. In the Post–World War II economic expansion, the domestic use of dishwashers, and clothes dryers were part of a shift for convenience. Increasing discretionary income was reflected by a rise in miscellaneous home appliances.
In America during the 1980s, the industry shipped $1.5 billion worth of goods each year and employed over 14,000 workers, with revenues doubling between 1982 and 1990 to $3.3 billion. Throughout this period, companies merged and acquired one another to reduce research and production costs and eliminate competitors, resulting in antitrust legislation.
The United States Department of Energy reviews compliance with the National Appliance Energy Conservation Act of 1987, which required manufacturers to reduce the energy consumption of the appliances by 25% every five years.
In the 1990s, the appliance industry was very consolidated, with over 90% of the products being sold by just five companies. For example, in 1991, dishwasher manufacturing market share was split between General Electric with 40% market share, Whirlpool with 31%, Electrolux with 20%, Maytag with 7% and Thermador with just 2%.
Major appliances
Major appliances, also known as white goods, comprise major household appliances and may include: air conditioners, dishwashers, clothes dryers, drying cabinets, freezers, refrigerators, kitchen stoves, water heaters, washing machines, trash compactors, microwave ovens, and induction cookers. White goods were typically painted or enameled white, and many of them still are.
Small appliances
Small appliances are typically small household electrical machines that are useful and easily carried and installed. Many of them are used in the kitchen, including juicers, electric mixers, meat grinders, coffee grinders, deep fryers, herb grinders, food processors, electric kettles, waffle irons, coffee makers, blenders, rice cookers, toasters and exhaust hoods.
Product design
In the 1960s the product design for appliances such as washing machines, refrigerators, and electric toasters shifted away from Streamline Moderne and embraced technological advances in the fabrication of sheet metal. A choice of colors, as well as fashionable accessories, could be offered to the mass market without increasing production costs. Home appliances were sold as space-saving ensembles.
Consumer electronics
Consumer electronics or home electronics are electronic (analog or digital) equipment intended for everyday use, typically in private homes. Consumer electronics include devices used for entertainment, communications and recreation. In British English, they are often called brown goods by producers and sellers, to distinguish them from "white goods", which are meant for housekeeping tasks, such as washing machines and refrigerators, although nowadays some of these are connected to the Internet and could themselves be considered brown goods. Some such appliances were traditionally finished with genuine or imitation wood, hence the name. This has become rare but the name has stuck, even for goods that are unlikely ever to have had a wooden case (e.g. camcorders). In the 2010s, this distinction is absent in large big box consumer electronics stores, which sell both entertainment, communication, and home office devices and kitchen appliances such as refrigerators. The highest selling consumer electronics products are compact discs. Examples are: home electronics, radio receivers, TV sets, VCRs, CD and DVD players, digital cameras, camcorders, still cameras, clocks, alarm clocks, computers, video game consoles, HiFi and home cinema, telephones and answering machines.
Life spans
A survey conducted in 2020 of more than thirteen thousand people in the UK revealed how long appliance owners had their appliances before needing to replace them due to a fault, deteriorating performance, or the age of the appliance.
Home automation
There is a trend of networking home appliances together, and combining their controls and key functions. For instance, energy distribution could be managed more evenly so that when a washing machine is on, an oven can go into a delayed start mode, or vice versa. Or, a washing machine and clothes dryer could share information about load characteristics (gentle/normal, light/full), and synchronize their finish times so the wet laundry does not have to wait before being put in the dryer.
Additionally, some manufacturers of home appliances are quickly beginning to place hardware that enables Internet connectivity in home appliances to allow for remote control, automation, communication with other home appliances, and more functionality enabling connected cooking. Internet-connected home appliances were especially prevalent during recent Consumer Electronics Show events.
Recycling
Appliance recycling consists of dismantling waste home appliances and scrapping their parts for reuse. The main types of appliances that are recycled are T.V.s, refrigerators, air conditioners, washing machines, and computers. It involves disassembly, removal of hazardous components and destruction of the equipment to recover materials, generally by shredding, sorting and grading.
| Technology | Household appliances | null |
186108 | https://en.wikipedia.org/wiki/Color%20management | Color management | Color management is the process of ensuring consistent and accurate colors across various devices, such as monitors, printers, and cameras. It involves the use of color profiles, which are standardized descriptions of how colors should be displayed or reproduced.
Color management is necessary because different devices have different color capabilities and characteristics. For example, a monitor may display colors differently than a printer can reproduce them. Without color management, the same image may appear differently on different devices, leading to inconsistencies and inaccuracies.
To achieve color management, a color profile is created for each device involved in the color workflow. This profile describes the device's color capabilities and characteristics, such as its color gamut (range of colors it can display or reproduce) and color temperature. These profiles are then used to translate colors between devices, ensuring consistent and accurate color reproduction.
Color management is particularly important in industries such as graphic design, photography, and printing, where accurate color representation is crucial. It helps to maintain color consistency throughout the entire workflow, from capturing an image to displaying or printing it.
Parts of color management are implemented in the operating system (OS), helper libraries, the application, and devices. The type of color profile that is typically used is called an ICC profile. A cross-platform view of color management is the use of an ICC-compatible color management system. The International Color Consortium (ICC) is an industry consortium that has defined:
an open standard for a Color Matching Module (CMM) at the OS level
color profiles for:
devices, including DeviceLink profiles that transform one device profile (color space) to another device profile without passing through an intermediate color space, such as L*A*B*, more accurately preserving color
working spaces, the color spaces in which color data is meant to be manipulated
There are other approaches to color management besides using ICC profiles. This is partly due to history and partly because of other needs than the ICC standard covers. The film and broadcasting industries make use of some of the same concepts, but they frequently rely on more limited boutique solutions. The film industry, for instance, often uses 3D LUTs (lookup table) to represent a complete color transformation for a specific RGB encoding.
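As a rough illustration (not drawn from any particular film pipeline), the sketch below shows how a 3D LUT can be applied to a single RGB value using trilinear interpolation. The lattice size and the identity table are arbitrary choices for the example; real pipelines typically load such cubes from .cube files.

import numpy as np

def apply_3d_lut(rgb, lut):
    """Map one RGB triple (values in [0, 1]) through a 3D LUT.

    `lut` is an (N, N, N, 3) array. Trilinear interpolation is done
    between the 8 lattice points surrounding the input color.
    """
    n = lut.shape[0]
    # Scale to lattice coordinates and find the surrounding cell.
    pos = np.clip(np.asarray(rgb, dtype=float), 0.0, 1.0) * (n - 1)
    lo = np.floor(pos).astype(int)
    hi = np.minimum(lo + 1, n - 1)
    frac = pos - lo

    out = np.zeros(3)
    # Sum over the 8 corners of the cell, weighted by distance.
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                w = ((frac[0] if dr else 1 - frac[0]) *
                     (frac[1] if dg else 1 - frac[1]) *
                     (frac[2] if db else 1 - frac[2]))
                idx = (hi[0] if dr else lo[0],
                       hi[1] if dg else lo[1],
                       hi[2] if db else lo[2])
                out += w * lut[idx]
    return out

# Identity LUT example: output should equal input.
N = 17
grid = np.linspace(0.0, 1.0, N)
identity = np.stack(np.meshgrid(grid, grid, grid, indexing="ij"), axis=-1)
print(apply_3d_lut([0.25, 0.5, 0.75], identity))  # ~[0.25, 0.5, 0.75]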
At the consumer level, system wide color management is available in most of Apple's products (macOS, iOS, iPadOS, watchOS). Microsoft Windows lacks system wide color management, and virtually no applications employ color management. Windows' media player API is not color space aware, and applications that want to color-manage video manually must accept significant performance and power consumption penalties. Android supports system wide color management, but most devices ship with it disabled.
Overview
Characterize. Every color-managed device requires a personalized table, or "color profile," which characterizes the color response of that particular device.
Standardize. Each color profile describes these colors relative to a standardized set of reference colors (the "Profile Connection Space").
Translate. Color-managed software then uses these standardized profiles to translate color from one device to another. This is usually performed by a color management module (CMM).
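As an illustrative sketch of the translate step, the snippet below uses Pillow's ImageCms module (a wrapper around Little CMS acting as the CMM). The image and printer-profile file names are hypothetical, and the exact calls should be checked against the installed Pillow version.

from PIL import Image, ImageCms

# Hypothetical profiles; any valid ICC profiles would do.
src_profile = ImageCms.createProfile("sRGB")             # working-space profile
dst_profile = ImageCms.getOpenProfile("my_printer.icc")  # assumed CMYK device profile

im = Image.open("photo.jpg").convert("RGB")

# Build the transform once, then apply it; the CMM handles the
# translation through the profile connection space.
transform = ImageCms.buildTransform(
    src_profile, dst_profile,
    inMode="RGB", outMode="CMYK",
    renderingIntent=ImageCms.INTENT_PERCEPTUAL,
)
converted = ImageCms.applyTransform(im, transform)
converted.save("photo_printer.tif")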
Hardware
Characterization
To describe the behavior of various output devices, they must be compared (measured) in relation to a standard color space. Often a step called linearization is performed first, to undo the effect of gamma correction that was done to get the most out of limited 8-bit color paths. Instruments used for measuring device colors include colorimeters and spectrophotometers. As an intermediate result, the device gamut is described in the form of scattered measurement data. The transformation of the scattered measurement data into a more regular form, usable by the application, is called profiling. Profiling is a complex process involving mathematics, intense computation, judgment, testing, and iteration. After the profiling is finished, an idealized color description of the device is created. This description is called a profile.
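For the linearization step, a minimal sketch using the published sRGB transfer curve is shown below. A real device would instead be linearized from measured per-channel response curves, so this textbook formula only stands in for that measurement.

def srgb_to_linear(c):
    """Undo the sRGB transfer curve for one channel value in [0, 1]."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Re-apply the sRGB transfer curve (inverse of the above)."""
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

print(srgb_to_linear(0.5))                   # ~0.214
print(linear_to_srgb(srgb_to_linear(0.5)))   # ~0.5 (round trip)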
Calibration
Calibration is like characterization, except that it can include the adjustment of the device, as opposed to just the measurement of the device. Color management is sometimes sidestepped by calibrating devices to a common standard color space such as sRGB; when such calibration is done well enough, no color translations are needed to get all devices to handle colors consistently. This avoidance of the complexity of color management was one of the goals in the development of sRGB.
Color profiles
Embedding
Image formats themselves (such as TIFF, JPEG, PNG, EPS, PDF, and SVG) may contain embedded color profiles but are not required to do so by the image format. The International Color Consortium standard was created to bring various developers and manufacturers together. The ICC standard permits the exchange of output device characteristics and color spaces in the form of metadata. This allows the embedding of color profiles into images as well as storing them in a database or a profile directory.
Working spaces
Working spaces, such as sRGB, Adobe RGB or ProPhoto, are color spaces that facilitate good results while editing. For instance, pixels with equal values of R, G, B should appear neutral. Using a large-gamut working space can lead to posterization, while using a small one can lead to clipping. This trade-off is a consideration for the critical image editor.
Color transformation
Color transformation, or color space conversion, is the transformation of the representation of a color from one color space to another. This calculation is required whenever data is exchanged inside a color-managed chain and is carried out by a Color Matching Module. Transforming profiled color information to different output devices is achieved by referencing the profile data into a standard color space. It makes it easier to convert colors from one device to a selected standard color space and from that to the colors of another device. By ensuring that the reference color space covers the many possible colors that humans can see, this concept allows one to exchange colors between many different color output devices. Color transformations can be represented by two profiles (source profile and target profile) or by a devicelink profile. The process involves approximations that preserve the image's most important color qualities and give the user control over how the colors are changed.
Profile connection space
In the terminology of the International Color Consortium, a translation between two color spaces can go through a profile connection space (PCS): Color Space 1 → PCS (CIELAB or CIEXYZ) → Color space 2; conversions into and out of the PCS are each specified by a profile.
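As a minimal sketch of a translation into a profile connection space, the snippet below converts linear sRGB values to CIEXYZ and then to CIELAB, using the standard D65-referenced matrix and the textbook CIELAB formulas. Actual ICC profiles carry their own matrices or lookup tables rather than these constants.

import numpy as np

# Standard linear-sRGB -> CIEXYZ matrix (D65 white point).
SRGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])
D65_WHITE = np.array([0.95047, 1.00000, 1.08883])

def xyz_to_lab(xyz, white=D65_WHITE):
    """CIEXYZ -> CIELAB, one common choice of profile connection space."""
    def f(t):
        eps = (6 / 29) ** 3
        return np.where(t > eps, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
    fx, fy, fz = f(xyz / white)
    return np.array([116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)])

linear_rgb = np.array([0.2, 0.4, 0.6])   # already-linearized channel values
xyz = SRGB_TO_XYZ @ linear_rgb
print(xyz, xyz_to_lab(xyz))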
Gamut mapping
In nearly every translation process, the color gamuts of the different devices vary in range, which makes exact reproduction impossible. Colors therefore need some rearrangement near the borders of the gamut: some colors must be shifted to the inside of the gamut, as they otherwise cannot be represented on the output device and would simply be clipped. This so-called gamut mismatch occurs, for example, when translating from an RGB color space with a wider gamut into a CMYK color space with a narrower gamut range. In this example, the dark highly saturated purplish-blue color of a typical computer monitor's "blue" primary is impossible to print on paper with a typical CMYK printer. The nearest approximation within the printer's gamut will be much less saturated. Conversely, an inkjet printer's "cyan" primary, a saturated mid-brightness blue, is outside the gamut of a typical computer monitor. The color management system can utilize various methods to achieve desired results and give experienced users control of the gamut mapping behavior.
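As a toy comparison (not how a real CMM works internally), the sketch below contrasts hard clipping with a simple compression toward a neutral gray. Production gamut mapping is performed in perceptual color spaces with far more sophisticated strategies; the out-of-range input value here is just an illustrative stand-in for a wide-gamut color.

import numpy as np

def clip_to_gamut(rgb):
    """Hard clipping: out-of-range channels are simply truncated.
    Cheap, but hue and lightness of out-of-gamut colors can shift."""
    return np.clip(rgb, 0.0, 1.0)

def compress_to_gamut(rgb):
    """Toy compression: pull the color toward its own channel average
    just enough to bring every channel back into [0, 1]. This keeps more
    of the color's character than clipping, at the cost of desaturation;
    real gamut mapping is done perceptually, not in raw RGB."""
    rgb = np.asarray(rgb, dtype=float)
    gray = np.full(3, rgb.mean())
    # Smallest blend factor t in [0, 1] such that gray + t*(rgb - gray)
    # lies inside the unit cube.
    t = 1.0
    for c, g in zip(rgb, gray):
        if c > 1.0:
            t = min(t, (1.0 - g) / (c - g))
        elif c < 0.0:
            t = min(t, (0.0 - g) / (c - g))
    return gray + t * (rgb - gray)

out_of_gamut = [1.3, 0.2, -0.1]
print(clip_to_gamut(out_of_gamut))      # [1.0, 0.2, 0.0]
print(compress_to_gamut(out_of_gamut))  # desaturated but in range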
Rendering intent
When the gamut of the source color space exceeds that of the destination, saturated colors are liable to become clipped (inaccurately represented), or more formally burned. The color management module can deal with this problem in several ways. The ICC specification defines four rendering intents: perceptual, relative colorimetric, saturation, and absolute colorimetric. Before the actual rendering intent is carried out, one can temporarily simulate the rendering by soft proofing, a useful tool that predicts the outcome of the colors and is available in many color management systems.
In practice, photographers almost always use the relative colorimetric or perceptual intent: for natural images, absolute colorimetric causes a color cast, while saturation produces unnatural colors. If an entire image is in gamut, relative colorimetric is perfect, but when there are out-of-gamut colors, which intent is preferable must be judged case by case. CMMs may offer options for BPC and partial chromatic adaptation.
Black point compensation (BPC) is not applied for the absolute colorimetric intent or for devicelink profiles. For ICCv4, it is always applied to the perceptual intent. ICCv2 sRGB profiles differ from each other in a number of ways, one of which is whether BPC is applied.
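As a hedged example of choosing a rendering intent, enabling black point compensation and soft proofing, the snippet below uses Pillow's ImageCms. The printer profile is hypothetical, and the flag and intent names follow Pillow's documented constants, which should be verified against the installed version.

from PIL import Image, ImageCms

im = Image.open("photo.jpg").convert("RGB")
srgb = ImageCms.createProfile("sRGB")
printer = ImageCms.getOpenProfile("my_printer.icc")   # hypothetical CMYK profile

# Soft proof: show on an sRGB display what the printer would produce,
# using relative colorimetric intent with black point compensation.
proof = ImageCms.buildProofTransform(
    srgb, srgb, printer,
    inMode="RGB", outMode="RGB",
    renderingIntent=ImageCms.INTENT_RELATIVE_COLORIMETRIC,
    proofRenderingIntent=ImageCms.INTENT_RELATIVE_COLORIMETRIC,
    flags=ImageCms.FLAGS["SOFTPROOFING"] | ImageCms.FLAGS["BLACKPOINTCOMPENSATION"],
)
preview = ImageCms.applyTransform(im, proof)
preview.show()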
Implementation
Color management module
Color matching module (also -method or -system) is a software algorithm that adjusts the numerical values that get sent to or received from different devices so that the perceived color they produce remains consistent. The key issue here is how to deal with a color that cannot be reproduced on a certain device in order to show it through a different device as if it were visually the same color, just as when the reproducible color range between color transparencies and printed matters are different. There is no common method for this process, and the performance depends on the capability of each color matching method.
Some well known CMMs are ColorSync, Adobe CMM, Little CMS, and ArgyllCMS.
Operating system level
Apple
Apple's classic Mac OS and macOS operating systems have provided OS-level color management APIs since 1993, through ColorSync. macOS also applies color management automatically in the OS (assuming sRGB for most content), but applications can explicitly target other color spaces if they wish. System wide color management is used in iOS, iPadOS and watchOS as well.
Windows
Since 1997 color management in Windows is available through an ICC color management system: ICM (Image Color Management).
Beginning with Windows Vista, Microsoft introduced a new color architecture known as WCS (Windows Color System). WCS supplements the ICM system in Windows 2000 and Windows XP, originally written by Heidelberg.
Apps need to be aware of color management and tag the content appropriately to accurately display colors. Otherwise (unlike macOS), Windows will display the colors to the maximum extent of the display's gamut, resulting in over-saturated colors on wide-gamut displays. To address this, Microsoft has included a feature called "Auto Color Management" since the Windows 11 2022 update.
Windows Photo Viewer from Windows 7 (also included in later Windows versions) performs proper color management, however, the newer Windows Photos app in Windows 8, 10, 11 does not perform color management until version v2022.31070.26005.0. Other Windows components, including Microsoft Paint, Snipping Tool, Windows Desktop, Windows Explorer, do not perform color management.
Unfortunately, the vast majority of applications do not use the Windows Color System. For applications that do employ color management (typically web browsers), color management tends to apply only to images and UI elements, not to video. This is because Windows' media player API is not color space aware. Thus, browsers (Chrome, Firefox, Edge) can only color-manage images, not video. For the same reason, virtually no video players on Windows support color management (including the default Movies & TV app and VLC), with Media Player Classic Home Cinema being a rare exception.
Android
On Android, system wide color management was introduced in Android Oreo 8.1. However, most Android phones ship with color management disabled (e.g. the 'adaptive' color profile on Google Pixel, the 'vivid' color profile on Samsung Galaxy). This stretches sRGB content to the native display gamut, typically DCI-P3, oversaturating it. Users must manually select the 'natural' color profile to enable color management and obtain accurate display of sRGB and P3 wide color content.
Others
Operating systems that use the X Window System for graphics can use ICC profiles, and support for color management on Linux, still less mature than on other platforms, is coordinated through OpenICC at freedesktop.org and makes use of LittleCMS.
File level
Certain image filetypes (TIFF and Photoshop) include the notion of color channels for specifying the color mode of the file. The most commonly used channels are RGB (mainly for display (monitors) but also for some desktop printing) and CMYK (for commercial printing). An additional alpha channel may specify a transparency mask value. Some image software (such as Photoshop) perform automatic color separation to maintain color information in CMYK mode using a specified ICC profile such as US Web Coated (SWOP) v2.
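For contrast with a profile-driven separation such as US Web Coated (SWOP) v2, the sketch below shows the naive, profile-free RGB-to-CMYK formula sometimes used when no ICC profile is available. It ignores ink limits, dot gain and device characteristics, so it is not colorimetrically accurate.

def rgb_to_cmyk_naive(r, g, b):
    """Naive RGB -> CMYK separation (inputs in [0, 1]).

    This ignores ICC profiles entirely; a real separation such as
    'US Web Coated (SWOP) v2' is performed through a CMYK output
    profile by the CMM, not by this formula.
    """
    k = 1.0 - max(r, g, b)
    if k == 1.0:                    # pure black
        return 0.0, 0.0, 0.0, 1.0
    c = (1.0 - r - k) / (1.0 - k)
    m = (1.0 - g - k) / (1.0 - k)
    y = (1.0 - b - k) / (1.0 - k)
    return c, m, y, k

print(rgb_to_cmyk_naive(0.2, 0.4, 0.6))  # bluish color -> high cyan, some magenta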
Creative software
Adobe software includes its own color management engine, the Adobe Color Engine. It is also available as a separate Color Management Module, Adobe CMM, for use by non-Adobe applications that support third-party CMMs.
Web browsers
Historically, most web browsers ignored color profiles. Notable exceptions were Safari, starting with version 2.0, and Firefox, starting with version 3. Although disabled by default in Firefox 3.0, ICC v2 and ICC v4 color management could be enabled by using an add-on or setting a configuration option.
As of July 2019, Safari, Chrome and Firefox fully support color management. However, most browsers apply color management only to images and CSS elements, not to video.
Firefox: version 3.5 (released in 2009) onwards supports ICC v2 tagged images, and version 8.0 (released in 2011) added ICC v4 profile support. Version 89 (released in 2021) and above apply color management to all untagged images and page elements by default.
Internet Explorer: supports ICC profiles from version 9 onwards, but only converts non-sRGB images to the sRGB profile, regardless of the actual monitor color space.
Google Chrome: uses the system provided ICC v2 and v4 support on macOS, and from version 22 (released in 2012) supports ICC v2 profiles by default on other platforms. macOS versions of Chrome correctly render video.
Safari: has support starting with version 2.0 (released in 2005). Supports v2 and v4 ICC profiles, and correctly renders video.
Opera: has support since 12.10 (released in 2012) for ICC v4.
Pale Moon supported ICC v2 from its first release, and v4 since Pale Moon 20.2 (released in 2013).
Regarding mobile browsers, Safari 13.1 (on iOS 13.4.1) recognizes the device color profile and can display images accordingly. Chrome 83 (on Android 9) ignores the display profile, simply converting all images to sRGB.
As of 2023, Chrome 114, Android Browser 114 and Firefox for Android 115 support multiple colorspaces. The same is valid for their desktop counterparts: Chrome 118, Edge 114, Safari 16.6, Firefox 117 and Opera 100.
| Technology | Computer science | null |
186182 | https://en.wikipedia.org/wiki/Evaporite | Evaporite | An evaporite () is a water-soluble sedimentary mineral deposit that results from concentration and crystallization by evaporation from an aqueous solution. There are two types of evaporite deposits: marine, which can also be described as ocean deposits, and non-marine, which are found in standing bodies of water such as lakes. Evaporites are considered sedimentary rocks and are formed by chemical sediments.
Formation
Although all water bodies on the surface and in aquifers contain dissolved salts, the water must evaporate into the atmosphere for the minerals to precipitate. For this to happen, the water body must enter a restricted environment where water input into this environment remains below the net rate of evaporation. This is usually an arid environment with a small drainage basin fed by a limited input of water. When evaporation occurs, the remaining water is enriched in salts, and they precipitate when the water becomes supersaturated.
Depositional environments
Marine
Marine evaporites tend to have thicker deposits and are usually the focus of more extensive research. When scientists evaporate ocean water in a laboratory, the minerals are deposited in a defined order that was first demonstrated by Usiglio in 1884. The first phase of precipitation begins when about 50% of the original water depth remains. At this point, minor carbonates begin to form. The next phase in the sequence comes when the experiment is left with about 20% of its original level. At this point, the mineral gypsum begins to form, which is then followed by halite at 10%, excluding carbonate minerals that tend not to be evaporites. The most common marine evaporites are calcite, gypsum and anhydrite, halite, sylvite, carnallite, langbeinite, polyhalite, and kainite. Kieserite (MgSO4) may also be included, which often will make up less than four percent of the overall content. However, there are approximately 80 different minerals that have been reported found in evaporite deposits, though only about a dozen are common enough to be considered important rock formers.
Non-marine
Non-marine evaporites are usually composed of minerals that are not common in marine environments because in general the water from which non-marine evaporites precipitate has proportions of chemical elements different from those found in marine environments. Common minerals found in these deposits include blödite, borax, epsomite, gaylussite, glauberite, mirabilite, thenardite and trona. Non-marine deposits may also contain halite, gypsum, and anhydrite, and may in some cases even be dominated by these minerals, although they did not come from ocean deposits. This, however, does not make non-marine deposits any less important; these deposits often help to paint a picture of past Earth climates. Some particular deposits even show important tectonic and climatic changes. These deposits also may contain minerals important to today's economy. Thick non-marine deposits tend to form where evaporation rates exceed the inflow rate and where there are sufficient supplies of soluble material. The inflow also has to occur in a closed basin, or one with restricted outflow, so that the sediment has time to pool and form in a lake or other standing body of water. Primary examples of this are called "saline lake deposits". Saline lakes include perennial lakes, which hold water year-round; playa lakes, which hold water only during certain seasons; and other settings that hold standing bodies of water intermittently or year-round. Examples of modern non-marine depositional environments include the Great Salt Lake in Utah and the Dead Sea, which lies between Jordan and Israel.
Evaporite depositional environments that meet the above conditions include:
Graben areas and half-grabens within continental rift environments fed by limited riverine drainage, usually in subtropical or tropical environments
Example environments at the present that match this is the Denakil Depression, Ethiopia; Death Valley, California
Graben environments in oceanic rift environments fed by limited oceanic input, leading to eventual isolation and evaporation
Examples include the Red Sea, and the Dead Sea in Jordan and Israel
Internal drainage basins in arid to semi-arid temperate to tropical environments fed by ephemeral drainage
Example environments at the present include the Simpson Desert, Western Australia, the Great Salt Lake in Utah
Non-basin areas fed exclusively by groundwater seepage from artesian waters
Example environments include the seep-mounds of the Victoria Desert, fed by the Great Artesian Basin, Australia
Restricted coastal plains in regressive sea environments
Examples include the sabkha deposits of Iran, Saudi Arabia, and the Red Sea; the Garabogazköl of the Caspian Sea
Drainage basins feeding into extremely arid environments
Examples include the Chilean deserts, certain parts of the Sahara, and the Namib
The most significant known evaporite depositions happened during the Messinian salinity crisis in the basin of the Mediterranean.
Evaporitic formations
Evaporite formations need not be composed entirely of halite salt. In fact, most evaporite formations do not contain more than a few percent of evaporite minerals, the remainder being composed of the more typical detrital clastic rocks and carbonates. Examples of evaporite formations include occurrences of evaporite sulfur in Eastern Europe and West Asia.
For a formation to be recognised as evaporitic it may simply require recognition of halite pseudomorphs, sequences composed of some proportion of evaporite minerals, and recognition of mud crack textures or other textures.
Economic importance
Evaporites are important economically because of their mineralogy, their physical properties in-situ, and their behaviour within the subsurface.
Evaporite minerals, especially nitrate minerals, are economically important in Peru and Chile. Nitrate minerals are often mined for use in the production of fertilizer and explosives.
Thick halite deposits are expected to become an important location for the disposal of nuclear waste because of their geologic stability, predictable engineering and physical behaviour, and imperviousness to groundwater.
Halite formations are famous for their ability to form diapirs, which produce ideal locations for trapping petroleum deposits.
Halite deposits are often mined for use as salt.
Major groups of evaporite minerals
This is a chart that shows minerals that form the marine evaporite rocks. They are usually the most common minerals that appear in this kind of deposit.
{| class="wikitable"
!Mineral Class
!Mineral name
!Chemical Composition
|-
| rowspan="4" |Chlorides
|Halite || NaCl
|-
|Sylvite || KCl
|-
|Carnallite || KMgCl3·6H2O
|-
|Kainite || KMg(SO4)Cl·3H2O
|-
| rowspan="5" |Sulfates
|Anhydrite || CaSO4
|-
|Gypsum || CaSO4·2H2O
|-
|Kieserite || MgSO4·H2O
|-
|Langbeinite || K2Mg2(SO4)3
|-
|Polyhalite || K2Ca2Mg(SO4)4·2H2O
|-
| rowspan="3" |Carbonates
|Dolomite || CaMg(CO3)2
|-
|Calcite || CaCO3
|-
|Magnesite || MgCO3
|}
Halides: halite, sylvite (KCl), and fluorite
Sulfates: such as gypsum, barite, and anhydrite
Nitrates: nitratine (soda niter) and niter
Borates: typically found in arid-salt-lake deposits plentiful in the southwestern US. A common borate is borax, which has been used in soaps as a surfactant.
Carbonates: such as trona, formed in inland brine lakes.
Some evaporite minerals, such as hanksite, are from multiple groups.
Evaporite minerals start to precipitate when their concentration in water reaches such a level that they can no longer exist as solutes.
The minerals precipitate out of solution in the reverse order of their solubilities, such that the order of precipitation from sea water is:
Calcite (CaCO3) and dolomite (CaMg(CO3)2)
Gypsum (CaSO4·2H2O) and anhydrite (CaSO4)
Halite (i.e. common salt, NaCl)
Potassium and magnesium salts
The abundance of rocks formed by seawater precipitation is in the same order as the precipitation given above. Thus, limestone and dolomite are more common than gypsum, which is more common than halite, which is more common than potassium and magnesium salts.
Evaporites can also be easily recrystallized in laboratories in order to investigate the conditions and characteristics of their formation.
Possible evaporites on Titan
Recent evidence from satellite observations and laboratory experiments suggest evaporites are likely present on the surface of Titan, Saturn's largest moon. Instead of water oceans, Titan hosts lakes and seas of liquid hydrocarbons (mainly methane) with many soluble hydrocarbons, such as acetylene, that can evaporate out of solution. Evaporite deposits cover large regions of Titan's surface, mainly along the coastlines of lakes or in isolated basins (Lacunae) that are equivalent to salt pans on Earth.
| Physical sciences | Sedimentary rocks | Earth science |
186193 | https://en.wikipedia.org/wiki/Sinkhole | Sinkhole | A sinkhole is a depression or hole in the ground caused by some form of collapse of the surface layer. The term is sometimes used to refer to dolines, enclosed depressions that are also known as shakeholes, and to openings where surface water enters underground passages, known as ponors, swallow holes or swallets. A cenote is a type of sinkhole that exposes groundwater underneath. Sink and stream sink are more general terms for sites that drain surface water, possibly by infiltration into sediment or crumbled rock.
Most sinkholes are caused by karst processes – the chemical dissolution of carbonate rocks, collapse or suffosion processes. Sinkholes are usually circular and vary in size from tens to hundreds of meters both in diameter and depth, and vary in form from soil-lined bowls to bedrock-edged chasms. Sinkholes may form gradually or suddenly, and are found worldwide.
Formation
Natural processes
Sinkholes may capture surface drainage from running or standing water, but may also form in high and dry places in specific locations. Sinkholes that capture drainage can hold it in large limestone caves. These caves may drain into tributaries of larger rivers.
The formation of sinkholes involves natural processes of erosion or gradual removal of slightly soluble bedrock (such as limestone) by percolating water, the collapse of a cave roof, or a lowering of the water table. Sinkholes often form through the process of suffosion. For example, groundwater may dissolve the carbonate cement holding the sandstone particles together and then carry away the loose particles, gradually forming a void.
Occasionally a sinkhole may exhibit a visible opening into a cave below. In the case of exceptionally large sinkholes, such as the Minyé sinkhole in Papua New Guinea or Cedar Sink at Mammoth Cave National Park in Kentucky, an underground stream or river may be visible across its bottom flowing from one side to the other.
Sinkholes are common where the rock below the land surface is limestone or other carbonate rock, salt beds, or in other soluble rocks, such as gypsum, that can be dissolved naturally by circulating ground water. Sinkholes also occur in sandstone and quartzite terrains.
As the rock dissolves, spaces and caverns develop underground. These sinkholes can be dramatic, because the surface land usually stays intact until there is not enough support. Then, a sudden collapse of the land surface can occur.
Space and planetary bodies
On 2 July 2015, scientists reported that active pits, related to sinkhole collapses and possibly associated with outbursts, were found on the comet 67P/Churyumov-Gerasimenko by the Rosetta space probe.
Artificial processes
Collapses, commonly incorrectly labeled as sinkholes, also occur due to human activity, such as the collapse of abandoned mines and salt cavern storage in salt domes in places like Louisiana, Mississippi, and Texas, in the United States. More commonly, collapses occur in urban areas due to water main breaks or sewer collapses when old pipes give way. They can also occur from the overpumping and extraction of groundwater and subsurface fluids.
Sinkholes can also form when natural water drainage patterns are changed and new water diversion systems are developed. Some sinkholes form when the land surface is changed, such as when industrial and runoff storage ponds are created; the substantial weight of the new material can trigger a collapse of the roof of an existing void or cavity in the subsurface, resulting in development of a sinkhole.
Classification
Solution sinkholes
Solution or dissolution sinkholes form where water dissolves limestone under a soil covering. Dissolution enlarges natural openings in the rock such as joints, fractures, and bedding planes. Soil settles down into the enlarged openings forming a small depression at the ground surface.
Cover-subsidence sinkholes
Cover-subsidence sinkholes form where voids in the underlying limestone allow more settling of the soil to create larger surface depressions.
Cover-collapse sinkholes
Cover-collapse sinkholes or "dropouts" form where so much soil settles down into voids in the limestone that the ground surface collapses. The surface collapses may occur abruptly and cause catastrophic damages. New sinkhole collapses can also form when human activity changes the natural water-drainage patterns in karst areas.
Pseudokarst sinkholes
Pseudokarst sinkholes resemble karst sinkholes but are formed by processes other than the natural dissolution of rock.
Human accelerated sinkholes
The U.S. Geological Survey notes that "It is a frightening thought to imagine the ground below your feet or house suddenly collapsing and forming a big hole in the ground." Human activities can accelerate collapses of karst sinkholes, causing collapse within a few years that would normally evolve over thousands of years under natural conditions. Soil-collapse sinkholes, which are characterized by the collapse of cavities in soil that have developed where soil falls down into underlying rock cavities, pose the most serious hazards to life and property. Fluctuation of the water level accelerates this collapse process. When water rises up through fissures in the rock, it reduces soil cohesion. Later, as the water level moves downward, the softened soil seeps downwards into rock cavities. Flowing water in karst conduits carries the soil away, preventing soil from accumulating in rock cavities and allowing the collapse process to continue.
Induced sinkholes occur where human activity alters how surface water recharges groundwater. Many human-induced sinkholes occur where natural diffused recharge is disturbed and surface water becomes concentrated. Activities that can accelerate sinkhole collapses include timber removal, ditching, laying pipelines, sewers, water lines, storm drains, and drilling. These activities can increase the downward movement of water beyond the natural rate of groundwater recharge. The increased runoff from the impervious surfaces of roads, roofs, and parking lots also accelerate man-induced sinkhole collapses.
Some induced sinkholes are preceded by warning signs, such as cracks, sagging, jammed doors, or cracking noises, but others develop with little or no warning. However, karst development is well understood, and proper site characterization can avoid karst disasters. Thus most sinkhole disasters are predictable and preventable rather than "acts of God". The American Society of Civil Engineers has declared that the potential for sinkhole collapse must be a part of land-use planning in karst areas. Where sinkhole collapse of structures could cause loss of life, the public should be made aware of the risks.
The most likely locations for sinkhole collapse are areas where there is already a high density of existing sinkholes. Their presence shows that the subsurface contains a cave system or other unstable voids. Where large cavities exist in the limestone, large surface collapses can occur, such as the Winter Park, Florida, sinkhole collapse. Recommendations for land uses in karst areas should avoid or minimize alterations of the land surface and natural drainage.
Since water level changes accelerate sinkhole collapse, measures must be taken to minimize water level changes. The areas most susceptible to sinkhole collapse can be identified and avoided. In karst areas the traditional foundation evaluations (bearing capacity and settlement) of the ability of soil to support a structure must be supplemented by geotechnical site investigation for cavities and defects in the underlying rock. Since the soil/rock surface in karst areas are very irregular the number of subsurface samples (borings and core samples) required per unit area is usually much greater than in non-karst areas.
In 2015, the U.S. Geological Survey estimated the cost for repairs of damage arising from karst-related processes as at least $300 million per year over the preceding 15 years, but noted that this may be a gross underestimate based on inadequate data. The greatest amount of karst sinkhole damage in the United States occurs in Florida, Texas, Alabama, Missouri, Kentucky, Tennessee, and Pennsylvania. The largest recent sinkhole in the USA is possibly one that formed in 1972 in Montevallo, Alabama, as a result of man-made lowering of the water level in a nearby rock quarry. This "December Giant" or "Golly Hole" sinkhole measures long, wide and deep.
Other areas of significant karst hazards include the Ebro Basin in northern Spain; the island of Sardinia; the Italian peninsula; the Chalk areas in southern England; Sichuan, China; Jamaica; France; Croatia; Bosnia and Herzegovina; Slovenia; and Russia, where one-third of the total land area is underlain by karst.
Occurrence
Sinkholes tend to occur in karst landscapes. Karst landscapes can have up to thousands of sinkholes within a small area, giving the landscape a pock-marked appearance. These sinkholes drain all the water, so there are only subterranean rivers in these areas. Examples of karst landscapes with numerous massive sinkholes include Khammouan Mountains (Laos) and Mamo Plateau (Papua New Guinea). The largest known sinkholes formed in sandstone are Sima Humboldt and Sima Martel in Venezuela.
Some sinkholes form in thick layers of homogeneous limestone. Their formation is facilitated by high groundwater flow, often caused by high rainfall; such rainfall causes formation of the giant sinkholes in the Nakanaï Mountains, on the New Britain island in Papua New Guinea. Powerful underground rivers may form on the contact between limestone and underlying insoluble rock, creating large underground voids.
In such conditions, the largest known sinkholes of the world have formed, like the Xiaozhai Tiankeng (Chongqing, China), giant sótanos in Querétaro and San Luis Potosí states in Mexico and others.
Unusual processes have formed the enormous sinkholes of Sistema Zacatón in Tamaulipas (Mexico), where more than 20 sinkholes and other karst formations have been shaped by volcanically heated, acidic groundwater. This has produced not only the formation of the deepest water-filled sinkhole in the world—Zacatón—but also unique processes of travertine sedimentation in upper parts of sinkholes, leading to sealing of these sinkholes with travertine lids.
The U.S. state of Florida in North America is known for having frequent sinkhole collapses, especially in the central part of the state. Underlying limestone there is from 15 to 25 million years old. On the fringes of the state, sinkholes are rare or non-existent; limestone there is around 120,000 years old.
The Murge area in southern Italy also has numerous sinkholes. Sinkholes can be formed in retention ponds from large amounts of rain.
On the Arctic seafloor, methane emissions have caused large sinkholes to form.
Human uses
Sinkholes have been used for centuries as disposal sites for various forms of waste. A consequence of this is the pollution of groundwater resources, with serious health implications in such areas.
The Maya civilization sometimes used sinkholes in the Yucatán Peninsula (known as cenotes) as places to deposit precious items and human sacrifices.
When sinkholes are very deep or connected to caves, they may offer challenges for experienced cavers or, when water-filled, divers. Some of the most spectacular are the Zacatón cenote in Mexico (the world's deepest water-filled sinkhole), the Boesmansgat sinkhole in South Africa, Sarisariñama tepuy in Venezuela, the Sótano del Barro in Mexico, and in the town of Mount Gambier, South Australia. Sinkholes that form in coral reefs and islands that collapse to enormous depths are known as blue holes and often become popular diving spots.
Local names
Large and visually unusual sinkholes have been well known to local people since ancient times. Nowadays sinkholes are grouped and named in site-specific or generic names. Some examples of such names are listed below.
Aven – In the south of France (this name means pit cave in the Occitan language).
Black holes (not to be confused with cosmic black holes) – This term refers to a group of unique, round, water-filled pits in the Bahamas. These formations seem to be dissolved in carbonate mud from above, by the sea water. The dark color of the water is caused by a layer of phototrophic microorganisms concentrated in a dense, purple-colored layer at depth; this layer "swallows" the light. Metabolism in the layer of microorganisms causes heating of the water. One of them is the Black Hole of Andros.
Blue holes – This name was initially given to the deep underwater sinkholes of the Bahamas but is often used for any deep water-filled pits formed in carbonate rocks. The name originates from the deep blue color of water in these sinkholes, which is created by the high clarity of the water and the great depth of the sinkholes; only the deep blue color of the visible spectrum can penetrate such depth and return after reflection.
Cenote – This refers to the characteristic water-filled sinkholes in the Yucatán Peninsula, Belize and some other regions. Many cenotes have formed in limestone deposited in shallow seas created by the Chicxulub meteorite's impact.
Dolina – Slovenian toponym internationally used for karst sinkholes. The original meaning is "valley" or "dale".
Foiba – Friulan Italian dialect word (from the Latin fŏvea: "pit" or "chasm"). The name is given to sinkholes in the frontier zone between the Italian region of Friuli-Venezia Giulia, Croatia and Slovenia, in the Karst Plateau.
Sótanos – This name is given to several giant pits in several states of Mexico.
Tiankengs – These are extremely large sinkholes, typically deeper and wider than , with mostly vertical walls, most often created by the collapse of caverns. The term means sky holes in Chinese; many of this largest type of sinkhole are located in China.
Tomo – This term is used in New Zealand karst country to describe sinkholes.
Vrtača, ponikva, dolac, dô – South Slavic terms for sinkhole.
Piping pseudokarst
The 2010 Guatemala City sinkhole formed suddenly in May of that year; torrential rains from Tropical Storm Agatha and a bad drainage system were blamed for its creation. It swallowed a three-story building and a house; it measured approximately wide and deep. A similar hole had formed nearby in February 2007.
This large vertical hole is not a true sinkhole, as it did not form via the dissolution of limestone, dolomite, marble, or any other water-soluble rock. Instead, they are examples of "piping pseudokarst", created by the collapse of large cavities that had developed in the weak, crumbly Quaternary volcanic deposits underlying the city. Although weak and crumbly, these volcanic deposits have enough cohesion to allow them to stand in vertical faces and to develop large subterranean voids within them. A process called "soil piping" first created large underground voids, as water from leaking water mains flowed through these volcanic deposits and mechanically washed fine volcanic materials out of them, then progressively eroded and removed coarser materials. Eventually, these underground voids became large enough that their roofs collapsed to create large holes.
Crown hole
A crown hole is subsidence due to subterranean human activity, such as mining and military trenches. Examples have included instances above World War I trenches in Ypres, Belgium; near mines in Nitra, Slovakia; a limestone quarry in Dudley, England; and above an old gypsum mine in Magheracloone, Ireland.
Notable examples
Some of the largest sinkholes in the world are:
Africa
Boesmansgat – South African freshwater sinkhole, approximately deep.
Lake Kashiba – Zambia. About in area and about deep.
Asia
Blue Hole – Dahab, Egypt. A round sinkhole or blue hole, deep. It includes an archway leading out to the Red Sea at , which has been the site for many freediving and scuba attempts, the latter often fatal.
Akhayat sinkhole is in Mersin Province, Turkey. Its dimensions are about in diameter with a maximum depth of .
Well of Barhout – Yemen. A deep pit cave in Al-Mahara.
Bimmah Sinkhole (Hawiyat Najm, the Falling Star Sinkhole, Dibab Sinkhole) – Oman, approximately deep.
The Baatara gorge sinkhole and the Baatara gorge waterfall next to Tannourine in Lebanon
Dashiwei Tiankeng in Guangxi, China, is deep, with vertical walls. At the bottom is an isolated patch of forest with rare species.
The Dragon Hole, located south of the Paracel Islands, is the deepest known underwater ocean sinkhole in the world. It is deep.
Shaanxi tiankeng cluster, in the Daba Mountains of southern Shaanxi, China, covers an area of nearly 5019 square kilometers with the largest sinkhole being 520 meters in diameter and 320 meters deep.
Teiq Sinkhole (Taiq, Teeq, Tayq) in Oman is one of the largest sinkholes in the world by volume: . Several perennial wadis fall with spectacular waterfalls into this deep sinkhole.
Xiaozhai Tiankeng – Chongqing, China. Double nested sinkhole with vertical walls, deep.
2024 Kuala Lumpur sinkhole – Malaysia. About 8 meters deep, with one victim missing.
Caribbean
Dean's Blue Hole – Bahamas. The second deepest known sinkhole under the sea, depth . Popular location for world championships of free diving, as well as recreational diving.
Central America
Great Blue Hole – Belize. Spectacular, round sinkhole, deep. Unusual features are tilted stalactites in great depth, which mark the former orientation of limestone layers when this sinkhole was above sea level.
2007 Guatemala City sinkhole
2010 Guatemala City sinkhole
Europe
Hranice Abyss, in the Moravia region of the Czech Republic, is the deepest known underwater cave in the world. The lowest confirmed depth (as of 27 September 2016) is 473 m (404 m below the water level).
Maqluba, in Malta is a sinkhole with a surface area of around 4,765 square metres (51,290 sq ft) situated in the village of Qrendi in Malta. The diameter is around 50m, the depth is around 15m, and the perimeter 300m.
Pozzo del Merro, near Rome, Italy. At the bottom of a conical pit, and approximately deep, it is among the deepest sinkholes in the world (see Sótano del Barro below).
Red Lake – Croatia. Approximately deep pit with nearly vertical walls, contains an approximately deep lake.
Gouffre de Padirac – France. It is deep, with a diameter of . Visitors descend 75 m via a lift or a staircase to a lake, where a boat tour enters the cave system, which contains a 55 km subterranean river.
Vouliagmeni – Greece. The sinkhole of Vouliagmeni is known as "The Devil Well", because it is considered extremely dangerous. Four scuba divers have died in it. Maximum depth of and horizontal penetration of .
Pouldergaderry – Ireland. This sinkhole is located in the townland of Kilderry South near Milltown, County Kerry at . The sinkhole, which is located in an area of karst bedrock, is approximately in diameter and deep with many mature trees growing on the floor of the hole. At the level of the surrounding ground, the sinkhole covers an area of approximately 1.3 acres. Its presence is indicated on Ordnance Survey maps dating back to 1829.
North America
Mexico
Cave of Swallows – San Luis Potosí. deep, round sinkhole with overhanging walls.
Puebla sinkhole – Santa Maria Zacatepec, Puebla. diameter and deep, it was still growing as of 2021.
Sima de las Cotorras – Chiapas. across, deep, with thousands of green parakeets and ancient rock paintings.
Zacatón – Tamaulipas. Deepest water-filled sinkhole in world, deep.
United States
Amberjack Hole – blue hole located off the coast of Sarasota, Florida.
Bayou Corne sinkhole – Assumption Parish, Louisiana. About 25 acres in area and deep.
The Blue Hole – Santa Rosa, New Mexico. The surface entrance is only in diameter, but it expands to a diameter of at the bottom.
Daisetta Sinkholes – Daisetta, Texas. Several sinkholes have formed, the most recent in 2008 with a maximum diameter of and maximum depth of .
Devil's Millhopper – Gainesville, Florida. deep, wide. Twelve springs, some more visible than others, feed a pond at the bottom.
Golly Hole or December Giant – Calera, Alabama. Appeared 2 December 1972. Approximately and deep.
Grassy Cove – Cumberland County, Tennessee. in area and deep, a National Natural Landmark.
Green Banana Hole – a blue hole located off the coast of Sarasota, Florida.
Gypsum Sinkhole – Utah, in Capitol Reef National Park. Nearly in diameter and approximately deep.
Kingsley Lake – Clay County, Florida. in area, deep and almost perfectly round.
Lake Peigneur – New Iberia, Louisiana. Original depth , currently at the site of the Diamond Crystal Salt Mine collapse.
Winter Park Sinkhole – Winter Park, Florida. Appeared 8 May 1981. It was approximately wide and deep. It was notable as one of the largest recent sinkholes to form in the United States. It is now known as Lake Rose.
Oceania
Harwoods Hole – Abel Tasman National Park, New Zealand. deep.
South America
Sima Humboldt – Bolívar, Venezuela. Largest sinkhole in sandstone, deep, with vertical walls. Unique, isolated forest on bottom.
In the western part of Cerro Duida, Venezuela, there is a complex of canyons with sinkholes. The deepest sinkhole is deep (measured from the lowest rim within the canyon); total depth .
| Physical sciences | Caves | Earth science |
186229 | https://en.wikipedia.org/wiki/Raft | Raft | A raft is any flat structure for support or transportation over water. It is usually of basic design, characterized by the absence of a hull. Rafts are usually kept afloat by using any combination of buoyant materials such as wood, sealed barrels, or inflated air chambers (such as pontoons), and are typically not propelled by an engine. Rafts are an ancient mode of transport; naturally-occurring rafts such as entwined vegetation and pieces of wood have been used to traverse water since the dawn of humanity.
Human-made rafts
Traditional or primitive rafts were constructed of wood, bamboo or reeds; early buoyed or float rafts used inflated animal skins or sealed clay pots lashed together. Modern float rafts may also use pontoons, drums, or extruded polystyrene blocks. Depending on its use and size, a raft may have a superstructure, masts, or rudders.
Timber rafting is used by the logging industry for the transportation of logs, by tying them together into rafts and drifting or pulling them down a river. This method was very common up until the middle of the 20th century but is now used only rarely.
Large rafts made of balsa logs and using sails for navigation were important in maritime trade on the Pacific Ocean coast of South America from pre-Columbian times until the 19th century. Voyages were made to locations as far away as Mexico, and many trans-Pacific voyages using replicas of ancient rafts have been undertaken to demonstrate possible contacts between South America and Polynesia.
Rafts used for recreational rafting are almost exclusively inflatable rafts, manufactured of flexible materials such as PVC, hypalon, polyurethane, and nylon. These materials are resistant to the collisions and heavy wear the boats experience when traveling through whitewater. Whitewater rafts are also designed with high rocker, a raised bow and stern which allows them to pass over waves and obstacles more easily. Most have drain holes in the floor to prevent the boat from becoming swamped with water.
Natural rafts
In biology, particularly in island biogeography, non-manmade rafts are an important concept. Such rafts consist of matted clumps of vegetation swept off the dry land by a storm, tsunami, tide, earthquake or similar event; in modern times they sometimes also incorporate other kinds of flotsam and jetsam, e.g. plastic containers. They stay afloat through their natural buoyancy and can travel for hundreds, even thousands of miles, until they are ultimately destroyed by wave action and decomposition, or make landfall.
Rafting events are important means of oceanic dispersal for non-flying animals. For amphibians, reptiles, and small mammals, in particular, but for many invertebrates as well, such rafts of vegetation were often the only means by which they could reach and – if they were lucky – colonize oceanic islands before human-built vehicles provided another mode of transport.
Image gallery
| Technology | Naval transport | null |
186266 | https://en.wikipedia.org/wiki/ITunes | ITunes | iTunes is a media player, media library, and mobile device management utility developed by Apple. It is used to purchase, play, download and organize digital multimedia on personal computers running the macOS and Windows operating systems, and can be used to rip songs from CDs as well as playing content from dynamic, smart playlists. It includes options for sound optimization and wirelessly sharing iTunes libraries.
iTunes was announced by Apple CEO Steve Jobs on January 9, 2001. Its original and main focus was music, with a library offering organization and storage of Mac users' music collections. With the 2003 addition of the iTunes Store for purchasing and downloading digital music, and a Windows version of the program, it became a ubiquitous tool for managing music and configuring other features on Apple's line of iPod media players, which extended to the iPhone and iPad upon their introduction. From 2005 on, Apple expanded its core music features with support for digital video, podcasts, e-books, and mobile apps purchased from the iOS App Store. Since the release of iOS 5 in 2011, these devices have become less dependent on iTunes, though it can still be used to back up their contents.
Though well received in its early years, iTunes received increasing criticism for a bloated user experience, which incorporated features beyond its original focus on music. Beginning with Macs running macOS Catalina, iTunes was replaced by separate apps, namely Music, Podcasts, and TV, with Finder and Apple Devices taking over the device management capabilities. This change did not affect iTunes running on Windows or older macOS versions.
In February 2024, most features of iTunes for Windows were split into the Apple TV, Music, Podcasts, Books, and Apple Devices apps. When the apps are installed, iTunes is still used for podcasts and audiobooks.
History
SoundJam MP, released by Casady & Greene in 1999, was renamed "iTunes" when Apple purchased it the next year. The primary developers of the software moved to Apple as part of the acquisition, and simplified SoundJam's user interface, added the ability to burn CDs, and removed its recording feature and skin support. The first version of iTunes, promotionally dubbed "World's Best and Easiest To Use Jukebox Software", was announced on January 9, 2001. Subsequent releases of iTunes often coincided with new hardware devices, and gradually included support for new features, including "smart playlists", the iTunes Store, and new audio formats.
Platform availability
Apple released iTunes for Windows on October 16, 2003.
On April 26, 2018, iTunes was released on Microsoft Store for Windows 10, primarily to allow it to be installed on Windows 10 devices configured to only allow installation of software from Microsoft Store. Unlike the conventional Windows desktop version, it is more self-contained due to technical requirements for distribution on the store (not installing background helper services such as Bonjour), and is updated automatically through the store rather than using Apple Software Update.
The role of iTunes has been replaced with independent apps, Apple Music, Apple Podcasts, Apple Books, and Apple TV; with iPhone, iPod, and iPad management integrated into the Finder starting with macOS 10.15 Catalina, and appearing as Apple Devices starting with Windows 10.
Music library
iTunes features a music library. Each track has attributes, called metadata, that can be edited by the user, including the artist name, album, genre, year of release, and artwork, among other fields. The software supports importing digital audio tracks that can then be transferred to iOS devices, as well as ripping content from CDs. iTunes supports WAV, AIFF, Apple Lossless, AAC, and MP3 audio formats. It uses the Gracenote music database to provide track name listings for audio CDs. When users rip content from a CD, iTunes attempts to match songs to the Gracenote service. For self-published CDs, or those from obscure record labels, iTunes would normally only list tracks as numbered entries ("Track 1" and "Track 2") on an unnamed album by an unknown artist, requiring manual input of data.
File metadata is displayed in users' libraries in columns, including album, artist, genre, composer, and more. Users can enable or disable different columns, as well as change view settings.
Special playlists
Introduced in 2004, "Party Shuffle" selected tracks to play randomly from the library, though users could press a button to skip a song and go to the next in the list. The feature was later renamed "iTunes DJ", before being discontinued altogether, replaced by a simpler "Up Next" feature that notably lost some of "iTunes DJ"'s functionality.
Introduced in iTunes 8 in 2008, "Genius" can automatically generate a playlist of songs from the user's library that "go great together". "Genius" transmits information about the user's library to Apple anonymously, and evolves over time to enhance its recommendation system. It can also suggest purchases to fill out "holes" in the library. The feature was updated with iTunes 9 in 2009 to offer "Genius Mixes", which generated playlists based on specific music genres.
"Smart playlists" are a set of playlists that can be set to automatically filter the library based on a customized list of selection criteria, much like a database query. Multiple criteria can be entered to manage the smart playlist. Selection criteria examples include a genre like Christmas music, songs that have not been played recently, or songs the user has listened to the most in a time period.
Library sharing
Through a "Home Sharing" feature, users can share their iTunes library wirelessly. Computer firewalls must allow network traffic, and users must specifically enable sharing in the iTunes preferences menu. iOS applications also exist that can transfer content without Internet. Additionally, users can set up a network-attached storage system, and connect to that storage system through an app.
Sound processing
iTunes includes sound processing features, such as equalization, "sound enhancement" and crossfade. There is also a feature called Sound Check, which normalizes the playback volume of all songs in the library to the same level.
Online music functionality
iTunes Store
Introduced on April 28, 2003, the iTunes Music Store allows users to buy and download songs, with 200,000 tracks available at launch. In its first week, customers bought more than one million songs. Music purchased was protected by FairPlay, an encryption layer referred to as digital rights management (DRM). The use of DRM, which limited devices capable of playing purchased files, sparked efforts to remove the protection mechanism. Eventually, after an open letter to the music industry by CEO Steve Jobs in February 2007, Apple introduced a selection of DRM-free music in the iTunes Store in April 2007, followed by its entire music catalog without DRM in January 2009.
iTunes in the Cloud and iTunes Match
In June 2011, Apple announced "iTunes in the Cloud", in which music purchases were stored on Apple's servers and made available for automatic downloading on new devices. For music the user owns, such as content ripped from CDs, the company introduced "iTunes Match", a feature that can upload content to Apple's servers, match it to its catalog, change the quality to 256 kbit/s AAC format, and make it available to other devices.
Internet radio, iTunes Radio and Apple Music
When iTunes was first released, it came with support for the Kerbango Internet radio tuner service. In June 2013, the company announced iTunes Radio, a free music streaming service. In June 2015, Apple announced Apple Music, a subscription-based music streaming service, and subsequently integrated iTunes Radio functionality. Music tracks provided by Apple Music via iTunes are available at up to 256 kbit/s AAC fidelity. The Apple Music app also integrates Apple Music 1, a live music radio station.
Phasing out
As of 2024, Apple is phasing out iTunes in favour of three dedicated apps, Music, Podcasts, and TV, but the iTunes Store will still remain.
Other features
Video
In May 2005, video support was introduced to iTunes with the release of iTunes 4.8, though it was limited to bonus features part of album purchases. The following October, Apple introduced iTunes 6, enabling support for purchasing and viewing video content purchased from the iTunes Store. At launch, the store offered popular shows from the ABC network, including Desperate Housewives and Lost, along with Disney Channel series That's So Raven and The Suite Life of Zack & Cody. CEO Steve Jobs told the press that "We're doing for video what we've done for music — we're making it easy and affordable to purchase and download, play on your computer, and take with you on your iPod."
In 2008, Apple and select film studios introduced "iTunes Digital Copy", a feature on select DVDs and Blu-ray discs allowing a digital copy in iTunes and associated media players.
Podcasts
In June 2005, Apple updated iTunes with support for podcasts. Users can subscribe to podcasts, change update frequency, define how many episodes to download and how many to delete.
Similar to songs, "Smart playlists" can be used to control podcasts in a playlist, setting criteria such as date and number of times listened to.
Apple is credited for being the major catalyst behind the early growth of podcasting.
Apps
On July 10, 2008, Apple introduced native mobile apps for its iOS operating system. On iOS, a dedicated App Store application served as the storefront for browsing, downloading, updating, and otherwise managing applications, whereas iTunes on computers had a dedicated section for apps rather than a separate app. In September 2017, Apple updated iTunes to version 12.7, removing the App Store section in the process. iTunes 12.6.3 was released the following month, retaining App Store functionality, with 9to5Mac noting that the secondary release was positioned by Apple as "necessary for some businesses performing internal app deployments".
iTunes U
In May 2007, Apple announced the launch of "iTunes U" via the iTunes Store, which delivers university lectures from top U.S. colleges. With iTunes version 12.7 in August 2017, iTunes U collections became a part of the Podcasts app. On June 10, 2020, Apple formally announced that iTunes U would be discontinued at the end of 2021.
Apple mobile device connectivity
iTunes was required to activate early iPhone and iPad devices. Beginning with the iPhone 3G in June 2008, activation did not require iTunes, making use of activation at the point of sale. Later iPhone models can be activated and set up on their own, without requiring the use of iTunes.
iTunes also allows users to backup and restore the content of their Apple mobile devices, such as music, photos, videos, ringtones and device settings, and restore the firmware of their devices. However, as of iTunes 12.7, apps can no longer be purchased and installed using iTunes.
Ping
With the release of iTunes 10 in September 2010, Apple announced iTunes Ping, which CEO Steve Jobs described as "social music discovery". It had features reminiscent of Facebook, including profiles and the ability to follow other users. Ping was discontinued in September 2012.
Criticism
Security
The Telegraph reported in November 2011 that Apple had been aware of a security vulnerability since 2008 that would let unauthorized third parties install "updates" to users' iTunes software. Apple fixed the issue before the Telegraph's report and told the media that "The security and privacy of our users is extremely important", though this was questioned by security researcher Brian Krebs, who told the publication that "A prominent security researcher warned Apple about this dangerous vulnerability in mid-2008, yet the company waited more than 1,200 days to fix the flaw."
Software bloat
iTunes has been repeatedly accused of being bloated as part of Apple's efforts to turn it from a music player to an all-encompassing multimedia platform. Former PC World editor Ed Bott accused the company of hypocrisy in its advertising attacks on Windows for similar practices.
| Technology | Multimedia_2 | null |
186409 | https://en.wikipedia.org/wiki/Population%20bottleneck | Population bottleneck | A population bottleneck or genetic bottleneck is a sharp reduction in the size of a population due to environmental events such as famines, earthquakes, floods, fires, disease, and droughts; or human activities such as genocide, speciocide, widespread violence or intentional culling. Such events can reduce the variation in the gene pool of a population; thereafter, a smaller population, with a smaller genetic diversity, remains to pass on genes to future generations of offspring. Genetic diversity remains lower, increasing only when gene flow from another population occurs or very slowly increasing with time as random mutations occur. This results in a reduction in the robustness of the population and in its ability to adapt to and survive selecting environmental changes, such as climate change or a shift in available resources. Alternatively, if survivors of the bottleneck are the individuals with the greatest genetic fitness, the frequency of the fitter genes within the gene pool is increased, while the pool itself is reduced.
The genetic drift caused by a population bottleneck can change the proportional random distribution of alleles and even lead to loss of alleles. The chances of inbreeding and genetic homogeneity can increase, possibly leading to inbreeding depression. Smaller population size can also cause deleterious mutations to accumulate.
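The loss of alleles through drift during a bottleneck can be illustrated with a small resampling simulation. This is a minimal sketch, not a population-genetics model: the allele counts, bottleneck size, and number of generations are arbitrary assumptions chosen only to show rare alleles disappearing.

import random

def simulate_bottleneck(pool, survivors, generations, seed=42):
    """Resample a small number of survivors each generation and report
    allele frequencies before and after (illustrative values only)."""
    rng = random.Random(seed)

    def freqs(p):
        return {a: round(p.count(a) / len(p), 3) for a in sorted(set(p))}

    before = freqs(pool)
    for _ in range(generations):
        # Only `survivors` individuals reproduce; alleles missing from the
        # sample are lost for good, and the remaining frequencies drift at random.
        pool = [rng.choice(pool) for _ in range(survivors)]
    return before, freqs(pool)

# 1,000 individuals carrying four alleles, squeezed to 20 for 10 generations.
start = ["A"] * 400 + ["B"] * 300 + ["C"] * 200 + ["D"] * 100
print(simulate_bottleneck(start, survivors=20, generations=10))

Typically the rarer alleles vanish from the post-bottleneck sample while the common ones fluctuate, which is the drift and allele-loss effect described above.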
Population bottlenecks play an important role in conservation biology (see minimum viable population size) and in the context of agriculture (biological and pest control).
Minimum viable population size
In conservation biology, minimum viable population (MVP) size helps to determine the effective population size when a population is at risk for extinction. The effects of a population bottleneck often depend on the number of individuals remaining after the bottleneck and how that compares to the minimum viable population size.
Founder effects
A slightly different form of bottleneck can occur if a small group becomes reproductively (e.g., geographically) separated from the main population, such as through a founder event, e.g., if a few members of a species successfully colonize a new isolated island, or from small captive breeding programs such as animals at a zoo. Alternatively, invasive species can undergo population bottlenecks through founder events when introduced into their invaded range.
Examples
Humans
According to a 1999 model, a severe population bottleneck, or more specifically a full-fledged speciation, occurred among a group of Australopithecina as they transitioned into the species known as Homo erectus two million years ago. It is believed that additional bottlenecks must have occurred since Homo erectus started walking the Earth, but current archaeological, paleontological, and genetic data are inadequate to give much reliable information about such conjectured bottlenecks. Nonetheless, a 2023 genetic analysis discerned such a bottleneck among human ancestors, in which the population may have dropped from around 100,000 to roughly 1,000 individuals, "around 930,000 and 813,000 years ago [which] lasted for about 117,000 years and brought human ancestors close to extinction."
A 2005 study from Rutgers University theorized that the pre-1492 native populations of the Americas are the descendants of only 70 individuals who crossed the land bridge between Asia and North America.
The Neolithic Y-chromosome bottleneck refers to a period around 5000 BC when the diversity of the male Y chromosome dropped precipitously, to a level equivalent to reproduction occurring with a ratio between men and women of 1:17. Discovered in 2015, the research suggests that the reason for the bottleneck was not a reduction in the number of males, but a drastic decrease in the percentage of males with reproductive success.
Toba catastrophe theory
The controversial Toba catastrophe theory, presented in the late 1990s to early 2000s, suggested that a bottleneck of the human population occurred approximately 75,000 years ago, proposing that the human population was reduced to perhaps 10,000–30,000 individuals when the Toba supervolcano in Indonesia erupted and triggered a major environmental change. Parallel bottlenecks were proposed to exist among chimpanzees, gorillas, rhesus macaques, orangutans and tigers. The hypothesis was based on geological evidence of sudden climate change and on coalescence evidence of some genes (including mitochondrial DNA, Y-chromosome DNA and some nuclear genes) and the relatively low level of genetic variation in humans.
However, subsequent research, especially in the 2010s, appeared to refute both the climate argument and the genetic argument. Recent research shows the extent of climate change was much smaller than believed by proponents of the theory.
In 2000, a Molecular Biology and Evolution paper suggested a transplanting model or a 'long bottleneck' to account for the limited genetic variation, rather than a catastrophic environmental change. This would be consistent with suggestions that in sub-Saharan Africa numbers could have dropped at times as low as 2,000, for perhaps as long as 100,000 years, before numbers began to expand again in the Late Stone Age.
Other animals
European bison, also called wisent (Bison bonasus), faced extinction in the early 20th century. The animals living today are all descended from 12 individuals and they have extremely low genetic variation, which may be beginning to affect the reproductive ability of bulls.
The population of American bison (Bison bison) fell due to overhunting, nearly leading to extinction around the year 1890, though it has since begun to recover (see table).
A classic example of a population bottleneck is that of the northern elephant seal, whose population fell to about 30 in the 1890s. Although it now numbers in the hundreds of thousands, the potential for bottlenecks within colonies remains. Dominant bulls are able to mate with the largest number of females—sometimes as many as 100. With so much of a colony's offspring descended from just one dominant male, genetic diversity is limited, making the species more vulnerable to diseases and genetic mutations.
The golden hamster is a similarly bottlenecked species, with the vast majority of domesticated hamsters descended from a single litter found in the Syrian desert around 1930, and very few wild golden hamsters remain.
An extreme example of a population bottleneck is the New Zealand black robin, of which every specimen today is a descendant of a single female, called Old Blue. The black robin population is still recovering from its low point of only five individuals in 1980.
The genome of the giant panda shows evidence of a severe bottleneck about 43,000 years ago. There is also evidence of at least one primate species, the golden snub-nosed monkey, that also suffered from a bottleneck around this time. An unknown environmental event is suspected to have caused the bottlenecks observed in both of these species. The bottlenecks likely caused the low genetic diversity observed in both species.
Other facts can sometimes be inferred from an observed population bottleneck. Among the Galápagos Islands giant tortoises—themselves a prime example of a bottleneck—the comparatively large population on the slopes of the Alcedo volcano is significantly less diverse than four other tortoise populations on the same island. DNA analyses date the bottleneck to around 88,000 years before present (YBP). About 100,000 YBP the volcano erupted violently, deeply burying much of the tortoise habitat in pumice and ash.
Another example can be seen in the greater prairie chicken, which was prevalent in North America until the 20th century. In Illinois alone, the number of greater prairie chickens plummeted from over 100 million in 1900 to about 46 in 1998. These declines in population were the result of hunting and habitat destruction, but the associated genetic drift has also caused a great loss of genetic diversity. DNA analysis comparing birds from the mid-20th century with birds from the 1990s shows a steep genetic decline in recent decades. Management of the greater prairie chicken now includes genetic rescue efforts, including the translocation of prairie chickens between leks to increase each population's genetic diversity.
Population bottlenecking poses a major threat to the stability of species populations as well. Papilio homerus is the largest butterfly in the Americas and is endangered according to the IUCN. The disappearance of a central population poses a major threat of population bottleneck. The remaining two populations are now geographically isolated and the populations face an unstable future with limited remaining opportunity for gene flow.
Genetic bottlenecks exist in cheetahs.
Selective breeding
Bottlenecks also exist among pure-bred animals (e.g., dog and cat breeds such as the pug and the Persian) because breeders limit their gene pools to a few (show-winning) individuals for their looks and behaviors. The extensive use of desirable individual animals at the exclusion of others can result in a popular sire effect.
Selective breeding for dog breeds caused constricting breed-specific bottlenecks. These bottlenecks have led to dogs having an average of 2–3% more genetic loading than gray wolves. The strict breeding programs and population bottlenecks have led to the prevalence of diseases such as heart disease, blindness, cancers, hip dysplasia, and cataracts.
Selective breeding to produce high-yielding crops has caused genetic bottlenecks in these crops and has led to genetic homogeneity. This reduced genetic diversity in many crops could lead to broader susceptibility to new diseases or pests, which threatens global food security.
Plants
Research has shown an extremely low, nearly undetectable amount of genetic diversity in the genome of the Wollemi pine (Wollemia nobilis). The IUCN found a population count of 80 mature individuals and about 300 seedlings and juveniles in 2011, and previously, the Wollemi pine had fewer than 50 individuals in the wild. The low population size and low genetic diversity indicate that the Wollemi pine went through a severe population bottleneck.
A population bottleneck was created in the 1970s through the conservation efforts of the endangered Mauna Kea silversword (Argyroxiphium sandwicense ssp. sandwicense). The small natural population of silversword was augmented through the 1970s with outplanted individuals. All of the outplanted silversword plants were found to be first or subsequent generation offspring of just two maternal founders. The low amount of polymorphic loci in the outplanted individuals led to the population bottleneck, causing the loss of the marker allele at eight of the loci.
| Biology and health sciences | Basics_4 | Biology |
186415 | https://en.wikipedia.org/wiki/Mojibake | Mojibake | Mojibake (; , 'character transformation') is the garbled or gibberish text that is the result of text being decoded using an unintended character encoding. The result is a systematic replacement of symbols with completely unrelated ones, often from a different writing system.
This display may include the generic replacement character in places where the binary representation is considered invalid. A replacement can also involve multiple consecutive symbols, as viewed in one encoding, when the same binary code constitutes one symbol in the other encoding. This is either because of differing constant length encoding (as in Asian 16-bit encodings vs European 8-bit encodings), or the use of variable length encodings (notably UTF-8 and UTF-16).
Failed rendering of glyphs due to either missing fonts or missing glyphs in a font is a different issue that is not to be confused with mojibake. Symptoms of this failed rendering include blocks with the code point displayed in hexadecimal or using the generic replacement character. Importantly, these replacements are valid and are the result of correct error handling by the software.
Causes
To correctly reproduce the original text that was encoded, the correspondence between the encoded data and the notion of its encoding must be preserved (i.e. the source and target encoding standards must be the same). Mojibake is the result of a mismatch between the two; it can be produced by manipulating the data itself, or simply by relabelling its encoding.
Mojibake is often seen with text data that have been tagged with a wrong encoding; it may not even be tagged at all, but moved between computers with different default encodings. A major source of trouble are communication protocols that rely on settings on each computer rather than sending or storing metadata together with the data.
The differing default settings between computers are in part due to differing deployments of Unicode among operating system families, and partly the legacy encodings' specializations for different writing systems of human languages. Whereas Linux distributions mostly switched to UTF-8 in 2004, Microsoft Windows generally uses UTF-16, and sometimes uses 8-bit code pages for text files in different languages.
For some writing systems, such as Japanese, several encodings have historically been employed, causing users to see mojibake relatively often. As an example, the word mojibake itself ("文字化け") stored as EUC-JP might be incorrectly displayed as , "ハクサ嵂ス、ア" (MS-932), or "ハクサ郾ス、ア" if interpreted as Shift-JIS, or as "ʸ»ú²½¤±" in software that assumes text to be in the Windows-1252 or ISO 8859-1 encodings, usually labelled Western or Western European. This is further exacerbated if other locales are involved: the same text stored as UTF-8 appears as if interpreted as Shift-JIS, as "æ–‡å—化ã‘" if interpreted as Western, or (for example) as "鏂囧瓧鍖栥亼" if interpreted as being in a GBK (Mainland China) locale.
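The mechanism behind these examples can be reproduced directly, because mojibake is nothing more than decoding bytes with a codec other than the one used to encode them. A minimal Python sketch (codec names are those of Python's standard library; the exact garbled output depends on the codecs chosen):

text = "文字化け"                      # the word "mojibake" itself
data = text.encode("euc_jp")          # stored on disk as EUC-JP

# Reading the same bytes back with a Western single-byte codec:
print(data.decode("latin_1"))         # roughly "Ê¸»ú²½¤±"
print(data.decode("cp1252"))          # the same bytes, Windows-1252 view

# Decoding with the codec that was actually used restores the text:
print(data.decode("euc_jp"))          # 文字化け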
Underspecification
If the encoding is not specified, it is up to the software to decide it by other means. Depending on the type of software, the typical solution is either configuration or charset detection heuristics, both of which are prone to mis-prediction.
The encoding of text files is affected by locale setting, which depends on the user's language and brand of operating system, among other conditions. Therefore, the assumed encoding is systematically wrong for files that come from a computer with a different setting, or even from a differently localized piece of software within the same system. For Unicode, one solution is to use a byte order mark, but many parsers do not tolerate this for source code or other machine-readable text. Another solution is to store the encoding as metadata in the file system; file systems that support extended file attributes can store this as user.charset. This also requires support in software that wants to take advantage of it, but does not disturb other software.
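As a concrete illustration of the user.charset approach mentioned above, on a Linux system whose file system supports extended attributes the encoding could be recorded and consulted roughly as follows (a sketch only; os.setxattr and os.getxattr are Linux-specific, and nothing forces other software to honour the attribute):

import os

path = "notes.txt"
with open(path, "wb") as f:
    f.write("naïve café".encode("cp1252"))        # legacy 8-bit encoding

# Record the encoding as file-system metadata (Linux-only API).
os.setxattr(path, "user.charset", b"windows-1252")

# A cooperating program reads the attribute before decoding.
charset = os.getxattr(path, "user.charset").decode("ascii")
with open(path, "rb") as f:
    print(f.read().decode(charset))               # naïve café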
While some encodings are easy to detect, such as UTF-8, there are many that are hard to distinguish (see charset detection). For example, a web browser may not be able to distinguish between a page coded in EUC-JP and another in Shift-JIS if the encoding is not assigned explicitly using HTTP headers sent along with the documents, or using the document's meta tags that are used to substitute for missing HTTP headers if the server cannot be configured to send the proper HTTP headers; see character encodings in HTML.
Mis-specification
Mojibake also occurs when the encoding is incorrectly specified. This often happens between encodings that are similar. For example, the Eudora email client for Windows was known to send emails labelled as ISO 8859-1 that were in reality Windows-1252. Windows-1252 contains extra printable characters in the C1 range (the most frequently seen being curved quotation marks and extra dashes), that were not displayed properly in software complying with the ISO standard; this especially affected software running under other operating systems such as Unix.
User oversight
Of the encodings still in common use, many originated from taking ASCII and appending atop it; as a result, these encodings are partially compatible with each other. Examples of this include Windows-1252 and ISO 8859-1. People thus may mistake the expanded encoding set they are using with plain ASCII.
Overspecification
When there are layers of protocols, each trying to specify the encoding based on different information, the least certain information may be misleading to the recipient.
For example, consider a web server serving a static HTML file over HTTP. The character set may be communicated to the client in any of three ways:
in the HTTP header. This information can be based on server configuration (for instance, when serving a file off disk) or controlled by the application running on the server (for dynamic websites).
in the file, as an HTML meta tag (http-equiv or charset) or the encoding attribute of an XML declaration. This is the encoding that the author meant to save the particular file in.
in the file, as a byte order mark. This is the encoding that the author's editor actually saved it in. Unless an accidental encoding conversion has happened (by opening it in one encoding and saving it in another), this will be correct. It is, however, only available in Unicode encodings such as UTF-8 or UTF-16.
Lack of hardware or software support
Much older hardware is typically designed to support only one character set and the character set typically cannot be altered. The character table contained within the display firmware will be localized to have characters for the country the device is to be sold in, and typically the table differs from country to country. As such, these systems will potentially display mojibake when loading text generated on a system from a different country. Likewise, many early operating systems do not support multiple encoding formats and thus will end up displaying mojibake if made to display non-standard text—early versions of Microsoft Windows and Palm OS for example, are localized on a per-country basis and will only support encoding standards relevant to the country the localized version will be sold in, and will display mojibake if a file containing a text in a different encoding format from the version that the OS is designed to support is opened.
Resolutions
Applications using UTF-8 as a default encoding may achieve a greater degree of interoperability because of its widespread use and backward compatibility with US-ASCII. UTF-8 also has the ability to be directly recognised by a simple algorithm, so that well written software should be able to avoid mixing UTF-8 up with other encodings.
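The "simple algorithm" is essentially strict UTF-8 validation: multi-byte UTF-8 sequences follow rigid bit patterns, so text in a legacy 8-bit encoding almost never validates as UTF-8 by accident. A common try-UTF-8-first heuristic can be sketched as follows (the choice of fallback encoding is an assumption that would depend on the expected locale):

def decode_guess(data: bytes, fallback: str = "cp1252") -> str:
    """Decode as UTF-8 if the bytes validate as UTF-8, otherwise fall
    back to a locale-dependent legacy encoding."""
    try:
        return data.decode("utf-8")               # strict validation
    except UnicodeDecodeError:
        return data.decode(fallback, errors="replace")

print(decode_guess("kärlek".encode("utf-8")))     # kärlek
print(decode_guess("kärlek".encode("cp1252")))    # kärlek, via the fallback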
The difficulty of resolving an instance of mojibake varies depending on the application within which it occurs and the causes of it. Two of the most common applications in which mojibake may occur are web browsers and word processors. Modern browsers and word processors often support a wide array of character encodings. Browsers often allow a user to change their rendering engine's encoding setting on the fly, while word processors allow the user to select the appropriate encoding when opening a file. It may take some trial and error for users to find the correct encoding.
The problem gets more complicated when it occurs in an application that normally does not support a wide range of character encoding, such as in a non-Unicode computer game. In this case, the user must change the operating system's encoding settings to match that of the game. However, changing the system-wide encoding settings can also cause Mojibake in pre-existing applications. In Windows XP or later, a user also has the option to use Microsoft AppLocale, an application that allows the changing of per-application locale settings. Even so, changing the operating system encoding settings is not possible on earlier operating systems such as Windows 98; to resolve this issue on earlier operating systems, a user would have to use third party font rendering applications.
Problems in different writing systems
English
Mojibake in English texts generally occurs in punctuation, such as em dashes (—), en dashes (–), and curly quotes (“, ”, ‘, ’), but rarely in character text, since most encodings agree with ASCII on the encoding of the English alphabet. For example, the pound sign £ will appear as £ if it was encoded by the sender as UTF-8 but interpreted by the recipient as one of the Western European encodings (CP1252 or ISO 8859-1). If iterated using CP1252, this can lead to Ã‚£, Ãƒâ€šÃ‚£, ÃƒÆ’Ã¢â‚¬Å¡Ãƒâ€šÃ‚£, and so on.
Similarly, the right single quotation mark (’), when encoded in UTF-8 and decoded using Windows-1252, becomes â€™; repeating the process yields Ã¢â‚¬â„¢, and so on.
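The iterated forms in the two preceding examples arise from repeatedly re-encoding the already-garbled text as UTF-8 and re-decoding it as Windows-1252. A short sketch reproducing the pound-sign case:

s = "£"
for _ in range(3):
    s = s.encode("utf-8").decode("cp1252")   # each round compounds the damage
    print(s)                                 # Â£, then Ã‚£, then Ãƒâ€šÃ‚£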
In older eras, some computers had vendor-specific encodings which caused mismatch also for English text. Commodore brand 8-bit computers used PETSCII encoding, particularly notable for inverting the upper and lower case compared to standard ASCII. PETSCII printers worked fine on other computers of the era, but inverted the case of all letters. IBM mainframes use the EBCDIC encoding which does not match ASCII at all.
Other Western European languages
The alphabets of the North Germanic languages, Catalan, Romanian, Finnish, French, German, Italian, Portuguese and Spanish are all extensions of the Latin alphabet. The additional characters are typically the ones that become corrupted, making texts only mildly unreadable with mojibake:
å, ä, ö in Finnish and Swedish (š and ž are present in some Finnish loanwords, é marginally in Swedish, mainly also in loanwords)
à, ç, è, é, ï, í, ò, ó, ú, ü in Catalan
æ, ø, å in Norwegian and Danish as well as optional acute accents on é etc for disambiguation
á, é, ó, ij, è, ë, ï in Dutch
ä, ö, ü, and ß in German
á, ð, í, ó, ú, ý, æ, ø in Faroese
á, ð, é, í, ó, ú, ý, þ, æ, ö in Icelandic
à, â, ç, è, é, ë, ê, ï, î, ô, ù, û, ü, ÿ, æ, œ in French
à, è, é, ì, ò, ù in Italian
á, é, í, ñ, ó, ú, ü, ¡, ¿ in Spanish
à, á, â, ã, ç, é, ê, í, ó, ô, õ, ú in Portuguese (ü no longer used)
á, é, í, ó, ú in Irish
à, è, ì, ò, ù in Scottish Gaelic
ă, â, î, ș, ț in Romanian
£ in British English (æ and œ are rarely used)
... and their uppercase counterparts, if applicable.
These are languages for which the ISO 8859-1 character set (also known as Latin 1 or Western) has been in use. However, ISO 8859-1 has been obsoleted by two competing standards, the backward compatible Windows-1252, and the slightly altered ISO 8859-15. Both add the Euro sign € and the French œ, but otherwise any confusion of these three character sets does not create mojibake in these languages. Furthermore, it is always safe to interpret ISO 8859-1 as Windows-1252, and fairly safe to interpret it as ISO 8859-15, in particular with respect to the Euro sign, which replaces the rarely used currency sign (¤). However, with the advent of UTF-8, mojibake has become more common in certain scenarios, e.g. exchange of text files between UNIX and Windows computers, due to UTF-8's incompatibility with Latin-1 and Windows-1252. Because UTF-8 can be directly recognised by a simple algorithm, well-written software should be able to avoid mixing UTF-8 up with other encodings, so such mojibake was most common while much software did not yet support UTF-8. Most of these languages were supported by MS-DOS's default CP437 and other machine-default encodings (which, unlike plain ASCII, cover their accented characters), so problems when buying a localized operating system version were less common; the Windows and MS-DOS code pages are not compatible with each other, however.
In Swedish, Norwegian, Danish and German, vowels are rarely repeated, and it is usually obvious when one character gets corrupted, e.g. the second letter in the Swedish word kärlek ("love") when it is encoded in UTF-8 but decoded in Western, producing "kÃ¤rlek", or für in German, which becomes "fÃ¼r". This way, even though the reader has to guess what the original letter is, almost all texts remain legible. Finnish, on the other hand, frequently uses repeating vowels in words like hääyö ("wedding night"), which can make corrupted text very hard to read (e.g. hääyö appears as "hÃ¤Ã¤yÃ¶"). Icelandic has ten possibly confounding characters, and Faroese has eight, making many words almost completely unintelligible when corrupted (e.g. Icelandic þjóðlöð, "outstanding hospitality", appears as "Ã¾jÃ³Ã°lÃ¶Ã°").
In German, ("letter salad") is a common term for this phenomenon, in Spanish, (literally "deformation") is used, and in Portuguese, (literally "deformatting") is used.
Some users transliterate their writing when using a computer, either by omitting the problematic diacritics, or by using digraph replacements (å → aa, ä/æ → ae, ö/ø → oe, ü → ue etc.). Thus, an author might write "ueber" instead of "über", which is standard practice in German when umlauts are not available. The latter practice seems to be better tolerated in the German language sphere than in the Nordic countries. For example, in Norwegian, digraphs are associated with archaic Danish, and may be used jokingly. However, digraphs are useful in communication with other parts of the world. As an example, the Norwegian football player Ole Gunnar Solskjær had his last name spelled "SOLSKJAER" on his uniform when he played for Manchester United.
An artifact of UTF-8 misinterpreted as ISO 8859-1, "Ring meg nå" ("Call me now") being rendered as "Ring meg nÃ¥", was seen in 2014 in an SMS scam targeting Norway.
The same problem also occurs in Romanian.
Central and Eastern European
Users of Central and Eastern European languages can also be affected. Because most computers were not connected to any network during the mid- to late-1980s, there were different character encodings for every language with diacritical characters (see ISO/IEC 8859 and KOI-8), often also varying by operating system.
Hungarian
In Hungarian, the phenomenon is referred to as betűszemét, meaning "letter garbage". Hungarian has been particularly susceptible as it contains the accented letters á, é, í, ó, ú, ö, ü (all present in the Latin-1 character set), plus the two characters ő and ű which are not in Latin-1. These two characters can be correctly encoded in Latin-2, Windows-1250, and Unicode. However, before Unicode became common in e-mail clients, e-mails containing Hungarian text often had the letters ő and ű corrupted, sometimes to the point of unrecognizability. It is common to respond to a corrupted e-mail with the nonsense phrase "Árvíztűrő tükörfúrógép" (literally "Flood-resistant mirror-drilling machine") which contains all accented characters used in Hungarian.
Polish
Prior to the creation of ISO 8859-2 in 1987, users of various computing platforms used their own character encodings such as AmigaPL on Amiga, Atari Club on Atari ST and Masovia, IBM CP852, Mazovia and Windows CP1250 on IBM PCs. Polish companies selling early DOS computers created their own mutually-incompatible ways to encode Polish characters and simply reprogrammed the EPROMs of the video cards (typically CGA, EGA, or Hercules) to provide hardware code pages with the needed glyphs for Polish—arbitrarily located without reference to where other computer sellers had placed them.
The situation began to improve when, after pressure from academic and user groups, ISO 8859-2 succeeded as the "Internet standard" with limited support of the dominant vendors' software (today largely replaced by Unicode). With the numerous problems caused by the variety of encodings, even today some users tend to refer to Polish diacritical characters as (, lit. "little shrubs").
Russian and other Cyrillic-based alphabets
Mojibake is colloquially called krakozyabry (кракозябры) in Russian, where the problem was and remains complicated by the several coexisting systems for encoding Cyrillic. The Soviet Union and early Russian Federation developed KOI encodings (Kod Obmena Informatsiey, Код Обмена Информацией, which translates to "Code for Information Exchange"). This began with Cyrillic-only 7-bit KOI7, based on ASCII but with Latin and some other characters replaced with Cyrillic letters. Then came the 8-bit KOI8 encoding, an ASCII extension which encodes Cyrillic letters only with high-bit-set octets corresponding to 7-bit codes from KOI7. It is for this reason that KOI8 text, even Russian, remains partially readable after stripping the eighth bit, which was considered a major advantage in the age of 8BITMIME-unaware email systems. For example, the words "Школа русского языка" (shkola russkogo yazyka, "Russian-language school"), when encoded in KOI8 and passed through the high-bit stripping process, end up being rendered as "[KOLA RUSSKOGO qZYKA". Eventually, KOI8 gained different flavors for Russian and Bulgarian (KOI8-R), Ukrainian (KOI8-U), Belarusian (KOI8-RU), and even Tajik (KOI8-T).
Meanwhile, in the West, Code page 866 supported Ukrainian and Belarusian, as well as Russian and Bulgarian in MS-DOS. For Microsoft Windows, Code Page 1251 added support for Serbian and other Slavic variants of Cyrillic.
Most recently, the Unicode encoding includes code points for virtually all characters in all languages, including all Cyrillic characters.
Before Unicode, it was necessary to match text encoding with a font using the same encoding system; failure to do this produced unreadable gibberish whose specific appearance varied depending on the exact combination of text and font encoding. For example, attempting to view non-Unicode Cyrillic text using a font that is limited to the Latin alphabet, or using the default ("Western") encoding, typically results in text that consists almost entirely of capitalized vowels with diacritical marks (e.g. KOI8 "Библиотека" (biblioteka, library) becomes "âÉÂÌÉÏÔÅËÁ", while "Школа русского языка" (shkola russkogo yazyka, Russian-language school) becomes "ûËÏÌÁ ÒÕÓÓËÏÇÏ ÑÚÙËÁ"). Using Code Page 1251 to view text in KOI8, or vice versa, results in garbled text that consists mostly of capital letters (KOI8 and Code Page 1251 share the same ASCII region, but KOI8 has uppercase letters in the region where Code Page 1251 has lowercase, and vice versa).
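The vowel-heavy gibberish described above follows directly from the byte layout: KOI8 places Cyrillic letters in the high half of the byte range, where ISO 8859-1/Windows-1252 keep accented Latin letters and Code Page 1251 keeps the other case of Cyrillic. A quick sketch using Python's standard codec names:

word = "Библиотека"                    # "library"
data = word.encode("koi8_r")

print(data.decode("latin_1"))          # âÉÂÌÉÏÔÅËÁ (accented Latin letters)
print(data.decode("cp1251"))           # вЙВМЙПФЕЛБ (mostly capitals, case regions swapped)
print(data.decode("koi8_r"))           # Библиотека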
During the early years of the Russian sector of the World Wide Web, both KOI8 and Code Page 1251 were common. Nearly all websites now use Unicode, but an estimated 0.35% of all web pages worldwide – all languages included – are still encoded in Code Page 1251, while less than 0.003% of sites are still encoded in KOI8-R. Though the HTML standard includes the ability to specify the encoding for any given web page in its source, this is sometimes neglected, forcing the user to switch encodings in the browser manually.
In Bulgarian, mojibake is often called majmunica (маймуница), meaning "monkey's [alphabet]". In Serbian, it is called đubre (ђубре), meaning "trash". Unlike the former USSR, South Slavs never used something like KOI8, and Code Page 1251 was the dominant Cyrillic encoding before Unicode; therefore, these languages experienced fewer encoding incompatibility troubles than Russian. In the 1980s, Bulgarian computers used their own MIK encoding, which is superficially similar to (although incompatible with) CP866.
Yugoslav languages
Croatian, Bosnian, Serbian (the seceding varieties of Serbo-Croatian language) and Slovenian add to the basic Latin alphabet the letters š, đ, č, ć, ž, and their capital counterparts Š, Đ, Č, Ć, Ž (only č/Č, š/Š and ž/Ž are officially used in Slovenian, although others are used when needed, mostly in foreign names). All of these letters are defined in Latin-2 and Windows-1250, while only some (š, Š, ž, Ž, Đ) exist in the usual OS-default Windows-1252, and are there because of some other languages.
Although Mojibake can occur with any of these characters, the letters that are not included in Windows-1252 are much more prone to errors. Thus, even nowadays, "šđčćž ŠĐČĆŽ" is often displayed as "šðèæž ŠÐÈÆŽ", although ð, È, and Æ are never used in Slavic languages.
When confined to basic ASCII (most user names, for example), common replacements are: š→s, đ→dj, č→c, ć→c, ž→z (capital forms analogously, with Đ→Dj or Đ→DJ depending on word case). All of these replacements introduce ambiguities, so reconstructing the original from such a form is usually done manually if required.
The Windows-1252 encoding is important because the English versions of the Windows operating system are most widespread, not localized ones. The reasons for this include a relatively small and fragmented market, increasing the price of high quality localization, a high degree of software piracy (in turn caused by high price of software compared to income), which discourages localization efforts, and people preferring English versions of Windows and other software.
The drive to differentiate Croatian from Serbian, Bosnian from Croatian and Serbian, and now even Montenegrin from the other three creates many problems. There are many different localizations, using different standards and of different quality. There are no common translations for the vast amount of computer terminology originating in English. In the end, people use English loanwords ("kompjuter" for "computer", "kompajlirati" for "compile," etc.), and if they are unaccustomed to the translated terms, they may not understand what some option in a menu is supposed to do based on the translated phrase. Therefore, people who understand English, as well as those who are accustomed to English terminology (who are most, because English terminology is also mostly taught in schools because of these problems) regularly choose the original English versions of non-specialist software.
When Cyrillic script is used (for Macedonian and partially Serbian), the problem is similar to other Cyrillic-based scripts.
Newer versions of English Windows allow the code page to be changed (older versions require special English versions with this support), but this setting can be and often was incorrectly set. For example, Windows 98 and Windows Me can be set to most non-right-to-left single-byte code pages including 1250, but only at install time.
Caucasian languages
The writing systems of certain languages of the Caucasus region, including the scripts of Georgian and Armenian, may produce mojibake. This problem is particularly acute in the case of ArmSCII or ARMSCII, a set of obsolete character encodings for the Armenian alphabet which have been superseded by Unicode standards. ArmSCII is not widely used because of a lack of support in the computer industry. For example, Microsoft Windows does not support it.
Asian encodings
Another type of mojibake occurs when text encoded in a single-byte encoding is erroneously parsed in a multi-byte encoding, such as one of the encodings for East Asian languages. With this kind of mojibake more than one (typically two) characters are corrupted at once. For example, if the Swedish word kärlek ("love") is encoded in Windows-1252 but decoded using GBK, it will appear as "k鋜lek", where "är" is parsed as "鋜". Compared to the above mojibake, this is harder to read, since letters unrelated to the problematic å, ä or ö are missing, and is especially problematic for short words starting with å, ä or ö (e.g. "än" becomes "鋘"). Since two letters are combined, the mojibake also seems more random (over 50 variants compared to the normal three, not counting the rarer capitals). In some rare cases, an entire text string which happens to include a pattern of particular word lengths, such as the sentence "Bush hid the facts", may be misinterpreted.
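The two-characters-at-a-time effect can be reproduced the same way; the specific replacement character is whatever the multi-byte code table assigns to that byte pair (the printed forms below follow the example in the text):

word = "kärlek"
data = word.encode("cp1252")           # single-byte Western encoding

# GBK treats the byte for "ä" as the first half of a two-byte character
# and swallows the following "r" with it, so two letters become one.
garbled = data.decode("gbk")
print(garbled)                         # k鋜lek
print(len(word), len(garbled))         # 6 5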
Vietnamese
In Vietnamese, the phenomenon is called chữ ma (Hán–Nôm: 𡨸魔, "ghost characters") or loạn mã (from Chinese 乱码, luànmǎ). It can occur when a computer tries to decode text encoded in UTF-8 as Windows-1258, TCVN3 or VNI. In Vietnam, chữ ma was commonly seen on computers that ran pre-Vista versions of Windows or cheap mobile phones.
Japanese
In Japan, mojibake is especially problematic as there are many different Japanese text encodings. Alongside Unicode encodings (UTF-8 and UTF-16), there are other standard encodings, such as Shift-JIS (Windows machines) and EUC-JP (UNIX systems). Even to this day, mojibake is often encountered by both Japanese and non-Japanese people when attempting to run software written for the Japanese market.
Chinese
In Chinese, the same phenomenon is called luànmǎ (Pinyin; Simplified Chinese 乱码, Traditional Chinese 亂碼, meaning 'chaotic code'), and can occur when computerised text is encoded in one Chinese character encoding but is displayed using the wrong encoding. When this occurs, it is often possible to fix the issue by switching the character encoding without loss of data. The situation is complicated because of the existence of several Chinese character encoding systems in use, the most common ones being: Unicode, Big5, and Guobiao (with several backward compatible versions), and the possibility of Chinese characters being encoded using Japanese encoding.
It is relatively easy to identify the original encoding when luànmǎ occurs in Guobiao encodings.
An additional problem in Chinese occurs when rare or antiquated characters, many of which are still used in personal or place names, do not exist in some encodings. Examples of this are:
The Big5 encoding's lack of the "煊" (xuān) in the name of Taiwanese politician Wang Chien-shien (), the "堃" (kūn) in the name of Yu Shyi-kun (), and the "喆" (zhé) in the name of singer David Tao (),
GB 2312's lack of the "镕" (róng) in ex-PRC Premier Zhu Rongji (), and
GBK's lack of the copyright symbol "©".
Newspapers have dealt with missing characters in various ways, including using image editing software to synthesize them by combining other radicals and characters; using a picture of the personalities (in the case of people's names), or simply substituting homophones in the hope that readers would be able to make the correct inference.
Indic text
A similar effect can occur in Brahmic or Indic scripts of South Asia, used in such Indo-Aryan or Indic languages as Hindustani (Hindi-Urdu), Bengali, Punjabi, Marathi, and others, even if the character set employed is properly recognized by the application. This is because, in many Indic scripts, the rules by which individual letter symbols combine to create symbols for syllables may not be properly understood by a computer missing the appropriate software, even if the glyphs for the individual letter forms are available.
One example of this is the old Wikipedia logo, which attempts to show the character analogous to "wi" (the first syllable of "Wikipedia") on each of many puzzle pieces. The puzzle piece meant to bear the Devanagari character for "wi" instead used to display the "wa" character followed by an unpaired "i" modifier vowel, easily recognizable as mojibake generated by a computer not configured to display Indic text. The logo as redesigned has fixed these errors.
The idea of Plain Text requires the operating system to provide a font to display Unicode codes. This font is different from OS to OS for Singhala and it makes orthographically incorrect glyphs for some letters (syllables) across all operating systems. For instance, the 'reph', the short form for 'r' is a diacritic that normally goes on top of a plain letter. However, it is wrong to go on top of some letters like 'ya' or 'la' in specific contexts. For Sanskritic words or names inherited by modern languages, such as कार्य, IAST: kārya, or आर्या, IAST: āryā, it is apt to put it on top of these letters. By contrast, for similar sounds in modern languages which result from their specific rules, it is not put on top, such as the word करणाऱ्या, IAST: karaṇāryā, a stem form of the common word करणारा/री, IAST: karaṇārā/rī, in the Marathi language. But it happens in most operating systems. This appears to be a fault of internal programming of the fonts. In Mac OS and iOS, the muurdhaja l (dark l) and 'u' combination and its long form both yield wrong shapes.
Some Indic and Indic-derived scripts, most notably Lao, were not officially supported by Windows XP until the release of Vista. However, various sites have made free-to-download fonts.
Burmese
Due to Western sanctions and the late arrival of Burmese language support in computers, much of the early Burmese localization was homegrown without international cooperation. The prevailing means of Burmese support is via the Zawgyi font, a font that was created as a Unicode font but was in fact only partially Unicode compliant. In the Zawgyi font, some codepoints for Burmese script were implemented as specified in Unicode, but others were not. The Unicode Consortium refers to this as ad hoc font encodings. With the advent of mobile phones, mobile vendors such as Samsung and Huawei simply replaced the Unicode compliant system fonts with Zawgyi versions.
Due to these ad hoc encodings, communications between users of Zawgyi and Unicode would render as garbled text. To get around this issue, content producers would make posts in both Zawgyi and Unicode. The Myanmar government designated 1 October 2019 as "U-Day" to officially switch to Unicode. The full transition was estimated to take two years.
African languages
In certain writing systems of Africa, unencoded text is unreadable. Texts that may produce mojibake include those from the Horn of Africa such as the Ge'ez script in Ethiopia and Eritrea, used for Amharic, Tigre, and other languages, and the Somali language, which employs the Osmanya alphabet. In Southern Africa, the Mwangwego alphabet is used to write languages of Malawi and the Mandombe alphabet was created for the Democratic Republic of the Congo, but these are not generally supported. Various other writing systems native to West Africa present similar problems, such as the N'Ko alphabet, used for Manding languages in Guinea, and the Vai syllabary, used in Liberia.
Arabic
Another affected language is Arabic (see below), in which text becomes completely unreadable when the encodings do not match.
Examples
The examples in this article do not use UTF-8 as the assumed browser setting, because UTF-8 is easily recognisable: if a browser supports UTF-8, it should recognise it automatically rather than misinterpret text in some other encoding as UTF-8.
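A minimal Python sketch (the sample string and encoding pair are illustrative assumptions, not taken from the article) of how an encoding mismatch produces mojibake, and of why UTF-8 is comparatively easy to detect automatically:

    # Encoding a string as UTF-8 and then decoding the bytes with a legacy
    # single-byte encoding produces classic mojibake.
    text = "日本語"
    utf8_bytes = text.encode("utf-8")
    print(utf8_bytes.decode("windows-1252"))        # e.g. 'æ—¥æœ¬èªž'

    # The reverse mistake usually fails outright: bytes produced by a legacy
    # multi-byte encoding are rarely valid UTF-8, so a strict UTF-8 decoder can
    # reject the hypothesis instead of silently garbling the text.
    legacy_bytes = text.encode("shift_jis")
    try:
        legacy_bytes.decode("utf-8")
    except UnicodeDecodeError as err:
        print("not valid UTF-8:", err)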
| Technology | Software development: General | null |
186468 | https://en.wikipedia.org/wiki/Wilkinson%20Microwave%20Anisotropy%20Probe | Wilkinson Microwave Anisotropy Probe | The Wilkinson Microwave Anisotropy Probe (WMAP), originally known as the Microwave Anisotropy Probe (MAP and Explorer 80), was a NASA spacecraft operating from 2001 to 2010, which measured temperature differences across the sky in the cosmic microwave background (CMB) – the radiant heat remaining from the Big Bang. Headed by Professor Charles L. Bennett of Johns Hopkins University, the mission was developed in a joint partnership between the NASA Goddard Space Flight Center and Princeton University. The WMAP spacecraft was launched on 30 June 2001 from Florida. The WMAP mission succeeded the COBE space mission and was the second medium-class (MIDEX) spacecraft in the NASA Explorer program. In 2003, MAP was renamed WMAP in honor of cosmologist David Todd Wilkinson (1935–2002), who had been a member of the mission's science team. After nine years of operations, WMAP was switched off in 2010, following the launch of the more advanced Planck spacecraft by the European Space Agency (ESA) in 2009.
WMAP's measurements played a key role in establishing the current Standard Model of Cosmology: the Lambda-CDM model. The WMAP data are very well fit by a universe that is dominated by dark energy in the form of a cosmological constant. Other cosmological data are also consistent, and together they tightly constrain the Model. In the Lambda-CDM model, the age of the universe is about 13.8 billion years; the WMAP mission determined it to better than 1% precision. The mission also measured the current expansion rate of the universe (see Hubble constant). The current content of the universe consists of ordinary baryonic matter, cold dark matter (CDM) that neither emits nor absorbs light, and dark energy in the form of a cosmological constant that accelerates the expansion of the universe. Less than 1% of the current content of the universe is in neutrinos, but WMAP's measurements found, for the first time in 2008, that the data prefer the existence of a cosmic neutrino background. The contents point to a Euclidean (flat) geometry. The WMAP measurements also support the cosmic inflation paradigm in several ways, including the flatness measurement.
The mission has won various awards: according to Science magazine, the WMAP was the Breakthrough of the Year for 2003. This mission's results papers were first and second in the "Super Hot Papers in Science Since 2003" list. Of the all-time most referenced papers in physics and astronomy in the INSPIRE-HEP database, only three have been published since 2000, and all three are WMAP publications. Bennett, Lyman A. Page Jr., and David N. Spergel, the latter both of Princeton University, shared the 2010 Shaw Prize in astronomy for their work on WMAP. Bennett and the WMAP science team were awarded the 2012 Gruber Prize in cosmology. The 2018 Breakthrough Prize in Fundamental Physics was awarded to Bennett, Gary Hinshaw, Norman Jarosik, Page, Spergel, and the WMAP science team.
In October 2010, after completing nine years of operations, the WMAP spacecraft was retired to a heliocentric graveyard orbit. All WMAP data have been released to the public and have been subject to careful scrutiny. The final official data release was the nine-year release in 2012.
Some aspects of the data are statistically unusual for the Standard Model of Cosmology. For example, the largest angular-scale measurement, the quadrupole moment, is somewhat smaller than the Model would predict, but this discrepancy is not highly significant. A large cold spot and other features of the data are more statistically significant, and research continues into these.
Objectives
The WMAP objective was to measure the temperature differences in the Cosmic Microwave Background (CMB) radiation. The anisotropies were then used to measure the universe's geometry, content, and evolution, and to test the Big Bang model and the cosmic inflation theory. For that, the mission created a full-sky map of the CMB with a 13-arcminute resolution via multi-frequency observation. The map required minimal systematic errors, no correlated pixel noise, and accurate calibration, to ensure angular-scale accuracy greater than its resolution. The map contains 3,145,728 pixels and uses the HEALPix scheme to pixelize the sphere. The telescope also measured the CMB's E-mode polarization and foreground polarization. Its planned service life was 27 months: 3 months to reach its observing position, and 2 years of observation.
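As a quick check of the pixel count quoted above (this assumes the standard HEALPix relation Npix = 12 × Nside², which the article does not spell out), an Nside of 512 reproduces the 3,145,728-pixel map and implies a mean pixel width of roughly 7 arcminutes, comparable to the stated 13-arcminute beam:

    import math

    # HEALPix divides the sphere into 12 * Nside**2 equal-area pixels.
    nside = 512
    npix = 12 * nside**2
    print(npix)                                    # 3145728

    # Mean pixel area and the corresponding pixel width in arcminutes.
    pixel_area_sr = 4 * math.pi / npix
    pixel_width_arcmin = math.degrees(math.sqrt(pixel_area_sr)) * 60
    print(round(pixel_width_arcmin, 2))            # about 6.87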
Development
The MAP mission was proposed to NASA in 1995, selected for definition study in 1996, and approved for development in 1997.
The WMAP was preceded by two missions to observe the CMB; (i) the Soviet RELIKT-1 that reported the upper-limit measurements of CMB anisotropies, and (ii) the U.S. COBE satellite that first reported large-scale CMB fluctuations. The WMAP was 45 times more sensitive, with 33 times the angular resolution of its COBE satellite predecessor. The successor European Planck mission (operational 2009–2013) had a higher resolution and higher sensitivity than WMAP and observed in 9 frequency bands rather than WMAP's 5, allowing improved astrophysical foreground models.
Spacecraft
The telescope's primary reflecting mirrors are a pair of Gregorian dishes (facing opposite directions), that focus the signal onto a pair of secondary reflecting mirrors. They are shaped for optimal performance: a carbon fibre shell upon a Korex core, thinly-coated with aluminium and silicon oxide. The secondary reflectors transmit the signals to the corrugated feedhorns that sit on a focal plane array box beneath the primary reflectors.
The receivers are polarization-sensitive differential radiometers measuring the difference between two telescope beams. The signal is amplified with High-electron-mobility transistor (HEMT) low-noise amplifiers, built by the National Radio Astronomy Observatory (NRAO). There are 20 feeds, 10 in each direction, from which a radiometer collects a signal; the measure is the difference in the sky signal from opposite directions. The directional separation azimuth is 180°; the total angle is 141°. To improve subtraction of foreground signals from our Milky Way galaxy, the WMAP used five discrete radio frequency bands, from 23 GHz to 94 GHz.
The WMAP's base is a solar panel array that keeps the instruments in shadow during CMB observations (by keeping the craft constantly angled at 22° relative to the Sun). Upon the array sit a bottom deck (supporting the warm components) and a top deck. The telescope's cold components, the focal-plane array and the mirrors, are separated from the warm components by a cylindrical thermal isolation shell atop the deck.
Passive thermal radiators, connected to the low-noise amplifiers, cool the WMAP's cold components. The telescope consumes 419 W of power. The telescope's heaters are emergency-survival heaters, plus a transmitter heater used to keep the transponders warm when they are switched off. The WMAP spacecraft's temperature is monitored with platinum resistance thermometers.
The WMAP's calibration is effected with the CMB dipole and measurements of Jupiter; the beam patterns are measured against Jupiter. The telescope's data are relayed daily via a 2-GHz transponder providing a 667 kbit/s downlink to a Deep Space Network station. The spacecraft has two transponders, one a redundant backup; they are minimally active – about 40 minutes daily – to minimize radio frequency interference. The telescope's position is maintained, in its three axes, with three reaction wheels, gyroscopes, two star trackers and Sun sensors, and is steered with eight hydrazine thrusters.
Launch, trajectory, and orbit
The WMAP spacecraft arrived at the Kennedy Space Center on 20 April 2001. After being tested for two months, it was launched on a Delta II 7425 launch vehicle on 30 June 2001. It began operating on internal power five minutes before launch, and continued to do so until the solar panel array deployed. The WMAP was activated and monitored while it cooled. On 2 July 2001, it began working, first with in-flight testing (from launch until 17 August 2001), then began constant, formal work. It then performed three Earth-Moon phasing loops, measuring its sidelobes, flew by the Moon on 30 July 2001, and proceeded to the Sun-Earth L2 Lagrange point, arriving there on 1 October 2001 and becoming the first CMB observation mission stationed there.
Locating the spacecraft at the Sun-Earth L2 point (about 1.5 million km from Earth) thermally stabilizes it and minimizes the contaminating solar, terrestrial, and lunar emissions registered. To view the entire sky without looking at the Sun, the WMAP traces a Lissajous orbit about the point, ranging from about 1.0° to 10°, with a 6-month period. The telescope rotates once every 2 minutes 9 seconds (0.464 rpm) and precesses at a rate of 1 revolution per hour. WMAP measured the entire sky every six months, and completed its first full-sky observation in April 2002.
Experiment
Pseudo-Correlation Radiometer
The WMAP instrument consists of pseudo-correlation differential radiometers fed by two back-to-back primary Gregorian reflectors. This instrument uses five frequency bands from 22 GHz to 90 GHz to facilitate rejection of foreground signals from our own Galaxy. The WMAP instrument has a 3.5° x 3.5° field of view (FoV).
Foreground radiation subtraction
The WMAP observed in five frequencies, permitting the measurement and subtraction of foreground contamination (from the Milky Way and extra-galactic sources) of the CMB. The main emission mechanisms are synchrotron radiation and free-free emission (dominating the lower frequencies), and astrophysical dust emissions (dominating the higher frequencies). The spectral properties of these emissions contribute different amounts to the five frequencies, thus permitting their identification and subtraction.
Foreground contamination is removed in several ways. First, subtract extant emission maps from the WMAP's measurements; second, use the components' known spectral values to identify them; third, simultaneously fit the position and spectra data of the foreground emission, using extra data sets. Foreground contamination was reduced by using only the full-sky map portions with the least foreground contamination, while masking the remaining map portions.
Measurements and discoveries
One-year data release
On 11 February 2003, NASA published the first year's worth of WMAP data. The latest calculated age and composition of the early universe were presented. In addition, an image of the early universe was presented that "contains such stunning detail, that it may be one of the most important scientific results of recent years". The newly released data surpassed previous CMB measurements.
Based upon the Lambda-CDM model, the WMAP team produced cosmological parameters from the WMAP's first-year results. Three sets were produced; the first and second sets are from WMAP data alone, the difference being the addition of spectral indices, a prediction of some inflationary models. The third data set combines the WMAP constraints with those from other CMB experiments (ACBAR and CBI), and constraints from the 2dF Galaxy Redshift Survey and Lyman alpha forest measurements. There are degeneracies among the parameters; the errors given are at 68% confidence.
Using the best-fit data and theoretical models, the WMAP team determined the times of important universal events, including the redshift of reionization, the redshift of decoupling (and the universe's age at decoupling), and the redshift of matter/radiation equality. They also determined the thickness of the surface of last scattering, the current density of baryons, and the ratio of baryons to photons. The WMAP's detection of an early reionization excluded warm dark matter.
The team also examined Milky Way emissions at the WMAP frequencies, producing a 208-point source catalogue.
Three-year data release
The three-year WMAP data were released on 17 March 2006. The data included temperature and polarization measurements of the CMB, which provided further confirmation of the standard flat Lambda-CDM model and new evidence in support of inflation.
The 3-year WMAP data alone show that the universe must have dark matter. Results were computed both using WMAP data alone, and also with a mix of parameter constraints from other instruments, including other CMB experiments (the Arcminute Cosmology Bolometer Array Receiver (ACBAR), the Cosmic Background Imager (CBI) and BOOMERANG), the Sloan Digital Sky Survey (SDSS), the 2dF Galaxy Redshift Survey, the Supernova Legacy Survey, and constraints on the Hubble constant from the Hubble Space Telescope.
Notes: the optical depth to reionization improved due to polarization measurements; a limit of <0.30 applies when combined with SDSS data; there is no indication of non-gaussianity.
Five-year data release
The five-year WMAP data were released on 28 February 2008. The data included new evidence for the cosmic neutrino background, evidence that it took over half a billion years for the first stars to reionize the universe, and new constraints on cosmic inflation.
The improvement in the results came both from having an extra two years of measurements (the data set runs from midnight on 10 August 2001 to midnight on 9 August 2006) and from using improved data processing techniques and a better characterization of the instrument, most notably of the beam shapes. The analysis also makes use of the 33 GHz observations for estimating cosmological parameters; previously only the 41 GHz and 61 GHz channels had been used.
Improved masks were used to remove foregrounds. Improvements to the spectra were in the 3rd acoustic peak, and the polarization spectra.
The measurements put constraints on the content of the universe at the time that the CMB was emitted; at that time the universe was made up of 10% neutrinos, 12% atoms, 15% photons, and 63% dark matter. The contribution of dark energy at the time was negligible. The measurements also constrained the content of the present-day universe: 4.6% atoms, 23% dark matter and 72% dark energy.
The WMAP five-year data were combined with measurements from Type Ia supernovae (SNe) and baryon acoustic oscillations (BAO).
The elliptical shape of the WMAP skymap is the result of a Mollweide projection.
The data put limits on the value of the tensor-to-scalar ratio, r < 0.22 (95% certainty), which determines the level at which gravitational waves affect the polarization of the CMB, and also put limits on the amount of primordial non-gaussianity. Improved constraints were placed on the redshift of reionization, the redshift of decoupling (as well as the age of the universe at decoupling), and the redshift of matter/radiation equality.
The extragalactic source catalogue was expanded to include 390 sources, and variability was detected in the emission from Mars and Saturn.
Seven-year data release
The seven-year WMAP data were released on 26 January 2010. As part of this release, claims for inconsistencies with the standard model were investigated. Most were shown not to be statistically significant, and likely due to a posteriori selection (where one sees a weird deviation, but fails to consider properly how hard one has been looking; a deviation with 1:1000 likelihood will typically be found if one tries one thousand times). For the deviations that do remain, there are no alternative cosmological ideas (for instance, there seem to be correlations with the ecliptic pole). It seems most likely these are due to other effects, with the report mentioning uncertainties in the precise beam shape and other possible small remaining instrumental and analysis issues.
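As a rough illustration of that point (a generic probability estimate, not a figure from the WMAP papers): if roughly a thousand independent features are examined, the chance of finding at least one deviation whose individual likelihood is 1:1000 is 1 − (1 − 10⁻³)¹⁰⁰⁰ ≈ 1 − e⁻¹ ≈ 63%, so such "anomalies" are expected even in purely Gaussian data.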
Another confirmation of major significance concerns the total amount of matter/energy in the universe: 72.8% (within 1.6%) is dark energy, a non-'particle' background, and 22.7% (within 1.4%) is dark matter, non-baryonic (sub-atomic) 'particle' energy. This leaves ordinary matter, or baryonic particles (atoms), at only 4.56% (within 0.16%).
Nine-year data release
On 29 December 2012, the nine-year WMAP data and related images were released. The image shows temperature fluctuations from the early universe, spanning a range of ±200 microkelvins. In addition, the study found that 95% of the universe is composed of dark matter and dark energy, that the curvature of space deviates from "flat" by less than 0.4%, and that the universe emerged from the cosmic Dark Ages "about 400 million years" after the Big Bang.
Main result
The main result of the mission is contained in the various oval maps of the CMB temperature differences. These oval images present the temperature distribution derived by the WMAP team from the observations by the telescope during the mission. The quantity measured is the temperature obtained from a Planck's-law interpretation of the microwave background. The oval map covers the whole sky. The results are a snapshot of the universe around 375,000 years after the Big Bang, which happened about 13.8 billion years ago. The microwave background is very homogeneous in temperature (the relative variations from the mean, which is presently still 2.7 kelvins, are only of the order of 10⁻⁵). The temperature variations corresponding to different directions on the sky are presented through different colors (the "red" directions are hotter, the "blue" directions cooler than the average).
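For background (a standard formula rather than a WMAP-specific result), the Planck's-law interpretation means fitting the measured specific intensity in each direction to a blackbody spectrum,

    B_nu(T) = (2 h nu^3 / c^2) * 1 / (exp(h nu / (k_B T)) - 1),

and reporting the best-fit temperature for that direction; a fractional variation of order 10⁻⁵ about a mean of roughly 2.7 K then corresponds to fluctuations of a few tens of microkelvins.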
Follow-on missions and future measurements
The original timeline for WMAP gave it two years of observations; these were completed by September 2003. Mission extensions were granted in 2002, 2004, 2006, and 2008, giving the spacecraft a total of nine observing years, which ended in August 2010; in October 2010 the spacecraft was moved to a heliocentric "graveyard" orbit.
The Planck spacecraft also measured the CMB from 2009 to 2013 and aimed to refine the measurements made by WMAP, both in total intensity and in polarization. Various ground- and balloon-based instruments have also made CMB contributions, and others are being constructed to do so. Many are aimed at searching for the B-mode polarization expected from the simplest models of inflation, including the E and B Experiment (EBEX), Spider, BICEP and the Keck Array (BICEP2), QUIET, the Cosmology Large Angular Scale Surveyor (CLASS), the South Pole Telescope (SPTpol) and others.
On 21 March 2013, the European-led research team behind the Planck spacecraft released the mission's all-sky map of the cosmic microwave background. The map suggests the universe is slightly older than previously thought. According to the map, subtle fluctuations in temperature were imprinted on the deep sky when the cosmos was about 370,000 years old. The imprint reflects ripples that arose as early in the existence of the universe as the first nonillionth (10⁻³⁰) of a second. Apparently, these ripples gave rise to the present vast cosmic web of galaxy clusters and dark matter. Based on the 2013 data, the universe contains 4.9% ordinary matter, 26.8% dark matter and 68.3% dark energy. On 5 February 2015, new data were released by the Planck mission, according to which the age of the universe is 13.799 ± 0.021 billion years and the Hubble constant is 67.74 ± 0.46 (km/s)/Mpc.
| Technology | Unmanned spacecraft | null |
186495 | https://en.wikipedia.org/wiki/Centipede | Centipede | Centipedes (from Neo-Latin , "hundred", and Latin , "foot") are predatory arthropods belonging to the class Chilopoda (Ancient Greek , kheilos, "lip", and Neo-Latin suffix , "foot", describing the forcipules) of the subphylum Myriapoda, an arthropod group which includes millipedes and other multi-legged animals. Centipedes are elongated segmented (metameric) animals with one pair of legs per body segment. All centipedes are venomous and can inflict painful stings, injecting their venom through pincer-like appendages known as forcipules or toxicognaths, which are actually modified legs instead of fangs. Despite the name, no species of centipede has exactly 100 legs; the number of pairs of legs is an odd number that ranges from 15 pairs to 191 pairs.
Centipedes are predominantly generalist carnivores, hunting a variety of prey items that can be overpowered. They have a wide geographical range and can be found in terrestrial habitats from tropical rainforests to deserts. Within these habitats, centipedes require a moist microhabitat because they lack the waxy cuticle of insects and arachnids and therefore rapidly lose water. Accordingly, they avoid direct sunlight by staying under cover or by being active at night.
Description
Centipedes have a rounded or flattened head, bearing a pair of antennae at the forward margin. They have a pair of elongated mandibles, and two pairs of maxillae. The first pair of maxillae form the lower lip, and bear short palps. The first pair of limbs stretch forward from the body over the mouth. These limbs, or forcipules, end in sharp claws and include venom glands that help the animal to kill or paralyze its prey.
Their size ranges from a few millimetres in the smaller lithobiomorphs and geophilomorphs to about in the largest scolopendromorphs.
Sensory organs
Many species of centipedes lack eyes, but some possess a variable number of ocelli, sometimes clustered together to form true compound eyes. However, these eyes are only capable of discerning light from dark, and provide no true vision. In some species, the first pair of legs can function as sensory organs, similar to antennae; unlike the antennae of most other invertebrates, these point backwards. An unusual clustering of sensory organs found in some centipedes is the organ of Tömösváry. The organs, at the base of the antennae, consist of a disc-like structure and a central pore, with an encircling of sensitive cells. They are likely used for sensing vibrations, and may provide a weak form of hearing.
Forcipules
Forcipules are unique to centipedes. The forcipules are modifications of the first pair of legs (the maxillipeds), forming a pincer-like appendage, just behind the head. Forcipules are not oral mouthparts, though they are used to subdue prey by injecting venom and gripping the prey animal. Venom glands run through a tube, from inside the head to the tip of each forcipule.
Body
Behind the head, the body consists of at least fifteen segments. Most of the segments bear a single pair of legs; the maxillipeds project forward from the first body segment, while the final two segments are small and legless. Each pair of legs is slightly longer than the pair preceding it, ensuring that they do not overlap, which reduces the chance that they will collide and trip the animal. The last pair of legs may be as much as twice the length of the first pair. The final segment bears a telson and includes the openings of the reproductive organs.
Centipedes mainly use their antennae to seek out their prey. The digestive tract forms a simple tube, with digestive glands attached to the mouthparts. Like insects, centipedes breathe through a tracheal system, typically with a single opening, or spiracle, on each body segment. They excrete waste through a single pair of malpighian tubules.
Ultimate legs
Just as the first pair of legs are modified into forcipules, the back legs are modified into "ultimate legs", also called anal legs, caudal legs, and terminal legs. Their use varies between species, but does not include locomotion. The ultimate legs may be elongated and thin, thickened, or pincer-like. They are frequently sexually dimorphic, and may play a role in mating rituals. Because glandular pores occur more frequently on ultimate legs than on the "walking" legs, they may serve a sensory role. They are sometimes used in defensive postures, and some species use them to capture prey, defend themselves against predators, or suspend themselves from objects such as branches, using the legs as pincers. Several species use their ultimate legs upon encountering another centipede, trying to grab the body of the other centipede.
Members of the genus Alipes can stridulate their leaf-like ultimate legs to distract or threaten predators. Rhysida immarginata togoensis makes a faint creaking sound when it swings its ultimate legs.
Distinction from millipedes
There are many differences between millipedes and centipedes. Both groups of myriapods have long, multi-segmented bodies, many legs, a single pair of antennae, and the presence of postantennal organs. Centipedes have one pair of legs per segment, while millipedes have two. Their heads differ in that millipedes have short, elbowed antennae, a pair of robust mandibles and a single pair of maxillae fused into a lip; centipedes have long, threadlike antennae, a pair of small mandibles, two pairs of maxillae and a pair of large venom claws.
Life cycle
Reproduction
Centipede reproduction does not involve copulation. Males deposit a spermatophore for the female to take up. In temperate areas, egg laying occurs in spring and summer. A few parthenogenetic species are known. Females provide parental care, both by curling their bodies around eggs and young, and by grooming them, probably to remove fungi and bacteria.
Centipedes are longer-lived than insects; the European Lithobius forficatus may live for 5 to 6 years, and the wide-ranging Scolopendra subspinipes can live for over 10 years. The combination of a small number of eggs laid, long gestation period, and long time of development to reproduction has led authors to label lithobiomorph centipedes as K-selected.
Development
Centipedes grow their legs at different points in their development. In the primitive condition, seen in the orders Lithobiomorpha, Scutigeromorpha, and Craterostigmomorpha, development is anamorphic: more segments and pairs of legs are grown between moults. For example, Scutigera coleoptrata, the house centipede, hatches with only four pairs of legs and in successive moults has 5, 7, 9, 11, 15, 15, 15 and 15 pairs respectively, before becoming a sexually mature adult. Life stages with fewer than 15 pairs of legs are called larval stadia (there are about five stages). After the full complement of legs is achieved, the now postlarval stadia (about five more stages) develop gonopods, sensory pores, more antennal segments, and more ocelli. All mature lithobiomorph centipedes have 15 leg-bearing segments. The Craterostigmomorpha only have one phase of anamorphosis, with embryos having 12 pairs, and adults 15.
The clade Epimorpha, consisting of the orders Geophilomorpha and Scolopendromorpha, is epimorphic, meaning that all pairs of legs are developed in the embryonic stages, and offspring do not develop more legs between moults. This clade contains the longest centipedes. In the Geophilomorpha, the number of thoracic segments usually varies within species, often on a geographical basis, and in most cases, females bear more legs than males. The number of leg-bearing segments varies within each order (usually 21 or 23 in the Scolopendromorpha; from 27 to 191 in the Geophilomorpha), but the total number of pairs is always odd, so there are never exactly 100 legs or 100 pairs, despite the group's common name.
Centipede segments are developed in two phases. Firstly, the head gives rise to a fixed but odd number of segments, driven by Hox genes as in all arthropods. Secondly, pairs of segments are added at the tail (posterior) end by the creation of a prepattern unit, a double segment, which is then always divided into two. The repeated creation of these prepattern units is driven by an oscillator clock, implemented with the Notch signalling pathway. The segments are homologous with the legs of other arthropods such as trilobites; it would be sufficient for the Notch clock to run faster, as it does in snakes, to create more legs.
Ecology
Diet
Centipedes are predominantly generalist predators, which means they are adapted to eat a broad range of prey. Common prey items include lumbricid earthworms, dipteran fly larvae, collembolans, and other centipedes. They are carnivorous; study of gut contents suggests that plant material is an unimportant part of their diets, although they eat vegetable matter when starved during laboratory experiments.
Scolopendromorph species, notably members of the genera Scolopendra and Ethmostigmus, are able to hunt substantial prey items, including large invertebrates and sizable vertebrates that may be larger than the centipede itself. For instance, Scolopendra gigantea (the Amazonian giant centipede) preys on tarantulas, scorpions, lizards, frogs, birds, mice, snakes, and even bats, catching them in midflight. Three species (Scolopendra cataracta, S. paradoxa, and S. alcyona) are amphibious and are believed to hunt aquatic or amphibious invertebrates.
Predators
Many larger animals prey upon centipedes, such as mongooses, mice, salamanders, beetles and some specialist snake species. They form an important item of diet for many species and the staple diet of some such as the African ant Amblyopone pluto, which feeds solely on geophilomorph centipedes, and the South African Cape black-headed snake Aparallactus capensis.
Defences
Some Geophilomorph, Lithobiomorph, and Scolopendromorph centipedes produce sticky, toxic secretions to defend themselves. The various secretions ward off or entangle predators. Scolopendromorph secretions contain hydrogen cyanide.
Among Geophilomorphs, the secretions of Geophilus vittatus are sticky and odorous, and contain hydrogen cyanide.
The giant desert centipede of Arizona, Scolopendra polymorpha, has a black head and tail, and an orange body; this conspicuous pattern may be aposematic, an honest signal of the animal's toxicity. Many species raise and splay their ultimate legs and display the spines found on the legs in a defensive threat posture.
Habitat and behaviour
Because centipedes lack the waxy water-resistant cuticle of other arthropods, they are more susceptible to water loss via evaporation. Thus, centipedes are most commonly found in high-humidity environments to avoid dehydration, and are mostly nocturnal.
Centipedes live in many different habitats including in soil and leaf litter; they are found in environments as varied as tropical rain forests, deserts, and caves. Some geophilomorphs are adapted to littoral habitats, where they feed on barnacles.
Threatened species
According to the IUCN Red List, there are one vulnerable, six endangered, and three critically endangered species of centipede. For example, the Serpent Island centipede (Scolopendra abnormis) is vulnerable, and Turk's earth centipede (Nothogeophilus turki) and the Seychelles long-legged centipede (Seychellonema gerlachi) are both endangered.
Evolution
Fossil history
The fossil record of centipedes extends back to the Late Silurian (Crussolum), though they are rare throughout the Paleozoic. The Devonian Panther Mountain Formation contains two species of centipede. One is a species of the scutigeromorph Crussolum. The other is Devonobius, which is included in the extinct group Devonobiomorpha. Another Devonian site, the Rhynie chert, also bears Crussolum fossils, and possible scutigeromorph head material. Rhyniognatha, which was once thought to be the oldest insect fossil, is also found in the Rhynie chert. Three species, one scutigeromorph (Latzelia) and two scolopendromorphs (Mazoscolopendra and the poorly known Palenarthrus), have been described from the Mazon Creek fossil beds, which are Carboniferous, 309–307 mya. More species appear in the Mesozoic, including scolopendromorphs and scutigeromorphs in the Cretaceous.
External phylogeny
The following cladogram shows the position of the Chilopoda within the arthropods as of 2019:
Internal phylogeny
Within the myriapods, centipedes are believed to be the first of the extant classes to branch from the last common ancestor. The five orders of centipedes are: Craterostigmomorpha, Geophilomorpha, Lithobiomorpha, Scolopendromorpha, and Scutigeromorpha. These orders are united into the clade Chilopoda by the following synapomorphies:
The first postcephalic appendage is modified to venom claws.
The embryonic cuticle on second maxilliped has an egg tooth.
The trochanter–prefemur joint is fixed.
A spiral ridge occurs on the nucleus of the spermatozoon.
The Chilopoda are then split into two clades: the Notostigmophora including the Scutigeromorpha and the Pleurostigmophora including the other four orders. The following physical and developmental traits can be used to separate members of the Pleurostigmomorpha from Notostigmomorpha:
The spiracles are located on the sides of the centipede (in Notostigmomorphs, they are located dorsally).
The spiracles are deep, more complex, and always present in pairs.
The head is somewhat flatter.
The centipedes can develop through either anamorphosis or epimorphosis.
It was previously believed that Chilopoda was split into Anamorpha (Lithobiomorpha and Scutigeromorpha) and Epimorpha (Geophilomorpha and Scolopendromorpha), based on developmental modes, with the relationship of the Craterostigmomorpha being uncertain. Recent phylogenetic analyses using combined molecular and morphological characters support the previous phylogeny. The Epimorpha still exist as a monophyletic group within the Pleurostigmophora, but the Anamorpha are paraphyletic, as shown in the cladogram:
Evolution of venoms
All centipedes are venomous. Over the first 50 million years of the clade's evolutionary history, centipede venoms appear to have consisted of a simple cocktail of about four different components, and differentiation into specific venom types appears to have only occurred after the currently recognized five orders had developed. The evolution of the venom includes horizontal gene transfer, involving bacteria, fungi and oomycetes.
Interaction with humans
As food
As a food item, certain large centipedes are consumed in China, usually skewered and grilled or deep fried. They are often seen in street vendors’ stalls in large cities, including Donghuamen and Wangfujing markets in Beijing.
Large centipedes are steeped in alcohol to make centipede vodka.
Hazard
Some species of centipedes can be hazardous to humans because of their bite. While a bite to an adult human is usually very painful and may cause severe swelling, chills, fever, and weakness, it is unlikely to be fatal. Bites can be dangerous to small children and those with allergies to bee stings. The venomous bite of larger centipedes can induce anaphylactic shock in such people. Smaller centipedes are generally incapable of piercing human skin.
Even small centipedes that cannot pierce human skin are considered frightening by some humans due to their dozens of legs moving at the same time and their tendency to dart swiftly out of the darkness towards one's feet. A 19th-century Tibetan poet warned his fellow Buddhists, "if you enjoy frightening others, you will be reborn as a centipede."
| Biology and health sciences | Myriapoda | null |
186622 | https://en.wikipedia.org/wiki/Old%20World%20vulture | Old World vulture | Old World vultures are vultures that are found in the Old World, i.e. the continents of Europe, Asia and Africa, and which belong to the family Accipitridae, which also includes eagles, buzzards, kites, and hawks.
Old World vultures are not closely related to the superficially similar New World vultures and condors, and do not share that group's good sense of smell. The similarities between the two groups of vultures are due to convergent evolution, rather than a close relationship. They were widespread in both the Old World and North America during the Neogene.
Old World vultures are probably a polyphyletic group within Accipitridae, belonging to two separate, not closely related groups within the family. Most authorities refer to two major clades: Gypaetinae (Gypaetus, Gypohierax and Neophron) and Aegypiinae (Aegypius, Gyps, Sarcogyps, Torgos, Trigonoceps and possibly Necrosyrtes). The former seem to be nested with Perninae hawks, while the latter are closely related and possibly even synonymous with Aquilinae. Within Aegypiinae, Torgos, Aegypius, Sarcogyps and Trigonoceps are particularly closely related and possibly within the same genus. Despite the name of the group, "Old World" vultures were widespread in North America until relatively recently, disappearing only at the end of the Late Pleistocene epoch around 11,000 years ago.
Both Old World and New World vultures are scavenging birds, feeding mostly from carcasses of dead animals. Old World vultures find carcasses exclusively by sight. A particular characteristic of many vultures is a semi-bald head, sometimes without feathers or with just simple down. Historically, it was thought that this was due to feeding habits, as feathers would be glued with decaying flesh and blood. However, more recent studies have shown that it is actually a thermoregulatory adaptation to avoid facial overheating; the presence or absence of complex feathers seems to matter little in feeding habits, as some vultures are quite raptorial.
Species
† = extinct
Population declines, threats, and implications
Population declines
More than half of the Old World vulture species are listed as vulnerable, endangered, or critically endangered by the IUCN Red List. Population declines are caused by a variety of threats that vary by species and region, with the most notable declines in Asia due to diclofenac use. Within Africa, a combination of poisonings and vulture trade (including use as bushmeat and traditional medicine) accounts for roughly 90% of the population declines. Because vultures are scavengers, their population decline can have cultural, public health, and economic implications for communities, and can be even more problematic than the decline of other endangered species. Vulture populations are particularly vulnerable because they typically feed in large groups and easily fall victim to mass poisoning events.
Threats
Diclofenac
Diclofenac poisoning has caused the vulture population in India and Pakistan to decline by up to 99%, and two or three species of vulture in South Asia are nearing extinction. This has been caused by the practice of medicating working farm animals with diclofenac, which is a non-steroidal anti-inflammatory drug (NSAID) with anti-inflammatory and pain-killing actions. Diclofenac administration keeps animals that are ill or in pain working on the land for longer, but, if the ill animals die, their carcasses contain diclofenac. Farmers leave the dead animals out in the open, relying on vultures to tidy up. Diclofenac present in carcass flesh is eaten by vultures, which are sensitive to diclofenac, and they suffer kidney failure, visceral gout, and death as a result of diclofenac poisoning. The drug is poisonous enough that only a small proportion of animal carcasses needs to contain it to have detrimental effects on vulture populations.
Meloxicam (another NSAID) has been found to be harmless to vultures and should prove an acceptable alternative to diclofenac. Bans on veterinary use of diclofenac have been implemented in Pakistan and Nepal, and selling or using the drug in India can result in jail time. But although the Government of India banned diclofenac, it was still being sold over a year later, in 2007, and the drug remains a problem in other parts of the world.
Poached carcass poisonings
Poisoning accounts for a majority of vulture deaths in Africa. Ivory poachers poison carcasses with the intent of killing vultures, since vultures circling over a carcass alert authorities to a kill. An increase in demand for ivory has both increased the rate of elephant poaching and increased the rate at which vultures are killed off by consuming the poisoned elephant remains. In Kruger National Park, white-backed vultures will be eradicated in the next 60 years if poisoned carcasses are not detected and neutralized. Eliminating carcass poisoning is challenging because it is far easier to carry out than to regulate. Park officials often lack the training to identify toxic chemicals before it is too late, and relying on ordinary community members to report perpetrators is difficult if the financial incentives to do so are insufficient.
Agricultural poisonings
Vultures are also unintentionally poisoned when they consume carcasses of predators that have been poisoned by livestock farmers. For those who rely on livestock to make a living, illegal pesticides are often used on fruits, meats, or even the water in a watering hole in order to eliminate large predators that threaten their livestock. Agricultural poisoning is relatively easy because it does not require specific skills and the poison is cheap and has a long shelf life.
Traditional medicine/belief and use
Vultures in Africa are killed for use in traditional medicine as part of the African vulture trade. Vultures can be targeted for the industry directly or collected from other poisoning events, but close to 30% of vulture deaths recorded in Africa can be tied back to belief-based use. In South Africa, vulture consumption events have been estimated to occur 59,000 times a year. Vulture heads are believed to provide clairvoyance or good luck like winning the lottery. The length of time a vulture can be used by healers is dependent on size and species. Some healers have been recorded using Cape vultures for 6 years because they are said to last longer than other species. Others use 1-2 individuals a year but this rate is unsustainable given the estimated number of healers.
Muthi
In Southern Africa, traditional medicine is called Muthi. For some healers it is believed to cure illnesses, while others believe it cures curses. Vulture muthi involves separate body parts being dried, burned, or ground up. The results may be consumed by mixing with food, drinking, snorting, or applying to cuts. Some healers look for signs of poisoning when purchasing vultures, but others are unaware of how to do this and are at risk of poisoning their clients.
Bushmeat consumption
Another part of the African vulture trade is use for bushmeat consumption.
Electrical infrastructure
Collisions with electrical infrastructure account for roughly 9% of vulture deaths in Africa. Some organizations in South Africa are working with power companies to mitigate this threat.
Implications
As vultures play an important role in ecosystems, their population decline can have cultural, public health, and economic implications for communities.
The decline in vultures has led to hygiene problems in India as carcasses of dead animals now tend to rot, or be eaten by rats or feral dogs, rather than be consumed by vultures. Rabies among these other scavengers is a major health threat. India has one of the world's highest incidences of rabies.
For communities such as the Parsi, who practice sky burials in which human corpses are put on the top of a Tower of Silence, vulture population declines can have serious cultural implications.
Conservation efforts
Conservation efforts would be most effective in large, protected areas because vultures are most populous in those. Small but frequent poisoning events have a more detrimental effect on vulture populations than larger, infrequent events because population recovery is more successful when there is a longer time between poisonings. To increase populations, vultures can be reintroduced to poison-free protected areas near other groups of vultures to keep the populations high. This will make it easier for vultures to maintain some individuals after a poisoning event. A project named "Vulture Restaurant" is underway in Nepal in an effort to conserve the dwindling number of vultures. The "restaurant" is an open grassy area where naturally dying, sick, and old cows are fed to the vultures.
Organizations across Africa are working to reduce threats to vulture species with efforts to change and create policies to protect species both at the national and international scale. Proposed strategies to reduce poisoning events include mobile phone numbers to report offenders, campaigns to educate about poisoning risks to humans, and improving response speed to poisoning events. Poison response training would be an important implementation in conservation efforts because this is one of the most prevalent threats to vulture populations.
| Biology and health sciences | Accipitriformes and Falconiformes | null |
186630 | https://en.wikipedia.org/wiki/Cosmic%20Background%20Explorer | Cosmic Background Explorer | The Cosmic Background Explorer (COBE ), also referred to as Explorer 66, was a NASA satellite dedicated to cosmology, which operated from 1989 to 1993. Its goals were to investigate the cosmic microwave background radiation (CMB or CMBR) of the universe and provide measurements that would help shape the understanding of the cosmos.
COBE's measurements provided two key pieces of evidence that supported the Big Bang theory of the universe: that the CMB has a near-perfect black-body spectrum, and that it has very faint anisotropies. Two of COBE's principal investigators, George F. Smoot III and John C. Mather, received the Nobel Prize in Physics in 2006 for their work on the project. According to the Nobel Prize committee, "the COBE project can also be regarded as the starting point for cosmology as a precision science".
COBE was the second cosmic microwave background satellite, following RELIKT-1, and was followed by two more advanced spacecraft: the Wilkinson Microwave Anisotropy Probe (WMAP) operated from 2001 to 2010 and the Planck spacecraft from 2009 to 2013.
Mission
The purpose of the Cosmic Background Explorer (COBE) mission was to take precise measurements of the diffuse radiation between 1 micrometre and 1 cm over the whole celestial sphere. The following quantities were measured: (1) the spectrum of the 3 K radiation over the range 100 micrometres to 1 cm; (2) the anisotropy of this radiation at millimetre wavelengths; and (3) the spectrum and angular distribution of diffuse infrared background radiation at wavelengths from 1 to 300 micrometres.
History
In 1974, NASA issued an Announcement of Opportunity for astronomical missions that would use a small- or medium-sized Explorer spacecraft. Out of the 121 proposals received, three dealt with studying the cosmological background radiation. Though these proposals lost out to the Infrared Astronomical Satellite (IRAS), their strength made NASA further explore the idea. In 1976, NASA formed a committee of members from each of 1974's three proposal teams to put together their ideas for such a satellite. A year later, this committee suggested a polar-orbiting satellite called COBE to be launched by either a Delta 5920-8 launch vehicle or the Space Shuttle. It would contain three instruments: Differential Microwave Radiometers (DMR), a Far Infrared Absolute Spectrophotometer (FIRAS), and a Diffuse Infrared Background Experiment (DIRBE).
NASA accepted the proposal provided that the costs be kept under US$30 million, excluding launcher and data analysis. Because of cost overruns in the Explorer program caused by IRAS, work on constructing the satellite at Goddard Space Flight Center (GSFC) did not begin until 1981. To save costs, the infrared detectors and liquid helium dewar on COBE would be similar to those used on IRAS.
COBE was originally planned to be launched on a Space Shuttle mission STS-82-B in 1988 from Vandenberg Air Force Base, but the Challenger explosion delayed this plan when the Shuttles were grounded. NASA prevented COBE's engineers from going to other space companies to launch COBE, and eventually a redesigned COBE was placed into Sun-synchronous orbit on 18 November 1989 aboard a Delta launch vehicle.
On 23 April 1992, COBE scientists announced at the APS April Meeting in Washington, D.C. the finding of the "primordial seeds" (CMB anisotropy) in data from the DMR instrument; until then the other instruments were "unable to see the template." The following day The New York Times ran the story on the front page, explaining the finding as "the first evidence revealing how an initially smooth cosmos evolved into today's panorama of stars, galaxies and gigantic clusters of galaxies."
The Nobel Prize in Physics for 2006 was jointly awarded to John C. Mather, NASA Goddard Space Flight Center, and George F. Smoot III, University of California, Berkeley, "for their discovery of the blackbody form and anisotropy of the cosmic microwave background radiation".
Spacecraft
COBE was an Explorer class satellite, with technology borrowed heavily from IRAS, but with some unique characteristics.
The need to control and measure all sources of systematic error required a rigorous and integrated design. COBE would have to operate for a minimum of 6 months, and constrain the amount of radio interference from the ground, from COBE itself and from other satellites, as well as radiative interference from the Earth, Sun and Moon. The instruments required temperature stability in order to maintain gain, and a high level of cleanliness to reduce the entry of stray light and thermal emission from particulates.
The need to control systematic error in the measurement of the CMB anisotropy, and to measure the zodiacal cloud at different elongation angles for subsequent modeling, required that the satellite rotate at a 0.8 rpm spin rate. The spin axis is also tilted back from the orbital velocity vector as a precaution against possible deposits of residual atmospheric gas on the optics, as well as against the infrared glow that would result from fast neutral particles hitting its surfaces at extremely high speed.
In order to meet the twin demands of slow rotation and three-axis attitude control, a sophisticated pair of yaw angular momentum wheels was employed with their axes oriented along the spin axis. These wheels were used to carry an angular momentum opposite to that of the entire spacecraft in order to create a zero net angular momentum system.
The orbit was determined by the specifics of the spacecraft's mission. The overriding considerations were the need for full sky coverage, the need to eliminate stray radiation from the instruments, and the need to maintain thermal stability of the dewar and the instruments. A circular Sun-synchronous orbit satisfied all these requirements. An orbit at an altitude of about 900 km with a 99° inclination was chosen, as it fit within the capabilities of either a Space Shuttle (with auxiliary propulsion on COBE) or a Delta launch vehicle. This altitude was a good compromise between Earth's radiation and the charged particles in Earth's radiation belts at higher altitudes. An ascending node at 18:00 was chosen to allow COBE to follow the boundary between sunlight and darkness on Earth throughout the year.
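For reference (a standard astrodynamics relation, not taken from the COBE documentation), Sun-synchronicity means the orbital plane's ascending node precesses by about 360° per year (≈ 0.9856° per day) under the influence of Earth's oblateness. The secular nodal precession rate is

    dΩ/dt = -(3/2) * n * J2 * (R_E / (a * (1 - e^2)))^2 * cos(i),

where n is the mean motion, J2 ≈ 1.08×10⁻³ is Earth's oblateness coefficient, R_E is Earth's equatorial radius, a the semi-major axis, e the eccentricity and i the inclination. A positive (eastward) precession requires cos(i) < 0, i.e. a retrograde orbit, and for an altitude near 900 km the condition works out to an inclination of roughly 99°, consistent with the orbit described above.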
The orbit combined with the spin axis made it possible to keep the Earth and the Sun continually below the plane of the shield, allowing a full sky scan every six months.
The last two important parts pertaining to the COBE mission were the dewar and Sun-Earth shield. The dewar was a superfluid helium cryostat designed to keep the FIRAS and DIRBE instruments cooled during the duration of the mission. It was based on the same design as one used on IRAS and was able to vent helium along the spin axis near the communication arrays. The conical Sun-Earth shield protected the instruments from direct solar and Earth-based radiation as well as radio interference from Earth and the COBE's transmitting antenna. Its multilayer insulating blankets provided thermal isolation for the dewar.
In January 1994, engineering operations concluded and the operation of the spacecraft was transferred to Wallops Flight Facility (WFF) for use as a test satellite.
Instruments
Differential Microwave Radiometers (DMR)
The Differential Microwave Radiometer (DMR) investigation uses three differential radiometers to map the sky at 31.4, 53, and 90 GHz. The radiometers are distributed around the outer surface of the cryostat. Each radiometer employs a pair of horn antennas viewing at 30° from the spin axis of the spacecraft, measuring the differential temperature between points in the sky separated by 60°. At each frequency, there are two channels for dual-polarization measurements for improved sensitivity and for reliability. Each radiometer is a microwave receiver whose input is switched rapidly between the two horn antennas, obtaining the difference in brightness of two fields of view 7° in diameter located 60° apart and 30° from the axis of the spacecraft. High sensitivity is achieved by temperature stabilization (at 300 K for 31.4 GHz and at 140 K for 53 and 90 GHz), by spacecraft spin, and by the ability to integrate over the entire year. Sensitivity to large-scale anisotropies is about 3×10⁻⁵ K. The instrument uses 114 watts of power and has a data rate of 500 bit/s.
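A toy Python sketch (not COBE's actual data-reduction pipeline; the pixel count, noise level and observation count are invented) of the idea behind differential radiometry: each observation records only a temperature difference between two sky directions, and a map is recovered from many such differences by least squares, with the unmeasurable overall offset (monopole) pinned by an extra constraint:

    import numpy as np

    rng = np.random.default_rng(0)

    npix = 50                                   # toy "sky" of 50 pixels
    true_sky = rng.normal(0.0, 30e-6, npix)     # ~30 microkelvin fluctuations
    true_sky -= true_sky.mean()                 # remove the monopole for comparison

    nobs = 5000
    i = rng.integers(0, npix, nobs)             # pixel seen by the first horn
    j = rng.integers(0, npix, nobs)             # pixel seen by the second horn
    keep = i != j
    i, j = i[keep], j[keep]

    noise = rng.normal(0.0, 100e-6, i.size)     # per-observation radiometer noise
    data = true_sky[i] - true_sky[j] + noise    # differential measurements

    # Pointing matrix: each row has +1 for the first horn's pixel, -1 for the second's.
    A = np.zeros((i.size, npix))
    A[np.arange(i.size), i] = 1.0
    A[np.arange(i.size), j] = -1.0

    # Differences cannot constrain the mean, so add one row enforcing sum(T) = 0.
    A = np.vstack([A, np.ones((1, npix))])
    data = np.append(data, 0.0)

    est_sky, *_ = np.linalg.lstsq(A, data, rcond=None)
    print("rms map error (K):", np.sqrt(np.mean((est_sky - true_sky) ** 2)))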
Diffuse Infrared Background Experiment (DIRBE)
The Diffuse Infrared Background Experiment (DIRBE) consists of a cryogenically cooled (to 2 K) multiband radiometer used to investigate diffuse infrared radiation from 1 to 300 micrometres. The instrument measures the absolute flux in 10 wavelength bands with a 1° field of view pointed 30° off the spin axis. Detectors (photoconductors) and filters for the 8 to 100 micrometre channels are the same as for the IRAS mission. Bolometers are used for the longest wavelength channel (120 to 300 micrometres). The telescope is a well-baffled, off-axis, Gregorian flux collector with re-imaging. The instrument uses 100 W of power and has a data rate of 1700 bit/s.
Far Infrared Absolute Spectrophotometer (FIRAS)
The Far Infrared Absolute Spectrophotometer (FIRAS) is a cryogenically cooled polarizing Michelson interferometer used as a Fourier transform spectrometer. The instrument points along the spin axis and has a 7° field of view. This device measures the spectrum to a precision of 1/1000 of the peak flux for each 7° field of view on the sky. The FIRAS uses a special flared trumpet horn flux collector having very low sidelobe levels and an external calibrator covering the entire beam; precise temperature regulation and calibration are required. The instrument has a differential input to compare the sky with an internal reference at 3 K. This feature provides immunity from systematic errors in the spectrometer and contributes significantly to the ability to detect small deviations from a blackbody spectrum. The instrument uses 84 watts of power and has a data rate of 1200 bit/s.
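A schematic Python illustration (not the FIRAS data pipeline; the toy spectrum and sampling are invented) of how a Michelson interferometer acts as a Fourier transform spectrometer: the interferogram recorded against optical path difference is the cosine transform of the source spectrum, so a Fourier transform of the interferogram recovers the spectrum:

    import numpy as np

    n = 2048
    dx = 0.004                                  # optical path difference step (cm)
    x = np.arange(n) * dx                       # path-difference samples (cm)
    sigma = np.fft.rfftfreq(n, d=dx)            # wavenumber axis (cm^-1), 0 to 125

    # Toy source spectrum: a broad bump plus a narrow line (arbitrary units).
    true_spectrum = (np.exp(-((sigma - 10.0) / 4.0) ** 2)
                     + 0.5 * np.exp(-((sigma - 30.0) / 0.5) ** 2))

    # The interferogram is the spectrum-weighted sum of cosines of 2*pi*sigma*x.
    interferogram = true_spectrum @ np.cos(2 * np.pi * np.outer(sigma, x))

    # A real FFT of the interferogram recovers the spectrum up to normalization.
    recovered = np.fft.rfft(interferogram).real
    recovered *= true_spectrum.max() / recovered.max()

    print(sigma[true_spectrum.argmax()], sigma[recovered.argmax()])   # both near 10 cm^-1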
Scientific findings
The science mission was conducted by the three instruments detailed previously: DIRBE, FIRAS and DMR. The instruments overlapped in wavelength coverage, providing a consistency check on measurements in the regions of spectral overlap and assisting in discriminating signals from our galaxy, the Solar System and the CMB.
COBE's instruments would fulfill each of their objectives as well as making observations that would have implications outside COBE's initial scope.
Black-body curve of CMB
During the 15-year-long period between the proposal and launch of COBE, there were two significant astronomical developments:
First, in 1981, two teams of astronomers, one led by David Wilkinson of Princeton University and the other by Francesco Melchiorri of the University of Florence, simultaneously announced that they had detected a quadrupole distribution of the CMB using balloon-borne instruments. Had it held up, this finding would have been a detection of the CMB anisotropy that the DMR on COBE was to measure. In particular, the Florence group claimed a detection of intermediate angular scale anisotropies at the level of 100 microkelvins, in agreement with later measurements made by the BOOMERanG experiment. However, a number of other experiments attempted to duplicate their results and were unable to do so.
Second, in 1987 a Japanese-American team led by Andrew E. Lange and Paul Richards of the University of California, Berkeley, and Toshio Matsumoto of Nagoya University announced that the spectrum of the CMB was not that of a true black body. In a sounding rocket experiment, they detected an excess brightness at 0.5 and wavelengths.
With these developments serving as a backdrop to COBE's mission, scientists eagerly awaited results from FIRAS. The results of FIRAS were startling in that they showed a perfect fit of the CMB and the theoretical curve for a black body at a temperature of 2.7 K, in contrast to the Berkeley-Nagoya results.
FIRAS measurements were made by comparing the spectrum of a 7° patch of the sky against an internal black body. The interferometer in FIRAS covered wavenumbers between 2 and 95 cm−1 in two bands separated at 20 cm−1. There are two scan lengths (short and long) and two scan speeds (fast and slow) for a total of four different scan modes. The data were collected over a ten-month period.
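As a rough illustration of the comparison being described, the sketch below (an assumption-laden toy, not the FIRAS analysis) evaluates the Planck blackbody spectrum over the 2–95 cm−1 band and fits a temperature to a simulated sky spectrum whose noise is set at the 1/1000-of-peak precision quoted above; everything else in the example is illustrative.

```python
import numpy as np

H = 6.62607015e-34    # Planck constant, J s
C = 2.99792458e8      # speed of light, m/s
KB = 1.380649e-23     # Boltzmann constant, J/K

def planck_per_wavenumber(sigma_cm, T):
    """Planck spectral radiance per unit wavenumber, W m^-2 sr^-1 per m^-1."""
    s = np.asarray(sigma_cm) * 100.0          # convert cm^-1 to m^-1
    return 2.0 * H * C**2 * s**3 / np.expm1(H * C * s / (KB * T))

sigma = np.linspace(2.0, 95.0, 400)           # FIRAS band in wavenumbers, cm^-1
model = planck_per_wavenumber(sigma, 2.725)   # blackbody curve near 2.7 K

# Simulated sky spectrum: the same blackbody plus noise at 1/1000 of the peak flux
# (the precision figure quoted above); the simulation itself is purely illustrative.
rng = np.random.default_rng(1)
sky = model + 1e-3 * model.max() * rng.standard_normal(sigma.size)

# Brute-force temperature fit (the real FIRAS analysis is far more involved).
temps = np.linspace(2.5, 3.0, 2001)
chi2 = [np.sum((sky - planck_per_wavenumber(sigma, t)) ** 2) for t in temps]
best_T = temps[np.argmin(chi2)]

print(f"best-fit blackbody temperature: {best_T:.4f} K")
print(f"spectrum peaks near {sigma[np.argmax(model)]:.1f} cm^-1, low in the FIRAS band")
```

In the real measurement the simulated spectrum is replaced by the calibrated sky-minus-reference interferograms, and the residuals about the best-fit blackbody are what constrain any spectral distortions.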
Intrinsic anisotropy of CMB
The DMR was able to spend four years mapping the detectable anisotropy of cosmic background radiation, as it was the only instrument not dependent on the dewar's supply of helium to keep it cooled. This operation was able to create full sky maps of the CMB by subtracting out galactic emissions and the dipole at various frequencies. The cosmic microwave background fluctuations are extremely faint, only one part in 100,000 compared to the 2.73 K average temperature of the radiation field. The cosmic microwave background radiation is a remnant of the Big Bang and the fluctuations are the imprint of density contrast in the early universe. The density ripples are believed to have produced structure formation as observed in the universe today: clusters of galaxies and vast regions devoid of galaxies.
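In absolute terms, the quoted one-part-in-100,000 fluctuation level corresponds to only a few tens of microkelvin, as the short calculation below shows (using the 2.73 K figure from the text; the calculation is only illustrative arithmetic):

```python
T_mean = 2.73            # mean CMB temperature in kelvin, as quoted above
delta_T = T_mean / 1e5   # fluctuations of about one part in 100,000
print(f"typical fluctuation amplitude: about {delta_T * 1e6:.0f} microkelvin")
```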
Detecting early galaxies
DIRBE also detected 10 new far-IR emitting galaxies in the region not surveyed by IRAS as well as nine other candidates in the weak far-IR that may be spiral galaxies. Galaxies detected at 140 and 240 μm also provided information on very cold dust (VCD). At these wavelengths, the mass and temperature of VCD can be derived. When these data were joined with 60 and 100 μm data taken from IRAS, it was found that the far-infrared luminosity arises from cold (≈17–22 K) dust associated with diffuse H I region cirrus clouds, 15–30% from cold (≈19 K) dust associated with molecular gas, and less than 10% from warm (≈29 K) dust in the extended low-density H II regions.
DIRBE
In addition to its findings on galaxies, DIRBE made two other significant contributions to science. The DIRBE instrument was able to conduct studies on interplanetary dust (IPD) and determine whether its origin was from asteroidal or cometary particles. The DIRBE data collected at 12, 25, 50 and 100 μm led to the conclusion that grains of asteroidal origin populate the IPD bands and the smooth IPD cloud.
The second contribution DIRBE made was a model of the Galactic disk as seen edge-on from our position. According to the model, if the Sun is 8.6 kpc from the Galactic Center, then it is 15.6 pc above the midplane of the disk, which has radial and vertical scale lengths of 2.64 and 0.333 kpc, respectively, and is warped in a way consistent with the HI layer. There is also no indication of a thick disk.
To create this model, the IPD had to be subtracted out of the DIRBE data. It was found that this cloud, which is seen from Earth as the zodiacal light, was not centered on the Sun, as previously thought, but on a place in space a few million kilometers away. This is due to the gravitational influence of Saturn and Jupiter.
Cosmological implications
In addition to the science results detailed in the last section, there are numerous cosmological questions left unanswered by COBE's results. A direct measurement of the extragalactic background light (EBL) can also provide important constraints on the integrated cosmological history of star formation, metal and dust production, and the conversion of starlight into infrared emissions by dust.
By looking at the results from DIRBE and FIRAS in the 140 to 5000 μm range, we can determine that the integrated EBL intensity is ≈16 nW/(m²·sr). This is consistent with the energy released during nucleosynthesis and constitutes about 20–50% of the total energy released in the formation of helium and metals throughout the history of the universe. Attributed only to nuclear sources, this intensity implies that more than 5–15% of the baryonic mass density implied by Big Bang nucleosynthesis analysis has been processed in stars to helium and heavier elements.
There were also significant implications for star formation. COBE observations provide important constraints on the cosmic star formation rate and help us calculate the EBL spectrum for various star formation histories. Observations made by COBE require the star formation rate at redshifts of z ≈ 1.5 to be larger than that inferred from UV–optical observations by a factor of 2. This excess stellar energy must be mainly generated by massive stars in yet-undetected dust-enshrouded galaxies or extremely dusty star-forming regions in observed galaxies. The exact star formation history cannot be unambiguously resolved by COBE, and further observations must be made in the future.
On 30 June 2001, NASA launched a follow-up mission to COBE led by DMR Deputy Principal Investigator Charles L. Bennett. The Wilkinson Microwave Anisotropy Probe has clarified and expanded upon COBE's accomplishments. Following WMAP, the European Space Agency's Planck probe has continued to increase the resolution at which the background has been mapped.
| Technology | Unmanned spacecraft | null |
186633 | https://en.wikipedia.org/wiki/Bearded%20vulture | Bearded vulture | The bearded vulture (Gypaetus barbatus), also known as the lammergeier and ossifrage, is a very large bird of prey in the monotypic genus Gypaetus. The bearded vulture is the only known vertebrate whose diet consists of 70–90% bone.
Traditionally considered an Old World vulture, it actually forms a separate minor lineage of Accipitridae together with the Egyptian vulture (Neophron percnopterus), its closest living relative. It is not much more closely related to the Old World vultures proper than to, for example, hawks, and differs from the former by its feathered neck. Although dissimilar, the Egyptian and bearded vulture each have a lozenge-shaped tail—unusual among birds of prey. It is vernacularly known as Homa, a bird in Iranian mythology.
The bearded vulture population is thought to be in decline; in 2004, it was classified on the IUCN Red List as least concern but has been listed as near threatened since 2014. It lives and breeds on crags in high mountains in Iran, southern Europe, East Africa, the Indian subcontinent, Tibet, and the Caucasus. Females lay one or two eggs in mid-winter that hatch at the beginning of spring.
Taxonomy
The bearded vulture was formally described in 1758 by the Swedish naturalist Carl Linnaeus in the tenth edition of his Systema Naturae. He placed it with the vultures and condors in the genus Vultur and coined the binomial name Vultur barbatus. Linnaeus based his account on the "bearded vulture" that had been described and illustrated in 1750 by the English naturalist George Edwards. Edwards had based his hand-coloured etching on a specimen that had been collected at Santa Cruz near the town of Oran in Algeria. Linnaeus specified the type locality as Africa, but in 1914 this was restricted to Santa Cruz by the German ornithologist Ernst Hartert. The bearded vulture is now the only species placed in the genus Gypaetus that was introduced in 1784 by the German naturalist Gottlieb Storr. The genus name Gypaetus is from Ancient Greek gupaietos, a corrupt form of hupaietos meaning "eagle" or "vulture". The specific epithet barbatus is Latin meaning "bearded" (from barba, "beard"). The name "lammergeier" originates from the German word , which means "lamb-vulture". The name stems from the belief that it attacked lambs.
Two subspecies are recognised:
G. b. barbatus (Linnaeus, 1758) (includes G. b. hemachalanus and G. b. aureus) – south Europe and northwest Africa to northeast China through the Himalayas to Nepal and west Pakistan
G. b. meridionalis Keyserling & Blasius, JH, 1840 – southwest Arabia and northeast, east, south Africa
Description
This bearded vulture is long with a wingspan of . It weighs , with the nominate race averaging and G. b. meridionalis of Africa averaging . In Eurasia, vultures found around the Himalayas tend to be slightly larger than those from other mountain ranges. Females are slightly larger than males. It is essentially impossible to mistake for any other vulture, or indeed any other bird in flight, due to its long, narrow wings, with the wing chord measuring , and long, wedge-shaped tail, which measures in length. The tail is longer than the width of the wing. The tarsus is relatively small for the bird's size, at . The proportions of the species have been compared to a falcon, scaled to an enormous size.
Unlike most vultures, the bearded vulture does not have a bald head. This species is relatively small-headed, although its neck is powerful and thick. It has a generally elongated, slender shape, sometimes appearing bulkier due to the often hunched back of these birds. The gait on the ground is waddling and the feet are large and powerful. The adult is mostly dark grey, rusty, and whitish in colour. It is grey-blue to grey-black above. The creamy-coloured forehead contrasts against a black band across the eyes and lores and bristles under the chin, which form a black beard that gives the species its English name. Bearded vultures are variably orange or rust in plumage on their head, breast, and leg feathers, but this is thought to be cosmetic. This colouration comes from dust-bathing or rubbing iron-rich mud on its body. They also transfer the brown colour to the eggs. The tail feathers and wings are dark grey. The juveniles are dark black-brown over most of the body, with a grey-brown breast, gradually attaining more adult-like plumage over successive years; they take five to seven years to reach full maturity, with the first breeding at eight years or older. The bearded vulture is silent, apart from shrill whistles in its breeding displays and a falcon-like cheek-acheek call made around the nest.
Physiology
The acid concentration in the bearded vulture's stomach has been estimated to be about pH 1. Large bones are digested in about 24 hours, aided by slow mixing or churning of the stomach content. The high fat content of bone marrow makes the net energy value of bone almost as good as that of muscle, even if bone is less completely digested. A skeleton left on a mountain will dehydrate and become protected from bacterial degradation, and the bearded vulture can return to consume the remainder of a carcass even months after the soft parts have been consumed by other animals, larvae, and bacteria.
Distribution and habitat
The bearded vulture is sparsely distributed across a vast range. It occurs in mountainous regions in the Pyrenees, the Alps, the Arabian Peninsula, the Caucasus region, the Zagros Mountains and Alborz Mountains in Iran, the Koh-i-Baba in Bamyan, Afghanistan, the Altai Mountains, the Himalayas, Ladakh in northern India, and western and central China. In Africa, it lives in the Atlas Mountains, the Ethiopian Highlands and south from Sudan to northeastern Democratic Republic of the Congo, central Kenya, and northern Tanzania. An isolated population inhabits the Drakensberg in South Africa. It has been reintroduced in several places in Spain, such as the Sierras de Cazorla, Segura and Las Villas Jaén, the Province of Castellón and Asturias. The resident population as of 2018 was estimated at 1,200 to 1,500 individuals.
In Israel it is locally extinct as a breeder since 1981, but young birds have been reported in 2000, 2004, and 2016. The species is extinct in Romania, the last specimens from the Carpathians being shot in 1927. However, unconfirmed sightings of the bearded vulture happened in the 2000s, and in 2016 a specimen from a restoration project in France also flew over the country before returning to the Alps.
In southern Africa, the total population as of 2010 was estimated at 408 adult birds and 224 young birds of all age classes, therefore giving an estimate of about 632 birds.
In Ethiopia, it is common at garbage dumps on the outskirts of small villages and towns. Although it occasionally descends to , the bearded vulture is rare below altitudes of and normally resides above in some parts of its range. It typically lives around or above the tree line, which is often near the tops of the mountains, at up to in Europe, in Africa and in central Asia. In southern Armenia, it breeds below if cliff availability permits. It has even been observed living at elevations of in the Himalayas and been observed flying at a height of .
There are two records of bearded vultures from the Alps reintroduction schemes which have reached the United Kingdom, with the first sighting taking place in 2016 in Wales and the Westcountry. A series of sightings took place in 2020, when an individual bird was sighted separately over the Channel Island of Alderney after migrating north through France, then in the Peak District, Derbyshire, Cambridgeshire, and Lincolnshire. The bird, nicknamed 'Vigo' by Tim Birch of the Derbyshire Wildlife Trust, originated from the reintroduced population in the Alps. As these two birds were both released captive birds, not wild, they have been placed in Category E ("escapes"), and not added to the formal British bird list.
Behaviour and ecology
Diet and feeding
The bearded vulture is a scavenger, feeding mostly on the remains of dead animals. Its diet comprises mammals (93%), birds (6%) and reptiles (1%), with medium-sized ungulates forming a large part of the diet. It usually disdains the actual meat and typically lives on a diet that is 85–90% bone, including bone marrow. This is the only living bird species that specializes in feeding on bones. Meat and skin make up only a small part of what the adults eat, but scraps of meat or skin make up a larger amount of the chicks' diet. The bearded vulture can swallow whole or bite through brittle bones up to the size of a lamb's femur and its powerful digestive system quickly dissolves even large pieces. It favours fattier, elongated bones such as tarsals and tibias, which contain higher levels of oleic acid and are more nutritious than smaller bones; smaller bones also hold less accessible marrow and are therefore of less value. The bearded vulture has learned to crack bones too large to be swallowed by carrying them in flight to a height of above the ground and then dropping them onto rocks below, which smashes them into smaller pieces and exposes the nutritious marrow. They can fly with bones up to in diameter and weighing over , or nearly equal to their own weight.
After dropping the large bones, the bearded vulture spirals or glides down to inspect them and may repeat the act if the bone is not sufficiently cracked. This learned skill requires extensive practice by immature birds and takes up to seven years to master. Its old name of ossifrage ("bone breaker") relates to this habit. Less frequently, these birds have been observed trying to break bones (usually of a medium size) by hammering them with their bill directly into rocks while perched. During the breeding season they feed mainly on carrion. They prefer the limbs of sheep and other small mammals and they carry the food to the nest, unlike other vultures which feed their young by regurgitation.
Bearded vultures sometimes attack live prey, with perhaps greater regularity than any other vulture. Among these, tortoises seem to be especially favoured depending on their local abundance. Tortoises preyed on may be nearly as heavy as the preying vulture. To kill tortoises, bearded vultures fly with them to some height and drop them to crack open the bulky reptiles' hard shells. Golden eagles have been observed to kill tortoises in the same way. Other live animals, up to nearly their own size, have been observed to be seized predaceously and dropped in flight. Among these are rock hyraxes, hares, marmots and, in one case, a long monitor lizard. Larger animals have been known to be attacked by bearded vultures, including ibex, Capra goats, chamois, and steenbok. These animals have been killed by being surprised by the large birds and battered with wings until they fall off precipitous rocky edges to their deaths; although in some cases these may be accidental killings when both the vulture and the mammal surprise each other. Many large animals killed by bearded vultures are unsteady young, or have appeared sickly or obviously injured. Humans have been anecdotally reported to have been killed in the same way. This is unconfirmed, however, and if it does happen, most biologists who have studied the birds generally agree it would be accidental on the part of the vulture. Occasionally smaller ground-dwelling birds, such as partridges and pigeons, have been reported eaten, possibly either as fresh carrion (which is usually ignored by these birds) or killed with beating wings by the vulture. When foraging for bones or live prey while in flight, bearded vultures fly fairly low over the rocky ground, staying around high. Occasionally, breeding pairs may forage and hunt together. In the Ethiopian Highlands, bearded vultures have adapted to living largely off human refuse.
Reproduction and life cycle
The bearded vulture occupies an enormous territory year-round. It may forage over each day. The breeding period is variable, being December through September in Eurasia, November to June in the Indian subcontinent, October to May in Ethiopia, throughout the year in eastern Africa, and May to January in southern Africa. Although generally solitary, the bond between a breeding pair is often considerably close. Biparental monogamous care occurs in the bearded vulture. In a few cases, polyandry has been recorded in the species. The territorial and breeding display between bearded vultures is often spectacular, involving the showing of talons, tumbling, and spiraling while in solo flight. The large birds also regularly lock feet with each other and fall some distance through the sky with each other. In Europe, the breeding pairs of bearded vultures are estimated to be 120. The mean productivity of the bearded vulture is 0.43±0.28 fledglings per breeding pair per year, and the breeding success averaged 0.56±0.30 fledglings per pair with clutches per year.
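Purely as illustrative arithmetic (not a published estimate), the European figures quoted above can be combined to give a rough expected number of fledglings per year:

```python
# Rough expectation combining the figures quoted above; illustrative only.
breeding_pairs_europe = 120
fledglings_per_pair_per_year = 0.43   # mean productivity quoted above

expected_fledglings = breeding_pairs_europe * fledglings_per_pair_per_year
print(f"roughly {expected_fledglings:.0f} fledglings per year across the European population")
```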
The nest is a massive pile of sticks that goes from around across and deep when first constructed up to across and deep, with a covering of various animal matter from food, after repeated uses. The female usually lays a clutch of 1 to 2 eggs, though 3 have been recorded on rare occasions, which are incubated for 53 to 60 days. After hatching, the young spend 100 to 130 days in the nest before fledging. The young may be dependent on the parents for up to 2 years, forcing the parents to nest in alternate years on a regular basis. Typically, the bearded vulture nests in caves and on ledges and rock outcrops on steep rock walls, which are very difficult for nest-predating mammals to access. Wild bearded vultures have a mean lifespan of 21.4 years, but have lived for up to at least 45 years in captivity.
Threats
The bearded vulture is one of the most endangered European bird species as over the last century its abundance and breeding range have drastically declined. It naturally occurs at low densities, with anywhere from a dozen to 500 pairs now being found in each mountain range in Eurasia where the species breeds. The species is most common in Ethiopia, where an estimated 1,400 to 2,200 are believed to breed. Relatively large, healthy numbers seem to occur in some parts of the Himalayas as well. It was largely wiped out in Europe and, by the beginning of the 20th century, the only substantial population was in the Spanish and French Pyrenees. Since then, it has been successfully reintroduced to the Swiss and Italian Alps, from where they have spread into France. They have also declined somewhat in parts of Asia and Africa, though less severely than in Europe.
Many raptor species were once shielded from anthropogenic influences in underdeveloped areas; they are therefore greatly affected as the human population rises and infrastructure spreads into those areas. This growth in human population and infrastructure is behind the declines of bearded vulture populations today. The increase of infrastructure includes the building of houses, roads, and power lines. A major issue with infrastructure and bird populations is collision with power lines. Declines of bearded vulture populations have been documented throughout their range, resulting from a decrease in habitat space, fatal collisions with energy infrastructure, reduced food availability, poisons left out for carnivores and direct persecution in the form of trophy hunting.
This species is listed as near threatened on the IUCN Red List (assessment last accessed on 1 October 2016), and the population continues to decline.
Conservation
Mitigation plans have been established to reduce the declines in bearded vulture populations. One of these is the South African Biodiversity Management Plan, which has been ratified by the government to stop the population decline in the short term. Actions that have been implemented include mitigation of existing and proposed energy structures to prevent collision risks, improved management of supplementary feeding sites, measures to reduce the population's exposure to human persecution and accidental poisoning, and outreach programmes aimed at reducing poisoning incidents.
The Foundation for the Conservation of the Bearded Vulture, established in Spain in 1995, was created in response to the national population dropping to 30 specimens by the end of the 20th century. Focused on conserving the species in the Pyrenees, it also returned the species to areas where it had previously become extinct, such as the Picos de Europa in the north of the country or the Sierra de Cazorla, in the south. After 25 years of work, the Foundation reported that they had managed to recover the population, with more than 1,000 individuals throughout the country.
Reintroduction in the Alps
Efforts to reintroduce the bearded vulture began in the 1970s in the French Alps. Zoologists Paul Geroudet and Gilbert Amigues attempted to release vultures that had been captured in Afghanistan, but this approach proved unsuccessful: it was too difficult to capture the vultures in the first place, and too many died in transport on their way to France. A second attempt was made in 1987, using a technique called "hacking", in which young individuals (from 90 to 100 days) from zoological parks would be taken from the nest and placed in a protected area in the Alps. As they were still unable to fly at that age, the chicks were hand-fed by humans until the birds learned to fly and were able to reach food without human assistance. This method has proven more successful, with over 200 birds released in the Alps from 1987 to 2015, and a bearded vulture population has reestablished itself in the Alps.
In culture
The bearded vulture is considered a threatened species in Iran. Iranian mythology considers the rare bearded vulture the symbol of luck and happiness. It was believed that if the shadow of a Homa fell on one, he would rise to sovereignty and anyone shooting the bird would die in forty days. The habit of eating bones and apparently not killing living animals was noted by Sa'di in Gulistan, written in 1258, and Emperor Jahangir had a bird's crop examined in 1625 to find that it was filled with bones.
The Greek playwright Aeschylus was said to have been killed in 456 or 455 BC by a tortoise dropped by an eagle who mistook his bald head for a stone. If this incident did occur, the bearded vulture is a likely candidate for the "eagle" in this story.
The ancient Greeks used ornithomancers to guide their political decisions: bearded vultures, or ossifrage, were one of the few species of birds that could yield valid signs to these soothsayers.
In the Bible/Torah, the bearded vulture, as the ossifrage, is among the birds forbidden to be eaten (Leviticus 11:13).
In 1944, Shimon Peres and David Ben-Gurion found a nest of bearded vultures in the Negev desert. The bird is called in Hebrew, and Shimon Persky liked it so much he adopted it as his surname.
| Biology and health sciences | Accipitrimorphae | Animals |
186703 | https://en.wikipedia.org/wiki/George%20Washington%20Bridge | George Washington Bridge | The George Washington Bridge is a double-decked suspension bridge spanning the Hudson River, connecting Fort Lee in Bergen County, New Jersey, with the Washington Heights neighborhood of Manhattan, New York City. It is named after George Washington, a Founding Father of the United States and the country's first president. The George Washington Bridge is the world's busiest motor vehicle bridge, carrying a traffic volume of over 104 million vehicles, and is the world's only suspension bridge with 14 vehicular lanes.
The bridge is owned by the Port Authority of New York and New Jersey, a bi-state government agency that operates infrastructure in the Port of New York and New Jersey. The George Washington Bridge is also informally known as the GW Bridge, the GWB, the GW, or the George, and was known as the Fort Lee Bridge or Hudson River Bridge during construction. The George Washington Bridge measures long, and its main span is long. It was the longest main bridge span in the world from its 1931 opening until the Golden Gate Bridge in San Francisco opened in 1937.
The George Washington Bridge is an important travel corridor within the New York metropolitan area. It has an upper level that carries four lanes in each direction and a lower level with three lanes in each direction, for a total of 14 lanes of travel. The speed limit on the bridge is . The bridge's upper level also carries pedestrian and bicycle traffic. Interstate 95 (I-95) and U.S. Route 1/9 (US 1/9, composed of US 1 and US 9) cross the river via the bridge. U.S. Route 46 (US 46), which lies entirely within New Jersey, terminates halfway across the bridge at the state border with New York. At its eastern terminus in New York City, the bridge continues onto the Trans-Manhattan Expressway (part of I-95, connecting to the Cross Bronx Expressway).
The idea of a bridge across the Hudson River was first proposed in 1906, but it was not until 1925 that the state legislatures of New York and New Jersey voted to allow for the planning and construction of such a bridge. Construction on the George Washington Bridge started in September 1927; the bridge was ceremonially dedicated on October 24, 1931, and opened to traffic the next day. The opening of the George Washington Bridge contributed to the development of Bergen County, New Jersey, in which Fort Lee is located. The upper deck was widened from six to eight lanes in 1946. The six-lane lower deck was constructed beneath the existing span from 1959 to 1962 because of increasing traffic.
Description
The George Washington Bridge was designed by chief civil engineer Othmar Ammann, design engineer Allston Dana, and assistant chief engineer Edward W. Stearns, with Cass Gilbert as consulting architect. It connects Fort Lee in Bergen County, New Jersey, with Washington Heights in Manhattan, New York. The bridge's construction required of fabricated steel; of wire, stretching ; and of masonry.
Decks
The bridge carries 14 lanes of traffic, seven in each direction. As such, the George Washington Bridge contains more vehicular lanes than any other suspension bridge and is the world's busiest vehicular bridge. The fourteen lanes of the bridge are split unevenly across two levels: the upper level contains eight lanes while the lower level contains six lanes. The upper level opened on October 25, 1931, and is wide. The upper level originally had six lanes, though two more lanes were added in 1946. Although the lower level was part of the original plans for the bridge, it did not open until August 29, 1962. The upper level has a vertical clearance of , and all trucks and other oversize vehicles must use the upper level. Trucks are banned from the lower level, which has a clearance of . All lanes on both levels are wide. Vehicles carrying hazardous materials (HAZMATs) are prohibited on the lower level due to its enclosed nature. HAZMAT-carrying vehicles may use the upper level, provided that they conform to strict guidelines as outlined in the Port Authority's "Red Book".
There are two sidewalks on the upper span of the bridge, one on each side. The northern sidewalk was largely closed after the September 11 attacks; it reopened in 2017 while a temporary suicide prevention fence was installed on the southern sidewalk, in preparation for the installation of permanent fences on both sidewalks. Prior to 2023, pedestrians had to traverse a total of 171 steps while using the northern sidewalk. As part of a renovation, the steps were replaced by a ramp, and two viewing platforms were added. , the northern sidewalk is closed at night.
The George Washington Bridge has a total length of , while its main span is long. Accounting for the height of the lower deck, the bridge stretches above mean high water at its center, and above mean high water under the New York anchorage. The bridge's main span was the longest main bridge span in the world at the time of its opening in 1931, and was nearly double the of the previous recordholder, the Ambassador Bridge in Detroit. It held this title until the opening of the Golden Gate Bridge in 1937. Prior to the bridge's construction, engineers had believed that a suspension span's length was a large indicator of a suspension bridge's economic feasibility, but the bridge's completion proved that longer suspension bridges were both physically and economically feasible.
The George Washington Bridge's total width is . When the upper deck was built, it was only thick without any stiffening trusses on the sides, resulting in a deck weighing and a length-to-thickness ratio of about 350 to 1. At the time of the George Washington Bridge's opening, most long suspension spans had stiffening trusses on their sides, and spans generally had a length-to-thickness ratio of 60 to 1, which translated to a weight of and a thickness equivalent to an 11-story building. During the planning process, Ammann designed the deck around the "deflection theory", an as-yet-unconfirmed assumption that a longer suspension deck did not need to be as stiff in proportion to its length, because the weight of the longer deck itself would provide a counterweight against the deck's movement. This had been tested by Leon Moisseiff when he designed the Manhattan Bridge in 1909, though it was less than half the length of the George Washington Bridge. Stiffening trusses were ultimately excluded from the George Washington Bridge's design to save money; instead, a system of plate girders was installed under the upper deck. This provided the stiffening that was necessary for the bridge deck, and it was replicated on the lower deck during its construction. The plate-girders underneath each deck, combined with an open-truss design on the bridge's side that connected the decks with each other, resulted in an even stiffer span that was able to resist torsional forces.
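To put the two ratios in perspective, here is a back-of-the-envelope comparison; the roughly 3,500-foot main span used below is an assumed figure, since the exact span length is not stated in the preceding sentences.

```python
# Crude aspect-ratio arithmetic for the deck-depth figures discussed above.
main_span_ft = 3500                    # assumed main-span length (not stated in the text above)

gwb_depth_ft = main_span_ft / 350      # the bridge's ~350:1 length-to-thickness ratio
conventional_ratio = 60                # typical stiffened-truss practice of the era

print(f"~350:1 ratio implies a deck roughly {gwb_depth_ft:.0f} ft deep")
print(f"that is about {350 / conventional_ratio:.1f}x shallower than a 60:1 deck of the same span")
```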
Cables
Four -diameter main cables support the bridge deck. Each main cable contained 61 strands, with each strand made of 434 individual wires, for a total of 26,474 wires per main cable, and 105,986 in all. The cables are covered by a sheath of weather-resistant steel. The upper bridge deck is held by vertical suspender wires attached directly to the main cables by saddle connections; the lower deck is supported by girders attached to the upper.
The main cables are anchored in concrete on both sides of the bridge, in a purpose-built anchorage on the New York side and bored and set directly into the cliffs of the Palisades on the New Jersey side. Originally, the end of each cable was supposed to receive one of several ornamental designs, such as a wing, fin, tire, or statue; cost-savings after the start of the Great Depression in 1929 preempted the flourishes.
Suspension towers
The suspension towers on each side of the river are each tall. They are composed of sections weighing between and contain a combined 475,000 rivets. Each tower has two archways, one above and one below the decks. The George Washington Bridge is classified as a fracture critical bridge, making it vulnerable to collapse if parts of the towers were to fail, although the towers are located offshore.
The original design called for the towers to be encased in concrete and granite in a Revival style, similar to the Brooklyn Bridge. Additional scrutiny of the proposed bridge's engineering found that the steel alone could support the towers, with only a decorative stone facade being retained in the plan. Elevators to carry sightseers to proposed restaurants and observatory decks at the top of each tower were also pared from the design. Ultimately, even stone facades were postponed in 1929 during the Great Depression due to rising material costs. Even though the steel towers had been left that way for cost reasons, some aesthetic critiques of the bare steel towers were favorable. Several groups, such as the American Institute of Steel Construction, believed that covering the steel framework with masonry would be both misleading and "fundamentally ugly".
While the exposed steel towers' design was negatively received by a few critics such as Raymond Hood and William A. Boring, the public reception at the bridge's opening was generally positive. The Swiss-French architect Le Corbusier wrote of the towers: "The structure is so pure, so resolute, so regular that here, finally, steel architecture seems to laugh." Milton MacKaye wrote in The New Yorker that the George Washington Bridge had established Ammann as "one of the immortals of bridge engineering and design, a genius." Ammann never incorporated masonry towers in his bridge plans again. He wrote that the "sturdy appearance and well-balanced distribution of steel in the columns and bracing" gave the bridge's towers "a good appearance, a neat appearance". Over time, the exposed steel towers, with their distinctive criss-crossed bracing, became one of the George Washington Bridge's most identifiable characteristics.
American flag
Since 1947 or 1948, the bridge has flown the world's largest free-flying American flag on special occasions. Hung inside the arch of the New Jersey tower, it measures long, wide, and . Until 1976, the flag was taken out of a garage in New Jersey and manually erected on national holidays. During the United States' bicentennial, a mechanical hoisting system was installed, and the flag was stored along the bridge's girders when not in use. When weather allows, it is hoisted on Martin Luther King Jr. Day, Presidents Day, Memorial Day, Flag Day, Independence Day, Labor Day, Columbus Day, and Veterans Day. Since 2006, the flag has also been flown on September 11 of each year, honoring those lost in the September 11 attacks. On days when the flag is flown, the tower lights are lit from dusk until 11:59 p.m.
History
The bridge sits near the sites of Fort Washington (in New York) and Fort Lee (in New Jersey), which were fortified positions used by General George Washington and his American forces as they attempted to deter the occupation of New York City in 1776 during the American Revolutionary War. Unsuccessful, Washington evacuated Manhattan by ferrying his army between the two forts.
By the end of the 19th century there were more than 200 separate municipalities along the lower Hudson River and the New York Bay, with no unified agency to control commerce or transport in the area and no fixed crossing. The first was proposed in 1888 by civil engineer Gustav Lindenthal, who later became New York City's bridge commissioner. The Hudson and Manhattan Railroad and the Pennsylvania Railroad opened three pairs of tubes under the lower Hudson in the 1900s. The first vehicular crossing across the lower Hudson River, the Holland Tunnel, was opened in 1927, connecting Lower Manhattan with Jersey City.
Planning
A vehicular bridge across the Hudson River was being considered as early as 1906, during the planning for the Holland Tunnel. Three possible locations for a suspension bridge were considered in the vicinity of 57th, 110th, and 179th Streets in Manhattan, with others rejected on the grounds of aesthetics, geography, or traffic flows. In 1920, English architect Alfred C. Bossom proposed a double-decker bridge with room for vehicular and railroad traffic near 57th Street in Midtown Manhattan. The same year, Othmar Ammann and Gustav Lindenthal proposed a vehicular and railroad bridge to 57th Street in Manhattan, topped by an office building on the Manhattan end that would have been the world's tallest. Lindenthal's plan failed because it did not receive permits from the United States Army Corps of Engineers or approval from the city, and because Midtown Manhattan real estate developers and planners opposed the plan. Ammann unsuccessfully attempted to convince Lindenthal to build his bridge elsewhere, without a tower atop the bridge's terminus. In January 1924, the New York State Chamber of Commerce voted against the 57th Street location in favor of another upstream. Despite this, Lindenthal proposed that a bridge be built there, and carry 16 railroad tracks and 12 lanes of automotive traffic.
Meanwhile, Ammann became chief engineer of the Port of New York Authority (now the Port Authority of New York and New Jersey), which was created in 1921 to oversee commerce and transport along the lower Hudson River and New York Bay. He had disassociated himself from Lindenthal's proposal by 1923, conducting his own studies on the feasibility of a bridge from 178th Street in Fort Washington, Manhattan, to Fort Lee in New Jersey. Ammann's advocacy for the Fort Washington–Fort Lee bridge gained support from both New Jersey governor George Silzer and New York governor Alfred E. Smith by mid-1923. In May 1924, Colonel Frederick Stuart Greene, the New York Superintendent of Public Works, announced a plan to construct a suspension bridge between Fort Lee and Fort Washington. At that location, both sides were surrounded by steep cliffs (The Palisades on the New Jersey side, and Washington Heights on the New York side). Thus, it was possible to build the bridge without either impeding maritime traffic or requiring lengthy approach ramps from ground level.
A New Jersey state assemblyman introduced a bill for the Hudson River bridge that December. This bill was passed in the New Jersey Assembly in February 1925. After an initial rejection by Silzer, the Assembly made modifications before passing the bill again in March, after which Silzer signed the bill. Around the same time, the New York state legislature was also considering a similar bill. A dispute developed between New York civic groups, who supported the construction of the Hudson River Bridge; and the Parks Conservation Association, who believed that the bridge towers would degrade the quality of Fort Washington Park directly underneath the proposed bridge's deck. In late March 1925, the chairman of the Parks Conservation Association noted that the proposed New York state legislation would provide for the actual construction of the bridge, rather than just the planning. Ultimately, the Hudson River bridge bill was passed in the New York state legislature, and Smith approved the bill that April.
In March 1925, Silzer asked Ammann to devise preliminary plans for the Hudson River bridge. Ammann found that the width of the Hudson River decreased by more than when it passed between Fort Lee and Fort Washington. The ledges of Fort Lee and Fort Washington were respectively above mean water level at this point, which was not only ideal geography for a suspension bridge, but also allowed the bridge to be high enough to give sufficient clearance for maritime traffic. However, the differing heights meant that a large cut had to be made through the Fort Lee ledge so that the bridge approach could be built there. The same month, the New Jersey legislature asked for funds for test bores to determine if the geological strata would support the bridge. In response to continuing concerns from park preservationists, Ammann stated that placing the New York suspension tower anywhere else would make the bridge look asymmetrical, which he believed was a worse outcome than placing the tower within the park.
The states conducted a study in mid-1925, which found that the Hudson River bridge would be able to pay for itself in twenty-five years if a 50-cent toll were to be placed on every vehicle. After funding was secured, surveyors began examining feasible sites for the future bridge's approaches in August 1925. By law, the New York end of the Hudson River Bridge could only be constructed between 178th and 185th Streets, and the New Jersey end had to be built directly across the river. Geologists made test bores on the New Jersey side to determine if the site was feasible for laying foundations for the bridge. Othmar Ammann was hired as the bridge's chief engineer. In Ammann's original plans for the bridge, which had been published in March 1925, he had envisioned that the bridge would contain two sidewalks; a roadway that could carry up to 8,000 vehicles per hour; and space for four railroad tracks, in case the two North River railroad tunnels downstream exceeded their train capacity. Cass Gilbert was hired in January 1926 to design architectural elements for the Hudson River bridge, including the suspension towers. The bridge design had yet to be finalized, and its cost could not even be estimated at that point due to the complexity of factors.
Gilbert released preliminary sketches of the Hudson River bridge that March; by then, the architect had decided that the span would be a suspension bridge. The sketch accompanied a feasibility report that Ammann and other engineers presented to the Port of New York Authority, which was to operate the bridge. The central span was to be long, longer than any other suspension bridge in existence at the time, and 200 feet above mean high water. The bridge would initially carry four lanes of vehicular traffic and sidewalk lanes; the plans called for three additional phases of expansion, culminating in an eight-lane bridge deck with four rapid-transit tracks underneath. The span would be supported by two towers, each tall. There would also be space to build a second deck in the future below the main deck. Ammann's team also found that the most feasible location for the bridge was at 179th Street in Manhattan (as opposed to 181st or 175th Streets). This was both because the 179th Street location was more aesthetically appealing than the other two locations, and because a 179th Street bridge would be cheaper and shorter in length than a bridge at either of the other streets. At this point in the planning process, the Hudson River bridge's estimated cost was $40 million or $50 million. Because of the proposed bridge's length, engineers also had to test the strength of materials, including suspension cables, that were to be used in the span. Ammann's research department constructed scale models of various designs for the bridge and tested them in wind tunnels.
By late 1926, one engineer predicted that construction on the Hudson River bridge would start the following summer. In December 1926, the final plans for the bridge were approved by the public and by the War Department. The Port Authority planned to sell off $50 million worth of bonds to pay for the bridge, and the initial $20 million bond issue was sold that December. Further issues arose when the New Jersey Assembly passed a bill in March 1927, which increased the New Jersey governor's power to veto Port Authority contracts. Smith, the New York governor, and Silzer, the now-former New Jersey governor who had been appointed Port Authority chairman, both objected to the bill since the Port Authority had been intended as a bi-state venture. Afterward, the then-current New Jersey governor A. Harry Moore worked with legislators to revise the legislation. The revised law was ultimately not a significant deviation from the Port Authority's practice at the time, wherein the Port Authority was already submitting its contracts to the New Jersey government for review.
Construction
The George Washington Bridge's construction employed three teams of workers: one each for the New Jersey tower, the New York tower, and the deck. The construction process was relatively safe, although twelve or thirteen workers died during its construction. Of these, three were killed when the foundation for the New Jersey tower flooded; a fourth worker was killed by a blast at the New Jersey anchorage; and the others died because of their own carelessness, according to Port Authority records.
First contracts
In April 1927, the Port Authority opened the first bids for the construction of the Hudson River bridge. It was specifically seeking bids for the construction of the New Jersey suspension tower's foundation. The Manhattan suspension anchorage's location was still undecided at this time. A bid for the New Jersey tower was awarded later that month. In May, the Port Authority opened more bids for the construction of the bridge's approaches and anchorage on the New Jersey side. Dredging operations on the Hudson River, which would allow large ships to pass underneath the bridge, also started that May. By late August, the Port Authority had started condemning plots of land for the bridge's approaches.
Montgomery B. Case, the bridge's chief construction engineer, began construction on the Hudson River bridge on September 21, 1927, with groundbreaking ceremonies held at the sites of both future suspension towers. Each tower was to have a base with a perimeter measuring , and descending 80 feet into the riverbed. The riverbed around the towers' sites was dredged first, and then steel pilings were placed in the riverbed to create a watertight cofferdam. The cofferdams for the bridge were the largest ever built at the time. In early October of that year, the Port Authority received bids for the construction of the bridge deck. There were two main methods being considered for the span's construction: the cheaper "wire-cable" method and the more expensive "eyebar" method. The wire-cable method, where the vertical suspender wires are attached directly to the main cables and the deck, would require a stiffening truss to support the deck. The eyebar method, where the suspender wires are attached to a chain of eyebars (metal bars with holes in them), would be self-supporting. Ultimately, the Port Authority chose the wire-cable design because of costs, and it awarded the contract for constructing the deck to John A. Roebling Sons' Company. The corresponding contract for manufacturing the steel was awarded to the . The first serious accident during the bridge's construction occurred in December 1927, when three men drowned while working in a caisson on the New Jersey side.
Towers and anchorages
Bids for the Manhattan suspension tower were advertised in March 1928. At this point, 64% of the total projected worth of construction contracts had been awarded. The piers that provided foundation for the New Jersey suspension tower and approaches were being constructed. The cliffs on both sides of the river were high enough that only minimal bridge approaches were required on either side. The towers' foundations could reach at most below mean low water, where the foundations would hit a layer of solid rock. In May 1928, builders started drilling a cut through the Palisades on the New Jersey side so that the Hudson River bridge approach could be built. By June 1928, half of the money earned during the previous year's $20 million bond sale had been spent on construction. By that October, nearly all blasting operations had been completed. The suspension tower on the New Jersey side had been constructed to a height of , and the tower on the New York side was progressing as well. The suspension towers consisted of 13 segments, each of which was almost 50 feet high.
The New York anchorage required of concrete, being freestanding, while the New Jersey anchorage was blasted into the Palisades. By March 1929, the concrete structure of the New York anchorage had been completed, three months ahead of schedule. The anchorage on the New Jersey side, which had been fully bored, consisted of two holes that had been bored 250 feet into the face of the Palisades. On the New Jersey side, of rock had to be blasted out to make way for the New Jersey approach. The suspension towers were nearly complete at the time of the report; only 100 feet of each tower's height remained to be built. Anchors were being placed in the two holes that were being drilled for the New Jersey anchorage, and this task was also nearing completion.
In April, the Port Authority acquired the last of the properties that were in the path of the bridge's Manhattan approach. Plans for the Manhattan approach were approved by the New York City Board of Estimate around the same time. The approach was to consist of scenic, meandering ramps leading to both Riverside Drive and the Henry Hudson Parkway, which run along the eastern bank of the Hudson River at the bottom of the cliff in Washington Heights. The bridge would also connect to 178th and 179th Streets, at the top of Washington Heights. A third connection would be made to an underground highway running between and parallel to 178th and 179th Streets; this connection would become the 178th–179th Street Tunnels, and would later be replaced by the Trans-Manhattan Expressway. The original plan for the approach to the underground highway stated that the approach would be made using a monumental stone viaduct descending from the span at a 2.2% gradient. The Port Authority started evicting residents in the approach's path in October 1929. The same month, the Port Authority sold the final $30 million in bonds to pay for the bridge.
The plans for the Hudson River bridge's Fort Lee approach were also changed in January 1930. Originally, the bridge would have terminated in a traffic circle, a type of intersection design that was being built around New Jersey during the 1920s and 1930s. However, the revised plans called for a grade-separated highway approach that would connect to a traffic "distributing basin" with ramps to nearby highways. The total cost of land acquisition for the bridge approaches on both sides of the Hudson River exceeded $10 million.
Cable spinning
After the towers were completed, two temporary catwalks were built between the two suspension towers. Then, workers began laying the bridge's four main cables, a series of thick cables that stretch between the tops of the two towers and carry what would later become the upper deck. The first strand of the first main cable was hoisted between both towers in July 1929, in a ceremony attended by the governors of both states and the mayors of New York City and Fort Lee. The two temporary catwalks allowed workers to spin the wires for the main cables on-site. The wires for the cables were spun by dozens of reels at a dock near the base of the New York anchorage; each reel contained 30 miles of wire at any given time. A total of 105,986 wires were used in the bridge when it was completed.
By February 1930, the bridge was halfway complete; since construction was two months ahead of schedule, the bridge was scheduled to open in early 1932. A team of 350 men was spinning the wires for each of the main cables, which were 22% complete. In addition, the builders had started ordering steel for the deck. By April, the spinning of the main cables was half complete. The first main cable was completed in late July 1930, and the other three main cables were completed that August, with the laying of the last wire being marked by a ceremony. The spinning of the main cables had taken ten months in total.
After the main cables were laid, workers spun the suspender wires that connected the main cables with the deck. When it was finished, the system of cables would support of the deck's weight, though the cables would be strong enough to carry , four times as much weight. The construction of a lower deck for rail usage was postponed, since the start of the Depression meant that there would not be enough railroad traffic to justify the construction of such a deck in the near future.
Nearing completion
An agreement between the Port of New York Authority and the City of New York, dated July 29, 1930, was formed to convey property and property easements granted in relation to the New York Approach to the then Hudson River Bridge. That month, the Port Authority opened the bidding process for contracts to build the Hudson River bridge's approaches on the New York side. These included contracts for the 178th–179th Street Tunnels and the Riverside Drive connection. The tunnel contracts were awarded later that month. In August, the bidding process for the Fort Lee approaches was opened. Bids for the Riverside Drive connection were received the following month.
Prior to and during construction, the bridge was unofficially known as the "Hudson River Bridge" or "Fort Lee Bridge". The Hudson River Bridge Association started seeking suggestions for the bridge's official name in October 1930. Residents of New York and New Jersey were encouraged to send naming choices to the association, which would then forward the suggestions to the Port Authority. According to ballot voting submitted to the Port Authority, the "Hudson River Bridge" name was the most popular choice. The Port Authority preferred the name "George Washington Memorial Bridge", which had been proposed by a board member, and still others championed the name "Palisades Bridge". However, the Port Authority formally adopted the "George Washington" name on January 13, 1931, honoring the general and future president's evacuation of Manhattan at the bridge's location during the Revolutionary War. This was described as potentially confusing, since there was already a "Washington Bridge" connecting 181st Street with the Bronx, directly across Manhattan from the "George Washington Bridge" across the Hudson River. Shortly afterward, the Port Authority Board of Commissioners voted to reconsider the renaming of the Hudson River Bridge, stating that it was open to alternate names. Hundreds of naming choices had been submitted by this time. The most popular naming choices were those of Washington, Christopher Columbus, and Hudson River namesake Henry Hudson. The span was again officially named for George Washington in April 1931. This decision was applauded by then-congressman Fiorello La Guardia, who felt that other options "insulted the memory of our first President and encouraged the Reds".
The system of girders to support the deck was installed throughout 1930, and the last girder was installed in late December 1930. In March 1931, the Port Authority announced that the Hudson River Bridge was set to open later that year, rather than in 1932 as originally planned. At that time, the Port Authority had opened bids for paving the road surface. Later that month, the agency published a report, which stated that the bridge's early opening date was attributable to how quickly and efficiently the various materials had been transported. In June 1931, forty bankers became the first people to cross the bridge.
Work was progressing quickly on the bridge approaches in New Jersey, and the New York City government was considering building another bridge between Manhattan and the Bronx (the Alexander Hamilton Bridge) to connect with the George Washington Bridge. Bids for constructing tollbooths and floodlight towers were opened in July 1931.
1930s to 1960s
The George Washington Bridge was dedicated on October 24, 1931, and the bridge opened to traffic on October 25, 1931, eight months ahead of schedule. The opening ceremony, attended by 30,000 guests, was accompanied by a show from military airplanes, as well as speeches from politicians including Morgan Foster Larson, the governor of New Jersey, and Franklin D. Roosevelt, the governor of New York. The first people to cross the George Washington Bridge were reportedly two elementary school students who roller-skated across the bridge from the New York side. Pedestrians were allowed to walk the length of the George Washington Bridge between 6 p.m. and 11 p.m. The bridge was formally opened to traffic the next day. The Port Authority collected tolls for drivers who used the bridge in either direction; as with the Holland Tunnel, the toll was set at 50 cents for passenger cars, with different toll rates for other vehicle types. Pedestrians paid a toll of 10 cents each, which was lowered to 5 cents in 1934. Within the first 24 hours of the George Washington Bridge's official opening, 56,312 cars used the span, as well as 100,000 pedestrians (including those who had walked across after the ceremony). The Port Authority reported that 33,540 pedestrians crossed the bridge on the first day, of which 20,000 paid a toll to cross.
During the George Washington Bridge's construction, the cost of the bridge was estimated at $75million, and the bridge was expected to carry eight million vehicles and 1.5million pedestrians in its first year. When the George Washington Bridge opened, it was estimated that eight million vehicles would use the bridge in its first year, and that the bridge could ultimately carry 60million vehicles annually after a second deck was added. The bridge's final cost was estimated at $60million. Real-estate speculators believed that the bridge's construction would raise real-estate values in Fort Lee, since the borough's residents would be able to more easily access New York City. During the construction of the George Washington Bridge, speculators spent millions of dollars to buy land around the bridge's New Jersey approach. The bridge was later credited with helping raise land prices and encouraging residential development in formerly agricultural parts of Bergen County. It also spurred the rise of the trucking industry along the United States' East Coast, supplanting much of the freight rail traffic that had previously carried that cargo. In the George Washington Bridge's first week of operation, the bridge carried 116,265 vehicles, compared to the Holland Tunnel's 173,010 vehicles, even though the tunnel had fewer lanes than the bridge. During that time span, 56,000 pedestrians used the bridge. A week after the bridge opened, the 10-lane tollbooth was expanded to 14 lanes because of heavy weekend traffic volumes. During its first year, the George Washington Bridge saw 5.5million vehicular crossings and nearly 500,000 pedestrian crossings. Traffic counts on the George Washington Bridge grew year after year. By the time of the bridge's tenth anniversary in 1941, the span had been used by 72million vehicles total, including a record 9.1million vehicles in 1940.
On February 22, 1932, George Washington's 200th birthday, the Port Authority planted 70 red oak trees along an approach to the bridge. New Jersey Route 4, which connected directly to the bridge's western end, opened in July 1932. The 178th–179th Street Tunnels, which connected Amsterdam Avenue on the eastern side of Manhattan to the bridge's eastern end on the west side of Manhattan, were supposed to be completed in late 1932. Direct approaches to Riverside Drive and the Hudson River Parkway were completed in 1937, and the tunnels were completed in 1938–1939. A ramp eastward from the bridge and southward to the Harlem River Drive was also completed around this time. The bridge's westbound entrance ramp from Fort Washington Avenue, at the top of the cliff on the Manhattan side, opened in April 1939; another approach in New Jersey had opened by July 1939. The corresponding eastbound exit ramp, as well as the 178th Street Tunnel, opened in June 1940, while the 179th Street Tunnel opened in 1950. In May 1935, a court ruled that the New Jersey and New York governments controlled their respective sides of the bridge.
The bridge was initially lit by 200 lights to warn pilots flying at night, and an aviation obstruction light was installed in 1936 as a memorial to Will Rogers. During World War II, the Port Authority enacted a photography ban, and from May 1942 to May 1945 the lights on the bridge were shut off at night as a precautionary measure. After the war ended, the lights were turned back on, but the photography ban remained in place. In August 1946, the bridge's towers were repainted; the underside of the bridge was also repainted in May of the following year.
Originally, the George Washington Bridge's deck consisted of six lanes, with an unpaved center median. In 1946, the median was paved over and two more lanes were created on the upper level, widening it from six lanes to eight lanes. The two center lanes on the upper level served as reversible lanes, which could handle traffic in either direction, depending on which direction had the greater traffic flow.
In November 1950, workers began to tighten the bridge's suspender ropes after almost 20 years in place; the project was completed by 1951. New ramps onto the Henry Hudson Parkway were opened in late 1953, followed by ramps to the Palisades Interstate Parkway in December 1954. In addition, the barrier system on the bridge was adjusted in mid-1954, and new navigational signs were added to assist motorists. In 1955, the lighting system on the deck was replaced.
1960s: Modernization and lower level
Construction of the lower deck, as well as the construction of a new bus terminal and other highway connections near the bridge, was recommended in a 1955 study that suggested improvements to the New York City area's highway system. The lower deck was approved by the U.S. Army Corps of Engineers. A Bergen County leader voted against the construction of the lower level in 1956, temporarily delaying construction plans. The New York City Planning Commission approved the George Washington Bridge improvement in June 1957, and the Port Authority allocated funds to the improvement that July. The $183million project included the construction of the lower deck; the George Washington Bridge Expressway, a 12-lane expressway connecting to the Alexander Hamilton Bridge and the Cross Bronx Expressway (later I-95 and US 9); the George Washington Bridge Bus Station above the expressway; and a series of new ramps to and from the Henry Hudson Parkway. On the New Jersey side, two depressed toll plazas, one in each direction, were to be constructed for lower-level traffic. Highway connections were also being built on the New Jersey side, including a direct approach from I-95.
Construction of the approaches started in September 1958. Work on the lower level itself started on June 2, 1959, but work was briefly halted later that year because of a lack of steel. By February 1960, construction was underway on the lower level; the supporting steelwork for the future deck had been completed, and the sections for the lower deck were being installed. The George Washington Bridge's lower deck would comprise 75 steel slabs; each slab weighed 220 tons and measured wide by feet long, with a thickness of . The construction of the slabs proceeded from either side of the bridge. The right-of-way for the George Washington Bridge Expressway had been almost entirely cleared except for the ventilation buildings for the 178th–179th Street Tunnels. The segments of the lower deck had been laid completely by September 1960, at which point workers started pouring the concrete for the deck's roadway, a process that took five weeks. The layer of concrete measured thick. Finally, the deck was paved over with a layer of asphalt.
New ramps to the George Washington Bridge in New Jersey, including from the newly completed I-95, opened in mid-1962. The lower deck was opened to the public on August 29, 1962. The lower level, nicknamed "Martha" after George's wife Martha Washington, increased the capacity of the bridge by 75 percent, and simultaneously made the George Washington Bridge the world's only 14-lane suspension bridge. In addition to providing extra capacity, the lower level served to stiffen the bridge in high winds; before the lower deck was constructed, the George Washington Bridge was known to swing up to . The George Washington Bridge Bus Station opened on January 17, 1963 and the Alexander Hamilton Bridge opened on January 15, 1963, thus allowing more traffic to use the George Washington Bridge. In the first year after the lower level's opening, the expanded bridge had carried 44million vehicles. By comparison, 35.86million vehicles had crossed the bridge in an 11-month period between September 1, 1961 and July 31, 1962. In addition, traffic congestion at the George Washington Bridge was reduced after the lower level opened, and the Port Authority repaired the upper level for the first time in the bridge's history.
1970s to 1990s
A fixed median was added to the upper deck in 1970; the existing concrete barrier was demolished to make way for it. The next year, I-95 was extended through New Jersey.
In early 1977, a project to replace the deteriorated deck on the upper level began. The original deck was demolished, and a new deck, which had been constructed beforehand, was installed in its place. As part of the project, the eastbound lanes on the upper level were partially closed; the westbound upper-level lanes carried two-way traffic during off-peak hours. The PANYNJ also encouraged traffic to use the lower level. The project was completed in late 1978.
The bridge was carrying 82.8million vehicles per year by 1980. The American Society of Civil Engineers designated the bridge a National Historic Civil Engineering Landmark on October 24, 1981, the bridge's 50th anniversary, which was also marked with a parade of automobiles. By that point, 1.8billion vehicles had used the bridge throughout its lifetime. The bridge was repaired in 1984, and its upper deck was resurfaced in 1987. After a building above one of the approaches was found to have a cracked foundation on March 9, 1989, it was closed for four days to undergo temporary repairs.
In 1990, the Port Authority announced a minor rehabilitation for the George Washington Bridge. As part of the project, the supporting structural steel for the upper deck would be replaced, and some ramps would be rebuilt. The ramps on the New York side, connecting with Riverside Drive and the Henry Hudson Parkway, were to be reconstructed for $27.6million after studies in the late 1980s showed deterioration on these ramps. Although the Port Authority had announced the repairs in advance, the start of roadwork in September 1990 caused extensive traffic jams. The upper deck was repaved in 1995. A further inspection in 1997 found that some of the wires at the New York anchorage had corroded, so these wires were replaced.
2000s to present
Starting on July 4, 2000, and for subsequent special occasions, each of the George Washington Bridge's suspension towers has been illuminated by 380 light fixtures that highlight the exposed steel structure. On each tower there is a mix of 150- and 1,000-watt metal halide lamp fixtures. The architectural lighting design was completed by Domingo Gonzalez Associates. Additionally, workers started rehabilitating about of approach ramps in 1999; the project was finished in May 2001 and cost $38 million. Trucks were banned from the lower deck around this time. The Port Authority also proposed a ramp from the lower level to the Palisades Interstate Parkway on the New Jersey side in 2000. The ramp would have cost $86.5million and would have been completed in 2003 or 2004, but the connection was ultimately not built. The northern sidewalk was closed after the September 11 attacks in 2001 because of security concerns.
In 2002, the Port Authority began to repaint the towers and the underside of the upper deck. The old lead-based paint was replaced with a lead-free coat of paint. The $62 million project was completed in September 2006, in advance of the bridge's 75th anniversary. The Port Authority announced the next year that the suspender lighting was to be replaced by new energy efficient diodes. This project was completed in 2009.
Following 15 reported deaths and 68 attempts in 2017, the Port Authority installed protective netting and an 11-foot-high fence along each upper-level sidewalk. The netting partially overhangs the sidewalks in order to prevent potential jumpers from scaling the fence directly. The southern sidewalk was closed from September to December 2017 so that a temporary fence could be installed there. Once the temporary fence had been erected, the permanent 11-foot-high barrier was constructed on the northern sidewalk, followed by the permanent barrier on the southern sidewalk.
2010s renovation
In December 2011, the Port Authority announced plans to extensively rehabilitate the bridge. Among other improvements and repairs, the vertical suspender ropes would be replaced, at an expected cost of more than $1billion paid for by toll revenue. In August 2013, repair crews began an $82 million effort to fix cracks in the upper deck's structural steel. Work restarted in June 2014 after a pause lasting several months. The Port Authority also started a $2 billion project to renovate or replace bridge components. The lower level was repaved in 2016, and repainting work and maintenance platform replacement on the lower deck was completed in 2017. The bridge's 592 vertical suspender ropes were then replaced to fix damage caused by excessive heat and humidity. The staircases leading to the sidewalks on both the northern and southern sides of the upper deck were also being replaced with ramps that were compliant with the Americans with Disabilities Act of 1990. The Trans-Manhattan Expressway was being renovated in conjunction with this project. On the New Jersey side, the Palisades Interstate Parkway "Helix" ramp onto the bridge would be replaced at a cost of $112.6million; this was completed in March 2019.
The northern sidewalk reopened in early 2023, after the suspender ropes on that side had been replaced. By early 2024, the restoration project was half complete, and workers were restoring the southern sidewalk and its cables. The same year, the Port Authority awarded $455 million in contracts for structural steel replacement. The suspender cable replacements were almost finished by December 2024.
Road connections
New Jersey
The George Washington Bridge carries I-95 and US 1/9 between New Jersey and New York. Coming from New Jersey, US 46 terminates at the state border in the middle of the bridge. Further west, I-80, US 9W, New Jersey Route 4, and the New Jersey Turnpike end before reaching the bridge but feed into it via I-95, US 1/9, or US 46. I-80 also gives drivers from the Garden State Parkway and Route 17 access to the bridge, and signage on I-95 south directs bridge traffic back to those routes. The Palisades Interstate Parkway connects directly to the bridge's upper level, though not to the lower level; however, a ramp to link the Interstate Parkway to the lower level was proposed in 2000. The marginal roads and local streets above the highways are known as GWB Plaza. The bridge's toll plaza, which collects tolls from eastbound/northbound traffic only, is located on the New Jersey side.
New York
On the New York side, the 12-lane Trans-Manhattan Expressway heads east across the narrow neck of Upper Manhattan, from the bridge to the Harlem River. It provides access from both decks to 178th and 179th Streets, which run crosstown and where U.S. 9 leaves the expressway to follow Broadway; to the Henry Hudson Parkway and Riverside Drive, on the Hudson River's eastern bank along the west side of Manhattan; and to Amsterdam Avenue and the Harlem River Drive, on the Harlem River's western bank on the east side. The expressway connects directly with the Alexander Hamilton Bridge, which spans the Harlem River as part of the Cross-Bronx Expressway (I-95), providing access to the Major Deegan Expressway (I-87). Heading towards New Jersey, local access to the bridge is available from 179th Street. There are also ramps connecting the bridge to the George Washington Bridge Bus Terminal, a commuter bus terminal with direct access to the New York City Subway at the 175th Street station on the IND Eighth Avenue Line (served by the A train).
Originally, the approach to the George Washington Bridge from the New York side consisted of a roundabout encircling a fountain, which was designed by Cass Gilbert. This plan was deemed not feasible as a result of the congestion that the weaving movements would create. The final plans called for meandering roadways from Riverside Drive and Henry Hudson Parkway, which run along the eastern bank of the Hudson River at the bottom of the cliff in Washington Heights. The Henry Hudson Parkway actually passes under the New York side's anchorage using an underpass designed by Gilbert. The connection to the 178th–179th Street Tunnels, which connected to the southbound Harlem River Drive, opened in 1940. The tunnels were replaced by the Trans-Manhattan Expressway, which opened in 1962. The tunnels and expressway were built to minimize disruption to the Washington Heights neighborhood, which had already been developed at the time.
Alternate routes
Further south along the Hudson River, the Lincoln Tunnel (Route 495) and Holland Tunnel (Interstate 78/Route 139) also enter Manhattan. Both tunnels are operated by the Port Authority, which collects tolls from drivers crossing the Hudson River eastbound toward New York City. The Verrazzano-Narrows Bridge (I-278), connecting the New York City boroughs of Staten Island and Brooklyn, is the southernmost alternate route. It connects to the Bayonne Bridge, Goethals Bridge, and Outerbridge Crossing between Staten Island and New Jersey. All four bridges to Staten Island collect tolls from drivers entering the island.
Farther north within the New York metropolitan area, the Tappan Zee Bridge (Interstates 87/287 and New York State Thruway) avoids the congested Cross Bronx Expressway and the city proper. Thruway traffic sometimes uses the George Washington Bridge as a detour, since no roads cross the Hudson River between the George Washington and Tappan Zee bridges. The Tappan Zee Bridge also charges tolls for eastbound drivers. Even farther north is the Bear Mountain Bridge, carrying U.S. 6 and U.S. 202, about north of the Tappan Zee Bridge; it also charges tolls for eastbound drivers.
Tolls
, the tolls-by-mail rate going from New Jersey to New York City is $18.31 for cars and motorcycles; there is no toll for passenger vehicles going from New York City to New Jersey. New Jersey and New York–issued E-ZPass users are charged $14.06 for cars and $13.06 for motorcycles during off-peak hours, and $16.06 for cars and $15.06 for motorcycles during peak hours. Users with E-ZPass issued from agencies outside of New Jersey and New York are charged the tolls-by-mail rate.
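The fare rules above reduce to a lookup on vehicle type, payment method, and time of day. The following Python sketch is purely illustrative: the function and table names are invented here, only the rates quoted in the preceding paragraph are used, and it does not represent any actual Port Authority system.

```python
# Illustrative sketch only; rates are the tolls-by-mail and E-ZPass figures
# quoted in the text above. Names are hypothetical.
TOLLS_BY_MAIL = {"car": 18.31, "motorcycle": 18.31}

EZPASS_NY_NJ = {
    ("car", "peak"): 16.06,
    ("car", "off_peak"): 14.06,
    ("motorcycle", "peak"): 15.06,
    ("motorcycle", "off_peak"): 13.06,
}

def eastbound_toll(vehicle: str, payment: str, period: str = "off_peak") -> float:
    """Return the eastbound (New Jersey -> New York) toll; westbound travel is free."""
    if payment == "ezpass_ny_nj":
        return EZPASS_NY_NJ[(vehicle, period)]
    # Out-of-state E-ZPass and unregistered vehicles pay the tolls-by-mail rate.
    return TOLLS_BY_MAIL[vehicle]

print(eastbound_toll("car", "ezpass_ny_nj", "peak"))  # 16.06
print(eastbound_toll("motorcycle", "mail"))           # 18.31
```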
Originally, tolls were collected in both directions. The original toll booth was designed by Gilbert, who also designed a classical-style maintenance booth, neither of which is extant. In August 1970, the toll was abolished for westbound drivers, and at the same time, eastbound drivers saw their tolls doubled. The tolls of eleven other New York–New Jersey and Hudson River crossings along a stretch, from the Outerbridge Crossing in the south to the Rip Van Winkle Bridge in the north, were also changed to south- or eastbound-only at that time. There were a series of tollbooths on the New Jersey side. The bridge had 29 toll lanes: 12 in the main upper-level toll plaza, 10 in the lower-level toll plaza, and seven in the Palisades Interstate Parkway toll plaza leading to the upper level. The toll plazas on the lower level and Palisades Parkway were not staffed during the overnight hours and accepted only E-ZPass transactions during this period. E-ZPass was accepted for toll payment on the George Washington Bridge starting in July 1997. Soon afterward, the Port Authority proposed removing the tollbooths for the E-ZPass lanes on the lower level and Palisades Parkway toll plazas, replacing them with electronic toll collection gantries to allow motorists to maintain highway speeds.
Pedestrians and cyclists may cross free of charge on the south sidewalk. Pedestrians traveling in either direction originally paid tolls of 10 cents when the bridge opened. The pedestrian toll was reduced to 5 cents in 1935 and discontinued altogether in 1940.
Open-road tolling was implemented for drivers coming from the Palisades Interstate Parkway on February 2, 2020, on the lower level on November 7, 2020, and on the upper level on July 10, 2022. Under this system, the tollbooths were to be dismantled and drivers could no longer pay cash at the bridge; instead, cameras were mounted on new overhead gantries on the New Jersey side heading to New York. A vehicle without E-ZPass has a picture taken of its license plate, and a bill for the toll is mailed to its owner; for E-ZPass users, sensors detect their transponders wirelessly. In March 2020, due to the COVID-19 pandemic, all-electronic tolling was temporarily placed in effect for all Port Authority crossings, including the George Washington Bridge. Cash toll collection was temporarily reinstated on the upper level only from October 2020 to July 2022 while the required open-road tolling infrastructure was being installed. The carpool discount was eliminated when open-road tolling commenced on the upper level in July 2022. The toll plazas were demolished starting in March 2023; the removal was expected to take two years.
Historic toll rates
Prior to July 10, 2022, a discounted carpool toll ($7.75) was available at all times to cars with three or more passengers using NY- or NJ-issued E-ZPass that proceeded through a staffed toll lane (provided they had registered with the free "Carpool Plan"), except when entering from the Palisades Interstate Parkway entrance to the bridge. The Carpool Plan ended when the George Washington Bridge implemented cashless tolling.
Non-motorized access
The George Washington Bridge contains two sidewalks that can be used by pedestrians and bicyclists. The southern sidewalk (accessible by a long, steep ramp on the Manhattan side of the bridge) is shared by cyclists and pedestrians. The entrance in Manhattan is at 178th Street, just west of Cabrini Boulevard, and also has access to the Hudson River Greenway north of the bridge. Both sidewalks are accessible on the New Jersey side from Hudson Terrace. The George Washington Bridge carries New York State Bicycle Route 9, a bike route that runs from New York City north to Rouses Point. , the bike lanes are open from 5 a.m. to midnight every day.
In 2008, the Port Authority closed the northern sidewalk at all times. Though it offers direct access into Palisades Interstate Park, the northern sidewalk requires stairway climbs and descents on both sides, making it inaccessible to people with physical disabilities and risky in poor weather conditions. Advocacy groups such as Transportation Alternatives also suggested improvements.
As part of the project to replace the bridge's vertical support cables, the connections to both sidewalks will be enhanced or rebuilt and made ADA-compliant. While the south-side cables are being replaced, that sidewalk will be closed and the north sidewalk will be open. Once the entire project is completed in 2027, pedestrians will use the south sidewalk and cyclists will use the north sidewalk. The sidewalk aspect of the project is expected to cost $118million.
Incidents
Suicides and deaths
The George Washington Bridge is among the most frequently chosen sites in the New York metropolitan area for suicide by jumping or falling off the bridge. The first death by jumping was unintentional and occurred before the bridge opened. On September 21, 1930, a stunt jumper named Norman J. Terry jumped off the bridge's deck in front of a crowd of thousands, and because his body was facing the wrong way, he broke his neck upon hitting the water. The first intentional suicide occurred on November 3, 1931, a little more than one week after the bridge opened.
Several suicide attempts off the George Washington Bridge have been widely publicized. In 1994, a person going by the name "Prince" called The Howard Stern Show while on the bridge and said he would kill himself, but Howard Stern talked him out of it. The 2010 suicide of Tyler Clementi, who had jumped from the bridge, drew national attention to cyberbullying and the struggles facing LGBT youth.
In 2012, a record 18 people threw themselves off the bridge to their deaths, while 43 others attempted to do so but survived. There were 18 deaths reported in both 2014 and 2015. In 2014, 74 people were stopped by the Port Authority police, while the next year, another 86 people were stopped by the Port Authority police. In 2016, there were 12 reported deaths, a decrease from previous years, while 70 people were stopped by the Port Authority police. In 2017, the Port Authority proposed equipping a two-person Emergency Services Unit team with harnesses to prevent suicides from the bridge.
Controversies and protests
On September 9, 2013, dedicated toll lanes for one of the local Fort Lee entrances to the bridge's upper level were reduced from three to one, with the two other lanes diverted to highway traffic. The closures were made without notification to local government officials and emergency responders. The local toll lane reductions caused massive traffic congestion, with major delays for school transportation and police and emergency service responses within Fort Lee. The lanes were reopened by the Port Authority on September 13. After a four-month investigation, it was revealed that the lane closures were made by the aides and appointees of New Jersey Governor Chris Christie, causing a political scandal. The repercussions and controversy surrounding these actions have been investigated by the Port Authority, federal prosecutors, and a New Jersey legislature committee.
On September 12, 2020, a hundred anti-police brutality protesters from the Black Lives Matter movement converged from both New York and New Jersey, blocking the upper level of the bridge for about an hour before walking to the New York City Police Department's 34th Precinct in Manhattan.
Other incidents
On December 28, 1966, a 19-year-old pilot made an emergency landing on the bridge's New Jersey side after his plane's engine failed. There were no deaths reported, because there was very little traffic at the time, but the pilot and his passenger were injured. At the time, there was no median barrier on the bridge's upper deck.
In June 1977, two tractor-trailers nearly fell off the lower level after jackknifing, then going through both the roadway barrier and a mesh net next to the roadway. One of the drivers was hurt slightly, while the other driver was not hurt. The accident also involved a third tractor-trailer and two passenger cars, none of whose occupants were hurt. Accidents involving trucks dumping their cargo have also occurred on the George Washington Bridge. Watermelons, frozen chicken parts, and horse manure have all fallen onto the bridge's roadway at some point.
The first-ever complete closure of the George Washington Bridge occurred on August 6, 1980, when a truck carrying highly flammable propane gas across the bridge started to leak. As a safety precaution in case the fuel started to ignite, traffic across the bridge was halted for several hours, and 2,000 people living near the bridge were evacuated. Since the George Washington Bridge is the primary crossing between New Jersey and New York City, the closure caused traffic jams that stretched for up to , and the effects of this congestion could be seen more than away. Two police officers eventually plugged the leak with an inexpensive device. Up to that point, trucks carrying flammable material had been allowed to use the George Washington Bridge. After the incident, New York City officials conducted a study on whether to prohibit hazardous cargo from traveling through the city.
During the terrorist attacks on September 11, 2001, several news organizations, including CNN, reported that a vehicle filled with explosives had been found on the lower level of the bridge. However, several investigations found no evidence of a vehicle containing explosives on the bridge.
In popular culture
The bridge is seen in a number of movies set in New York:
Ball of Fire (1941) was the first film to show the bridge.
In Force of Evil (1948), Leo Morse is buried under the bridge by the mob of gangsters employing his brother Joe.
In How to Marry a Millionaire (1953), Loco and Brewster are fêted as being in the 50 millionth car to cross the bridge as part of the "George Washington Bridge Week" festivities.
In Network (1976), Schumacher tells a story in which, having overslept for a news shoot about the bridge's new lower deck, he gets into a cab wearing a raincoat over his pajamas and tells the driver to take him to the middle of the bridge. The taxi driver, concerned that Schumacher intends to jump, begs him: "Don't do it buddy! You're a young man!"
Sully (2016) reenacts how Sullenberger overflew the bridge by a few hundred feet.
The bridge was also shown in The Godfather (1972) and Cop Land (1997).
The bridge has been featured in music. In the opening singalong for Sesame Street, Ernie sang the words "George Washington Bridge" to the tune of Sobre las Olas ("The Loveliest Night of the Year"). In addition, William Schuman composed a 1950 work titled George Washington Bridge. Nina Rosario sings "Just me and the GWB asking, 'Gee, Nina, what'll you be?'" in "Breathe" from In the Heights.
In visual art, the first issue of the comic Atomic War!, published in November 1952, shows the George Washington Bridge collapsing during a bombing of New York City. Additionally, painters George Ault and Valeri Larko have both created artworks named after the bridge. Video games such as Metal Gear Solid 2: Sons of Liberty have also depicted the George Washington Bridge.
The construction of the bridge is detailed in George Washington Bridge: A Timeless Marvel and George Washington Bridge: Poetry in Steel. The bridge and the nearby Little Red Lighthouse are the subjects of Hildegarde Swift's 1942 children's book The Little Red Lighthouse and the Great Gray Bridge.
| Technology | Bridges | null |
186716 | https://en.wikipedia.org/wiki/Intercity%20Express | Intercity Express | Intercity Express (commonly known as ICE, the train category under which these services run) is a high-speed rail system in Germany. It also serves destinations in Austria, France, Belgium, Switzerland and the Netherlands as part of cross-border services. It is the flagship of the German state railway, Deutsche Bahn. ICE fares are fixed for station-to-station connections, on the grounds that the trains have a higher level of comfort. Travelling at speeds up to within Germany and when in France, they are aimed at business travellers and long-distance commuters and marketed by Deutsche Bahn as an alternative to flights.
The ICE 3 has also been the development base for the Siemens Velaro family of trainsets, which has subsequently been exported to RENFE in Spain (the AVE Class 103, certified to run at speeds up to ), as well as versions ordered by China for the Beijing–Tianjin intercity railway link (CRH 3) and by Russia for the Moscow–Saint Petersburg and Moscow–Nizhny Novgorod routes (Velaro RUS), with further customers including Eurostar, Turkey and Egypt.
History
The Deutsche Bundesbahn started a series of trials in 1985 using the InterCityExperimental (also called ICE-V) test train. The InterCityExperimental was used as a showcase train and for high-speed trials, setting a new world speed record of 406.9 km/h (253 mph) on 1 May 1988.
The train was retired in 1996 and replaced with a new trial unit, called the ICE S.
After extensive discussion between the Bundesbahn and the Ministry of Transport regarding onboard equipment, length and width of the train and the number of trainsets required, a first batch of 41 units was ordered in 1988. The order was extended to 60 units in 1990, with German reunification in mind. However, not all trains could be delivered in time.
The ICE network was officially inaugurated on 29 May 1991 with several vehicles converging on the newly built station Kassel-Wilhelmshöhe from different directions.
In 2007, a line between Paris and Frankfurt/Stuttgart opened, jointly operated by ICE and SNCF's TGV.
Equipment
ICE livery
A notable characteristic of the ICE trains is their colour design, which has been registered by the DB as an aesthetic model and hence is protected as intellectual property. The trains are painted in Pale Grey (RAL 7035) with a Traffic Red (RAL 3020) stripe on the lower part of the vehicle. The continuous black band of windows and their oval door windows differentiate the ICEs from any other DB train.
The ICE 1 and ICE 2 units originally had an Orient Red (RAL 3031) stripe, accompanied by a Pastel Violet stripe below (RAL 4009, 26 cm wide). These stripes were repainted with the current Traffic Red between 1998 and 2000, when all ICE units were being checked and repainted in anticipation of the EXPO 2000.
The "ICE" lettering uses the colour Agate Grey (RAL 7038), the frame is painted in Quartz Grey (RAL 7039). The plastic platings in the interior all utilise the Pale Grey (RAL 7035) colour tone.
Originally, the ICE 1 interior was designed in pastel tones with an emphasis on mint, following the DB colour scheme of the day. However, ICE 1 trains were refurbished in the mid-2000s and now follow the same design as the ICE 3, which makes heavy usage of indirect lighting and wooden furnishings.
The distinctive ICE design was developed by a team of designers around Alexander Neumeister in the early 1980s and first used on the InterCityExperimental (ICE V). The team around Neumeister then designed the ICE 1, ICE 2, and ICE 3/T/TD. The interior of the trains was designed by Jens Peters working for BPR-Design in Stuttgart. Among others, he was responsible for the heightened roof in the restaurant car and the special lighting. The same team also developed the design for the now discontinued InterRegio trains in the mid-1980s.
Overview
First generation
The first ICE trains were the trainsets of ICE 1 (power cars: Class 401), which came into service in 1989. The first regularly scheduled ICE trains ran from 2 June 1991 from Hamburg-Altona via Hamburg Hbf–Hannover Hbf–Kassel-Wilhelmshöhe–Fulda–Frankfurt Hbf–Mannheim Hbf and Stuttgart Hbf toward München Hbf at hourly intervals on the new ICE line 6. The Hanover-Würzburg line and the Mannheim-Stuttgart line, which had both opened the same year, were hence integrated into the ICE network from the very beginning.
Due to the lack of trainsets in 1991 and early 1992, the ICE line 4 (Bremen Hbf–Hannover Hbf–Kassel-Wilhelmshöhe–Fulda–Würzburg Hbf–Nürnberg Hbf–München Hbf) could not start operating until 1 June 1992. Prior to that date, ICE trainsets were used when available and were integrated in the Intercity network and with IC tariffs.
In 1993, the ICE line 6's terminus was moved from Hamburg to Berlin (later, in 1998, via the Hanover-Berlin line and the former IC line 3 from Hamburg-Altona via Hannover Hbf–Kassel-Wilhelmshöhe–Fulda–Frankfurt Hbf–Mannheim Hbf–Karlsruhe Hbf–Freiburg Hbf to Basel SBB was upgraded to ICE standards as a replacement).
Second generation
From 1997, the successor ICE 2 trains, pulled by Class 402 powerheads, were put into service. One of the goals of the ICE 2 was to improve load balancing by building smaller train units which could be coupled or detached as needed.
These trainsets were used on the ICE line 10 Berlin-Cologne/Bonn. However, since the driving van trailers of the trains were still awaiting approval, the DB joined two portions (with one powerhead each) to form a long train, similar to the ICE 1. Only from 24 May 1998 were the ICE 2 units fully equipped with driving van trailers and could be portioned on their run from Hamm via either Dortmund Hbf–Essen Hbf–Duisburg Hbf–Düsseldorf Hbf or Hagen Hbf–Wuppertal Hbf–Solingen-Ohligs.
In late 1998, the Hanover–Berlin high-speed railway was opened as the third high-speed line in Germany, cutting travel time on line 10 (between Berlin and the Ruhr valley) by 2½ hours.
The ICE 1 and ICE 2 trains' loading gauge exceeds that recommended by the international railway organisation UIC. Even though the trains were originally to be used only domestically, some units are licensed to run in Switzerland and Austria. Some ICE 1 units have been equipped with an additional smaller pantograph to be able to run on the different Swiss overhead wire geometry.
All ICE 1 and ICE 2 trains are single-voltage 15 kV AC, which restricts their radius of operation largely to the German-speaking countries of Europe. ICE 2 trains can run at a top speed of 280 km/h (174 mph).
Third generation
To overcome the restrictions imposed on the ICE 1 and ICE 2, their successor, the ICE 3, was built to a smaller loading gauge to permit usability throughout the entire European standard gauge network, with the sole exception being the UK's domestic railway network. Unlike their predecessors, the ICE 3 units are built not as trains with separate passenger and power cars, but as electric multiple units with underfloor motors throughout. This also reduced the load per axle and enabled the ICE 3 to comply with the pertinent UIC standard.
Initially two different classes were developed: the Class 403 (domestic ICE 3) and the Class 406 (ICE 3M), the M standing for Mehrsystem (multi-system). Later came Class 407 and Class 408. The trains were labelled and marketed as the Velaro by their manufacturer, Siemens.
Just like the ICE 2, the ICE 3 and the ICE 3M were developed as short trains (when compared to an ICE 1), and are able to travel in a system where individual units run on different lines, then being coupled to travel together. Since the ICE 3 trains are the only ones able to run on the Köln-Frankfurt high-speed line with its 4.0% incline at the allowed maximum speed of 300 km/h, they are used predominantly on services that utilise this line.
In 2009 Deutsche Bahn ordered another 16 units – worth € 495 million – for international traffic, especially to France.
The Erfurt–Leipzig/Halle high-speed railway, which opened in December 2015, is one of three lines in Germany (the others being the Nuremberg-Ingolstadt high-speed rail line and Cologne–Frankfurt high-speed rail line) that are equipped for a line speed of . Since only 3rd generation ICE trains can travel at this speed, the ICE line 41, formerly running from Essen Hbf via Duisburg Hbf–Frankfurt Südbf to Nürnberg Hbf, was extended over the Nuremberg-Ingolstadt high-speed rail line and today the service run is Oberhausen Hbf–Duisburg Hbf–Frankfurt Hbf–Nürnberg Hbf–Ingolstadt Hbf–München Hbf.
The ICE 3 runs at speeds up to on the LGV Est railway Strasbourg–Paris in France.
A new-generation ICE 3, the Class 407, is part of the Siemens Velaro family with the model designation Velaro D. It currently runs on many services in Germany and through to other countries such as France. Initially this train type was meant to operate the planned Deutsche Bahn services through the Channel Tunnel to London. As the trains had not received certification for running in Belgium, and due to competition from budget airlines, the London service was cancelled.
In 2020, Deutsche Bahn placed an order with Siemens for 30 trains, with options for another 60, of the Velaro design based on the previously procured ICE Class 407. Referred to by Siemens as the Velaro MS ("multi-system"), these trains are called ICE 3neo by Deutsche Bahn and classified as Class 408. The trains are designed for operation at 320 km/h and were deployed at the end of 2022 on routes that use the Cologne–Frankfurt high-speed line, which is designed for operation at 300 km/h. After a production time of only 12 months, including trial runs, the first train was presented to journalists in February 2022. On that occasion the order was increased by 43 trainsets, with all 73 trains expected to be in service by early 2029. In May 2023, Deutsche Bahn announced that it was exercising the option for the last 17 trains, bringing the total order up to 90 trains.
Fourth generation
Procurement of ICx trainsets started around 2008 as replacements for locomotive-hauled InterCity and EuroCity train services; the scope was later expanded to include replacements for ICE 1 and ICE 2 trainsets. In 2011, Siemens was awarded the contract for 130 seven-car intercity train replacements and 90 ten-car ICE train replacements, plus further options; the contract for the ten-car sets was modified in 2013 to extend the trainset length to twelve vehicles. The name ICx was used for the trains during the initial stages of the procurement; in late 2015, at the unveiling of the first trainset, the trains were rebranded ICE 4 and given the class designation 412 by Deutsche Bahn.
Two pre-production trainsets were manufactured and used for testing prior to the introduction of the main series.
ICE T and ICE TD
Simultaneously with the ICE 3, Siemens developed trains with tilting technology, using much of the ICE 3 technical design. The class 411 (seven cars) and 415 (five cars) ICE T EMUs and class 605 ICE TD DMUs (four cars) were built with a similar interior and exterior design. They were specially designed for older railway lines not suitable for high speeds, for example the twisting lines in Thuringia. ICE-TD has diesel traction. ICE-T and ICE-TD can be operated jointly, but this is not done routinely.
ICE T
A total of 60 class 411 and 11 class 415 have been built so far (units built after 2004 belong to the modified second generation ICE-T2 batch). Both classes work reliably. Austria's ÖBB purchased three units in 2007, operating them jointly with DB. Even though DB assigned the name ICE-T to class 411/415, the T originally did not stand for tilting, but for Triebwagen (railcar), as DB's marketing department at first deemed the top speed too low for assignment of the InterCityExpress brand and therefore planned to refer to this class as IC-T (InterCity-Triebwagen).
The trainsets of the T series were manufactured in 1999. The tilting system has been provided by Fiat Ferroviaria, now part of Alstom. ICE T trains can run at speeds of up to 230 km/h (143 mph).
ICE TD
Deutsche Bahn ordered 20 units of ICE-T with diesel engines in 2001, called Class 605 ICE-TD. The ICE-TD was intended for certain routes without electric overhead cables such as Dresden-Munich and Munich-Zürich lines. However, the Class 605 trains (ICE-TD) experienced many technical issues and unanticipated escalation in operating cost due to the diesel fuel being fully taxed in Germany. They were taken off revenue service shortly after delivery. During the 2006 FIFA World Cup, the ICE-TD trains were pressed temporarily into supplementary service for transporting fans between cities in Germany.
At the end of 2007, ICE-TD trains were put into revenue service for the lines between Hamburg and Copenhagen as well as Hamburg and Aarhus. A large part of the Danish railway network had not been electrified so DSB (Danish State Railways) used the diesel-powered trains. When DSB ordered the new IC4 train sets, the company did not anticipate the long delay with the delivery and the technical issues with the train sets. To compensate for the shortage of available trains, DSB leased the ICE-TD while the delivery and technical issues with IC4 were being addressed. The operating cost was much lower due to the lower fuel tax in Denmark. After the issues with IC4 were resolved, the ICE-TD fleet was removed from revenue service and stored.
Deutsche Bahn retired the entire ICE TD fleet in 2018.
Differences in train layouts
Trainset numbers
While every car in an ICE train has its own unique registration number, the trains usually remain coupled as fixed trainsets for several years. For easier reference, each has been assigned a trainset number that is printed over each bogie of every car. These numbers usually correspond with the registration numbers of the powerheads or cab cars.
Interior equipment
The ICE trains adhere to a high standard of technology: all cars are fully air-conditioned and nearly every seat features a headphone jack which enables the passenger to listen to several on-board music and voice programmes as well as several radio stations. Some seats in the 1st class section (in some trains also in 2nd class) are equipped with video displays showing movies and pre-recorded infotainment programmes. Each train is equipped with special cars that feature in-train repeaters for improved mobile phone reception as well as designated quiet zones where the use of mobile phones is discouraged. The newer ICE 3 trains also have larger digital displays in all coaches, displaying, among other things, Deutsche Bahn advertising, the predicted arrival time at the next destination and the current speed of the train.
The ICE 1 was originally equipped with a passenger information system based on BTX; however, this system was eventually taped over and removed during the later refurbishment. The ICE 3 trains feature touch-screen terminals in some carriages, enabling travellers to print train timetables. The same system is also located in the restaurant car of the ICE 2.
The ICE 1 fleet saw a major overhaul between 2005 and 2008, supposed to extend the lifetime of the trains by another 15 to 20 years. Seats and the interior design were adapted to the ICE 3 design, electric sockets were added to every seat, the audio and video entertainment systems were removed and electronic seat reservation indicators were added above the seats. The ICE 2 trains have been undergoing the same procedure since 2010.
ICE 2 trains feature electric sockets at selected seats, ICE 3 and ICE T trains have sockets at nearly every seat.
The ICE 3 and ICE T are similar in their interior design, but the other ICE types differ in their original design. The ICE 1, the ICE 2 and the seven-car ICE T (Class 411) are equipped with a full restaurant car. The five-car ICE T (Class 415) and the ICE 3, however, were designed without a restaurant; they feature a bistro coach instead. Since 1 October 2006, smoking has been prohibited in the bistro coaches, as in the restaurant cars, which have always been non-smoking.
All trains feature a toilet for disabled passengers and wheelchair spaces. The ICE 1 and ICE 2 have a special conference compartment whilst the ICE 3 features a compartment suitable for small children. The ICE 3 and ICE T omit the usual train manager's compartment and have an open counter named "ServicePoint" instead.
An electronic display above each seat indicates the locations between which the seat has been reserved. Passengers without reservations are permitted to take seats with a blank display or seats with no reservation on the current section.
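The seat-display rule described above amounts to an interval-overlap check along the route. The sketch below is a minimal illustration under assumed station names and data types; it is not derived from any real Deutsche Bahn reservation system.

```python
# Minimal sketch of the seat-display rule described above. The route,
# the Reservation type, and seat_is_free() are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

ROUTE = ["Hamburg", "Hannover", "Fulda", "Frankfurt", "Mannheim", "Stuttgart"]

@dataclass
class Reservation:
    start: str  # station where the reservation begins
    end: str    # station where the reservation ends

def seat_is_free(reservation: Optional[Reservation], current: str, destination: str) -> bool:
    """A passenger without a reservation may take the seat if the display is
    blank or the reserved segment does not overlap the passenger's own leg."""
    if reservation is None:  # blank display
        return True
    i = ROUTE.index
    # Two half-open intervals along the route overlap unless one ends
    # before the other starts.
    return i(destination) <= i(reservation.start) or i(current) >= i(reservation.end)

print(seat_is_free(None, "Fulda", "Mannheim"))                                  # True
print(seat_is_free(Reservation("Hamburg", "Hannover"), "Fulda", "Mannheim"))    # True
print(seat_is_free(Reservation("Fulda", "Stuttgart"), "Hannover", "Frankfurt")) # False
```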
Maintenance
The maintenance schedule of the trains is divided into seven steps (a schematic lookup of these intervals is sketched after the list):
Every 4,000 kilometres, an inspection taking about 1½ hours is undertaken. The waste collection tanks are emptied and fresh water tanks are refilled. Acute defects (e.g. malfunctioning doors) are rectified. Safety tests are also conducted. These include checking the pantograph pressure, cleaning and checking for fissures in the rooftop insulators, inspecting transformers and checking the pantograph's current collector for wear. The wheels are also checked in this inspection.
Every 20,000 kilometres, a 2½ hour inspection is conducted, called Nachschau. In this inspection, the brakes, the Linienzugbeeinflussung systems and the anti-lock brakes are checked.
After 80,000 kilometres, the train undergoes the Inspektionsstufe 1. During the two modules, each lasting eight hours, the brakes receive a thorough check, as well as the air conditioning and the kitchen equipment. The batteries are checked, as well as the seats and the passenger information system.
Once the train has reached 240,000 kilometres, the Inspektionsstufe 2 mandates a check of the electric motors, the bearings and the driveshafts of the bogies and the couplers. This inspection is usually carried out in two modules taking eight hours each.
About once a year (when reaching 480,000 km), the Inspektionsstufe 3 takes place, at three times eight hours each. In addition to the other checkup phases, it includes checks on the pneumatics systems, and the transformer cooling. Maintenance work is performed inside the passenger compartment.
The 1st Revision is carried out after 1.2 million km. It includes a thorough check of all components of the train and is carried out in two five-day segments.
The seventh and final step is the 2nd Revision, which happens when reaching 2.4 million kilometres. The bogies are exchanged for new ones and many components of the train are disassembled and checked. This step also takes two five-day segments.
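As a rough illustration only, the seven mileage intervals listed above can be expressed as a simple lookup that reports which inspections fall due at a given mileage. The names and structure below are assumptions for the sketch, not any actual Deutsche Bahn tooling.

```python
# Schematic lookup of the inspection intervals listed above (illustrative only).
INSPECTION_INTERVALS_KM = [
    (4_000,     "Basic inspection (~1.5 h)"),
    (20_000,    "Nachschau (~2.5 h)"),
    (80_000,    "Inspektionsstufe 1"),
    (240_000,   "Inspektionsstufe 2"),
    (480_000,   "Inspektionsstufe 3"),
    (1_200_000, "1st Revision"),
    (2_400_000, "2nd Revision"),
]

def inspections_due(mileage_km: int) -> list[str]:
    """Return every inspection whose interval divides the current mileage;
    in practice the most thorough applicable step subsumes the lighter ones."""
    return [name for interval, name in INSPECTION_INTERVALS_KM
            if mileage_km % interval == 0]

print(inspections_due(240_000))
# ['Basic inspection (~1.5 h)', 'Nachschau (~2.5 h)', 'Inspektionsstufe 1', 'Inspektionsstufe 2']
```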
Maintenance on the ICE trains is carried out in special ICE workshops located in Basel, Berlin, Cologne, Dortmund, Frankfurt, Hamburg, Leipzig and Munich. The train is worked upon at up to four levels at a time and fault reports are sent to the workshops in advance by the on-board computer system to minimize maintenance time.
Lines in operation
Lines under construction
Stuttgart–Wendlingen high-speed railway (new line, 250 km/h, under construction)
Vogelfluglinie (partially new line, partially being upgraded)
Lübeck–Hamburg railway (German part, to be upgraded to reach 200 km/h)
Lübeck–Puttgarden railway (German part, to be electrified to reach 200 km/h up from the current 160 km/h, under construction)
Fehmarn Belt Fixed Link (tunnel part, will replace the Rødby–Puttgarden ferry, 200 km/h, under construction, completion expected in 2028)
Sydbanen (Danish part, new tracks to be laid by 2021, to be electrified to reach 200 km/h by 2024, under construction)
Lines planned
Frankfurt–Mannheim high-speed railway (new line, 300 km/h, in planning)
Hanau-Gelnhausen high-speed railway (new line, 300 km/h, in planning)
Dresden–Prague high-speed line (new line, 300 km/h, in planning)
Nuremberg-Würzburg high-speed railway (new line, 300 km/h, in planning)
Hanover-Hamburg high-speed railway/Hanover-Bremen high-speed railway (Y-shaped, partially new line, 160 and 300 km/h on new sections, 160 km/h on an existing section, in planning)
Fulda–Eisenach high-speed railway (250 km/h, 52 km, planned completion 2030)
Fulda–Frankfurt (parallel new) high-speed railway (250 km/h, 80 km, planned completion 2035)
Route planning and network layout
The ICE system is a polycentric network. Connections are offered in either 30-minute, hourly or bi-hourly intervals. Furthermore, additional services run during peak times, and some services call at lesser stations during off-peak times.
Unlike the French TGV or the Japanese Shinkansen systems, the vehicles, tracks and operations were not designed as an integrated whole; rather, the ICE system has been integrated into Germany's pre-existing system of railway lines instead. One of the effects of this is that the ICE 3 trains can reach a speed of 300 km/h (186 mph) only on some stretches of line and cannot currently reach their maximum allowed speed of 330 km/h on German railway lines (though a speed of 320 km/h is reached by ICE 3 in France).
The line most heavily utilised by ICE trains is the Mannheim–Frankfurt railway between Frankfurt and Mannheim due to the bundling of many ICE lines in that region. When considering all traffic (freight, local and long-distance passenger), the busiest line carrying ICE traffic is the Munich–Augsburg line, carrying about 300 trains per day.
North–south connections
The network's main backbone consists of six north–south lines:
from Hamburg-Altona via Hamburg, Hannover, Kassel, Fulda, Frankfurt, Mannheim, Karlsruhe and Freiburg to Basel (line 20) or continuing from Mannheim to Stuttgart (line 22)
from Hamburg-Altona and Hamburg and from Bremen to Hannover (where portions are joined) and via Kassel, Fulda and Würzburg to Nuremberg and via Ingolstadt to Munich (line 25)
from Hamburg-Altona via Hamburg, Berlin-Spandau, Berlin, Berlin Südkreuz, Leipzig or Halle, Erfurt to Nuremberg and via Augsburg or Ingolstadt to Munich (lines 18, 28 and 29) or continuing from Erfurt via Fulda, Frankfurt, Stuttgart, Ulm and Augsburg to Munich (line 11)
from Berlin via Berlin-Spandau, Braunschweig, Kassel, Fulda, Frankfurt, Mannheim, Karlsruhe and Freiburg to Basel (line 12) or via Fulda and Frankfurt Süd to Frankfurt Airport (line 13)
from Amsterdam or Dortmund via Duisburg, Düsseldorf, Cologne and Frankfurt Airport to Mannheim and either via Karlsruhe and Freiburg to Basel (line 43) or via Stuttgart, Ulm and Augsburg to Munich (lines 42 and 47)
from Essen via Cologne, Frankfurt, Würzburg, Nuremberg and Ingolstadt to Munich (line 41)
(Also applies to trains in the opposite directions, taken from 2024 network map)
East–west connections
Furthermore, the network has three main east–west thoroughfares:
from Berlin Gesundbrunnen via Berlin, Hannover, Bielefeld to Hamm (where train portions are split) and continuing either via Dortmund, Essen, Duisburg and Düsseldorf to Cologne/Bonn Airport or via Hagen and Wuppertal to Cologne (10, 19)
from Dresden via Leipzig, Erfurt, Fulda, Frankfurt, Frankfurt Airport and Mainz to Wiesbaden (50)
from Karlsruhe via Stuttgart, Ulm, and Augsburg to Munich (60)
(Also applies to trains in the opposite directions, taken from 2023 network map)
German branch lines
Some train lines extend past the core network and branch off to serve the following connections:
from Berlin to Rostock (line 28, individual services)
from Berlin to Stralsund (line 28, individual services)
from Hamburg to Lübeck (line 25, individual services)
from Hamburg to Kiel (lines 20, 22, 28 and 31, individual services)
from Bremen to Oldenburg (lines 10, 22 and 25, individual services)
from Leipzig via Hanover to Cologne (line 50, individual services)
from Leipzig via Kassel to Düsseldorf (line 50, individual services)
from Würzburg via Kassel to Essen (line 41, individual services)
from Munich to Garmisch-Partenkirchen (lines 25, 28 and 41, individual services)
from Nuremberg via Regensburg to Passau (line 91, every two hours)
(Also applies to trains in the opposite directions)
ICE Sprinter
"ICE Sprinter" trains are services with fewer stops between Germany's major cities, running in the morning and evening hours. They are tailored to business travellers and long-distance commuters and are marketed by DB as an alternative to domestic flights. Some Sprinter services continue as normal ICE services after reaching their destination. A Sprinter is usually half an hour faster than a standard ICE between the same cities.
A reservation was mandatory on the ICE Sprinter until December 2015.
The first Sprinter service was established between Munich and Frankfurt in 1992. Frankfurt-Hamburg followed in 1993 and Cologne-Hamburg in 1994. This service ran as a Metropolitan service between December 1996 and December 2004. In 1998, a Berlin-Frankfurt service was introduced and a service between Cologne and Stuttgart ran between December 2005 and October 2006.
Until December 2006, a morning Sprinter service ran between Frankfurt and Munich (with an intermediate stop at Mannheim), taking 3:25 hours for the journey. This has since been replaced by a normal ICE connection taking 3:21 hours.
Starting with the December 2017 schedule change, a new Sprinter line links Berlin main station and Munich main station in less than four hours.
, the individual ICE Sprinter lines are:
(Source: Deutsche Bahn AG)
Line segments abroad
Some ICE trains also run on services abroad – sometimes diverting from their original lines.
from Frankfurt Hbf to Amsterdam Centraal (Netherlands) via Köln Hbf, Duisburg Hbf and Arnhem Centraal (lines 120, 122, 128, 126, 220, 222/121, 123, 127, 129, 221)
from Frankfurt Hbf to Bruxelles-Midi/Brussel-Zuid (Belgium) via Köln Hbf, Aachen Hbf and Liège-Guillemins (lines 10, 12, 14, 18, 16, 314, 316/11, 13, 15, 17, 19, 315, 317)
from Frankfurt Hbf and Stuttgart Hbf to Paris Est (France) via Karlsruhe Hbf and Strasbourg-Ville (lines 9568, 9572, 9574/9563, 9567, 9571, 9573)
from Frankfurt Hbf to Paris Est via Saarbrücken Hbf (lines 9550, 9556, 9558/9553, 9555)
from Hamburg-Altona, Kiel Hbf and Ostseebad Binz to Basel SBB and Zürich HB (Switzerland) via Frankfurt Hbf and Karlsruhe (lines 73, 75, 77, 79, 203, 205/72, 74, 76, 78, 272)
from Frankfurt Hbf and Hamburg-Altona to Chur (Switzerland) via Karlsruhe, Basel SBB and Zürich HB (lines 271, 1271/70)
from Frankfurt Hbf, Berlin Hbf and Hamburg-Altona station to Interlaken (Switzerland) via Karlsruhe and Basel SBB (lines 275, 373/278, 376)
from Frankfurt Hbf and Dortmund Hbf to Wien Hbf (Austria) via Nürnberg Hbf, Passau Hbf and Linz Hbf (lines 21, 23, 27, 29, 229/20, 22, 26, 28, 228)
from Hamburg-Altona to Wien Hbf (Austria) via Berlin Hbf, Nürnberg Hbf, Passau Hbf and Linz Hbf (lines 93, 95/92, 94)
from Dortmund Hbf to Innsbruck Hbf (Austria) via Köln Hbf, Stuttgart Hbf, and Bregenz Hbf (line 119), return schedule terminates at Münster Hbf (line 118)
from Berlin Hbf to Innsbruck Hbf (Austria) via Würzburg Hbf, Stuttgart Hbf, Lindau-Reutin and Bregenz Hbf (alternative route for lines 118/119)
from Münster Hbf to Klagenfurt Hbf (Austria) via Köln, Stuttgart, Munich Hbf and Salzburg Hbf (line 115), return schedule terminates at Dortmund Hbf (line 114)
from Berlin Hbf to Innsbruck Hbf (Austria) via Frankfurt, Stuttgart and Munich (line 1211/1218)
from Amsterdam Centraal (Netherlands) to Basel SBB (Switzerland) via Frankfurt Hbf (line 105/104)
from Munich to Zürich via Lindau-Reutin (operated by Swiss Federal Railways as EuroCity-Express)
(The same applies to trains running in the opposite direction.)
From December 2006, Stuttgart Hbf and Zürich HB were linked by a bi-hourly ICE service; in March 2010, however, it was replaced by a daily Intercity service.
The ÖBB in Austria also uses two ICE T trainsets (classified as ÖBB Class 4011) between Wien Hauptbahnhof, Innsbruck Hauptbahnhof and Bregenz (without stops in Germany), although they do not use their tilting technology. Since December 2007, ÖBB and DB have offered a bi-hourly connection between Wien Westbf and Frankfurt Hbf. On 12 December 2021, ÖBB introduced a new Railjet service between Frankfurt and Vienna on a different route via Stuttgart, Ulm, Biberach, Friedrichshafen, Lindau, Bregenz and Innsbruck. Since 11 June 2023, Deutsche Bahn has operated a new ICE service over the same route between Dortmund and Innsbruck using ICE 4 trains, replacing the IC 118/119 service.
Since June 2007, ICE 3M trains have been running between Frankfurt Hbf and Paris Est via Saarbrücken and Kaiserslautern. Together with the TGV-operated service between Paris Est and München Hbf via Stuttgart Hbf (TGV 9576/9577), this ICE line was part of the "LGV Est européenne", also called "Paris-Ostfrankreich-Süddeutschland" (POS) for short, a pan-European high-speed line between France and Germany. This service now co-exists with a direct TGV service (TGV 9551/9552).
From late 2007, ICE TD trains linked Berlin Hbf with Copenhagen and Aarhus via Hamburg Hbf. These services have been operated since December 2017 by Danish IC3 sets as EuroCity services.
A EuroCity-Express service was introduced between Munich and Zürich in December 2020 with the completion of the electrification of the line in Germany, replacing a EuroCity service. Six pairs of trains run every two hours and are operated by Swiss Federal Railways with Alstom ETR 610 (Astoro) sets.
In addition, ICE trains to London via the Channel Tunnel have been proposed. Unique safety and security requirements for the tunnel (such as airport-style checks at stations), as well as hold-ups in the production of the Velaro-D trains to be used on the route, have delayed these plans.
Intra-Swiss ICE trains
To avoid empty runs or excess waits, several services exist that operate exclusively inside Switzerland:
three services from Basel SBB to Interlaken Ost
two services from Basel SBB to Zürich HB
three services from Interlaken Ost to Basel SBB
one service from Interlaken Ost to Bern
two services from Zürich HB to Basel SBB
one service from Bern to Interlaken Ost
These trains, despite officially being designated as ICEs, are more comparable to a Swiss InterRegio or RegioExpress train, calling at smaller intermediate stations. As is common in Switzerland, they can be used without paying a supplement.
Travel times
Accidents
There have been several accidents involving ICE trains. The Eschede disaster was the only accident with fatalities inside the train, but other accidents have resulted in major damage to the trainsets involved.
Eschede disaster
The ICE accident near Eschede that happened on 3 June 1998 was a severe railway accident. Trainset 51, travelling as ICE 884 Wilhelm Conrad Röntgen from Munich to Hamburg, derailed at 200 km/h (124 mph), killing 101 people and injuring 88. It remains the world's worst high-speed rail disaster.
The cause of the accident was a wheel rim that broke and damaged the train six kilometres south of the accident site. The wheel rim penetrated the carriage floor and lifted the check rail of a set of points close to Eschede station. This rail also penetrated the floor of the car, becoming embedded in the vehicle and lifting the nearby wheels off the track. One of the now-derailed wheels struck the lever of the following set of points, changing their setting, and the rear cars of the trainset were diverted onto a different track. They hit the pillars of a street overpass, which then collapsed onto the tracks. Only three cars and the front powerhead passed under the bridge; the rest of the 14-car train jack-knifed into the collapsed structure.
Other accidents
On 27 September 2001, trainset 5509 fell off a work platform at the Hof maintenance facility and was written off.
On 22 November 2001, powerhead 401 020 caught fire. The train was stopped at the station in Offenbach am Main near Frankfurt a.M. No passengers were harmed, but the fire caused the powerhead to be written off.
On 6 January 2004, ICE TD trainset 1106 caught fire while it was parked at Leipzig. Two cars were written off, and the others are now used as spares.
On 1 April 2004, trainset 321 collided with a tractor that had fallen onto the track at a tunnel entrance near Istein, and was derailed. No-one was injured. Trainset 321 was temporarily taken apart, its cars being switched with cars from other ICE 3 trainsets.
Powerhead 401 553 suffered major damage in a collision with a car on the Mannheim–Frankfurt railway in April 2006.
On 28 April 2006, trainset 73 collided head-on with two BLS Re 465 locomotives at Thun in Switzerland. The driver of the Swiss locomotives was unfamiliar with the new layout of the station, which had been recently changed. He did not see a shunting signal ordering him to stop. The locomotives automatically engaged the emergency brakes when he passed the signal, but came to a stop on the same track as the approaching ICE. The ICE was travelling at a speed of 74 km/h. The emergency brake slowed the train to 56 km/h at the point of collision. 30 passengers and the driver of the ICE suffered minor injuries, the driver of the Swiss locomotives having jumped to safety. Both trains suffered major damage. The powerhead 401 573 had to be rebuilt using components from three damaged powerheads (401 573, 401 020 and 401 551).
On 1 March 2008, trainset 1192, travelling as ICE 23, collided with a tree which had fallen onto the track near Brühl after being blown down by Cyclone Emma. The driver suffered severe injuries. The trainset is back in service, its driving-car having been replaced with that from trainset 1106.
On 26 April 2008, trainset 11, travelling as ICE 885, collided with a herd of sheep on the Hanover-Würzburg high-speed rail line near Fulda. Both powerheads and ten of the 12 cars derailed. The train came to a stop 1300 metres into the Landrückentunnel. 19 of the 130 passengers suffered mostly minor injuries, four of them needing hospital treatment.
A cracked axle was blamed for a low-speed derailment of a third-generation ICE in Cologne in July 2008. The accident, in which no-one was hurt, caused DB to recall its newest ICEs as a safety measure. In October 2008, the company recalled its ICE-T trains after a further crack was found.
On 17 April 2010, ICE 105 Amsterdam - Basel lost a door while travelling at high speed near Montabaur. The door slammed into the side of ICE 612 on the adjacent track. Six people travelling on ICE 612 were injured.
On 17 August 2010, the ICE from Frankfurt to Paris hit a truck that had slid from an embankment on to the rail near Lambrecht. The first two carriages derailed and ten people were injured, one seriously.
On 11 January 2011, trainset 4654 partly derailed during a side-on collision with a freight train near Zevenaar in the Netherlands. There were no injuries.
On 2 May 2017, a trainset was derailed at Dortmund Hauptbahnhof. Two people were injured.
On 12 October 2018, two cars of a trainset caught fire while it was traveling from Cologne to Munich on the Cologne-Frankfurt line. Five people suffered minor injuries during the evacuation.
Fare structure
Germany
ICE trains are the highest category (Class A) trains in the fare system of Deutsche Bahn. Their fares are not calculated from a fixed per-kilometre table as with other trains, but are instead fixed prices for station-to-station connections, depending on a multitude of factors including the railway line category and the general demand on the line. Even on lines where the ICE is no faster than an ordinary IC or EC train (for example Hamburg to Dortmund), an additional surcharge is levied on the grounds that ICE trains offer a higher level of comfort than IC/EC trains.
Austria
On the intra-Austrian lines (Vienna-Innsbruck-Bregenz, Vienna-Salzburg(-Munich), Vienna-Passau(-Hamburg) and Innsbruck-Kufstein(-Berlin)) no additional fees are charged.
Switzerland
Likewise, the trains running to and from Zürich, Interlaken and Chur, as well as the intra-Swiss ICE services (see above), can be used without any surcharge.
Netherlands
On ICE trains between Amsterdam and Cologne, passengers travelling nationally within the Netherlands (between Amsterdam Centraal and Arnhem Centraal) can use the national OV-chipkaart scheme but have to purchase a supplement. Passengers travelling into/from Germany have to buy an international ticket.
France
On ICE trains between Paris Est and Frankfurt or Stuttgart, only the fare system from SNCF Voyageurs is used for national trips to Forbach and Strasbourg. Reservation is compulsory for trips to/from and within France.
Scale models
Various ICE scale models have been produced in several scales by Märklin, Fleischmann, Roco, Trix, Mehano, PIKO and Lima.
Possible future service to London
In January 2010, the European railway network was opened to a liberalisation intended to allow greater competition. Both Air France-KLM and Deutsche Bahn have indicated their desire to take advantage of the new laws to run new services via the Channel Tunnel and the High Speed 1 route that terminates at London St Pancras International. A test run of an ICE train through the Channel Tunnel took place on 19 October 2010. Passenger-carrying ICE trains, however, will have to meet safety requirements in order to transit the Channel Tunnel. Although the requirement for splittable trains was lifted, concerns remain over the shorter length of ICE trainsets, fire safety, and the ICE's distributed power arrangements. There have been suggestions that French interests have advocated stringent enforcement to delay a competitor on the route. Eurostar also recently chose Siemens Velaro-based rolling stock; there were concerns that Alstom (builders of the passenger trains that already use the Tunnel) and the French Government would take the matter to court. In October 2010, the French transport minister suggested that the European Railway Agency (based in France) should arbitrate. After safety rule changes which might permit the use of Siemens Velaro rolling stock, the French government dismissed their delegate to the Channel Tunnel Safety Authority, and brought in a replacement.
In March 2011, a European Railway Agency report authorized trains with distributed traction for use in the Channel Tunnel. This means that the ICE Class 407 trains which DB intends to use for its London services will be able to run through the tunnel. In February 2014, however, Deutsche Bahn announced further difficulties with launching the route, and reports made it seem unlikely that the service would start within the decade.
In June 2018, Deutsche Bahn announced that it was shelving plans for a potential London-Frankfurt ICE connection. The service would have taken around five hours and could have rivalled airlines, becoming the first direct competitor to Eurostar.
Ridership
From its inception in July 1991 until 2006, the ICE network carried roughly 550 million passengers, including 67 million in 2005. By 2015, the cumulative total had reached roughly 1.25 billion passengers.
Legacy
On 5 October 2006, the Deutsche Post AG released a series of stamps, among them a stamp picturing an ICE 3, at 55+25 euro cents.
In 2006, Lego modelled one of its train sets after the ICE. A Railworks add-on is available for Train Simulator 2018 that accurately reflects the original 1991 version of the ICE on German tracks (Siegen to Hagen). There is also an add-on covering the Munich-Augsburg line using ICE 3 trainsets. The ICE 3 can also be used in career scenarios on the Mannheim-Karlsruhe route (including the extension to Frankfurt) and on Cologne-Düsseldorf. The ICE T, ICE 2, and ICE TD are also available for purchase as separate vehicles.
| Technology | High-speed rail | null |
186725 | https://en.wikipedia.org/wiki/Agronomy | Agronomy | Agronomy is the science and technology of producing and using plants by agriculture for food, fuel, fiber, chemicals, recreation, or land conservation. Agronomy has come to include research of plant genetics, plant physiology, meteorology, and soil science. It is the application of a combination of sciences such as biology, chemistry, economics, ecology, earth science, and genetics. Professionals of agronomy are termed agronomists.
History
Early humans practiced hunting and gathering, but by around 10,000 BCE they began to domesticate plants such as wheat, barley and rice. This laid the foundation for agriculture.
Ancient civilizations such as the Sumerians, Egyptians and Romans made significant advances in farming. They introduced irrigation systems, crop rotation, and early forms of fertilization.
During this period, agricultural knowledge remained relatively static in Europe, though Islamic scholars made advances in agronomy. Ibn al-'Awwam, a 12th-century Andalusian agronomist, wrote the Kitāb al-Filāḥa, a comprehensive guide on farming practices, crop management and soil conservation.
The Renaissance saw a renewed interest in scientific exploration, including agriculture. Leonardo da Vinci and other scholars contributed to early agronomic theory, studying plant growth, crop rotation, and animal husbandry.
Agronomy emerged as a distinct scientific discipline in the 1800s, driven by advancements in chemistry and biology. The development of scientific methods led to the study of plant physiology, soil chemistry, and the role of fertilizers in crop production. Justus von Liebig, a German chemist, made groundbreaking discoveries about plant nutrition, establishing that plants require specific minerals, such as nitrogen, phosphorus and potassium, for growth.
In the early 20th century, industrialization began transforming agriculture. Mechanization, the development of synthetic fertilizers and pesticides, and improved crop varieties, led to higher agricultural productivity. The Green Revolution (1940s-1960s), led by scientists like Norman Borlaug, introduced high-yield crop varieties and modern farming techniques, helping to avert hunger in many parts of the world.
By the late 20th century, concerns over the environmental impact of industrial agriculture, such as soil degradation, water pollution, and biodiversity loss, led to a push toward sustainable agriculture. Today agronomy continues to adapt to the challenges of climate change, global food security and the need to balance productivity with environmental stewardship.
Plant breeding
This topic of agronomy involves selective breeding of plants to produce the best crops for various conditions. Plant breeding has increased crop yields and has improved the nutritional value of numerous crops, including corn, soybeans, and wheat. It has also resulted in the development of new types of plants. For example, a hybrid grain named triticale was produced by crossbreeding rye and wheat. Triticale contains more usable protein than does either rye or wheat. Agronomy has also been instrumental for fruit and vegetable production research. Furthermore, the application of plant breeding for turfgrass development has resulted in a reduction in the demand for fertilizer and water inputs (requirements), as well as turf-types with higher disease resistance.
Biotechnology
Agronomists use biotechnology to extend and expedite the development of desired characteristics. Biotechnology is often a laboratory activity requiring field testing of new crop varieties that are developed.
In addition to increasing crop yields, agronomic biotechnology is increasingly being applied to novel uses other than food. For example, oilseed is at present used mainly for margarine and other food oils, but it can be modified to produce fatty acids for detergents, substitute fuels and petrochemicals.
Soil science
Agronomists study sustainable ways to make soils more productive and profitable. They classify soils and analyze them to determine whether they contain nutrients vital for plant growth. Common macronutrients analyzed include compounds of nitrogen, phosphorus, potassium, calcium, magnesium, and sulfur. Soil is also assessed for several micronutrients, like zinc and boron. The percentage of organic matter, soil pH, and nutrient holding capacity (cation exchange capacity) are tested in a regional laboratory. Agronomists will interpret these laboratory reports and make recommendations to modify soil nutrients for optimal plant growth.
Soil conservation
Additionally, agronomists develop methods to preserve soil and decrease the effects of erosion by wind and water. For example, a technique known as contour plowing may be used to prevent soil erosion and conserve rainfall. Researchers of agronomy also seek ways to use the soil more effectively in solving other problems, such as the disposal of human and animal manure, water pollution, pesticide accumulation in the soil, and preserving the soil for future generations, for example where paddocks are burned after crop production. Pasture management techniques include no-till farming, planting of soil-binding grasses along contours on steep slopes, and using contour drains of depths of up to 1 metre.
Agroecology
Agroecology is the management of agricultural systems with an emphasis on ecological and environmental applications. This topic is associated closely with work for sustainable agriculture, organic farming, and alternative food systems and the development of alternative cropping systems.
Theoretical modeling
Theoretical production ecology is the quantitative study of the growth of crops. The plant is treated as a kind of biological factory, which processes light, carbon dioxide, water, and nutrients into harvestable products. The main parameters are temperature, sunlight, standing crop biomass, plant production distribution, and nutrient and water supply.
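A minimal illustration of this "biological factory" view is the radiation-use-efficiency approach, in which daily dry-matter gain is proportional to the light intercepted by the canopy. The sketch below is a simplified model under assumed parameter values (the radiation-use efficiency, extinction coefficient, specific leaf area, daily radiation and season length are illustrative placeholders, not figures from this article), and it ignores temperature, water and nutrient limitation:

```python
import math

def daily_biomass_gain(par_mj_m2, lai, rue_g_per_mj=3.0, k=0.6):
    """Daily dry-matter gain (g/m^2) from incident photosynthetically active
    radiation (PAR, MJ/m^2/day), using Beer's-law canopy light interception
    and a constant radiation-use efficiency (RUE)."""
    intercepted = par_mj_m2 * (1.0 - math.exp(-k * lai))  # MJ/m^2 intercepted by the canopy
    return rue_g_per_mj * intercepted                     # grams of dry matter per m^2

# Toy growing-season loop: biomass drives leaf area, leaf area drives light capture.
biomass = 10.0              # g/m^2 of dry matter at emergence (illustrative)
specific_leaf_area = 0.02   # m^2 of leaf per g of biomass (illustrative)
for day in range(100):
    lai = specific_leaf_area * biomass
    biomass += daily_biomass_gain(par_mj_m2=8.0, lai=lai)

print(f"Simulated biomass after 100 days: {biomass:.0f} g/m^2")
```

Fuller theoretical production ecology models extend this skeleton with temperature response functions, water and nutrient stress factors, and rules for partitioning assimilates among leaves, stems, roots and harvestable organs.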
| Technology | Basics_2 | null |
186749 | https://en.wikipedia.org/wiki/Narcissistic%20personality%20disorder | Narcissistic personality disorder | Narcissistic personality disorder (NPD) is a personality disorder characterized by a life-long pattern of exaggerated feelings of self-importance, an excessive need for admiration, and a diminished ability to empathize with other people's feelings. Narcissistic personality disorder is one of the sub-types of the broader category known as personality disorders. It is often comorbid with other mental disorders and associated with significant functional impairment and psychosocial disability.
Personality disorders are a class of mental disorders characterized by enduring and inflexible maladaptive patterns of behavior, cognition, and inner experience, exhibited across many contexts and deviating markedly from those accepted by the individual's culture. These patterns develop by early adulthood and are associated with significant distress or impairment. Criteria for diagnosing personality disorders are listed in the sixth chapter of the International Classification of Diseases (ICD) and in the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders (DSM).
There is no standard treatment for NPD. Its high comorbidity with other mental disorders influences treatment choice and outcomes. Psychotherapeutic treatments generally fall into two categories: psychoanalytic/psychodynamic and cognitive behavioral therapy, with growing support for integration of both in therapy. However, there is an almost complete lack of studies determining the effectiveness of treatments. One's subjective experience of the mental disorder, as well as their agreement to and level of engagement with treatment, are highly dependent on their motivation to change.
Signs and symptoms
Despite outward signs of grandiosity, many people with NPD struggle with symptoms of intense shame, worthlessness, low self-compassion, and self-loathing. Their view of themselves is extremely malleable and dependent on others' opinions of them. They are also hypersensitive to criticism and possess an intense need for admiration. People with NPD gain self-worth and meaning through this admiration. Individuals with NPD are often motivated to achieve their goals, status, improvement, and perfectionism, and to ignore relationships or avoid situations due to fears of incompetence, failure, worthlessness, inferiority, shame, humiliation, and losing control.
People with NPD will try to gain social status and approval in an attempt to avoid and combat these feelings, often by exaggerating their skills, accomplishments, and degree of intimacy with people they consider high-status. Alongside this, they may have difficulty accepting help, vengeful fantasies, and a sense of entitlement, and they may feign humility. They are more likely to try forms of plastic surgery due to a desire to gain attention and to be seen as beautiful. A sense of personal superiority may lead them to monopolize conversations, look down on others or become impatient and disdainful when other persons talk about themselves. Drastic shifts in levels of self-esteem can result in a significantly decreased ability to regulate emotions.
Patients with NPD have an impaired ability to recognize facial expressions or mimic emotions, as well as a lower capacity for emotional empathy and emotional intelligence. However they do not display a compromised capacity for cognitive empathy or an impaired theory of mind, which are the abilities to understand others' feelings and attribute mental states to oneself or others respectively. They may also have difficulty relating to others’ experiences and being emotionally vulnerable. People with NPD are less likely to engage in prosocial behavior. They can still act in selfless ways to improve others' perceptions of them, advance their social status, or if explicitly told to. Despite these characteristics, they are more likely to overestimate their capacity for empathy.
It is common for people with NPD to have difficult relationships. Narcissists may disrespect others' boundaries or idealize and devalue them. They commonly keep people emotionally distant, and project, deny, or split. Narcissists respond with anger and hostility towards rejection, and can degrade, insult, or blame others who disagree with them.
They generally lack self-awareness and have a difficult time understanding their own traits and narcissistic tendencies, either because they believe NPD characteristics do not apply to them, or because they refuse to accept or endorse negative characteristics in an attempt to maintain a positive self-image. Narcissists can have difficulty seeing multiple perspectives on issues and might engage in black-and-white thinking. Despite this, people with NPD often feel that they are skilled at accurately assessing others' feelings.
Diagnosis
The DSM-5 indicates that: "Many highly successful individuals display personality traits that might be considered narcissistic. Only when these traits are inflexible, maladaptive, and persisting, and cause significant functional impairment or subjective distress, do they constitute narcissistic personality disorder." Given the high-function sociability associated with narcissism, some people with NPD might not view such a diagnosis as a functional impairment to their lives. Although overconfidence tends to make people with NPD very ambitious, such a mindset does not necessarily lead to professional high achievement and success, because they refuse to take risks, in order to avoid failure or the appearance of failure. Moreover, the psychological inability to tolerate disagreement, contradiction, and criticism makes it difficult for persons with NPD to work cooperatively or to maintain long-term relationships.
DSM-5
The Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) defines NPD as the presence of at least five of the following nine criteria:
A grandiose sense of self-importance (exaggerates achievements and talents, expects to be recognized as superior without commensurate achievements)
Preoccupation with fantasies of unlimited success, power, brilliance, beauty, or ideal love
Believing that they are "special" and unique and can only be understood by, or should associate with, other special or high-status people (or institutions)
Requiring excessive admiration
A sense of entitlement (unreasonable expectations of especially favorable treatment or automatic compliance with their expectations)
Being interpersonally exploitative (taking advantage of others to achieve their own ends)
Lacking empathy (unwilling to recognize or identify with the feelings and needs of others)
Often being envious of others or believing that others are envious of them
Showing arrogant, haughty behaviors or attitudes
Within the DSM-5, NPD is a cluster B personality disorder. Individuals with cluster B personality disorders often appear dramatic, emotional, or erratic. Narcissistic personality disorder is a mental disorder characterized by a life-long pattern of exaggerated feelings of self-importance, an excessive craving for admiration, and a diminished ability to empathize with others' feelings.
A diagnosis of NPD, like other personality disorders, is made by a qualified healthcare professional in a clinical interview. In the narcissistic personality disorder, there is a fragile sense of self that becomes a view of oneself as exceptional.
Narcissistic personality disorder usually develops either in youth or in early adulthood. True symptoms of NPD are pervasive, are apparent in varied social situations, and are rigidly consistent over time. Severe symptoms of NPD can significantly impair the person's mental capabilities to develop meaningful human relationships, such as friendship, kinship, and marriage. Generally, the symptoms of NPD also impair the person's psychological abilities to function socially, either at work or at school, or within important societal settings. The DSM-5 indicates that, in order to qualify as symptomatic of NPD, the person's manifested personality traits must substantially differ from social norms.
ICD-11 and ICD-10
In the International Statistical Classification of Diseases and Related Health Problems, 11th Edition (ICD-11) of the World Health Organization (WHO), all personality disorders are diagnosed under a single title, "personality disorder". The criteria for diagnosis are mainly concerned with assessing dysfunction, distress, and maladaptive behavior. Once a diagnosis has been made, the clinician can draw upon five trait domains to describe the particular causes of dysfunction, as these have major implications for potential treatments. NPD, as it is currently conceptualised, would correspond more or less entirely to the ICD-11 trait of Dissociality, which includes self-centredness (grandiosity, attention-seeking, entitlement and egocentricity) and lack of empathy (callousness, ruthlessness, manipulativeness, interpersonal exploitativeness, and hostility).
In the previous edition, the ICD-10, narcissistic personality disorder (NPD) is listed under the category of "other specific personality disorders", meaning that cases described as NPD in the DSM only needed to meet a general set of personality disorder criteria under the ICD-10.
Differential diagnosis
The occurrence of narcissistic personality disorder presents a high rate of comorbidity with other mental disorders. People with a fragile variant of NPD (see Subtypes) are prone to bouts of psychological depression, often to the degree that meets the clinical criteria for a co-occurring depressive disorder. NPD is associated with the occurrence of bipolar disorder and substance use disorders, especially cocaine use disorder. NPD may also be comorbid or differentiated with the occurrence of other mental disorders, including histrionic personality disorder, borderline personality disorder, antisocial personality disorder, or paranoid personality disorder. NPD should also be differentiated from mania and hypomania as these cases can also present with grandiosity, but present with different levels of functional impairment. It is common for children and adolescents to display personality traits that resemble NPD, but such occurrences are usually transient, and register below the clinical criteria for a formal diagnosis of NPD.
Problematic social media use
Subtypes
Although the DSM-5 diagnostic criteria for NPD has been viewed as homogeneous, there are a variety of subtypes used for classification of NPD. There is poor consensus on how many subtypes exist, but there is broad acceptance that there are at least two: grandiose or overt narcissism, and vulnerable or covert narcissism. However, none of the subtypes of NPD are recognized in the DSM-5 or in the ICD-11.
Empirically verified subtypes
Some research has indicated the existence of three subtypes of NPD, which can be distinguished by symptom criteria, comorbidity and other clinical criteria. These are as follows:
Grandiose/overt: this group exhibits grandiosity, entitlement, interpersonal exploitativeness and manipulation, pursuit of power and control, lack of empathy and remorse, and marked irritability and hostility. This group was noted for high levels of comorbid antisocial and paranoid personality disorders, substance abuse, externalizing, unemployment and a greater likelihood of violence. Of note, Russ et al. observed that this group "do not appear to suffer from underlying feelings of inadequacy or to be prone to negative affect states other than anger", an observation corroborated by recent research which found this variant to show strong inverse associations with depressive, anxious-avoidant, and dependent/victimised features.
Vulnerable/covert: this variant is defined by feelings of shame, envy, resentment, and inferiority (which is occasionally "masked" by arrogance), entitlement, a belief that one is misunderstood or unappreciated, and excessive reactivity to slights or criticism. This variant is associated with elevated levels of neuroticism, psychological distress, depression, and anxiety. In fact, recent research suggests that vulnerable narcissism is mostly the product of dysfunctional levels of neuroticism. Vulnerable narcissism is sometimes comorbid with diagnoses of avoidant, borderline and dependent personality disorders.
High-functioning/exhibitionistic: A third subtype for classifying people with NPD, initially theorized by psychiatrist Glen Gabbard, is termed high functioning or exhibitionistic. This variant has been described as "high functioning narcissists [who] were grandiose, competitive, attention-seeking, and sexually provocative; they tended to show adaptive functioning and utilize their narcissistic traits to succeed." This group has been found to have relatively few psychological issues and high rates of obsessive-compulsive personality disorder, with excessive perfectionism posited as a potential cause for their impairment.
Others
Oblivious/hypervigilant: Glen Gabbard described two subtypes of NPD in 1989, later regarded as equivalent to the grandiose and vulnerable subtypes. The first was the "oblivious" subtype of narcissist, equivalent to the grandiose subtype. This group was described as being grandiose, arrogant and thick-skinned, while also exhibiting personality traits of helplessness and emotional emptiness, low self-esteem and shame. In people with NPD these traits were observed to be expressed as socially avoidant behavior in situations where self-presentation is difficult or impossible, leading to withdrawal from situations where social approval is not given.
The second subtype Gabbard described was termed "hypervigilant", equivalent to the vulnerable subtype. People with this subtype of NPD were described as having easily hurt feelings, an oversensitive temperament, and persistent feelings of shame.
Communal narcissism: A fourth type is the communal narcissist. Communal narcissism is a form of narcissism that occurs in group settings. It is characterized by an inflated sense of importance and a need for admiration from others. In relation to the grandiose narcissist, a communal narcissist is arrogant and self-motivating, and shares the sense of entitlement and grandiosity. However, the communal narcissist seeks power and admiration in the communal realm. They see themselves as altruistic, saintly, caring, helpful, and warm. Individuals who display communal narcissism often seek out positions of power and influence within their groups.
Millon's subtypes
In Disorders of Personality: DSM-IV and Beyond (1996), Theodore Millon suggested five subtypes of NPD, although he did not identify specific treatments for each subtype.
Masterson's subtypes (exhibitionist and closet)
In 1993, James F. Masterson proposed two subtypes for pathological narcissism, exhibitionist and closet. Both fail to adequately develop an age- and phase- appropriate self because of defects in the quality of psychological nurturing provided, usually by the mother. A person with exhibitionist narcissism is similar to NPD described in the DSM-IV and differs from closet narcissism in several ways. A person with closet narcissism is more likely to be described as having a deflated, inadequate self-perception and greater awareness of emptiness within. A person with exhibitionist narcissism would be described as having an inflated, grandiose self-perception with little or no conscious awareness of feelings of emptiness. Such a person would assume that their condition was normal and that others were just like them. A person with closet narcissism is described to seek constant approval from others and appears similar to those with borderline personality disorder in the need to please others. A person with exhibitionist narcissism seeks perfect admiration all the time from others.
Malignant narcissism
Malignant narcissism, a term first coined in Erich Fromm's 1964 book The Heart of Man: Its Genius for Good and Evil, is a syndrome consisting of a combination of NPD, antisocial personality disorder, and paranoid traits. A person with malignant narcissism was described as deriving higher levels of psychological gratification from accomplishments over time, suspected to worsen the disorder. Because a person with malignant narcissism becomes more involved in psychological gratification, it was suspected to be a risk factor for developing antisocial, paranoid, and schizoid personality disorders. The term malignant is added to the term narcissist to indicate that individuals with this disorder have a severe form of narcissistic disorder that is characterized also by features of paranoia, psychopathy (anti-social behaviors), aggression, and sadism.
Historical demarcation of grandiose and vulnerable types
Over the years, many clinicians and theorists have described two variants of NPD akin to the grandiose and vulnerable expressions of trait narcissism.
Assessment and screening
Narcissistic Personality Inventory
Risk factors for NPD and its grandiose/overt and vulnerable/covert subtypes are measured using the Narcissistic Personality Inventory (NPI), an assessment tool originally developed in 1979 which has undergone multiple revisions, with new versions in 1984, 2006 and 2014. It principally captures grandiose narcissism, but also seems to capture elements of vulnerability. A popular three-factor model holds that grandiose narcissism is assessed via the Leadership/Authority and Grandiose/Exhibitionism facets, while a combination of grandiose and vulnerable traits is indexed by the Entitlement/Exploitativeness facet.
Pathological Narcissism Inventory
The Pathological Narcissism Inventory (PNI) was designed to measure fluctuations in grandiose and vulnerable narcissistic states, similar to what is ostensibly observed by some clinicians (though empirical demonstration of this phenomenon is lacking). While having both "grandiosity" and vulnerability scales, empirically both seem to primarily capture vulnerable narcissism.
The PNI scales show significant associations with parasuicidal behavior, suicide attempts, homicidal ideation, and several aspects of psychotherapy utilization.
Five-Factor Narcissism Inventory
In 2013, the Five-Factor Narcissism Inventory (FFNI) was defined as a comprehensive assay of grandiose and vulnerable expressions of trait narcissism. The scale measures 11 traits of grandiose narcissism and 4 traits of vulnerable narcissism, both of which correlate with clinical ratings of NPD (with grandiose features of arrogance, grandiose fantasies, manipulativeness, entitlement and exploitativeness showing stronger relations). Later analysis revealed that the FFNI actually measures three factors:
Agentic Extraversion: an exaggerated sense of self-importance, grandiose fantasies, striving for greatness and acclaim, social dominance and authoritativeness, and exhibitionistic, charming interpersonal conduct.
Self-Centred Antagonism: disdain for others, psychological entitlement, interpersonally exploitative and manipulative behaviour, lack of empathy, anger in response to criticism or rebuke, suspiciousness, and thrill-seeking.
Narcissistic Neuroticism: shame-proneness, oversensitivity and negative emotionality to criticism and rebuke, and excessive need for admiration to maintain self-esteem.
Grandiose narcissism is a combination of agency and antagonism, and vulnerability is a combination of antagonism and neuroticism. The three factors show differential associations with clinically important variables. Agentic traits are associated with high self-esteem, a positive view of others and the future, autonomous and authentic living, commitment to personal growth, a sense of purpose in life, and life satisfaction. Neurotic traits show precisely the opposite correlation with all of these variables, while antagonistic traits show more complex associations: they are associated with a negative view of others (but not necessarily of the self), a sense of alienation from one's 'true self', disinterest in personal growth, negative relationships with others, and all forms of aggression.
Millon Clinical Multiaxial Inventory
The Millon Clinical Multiaxial Inventory (MCMI) is another diagnostic test developed by Theodore Millon. The MCMI includes a scale for narcissism. The NPI and MCMI have been found to be well correlated. Whereas the MCMI measures narcissistic personality disorder (NPD), the NPI measures narcissism as it occurs in the general population; the MCMI is a screening tool. In other words, the NPI measures "normal" narcissism; i.e., most people who score very high on the NPI do not have NPD. Indeed, the NPI does not capture any sort of narcissism taxon as would be expected if it measured NPD.
A 2020 study found that females scored significantly higher on vulnerable narcissism than males, but no gender differences were found for grandiose narcissism.
Causes
The cause of narcissistic personality disorder (NPD) is unclear, although there is evidence for a strong biological or genetic underpinning. It is unclear if or how much a person's upbringing contributes to the development of NPD, although many speculative theories have been proposed.
Evidence to support social factors in the development of NPD is limited. Some studies have found NPD correlates with permissive and overindulgent parenting in childhood, while others have found correlations with harsh discipline, neglect or abuse. Findings have been inconsistent, and scientists do not know if these correlations are causal, as these studies do not control for genetic confounding.
This problem of genetic confounding was discussed by psychologist Svenn Torgersen in a 2009 review.
Twin studies allow scientists to assess the influence of genes and environment, in particular how much of the variation in a trait is attributable to the "shared environment" (influences shared by twins, such as parents and upbringing) or the "non-shared environment" (measurement error, noise, differing illnesses between twins, randomness in brain growth, and social or non-social experiences that only one twin experienced). According to a 2018 review, twin studies of NPD have found little or no influence from the shared environment and a major contribution from genes and the non-shared environment.
According to neurogeneticist Kevin Mitchell, a lack of influence from the shared environment indicates that the non-shared environmental influence may be largely non-social, perhaps reflecting innate processes such as randomness in brain growth.
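As a back-of-the-envelope illustration of how such variance decompositions are obtained, the classical ACE model with Falconer's approximation estimates the additive genetic (a²), shared-environment (c²) and non-shared-environment (e²) components from the trait correlations of monozygotic and dizygotic twin pairs; the correlations used below are hypothetical, not estimates from the NPD literature:

\[
a^2 \approx 2\,(r_{MZ} - r_{DZ}), \qquad
c^2 \approx 2\,r_{DZ} - r_{MZ}, \qquad
e^2 \approx 1 - r_{MZ}
\]

With illustrative correlations of \(r_{MZ} = 0.45\) and \(r_{DZ} = 0.20\), these formulas would give \(a^2 \approx 0.50\), \(c^2 \approx -0.05\) (effectively zero shared-environment influence) and \(e^2 \approx 0.55\), the last term bundling together non-shared experiences, stochastic developmental factors and measurement error, which is the pattern described above.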
Neuroscientists have also studied the brains of people with NPD using structural imaging technology. A 2021 review concluded the most consistent finding among NPD patients is lowered gray matter volume in the medial prefrontal cortex. Studies of the occurrence of narcissistic personality disorder identified structural abnormalities in the brains of people with NPD, specifically, a lesser volume of gray matter in the left, anterior insular cortex. The results of a 2015 study associated the condition of NPD with a reduced volume of gray matter in the prefrontal cortex. The regions of the brain identified and studied – the insular cortex and the prefrontal cortex – are associated with the human emotions of empathy and compassion, and with the mental functions of cognition and emotional regulation. The neurological findings of the studies suggest that NPD may be related to a compromised capacity for emotional empathy and emotional regulation.
Evolutionary models of NPD have also been proposed. According to psychologist Marco Del Giudice, cluster B traits, including NPD, predict increased mating success and fertility. NPD could potentially be an adaptive evolutionary phenomenon, though a risky one that can sometimes result in social rejection and failure to reproduce. Another proposal is that NPD may result from an excess of traits which are adaptive only in moderate amounts (leadership success increases with moderate degrees of narcissism, but declines at the high end).
Research on NPD is limited, because patients are hard to recruit for study. The cause of narcissistic personality disorder requires further research.
Management
Treatment for NPD is primarily psychotherapeutic; there is no clear evidence that psychopharmacological treatment is effective for NPD, although it can prove useful for treating comorbid disorders. Psychotherapeutic treatment falls into two general categories: psychoanalytic/psychodynamic and cognitive behavioral. Psychoanalytic therapies include schema therapy, transference-focused psychotherapy, mentalization-based treatment and metacognitive psychotherapy. Cognitive behavioral therapies include cognitive behavioral therapy and dialectical behavior therapy. Formats also include group therapy and couples therapy. The specific choice of treatment varies based on individual presentations.
Management of narcissistic personality disorder has not been well studied, however many treatments tailored to NPD exist. Therapy is complicated by the lack of treatment-seeking behavior in people with NPD, despite mental distress. Additionally, people with narcissistic personality disorders have decreased life satisfaction and lower qualities of life, irrespective of diagnosis. People with NPD often present with comorbid mental disorders, complicating diagnosis and treatment. NPD is rarely the primary reason for which people seek mental health treatment. When people with NPD enter treatment (psychologic or psychiatric), they often express seeking relief from a comorbid mental disorder, including major depressive disorder, a substance use disorder (drug addiction), or bipolar disorder.
Prognosis
No treatment guidelines exist for NPD, and no empirical studies have been conducted on specific NPD groups to determine the efficacy of psychotherapies and pharmacology.
Though there is no known cure for NPD, some measures can lessen its symptoms. Healthcare providers commonly prescribe antidepressants to treat co-occurring depression, mood stabilizers to reduce mood swings, and antipsychotic drugs to reduce the prevalence of psychotic episodes.
The presence of NPD in patients undergoing psychotherapy for the treatment of other mental disorders is associated with slower treatment progress and higher dropout rates. In psychotherapy for NPD itself, the goals often include examining traits and behaviors that negatively affect the person's life, identifying ways these behaviors cause distress to the person and others, exploring early experiences that contributed to narcissistic defenses, developing new coping mechanisms to replace those defenses, helping the person see themselves and others in more realistic and nuanced ways rather than as wholly good or wholly bad, identifying and practicing more helpful patterns of behavior, developing interpersonal skills, and learning to consider the needs and feelings of others.
Epidemiology
Overall prevalence is estimated to range from 0.8% to 6.2%. In 2008, under the DSM-IV, lifetime prevalence of NPD was estimated to be 6.2%, with 7.7% for men and 4.8% for women, a gender difference confirmed by a 2015 study. In clinical settings, prevalence estimates range from 1% to 15%. Narcissistic personality disorder presents a high rate of comorbidity with other mental disorders.
History
The term "narcissism" comes from the first century (written in the year 8 AD) narrative poem the Metamorphoses by the Roman poet Ovid. Book III of the Metamorphoses features a myth about two main characters, Narcissus and Echo. Narcissus is a handsome young man who spurns the advances of many potential lovers. When Narcissus rejects Echo, a nymph cursed to only echo the sounds that others made, the goddess Nemesis punishes him by making him fall in love with his own reflection in a pool of water. When Narcissus discovers that the object of his love cannot love him back, he slowly pines away and dies.
The concept of excessive selfishness has been recognized throughout history. In ancient Greece, the concept was understood as hubris. It is only since the late 1800s that narcissism has been defined in psychological terms:
Havelock Ellis (1898) was the first psychologist to use the term when he linked the myth to the condition in one of his patients.
Sigmund Freud (1905–1953) used the term "narcissistic libido" in his Three Essays on the Theory of Sexuality.
Ernest Jones (1913/1951) was the first to construe extreme narcissism as a character flaw.
Robert Waelder (1925) published the first case study of narcissism. His patient was a successful scientist with an attitude of superiority, an obsession with fostering self-respect, and a lack of normal feelings of guilt. The patient was aloof and independent from others and had an inability to empathize with others' situations, and was selfish sexually. Waelder's patient was also overly logical and analytical and valued abstract intellectual thought (thinking for thinking's sake) over the practical application of scientific knowledge.
Narcissistic personality was first described by the psychoanalyst Robert Waelder in 1925. The term narcissistic personality disorder (NPD) was coined by Heinz Kohut in 1968. Waelder's initial study has been influential in the way narcissism and the clinical disorder narcissistic personality disorder are defined today.
Freudianism and psychoanalysis
Much of the early history of narcissism and NPD originates from psychoanalysis. Regarding the adult neurotic's sense of omnipotence, Sigmund Freud said that "this belief is a frank acknowledgement of a relic of the old megalomania of infancy"; and concluded that: "we can detect an element of megalomania in most other forms of paranoic disorder. We are justified in assuming that this megalomania is essentially of an infantile nature, and that, as development proceeds, it is sacrificed to social considerations."
Narcissistic injury and narcissistic scar are terms used by Freud in the 1920s. Narcissistic wound and narcissistic blow are other, almost interchangeable, terms. When wounded in the ego, either by a real or a perceived criticism, a narcissistic person's displays of anger can be disproportionate to the nature of the criticism suffered; but typically, the actions and responses of the NPD person are deliberate and calculated. Despite occasional flare-ups of personal insecurity, the inflated self-concept of the NPD person is primarily stable.
In The Psychology of Gambling (1957), Edmund Bergler considered megalomania to be a normal occurrence in the psychology of a child, a condition later reactivated in adult life, if the individual takes up gambling. In The Psychoanalytic Theory of Neurosis (1946), Otto Fenichel said that people who, in their later lives, respond with denial to their own narcissistic injury usually undergo a similar regression to the megalomania of childhood.
Narcissistic supply
Narcissistic supply was a concept introduced by Otto Fenichel in 1938, to describe a type of admiration, interpersonal support, or sustenance drawn by an individual from his or her environment and essential to their self-esteem. The term is typically used in a negative sense, describing a pathological or excessive need for attention or admiration that does not take into account the feelings, opinions, or preferences of other people.
Narcissistic rage
The term narcissistic rage was a concept introduced by Heinz Kohut in 1972. Narcissistic rage was theorised as a reaction to a perceived threat to a narcissist's self-esteem or self-worth. Narcissistic rage occurs on a continuum from aloofness, to expressions of mild irritation or annoyance, to serious outbursts, including violent attacks.
Narcissistic rage reactions are not necessarily limited to narcissistic personality disorder. They may also be seen in catatonic, paranoid-delusional, and depressive episodes. It was later suggested that narcissistic people have two layers of rage: the first layer being constant anger directed towards someone else, and the second being self-deprecating.
Object relations
In the second half of the 20th century, in contrast to Freud's perspective of megalomania as an obstacle to psychoanalysis, in the US and UK Kleinian psychologists used the object relations theory to re-evaluate megalomania as a defence mechanism. This Kleinian therapeutic approach built upon Heinz Kohut's view of narcissistic megalomania as an aspect of normal mental development, by contrast with Otto Kernberg's consideration of such grandiosity as a pathological distortion of normal psychological development.
To the extent that people are pathologically narcissistic, the person with NPD can be a self-absorbed individual who passes blame by psychological projection and is intolerant of contradictory views and opinions; is apathetic towards the emotional, mental, and psychological needs of other people; and is indifferent to the negative effects of their behaviors, whilst insisting that people should see them as an ideal person. The merging of the terms "inflated self-concept" and "actual self" is evident in later research on the grandiosity component of narcissistic personality disorder, along with incorporating the defence mechanisms of idealization and devaluation and of denial.
Comparison to other personality disorders
NPD mainly overlaps with Borderline Personality Disorder (BPD), Antisocial Personality Disorder (ASPD), and Histrionic Personality Disorder (HPD) as all are "Cluster B" disorders. Cluster B personality disorders are characterized by impulsivity, an inability to regulate emotions, and a difficulty maintaining social relationships. Because these personality disorders share symptoms and behaviors with NPD, many are also comorbid with NPD.
Borderline Personality Disorder is characterized by instability in emotions, relationships, and self-image. BPD is also highly associated with traits such as grandiosity, lack of empathy, and praise-seeking behaviors, which are characteristic of NPD. Research has shown that BPD does, in fact, have high comorbidity rates with NPD.
Antisocial Personality Disorder is similarly characterized by impulsivity and a lack of empathy, but also by a disregard for the rights of others and a tendency to manipulate others. Additionally, both antisociality and narcissism are heavily correlated with psychopathy, further suggesting an overlap between the two.
Controversy
The extent of controversy about narcissism was on display when the committee on personality disorders for the 5th Edition (2013) of the Diagnostic and Statistical Manual of Mental Disorders recommended the removal of Narcissistic Personality from the manual. A contentious three-year debate unfolded in the clinical community with one of the sharpest critics being John Gunderson, who led the DSM personality disorders committee for the 4th edition of the manual.
The American Psychiatric Association's (APA) formulation, description, and definition of narcissistic personality disorder, as published in the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision (DSM-IV-TR, 2000), was criticised by clinicians as inadequately describing the range and complexity of the disorder, and as excessively focused upon "the narcissistic individual's external, symptomatic, or social interpersonal patterns – at the expense of ... internal complexity and individual suffering", which reduced the clinical utility of the NPD definition in the DSM-IV-TR.
In revising the diagnostic criteria for personality disorders, the work group for the list of "Personality and Personality Disorders" proposed the elimination of narcissistic personality disorder (NPD) as a distinct entry in the DSM-5, and thus replaced a categorical approach to NPD with a dimensional approach, which is based upon the severity of the dysfunctional-personality-trait domains. Clinicians critical of the DSM-5 revision characterized the new diagnostic system as an "unwieldy conglomeration of disparate models that cannot happily coexist", which is of limited usefulness in clinical practice. Despite the reintroduction of the NPD entry, the APA's re-formulation, re-description, and re-definition of NPD, towards a dimensional view based upon personality traits, remains in the list of personality disorders of the DSM-5.
A 2011 study concluded that narcissism should be conceived as personality dimensions pertinent to the full range of personality disorders, rather than as a distinct diagnostic category. In a 2012 literature review about NPD, the researchers concluded that narcissistic personality disorder "shows nosological inconsistency, and that its consideration as a trait domain needed further research would be strongly beneficial to the field." In a 2018 latent structure analysis, results suggested that the DSM-5 NPD criteria fail to distinguish some aspects of narcissism relevant to diagnosis of NPD and subclinical narcissism.
In popular culture
Suzanne Stone-Maretto, Nicole Kidman's character in the film To Die For (1995), wants to appear on television at all costs, even if this involves murdering her husband. A psychiatric assessment of her character noted that she "was seen as a prototypical narcissistic person by the raters: on average, she satisfied 8 of 9 criteria for narcissistic personality disorder... had she been evaluated for personality disorders, she would receive a diagnosis of narcissistic personality disorder".
Jay Gatsby, the eponymous character of F. Scott Fitzgerald's novel The Great Gatsby (1925), "an archetype of self-made American men seeking to join high society", has been described by English professor Giles Mitchell as a "pathological narcissist" for whom the "ego-ideal" has become "inflated and destructive" and whose "grandiose lies, poor sense of reality, sense of entitlement, and exploitive treatment of others" conspire toward his own demise.
| Biology and health sciences | Mental disorders | Health |
186919 | https://en.wikipedia.org/wiki/Stirling%20engine | Stirling engine | A Stirling engine is a heat engine that is operated by the cyclic expansion and contraction of air or other gas (the working fluid) by exposing it to different temperatures, resulting in a net conversion of heat energy to mechanical work.
More specifically, the Stirling engine is a closed-cycle regenerative heat engine, with a permanent gaseous working fluid. Closed-cycle, in this context, means a thermodynamic system in which the working fluid is permanently contained within the system. Regenerative describes the use of a specific type of internal heat exchanger and thermal store, known as the regenerator. Strictly speaking, the inclusion of the regenerator is what differentiates a Stirling engine from other closed-cycle hot air engines.
In the Stirling engine, a working fluid (e.g. air) is heated by energy supplied from outside the engine's interior space (cylinder). As the fluid expands, mechanical work is extracted by a piston, which is coupled to a displacer. The displacer moves the working fluid to a different location within the engine, where it is cooled, which creates a partial vacuum at the working cylinder, and more mechanical work is extracted. The displacer moves the cooled fluid back to the hot part of the engine, and the cycle continues.
A unique feature is the regenerator, which acts as a temporary heat store by retaining heat within the machine rather than dumping it into the heat sink, thereby increasing its efficiency.
The heat is supplied from the outside, so the hot area of the engine can be warmed with any external heat source. Similarly, the cooler part of the engine can be maintained by an external heat sink, such as running water or air flow. The gas is permanently retained in the engine, allowing a gas with the most-suitable properties to be used, such as helium or hydrogen. There are no intake or exhaust gas flows, so the machine is practically silent.
The machine is reversible so that if the shaft is turned by an external power source a temperature difference will develop across the machine; in this way it acts as a heat pump.
The Stirling engine was invented by Scotsman Robert Stirling in 1816 as an industrial prime mover to rival the steam engine, and its practical use was largely confined to low-power domestic applications for over a century.
Contemporary investment in renewable energy, especially solar energy, has given rise to its application within concentrated solar power and as a heat pump.
History
Early hot air engines
Robert Stirling is considered one of the fathers of hot air engines, along with earlier innovators such as Guillaume Amontons, who built the first working hot air engine in 1699.
Amontons was later followed by Sir George Cayley. In this type of engine the fire is enclosed and fed by air pumped in beneath the grate in sufficient quantity to maintain combustion, while by far the largest portion of the air enters above the fire to be heated and expanded. The whole, together with the products of combustion, then acts on the piston and passes through the working cylinder. Because the operation is one of simple mixture only, no metal heating surface is required: the air to be heated is brought into immediate contact with the fire.
Stirling designed his first air engine in 1816. Its principle differs from that of Sir George Cayley's engine (1807), in which the air is forced through the furnace and exhausted; in Stirling's engine the air works in a closed circuit, and it was to this closed circuit that the inventor devoted most of his attention.
An engine built in 1818 for pumping water at an Ayrshire quarry continued to work for some time until a careless attendant allowed the heater to overheat. This experiment proved to the inventor that, owing to the low working pressure obtainable, the engine could only be adapted to low power for which there was, at that time, no demand.
Stirling's 1816 patent (No. 4081) also covered an "economiser", the predecessor of the regenerator. The patent describes the economiser and several applications in which it could be used, one of which was a new arrangement for a hot air engine.
With his brother James, Stirling patented a second hot air engine in 1827. They inverted the design so that the hot ends of the displacers were underneath the machinery, and they added a compressed air pump so the air within could be pressurised to around .
The Stirling brothers were followed shortly after (1828) by Parkinson & Crossley and Arnott in 1829.
These precursors, together with Ericsson, brought hot air engine technology and its enormous advantages over the steam engine to the world. Each came with his own specific technology, and although the Stirling engine and the Parkinson & Crossley engines were quite similar, Robert Stirling distinguished himself by inventing the regenerator.
Parkinson and Crossley introduced the principle of using air of greater density than that of the atmosphere and so obtained an engine of greater power in the same compass. James Stirling followed this same idea when he built the famous Dundee engine.
The Stirling patent of 1827 was the base of the Stirling third patent of 1840. The changes from the 1827 patent were minor but essential, and this third patent led to the Dundee engine.
James Stirling presented his engine to the Institution of Civil Engineers in 1845. The first engine of this kind which, after various modifications, was efficiently constructed and heated had a cylinder of in diameter, with a stroke of , and made 40 strokes or revolutions in a minute (40 rpm). This engine moved all the machinery at the Dundee Foundry Company's works for eight or ten months, and had previously been found capable of raising 320,000 kg (700,000 lbs) by 60 cm (2 ft) in a minute, a power of approximately .
Finding this power insufficient for their works, the Dundee Foundry Company erected the second engine with a cylinder of in diameter and a stroke of , making 28 strokes in a minute. When this engine had been in continuous operation for over two years, it had not only performed the work of the foundry in the most satisfactory manner but had been tested (by a friction brake on a third mover) to the extent of lifting nearly , approximately .
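As a rough sanity check on the lifting figures quoted above for the first Dundee engine (and not a reconstruction of the power value omitted in the text), the mechanical power implied by raising that load at that rate can be estimated directly; the short Python sketch below assumes standard gravity.

```python
# Rough estimate of the lifting power implied by the figures above:
# 320,000 kg raised 60 cm each minute. Standard gravity is assumed;
# this is an illustrative check, not the figure quoted by the source.
mass_kg = 320_000     # load raised per minute
height_m = 0.60       # lift height per minute
time_s = 60.0
g = 9.81              # m/s^2

power_w = mass_kg * g * height_m / time_s
print(f"{power_w / 1000:.1f} kW (~{power_w / 745.7:.0f} hp)")
# -> roughly 31 kW, i.e. on the order of forty horsepower
```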
Invention and early development
The Stirling engine (or Stirling's air engine as it was known at the time) was invented and patented in 1816. It followed earlier attempts at making an air engine but was probably the first put to practical use when, in 1818, an engine built by Stirling was employed pumping water in a quarry. The main subject of Stirling's original patent was a heat exchanger, which he called an "economiser" for its enhancement of fuel economy in a variety of applications. The patent also described in detail the employment of one form of the economiser in his unique closed-cycle air engine design in which application it is now generally known as a "regenerator". Subsequent development by Robert Stirling and his brother James, an engineer, resulted in patents for various improved configurations of the original engine including pressurization, which by 1843, had sufficiently increased power output to drive all the machinery at a Dundee iron foundry.
A paper presented by James Stirling in June 1845 to the Institution of Civil Engineers stated that his aims were not only to save fuel but also to create a safer alternative to the steam engines of the time, whose boilers frequently exploded, causing many injuries and fatalities. This has, however, been disputed.
The need for Stirling engines to run at very high temperatures to maximize power and efficiency exposed limitations in the materials of the day, and the few engines that were built in those early years suffered unacceptably frequent failures (albeit with far less disastrous consequences than boiler explosions). For example, the Dundee foundry engine was replaced by a steam engine after three hot cylinder failures in four years.
Later 19th century
Subsequent to the replacement of the Dundee foundry engine there is no record of the Stirling brothers having any further involvement with air engine development, and the Stirling engine never again competed with steam as an industrial scale power source. (Steam boilers were becoming safer, e.g. the Hartford Steam Boiler and steam engines more efficient, thus presenting less of a target for rival prime movers). However, beginning about 1860, smaller engines of the Stirling/hot air type were produced in substantial numbers for applications in which reliable sources of low to medium power were required, such as pumping air for church organs or raising water. These smaller engines generally operated at lower temperatures so as not to tax available materials, and so were relatively inefficient. Their selling point was that unlike steam engines, they could be operated safely by anybody capable of managing a fire. The 1906 Rider-Ericsson Engine Co. catalog claimed that "any gardener or ordinary domestic can operate these engines and no licensed or experienced engineer is required". Several types remained in production beyond the end of the century, but apart from a few minor mechanical improvements the design of the Stirling engine in general stagnated during this period.
20th-century revival
During the early part of the 20th century, the role of the Stirling engine as a "domestic motor" was gradually taken over by electric motors and small internal combustion engines. By the late 1930s, it was largely forgotten, only produced for toys and a few small ventilating fans.
Philips MP1002CA
Around that time, Philips was seeking to expand sales of its radios into parts of the world where grid electricity and batteries were not consistently available. Philips' management decided that offering a low-power portable generator would facilitate such sales and asked a group of engineers at the company's research lab in Eindhoven to evaluate alternative ways of achieving this aim. After a systematic comparison of various prime movers, the team decided to go forward with the Stirling engine, citing its quiet operation (both audibly and in terms of radio interference) and ability to run on a variety of heat sources (common lamp oil – "cheap and available everywhere" – was favored). They were also aware that, unlike steam and internal combustion engines, virtually no serious development work had been carried out on the Stirling engine for many years and asserted that modern materials and know-how should enable great improvements.
By 1951, the 180/200 W generator set designated MP1002CA (known as the "Bungalow set") was ready for production and an initial batch of 250 was planned, but soon it became clear that the sets could not be made at a competitive price. Additionally, the advent of transistor radios and their much lower power requirements meant that the original reason for the set was disappearing. Approximately 150 of these sets were eventually produced. Some found their way into university and college engineering departments around the world, giving generations of students a valuable introduction to the Stirling engine; a letter dated March 1961 from Research and Control Instruments Ltd., London WC1, to North Devon Technical College offered "remaining stocks... to institutions such as yourselves... at a special price of £75 net".
In parallel with the Bungalow set, Philips developed experimental Stirling engines for a wide variety of applications and continued to work in the field until the late 1970s, but only achieved commercial success with the "reversed Stirling engine" cryocooler. They filed a large number of patents and amassed a wealth of information which they licensed to other companies and which formed the basis of much of the development work in the modern era.
Submarine use
In 1996, the Swedish navy commissioned three Gotland-class submarines. On the surface, these boats are propelled by marine diesel engines; however, when submerged they use a Stirling-driven generator developed by Swedish shipbuilder Kockums to recharge batteries and provide electrical power for propulsion. A supply of liquid oxygen is carried to support burning of diesel fuel to power the engine. Stirling engines are also fitted to Swedish Södermanland-class submarines, the Archer-class submarines in service in Singapore, and the Japanese Sōryū-class submarines, with the engines license-built by Kawasaki Heavy Industries. In a submarine application, the Stirling engine offers the advantage of being exceptionally quiet when running.
21st-century developments
By the turn of the 21st century, Stirling engines were used in the dish version of Concentrated Solar Power systems. A mirrored dish similar to a very large satellite dish directs and concentrates sunlight onto a thermal receiver, which absorbs and collects the heat and, using a fluid, transfers it into the Stirling engine. The resulting mechanical power is then used to run a generator or alternator to produce electricity.
A Stirling cycle engine can form the core component of micro combined heat and power (CHP) units, as it is more efficient and safer than a comparable steam engine. By 2003, CHP units were being commercially installed in domestic applications, such as home electrical generators.
In 2013, an article was published about scaling laws of free-piston Stirling engines based on six characteristic dimensionless groups.
Name and classification
Robert Stirling patented the first practical example of a closed-cycle hot air engine in 1816, and it was suggested by Fleeming Jenkin as early as 1884 that all such engines should therefore generically be called Stirling engines. This naming proposal found little favour, and the various types on the market continued to be known by the name of their individual designers or manufacturers, e.g., Rider's, Robinson's, or Heinrici's (hot) air engine. In the 1940s, the Philips company was seeking a suitable name for its own version of the 'air engine', which by that time had been tested with working fluids other than air, and decided upon 'Stirling engine' in April 1945. However, nearly thirty years later, Graham Walker still had cause to bemoan the fact that such terms as hot air engine remained interchangeable with Stirling engine, which itself was applied widely and indiscriminately, a situation that continues.
Like the steam engine, the Stirling engine is traditionally classified as an external combustion engine, as all heat transfers to and from the working fluid take place through a solid boundary (heat exchanger) thus isolating the combustion process and any contaminants it may produce from the working parts of the engine. This contrasts with an internal combustion engine, where heat input is by combustion of a fuel within the body of the working fluid. Most of the many possible implementations of the Stirling engine fall into the category of reciprocating piston engine.
Theory
The idealised Stirling cycle consists of four thermodynamic processes acting on the working fluid:
Isothermal expansion. The expansion-space and associated heat exchanger are maintained at a constant high temperature, and the gas undergoes near-isothermal expansion absorbing heat from the hot source.
Constant-volume (known as isovolumetric or isochoric) heat-removal. The gas is passed through the regenerator, where it cools, transferring heat to the regenerator for use in the next cycle.
Isothermal compression. The compression space and associated heat exchanger are maintained at a constant low temperature so the gas undergoes near-isothermal compression rejecting heat to the cold sink
Constant-volume (known as isovolumetric or isochoric) heat-addition. The gas passes back through the regenerator where it recovers much of the heat transferred in process 2, heating up on its way to the expansion space.
For the ideal, maximally efficient Stirling engine operating between two thermal reservoirs, the ratio of the heat rejected to the heat absorbed equals the ratio of the Kelvin temperatures of the cold and hot reservoirs, so the cycle efficiency equals that of the ideal Carnot cycle: one minus the ratio of the cold reservoir temperature to the hot reservoir temperature. In the ideal, maximally efficient Carnot cycle, the isochores (constant-volume processes) are replaced by adiabats (no heat transfer at all). In the ideal Stirling cycle, whatever heat enters the gas during the isochoric leg where the temperature increases is released in full during the isochoric leg where the temperature decreases, so there is no net heat transfer in those legs.
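A minimal numerical sketch of this idealised cycle is given below; the gas charge, volumes, and reservoir temperatures are assumed, illustrative values rather than data for any particular engine.

```python
# Idealised Stirling cycle for a small charge of ideal gas, showing that
# with perfect regeneration the cycle efficiency equals the Carnot limit.
# All numerical values are assumptions chosen only for illustration.
from math import log

R = 8.314                       # J/(mol*K), universal gas constant
n = 0.01                        # mol of working gas (assumed)
T_hot, T_cold = 900.0, 300.0    # K, reservoir temperatures (assumed)
V_min, V_max = 1e-4, 3e-4       # m^3, volume limits (assumed)

# Isothermal expansion at T_hot: heat absorbed equals work done.
Q_in = n * R * T_hot * log(V_max / V_min)
# Isothermal compression at T_cold: heat rejected to the cold sink.
Q_out = n * R * T_cold * log(V_max / V_min)
# The two constant-volume legs cancel: a perfect regenerator returns in
# the heating leg exactly the heat it absorbed in the cooling leg.

W_net = Q_in - Q_out
print(f"net work per cycle: {W_net:.1f} J")
print(f"cycle efficiency:   {W_net / Q_in:.3f}")
print(f"Carnot efficiency:  {1 - T_cold / T_hot:.3f}")  # identical value
```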
The engine is designed so the working gas is generally compressed in the colder portion of the engine and expanded in the hotter portion resulting in a net conversion of heat into work. An internal regenerative heat exchanger increases the Stirling engine's thermal efficiency compared to simpler hot air engines lacking this feature.
The Stirling engine uses the temperature difference between its hot end and cold end to establish a cycle of a fixed mass of gas, heated and expanded, and cooled and compressed, thus converting thermal energy into mechanical energy. The greater the temperature difference between the hot and cold sources, the greater the thermal efficiency. The maximum theoretical efficiency is equivalent to that of the Carnot cycle, but the efficiency of real engines is less than this value because of friction and other losses.
Since the Stirling engine is a closed cycle, it contains a fixed mass of gas called the "working fluid", most commonly air, hydrogen or helium. In normal operation, the engine is sealed and no gas enters or leaves; no valves are required, unlike other types of piston engines. The Stirling engine, like most heat engines, cycles through four main processes: cooling, compression, heating, and expansion. This is accomplished by moving the gas back and forth between hot and cold heat exchangers, often with a regenerator between the heater and cooler. The hot heat exchanger is in thermal contact with an external heat source, such as a fuel burner, and the cold heat exchanger is in thermal contact with an external heat sink, such as air fins. A change in gas temperature causes a corresponding change in gas pressure, while the motion of the piston makes the gas alternately expand and compress.
The gas follows the gas laws, which relate its pressure, temperature, and volume. When the gas is heated, the pressure rises (because it is in a sealed chamber) and this pressure then acts on the power piston to produce a power stroke. When the gas is cooled the pressure drops, and this drop means that the piston needs to do less work to compress the gas on the return stroke. The difference in work between the strokes yields a net positive power output.
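For instance, treating the sealed working fluid as an ideal gas held at roughly constant volume gives the proportionality between pressure and absolute temperature that the paragraph above relies on; the figures in this sketch are assumptions for illustration only.

```python
# Ideal-gas illustration of the pressure swing in a sealed engine:
# at (approximately) constant volume, P/T stays constant. The charge
# pressure and temperatures below are assumed values.
P_cold = 1.0e6    # Pa, charge pressure with the gas at the cold end
T_cold = 320.0    # K
T_hot = 800.0     # K, gas temperature after being moved to the hot end

P_hot = P_cold * T_hot / T_cold   # isochoric heating
print(f"pressure after heating: {P_hot / 1e6:.2f} MPa")   # -> 2.50 MPa
```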
When one side of the piston is open to the atmosphere, the operation is slightly different. As the sealed volume of working gas comes in contact with the hot side, it expands, doing work on both the piston and on the atmosphere. When the working gas contacts the cold side, its pressure drops below atmospheric pressure and the atmosphere pushes on the piston and does work on the gas.
Components
As a consequence of closed-cycle operation, the heat driving a Stirling engine must be transmitted from a heat source to the working fluid by heat exchangers and finally to a heat sink. A Stirling engine system has at least one heat source, one heat sink and up to five heat exchangers. Some types may combine or dispense with some of these.
Heat source
The heat source may be provided by the combustion of a fuel and, since the combustion products do not mix with the working fluid and hence do not come into contact with the internal parts of the engine, a Stirling engine can run on fuels that would damage other engine types' internals, such as landfill gas, which may contain siloxane that could deposit abrasive silicon dioxide in conventional engines.
Other suitable heat sources include concentrated solar energy, geothermal energy, nuclear energy, waste heat and bioenergy. If solar power is used as a heat source, regular solar mirrors and solar dishes may be utilised. The use of Fresnel lenses and mirrors has also been advocated, for example in planetary surface exploration. Solar powered Stirling engines are increasingly popular as they offer an environmentally sound option for producing power while some designs are economically attractive in development projects.
Heat exchangers
Designing Stirling engine heat exchangers is a balance between high heat transfer with low viscous pumping losses, and low dead space (unswept internal volume). Engines that operate at high powers and pressures require that heat exchangers on the hot side be made of alloys that retain considerable strength at high temperatures and that don't corrode or creep.
In small, low power engines the heat exchangers may simply consist of the walls of the respective hot and cold chambers, but where larger powers are required a greater surface area is needed to transfer sufficient heat. Typical implementations are internal and external fins or multiple small bore tubes for the hot side, and a cooler using a liquid (like water) for the cool side.
Regenerator
In a Stirling engine, the regenerator is an internal heat exchanger and temporary heat store placed between the hot and cold spaces such that the working fluid passes through it first in one direction then the other, taking heat from the fluid in one direction, and returning it in the other. It can be as simple as metal mesh or foam, and benefits from high surface area, high heat capacity, low conductivity and low flow friction. Its function is to retain within the system that heat which would otherwise be exchanged with the environment at temperatures intermediate to the maximum and minimum cycle temperatures, thus enabling the thermal efficiency of the cycle (though not of any practical engine) to approach the limiting Carnot efficiency.
The primary effect of regeneration in a Stirling engine is to increase the thermal efficiency by 'recycling' internal heat which would otherwise pass through the engine irreversibly. As a secondary effect, increased thermal efficiency yields a higher power output from a given set of hot and cold end heat exchangers. These usually limit the engine's heat throughput. In practice this additional power may not be fully realized as the additional "dead space" (unswept volume) and pumping loss inherent in practical regenerators reduces the potential efficiency gains from regeneration.
The design challenge for a Stirling engine regenerator is to provide sufficient heat transfer capacity without introducing too much additional internal volume ('dead space') or flow resistance. These inherent design conflicts are one of many factors that limit the efficiency of practical Stirling engines. A typical design is a stack of fine metal wire meshes, with low porosity to reduce dead space, and with the wire axes perpendicular to the gas flow to reduce conduction in that direction and to maximize convective heat transfer.
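The trade-off described above can be made concrete with a rough sizing sketch for a wire-mesh regenerator; the bore, length, porosity, and surface-area density used here are assumed, typical-order values, not figures from any real engine.

```python
# Rough sizing of a wire-mesh regenerator stack, illustrating the
# conflict between heat-transfer surface and dead (unswept) volume.
# All dimensions and material properties below are assumed values.
from math import pi

bore_m = 0.06           # regenerator internal diameter (assumed)
length_m = 0.05         # stack length (assumed)
porosity = 0.70         # void fraction of the mesh stack (assumed)
area_density = 8000.0   # wetted surface per unit volume, m^2/m^3 (assumed)

volume_m3 = pi * (bore_m / 2) ** 2 * length_m
dead_volume_m3 = porosity * volume_m3        # unswept gas volume added
surface_m2 = area_density * volume_m3        # heat-exchange surface gained

print(f"dead volume:  {dead_volume_m3 * 1e6:.0f} cm^3")
print(f"surface area: {surface_m2:.2f} m^2")
# Reducing porosity trims dead volume but raises flow resistance,
# which is exactly the design conflict described in the text.
```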
The regenerator is the key component invented by Robert Stirling, and its presence distinguishes a true Stirling engine from any other closed-cycle hot air engine. Many small 'toy' Stirling engines, particularly low-temperature difference (LTD) types, do not have a distinct regenerator component and might be considered hot air engines; however, a small amount of regeneration is provided by the surface of the displacer itself and the nearby cylinder wall, or similarly the passage connecting the hot and cold cylinders of an alpha configuration engine.
Heat sink
The larger the temperature difference between the hot and cold sections of a Stirling engine, the greater the engine's efficiency. The heat sink is typically the environment the engine operates in, at ambient temperature. In the case of medium- to high-power engines, a radiator is required to transfer the heat from the engine to the ambient air. Marine engines have the advantage of using cool ambient sea, lake, or river water, which is typically cooler than ambient air. In the case of combined heat and power systems, the engine's cooling water is used directly or indirectly for heating purposes, raising efficiency.
Alternatively, heat may be supplied at ambient temperature and the heat sink maintained at a lower temperature by such means as cryogenic fluid (see Liquid nitrogen economy) or iced water.
Displacer
The displacer is a special-purpose piston, used in Beta and Gamma type Stirling engines, to move the working gas back and forth between the hot and cold heat exchangers. Depending on the type of engine design, the displacer may or may not be sealed to the cylinder; i.e., it may be a loose fit within the cylinder, allowing the working gas to pass around it as it moves to occupy the part of the cylinder beyond. The Alpha type engine places high thermal stress on its hot-side piston, which is why a few inventors have started to use a hybrid piston for that side. The hybrid piston has a sealed section, as in a normal Alpha type engine, joined to a displacer section of smaller diameter than the surrounding cylinder. The compression ratio is slightly smaller than in the original Alpha type engines, but the stress on the sealed parts is low.
Configurations
The three major types of Stirling engines are distinguished by the way they move the air between the hot and cold areas:
The alpha configuration has two power pistons, one in a hot cylinder, one in a cold cylinder, and the gas is driven between the two by the pistons; it is typically in a V-formation with the pistons joined at the same point on a crankshaft.
The beta configuration has a single cylinder with a hot end and a cold end, containing a power piston and a 'displacer' that drives the gas between the hot and cold ends. It is typically used with a rhombic drive to achieve the phase difference between the displacer and power pistons, but they can be joined 90 degrees out of phase on a crankshaft.
The gamma configuration has two cylinders: one containing a displacer, with a hot and a cold end, and one for the power piston; they are joined to form a single space, so the cylinders have equal pressure; the pistons are typically in parallel and joined 90 degrees out of phase on a crankshaft.
Alpha
An alpha Stirling contains two power pistons in separate cylinders, one hot and one cold. The hot cylinder is situated inside the high-temperature heat exchanger and the cold cylinder is situated inside the low-temperature heat exchanger. This type of engine has a high power-to-volume ratio but has technical problems because of the usually high temperature of the hot piston and the durability of its seals. In practice, this piston usually carries a large insulating head to move the seals away from the hot zone at the expense of some additional dead space. The crank angle has a major effect on efficiency and the best angle frequently must be found experimentally. An angle of 90° frequently locks.
A four-step description of the process is as follows:
Most of the working gas is in the hot cylinder and has more contact with the hot cylinder's walls. This results in overall heating of the gas. Its pressure increases and the gas expands. Because the hot cylinder is at its maximum volume and the cold cylinder is at mid stroke (partial volume), the volume of the system is increased by expansion into the cold cylinder.
The system is at its maximum volume and more gas has contact with the cold cylinder. This cools the gas, lowering its pressure. Because of flywheel momentum or other piston pairs on the same shaft, the hot cylinder begins an upstroke reducing the volume of the system.
Almost all the gas is now in the cold cylinder and cooling continues. This continues to reduce the pressure of the gas and cause contraction. Because the hot cylinder is at minimum volume and the cold cylinder is at its maximum volume, the volume of the system is further reduced by compression of the cold cylinder inwards.
The system is at its minimum volume and the gas has greater contact with the hot cylinder. The volume of the system increases by expansion of the hot cylinder.
Beta
A beta Stirling has a single power piston arranged within the same cylinder on the same shaft as a displacer piston. The displacer piston is a loose fit and does not extract any power from the expanding gas but only serves to shuttle the working gas between the hot and cold heat exchangers. When the working gas is pushed to the hot end of the cylinder it expands and pushes the power piston. When it is pushed to the cold end of the cylinder it contracts and the momentum of the machine, usually enhanced by a flywheel, pushes the power piston the other way to compress the gas. Unlike the alpha type, the beta type avoids the technical problems of hot moving seals, as the power piston is not in contact with the hot gas.
Power piston (dark grey) has compressed the gas, the displacer piston (light grey) has moved so that most of the gas is adjacent to the hot heat exchanger.
The heated gas increases in pressure and pushes the power piston to the farthest limit of the power stroke.
The displacer piston now moves, shunting the gas to the cold end of the cylinder.
The cooled gas is now compressed by the flywheel momentum. This takes less energy, since its pressure drops when it is cooled.
Other types
Other Stirling configurations continue to interest engineers and inventors.
The rotary Stirling engine seeks to convert power from the Stirling cycle directly into torque, similar to the rotary combustion engine. No practical engine has yet been built but a number of concepts, models and patents have been produced, such as the Quasiturbine engine.
A hybrid between piston and rotary configuration is a double-acting engine. This design rotates the displacers on either side of the power piston. In addition to giving great design variability in the heat transfer area, this layout eliminates all but one external seal on the output shaft and one internal seal on the piston. Also, both sides can be highly pressurized as they balance against each other.
Another alternative is the Fluidyne engine (or Fluidyne heat pump), which uses hydraulic pistons to implement the Stirling cycle. The work produced by a Fluidyne engine goes into pumping the liquid. In its simplest form, the engine contains a working gas, a liquid, and two non-return valves.
The Ringbom engine concept published in 1907 has no rotary mechanism or linkage for the displacer. This is instead driven by a small auxiliary piston, usually a thick displacer rod, with the movement limited by stops.
The engineer Andy Ross invented a two-cylinder Stirling engine (positioned at 0°, not 90°) connected using a special yoke.
The Franchot engine is a double-acting engine invented by Charles-Louis-Félix Franchot in the nineteenth century. In a double-acting engine, the pressure of the working fluid acts on both sides of the piston. One of the simplest forms of a double-acting machine, the Franchot engine consists of two pistons and two cylinders, and acts like two separate alpha machines. In the Franchot engine, each piston acts in two gas phases, which makes more efficient use of the mechanical components than a single-acting alpha machine. However, a disadvantage of this machine is that one connecting rod must have a sliding seal at the hot side of the engine, which is difficult when dealing with high pressures and temperatures.
Free-piston engines
Free-piston Stirling engines include those with liquid pistons and those with diaphragms as pistons. In a free-piston device, energy may be added or removed by an electrical linear alternator, pump or other coaxial device. This avoids the need for a linkage, and reduces the number of moving parts. In some designs, friction and wear are nearly eliminated by the use of non-contact gas bearings or very precise suspension through planar springs.
Four basic steps in the cycle of a free-piston Stirling engine are:
The power piston is pushed outwards by the expanding gas thus doing work. Gravity plays no role in the cycle.
The gas volume in the engine increases and therefore the pressure reduces, which causes a pressure difference across the displacer rod to force the displacer towards the hot end. When the displacer moves, the piston is almost stationary and therefore the gas volume is almost constant. This step results in the constant volume cooling process, which reduces the pressure of the gas.
The reduced pressure now arrests the outward motion of the piston and it begins to accelerate towards the hot end again and by its own inertia, compresses the now cold gas, which is mainly in the cold space.
As the pressure increases, a point is reached where the pressure differential across the displacer rod becomes large enough to begin to push the displacer rod (and therefore also the displacer) towards the piston and thereby collapsing the cold space and transferring the cold, compressed gas towards the hot side in an almost constant volume process. As the gas arrives in the hot side the pressure increases and begins to move the piston outwards to initiate the expansion step as explained in (1).
In the early 1960s, William T. Beale of Ohio University located in Athens, Ohio, invented a free piston version of the Stirling engine to overcome the difficulty of lubricating the crank mechanism. While the invention of the basic free piston Stirling engine is generally attributed to Beale, independent inventions of similar types of engines were made by E.H. Cooke-Yarborough and C. West at the Harwell Laboratories of the UK AERE. G.M. Benson also made important early contributions and patented many novel free-piston configurations.
The first known mention of a Stirling cycle machine using freely moving components is a British patent disclosure in 1876. This machine was envisaged as a refrigerator (i.e., the reversed Stirling cycle). The first consumer product to utilize a free piston Stirling device was a portable refrigerator manufactured by Twinbird Corporation of Japan and offered in the US by Coleman in 2004.
Flat engines
The flat double-acting Stirling engine drives its displacer by exploiting the fact that the areas of the displacer's hot and cold pistons are different.
The drive requires no mechanical transmission. Using diaphragms eliminates friction and the need for lubricants.
When the displacer is in motion, the generator holds the working piston in the limit position, which brings the engine working cycle close to an ideal Stirling cycle. The flat design also increases the ratio of heat-exchanger area to the volume of the machine.
The flat shape of the working cylinder brings the thermal processes of expansion and compression closer to isothermal behaviour.
The disadvantage is the large area of thermal insulation required between the hot and cold spaces.
Thermoacoustic cycle
Thermoacoustic devices are very different from Stirling devices, although the individual path travelled by each working gas molecule does follow a real Stirling cycle. These devices include the thermoacoustic engine and thermoacoustic refrigerator. High-amplitude acoustic standing waves cause compression and expansion analogous to a Stirling power piston, while out-of-phase acoustic travelling waves cause displacement along a temperature gradient, analogous to a Stirling displacer piston. Thus a thermoacoustic device typically does not have a displacer, as found in a beta or gamma Stirling.
Other developments
NASA has considered nuclear-decay heated Stirling Engines for extended missions to the outer solar system. In 2018, NASA and the United States Department of Energy announced that they had successfully tested a new type of nuclear reactor called KRUSTY, which stands for "Kilopower Reactor Using Stirling TechnologY", and which is designed to be able to power deep space vehicles and probes as well as exoplanetary encampments.
At the 2012 Cable-Tec Expo put on by the Society of Cable Telecommunications Engineers, Dean Kamen took the stage with Time Warner Cable Chief Technology Officer Mike LaJoie to announce a new initiative between his company Deka Research and the SCTE. Kamen refers to it as a Stirling engine.
Operational considerations
Size and temperature
Very low-power engines have been built that run on a temperature difference of as little as 0.5 K. A displacer-type Stirling engine has one piston and one displacer. A temperature difference is required between the top and bottom of the large cylinder to run the engine. In the case of the low-temperature-difference (LTD) Stirling engine, the temperature difference between one's hand and the surrounding air can be enough to run the engine. The power piston in the displacer-type Stirling engine is tightly sealed and is controlled to move up and down as the gas inside expands. The displacer, on the other hand, is very loosely fitted so that air can move freely between the hot and cold sections of the engine as the piston moves up and down. The displacer moves up and down to cause most of the gas in the displacer cylinder to be either heated, or cooled.
Stirling engines, especially those that run on small temperature differentials, are quite large for the amount of power that they produce (i.e., they have low specific power). This is primarily due to the heat transfer coefficient of gaseous convection, which is limited to about 500 W/(m²·K) in a typical cold heat exchanger and to about 500–5000 W/(m²·K) in a hot heat exchanger. Compared with internal combustion engines, this makes it more challenging for the engine designer to transfer heat into and out of the working gas. Because thermal efficiency falls as the temperature difference shrinks, the required heat transfer grows at lower temperature differences, and the heat exchanger surface (and cost) for 1 kW of output grows with (1/ΔT)². Therefore, the specific cost of very low temperature difference engines is very high. Increasing the temperature differential and/or pressure allows Stirling engines to produce more power, assuming the heat exchangers are designed for the increased heat load and can deliver the necessary convective heat flux.
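The (1/ΔT)² scaling can be illustrated with a rough sketch; the assumptions here (efficiency a fixed fraction of the Carnot limit, a fixed fraction of the overall temperature difference available to drive the hot-side exchanger, and a typical gas-side coefficient) are illustrative only, not a design method from the literature.

```python
# Rough illustration of why heat-exchanger area per kW grows roughly
# as (1/dT)^2 at small temperature differences. All parameters are
# assumed, illustrative values.
def exchanger_area_for_1kW(delta_T, T_cold=300.0, h=500.0,
                           carnot_fraction=0.5, film_fraction=0.1):
    """Estimated hot-side exchanger area (m^2) for 1 kW of shaft power."""
    T_hot = T_cold + delta_T
    efficiency = carnot_fraction * (1.0 - T_cold / T_hot)
    heat_input_w = 1000.0 / efficiency          # heat needed for 1 kW out
    driving_dT = film_fraction * delta_T        # K available across exchanger
    return heat_input_w / (h * driving_dT)      # A = Q / (h * dT)

for dT in (5, 10, 20):
    print(f"dT = {dT:2d} K -> area ~ {exchanger_area_for_1kW(dT):6.0f} m^2")
# Halving the temperature difference roughly quadruples the required area.
```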
A Stirling engine cannot start instantly; it literally needs to "warm up". This is true of all external combustion engines, but the warm up time may be longer for Stirlings than for others of this type such as steam engines. Stirling engines are best used as constant speed engines.
Power output of a Stirling engine tends to be constant, and adjusting it can require careful design and additional mechanisms. Typically, changes in output are achieved by varying the displacement of the engine (often through use of a swashplate crankshaft arrangement), or by changing the quantity of working fluid, or by altering the piston/displacer phase angle, or in some cases simply by altering the engine load. This property is less of a drawback in hybrid electric propulsion or "base load" utility generation where constant power output is actually desirable.
Gas choice
The gas used should have a low heat capacity, so that a given amount of transferred heat leads to a large increase in pressure. Considering this issue, helium would be the best gas because of its very low heat capacity. Air is a viable working fluid, but the oxygen in a highly pressurized air engine can combine with lubricating oil and explode, and such explosions have caused fatal accidents. Following one such accident, Philips pioneered the use of other gases to avoid such risk of explosions.
Hydrogen's low viscosity and high thermal conductivity make it the most powerful working gas, primarily because the engine can run faster than with other gases. However, because of hydrogen absorption, and given the high diffusion rate associated with this low molecular weight gas, particularly at high temperatures, H2 leaks through the solid metal of the heater. Diffusion through carbon steel is too high to be practical, but may be acceptably low for metals such as aluminum, or even stainless steel. Certain ceramics also greatly reduce diffusion. Hermetic pressure vessel seals are necessary to maintain pressure inside the engine without replacement of lost gas. For high-temperature-differential (HTD) engines, auxiliary systems may be required to maintain high-pressure working fluid. These systems can be a gas storage bottle or a gas generator. Hydrogen can be generated by electrolysis of water, the action of steam on red hot carbon-based fuel, by gasification of hydrocarbon fuel, or by the reaction of acid on metal. Hydrogen can also cause the embrittlement of metals. Hydrogen is a flammable gas, which is a safety concern if released from the engine.
Most technically advanced Stirling engines, like those developed for United States government labs, use helium as the working gas, because it functions close to the efficiency and power density of hydrogen with fewer of the material containment issues. Helium is inert, and hence not flammable. Helium is relatively expensive, and must be supplied as bottled gas. One test showed hydrogen to be 5% (absolute) more efficient than helium (24% relatively) in the GPU-3 Stirling engine. The researcher Allan Organ demonstrated that a well-designed air engine is theoretically just as efficient as a helium or hydrogen engine, but helium and hydrogen engines are several times more powerful per unit volume.
Some engines use air or nitrogen as the working fluid. These gases have much lower power density (which increases engine costs), but they are more convenient to use and they minimize the problems of gas containment and supply (which decreases costs). The use of compressed air in contact with flammable materials or substances such as lubricating oil introduces an explosion hazard, because compressed air contains a high partial pressure of oxygen. However, oxygen can be removed from air through an oxidation reaction or bottled nitrogen can be used, which is nearly inert and very safe.
Other possible lighter-than-air gases include methane and ammonia.
Pressurization
In most high-power Stirling engines, both the minimum pressure and mean pressure of the working fluid are above atmospheric pressure. This initial engine pressurization can be realized by a pump, or by filling the engine from a compressed gas tank, or even just by sealing the engine when the mean temperature is lower than the mean operating temperature. All of these methods increase the mass of working fluid in the thermodynamic cycle. All of the heat exchangers must be sized appropriately to supply the necessary heat transfer rates. If the heat exchangers are well designed and can supply the heat flux needed for convective heat transfer, then the engine, in a first approximation, produces power in proportion to the mean pressure, as predicted by the West number and Beale number. In practice, the maximum pressure is also limited to the safe pressure of the pressure vessel. Like most aspects of Stirling engine design, optimization is multivariate, and often has conflicting requirements. A difficulty of pressurization is that while it improves the power, the heat required increases proportionately to the increased power. This heat transfer is made increasingly difficult with pressurization since increased pressure also demands increased thicknesses of the walls of the engine, which, in turn, increase the resistance to heat transfer.
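As a first-order illustration of the pressure–power proportionality mentioned above, the Beale-number correlation can be sketched as follows; the Beale number, charge pressure, swept volume, and speed are assumed, typical-order values rather than parameters of any specific engine.

```python
# Beale-number sketch of the first-order relation between mean pressure
# and power: P_shaft ~ Bn * p_mean * V_swept * f. All inputs are assumed,
# typical-order values for illustration only.
def beale_power(p_mean_pa, v_swept_m3, freq_hz, beale_number=0.15):
    """Estimated shaft power in watts from the Beale correlation."""
    return beale_number * p_mean_pa * v_swept_m3 * freq_hz

p_mean = 5.0e6     # Pa   (about 50 bar charge pressure, assumed)
v_swept = 1.0e-4   # m^3  (100 cm^3 swept volume, assumed)
freq = 30.0        # Hz   (engine speed, assumed)

power_w = beale_power(p_mean, v_swept, freq)
print(f"estimated power ~ {power_w / 1000:.2f} kW")   # -> 2.25 kW
# Doubling the mean pressure doubles the estimate, which is why the heat
# exchangers must be sized for the correspondingly larger heat load.
```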
Lubricants and friction
At high temperatures and pressures, the oxygen in air-pressurized crankcases, or in the working gas of hot air engines, can combine with the engine's lubricating oil and explode. At least one person has died in such an explosion. Lubricants can also clog heat exchangers, especially the regenerator. For these reasons, designers prefer non-lubricated, low-coefficient of friction materials (such as rulon or graphite), with low normal forces on the moving parts, especially for sliding seals. Some designs avoid sliding surfaces altogether by using diaphragms for sealed pistons. These are some of the factors that allow Stirling engines to have lower maintenance requirements and longer life than internal-combustion engines.
Efficiency
Theoretical thermal efficiency equals that of the ideal Carnot cycle, i.e. the highest efficiency attainable by any heat engine. However, though the ideal cycle is useful for illustrating general principles, practical Stirling engines deviate substantially from it. It has been argued that the ideal cycle's indiscriminate use in many standard books on engineering thermodynamics has done a disservice to the study of Stirling engines in general.
Stirling engines cannot achieve the total efficiencies typical of an internal combustion engine, the main constraint being thermal efficiency. During internal combustion, temperatures reach around 1500 °C–1600 °C for a short period of time, resulting in a greater mean heat supply temperature of the thermodynamic cycle than any Stirling engine could achieve. It is not possible to supply heat at temperatures that high by conduction, as is done in Stirling engines, because no material could conduct heat from combustion at such a high temperature without huge heat losses and problems related to heat deformation of materials.
Stirling engines are capable of quiet operation and can use almost any heat source. The heat energy source is generated external to the Stirling engine rather than by internal combustion as with the Otto cycle or Diesel cycle engines. This type of engine is currently generating interest as the core component of micro combined heat and power (CHP) units, in which it is more efficient and safer than a comparable steam engine. However, it has a low power-to-weight ratio, rendering it more suitable for use in static installations where space and weight are not at a premium.
Other real-world issues reduce the efficiency of actual engines, due to the limits of convective heat transfer and viscous flow (friction). There are also practical, mechanical considerations: for instance, a simple kinematic linkage may be favoured over a more complex mechanism needed to replicate the idealized cycle, and limitations imposed by available materials such as non-ideal properties of the working gas, thermal conductivity, tensile strength, creep, rupture strength, and melting point. A question that often arises is whether the ideal cycle with isothermal expansion and compression is in fact the correct ideal cycle to apply to the Stirling engine. Professor C. J. Rallis has pointed out that it is very difficult to imagine any condition where the expansion and compression spaces may approach isothermal behavior and it is far more realistic to imagine these spaces as adiabatic. An ideal cycle in which the expansion and compression spaces are taken to be adiabatic, with isothermal heat exchangers and perfect regeneration, was analyzed by Rallis and presented as a better ideal yardstick for Stirling machinery. He called this cycle the 'pseudo-Stirling cycle' or 'ideal adiabatic Stirling cycle'. An important consequence of this ideal cycle is that it does not predict Carnot efficiency. A further conclusion of this ideal cycle is that maximum efficiencies are found at lower compression ratios, a characteristic observed in real machines. In an independent work, T. Finkelstein also assumed adiabatic expansion and compression spaces in his analysis of Stirling machinery.
The ideal Stirling cycle is unattainable in the real world, as with any heat engine. The efficiency of Stirling machines is also linked to the environmental temperature: higher efficiency is obtained when the weather is cooler, thus making this type of engine less attractive in places with warmer climates. As with other external combustion engines, Stirling engines can use heat sources other than the combustion of fuels. For example, various designs for solar-powered Stirling engines have been developed.
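A quick calculation of the Carnot ceiling at different ambient temperatures illustrates this climate sensitivity; the heater temperature and the ambient values below are assumed purely for illustration.

```python
# Carnot ceiling for a fixed hot-end temperature at different ambient
# (cold-sink) temperatures. The temperatures are assumed values.
def carnot_limit(t_hot_k, t_cold_k):
    return 1.0 - t_cold_k / t_hot_k

T_HOT = 900.0                      # K, assumed heater temperature
for ambient_c in (0, 20, 40):      # ambient temperatures in Celsius
    eta = carnot_limit(T_HOT, ambient_c + 273.15)
    print(f"ambient {ambient_c:2d} C -> Carnot limit {eta:.3f}")
# The ceiling falls from about 0.697 at 0 C to about 0.652 at 40 C;
# real engines run well below the ceiling, but the trend is the same.
```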
Comparison with internal combustion engines
In contrast to internal combustion engines, Stirling engines have the potential to use renewable heat sources more easily, and to be quieter and more reliable with lower maintenance. They are preferred for applications that value these unique advantages, particularly if the cost per unit energy generated is more important than the capital cost per unit power. On this basis, Stirling engines are cost-competitive up to about 100 kW.
Compared to an internal combustion engine of the same power rating, Stirling engines currently have a higher capital cost and are usually larger and heavier. However, they are more efficient than most internal combustion engines. Their lower maintenance requirements make the overall energy cost comparable. The thermal efficiency is also comparable (for small engines), ranging from 15% to 30%. For applications such as micro-CHP, a Stirling engine is often preferable to an internal combustion engine. Other applications include water pumping, astronautics, and electrical generation from plentiful energy sources that are incompatible with the internal combustion engine, such as solar energy, and biomass such as agricultural waste and other waste such as domestic refuse. However, Stirling engines are generally not price-competitive as automobile engines, because of their high cost per unit power and low power density.
Basic analysis is based on the closed-form Schmidt analysis.
Advantages of Stirling engines compared to internal combustion engines include:
Stirling engines can run directly on any available heat source, not just one produced by combustion, so they can run on heat from solar, geothermal, biological, nuclear sources or waste heat from industrial processes.
If combustion is used to supply heat, it can be a continuous process, so those emissions associated with the intermittent combustion processes of a reciprocating internal combustion engine can be reduced.
Bearings and seals can be on the cool side of the engine, where they require less lubricant and last longer than equivalents on other reciprocating engine types.
The engine mechanisms are in some ways simpler than other reciprocating engine types. No valves are needed, and the burner system (if any) can be relatively simple. Crude Stirling engines can be made using common household materials.
A Stirling engine uses a single-phase working fluid that maintains an internal pressure close to the design pressure, and thus for a properly designed system the risk of explosion is low. In comparison, a steam engine uses a two-phase gas/liquid working fluid, so a faulty overpressure relief valve can cause an explosion.
Low operating pressure can be used, allowing the use of lightweight cylinders.
They can be built to run quietly and without an air supply, for air-independent propulsion use in submarines.
They start easily (albeit slowly, after warmup) and run more efficiently in cold weather, in contrast to the internal combustion engine, which starts quickly in warm weather, but not in cold weather.
A Stirling engine used for pumping water can be configured so that the water cools the compression space. This increases efficiency when pumping cold water.
They are extremely flexible. They can be used as CHP (combined heat and power) in the winter and as coolers in summer.
Waste heat is easily harvested (compared to waste heat from an internal combustion engine), making Stirling engines useful for dual-output heat and power systems.
In 1986 NASA built a Stirling automotive engine and installed it in a Chevrolet Celebrity. Fuel economy was improved 45% and emissions were greatly reduced. Acceleration (power response) was equivalent to the standard internal combustion engine. This engine, designated the Mod II, also nullifies arguments that Stirling engines are heavy, expensive, unreliable, and demonstrate poor performance. A catalytic converter, muffler and frequent oil changes are not required.
Disadvantages of Stirling engines compared to internal combustion engines include:
Stirling engine designs require heat exchangers for heat input and for heat output, and these must contain the pressure of the working fluid, where the pressure is proportional to the engine power output. In addition, the expansion-side heat exchanger is often at very high temperature, so the materials must resist the corrosive effects of the heat source, and have low creep. Typically these material requirements substantially increase the cost of the engine. The materials and assembly costs for a high-temperature heat exchanger typically account for 40% of the total engine cost.
All thermodynamic cycles require large temperature differentials for efficient operation. In an external combustion engine, the heater temperature always equals or exceeds the expansion temperature. This means that the metallurgical requirements for the heater material are very demanding. This is similar to a Gas turbine, but is in contrast to an Otto engine or Diesel engine, where the expansion temperature can far exceed the metallurgical limit of the engine materials, because the input heat source is not conducted through the engine, so engine materials operate closer to the average temperature of the working gas.
The Stirling cycle is not actually achievable; the real cycle in Stirling machines is less efficient than the theoretical Stirling cycle. The efficiency of the Stirling cycle is also lower where ambient temperatures are mild; it gives its best results in a cool environment, such as winters in northern countries.
Dissipation of waste heat is especially complicated because the coolant temperature is kept as low as possible to maximize thermal efficiency. This increases the size of the radiators, which can make packaging difficult. Along with materials cost, this has been one of the factors limiting the adoption of Stirling engines as automotive prime movers. For other applications, such as ship propulsion and stationary microgeneration systems using combined heat and power (CHP), high power density is not required.
Applications
Applications of the Stirling engine range from heating and cooling to underwater power systems. A Stirling engine can function in reverse as a heat pump for heating or cooling. Other uses include combined heat and power, solar power generation, Stirling cryocoolers, heat pumps, marine engines, low power model aircraft engines, and low temperature difference engines.
| Physical sciences | Thermodynamics | Physics |
187022 | https://en.wikipedia.org/wiki/Cupronickel | Cupronickel | Cupronickel or copper–nickel (CuNi) is an alloy of copper with nickel, usually along with small quantities of other metals added for strength, such as iron and manganese. The copper content typically varies from 60 to 90 percent. (Monel is a nickel–copper alloy that contains a minimum of 52 percent nickel.)
Despite its high copper content, cupronickel is silver in colour. Cupronickel is highly resistant to corrosion by salt water, and is therefore used for piping, heat exchangers and condensers in seawater systems, as well as for marine hardware. It is sometimes used for the propellers, propeller shafts, and hulls of high-quality boats. Other uses include military equipment and the chemical, petrochemical, and electrical industries.
In decorative use, a cupronickel alloy called nickel silver is common, although it contains additional zinc but no silver.
Another common 20th-century use of cupronickel was silver-coloured coins. For this use, the typical alloy has a 3:1 copper-to-nickel ratio, with very small amounts of manganese. In the past, true silver coins were debased with cupronickel; coins of the pound sterling, for example, had their silver content replaced from 1947 onward.
Name
Cupronickel, as the German kupfernickel, originally referred to the mineral form of nickel arsenide; natural deposits had superficial similarities to copper ores, and local folklore blamed the sprite Nickel (compare Old Nick) for the absence of usable copper and health issues from arsenic exposure. It was from a sample of this kupfernickel that Baron Axel Fredrik Cronstedt first isolated elemental nickel in 1751, naming the new metal for the sprite. The mineral was given its modern names, nickeline and niccolite, by the mid-19th century.
Aside from cupronickel and copper–nickel, several other terms have been used to describe the material: the tradenames Alpaka or Alpacca, Argentan, Minargent, the registered French term cuivre blanc, Chinese silver, and the romanized Cantonese term Paktong, 白銅 (the French and Cantonese terms both meaning "white copper").
Cupronickel alloys containing zinc are referred to as nickel silver, sometimes also called hotel silver, German silver, or plata alemana (Spanish for "German silver").
Applications
Marine engineering
Cupronickel alloys are used for marine applications due to their resistance to seawater corrosion, good fabricability, and their effectiveness in lowering macrofouling levels. Alloys ranging in composition from 90% Cu–10% Ni to 70% Cu–30% Ni are commonly specified in heat exchanger or condenser tubes in a wide variety of marine applications.
Important marine applications for cupronickel include:
Shipbuilding and repair: hulls of boats and ships, seawater cooling, bilge and ballast, sanitary, fire fighting, inert gas, hydraulic and pneumatic chiller systems.
Desalination plants: brine heaters, heat rejection and recovery, and in evaporator tubing.
Offshore oil and gas platforms and processing and FPSO vessels: systems and splash zone sheathings.
Power generation: steam turbine condensers, oil coolers, auxiliary cooling systems and high pressure pre-heaters at nuclear and fossil fuel power plants.
Seawater system components: condenser and heat exchanger tubes, tube sheets, piping, high pressure systems, fittings, pumps, and water boxes.
Coinage
The successful use of cupronickel in coinage is due to its corrosion resistance, electrical conductivity, durability, malleability, low allergy risk, ease of stamping, antimicrobial properties and recyclability.
In Europe, Switzerland pioneered cupronickel-based billon coinage in 1850, with the addition of silver and zinc, for coins of 5, 10 and 20 Rappen. Starting in 1860/1861, Belgium issued 5, 10 and 20 Centimes in pure cupronickel (75% copper, 25% nickel, without additional silver and zinc), and Germany issued 5 and 10 Pfennig in the same 75:25 ratio from 1873/1874 (until 1915/1916). In 1879, Switzerland also adopted, for its 5 and 10 Rappen coins, the cheaper 75:25 copper-to-nickel ratio then being used in Belgium, the United States and Germany. From 1947 to 2012, all "silver" coinage in the UK was made from cupronickel (from 2012 onwards the two smallest UK cupronickel denominations were replaced with lower-cost nickel-plated steel coins). When silver prices rose in the 1960s and 1970s, some other European countries also replaced their remaining silver denominations with cupronickel, for example the 1/2 to 5 Swiss franc coins (pictured) starting in 1968 and the German 5 Deutsche Mark from 1975 to 2001. Since 1999, cupronickel has also been used for the inner segment of the 1 euro coin and the outer segment of the 2 euro coin.
In part due to silver hoarding in the Civil War, the United States Mint first used cupronickel for circulating coinage in three-cent pieces starting in 1865, and then for five-cent pieces starting in 1866. Prior to these dates, both denominations had been made only in silver in the United States.
Cupronickel is the cladding on either side of United States half-dollars (50¢) since 1971, and all quarters (25¢) and dimes (10¢) made after 1964. Currently, some circulating coins, such as the United States Jefferson nickel (5¢), the Swiss franc, and the South Korean 500 and 100 won are made of solid cupronickel (75:25 ratio).
Decorative housewares
Nickel silver cupronickels are used extensively as a substitute for silver in tableware and other decorative housewares. Nickel silver is also used as a base for silver plating, where the product is known as electro-plated nickel silver, or EPNS.
Other usage
A thermocouple junction is formed from a pair of thermocouple conductors such as iron-constantan, copper-constantan or nickel-chromium/nickel-aluminium. The junction may be protected within a sheath of copper, cupronickel or stainless steel.
Cupronickel is used in cryogenic applications. It retains high ductility and thermal conductivity at very low temperatures. Where other metals like steel or aluminum would shatter and become thermally inert, cupronickel's unusual thermal and mechanical performance at these low temperatures facilitates a number of niche uses. Machinery that must perform many duty cycles at continuously low temperatures and heat exchangers at cryogenic plants are the main industrial destinations of cupronickel in cryogenic applications. Niche applications also exist; for example, the alloy's high thermal conductivity at low temperatures has made cupronickel ubiquitous in freeze branding operations.
Beginning around the turn of the 20th century, bullet jackets were commonly made from this material. It was soon replaced with gilding metal to reduce metal fouling in the bore.
Currently, cupronickel and nickel silver remain the basic material for silver-plated cutlery. It is commonly used for mechanical and electrical equipment, medical equipment, zippers, jewelry items, and both for strings for instruments in the violin family, and for guitar frets. Fender Musical Instruments used "CuNiFe" magnets in their "Wide Range Humbucker" pickup for various Telecaster and Starcaster guitars during the 1970s.
For high-quality cylinder locks and locking systems, cylinder cores are made from wear-resistant cupronickel.
Cupronickel has been used as an alternative to traditional steel hydraulic brake lines (the pipes containing the brake fluid), as it does not rust. Since cupronickel is much softer than steel, it bends and flares more easily, and the same property allows it to form a better seal with hydraulic components.
Physical and mechanical properties
Cupronickel lacks a copper color due to nickel's high electronegativity, which causes a loss of one electron in copper's d-shell (leaving 9 electrons in the d-shell versus pure copper's typical 10 electrons).
Important properties of cupronickel alloys include corrosion resistance, inherent resistance to macrofouling, good tensile strength, excellent ductility when annealed, thermal conductivity and expansion characteristics amenable for heat exchangers and condensers, good thermal conductivity and ductility at cryogenic temperatures and beneficial antimicrobial touch surface properties.
Subtle differences in corrosion resistance and strength determine which alloy is selected. Descending the table, the maximum allowable flow rate in piping increases, as does the tensile strength.
In seawater, the alloys have excellent corrosion rates which remain low as long as the maximum design flow velocity is not exceeded. This velocity depends on geometry and pipe diameter. They have high resistance to crevice corrosion, stress corrosion cracking and hydrogen embrittlement that can be troublesome to other alloy systems. Copper–nickels naturally form a thin protective surface layer over the first several weeks of exposure to seawater and this provides its ongoing resistance. Additionally, they have a high inherent biofouling resistance to attachment by macrofoulers (e.g. seagrasses and molluscs) living in the seawater. To use this property to its full potential, the alloy needs to be free of the effects of, or insulated from, any form of cathodic protection.
However, Cu–Ni alloys can show high corrosion rates in polluted or stagnant seawater when sulfides or ammonia are present. It is important, therefore, to avoid exposure to such conditions, particularly during commissioning and refit while the surface films are maturing. Ferrous sulfate dosing to sea water systems can provide improved resistance.
As copper and nickel alloy with each other easily and have simple structures, the alloys are ductile and readily fabricated. Strength and hardness for each individual alloy are increased by cold working; they are not hardened by heat treatment. Joining of 90–10 (C70600) and 70–30 (C71500) is possible by both welding and brazing. They are both weldable by the majority of techniques, although autogenous (welding without weld consumables) or oxyacetylene methods are not recommended. The 70–30 rather than 90–10 weld consumables are normally preferred for both alloys and no after-welding heat treatment is required. They can also be welded directly to steel, providing a 65% nickel–copper weld consumable is used to avoid iron dilution effects. The C71640 alloy tends to be used as seamless tubing and expanded rather than welded into the tube plate. Brazing requires appropriate silver-base brazing alloys. However, great care must be taken to ensure that there are no stresses in the Cu–Ni being silver brazed, since any stress can cause intergranular penetration of the brazing material, and severe stress cracking (see image). Thus, full annealing of any potential mechanical stress is necessary.
Applications for Cu–Ni alloys have withstood the test of time: they are still widely used in seawater system piping, condensers and heat exchangers in naval vessels, commercial shipping, multiple-stage flash desalination plants and power stations. They have also been used as splash zone cladding on offshore structures and protective cladding on boat hulls, as well as for solid hulls themselves.
Fabrication
Due to its ductility, cupronickel alloys can be readily fabricated in a wide variety of product forms and fittings. Cupronickel tubing can be readily expanded into tube sheets for the manufacturing of shell and tube heat exchangers.
Details of fabrication procedures, including general handling, cutting and machining, forming, heat treatment, preparing for welding, weld preparations, tack welding, welding consumables, welding processes, painting, mechanical properties of welds, and tube and pipe bending are available.
Standards
ASTM, EN, and ISO standards exist for ordering wrought and cast forms of cupronickel.
Thermocouples and resistors whose resistance is stable across changes in temperature contain the alloy constantan, which consists of 55% copper and 45% nickel.
History
Chinese history
Cupronickel alloys were known as "white copper" to the Chinese since about the third century BC. Some weapons made during the Warring States period were made with Cu-Ni alloys. The theory of Chinese origins of Bactrian cupronickel was suggested in 1868 by Flight, who found that the coins considered the oldest cupronickel coins yet discovered were of a very similar alloy to Chinese paktong.
The author-scholar Ho Wei precisely described the process of making cupronickel in about 1095 AD. The paktong alloy was described as being made by adding small pills of naturally occurring Yunnan ore to a bath of molten copper. When a crust of slag formed, saltpeter was added, the alloy was stirred and the ingot was immediately cast. Zinc is mentioned as an ingredient but there are no details about when it was added. The ore used is noted as solely available from Yunnan, according to the story:
"San Mao Chun were at Tanyang during a famine year when many people died, so taking certain chemicals, Ying projected them onto silver, turning it into gold, and he also transmuted iron into silver – thus enabling the lives of many to be saved [through purchasing grain through this fake silver and gold]
Thereafter all those who prepared chemical powders by heating and transmuting copper by projection called their methods "Tanyang techniques".
The late Ming and Qing literature have very little information about paktong. However, it is first mentioned specifically by name in the Thien Kung Khai Wu of circa 1637:
"When lu kan shih (zinc carbonate, calamine) or wo chhein (zinc metal) is mixed and combined with chih thung (copper), one gets 'yellow bronze' (ordinary brass). When phi shang and other arsenic substances are heated with it, one gets 'white bronze' or white copper: pai thong. When alum and niter and other chemicals are mixed together one gets ching thung: green bronze."
Ko Hung stated in 300 AD: "The Tanyang copper was created by throwing a mercuric elixir into Tanyang copper and heated- gold will be formed." However, the Pha Phu Tsu and the Shen I Ching describe a statue in the Western provinces as being of silver, tin, lead and Tanyang copper – which looked like gold, and could be forged for plating and inlaying vessels and swords.
Joseph Needham et al. argue that cupronickel was at least known as a unique alloy by the Chinese during the reign of Liu An in 120 BC in Yunnan. Moreover, the Yunnanese State of Tien was founded in 334 BC as a colony of the Chu. Most likely, modern paktong was unknown to Chinese of the day – but the naturally occurring Yunnan ore cupronickel alloy was likely a valuable internal trade commodity.
Greco-Bactrian coinage
In 1868, W. Flight discovered a Greco-Bactrian coin comprising 20% nickel that dated from 180 to 170 BCE with the bust of Euthydemus II on the obverse. Coins of a similar alloy with busts of his younger brothers, Pantaleon and Agathocles, were minted around 170 BCE. The composition of the coins was later verified using the traditional wet method and X-ray fluorescence spectrometry. Cunningham in 1873 proposed the "Bactrian nickel theory," which suggested that the coins must have been the result of overland trade from China through India to Greece. Cunningham's theory was supported by scholars such as W. W. Tarn, Sir John Marshall, and J. Newton Friend, but was criticized by E. R. Caley and S. van R. Cammann.
In 1973, Cheng and Schwitter in their new analyses suggested that the Bactrian alloys (copper, lead, iron, nickel and cobalt) were closely similar to the Chinese paktong, and of nine known Asian nickel deposits, only those in China could provide the identical chemical compositions. Cammann criticized Cheng and Schwitter's paper, arguing that the decline of cupronickel currency should not have coincided with the opening of the Silk Road. If the Bactrian nickel theory were true, according to Cammann, the Silk Road would have increased the supply of cupronickel. However, the end of Greco-Bactrian cupronickel currency could be attributed to other factors such as the end of the House of Euthydemus.
European history
The alloy seems to have been rediscovered by the West during alchemy experiments. Notably, Andreas Libavius, in his Alchemia of 1597, mentions a surface-whitened copper aes album by mercury or silver. But in De Natura Metallorum in Singalarum Part 1, published in 1599, the same term was applied to "tin" from the East Indies (modern-day Indonesia and the Philippines) and given the Spanish name, tintinaso.
Richard Watson of Cambridge appears to be the first to discover that cupronickel was an alloy of three metals. In attempting to rediscover the secret of white copper, Watson critiqued Jean-Baptiste Du Halde's History of China (1688) as confusing the term paktong. He noted the Chinese of his day did not form it as an alloy but rather smelted readily available unprocessed ore:
"...appeared from a vast series of experiments made at Peking- that it occurred naturally as an ore mined at the region, the most extraordinary copper is pe-tong or white copper: it is white when dug out of the mine and even more white within than without. It appears, by a vast number of experiments made at Peking, that its colour is owing to no mixture; on the contrary, all mixtures diminish its beauty, for, when it is rightly managed it looks exactly like silver and were there not a necessity of mixing a little tutenag or such metal to soften it, it would be so much more the extraordinary as this sort of copper is found nowhere but in China and that only in the Province of Yunnan". Notwithstanding what is here said, of the colour of the copper being owing to no mixture, it is certain the Chinese white copper as brought to us, is a mixt [sic: mixed] metal; so that the ore from which it was extracted must consist of various metallic substances; and from such ore that the natural orichalcum if it ever existed, was made."
During the peak of European importation of Chinese white copper, from 1750 to 1800, increased attention was paid to discovering its constituents. Peat and Cookson found that "the darkest proved to contain 7.7% nickel and the lightest said to be indistinguishable from silver with a characteristic bell-like resonance when struck and considerable resistance to corrosion, 11.1%".
Another trial by Andrew Fyfe estimated the nickel content at 31.6%. Guesswork ended when James Dinwiddie of the Macartney Embassy brought back some of the ore from which paktong was made in 1793, at considerable personal risk (smuggling of paktong ore was a capital offence under the Chinese Emperor). Cupronickel became widely understood, as published by E. Thomason in 1823 in a submission to the Royal Society of Arts, which was later rejected for not being new knowledge.
Efforts in Europe to exactly duplicate the Chinese paktong failed due to a general lack of the requisite naturally occurring complex cobalt–nickel–arsenic ore. The Schneeberg district of Germany, where the famous Blaufarbenwerke made cobalt blue and other pigments, was the only place in Europe holding such ores.
At the same time, the Prussian Verein zur Beförderung des Gewerbefleißes (Society for the Improvement of Business Diligence/Industriousness) offered a prize for the mastery of the process. Unsurprisingly, Dr E.A. Geitner and J.R. von Gersdoff of Schneeberg won the prize and launched their "German silver" brand under the trade names Argentan and Neusilber (new silver).
In 1829, Percival Norton Johnston persuaded Dr. Geitner to establish a foundry in Bow Common behind Regents' Park Canal in London, and obtained ingots of nickel-silver with the composition 18% Ni, 55% Cu and 27% Zn.
Between 1829 and 1833, Percival Norton Johnson was the first person to refine cupronickel on the British Isles. He became a wealthy man, producing in excess of 16.5 tonnes per year. The alloy was mainly made into cutlery by the Birmingham firm William Hutton and sold under the trade-name "Argentine".
Johnson's most serious competitors, Charles Askin and Brok Evans, under the brilliant chemist Dr. EW Benson, devised greatly improved methods of cobalt and nickel suspension and marketed their own brand of nickel-silver, called "British Plate".
John Fairfield Thompson writes that the 3:1 copper-nickel alloy was developed for coinage by Belgium in 1860. France and Greece are notable for their adoption of this technology in the 20th century.
After the unification of Germany, cupronickel coinage was introduced by the German Coinage Act, and the sudden demand for nickel for the tens of millions of 5 and 10 pfennig coins minted in 1873–1876 caused such a shock on the previously tranquil market that the price more than tripled, leading to a significant expansion of supply.
By the 1920s, a 70–30 copper–nickel grade was developed for naval condensers. Soon afterwards, a 2% manganese and 2% iron alloy, now known as alloy C71640, was introduced for a UK power station that needed better erosion resistance because of the levels of entrained sand in the seawater. A 90–10 alloy first became available in the 1950s, initially for seawater piping, and is now the more widely used alloy for this purpose.
| Physical sciences | Copper alloys | Chemistry |
187240 | https://en.wikipedia.org/wiki/Quadratic%20function | Quadratic function | In mathematics, a quadratic function of a single variable is a function of the form
f(x) = ax² + bx + c, with a ≠ 0,
where x is its variable, and a, b, and c are coefficients. The expression ax² + bx + c, especially when treated as an object in itself rather than as a function, is a quadratic polynomial, a polynomial of degree two. In elementary mathematics a polynomial and its associated polynomial function are rarely distinguished and the terms quadratic function and quadratic polynomial are nearly synonymous and often abbreviated as quadratic.
The graph of a real single-variable quadratic function is a parabola. If a quadratic function is equated with zero, then the result is a quadratic equation. The solutions of a quadratic equation are the zeros (or roots) of the corresponding quadratic function, of which there can be two, one, or zero. The solutions are described by the quadratic formula.
A quadratic polynomial or quadratic function can involve more than one variable. For example, a two-variable quadratic function of variables x and y has the form
ax² + bxy + cy² + dx + ey + f,
with at least one of a, b, and c not equal to zero. In general the zeros of such a quadratic function describe a conic section (a circle or other ellipse, a parabola, or a hyperbola) in the x–y plane. A quadratic function can have an arbitrarily large number of variables. The set of its zeros forms a quadric, which is a surface in the case of three variables and a hypersurface in the general case.
Etymology
The adjective quadratic comes from the Latin word quadrātum ("square"). A term raised to the second power like x² is called a square in algebra because it is the area of a square with side x.
Terminology
Coefficients
The coefficients of a quadratic function are often taken to be real or complex numbers, but they may be taken in any ring, in which case the domain and the codomain are this ring (see polynomial evaluation).
Degree
When using the term "quadratic polynomial", authors sometimes mean "having degree exactly 2", and sometimes "having degree at most 2". If the degree is less than 2, this may be called a "degenerate case". Usually the context will establish which of the two is meant.
Sometimes the word "order" is used with the meaning of "degree", e.g. a second-order polynomial. However, where the "degree of a polynomial" refers to the largest degree of a non-zero term of the polynomial, more typically "order" refers to the lowest degree of a non-zero term of a power series.
Variables
A quadratic polynomial may involve a single variable x (the univariate case), or multiple variables such as x, y, and z (the multivariate case).
The one-variable case
Any single-variable quadratic polynomial may be written as
ax² + bx + c,
where x is the variable, and a, b, and c represent the coefficients. Such polynomials often arise in a quadratic equation ax² + bx + c = 0. The solutions to this equation are called the roots and can be expressed in terms of the coefficients as the quadratic formula. Each quadratic polynomial has an associated quadratic function, whose graph is a parabola.
Bivariate and multivariate cases
Any quadratic polynomial with two variables may be written as
ax² + bxy + cy² + dx + ey + f,
where x and y are the variables and a, b, c, d, e, and f are the coefficients, and at least one of a, b, and c is nonzero. Such polynomials are fundamental to the study of conic sections, as the implicit equation of a conic section is obtained by equating to zero a quadratic polynomial, and the zeros of a quadratic function form a (possibly degenerate) conic section.
Similarly, quadratic polynomials with three or more variables correspond to quadric surfaces or hypersurfaces.
Quadratic polynomials that have only terms of degree two are called quadratic forms.
Forms of a univariate quadratic function
A univariate quadratic function can be expressed in three formats:
f(x) = ax² + bx + c is called the standard form,
f(x) = a(x - r1)(x - r2) is called the factored form, where r1 and r2 are the roots of the quadratic function and the solutions of the corresponding quadratic equation.
f(x) = a(x - h)² + k is called the vertex form, where h and k are the x and y coordinates of the vertex, respectively.
The coefficient a is the same value in all three forms. To convert the standard form to factored form, one needs only the quadratic formula to determine the two roots r1 and r2. To convert the standard form to vertex form, one needs a process called completing the square. To convert the factored form (or vertex form) to standard form, one needs to multiply, expand and/or distribute the factors.
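For illustration, the following Python sketch (the function name and test values are arbitrary choices, not part of the standard treatment) performs these conversions numerically, returning the vertex-form parameters and the two roots used in the factored form:

import cmath

def quadratic_forms(a, b, c):
    # Vertex form f(x) = a*(x - h)**2 + k, obtained by completing the square
    h = -b / (2 * a)
    k = c - b ** 2 / (4 * a)
    # Factored form f(x) = a*(x - r1)*(x - r2), roots from the quadratic formula
    d = cmath.sqrt(b ** 2 - 4 * a * c)   # complex square root also covers negative discriminants
    r1 = (-b + d) / (2 * a)
    r2 = (-b - d) / (2 * a)
    return (h, k), (r1, r2)

# Example: f(x) = 2x^2 - 4x - 6 has vertex (1, -8) and roots 3 and -1
print(quadratic_forms(2, -4, -6))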
Graph of the univariate function
Regardless of the format, the graph of a univariate quadratic function f(x) = ax² + bx + c is a parabola (as shown at the right). Equivalently, this is the graph of the bivariate quadratic equation ax² + bx + c - y = 0.
If a > 0, the parabola opens upwards.
If a < 0, the parabola opens downwards.
The coefficient a controls the degree of curvature of the graph; a larger magnitude of a gives the graph a more closed (sharply curved) appearance.
The coefficients b and a together control the location of the axis of symmetry of the parabola (also the x-coordinate of the vertex and the h parameter in the vertex form), which is at
x = -b/(2a).
The coefficient c controls the height of the parabola; more specifically, it is the height of the parabola where it intercepts the y-axis.
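These effects can be visualised numerically; the short Python sketch below (using numpy and matplotlib, with arbitrarily chosen coefficient values for illustration only) plots a few parabolas for comparison:

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-4, 4, 200)
# vary a (opening direction and curvature), then b (axis of symmetry), then c (height)
for a, b, c in [(1, 0, 0), (3, 0, 0), (-1, 0, 0), (1, 4, 0), (1, 0, 3)]:
    plt.plot(x, a * x**2 + b * x + c, label=f"a={a}, b={b}, c={c}")
plt.axhline(0, color="grey", linewidth=0.5)
plt.legend()
plt.show()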
Vertex
The vertex of a parabola is the place where it turns; hence, it is also called the turning point. If the quadratic function is in vertex form, the vertex is (h, k). Using the method of completing the square, one can turn the standard form
f(x) = ax² + bx + c
into
f(x) = a(x + b/(2a))² + c - b²/(4a),
so the vertex, (h, k), of the parabola in standard form is
(-b/(2a), c - b²/(4a)).
If the quadratic function is in factored form
f(x) = a(x - r1)(x - r2),
the average of the two roots, i.e.,
(r1 + r2)/2,
is the x-coordinate of the vertex, and hence the vertex is
((r1 + r2)/2, f((r1 + r2)/2)).
The vertex is also the maximum point if a < 0, or the minimum point if a > 0.
The vertical line
x = h = -b/(2a)
that passes through the vertex is also the axis of symmetry of the parabola.
Maximum and minimum points
Using calculus, the vertex point, being a maximum or minimum of the function, can be obtained by finding the roots of the derivative f′(x) = 2ax + b:
x is a root of f′(x) if
2ax + b = 0,
resulting in
x = -b/(2a),
with the corresponding function value
f(-b/(2a)) = c - b²/(4a),
so again the vertex point coordinates, (h, k), can be expressed as
(-b/(2a), c - b²/(4a)).
Roots of the univariate function
Exact roots
The roots (or zeros), r1 and r2, of the univariate quadratic function
f(x) = ax² + bx + c
are the values of x for which f(x) = 0.
When the coefficients a, b, and c are real or complex, the roots are
x = (-b ± √(b² - 4ac)) / (2a).
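When this formula is evaluated in floating-point arithmetic, one root can suffer cancellation if b² is much larger than 4ac; a common remedy is to compute the better-conditioned root first and recover the other from the product of the roots, c/a. The Python sketch below is illustrative only (it assumes real coefficients and a non-negative discriminant):

import math

def real_roots(a, b, c):
    # Real roots of a*x**2 + b*x + c, avoiding catastrophic cancellation
    d = math.sqrt(b * b - 4 * a * c)
    # q has the same sign as b, so b and copysign(d, b) do not cancel
    q = -0.5 * (b + math.copysign(d, b))
    r1 = q / a                                   # cancellation-free root
    r2 = c / q if q != 0 else -b / (2 * a)       # second root via r1 * r2 = c / a
    return r1, r2

print(real_roots(1, -3, 2))  # (1.0, 2.0), up to ordering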
Upper bound on the magnitude of the roots
The modulus of the roots of a quadratic ax² + bx + c can be no greater than (max(|a|, |b|, |c|) / |a|) × φ, where φ is the golden ratio (1 + √5)/2.
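The bound can be checked numerically; the Python sketch below (sampling ranges and tolerances are arbitrary choices for illustration) draws random coefficients and verifies that both roots lie within the stated bound:

import random, cmath

phi = (1 + 5 ** 0.5) / 2
for _ in range(10000):
    a, b, c = (random.uniform(-10, 10) for _ in range(3))
    if abs(a) < 1e-6:
        continue  # skip near-degenerate (non-quadratic) cases
    d = cmath.sqrt(b * b - 4 * a * c)
    roots = ((-b + d) / (2 * a), (-b - d) / (2 * a))
    bound = phi * max(abs(a), abs(b), abs(c)) / abs(a)
    assert all(abs(r) <= bound * (1 + 1e-9) for r in roots)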
The square root of a univariate quadratic function
The square root of a univariate quadratic function gives rise to one of the four conic sections, almost always either to an ellipse or to a hyperbola.
If a > 0, then the equation y = ±√(ax² + bx + c) describes a hyperbola, as can be seen by squaring both sides. The directions of the axes of the hyperbola are determined by the ordinate of the minimum point of the corresponding parabola y = ax² + bx + c. If the ordinate is negative, then the hyperbola's major axis (through its vertices) is horizontal, while if the ordinate is positive then the hyperbola's major axis is vertical.
If a < 0, then the equation y = ±√(ax² + bx + c) describes either a circle or other ellipse or nothing at all. If the ordinate of the maximum point of the corresponding parabola
y = ax² + bx + c
is positive, then its square root describes an ellipse, but if the ordinate is negative then it describes an empty locus of points.
Iteration
To iterate a function f(x) = ax² + bx + c, one applies the function repeatedly, using the output from one iteration as the input to the next.
One cannot always deduce the analytic form of f^(n)(x), which means the nth iteration of f(x). (The superscript n can be extended to negative numbers, referring to the iteration of the inverse of f(x) if the inverse exists.) But there are some analytically tractable cases.
For example, for the iterative equation
x_(n+1) = f(x_n) = 2(x_n - 1/2)² + 1/2
one has
f(x) = 2(x - 1/2)² + 1/2 = h^(-1)(g(h(x))),
where
g(x) = 2x² and h(x) = x - 1/2.
So by induction,
f^(n)(x) = h^(-1)(g^(n)(h(x)))
can be obtained, where g^(n)(x) can be easily computed as
g^(n)(x) = 2^(2^n - 1) x^(2^n).
Finally, we have
f^(n)(x) = 2^(2^n - 1) (x - 1/2)^(2^n) + 1/2
as the solution.
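This closed form can be checked against direct iteration; in the Python sketch below (the starting value and iteration count are arbitrary), the two computations agree:

def f(x):
    return 2 * (x - 0.5) ** 2 + 0.5           # the quadratic being iterated

def f_iterate_closed(x, n):
    return 2 ** (2 ** n - 1) * (x - 0.5) ** (2 ** n) + 0.5   # closed-form nth iterate

x0, n = 0.9, 5
x = x0
for _ in range(n):
    x = f(x)                                  # direct iteration
print(x, f_iterate_closed(x0, n))             # both print approximately 0.500396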
See Topological conjugacy for more detail about the relationship between f and g. And see Complex quadratic polynomial for the chaotic behavior in the general iteration.
The logistic map
x_(n+1) = r x_n (1 - x_n),
with parameter 2 < r < 4, can be solved in certain cases, one of which is chaotic and one of which is not. In the chaotic case r = 4 the solution is
x_n = sin²(2^n θ π),
where the initial condition parameter θ is given by θ = (1/π) arcsin(√x0). For rational θ, after a finite number of iterations x_n maps into a periodic sequence. But almost all θ are irrational, and, for irrational θ, x_n never repeats itself – it is non-periodic and exhibits sensitive dependence on initial conditions, so it is said to be chaotic.
The solution of the logistic map when r = 2 is
x_n = 1/2 - (1/2)(1 - 2x0)^(2^n)
for x0 ∈ [0, 1). Since (1 - 2x0) ∈ (-1, 1) for any value of x0 other than the unstable fixed point 0, the term (1 - 2x0)^(2^n) goes to 0 as n goes to infinity, so x_n goes to the stable fixed point 1/2.
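The r = 4 closed form can be checked in the same way; in the Python sketch below (the starting value is arbitrary), the directly iterated values and the closed-form values agree until round-off error is amplified by the chaotic dynamics:

import math

x0, steps = 0.2, 10
theta = math.asin(math.sqrt(x0)) / math.pi    # initial condition parameter

x = x0
for n in range(1, steps + 1):
    x = 4 * x * (1 - x)                                   # direct iteration, r = 4
    closed = math.sin(2 ** n * theta * math.pi) ** 2      # closed-form solution
    print(n, x, closed)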
Bivariate (two variable) quadratic function
A bivariate quadratic function is a second-degree polynomial of the form
f(x, y) = Ax² + By² + Cx + Dy + Exy + F,
where A, B, C, D, and E are fixed coefficients and F is the constant term.
Such a function describes a quadratic surface. Setting equal to zero describes the intersection of the surface with the plane which is a locus of points equivalent to a conic section.
Minimum/maximum
If 4AB - E² < 0, the function has no maximum or minimum; its graph forms a hyperbolic paraboloid.
If 4AB - E² > 0, the function has a minimum if both A > 0 and B > 0, and a maximum if both A < 0 and B < 0; its graph forms an elliptic paraboloid. In this case the minimum or maximum occurs at (x_m, y_m) where:
x_m = (DE - 2BC) / (4AB - E²),
y_m = (CE - 2AD) / (4AB - E²).
If 4AB - E² = 0 and DE - 2BC ≠ 0, the function has no maximum or minimum; its graph forms a parabolic cylinder.
If 4AB - E² = 0 and DE - 2BC = 0, the function achieves the maximum/minimum at a line—a minimum if A > 0 and a maximum if A < 0; its graph forms a parabolic cylinder.
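As a small illustration of this classification (a sketch only, assuming the coefficient convention f(x, y) = Ax² + By² + Cx + Dy + Exy + F used above; the helper name is arbitrary):

def classify_extremum(A, B, C, D, E):
    # Classify the critical behaviour of f(x, y) = A*x**2 + B*y**2 + C*x + D*y + E*x*y + F
    disc = 4 * A * B - E ** 2
    if disc < 0:
        return "no maximum or minimum (hyperbolic paraboloid)"
    if disc > 0:
        xm = (D * E - 2 * B * C) / disc
        ym = (C * E - 2 * A * D) / disc
        kind = "minimum" if A > 0 else "maximum"
        return f"{kind} at ({xm}, {ym}) (elliptic paraboloid)"
    if D * E - 2 * B * C == 0:
        return "maximum/minimum attained along a line (parabolic cylinder)"
    return "no maximum or minimum (parabolic cylinder)"

print(classify_extremum(1, 1, 0, 0, 0))   # minimum at (0.0, 0.0)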
| Mathematics | Specific functions | null |
7480138 | https://en.wikipedia.org/wiki/Ixora | Ixora | Ixora is a genus of flowering plants in the family Rubiaceae. It is the only genus in the tribe Ixoreae. It consists of tropical evergreen trees and shrubs and holds around 544 species. Though native to the tropical and subtropical areas throughout the world, its centre of diversity is in Tropical Asia. Ixora also grows commonly in subtropical climates in the United States, such as Florida where it is commonly known as West Indian jasmine.
Name
Ixora is Latinized from Sanskrit Ishwara, one of the names of the Hindu god Shiva. The genus was formally created by Linnaeus in 1753, after Hendrik van Rheede noted that the flowers of what he called schetti (and named by Rheede as Ixora coccinea) were offered in temples in the Malabar.
Other common names include viruchi, kiskaara, kepale, rangan, kheme, ponna, chann tanea, techi, pan, siantan, jarum-jarum/jejarum, cây trang thái, jungle flame, jungle geranium, and cruz de Malta, among others.
Botany
The plants possess leathery leaves, ranging from 3 to 6 inches in length, and produce large clusters of tiny flowers in the summer. Members of Ixora prefer acidic soil, and are suitable choices for bonsai. It is also a popular choice for hedges in parts of South East Asia. In tropical climates, they flower year round and are commonly used in Hindu worship, as well as in ayurveda and Indian folk medicine.
In Brazil, fungal species Pseudocercospora ixoricola was found to be causing leaf spots on Ixora coccinea. Then in 2018, in Taiwan, during a fungal study, it was found that the species Pseudopestalotiopsis ixorae and Pseudopestalotiopsis taiwanensis caused leaf spots on species of Ixora, which is a popular garden plant in Taiwan.
Selected species
Ixora albersii
Ixora backeri
Ixora beckleri
Ixora brevipedunculata
Ixora calycina
Ixora chinensis
Ixora coccinea
Ixora elongata
Ixora euosmia
Ixora finlaysoniana
Ixora foliosa
Ixora johnsonii
Ixora jucunda
Ixora killipii
Ixora lawsonii
Ixora malabarica
Ixora margaretae
Ixora marquesensis
Ixora mooreensis
Ixora nigerica
Ixora nigricans
Ixora ooumuensis
Ixora panurensis
Ixora pavetta
Ixora peruviana
Ixora pudica
Ixora raiateensis
Ixora raivavaensis
Ixora rufa
Ixora saulierei
Ixora setchellii
Ixora st-johnii
Ixora stokesii
Ixora temehaniensis
Ixora ulei
Ixora umbellata
Ixora yavitensis
Gallery
| Biology and health sciences | Gentianales | Plants |
7481381 | https://en.wikipedia.org/wiki/Dosage%20form | Dosage form | Dosage forms (also called unit doses) are pharmaceutical drug products presented in a specific form for use. They contain a mixture of active ingredients and inactive components (excipients), configured in a particular way (such as a capsule shell) and apportioned into a specific dose. For example, two products may both be amoxicillin, but one may come in 500 mg capsules, while another may be in 250 mg chewable tablets.
The term unit dose can also refer to non-reusable packaging, particularly when each drug product is individually packaged. However, the FDA differentiates this by referring to it as unit-dose "packaging" or "dispensing". Depending on the context, multi(ple) unit dose may refer to multiple distinct drug products packaged together or a single product containing multiple drugs and/or doses.
Formulations
The term dosage form may also sometimes refer only to the pharmaceutical formulation of a drug product's constituent substances, without considering its final configuration as a consumable product (e.g., capsule, patch, etc.). Due to the somewhat ambiguous nature and overlap of these terms within the pharmaceutical industry, caution is advisable when discussing them with others who may interpret the terminology differently.
Types
Dosage forms vary depending on the method/route of administration, which can include many types of liquid, solid, and semisolid forms. Common dosage forms include tablets, capsules, drinks, and syrups, among others.
A combination drug (or fixed-dose combination; FDC) is a product that contains more than one active ingredient (e.g., one tablet, one capsule, or one syrup with multiple drugs).
In naturopathy, dosages can take the form of decoctions and herbal teas, in addition to the more conventional methods mentioned above.
Route of administration
The route of administration (ROA) for drug delivery depends on the dosage form of the substance. Different dosage forms may be available for a particular drug, especially if certain conditions restrict the ROA. For example, if a patient is unconscious or experiencing persistent nausea and vomiting, oral administration may not be feasible, necessitating the use of alternative routes, such as inhalational, buccal, sublingual, nasal, suppository, or parenteral.
A specific dosage form may also be required due to issues such as chemical stability or pharmacokinetic properties. For instance, insulin cannot be given orally because it is extensively metabolized in the gastrointestinal tract (GIT) before it reaches the bloodstream, preventing it from reaching therapeutic target destinations. Similarly, the oral and intravenous doses of a drug like paracetamol differ for the same reason.
Oral
Pills, i.e. tablets or capsules
Liquids such as syrups, solutions, elixirs, emulsions, and tinctures
Liquids such as decoctions and herbal teas
Orally disintegrating tablets
Lozenges or candy (electuaries)
Thin films (e.g., Listerine Pocketpaks, nitroglycerin) to be placed on top of or underneath the tongue as well as against the cheek
Powders or effervescent powder or tablets, often instructed to be mixed into a food item
Plants or seeds prepared in various ways such as a cannabis edible
Pastes such as high fluoride toothpastes
Gases such as oxygen (can also be delivered through the nose)
Ophthalmic
Eye drops
Lotions
Ointments
Emulsions
Inhalation
Aerosolized medication
Dry-powder inhalers or metered-dose inhalers
Nebulizer-administered medication
Smoking
Vaporizer-administered medication
Unintended ingredients
Talc is an excipient often used in pharmaceutical tablets, which may end up being crushed to a powder against medical advice or for recreational use. Also, illicit drugs that occur as white powder in their pure form are often cut with cheap talc. Natural talc is cheap but contains asbestos, while asbestos-free talc is more expensive. Talc that contains asbestos is generally accepted as being able to cause lung cancer if it is inhaled. The evidence about asbestos-free talc is less clear, according to the American Cancer Society.
Injection
Parenteral
Intradermally-administered (ID)
Subcutaneously-administered (SC)
Intramuscularly-administered (IM)
Intraosseous administration (IO)
Intraperitoneally-administered (IP)
Intravenously-administered (IV)
Intracavernously-administered (ICI)
These are usually solutions and suspensions.
Unintended ingredients
Safe
Eye drops (normal saline in disposable packages) are distributed to syringe users by needle exchange programs.
Unsafe
The injection of talc from crushed pills has been associated with pulmonary talcosis in intravenous drug users.
Topical
Creams, liniments, balms (such as lip balm or antiperspirants and deodorants), lotions, or ointments, etc.
Gels and hydrogels
Ear drops
Transdermal and dermal patches to be applied to the skin
Powders
Unintended use
It is not safe to calculate divided doses by cutting and weighing medical skin patches, because there is no guarantee that the substance is evenly distributed on the patch surface. For example, fentanyl transdermal patches are designed to release the substance slowly over three days. It is well known that cut fentanyl transdermal patches consumed orally have caused overdoses and deaths.
Single blotting papers for illicit drugs, dosed from solvents in syringes, may also show uneven distribution of the drug across the surface.
Other
Intravaginal administration
Vaginal rings
Capsules and tablets
Suppositories
Rectal administration (enteral)
Suppositories
Suspensions and solutions in the form of enemas
Gels
Urethral
Nasal sprays
| Biology and health sciences | General concepts_2 | Health |
7481866 | https://en.wikipedia.org/wiki/Detachment%20fault | Detachment fault | A detachment fault is a gently dipping normal fault associated with large-scale extensional tectonics. Detachment faults often have very large displacements (tens of km) and juxtapose unmetamorphosed hanging walls against medium to high-grade metamorphic footwalls that are called metamorphic core complexes. They are thought to have formed as either initially low-angle structures or by the rotation of initially high-angle normal faults modified also by the isostatic effects of tectonic denudation. They may also be called denudation faults.
Examples of detachment faulting include:
The Snake Range detachment system of the Basin and Range Province of western North America which was active during the Miocene
The Nordfjord-Sogn detachment of western Norway active during the Devonian Period
The Whipple detachment in southeastern California
Detachment faults have been found on the sea floor close to divergent plate boundaries characterised by a limited supply of upwelling magma, such as the Southwest Indian Ridge. These detachment faults are associated with the development of oceanic core complex structures.
Continental detachment faults
Continental detachment faults are also called décollements, denudational faults, low-angle normal faults (LANF) and dislocation surfaces. The low-angle nature of these normal faults has sparked debate among scientists, centred on whether these faults started out at low angles or rotated from initially steep angles. Faults of the latter type are present, for example, in the Yerington district of Nevada. There, evidence for rotation of the fault plane comes from tilted volcanic dikes. However, other authors disagree that these should be called detachment faults. One group of scientists defines detachment faults as follows:
"The essential elements of extensional detachment faults, as the term is used here, are low angle of initial dip, subregional to regional scale of development, and large translational displacements, certainly up to tens of kilometres in some instances."
Detachment faults of this kind (initially low-angle) can be found in the Whipple Mountains of California and the Mormon Mountains of Nevada. They initiate at depth in zones of intracrustal flow, where mylonitic gneisses form. Shear along the fault is ductile at mid to lower crustal depths, but brittle at shallower depths. The footwall can transport mylonitic gneisses from lower crustal levels to upper crustal levels, where they become chlorititic and brecciated. The hanging wall, composed of extended, thinned and brittle crustal material, can be cut by numerous normal faults. These either merge into the detachment fault at depth or simply terminate at the detachment fault surface without shallowing. The unloading of the footwall can lead to isostatic uplift and doming of the more ductile material beneath.
Low angle normal faulting is not explained by Andersonian fault mechanics. However, slip on low angle normal faults could be facilitated by fluid pressure, as well as by weakness of minerals in wall rocks. Detachment faults may also initiate on reactivated thrust fault surfaces.
Oceanic detachment faults
Oceanic detachment faults occur at spreading ridges where magmatic activity is not enough to account for the entire plate spreading rate. They are characterized by long domes parallel to the spreading direction (oceanic core complexes of the footwall). Slip on these faults can range from tens to hundreds of km. They cannot be structurally restored, as slip on the fault exceeds the thickness of oceanic crust (~30 km compared to ~6 km, for example).
While occurring at relatively amagmatic spreading centres, the footwalls of these detachment faults are much more influenced by magmatism than in continental settings. In fact, they are often created by 'continuous casting': new footwall is continually being generated by mantle or melt from a magma chamber as slip occurs on the fault. The lithology is dominated by gabbro and peridotite, resulting in a mineralogy of olivine, serpentine, talc and plagioclase. This is in contrast to continental settings, where the mineralogy is dominantly quartz and feldspar. The footwall is also much more extensively hydrothermally altered than in continental settings.
In contrast to many detachment faults in continental settings, oceanic detachment faults are usually rolling hinge normal faults, initiating at higher angles and rotating to low angles.
| Physical sciences | Structural geology | Earth science |
64976 | https://en.wikipedia.org/wiki/Attention%20deficit%20hyperactivity%20disorder | Attention deficit hyperactivity disorder | Attention deficit hyperactivity disorder (ADHD) is a neurodevelopmental disorder characterized by executive dysfunction occasioning symptoms of inattention, hyperactivity, impulsivity and emotional dysregulation that are excessive and pervasive, impairing in multiple contexts, and developmentally-inappropriate.
ADHD symptoms arise from executive dysfunction, and emotional dysregulation is often considered a core symptom. Impairments resulting from deficits in self-regulation such as time management, inhibition, task initiation, and sustained attention can include poor professional performance, relationship difficulties, and numerous health risks, collectively predisposing to a diminished quality of life and a direct average reduction in life expectancy of 13 years. The disorder costs society hundreds of billions of US dollars each year, worldwide. It is associated with other neurodevelopmental and mental disorders as well as non-psychiatric disorders, which can cause additional impairment.
While people with ADHD often struggle to initiate work and persist on tasks with delayed consequences, this may not be evident in contexts they find intrinsically interesting and immediately rewarding, potentiating hyperfocus (a more colloquial term) or perseverative responding. This mental state is often hard to disengage from and is related to risks such as for internet addiction and types of offending behaviour.
ADHD represents the extreme lower end of the continuous dimensional trait (bell curve) of executive functioning and self-regulation, which is supported by twin, brain imaging and molecular genetic studies.
The precise causes of ADHD are unknown in most individual cases. Meta-analyses have shown that the disorder is primarily genetic with a heritability rate of 70-80%, where risk factors are highly accumulative. The environmental risks are not related to social or familial factors; they exert their effects very early in life, in the prenatal or early postnatal period. However, in rare cases, ADHD can be caused by a single event including traumatic brain injury, exposure to biohazards during pregnancy, or a major genetic mutation. There is no biologically distinct adult-onset ADHD except for when ADHD occurs after traumatic brain injury.
Signs and symptoms
Inattention, hyperactivity (restlessness in adults), disruptive behaviour, and impulsivity are common in ADHD. Academic difficulties are frequent, as are problems with relationships. The signs and symptoms can be difficult to define, as it is hard to draw a line at where normal levels of inattention, hyperactivity, and impulsivity end and significant levels requiring interventions begin.
According to the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) and its text revision (DSM-5-TR), symptoms must be present for six months or more to a degree that is much greater than in others of the same age. This requires at least six symptoms of either inattention or hyperactivity/impulsivity for those under 17 and at least five symptoms for those 17 years or older. The symptoms must be present in at least two settings (e.g., social, school, work, or home), and must directly interfere with or reduce quality of functioning. Additionally, several symptoms must have been present before age 12. The DSM-5's required age of onset of symptoms is 12 years. However, research indicates the age of onset should not be interpreted as a prerequisite for diagnosis given contextual exceptions.
Presentations
ADHD is divided into three primary presentations:
predominantly inattentive (ADHD-PI or ADHD-I)
predominantly hyperactive-impulsive (ADHD-PH or ADHD-HI)
combined presentation (ADHD-C).
The table "Symptoms" lists the symptoms for ADHD-I and ADHD-HI from two major classification systems. Symptoms which can be better explained by another psychiatric or medical condition which an individual has are not considered to be a symptom of ADHD for that person. In DSM-5, subtypes were discarded and reclassified as presentations of the disorder that change over time.
Girls and women with ADHD tend to display fewer hyperactivity and impulsivity symptoms but more symptoms of inattention and distractibility.
Symptoms are expressed differently and more subtly as the individual ages. Hyperactivity tends to become less overt with age and turns into inner restlessness, difficulty relaxing or remaining still, talkativeness or constant mental activity in teens and adults with ADHD. Impulsivity in adulthood may appear as thoughtless behaviour, impatience, irresponsible spending and sensation-seeking behaviours, while inattention may appear as becoming easily bored, difficulty with organization, remaining on task and making decisions, and sensitivity to stress.
Although not listed as an official symptom, emotional dysregulation or mood lability is generally understood to be a common symptom of ADHD. People with ADHD of all ages are more likely to have problems with social skills, such as social interaction and forming and maintaining friendships. This is true for all presentations. About half of children and adolescents with ADHD experience social rejection by their peers compared to 10–15% of non-ADHD children and adolescents. People with attention deficits are prone to having difficulty processing verbal and nonverbal language which can negatively affect social interaction. They may also drift off during conversations, miss social cues, and have trouble learning social skills.
Difficulties managing anger are more common in children with ADHD, as are delays in speech, language and motor development. Poorer handwriting is more common in children with ADHD. Poor handwriting can be a symptom of ADHD in itself due to decreased attentiveness. When this is a pervasive problem, it may also be attributable to dyslexia or dysgraphia. There is significant overlap in the symptomatologies of ADHD, dyslexia, and dysgraphia, and 3 in 10 people diagnosed with dyslexia experience co-occurring ADHD. Although it causes significant difficulty, many children with ADHD have an attention span equal to or greater than that of other children for tasks and subjects they find interesting.
IQ test performance
Certain studies have found that people with ADHD tend to have lower scores on intelligence quotient (IQ) tests. The significance of this is controversial due to the differences between people with ADHD and the difficulty determining the influence of symptoms, such as distractibility, on lower scores rather than intellectual capacity. In studies of ADHD, higher IQs may be over-represented because many studies exclude individuals who have lower IQs despite those with ADHD scoring on average nine points lower on standardised intelligence measures. However, other studies contradict this, saying that in individuals with high intelligence, there is an increased risk of a missed ADHD diagnosis, possibly because of compensatory strategies in said individuals.
Studies of adults suggest that negative differences in intelligence are not meaningful and may be explained by associated health problems.
Comorbidities
Psychiatric comorbidities
In children, ADHD occurs with other disorders about two-thirds of the time.
Other neurodevelopmental conditions are common comorbidities. Autism spectrum disorder (ASD), co-occurring at a rate of 21% in those with ADHD, affects social skills, ability to communicate, behaviour, and interests. Learning disabilities have been found to occur in about 20–30% of children with ADHD. Learning disabilities can include developmental speech and language disorders, and academic skills disorders. ADHD, however, is not considered a learning disability, but it very frequently causes academic difficulties. Intellectual disabilities and Tourette's syndrome are also common.
ADHD is often comorbid with disruptive, impulse control, and conduct disorders. Oppositional defiant disorder (ODD) occurs in about 25% of children with an inattentive presentation and 50% of those with a combined presentation. It is characterised by angry or irritable mood, argumentative or defiant behaviour and vindictiveness which are age-inappropriate. Conduct disorder (CD) is another common comorbid disorder of adolescents with ADHD, and occurs in 25% of individuals with combined presentation. It is characterised by aggression, destruction of property, deceitfulness, theft and violations of rules. Adolescents with ADHD who also have CD are more likely to develop antisocial personality disorder in adulthood. Brain imaging supports that CD and ADHD are separate conditions: conduct disorder was shown to reduce the size of one's temporal lobe and limbic system, and increase the size of one's orbitofrontal cortex, whereas ADHD was shown to reduce connections in the cerebellum and prefrontal cortex more broadly. Conduct disorder involves more impairment in motivation control than ADHD. Intermittent explosive disorder is characterised by sudden and disproportionate outbursts of anger and co-occurs in individuals with ADHD more frequently than in the general population.
Anxiety and mood disorders are frequent comorbidities. Anxiety disorders have been found to occur more commonly in the ADHD population, as have mood disorders (especially bipolar disorder and major depressive disorder). Boys diagnosed with the combined ADHD subtype are more likely to have a mood disorder. Adults and children with ADHD sometimes also have bipolar disorder, which requires careful assessment to accurately diagnose and treat both conditions.
Sleep disorders and ADHD commonly co-exist. They can also occur as a side effect of medications used to treat ADHD. In children with ADHD, insomnia is the most common sleep disorder with behavioural therapy being the preferred treatment. Problems with sleep initiation are common among individuals with ADHD but often they will be deep sleepers and have significant difficulty getting up in the morning. Melatonin is sometimes used in children who have sleep onset insomnia. Restless legs syndrome has been found to be more common in those with ADHD and is often due to iron deficiency anemia. However, restless legs can simply be a part of ADHD and requires careful assessment to differentiate between the two disorders. Delayed sleep phase disorder is also a common comorbidity.
Individuals with ADHD are at increased risk of substance use disorders. This is most commonly seen with alcohol or cannabis. The reason for this may be an altered reward pathway in the brains of ADHD individuals, self-treatment and increased psychosocial risk factors. This makes the evaluation and treatment of ADHD more difficult, with serious substance misuse problems usually treated first due to their greater risks. Other psychiatric conditions include reactive attachment disorder, characterised by a severe inability to appropriately relate socially, and cognitive disengagement syndrome, a distinct attention disorder occurring in 30–50% of ADHD cases as a comorbidity, regardless of the presentation; a subset of cases diagnosed with ADHD-PIP have been found to have CDS instead. Individuals with ADHD are three times more likely to be diagnosed with an eating disorder compared to those without ADHD; conversely, individuals with eating disorders are two times more likely to have ADHD than those without eating disorders.
Trauma
ADHD, trauma, and adverse childhood experiences are also comorbid, which could in part be potentially explained by the similarity in presentation between different diagnoses. The symptoms of ADHD and PTSD can have significant behavioural overlap—in particular, motor restlessness, difficulty concentrating, distractibility, irritability/anger, emotional constriction or dysregulation, poor impulse control, and forgetfulness are common in both. This could result in trauma-related disorders or ADHD being mis-identified as the other. Additionally, traumatic events in childhood are a risk factor for ADHD; they can lead to structural brain changes and the development of ADHD behaviours. Finally, the behavioural consequences of ADHD symptoms cause a higher chance of the individual experiencing trauma (and therefore ADHD leads to a concrete diagnosis of a trauma-related disorder).
Non-psychiatric
Some non-psychiatric conditions are also comorbidities of ADHD. This includes epilepsy, a neurological condition characterised by recurrent seizures. There are well established associations between ADHD and obesity, asthma and sleep disorders, and an association with celiac disease. Children with ADHD have a higher risk for migraine headaches, but have no increased risk of tension-type headaches. Children with ADHD may also experience headaches as a result of medication.
A 2021 review reported that several neurometabolic disorders caused by inborn errors of metabolism converge on common neurochemical mechanisms that interfere with biological mechanisms also considered central in ADHD pathophysiology and treatment. This highlights the importance of close collaboration between health services to avoid clinical overshadowing.
In June 2021, Neuroscience & Biobehavioral Reviews published a systematic review of 82 studies that all confirmed or implied elevated accident-proneness in ADHD patients and whose data suggested that the type of accidents or injuries and overall risk changes in ADHD patients over the lifespan. In January 2014, Accident Analysis & Prevention published a meta-analysis of 16 studies examining the relative risk of traffic collisions for drivers with ADHD, finding an overall relative risk estimate of 1.36 without controlling for exposure, a relative risk estimate of 1.29 when controlling for publication bias, a relative risk estimate of 1.23 when controlling for exposure, and a relative risk estimate of 1.86 for ADHD drivers with oppositional defiant disorder or conduct disorder comorbidities.
Problematic digital media use
Suicide risk
Systematic reviews in 2017 and 2020 found strong evidence that ADHD is associated with increased suicide risk across all age groups, as well as growing evidence that an ADHD diagnosis in childhood or adolescence represents a significant future suicidal risk factor. Potential causes include ADHD's association with functional impairment, negative social, educational and occupational outcomes, and financial distress. A 2019 meta-analysis indicated a significant association between ADHD and suicidal spectrum behaviours (suicidal attempts, ideations, plans, and completed suicides); across the studies examined, the prevalence of suicide attempts in individuals with ADHD was 18.9%, compared to 9.3% in individuals without ADHD, and the findings were substantially replicated among studies which adjusted for other variables. However, the relationship between ADHD and suicidal spectrum behaviours remains unclear due to mixed findings across individual studies and the complicating impact of comorbid psychiatric disorders. There is no clear data on whether there is a direct relationship between ADHD and suicidality, or whether ADHD increases suicide risk through comorbidities.
Causes
ADHD arises from maldevelopment of the brain, especially of the prefrontal executive networks involved in executive functioning and self-regulation; this maldevelopment can result either from genetic factors (different gene variants and mutations affecting the building and regulation of such networks) or from acquired disruptions to the development of these networks and regions. Their reduced size, functional connectivity, and activation contribute to the pathophysiology of ADHD, as do imbalances in the noradrenergic and dopaminergic systems that mediate these brain regions.
Genetic factors play an important role; ADHD has a heritability rate of 70-80%. The remaining 20-30% of variance is mediated by de-novo mutations and non-shared environmental factors that provide for or produce brain injuries; there is no significant contribution of the rearing family and social environment. Very rarely, ADHD can also be the result of abnormalities in the chromosomes.
Genetics
In November 1999, Biological Psychiatry published a literature review by psychiatrists Joseph Biederman and Thomas Spencer that found the average heritability estimate of ADHD from twin studies to be 0.8; a subsequent literature review of family, twin, and adoption studies, published in Molecular Psychiatry in April 2019 by psychologists Stephen Faraone and Henrik Larsson, found an average heritability estimate of 0.74. Additionally, evolutionary psychiatrist Randolph M. Nesse has argued that the 5:1 male-to-female sex ratio in the epidemiology of ADHD suggests that ADHD may be the end of a continuum where males are overrepresented at the tails, citing clinical psychologist Simon Baron-Cohen's suggestion for the sex ratio in the epidemiology of autism as an analogue.
Natural selection has acted against the genetic variants for ADHD over the course of at least 45,000 years, indicating that it was not an adaptive trait in ancient times. The disorder may remain at a stable rate through a balance between new genetic mutations and their removal by natural selection across generations; over thousands of years, these genetic variants become more stable, decreasing the disorder's prevalence. Throughout human evolution, the executive functions involved in ADHD likely provided the capacity to bind contingencies across time, directing behaviour toward future rather than immediate events so as to maximise future social consequences for humans.
ADHD has a high heritability of 74%, meaning that 74% of the variance in ADHD across the population is attributable to genetic factors. There are multiple gene variants which each slightly increase the likelihood of a person having ADHD; it is polygenic and thus arises through the accumulation of many genetic risks, each having a very small effect. The siblings of children with ADHD are three to four times more likely to develop the disorder than siblings of children without the disorder.
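In standard quantitative-genetics terms (a textbook formulation rather than a calculation specific to the ADHD studies cited here), heritability is the share of phenotypic variance in a population attributable to genetic variance:
\[ h^2 = \frac{V_G}{V_P} = \frac{V_G}{V_G + V_E} \approx 0.74, \]
where \(V_G\) is genetic variance and \(V_E\) is environmental variance. It describes differences between individuals in a population, not the proportion of any single person's condition that is caused by genes.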
The association with maternal smoking observed in large population studies disappears after adjusting for family history of ADHD, indicating that the association between maternal smoking during pregnancy and ADHD is due to familial or genetic factors that jointly increase the risk of both smoking and ADHD.
ADHD presents with reduced size, functional connectivity and activation, as well as low noradrenergic and dopaminergic functioning, in brain regions and networks crucial for executive functioning and self-regulation. Typically, a number of genes are involved, many of which directly affect brain functioning and neurotransmission. Those involved with dopamine include DAT, DRD4, DRD5, TAAR1, MAOA, COMT, and DBH. Other genes associated with ADHD include SERT, HTR1B, SNAP25, GRIN2A, ADRA2A, TPH2, and BDNF. A common variant of a gene called latrophilin 3 is estimated to be responsible for about 9% of cases, and when this variant is present, people are particularly responsive to stimulant medication. The 7-repeat variant of the dopamine receptor D4 gene (DRD4–7R) causes increased inhibitory effects induced by dopamine and is associated with ADHD. The DRD4 receptor is a G protein-coupled receptor that inhibits adenylyl cyclase. The DRD4–7R mutation results in a wide range of behavioural phenotypes, including ADHD symptoms reflecting split attention. The DRD4 gene is linked to both novelty seeking and ADHD. The genes GFOD1 and CDH13 show strong genetic associations with ADHD. CDH13's association with ASD, schizophrenia, bipolar disorder, and depression makes it an interesting candidate causative gene. Another candidate causative gene that has been identified is ADGRL3. In zebrafish, knockout of this gene causes a loss of dopaminergic function in the ventral diencephalon, and the fish display a hyperactive/impulsive phenotype.
For genetic variation to be used as a tool for diagnosis, more validating studies need to be performed. However, smaller studies have shown that genetic polymorphisms in genes related to catecholaminergic neurotransmission or the SNARE complex of the synapse can reliably predict a person's response to stimulant medication. Rare genetic variants show more relevant clinical significance, as their penetrance (the chance of developing the disorder) tends to be much higher. However, their usefulness as tools for diagnosis is limited, as no single gene predicts ADHD. ASD shows genetic overlap with ADHD at both common and rare levels of genetic variation.
Environment
In addition to genetics, some environmental factors might play a role in causing ADHD. Alcohol intake during pregnancy can cause fetal alcohol spectrum disorders which can include ADHD or symptoms like it. Children exposed to certain toxic substances, such as lead or polychlorinated biphenyls, may develop problems which resemble ADHD. Exposure to the organophosphate insecticides chlorpyrifos and dialkyl phosphate is associated with an increased risk; however, the evidence is not conclusive. Exposure to tobacco smoke during pregnancy can cause problems with central nervous system development and can increase the risk of ADHD. Nicotine exposure during pregnancy may be an environmental risk.
Extreme premature birth, very low birth weight, and extreme neglect, abuse, or social deprivation also increase the risk as do certain infections during pregnancy, at birth, and in early childhood. These infections include, among others, various viruses (measles, varicella zoster encephalitis, rubella, enterovirus 71). At least 30% of children with a traumatic brain injury later develop ADHD and about 5% of cases are due to brain damage.
Some studies suggest that in a small number of children, artificial food dyes or preservatives may be associated with an increased prevalence of ADHD or ADHD-like symptoms, but the evidence is weak and may apply to only children with food sensitivities. The European Union has put in place regulatory measures based on these concerns. In a minority of children, intolerances or allergies to certain foods may worsen ADHD symptoms.
Individuals with hypokalemic sensory overstimulation are sometimes diagnosed as having ADHD, raising the possibility that a subtype of ADHD has a cause that can be understood mechanistically and treated in a novel way. The sensory overload is treatable with oral potassium gluconate.
Research does not support popular beliefs that ADHD is caused by eating too much refined sugar, watching too much television, bad parenting, poverty or family chaos; however, these factors might worsen ADHD symptoms in certain people.
In some cases, an inappropriate diagnosis of ADHD may reflect a dysfunctional family or a poor educational system, rather than any true presence of ADHD in the individual. In other cases, it may be explained by increasing academic expectations, with a diagnosis being a method for parents in some countries to obtain extra financial and educational support for their child. Behaviours typical of ADHD occur more commonly in children who have experienced violence and emotional abuse.
Pathophysiology
Current models of ADHD suggest that it is associated with functional impairments in some of the brain's neurotransmitter systems, particularly those involving dopamine and norepinephrine. The dopamine and norepinephrine pathways that originate in the ventral tegmental area and locus coeruleus project to diverse regions of the brain and govern a variety of cognitive processes. The dopamine pathways and norepinephrine pathways which project to the prefrontal cortex and striatum are directly responsible for modulating executive function (cognitive control of behaviour), motivation, reward perception, and motor function; these pathways are known to play a central role in the pathophysiology of ADHD. Larger models of ADHD with additional pathways have been proposed.
Brain structure
In children with ADHD, there is a general reduction of volume in certain brain structures, with a proportionally greater decrease in the volume in the left-sided prefrontal cortex. The posterior parietal cortex also shows thinning in individuals with ADHD compared to controls. Other brain structures in the prefrontal-striatal-cerebellar and prefrontal-striatal-thalamic circuits have also been found to differ between people with and without ADHD.
The subcortical volumes of the accumbens, amygdala, caudate, hippocampus, and putamen appear smaller in individuals with ADHD compared with controls. Structural MRI studies have also revealed differences in white matter, with marked differences in inter-hemispheric asymmetry between ADHD and typically developing youths.
Functional MRI (fMRI) studies have revealed a number of differences between ADHD and control brains. Mirroring the structural findings, fMRI studies have shown evidence of higher connectivity between subcortical and cortical regions, such as between the caudate and prefrontal cortex. The degree of hyperconnectivity between these regions correlated with the severity of inattention or hyperactivity. Hemispheric lateralisation processes have also been postulated as being implicated in ADHD, but empirical results on the topic are conflicting.
Neurotransmitter pathways
Previously, it had been suggested that the elevated number of dopamine transporters in people with ADHD was part of the pathophysiology, but it appears the elevated numbers may be due to adaptation following exposure to stimulant medication. Current models involve the mesocorticolimbic dopamine pathway and the locus coeruleus-noradrenergic system. ADHD psychostimulants possess treatment efficacy because they increase neurotransmitter activity in these systems. There may additionally be abnormalities in serotonergic, glutamatergic, or cholinergic pathways.
Executive function and motivation
ADHD arises from a core deficit in executive functions (e.g., attentional control, inhibitory control, and working memory), a set of cognitive processes required to successfully select and monitor behaviours that facilitate the attainment of one's chosen goals. The executive function impairments that occur in individuals with ADHD result in problems with staying organised, keeping time, controlling procrastination, maintaining concentration, paying attention, ignoring distractions, regulating emotions, and remembering details. People with ADHD appear to have unimpaired long-term memory, and deficits in long-term recall appear to be attributable to impairments in working memory. Because of the rate of brain maturation and the increasing demands for executive control as a person gets older, ADHD impairments may not fully manifest until adolescence or even early adulthood. Conversely, brain maturation trajectories, which may show diverging longitudinal trends in ADHD, may support a later improvement in executive functions after reaching adulthood.
ADHD has also been associated with motivational deficits in children. Children with ADHD often find it difficult to focus on long-term over short-term rewards, and exhibit impulsive behaviour for short-term rewards.
Paradoxical reaction to neuroactive substances
Another sign of structurally altered signal processing in the central nervous system in this group of people is the conspicuously common paradoxical reaction. These are unexpected reactions in the opposite direction to the normal effect, or otherwise significantly different reactions, to neuroactive substances such as local anaesthetics at the dentist, sedatives, caffeine, antihistamines, weak neuroleptics, and central and peripheral painkillers. Since the causes of paradoxical reactions are at least partly genetic, it may be useful in critical situations, for example before operations, to ask whether such abnormalities also exist in family members.
Diagnosis
ADHD is diagnosed by an assessment of a person's behavioural and mental development, including ruling out the effects of drugs, medications, and other medical or psychiatric problems as explanations for the symptoms. ADHD diagnosis often takes into account feedback from parents and teachers, with most diagnoses begun after a teacher raises concerns. While many tools exist to aid in the diagnosis of ADHD, their validity varies across populations; a reliable and valid diagnosis requires confirmation by a clinician, supplemented by standardized rating scales and input from multiple informants across various settings.
The diagnosis of ADHD has been criticised as being subjective because it is not based on a biological test. The International Consensus Statement on ADHD concluded that this criticism is unfounded, on the basis that ADHD meets standard criteria for validity of a mental disorder established by Robins and Guze. They attest that the disorder is considered valid because: 1) well-trained professionals in a variety of settings and cultures agree on its presence or absence using well-defined criteria and 2) the diagnosis is useful for predicting a) additional problems the patient may have (e.g., difficulties learning in school); b) future patient outcomes (e.g., risk for future drug abuse); c) response to treatment (e.g., medications and psychological treatments); and d) features that indicate a consistent set of causes for the disorder (e.g., findings from genetics or brain imaging), and that professional associations have endorsed and published guidelines for diagnosing ADHD.
The most commonly used rating scales for diagnosing ADHD come from the Achenbach System of Empirically Based Assessment (ASEBA) and include the Child Behavior Checklist (CBCL), used by parents to rate their child's behaviour; the Youth Self Report Form (YSR), used by children to rate their own behaviour; and the Teacher Report Form (TRF), used by teachers to rate their pupils' behaviour. Additional rating scales that have been used alone or in combination with other measures to diagnose ADHD include the Behavior Assessment System for Children (BASC), Behavior Rating Inventory of Executive Function - Second Edition (BRIEF2), Revised Conners Rating Scale (CRS-R), Conduct-Hyperactive-Attention Problem-Oppositional Symptom scale (CHAOS), Developmental Behavior Checklist Hyperactivity Index (DBC-HI), Parent Disruptive Behavior Disorder Ratings Scale (DBDRS), Diagnostic Infant and Preschool Assessment (DIPA-L), Pediatric Symptom Checklist (PSC), Social Communication Questionnaire (SCQ), Social Responsiveness Scale (SRS), Strengths and Weaknesses of ADHD Symptoms and Normal Behavior Rating Scale (SWAN), and the Vanderbilt ADHD Diagnostic Rating Scale.
The ASEBA, BASC, CHAOS, CRS, and Vanderbilt diagnostic rating scales allow both parents and teachers to act as raters in the diagnosis of childhood and adolescent ADHD. Adolescents may also self-report their symptoms using self-report scales from the ASEBA, the SWAN, and the Dominic Interactive for Adolescents-Revised (DIA-R). Self-rating scales, such as the ADHD Rating Scale and the Vanderbilt ADHD Diagnostic Rating Scale, are used in the screening and evaluation of ADHD.
Based on a 2024 systematic literature review and meta-analysis commissioned by the Patient-Centered Outcomes Research Institute (PCORI), rating scales based on parent report, teacher report, or self-assessment by the adolescent have high internal consistency as a diagnostic tool, meaning that the items within each scale are highly interrelated. The reliability of the scales between raters (their degree of agreement), however, is poor to moderate, making it important to include information from multiple raters to best inform a diagnosis.
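A standard way to quantify such inter-rater agreement (given here as a general psychometric definition, not necessarily the statistic used in the PCORI review) is Cohen's kappa, which corrects the raw proportion of agreement for agreement expected by chance:
\[ \kappa = \frac{p_o - p_e}{1 - p_e}, \]
where \(p_o\) is the observed proportion of ratings on which two raters agree and \(p_e\) is the proportion expected by chance; \(\kappa = 1\) indicates perfect agreement and \(\kappa = 0\) indicates agreement no better than chance.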
Imaging studies of the brain do not give consistent results between individuals; thus, they are only used for research purposes and not for diagnosis. Electroencephalography is not accurate enough to make an ADHD diagnosis. A 2024 systematic review concluded that the value of biomarkers such as blood or urine samples, electroencephalogram (EEG) markers, and neuroimaging such as MRI in diagnosing ADHD remains unclear; studies showed great variability, did not assess test-retest reliability, and were not independently replicable.
In North America and Australia, DSM-5 criteria are used for diagnosis, while European countries usually use the ICD-10. The DSM-IV criteria are more likely to yield an ADHD diagnosis than are the ICD-10 criteria. ADHD is alternately classified as a neurodevelopmental disorder or as a disruptive behaviour disorder along with ODD, CD, and antisocial personality disorder. A diagnosis does not imply a neurological disorder.
Very few studies have been conducted on the diagnosis of ADHD in children younger than 7 years of age, and those that exist were found in a 2024 systematic review to be of low or insufficient strength of evidence.
Classification
Diagnostic and Statistical Manual
As with many other psychiatric disorders, a formal diagnosis should be made by a qualified professional based on a set number of criteria. In the United States, these criteria are defined by the American Psychiatric Association in the DSM. Based on the DSM-5 criteria published in 2013 and the DSM-5-TR criteria published in 2022, there are three presentations of ADHD:
ADHD, predominantly inattentive presentation, presents with symptoms including being easily distracted, forgetful, daydreaming, disorganization, poor sustained attention, and difficulty completing tasks.
ADHD, predominantly hyperactive-impulsive presentation, presents with excessive fidgeting and restlessness, hyperactivity, and difficulty waiting and remaining seated.
ADHD, combined presentation, is a combination of the first two presentations.
This subdivision is based on presence of at least six (in children) or five (in older teenagers and adults) out of nine long-term (lasting at least six months) symptoms of inattention, hyperactivity–impulsivity, or both. To be considered, several symptoms must have appeared by the age of six to twelve and occur in more than one environment (e.g. at home and at school or work). The symptoms must be inappropriate for a child of that age and there must be clear evidence that they are causing impairment in multiple domains of life.
The DSM-5 and the DSM-5-TR also provide two diagnoses for individuals who have symptoms of ADHD but do not entirely meet the requirements. Other Specified ADHD allows the clinician to describe why the individual does not meet the criteria, whereas Unspecified ADHD is used where the clinician chooses not to describe the reason.
International Classification of Diseases
In the eleventh revision of the International Statistical Classification of Diseases and Related Health Problems (ICD-11) by the World Health Organization, the disorder is classified as Attention deficit hyperactivity disorder (code 6A05). The defined subtypes are predominantly inattentive presentation (6A05.0); predominantly hyperactive-impulsive presentation (6A05.1); and combined presentation (6A05.2). However, the ICD-11 includes two residual categories for individuals who do not entirely match any of the defined subtypes: other specified presentation (6A05.Y), where the clinician includes detail on the individual's presentation; and presentation unspecified (6A05.Z), where the clinician does not provide detail.
In the tenth revision (ICD-10), the symptoms of hyperkinetic disorder were analogous to ADHD in the ICD-11. When a conduct disorder (as defined by ICD-10) was present, the condition was referred to as hyperkinetic conduct disorder. Otherwise, the disorder was classified as disturbance of activity and attention, other hyperkinetic disorders or hyperkinetic disorders, unspecified. The latter was sometimes referred to as hyperkinetic syndrome.
Social construct theory
The social construct theory of ADHD suggests that, because the boundaries between normal and abnormal behaviour are socially constructed (i.e. jointly created and validated by all members of society, and in particular by physicians, parents, teachers, and others), it then follows that subjective valuations and judgements determine which diagnostic criteria are used and thus, the number of people affected. Thomas Szasz, a supporter of this theory, has argued that ADHD was "invented and then given a name".
Adults
Adults with ADHD are diagnosed under the same criteria, including the requirement that their signs must have been present by the age of six to twelve. The individual is the best source of information in diagnosis; however, others may provide useful information about the individual's symptoms currently and in childhood, and a family history of ADHD also adds weight to a diagnosis. While the core symptoms of ADHD are similar in children and adults, they often present differently in adults: for example, the excessive physical activity seen in children may present as feelings of restlessness and constant mental activity in adults.
Worldwide, it is estimated that 2.58% of adults have persistent ADHD (where the individual currently meets the criteria and there is evidence of childhood onset), and 6.76% of adults have symptomatic ADHD (meaning that they currently meet the criteria for ADHD, regardless of childhood onset). In 2020, this corresponded to 139.84 million and 366.33 million affected adults respectively. Most adults remain untreated. Many adults with ADHD without diagnosis and treatment have a disorganised life, and some use non-prescribed drugs or alcohol as a coping mechanism. Other problems may include relationship and job difficulties, and an increased risk of criminal activities. Associated mental health problems include depression, anxiety disorders, and learning disabilities.
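For orientation, the absolute worldwide figures above follow from applying the prevalence rates to the 2020 global adult population, which the numbers imply was roughly 5.4 billion (this population base is an inference from the figures given, not a value stated separately in the source):
\[ 0.0258 \times 5.42\ \text{billion} \approx 139.8\ \text{million}, \qquad 0.0676 \times 5.42\ \text{billion} \approx 366\ \text{million}. \]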
Some ADHD symptoms in adults differ from those seen in children. While children with ADHD may climb and run about excessively, adults may experience an inability to relax, or may talk excessively in social situations. Adults with ADHD may start relationships impulsively, display sensation-seeking behaviour, and be short-tempered. Addictive behaviours such as substance abuse and gambling are common. Because presentations change with age, some people whose symptoms shifted as they grew older were regarded as having outgrown the DSM-IV criteria. Unlike the DSM-IV, the DSM-5 criteria specifically address adults and take into account the differences in impairments seen in adulthood compared to childhood.
For diagnosis in an adult, having had symptoms since childhood is required. Nevertheless, a proportion of adults who meet the criteria for ADHD in adulthood would not have been diagnosed with ADHD as children. Most cases of late-onset ADHD develop between the ages of 12 and 16 and may therefore be considered early adult- or adolescent-onset ADHD.
Differential diagnosis
The DSM provides differential diagnoses – potential alternate explanations for specific symptoms. Assessment and investigation of clinical history determines which is the most appropriate diagnosis. The DSM-5 suggests oppositional defiant disorder, intermittent explosive disorder, and other disorders such as stereotypic movement disorder and Tourette syndrome, in addition to specific learning disorder, intellectual disability, autism, reactive attachment disorder, anxiety disorders, depressive disorders, bipolar disorder, disruptive mood dysregulation disorder, substance use disorder, personality disorders, psychotic disorders, medication-induced symptoms, and neurocognitive disorders. Many but not all of these are also common comorbidities of ADHD. The DSM-5-TR also suggests post-traumatic stress disorder.
Symptoms of ADHD that particularly relate to disinhibition and irritability, together with the low mood and low self-esteem that can result from symptom expression, may be confused with dysthymia, bipolar disorder and borderline personality disorder; however, these conditions are also comorbid with ADHD at a significantly increased rate relative to the general population. Some symptoms arising from anxiety disorders, intellectual disability or the effects of substance abuse, such as intoxication and withdrawal, can superficially overlap to some extent with ADHD. These disorders can also sometimes occur along with ADHD.
Primary sleep disorders may affect attention and behaviour and the symptoms of ADHD may affect sleep. It is thus recommended that children with ADHD be regularly assessed for sleep problems. Sleepiness in children may result in symptoms ranging from the classic ones of yawning and rubbing the eyes, to disinhibition and inattention. Obstructive sleep apnea can also cause ADHD-like symptoms.
In general, the DSM-5-TR can help distinguish between many conditions associated with ADHD-like symptoms by the context in which the symptoms arise. For example, children with learning disabilities may feel distractible and agitated when asked to engage in tasks that require the impaired skill (e.g., reading, math), but not in other situations. A person with an intellectual disability may develop symptoms that overlap with ADHD when placed in a school environment that is inappropriate for their needs. The type of inattention implicated in ADHD, of poor persistence and sustained attention, differs substantially from the selective or oriented inattention seen in cognitive disengagement syndrome (CDS), as well as from the rumination, re-experiencing or mind blanking seen in anxiety disorders or PTSD.
In mood disorders, ADHD-like symptoms may be limited to manic or depressive states of an episodic nature. Symptoms overlapping with ADHD in psychotic disorders may be limited to psychotic states. Substance use disorder, some medications, and certain medical conditions may cause symptoms to appear later in life, while ADHD, as a neurodevelopmental disorder, requires for them to have been present since childhood.
Furthermore, a careful understanding of the nature of the symptoms may help establish the difference between ADHD and other disorders. For example, the forgetfulness and impulsivity typical of ADHD (e.g., in completing school assignments or following directions) may be distinguished from opposition when there is no hostility or defiance, although ADHD and ODD are highly comorbid. Tantrums may differ from the outbursts in intermittent explosive disorder if there is no aggression involved. The fidgetiness observed in ADHD may be differentiated from tics or stereotypies common in Tourette's disorder or autism.
Also, the social difficulties often experienced by individuals with ADHD due to inattention (e.g., being unfocused during the interaction and therefore missing cues or being unaware of one's behavior) or impulsivity (blurting things out, asking intrusive questions, interrupting) may be contrasted with the social detachment and deficits in understanding social cues associated with autism. Individuals with ADHD may also present signs of the social impairment or emotional and cognitive dysregulation seen in personality disorders, but not necessarily such features as a fear of abandonment, an unstable sense of self, narcissistic tendencies, aggressiveness, or other personality features.
While it is possible and common for many of these different conditions to be comorbid with ADHD, the symptoms must not be better explained by them, as per diagnostic criterion E in the DSM-5. The symptoms must arise early in life, appear across multiple environments, and cause significant impairment. Moreover, when some of these conditions are in fact comorbid with ADHD, it is still important to distinguish them, as each may need to be treated separately.
Management
The management of ADHD typically involves counseling or medications, either alone or in combination. While various treatment options can improve ADHD symptoms, medication therapies substantially improve long-term outcomes and eliminate some elevated risks such as obesity, though they carry some risk of adverse events. Medications used include stimulants, atomoxetine, alpha-2 adrenergic receptor agonists, and sometimes antidepressants. In those who have trouble focusing on long-term rewards, a large amount of positive reinforcement improves task performance. Medications are the most effective treatment, and any side effects are typically mild and easy to resolve, although improvements are reversed if medication is ceased. ADHD stimulants also improve persistence and task performance in children with ADHD. To quote one systematic review, "recent evidence from observational and registry studies indicates that pharmacological treatment of ADHD is associated with increased achievement and decreased absenteeism at school, a reduced risk of trauma-related emergency hospital visits, reduced risks of suicide and attempted suicide, and decreased rates of substance abuse and criminality". Data also suggest that combining medication with cognitive behavioral therapy (CBT) can have positive effects: although CBT is substantially less effective, it can help address problems that remain after medication has been optimised.
The nature and range of desirable endpoints of ADHD treatment vary among diagnostic standards for ADHD. In most studies, the efficacy of treatment is determined by reductions in symptoms. However, some studies have included subjective ratings from teachers and parents as part of their assessment of treatment efficacies.
Behavioural therapies
There is good evidence for the use of behavioural therapies in ADHD. They are the recommended first-line treatment in those who have mild symptoms or who are preschool-aged. Psychological therapies used include: psychoeducational input, behavior therapy, cognitive behavioral therapy, interpersonal psychotherapy, family therapy, school-based interventions, social skills training, behavioural peer intervention, organization training, and parent management training. Neurofeedback has greater treatment effects than non-active controls for up to 6 months and possibly a year following treatment, and may have treatment effects comparable to active controls (controls proven to have a clinical effect) over that time period. Despite efficacy in research, there is insufficient regulation of neurofeedback practice, leading to ineffective applications and false claims regarding innovations. Parent training may improve a number of behavioural problems including oppositional and non-compliant behaviours.
There is little high-quality research on the effectiveness of family therapy for ADHD—but the existing evidence shows that it is similar to community care, and better than placebo. ADHD-specific support groups can provide information and may help families cope with ADHD.
Social skills training, behavioural modification, and medication may have some limited beneficial effects in peer relationships. Stable, high-quality friendships with non-deviant peers protect against later psychological problems.
Digital interventions
Several clinical trials have investigated the efficacy of digital therapeutics, particularly Akili Interactive Labs's video game-based digital therapeutic AKL-T01, marketed as EndeavourRx. The pediatric STARS-ADHD randomized, double-blind, parallel-group, controlled trial demonstrated that AKL-T01 significantly improved performance on the Test of Variables of Attention, an objective measure of attention and inhibitory control, compared to a control group after four weeks of at-home use. A subsequent pediatric open-label study, STARS-Adjunct, published in Nature Portfolio's npj Digital Medicine evaluated AKL-T01 as an adjunctive treatment for children with ADHD who were either on stimulant medication or not on stimulant pharmacotherapy. Results showed improvements in ADHD-related impairment (measured by the Impairment Rating Scale) and ADHD symptoms after 4 weeks of treatment, with effects persisting during a 4-week pause and further improving with an additional treatment period. Notably, the magnitude of the measured improvement was similar for children both on and off stimulants. In 2020, AKL-T01 received marketing authorization for pediatric ADHD from the FDA, becoming "the first game-based therapeutic granted marketing authorization by the FDA for any type of condition."
In addition to pediatric populations, a 2023 study in the Journal of the American Academy of Child & Adolescent Psychiatry investigated the efficacy and safety of AKL-T01 in adults with ADHD. After six weeks of at-home treatment with AKL-T01, participants showed significant improvements in objective measures of attention (TOVA - Attention Comparison Score), reported ADHD symptoms (ADHD-RS-IV inattention subscale and total score), and reported quality of life (AAQoL). The magnitude of improvement in attention was nearly seven times greater than that reported in pediatric trials. The treatment was well-tolerated, with high compliance and no serious adverse events.
Medication
The medications for ADHD appear to alleviate symptoms via their effects on the prefrontal executive, striatal and related regions and networks in the brain, usually by increasing neurotransmission of norepinephrine and dopamine.
Stimulants
Methylphenidate and amphetamine or its derivatives are often first-line treatments for ADHD. About 70 per cent of people respond to the first stimulant tried, and as few as 10 per cent respond to neither amphetamines nor methylphenidate. Stimulants may also reduce the risk of unintentional injuries in children with ADHD. Magnetic resonance imaging studies suggest that long-term treatment with amphetamine or methylphenidate decreases abnormalities in brain structure and function found in subjects with ADHD. A 2018 review found the greatest short-term benefit with methylphenidate in children and amphetamines in adults. Studies and meta-analyses show that amphetamine is slightly to modestly more effective than methylphenidate at reducing symptoms, and that both are more effective pharmacotherapies for ADHD than α2-agonists, although methylphenidate has comparable efficacy to non-stimulants such as atomoxetine.
In a Cochrane clinical synopsis, Dr Storebø and colleagues summarised their meta-review on methylphenidate for ADHD in children and adolescents. The meta-analysis raised substantial doubts about the drug's efficacy relative to a placebo. This led to a strongly critical reaction from the European ADHD Guidelines Group and from individuals in the scientific community, who identified a number of flaws in the review. Since at least September 2021, there has been a unanimous global scientific consensus that methylphenidate is safe and highly effective for treating ADHD. The same journal released a subsequent systematic review (2022) of extended-release methylphenidate for adults, expressing similar doubts about the certainty of evidence. Other recent systematic reviews and meta-analyses, however, support the safety and high efficacy of methylphenidate for reducing ADHD symptoms, for alleviating the underlying executive functioning deficits, and, with continuous treatment, for substantially reducing the adverse consequences of untreated ADHD. Clinical guidelines internationally are also consistent in approving the safety and efficacy of methylphenidate and recommending it as a first-line treatment for the disorder.
Safety and efficacy data have been reviewed extensively by medical regulators (e.g., the US Food and Drug Administration and the European Medicines Agency), the developers of evidence-based international guidelines (e.g., the UK National Institute for Health and Care Excellence and the American Academy of Pediatrics), and government agencies who have endorsed these guidelines (e.g., the Australian National Health and Medical Research Council). These professional groups unanimously conclude, based on the scientific evidence, that methylphenidate is safe and effective and should be considered as a first-line treatment for ADHD.
The likelihood of developing insomnia for ADHD patients taking stimulants has been measured at between 11 and 45 per cent for different medications, and may be a main reason for discontinuation. Other side effects, such as tics, decreased appetite and weight loss, or emotional lability, may also lead to discontinuation. Stimulant psychosis and mania are rare at therapeutic doses, appearing to occur in approximately 0.1% of individuals, within the first several weeks after starting amphetamine therapy. The safety of these medications in pregnancy is unclear. Symptom improvement is not sustained if medication is ceased.
The long-term effects of ADHD medication have yet to be fully determined, although stimulants are generally beneficial and safe for up to two years for children and adolescents. A 2022 meta-analysis found no statistically significant association between ADHD medications and the risk of cardiovascular disease (CVD) across age groups, although the study suggests further investigation is warranted for patients with preexisting CVD as well as long-term medication use. Regular monitoring has been recommended in those on long-term treatment. There are indications suggesting that stimulant therapy for children and adolescents should be stopped periodically to assess continuing need for medication, decrease possible growth delay, and reduce tolerance. Although potentially addictive at high doses, stimulants used to treat ADHD have low potential for abuse. Treatment with stimulants is either protective against substance abuse or has no effect.
The majority of studies on nicotine and other nicotinic agonists as treatments for ADHD have shown favorable results; however, no nicotinic drug has been approved for ADHD treatment. Caffeine was formerly used as a second-line treatment for ADHD but research indicates it has no significant effects in reducing ADHD symptoms. Caffeine appears to help with alertness, arousal and reaction time but not the type of inattention implicated in ADHD (sustained attention/persistence). Pseudoephedrine and ephedrine do not affect ADHD symptoms.
Modafinil has shown some efficacy in reducing the severity of ADHD in children and adolescents. It may be prescribed off-label to treat ADHD.
Non-stimulants
Two non-stimulant medications, atomoxetine and viloxazine, are approved by the FDA and in other countries for the treatment of ADHD.
Atomoxetine, due to its lack of addiction liability, may be preferred in those who are at risk of recreational or compulsive stimulant use, although evidence is lacking to support its use over stimulants for this reason. Atomoxetine alleviates ADHD symptoms through norepinephrine reuptake inhibition and by indirectly increasing dopamine in the prefrontal cortex, its effects involving 70–80% of the same brain regions as stimulants. Atomoxetine has been shown to significantly improve academic performance. Meta-analyses and systematic reviews have found that atomoxetine has comparable efficacy, equal tolerability and an equal response rate (75%) to methylphenidate in children and adolescents. In adults, efficacy and discontinuation rates are equivalent.
Analyses of clinical trial data suggest that viloxazine is about as effective as atomoxetine and methylphenidate, but with fewer side effects.
Amantadine was shown to induce improvements similar to those of methylphenidate in children, with less frequent side effects. A 2021 retrospective study showed that amantadine may serve as an effective adjunct to stimulants for ADHD-related symptoms and appears to be a safer alternative to second- or third-generation antipsychotics.
Bupropion is also used off-label by some clinicians due to research findings. It is effective, but modestly less than atomoxetine and methylphenidate.
There is little evidence on the effects of medication on social behaviours. Antipsychotics may also be used to treat aggression in ADHD.
Alpha-2a agonists
Two alpha-2a agonists, extended-release formulations of guanfacine and clonidine, are approved by the FDA and in other countries for the treatment of ADHD; they are effective in children and adolescents, but effectiveness has not yet been shown for adults. They appear to be modestly less effective than the stimulants (amphetamine and methylphenidate) and non-stimulants (atomoxetine and viloxazine) at reducing symptoms, but can be useful alternatives, or can be used in conjunction with a stimulant. These medications act on alpha-2a adrenergic receptors on noradrenergic nerve cells in the prefrontal executive networks, so that the information (the electrical signal) is less confounded by noise.
Guidelines
Guidelines on when to use medications vary by country. The United Kingdom's National Institute for Health and Care Excellence recommends use for children only in severe cases, though for adults medication is a first-line treatment. Conversely, most United States guidelines recommend medications in most age groups. Medications are generally not recommended for preschool children. Underdosing of stimulants can occur and can result in a lack of response or later loss of effectiveness. This is particularly common in adolescents and adults, as approved dosing is based on school-aged children, causing some practitioners to use weight-based or benefit-based off-label dosing instead.
Exercise
Exercise does not reduce the symptoms of ADHD. This conclusion by the International Consensus Statement is based on two meta-analyses, one of 10 studies with 300 children and the other of 15 studies with 668 participants, which showed that exercise yields no statistically significant reductions in ADHD symptoms. A 2024 systematic review and meta-analysis commissioned by the Patient-Centered Outcomes Research Institute (PCORI) identified seven studies on the effectiveness of physical exercise for treating ADHD symptoms. The type and amount of exercise varied widely across studies, from martial arts interventions to treadmill training, table tennis and aerobic exercise. Reported effects were not replicated, leading the authors to conclude that there is insufficient evidence that exercise intervention is an effective form of treatment for ADHD symptoms.
Diet
Dietary modifications are not recommended by the American Academy of Pediatrics, the National Institute for Health and Care Excellence, or the Agency for Healthcare Research and Quality due to insufficient evidence.
A 2013 meta-analysis found less than a third of children with ADHD see some improvement in symptoms with free fatty acid supplementation or decreased consumption of artificial food colouring. These benefits may be limited to children with food sensitivities or those who are simultaneously being treated with ADHD medications. This review also found that evidence does not support removing other foods from the diet to treat ADHD. A 2014 review found that an elimination diet results in a small overall benefit in a minority of children, such as those with allergies. A 2016 review stated that the use of a gluten-free diet as standard ADHD treatment is not advised. A 2017 review showed that a few-foods elimination diet may help children too young to be medicated or not responding to medication, while free fatty acid supplementation or decreased eating of artificial food colouring as standard ADHD treatment is not advised. Chronic deficiencies of iron, magnesium and iodine may have a negative impact on ADHD symptoms. There is a small amount of evidence that lower tissue zinc levels may be associated with ADHD. In the absence of a demonstrated zinc deficiency (which is rare outside of developing countries), zinc supplementation is not recommended as treatment for ADHD. However, zinc supplementation may reduce the minimum effective dose of amphetamine when it is used with amphetamine for the treatment of ADHD.
Prognosis
ADHD persists into adulthood in about 30–50% of cases. Those affected are likely to develop coping mechanisms as they mature, thus compensating to some extent for their previous symptoms. Children with ADHD have a higher risk of unintentional injuries. Effects of medication on functional impairment and quality of life (e.g. reduced risk of accidents) have been found across multiple domains. Rates of smoking among those with ADHD are higher than in the general population at about 40%.
About 30–50% of people diagnosed in childhood continue to have ADHD in adulthood, with 2.58% of adults estimated to have ADHD which began in childhood. In adults, hyperactivity is usually replaced by inner restlessness, and adults often develop coping skills to compensate for their impairments. The condition can be difficult to tell apart from other conditions, as well as from high levels of activity within the range of normal behaviour. ADHD has a negative impact on patient health-related quality of life that may be further exacerbated by, or may increase the risk of, other psychiatric conditions such as anxiety and depression. Individuals with ADHD may also face misconceptions and stigma.
Individuals with ADHD are significantly overrepresented in prison populations. Although there is no generally accepted estimate of ADHD prevalence among inmates, a 2015 meta-analysis estimated a prevalence of 25.5%, and a larger 2018 meta-analysis estimated the frequency to be 26.2%.
Epidemiology
ADHD is estimated to affect about 6–7% of people aged 18 and under when diagnosed via the DSM-IV criteria. When diagnosed via the ICD-10 criteria, rates in this age group are estimated at around 1–2%. Rates are similar between countries, and differences in rates depend mostly on how the disorder is diagnosed. Children in North America appear to have a higher rate of ADHD than children in Africa and the Middle East; this is believed to be due to differing methods of diagnosis rather than a difference in underlying frequency. (The same publication which describes this difference also notes that the difference may be rooted in the available studies from these respective regions, as far more studies were from North America than from Africa and the Middle East.) ADHD was estimated to affect 84.7 million people globally.
ADHD is diagnosed approximately twice as often in boys as in girls, and 1.6 times more often in men than in women, although the disorder is overlooked in girls or diagnosed in later life because their symptoms sometimes differ from diagnostic criteria. In 2014, Keith Conners, one of the early advocates for recognition of the disorder, spoke out against overdiagnosis in a New York Times article. In contrast, a 2014 peer-reviewed medical literature review indicated that ADHD is underdiagnosed in adults.
Studies from multiple countries have reported that children born closer to the start of the school year are more frequently diagnosed with and medicated for ADHD than their older classmates. Boys born in December, where the school-age cut-off was 31 December, were shown to be 30% more likely to be diagnosed and 41% more likely to be treated than those born in January. Girls born in December had diagnosis and treatment rates 70% and 77% higher, respectively, than those born in January. Children born in the last three days of a calendar year were reported to have significantly higher levels of diagnosis and treatment for ADHD than children born in the first three days of a calendar year. The studies suggest that ADHD diagnosis is prone to subjective analysis.
Rates of diagnosis and treatment have increased in both the United Kingdom and the United States since the 1970s. Prior to 1970, it was rare for children to be diagnosed with ADHD, while in the 1970s rates were about 1%. This is believed to be primarily due to changes in how the condition is diagnosed and how readily people are willing to treat it with medications rather than a true change in incidence. With widely differing rates of diagnosis across countries, states within countries, races, and ethnicities, some suspect factors other than symptoms of ADHD are playing a role in diagnosis, such as cultural norms.
Despite showing a higher frequency of symptoms associated with ADHD, non-White children in the US are less likely than White children to be diagnosed or treated for ADHD, a finding that is often explained by bias among health professionals, as well as among parents who may be reluctant to acknowledge that their child has ADHD. Cross-cultural differences in the diagnosis of ADHD can also be attributed to the long-lasting effects of harmful, racially targeted medical practices. Medical pseudosciences, particularly those that targeted Black populations during the period of slavery in the US, led to a distrust of medical practices within certain communities. The combination of ADHD symptoms often being regarded as misbehaviour rather than as a psychiatric condition, and the use of drugs to regulate ADHD, results in a hesitancy to trust a diagnosis of ADHD. Cases of misdiagnosis in ADHD can also occur due to stereotyping of people of color. Because ADHD's symptoms are subjectively determined, medical professionals may diagnose individuals based on stereotyped behaviour or misdiagnose due to cultural differences in symptom presentation.
A 2024 study in the CDC's Morbidity and Mortality Weekly Report reported that around 15.5 million U.S. adults have attention-deficit hyperactivity disorder, with many facing challenges in accessing treatment. One-third of diagnosed individuals had received a prescription for a stimulant drug in the past year, but nearly three-quarters of them reported difficulties filling the prescription due to medication shortages.
History
ADHD was officially known as attention deficit disorder (ADD) from 1980 to 1987; prior to the 1980s, it was known as hyperkinetic reaction of childhood. Symptoms similar to those of ADHD have been described in medical literature dating back to the 18th century. Sir Alexander Crichton described "mental restlessness" in his book An inquiry into the nature and origin of mental derangement, written in 1798. He made observations about children showing signs of being inattentive and having the "fidgets". The first clear description of ADHD is credited to George Still in 1902, during a series of lectures he gave to the Royal College of Physicians of London.
The terminology used to describe the condition has changed over time and has included: minimal brain dysfunction in the DSM-I (1952), hyperkinetic reaction of childhood in the DSM-II (1968), and attention-deficit disorder with or without hyperactivity in the DSM-III (1980). In 1987, this was changed to ADHD in the DSM-III-R, and in 1994 the DSM-IV split the diagnosis into three subtypes: ADHD inattentive type, ADHD hyperactive-impulsive type, and ADHD combined type. These terms were kept in the DSM-5 in 2013 and in the DSM-5-TR in 2022. Prior to the DSM, terms included minimal brain damage in the 1930s.
ADHD, its diagnosis, and its treatment have been controversial since the 1970s. For example, positions differ on whether ADHD is within the normal range of behaviour and on the degree to which ADHD is a genetic condition. Other areas of controversy include the use of stimulant medications in children, the method of diagnosis, and the possibility of overdiagnosis. In 2009, the National Institute for Health and Care Excellence stated that the current treatments and methods of diagnosis are based on the dominant view of the academic literature.
Once neuroimaging studies were possible, studies in the 1990s provided support for the pre-existing theory that neurological differences (particularly in the frontal lobes) were involved in ADHD. A genetic component was identified and ADHD was acknowledged to be a persistent, long-term disorder which lasted from childhood into adulthood. ADHD was split into the current three sub-types because of a field trial completed by Lahey and colleagues and published in 1994. In 2021, global teams of scientists curated the International Consensus Statement compiling evidence-based findings about the disorder.
In 1934, Benzedrine became the first amphetamine medication approved for use in the United States. Methylphenidate was introduced in the 1950s, and enantiopure dextroamphetamine in the 1970s. The use of stimulants to treat ADHD was first described in 1937, when Charles Bradley gave children with behavioural disorders Benzedrine and found that it improved academic performance and behaviour.
Research directions
Possible positive traits
Possible positive traits of ADHD are a new avenue of research, and research on them is therefore limited.
A 2020 review found that creativity may be associated with ADHD symptoms, particularly divergent thinking and quantity of creative achievements, but not with the disorder of ADHD itself – i.e. it has not been found to be increased in people diagnosed with the disorder, only in people with subclinical symptoms or those who possess traits associated with the disorder. Divergent thinking is the ability to produce creative solutions which differ significantly from each other and consider the issue from multiple perspectives. Those with ADHD symptoms could be advantaged in this form of creativity as they tend to have diffuse attention, allowing rapid switching between aspects of the task under consideration; flexible associative memory, allowing them to remember and use more distantly related ideas, which is associated with creativity; and impulsivity, allowing them to consider ideas which others may not have.
Possible biomarkers for diagnosis
Reviews of ADHD biomarkers have noted that platelet monoamine oxidase expression, urinary norepinephrine, urinary MHPG, and urinary phenethylamine levels consistently differ between ADHD individuals and non-ADHD controls. These measurements could serve as diagnostic biomarkers for ADHD, but more research is needed to establish their diagnostic utility. Urinary and blood plasma phenethylamine concentrations are lower in ADHD individuals relative to controls. The two most commonly prescribed drugs for ADHD, amphetamine and methylphenidate, increase phenethylamine biosynthesis in treatment-responsive individuals with ADHD. Lower urinary phenethylamine concentrations are associated with symptoms of inattentiveness in ADHD individuals.
Skin cancer
Skin cancers are cancers that arise from the skin. They are due to the development of abnormal cells that have the ability to invade or spread to other parts of the body. Skin cancer occurs when skin cells grow uncontrollably, forming malignant tumors. The primary cause of skin cancer is prolonged exposure to ultraviolet (UV) radiation from the sun or tanning devices. Skin cancer is the most commonly diagnosed form of cancer in humans. There are three main types of skin cancers: basal-cell skin cancer (BCC), squamous-cell skin cancer (SCC) and melanoma. The first two, along with a number of less common skin cancers, are known as nonmelanoma skin cancer (NMSC). Basal-cell cancer grows slowly and can damage the tissue around it but is unlikely to spread to distant areas or result in death. It often appears as a painless raised area of skin that may be shiny with small blood vessels running over it or may present as a raised area with an ulcer. Squamous-cell skin cancer is more likely to spread. It usually presents as a hard lump with a scaly top but may also form an ulcer. Melanomas are the most aggressive. Signs include a mole that has changed in size, shape, color, has irregular edges, has more than one color, is itchy or bleeds.
More than 90% of cases are caused by exposure to ultraviolet radiation from the Sun. This exposure increases the risk of all three main types of skin cancer. Exposure has increased, partly due to a thinner ozone layer. Tanning beds are another common source of ultraviolet radiation. For melanomas and basal-cell cancers, exposure during childhood is particularly harmful. For squamous-cell skin cancers, total exposure, irrespective of when it occurs, is more important. Between 20% and 30% of melanomas develop from moles. People with lighter skin are at higher risk as are those with poor immune function such as from medications or HIV/AIDS. Diagnosis is by biopsy.
Decreasing exposure to ultraviolet radiation and the use of sunscreen appear to be effective methods of preventing melanoma and squamous-cell skin cancer. It is not clear if sunscreen affects the risk of basal-cell cancer. Nonmelanoma skin cancer is usually curable. Treatment is generally by surgical removal but may, less commonly, involve radiation therapy or topical medications such as fluorouracil. Treatment of melanoma may involve some combination of surgery, chemotherapy, radiation therapy and targeted therapy. In those people whose disease has spread to other areas of the body, palliative care may be used to improve quality of life. Melanoma has one of the higher survival rates among cancers, with over 86% of people in the UK and more than 90% in the United States surviving more than 5 years.
Skin cancer is the most common form of cancer, globally accounting for at least 40% of cancer cases. The most common type is nonmelanoma skin cancer, which occurs in at least 2–3 million people per year. This is a rough estimate; good statistics are not kept. Of nonmelanoma skin cancers, about 80% are basal-cell cancers and 20% squamous-cell skin cancers. Basal-cell and squamous-cell skin cancers rarely result in death. In the United States, they were the cause of less than 0.1% of all cancer deaths. Globally in 2012, melanoma occurred in 232,000 people and resulted in 55,000 deaths. White people in Australia, New Zealand and South Africa have the highest rates of melanoma in the world. The three main types of skin cancer have become more common in the last 20 to 40 years, especially in regions where the population is predominantly White.
Classification
There are three main types of skin cancer: basal-cell skin cancer (basal-cell carcinoma) (BCC), squamous-cell skin cancer (squamous-cell carcinoma) (SCC) and malignant melanoma.
Basal-cell carcinomas are most commonly present on sun-exposed areas of the skin, especially the face. They rarely metastasize and rarely cause death. They are easily treated with surgery or radiation. Squamous-cell skin cancers are also common, but much less common than basal-cell cancers. They metastasize more frequently than BCCs. Even then, the metastasis rate is quite low, with the exception of SCC of the lip or ear, and in people who are immunosuppressed. Melanomas are the least frequent of the three common skin cancers. They frequently metastasize, and can cause death once they spread.
Less common skin cancers include: Merkel cell carcinoma, Paget's disease of the breast, atypical fibroxanthoma, porocarcinoma, spindle cell tumors, sebaceous carcinomas, microcystic adnexal carcinoma, keratoacanthoma, and skin sarcomas, such as angiosarcoma, dermatofibrosarcoma protuberans, Kaposi's sarcoma, leiomyosarcoma.
BCC and SCC often carry a UV-signature mutation indicating that these cancers are caused by UVB radiation via direct DNA damage. However, malignant melanoma is predominantly caused by UVA radiation via indirect DNA damage. The indirect DNA damage is caused by free radicals and reactive oxygen species. Research indicates that the absorption of three sunscreen ingredients into the skin, combined with a 60-minute exposure to UV, leads to an increase of free radicals in the skin if the sunscreen is applied in too small a quantity and too infrequently. However, the researchers add that newer creams often do not contain these specific compounds, and that the combination of other ingredients tends to retain the compounds on the surface of the skin. They also add that frequent re-application reduces the risk of radical formation.
Signs and symptoms
There are a variety of skin cancer symptoms. These include changes in the skin that do not heal, ulceration of the skin, discolored skin, and changes in existing moles, such as jagged edges, enlargement of the mole, changes in color or texture, or bleeding. Other common signs of skin cancer are a painful lesion that itches or burns and a large brownish spot with darker speckles.
Basal-cell skin cancer
Basal-cell skin cancer (BCC) usually presents as a raised, smooth, pearly bump on the sun-exposed skin of the head, neck, torso or shoulders. Sometimes small blood vessels (called telangiectasia) can be seen within the tumor. Crusting and bleeding in the center of the tumor frequently develops. It is often mistaken for a sore that does not heal. This form of skin cancer is the least deadly, and with proper treatment can be eliminated, often without significant scarring.
Squamous-cell skin cancer
Squamous-cell skin cancer (SCC) is commonly a red, scaling, thickened patch on sun-exposed skin. Some are firm, hard nodules, dome-shaped like keratoacanthomas. Ulceration and bleeding may occur. When SCC is not treated, it may develop into a large mass. Squamous-cell carcinoma is the second most common skin cancer. It is dangerous, but not nearly as dangerous as a melanoma.
Melanoma
Most melanomas consist of various colours from shades of brown to black. A small number of melanomas are pink, red or fleshy in colour; these are called amelanotic melanomas and tend to be more aggressive. Warning signs of malignant melanoma include change in the size, shape, color or elevation of a mole. Other signs are the appearance of a new mole during adulthood or pain, itching, ulceration, redness around the site, or bleeding at the site. An often-used mnemonic is "ABCDE", where A is for "asymmetrical", B for "borders" (irregular: "Coast of Maine sign"), C for "color" (variegated), D for "diameter" (larger than 6 mm – the size of a pencil eraser) and E for "evolving."
Other
Merkel cell carcinomas are most often rapidly growing, non-tender red, purple or skin colored bumps that are not painful or itchy. They may be mistaken for a cyst or another type of cancer.
Causes
Ultraviolet radiation from sun exposure is the primary environmental cause of skin cancer. This can occur in professions such as farming. Other risk factors that play a role include:
Light skin color
Age
Smoking tobacco
HPV infections increase the risk of squamous-cell skin cancer.
Some genetic syndromes, including congenital melanocytic nevi syndrome, which is characterized by the presence of nevi (birthmarks or moles) of varying size that are either present at birth or appear within 6 months of birth. Nevi larger than 20 mm (3/4") in size are at higher risk of becoming cancerous.
Chronic non-healing wounds. These are called Marjolin's ulcers based on their appearance, and can develop into squamous-cell skin cancer.
Ionizing radiation such as X-rays, environmental carcinogens, and artificial UV radiation (e.g. tanning beds). It is believed that tanning beds are the cause of hundreds of thousands of cases of basal-cell and squamous-cell skin cancer. The World Health Organization now places people who use artificial tanning beds in its highest risk category for skin cancer.
Alcohol consumption, specifically excessive drinking, increases the risk of sunburn.
The use of many immunosuppressive medications increases the risk of skin cancer. Cyclosporin A, a calcineurin inhibitor, for example, increases the risk approximately 200 times, and azathioprine about 60 times.
Deliberate exposure of sensitive skin not normally exposed to sunlight during alternative wellness behaviors such as perineum sunning.
UV-induced DNA damage
UV-irradiation of skin cells causes damage to DNA through photochemical reactions. Cyclobutane pyrimidine dimers formed by adjacent thymine bases, or by adjacent cytosine bases, are frequent types of DNA damage induced by UV. Human skin cells are capable of repairing most UV-induced damage by nucleotide excision repair, a process that protects against skin cancer, but may be inadequate at high levels of exposure.
Pathophysiology
A malignant epithelial tumor that primarily originates in the epidermis, in squamous mucosa or in areas of squamous metaplasia is referred to as a squamous-cell carcinoma.
Macroscopically, the tumor is often elevated, fungating, or may be ulcerated with irregular borders. Microscopically, tumor cells destroy the basement membrane and form sheets or compact masses which invade the subjacent connective tissue (dermis). In well-differentiated carcinomas, tumor cells are pleomorphic/atypical but resemble normal keratinocytes from the prickle layer (large, polygonal, with abundant eosinophilic (pink) cytoplasm and a central nucleus).
Their arrangement tends to be similar to that of the normal epidermis: immature/basal cells at the periphery, becoming more mature toward the centre of the tumor masses. Tumor cells transform into keratinized squamous cells and form round nodules with concentric, laminated layers, called "cell nests" or "epithelial/keratinous pearls". The surrounding stroma is reduced and contains inflammatory infiltrate (lymphocytes). Poorly differentiated squamous carcinomas contain more pleomorphic cells and no keratinization.
A molecular factor involved in the disease process is mutation in gene PTCH1 that plays an important role in the Sonic hedgehog signaling pathway.
Diagnosis
Diagnosis is by biopsy and histopathological examination.
Non-invasive skin cancer detection methods include photography, dermatoscopy, sonography, confocal microscopy, Raman spectroscopy, fluorescence spectroscopy, terahertz spectroscopy, optical coherence tomography, the multispectral imaging technique, thermography, electrical bio-impedance, tape stripping and computer-aided analysis.
Dermatoscopy may be useful in diagnosing basal cell carcinoma in addition to skin inspection.
There is insufficient evidence that optical coherence tomography (OCT) is useful in diagnosing melanoma or squamous cell carcinoma. OCT may have a role in diagnosing basal cell carcinoma but more data is needed to support this.
Computer-assisted diagnosis devices have been developed that analyze images from a dermatoscope or spectroscopy and can be used by a diagnostician to aid in the detection of skin cancer. CAD systems have been found to be highly sensitive in the detection of melanoma, but have a high false-positive rate. There is not yet enough evidence to recommend CAD as compared to traditional diagnostic methods.
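As a minimal illustration of what "highly sensitive" and "high false-positive rate" mean for such systems (a sketch using hypothetical counts, not results from any study or product), the two measures can be computed from a confusion matrix as follows:

def sensitivity(true_positives: int, false_negatives: int) -> float:
    # Fraction of actual melanomas that the system flags.
    return true_positives / (true_positives + false_negatives)

def false_positive_rate(false_positives: int, true_negatives: int) -> float:
    # Fraction of benign lesions that the system incorrectly flags as melanoma.
    return false_positives / (false_positives + true_negatives)

# Hypothetical evaluation counts, for illustration only.
tp, fn, fp, tn = 95, 5, 40, 60
print(sensitivity(tp, fn))          # 0.95 -> very few melanomas missed
print(false_positive_rate(fp, tn))  # 0.40 -> many benign lesions flagged

A system can therefore miss very few melanomas while still flagging a large share of benign lesions, which is why such output serves as an aid to, rather than a replacement for, traditional diagnostic methods.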
High-frequency ultrasound (HFUS) is of unclear usefulness in the diagnosis of skin cancer. There is insufficient evidence for reflectance confocal microscopy to diagnose basal cell or squamous cell carcinoma or any other skin cancers.
Prevention
Sunscreen is effective and thus recommended to prevent melanoma and squamous-cell carcinoma. There is little evidence that it is effective in preventing basal-cell carcinoma. Other advice to reduce rates of skin cancer includes avoiding sunburn, wearing protective clothing, sunglasses and hats, and attempting to avoid sun exposure or periods of peak exposure. The U.S. Preventive Services Task Force recommends that people between 9 and 25 years of age be advised to avoid ultraviolet light.
The risk of developing skin cancer can be reduced through a number of measures including decreasing indoor tanning and mid-day sun exposure, increasing the use of sunscreen, and avoiding the use of tobacco products.
It is important to limit sun exposure and to avoid tanning beds, because they both involve UV light. UV light is known to damage skin cells by mutating their DNA. The mutated DNA can cause tumors and other growths to form on the skin. Further, there are other risk factors besides UV exposure, including fair skin, a prolonged history of sunburns, moles, and a family history of skin cancer.
There is insufficient evidence either for or against screening for skin cancers. Vitamin supplements and antioxidant supplements have not been found to have an effect in prevention. Evidence for reducing melanoma risk from dietary measures is tentative, with some supportive epidemiological evidence, but no clinical trials.
Zinc oxide and titanium dioxide are often used in sunscreen to provide broad protection from the UVA and UVB ranges.
Eating certain foods may decrease the risk of sunburns but this is much less than the protection provided by sunscreen.
A meta-analysis of skin cancer prevention in high risk individuals found evidence that topical application of T4N5 liposome lotion reduced the rate of appearance of basal cell carcinomas in people with xeroderma pigmentosum, and that acitretin taken by mouth may have a skin protective benefit in people following kidney transplant.
A paper published in January 2022 showed that a vaccine that stimulates the production of a protein critical to the skin's antioxidant network could reinforce people's defenses against skin cancer.
Treatment
Treatment is dependent on the specific type of cancer, location of the cancer, age of the person, and whether the cancer is primary or a recurrence. For a small basal-cell cancer in a young person, the treatment with the best cure rate (Mohs surgery or CCPDMA) might be indicated. In the case of an elderly frail man with multiple complicating medical problems, a difficult to excise basal-cell cancer of the nose might warrant radiation therapy (slightly lower cure rate) or no treatment at all. Topical chemotherapy might be indicated for large superficial basal-cell carcinoma for good cosmetic outcome, whereas it might be inadequate for invasive nodular basal-cell carcinoma or invasive squamous-cell carcinoma. In general, melanoma is poorly responsive to radiation or chemotherapy.
For low-risk disease, radiation therapy (external beam radiotherapy or brachytherapy), topical chemotherapy (imiquimod or 5-fluorouracil) and cryotherapy (freezing the cancer off) can provide adequate control of the disease; all of them, however, may have lower overall cure rates than certain types of surgery. Other modalities of treatment such as photodynamic therapy, epidermal radioisotope therapy, topical chemotherapy, electrodesiccation and curettage can be found in the discussions of basal-cell carcinoma and squamous-cell carcinoma.
Mohs' micrographic surgery (Mohs surgery) is a technique used to remove the cancer with the least amount of surrounding tissue; the edges are checked immediately to see whether any tumor remains. This provides the opportunity to remove the least amount of tissue and achieve the most cosmetically favorable results. This is especially important for areas where excess skin is limited, such as the face. Cure rates are equivalent to wide excision. Special training is required to perform this technique. An alternative method is CCPDMA, which can be performed by a pathologist not familiar with Mohs surgery.
In the case of disease that has spread (metastasized), further surgical procedures or chemotherapy may be required.
Treatments for metastatic melanoma include biologic immunotherapy agents ipilimumab, pembrolizumab, nivolumab, cemiplimab; BRAF inhibitors, such as vemurafenib and dabrafenib; and a MEK inhibitor trametinib.
In February 2024, the Food and Drug Administration approved the first cancer treatment that uses tumor-infiltrating lymphocytes, also called TIL therapy, specifically for melanomas that have not improved with other treatments. Additionally, scientists are testing a vaccine designed to match the unique genetic details of a patient's cancer in an advanced clinical trial.
Reconstruction
Currently, surgical excision is the most common form of treatment for skin cancers. The goal of reconstructive surgery is the restoration of normal appearance and function. The choice of technique in reconstruction is dictated by the size and location of the defect. Excision and reconstruction of facial skin cancers are generally more challenging due to the presence of highly visible and functional anatomic structures in the face.
When skin defects are small in size, most can be repaired with simple repair where skin edges are approximated and closed with sutures. This will result in a linear scar. If the repair is made along a natural skin fold or wrinkle line, the scar will be hardly visible. Larger defects may require repair with a skin graft, local skin flap, pedicled skin flap, or a microvascular free flap. Skin grafts and local skin flaps are by far more common than the other listed choices.
Skin grafting is patching of a defect with skin that is removed from another site in the body. The skin graft is sutured to the edges of the defect, and a bolster dressing is placed atop the graft for seven to ten days, to immobilize the graft as it heals in place. There are two forms of skin grafting: split thickness and full thickness. In a split thickness skin graft, a shaver is used to shave a layer of skin from the abdomen or thigh. The donor site regenerates skin and heals over a period of two weeks. In a full thickness skin graft, a segment of skin is totally removed and the donor site needs to be sutured closed.
Split thickness grafts can be used to repair larger defects, but the grafts are inferior in their cosmetic appearance. Full thickness skin grafts are more acceptable cosmetically. However, full thickness grafts can only be used for small or moderate sized defects.
Local skin flaps are a method of closing defects with tissue that closely matches the defect in color and quality. Skin from the periphery of the defect site is mobilized and repositioned to fill the deficit. Various forms of local flaps can be designed to minimize disruption to surrounding tissues and maximize cosmetic outcome of the reconstruction. Pedicled skin flaps are a method of transferring skin with an intact blood supply from a nearby region of the body. An example of such reconstruction is a pedicled forehead flap for the repair of a large nasal skin defect. Once the flap develops a source of blood supply from its new bed, the vascular pedicle can be detached.
Prognosis
The mortality rate of basal-cell and squamous-cell carcinoma is around 0.3%, causing 2000 deaths per year in the US. In comparison, the mortality rate of melanoma is 15–20% and it causes 6500 deaths per year. Even though it is much less common, malignant melanoma is responsible for 75% of all skin cancer-related deaths.
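Taking the US figures above as a rough consistency check (a back-of-the-envelope calculation using only the numbers already quoted), melanoma's share of skin cancer deaths is

\frac{6500}{6500 + 2000} \approx 0.76,

in line with the statement that melanoma is responsible for roughly 75% of skin cancer-related deaths.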
The survival rate for people with melanoma depends upon when they start treatment. The cure rate is very high when melanoma is detected in early stages, when it can easily be removed surgically. The prognosis is less favorable if the melanoma has spread to other parts of the body. As of 2003 the overall five-year cure rate with Mohs' micrographic surgery was around 95 percent for recurrent basal cell carcinoma.
Australia and New Zealand exhibit one of the highest rates of skin cancer incidence in the world, almost four times the rates registered in the United States, the UK and Canada. Around 434,000 people receive treatment for non-melanoma skin cancers and 10,300 are treated for melanoma. Melanoma is the most common type of cancer in people between 15 and 44 years in both countries. The incidence of skin cancer has been increasing. The incidence of melanoma among Auckland residents of European descent in 1995 was 77.7 cases per 100,000 people per year, and was predicted to increase in the 21st century because of "the effect of local stratospheric ozone depletion and the time lag from sun exposure to melanoma development."
Epidemiology
Skin cancers result in 80,000 deaths a year as of 2010, 49,000 of which are due to melanoma and 31,000 of which are due to non-melanoma skin cancers. This is up from 51,000 in 1990.
More than 3.5 million cases of skin cancer are diagnosed annually in the United States, which makes it the most common form of cancer in that country. One in five Americans will develop skin cancer at some point in their lives. The most common form of skin cancer is basal-cell carcinoma, followed by squamous cell carcinoma. Unlike other cancers, there is no registry of basal and squamous cell skin cancers in the United States.
Melanoma
In the US in 2008, 59,695 people were diagnosed with melanoma, and 8,623 people died from it. In Australia more than 12,500 new cases of melanoma are reported each year, out of which more than 1,500 die from the disease. Australia has the highest per capita incidence of melanoma in the world.
Although the rates of many cancers in the United States are falling, the incidence of melanoma keeps growing, with approximately 68,729 melanomas diagnosed in 2004 according to reports of the National Cancer Institute.
Melanoma is the fifth most common cancer in the UK (around 13,300 people were diagnosed with melanoma in 2011), and the disease accounts for 1% of all cancer deaths (around 2,100 people died in 2012).
Non-melanoma
Approximately 2,000 people die from basal or squamous cell skin cancers (non-melanoma skin cancers) in the United States each year. The rate has dropped in recent years. Most of the deaths occur in people who are elderly and might not have seen a doctor until the cancer had spread, and in people with immune system disorders.
Veterinary medicine
Risk factors
White people and people with light skin are prone to skin cancer.
| Biology and health sciences | Cancer | null |
65009 | https://en.wikipedia.org/wiki/Damselfly | Damselfly | Damselflies are flying insects of the suborder Zygoptera in the order Odonata. They are similar to dragonflies (which constitute the other odonatan suborder, Epiprocta) but are usually smaller and have slimmer bodies. Most species fold the wings along the body when at rest, unlike dragonflies which hold the wings flat and away from the body. Damselflies have existed since the Late Jurassic, and are found on every continent except Antarctica.
All damselflies are predatory insects: both nymphs and adults actively hunt and eat other insects. The nymphs are aquatic, with different species living in a variety of freshwater habitats including acidic bogs, ponds, lakes and rivers. The nymphs moult repeatedly, at the last moult climbing out of the water to undergo metamorphosis. The skin splits down the back, they emerge and inflate their wings and abdomen to gain their adult form. Their presence on a body of water indicates that it is relatively unpolluted, but their dependence on freshwater makes them vulnerable to damage to their wetland habitats.
Some species of damselfly have elaborate courtship behaviours. Many species are sexually dimorphic, the males often being more brightly coloured than the females. Like dragonflies, they reproduce using indirect insemination and delayed fertilisation. A mating pair form a shape known as a "heart" or "wheel", the male clasping the female at the back of the head, the female curling her abdomen down to pick up sperm from secondary genitalia at the base of the male's abdomen. The pair often remain together with the male still clasping the female while she lays eggs within the tissue of plants in or near water using a robust ovipositor.
Artificial fishing flies that mimic damselfly nymphs are used in wet-fly fishing. Damselflies are sometimes represented in personal jewellery such as brooches.
Classification
The Zygoptera are an ancient group, with the earliest fossils dating to the Kimmeridgian age of the Late Jurassic, around 152 million years ago. Well-preserved Eocene damselfly larvae and exuviae are known from fossils preserved in amber in the Baltic region.
Molecular analysis in 2021 confirms that most of the traditional families are monophyletic, but shows that the Amphipterygidae, Megapodagrionidae and Protoneuridae are paraphyletic and will need to be reorganised. The Protoneuridae in particular is shown to be composed of six clades from five families. The result so far is 27 damselfly families, with 7 more likely to be created. The discovered clades did not agree well with traditional characteristics used to classify living and fossil Zygoptera such as wing venation, so fossil taxa will need to be revisited. The 18 extant traditional families are provisionally rearranged as follows (the 3 paraphyletic families disappearing, and many details not resolved):
General description
The general body plan of a damselfly is similar to that of a dragonfly. The compound eyes are large but are more widely separated and relatively smaller than those of a dragonfly. Above the eyes is the frons or forehead, below this the clypeus, and on the lower lip or labium, an extensible organ used in the capture of prey. The top of the head bears three simple eyes (ocelli), which may measure light intensity, and a tiny pair of antennae that serve no olfactory function but may measure air speed. Many species are sexually dimorphic; the males are often brightly coloured and distinctive, while the females are plainer, cryptically coloured, and harder to identify to species. For example, in Coenagrion, the Eurasian bluets, the males are bright blue with black markings, while the females are usually predominantly green or brown with black. A few dimorphic species show female-limited polymorphism, the females being in two forms, one form distinct and the other with the patterning as in males. The ones that look like males, andromorphs, are usually under a third of the female population but the proportion can rise significantly and a theory that explains this response suggests that it helps overcome harassment by males. Some Coenagrionid damselflies show male-limited polymorphism, an even less understood phenomenon.
In general, damselflies are smaller than dragonflies, the smallest being members of the genus Agriocnemis (wisps). However, members of the Pseudostigmatidae (helicopter damselflies or forest giants) are exceptionally large for the group, with wingspans as much as in Megaloprepus and body length up to in Pseudostigma aberrans.
The first thoracic segment is the prothorax, bearing the front pair of legs. The joint between head and prothorax is slender and flexible, which enables the damselfly to swivel its head and to manoeuvre more freely when flying. The remaining thoracic segments are the fused mesothorax and metathorax (together termed the synthorax), each with a pair of wings and a pair of legs. A dark stripe known as the humeral stripe runs from the base of the front wings to the second pair of legs, and just in front of this is the pale-coloured, antehumeral stripe.
The forewings and hindwings are similar in appearance and are membranous, being strengthened and supported by longitudinal veins that are linked by many cross-veins and that are filled with haemolymph. Species markers include quadrangular markings on the wings known as the pterostigma or stigma, and in almost all species, there is a nodus near the leading edge. The thorax houses the flight muscles. Many damselflies (e.g. Lestidae, Platycnemidae, Coenagrionidae) have clear wings, but some (Calopterygidae, Euphaeidae) have coloured wings, whether uniformly suffused with colour or boldly marked with a coloured patch. In species such as the banded demoiselle, Calopteryx splendens the males have both a darker green body and large dark violet-blue patches on all four wings, which flicker conspicuously in their aerial courtship dances; the females have pale translucent greenish wings.
The abdomen is long and slender and consists of ten segments. The secondary genitalia in males are on the undersides of segments two and three and are conspicuous, making it easy to tell the sex of the damselfly when viewed from the side. The female genital opening is on the underside between segments eight and nine. It may be covered by a subgenital plate, or extended into a complex ovipositor that helps them lay eggs within plant tissue. The tenth segment in both sexes bears cerci and in males, its underside bears a pair of paraprocts.
Damselflies (except spreadwings, Lestidae) rest their wings together, above their bodies, whereas dragonflies rest with their wings spread diametrically apart; the spreadwings rest with their wings slightly apart. Damselflies have slenderer bodies than dragonflies, and their eyes do not overlap. Damselfly nymphs differ from dragonfly nymphs in that the epiproct and pair of paraprocts at the tip of their abdomen have been modified into caudal gills, in addition to being able to absorb oxygen through the wall of their rectum, whereas dragonflies breathe through internal rectal gills only. Damselfly nymphs swim by fish-like undulations, the gills functioning like a tail. Dragonfly nymphs can forcibly expel water in their rectum for rapid escape.
Distribution and diversity
Odonates are found on all the continents except Antarctica. Although some species of dragonfly have wide distributions, damselflies tend to have smaller ranges. Most odonates breed in freshwater; a few damselflies in the family Coenagrionidae breed in brackish water (and a single dragonfly species breeds in seawater). Dragonflies are more affected by pollution than are damselflies. The presence of odonates indicates that an ecosystem is of good quality. The most species-rich environments have a range of suitable microhabitats, providing suitable water bodies for breeding.
Although most damselflies live out their lives within a short distance of where they were hatched, some species, and some individuals within species, disperse more widely. Forktails in the family Coenagrionidae seem particularly prone to do this, large male boreal bluets (Enallagma boreale) in British Columbia often migrating, while smaller ones do not. These are known to leave their waterside habitats, flying upwards till lost from view, and presumably being dispersed to far off places by the stronger winds found at high altitudes. In this way they may appear in a locality where no damselflies were to be seen the day before. Rambur's forktail (Ischnura ramburii) has been found, for example, on oil rigs far out in the Gulf of Mexico.
The distribution and diversity of damselfly species in the biogeographical regions are summarized here. (There are no damselflies in the Antarctic.) Note that some species are widespread and occur in multiple regions.
Overall, there are about 2942 extant species of damselflies placed in 309 genera.
Biology
Adult damselflies catch and eat flies, mosquitoes, and other small insects. Often they hover among grasses and low vegetation, picking prey off stems and leaves with their spiny legs (unlike dragonflies which prefer catching flying prey). Although predominantly using vision to locate their prey, adults may also make use of olfactory cues. No species are known to hunt at night, but some are crepuscular, perhaps taking advantage of newly hatched flies and other aquatic insects at a time when larger dragonflies are roosting. In tropical South America, helicopter damselflies (Pseudostigmatidae) feed on spiders, hovering near an orb web and plucking the spider, or its entangled prey, from the web. There are few pools and lakes in these habitats, and these damselflies breed in temporary water bodies in holes in trees, the rosettes of bromeliads and even the hollow stems of bamboos.
The nymphs of damselflies have been less researched than their dragonfly counterparts, and many have not even been identified. They choose their prey according to size and seem less able to overpower larger prey than can dragonfly nymphs. The major part of the diet of most species appears to be crustaceans such as water fleas.
Ecology
Damselflies exist in a range of habitats in and around the wetlands needed for their larval development; these include open spaces for finding mates, suitable perches, open aspect, roosting sites, suitable plant species for ovipositing and suitable water quality. Odonates have been used for bio-indication purposes regarding the quality of the ecosystem. Different species have different requirements for their larvae with regard to water depth, water movement and pH.
The European common blue damselfly (Enallagma cyathigerum) for example can occur at high densities in acid waters where fish are absent, such as in bog pools.
The scarce blue-tailed damselfly (Ischnura pumilio) in contrast requires base-rich habitats and water with a slow flow-rate. It is found in ditches, quarries, seeps, flushes, marshes and pools. It tolerates high levels of zinc and copper in the sediment but requires suitable emergent plants for egg-laying without the water being choked by plants. Damselflies' dependence on freshwater habitats makes them very vulnerable to damage to wetlands through drainage for agriculture or urban growth.
In the tropics, the helicopter damselfly Mecistogaster modesta (Pseudostigmatidae) breeds in phytotelmata, the small bodies of water trapped by bromeliads, epiphytic plants of the rainforest of northwest Costa Rica, at the high density of some 6000 larvae per hectare in patches of secondary forest. Another tropical species, the cascade damselfly Thaumatoneura inopinata (Megapodagrionidae), inhabits waterfalls in Costa Rica and Panama.
Damselflies, both nymphs and adults, are eaten by a range of predators including birds, fish, frogs, dragonflies, other damselflies, water spiders, water beetles, backswimmers and giant water bugs.
Damselflies have a variety of internal and external parasites. Particularly prevalent are the gregarine protozoans found in the gut. In a study of the European common blue damselfly, every adult insect was infected at the height of the flying season. When present in large numbers, these parasites can cause death by blocking the gut. Water mites (Hydracarina) are often seen on the outside of both nymphs and adults, and can move from one to the other at metamorphosis. They suck the body fluids and may actually kill young nymphs, but adults are relatively unaffected; the mite must return to water to complete its life cycle, which it accomplishes when the adult damselfly breeds.
Behaviour
Many damselflies have elaborate courtship behaviours. These are designed to show off the male's distinctive characteristics, bright colouring or flying abilities, thus demonstrating his fitness. Calopteryx males will hover in front of a female with alternating fast and slow wingbeats; if she is receptive she will remain perched, otherwise she will fly off. The male river jewelwing (Calopteryx aequabilis) performs display flights in front of the female, fluttering his forewings while keeping his hindwings still, and raising his abdomen to reveal the white spots on his wings. Platycypha males will hover in front of a female, thrusting their bright white legs forward in front of their heads. Flattened tibia and bright leg colouring are seen in Platycnemis phasmovolans and a few other Platycnemididae, including the extinct Yijenplatycnemis huangi. Rhinocypha bobs up and down, often low over fast-flowing forested and shaded streams, displaying its bright-coloured body and wings. Some species (R. biceriata, R. humeralis) have a foot-waggling behaviour: they thrust a leg forward and vibrate it towards ovipositing females while in flight. Vibrating the tibia is seen in Libellago semiopaca despite it lacking bright colouration on the tibia, suggesting that foot waggling is a generalized excitatory signal in Chlorocyphidae damselflies. Foot waggling has been observed in Calopteryx sp., Platycypha fitzsimonsi, and Platycypha caligata. Male members of the family Protoneuridae with vividly coloured wings display these to visiting females. Swift forktail (Ischnura erratica) males display to each other with their blue-tipped abdomens raised.
Other behaviours observed in damselflies include wing-warning, wing-clapping, flights of attrition and abdominal bobbing. Wing-warning is a rapid opening and closing of the wings and is aggressive, while wing-clapping involves a slower opening of the wings followed by a rapid closure, up to eight times in quick succession, and often follows flight; it may serve a thermo-regulatory function. Flights of attrition are engaged in by the ebony jewelwing (Calopteryx maculata) and involve males bouncing around each other while flying laterally and continuing to do so, sometimes over a considerable distance, until one insect is presumably exhausted and gives up. Characteristics of displays and coloration of males are suggested to be the common cues used by females to choose mates. In at least one species, Mnais costalis, males with more sunlight in their territories had higher wing-beat frequency and were more likely to mate. Females preferred "hotter" males because they would be on warmer territories for egg laying.
At night, damselflies usually roost in dense vegetation, perching with the abdomen alongside a stem. If disturbed they will move around to the other side of the stem but will not fly off. Spreadwings fully fold their wings when roosting. The desert shadowdamsel (Palaemnema domina) aggregates to roost in thick places near streams in the heat of the day. While there it engages in wing-clapping, the exact function of which is unknown. Some species such as the rubyspot damselfly, Hetaerina americana, form night roosting aggregations, with a preponderance of males; this may have an anti-predator function or may be simply the outcome of choosing safe roosting sites.
Reproduction
Mating in damselflies, as in dragonflies, is a complex, precisely choreographed process involving both indirect insemination and delayed fertilisation. The male first has to attract a female to his territory, continually driving off rival males. When he is ready to mate, he transfers a packet of sperm from his primary genital opening on segment 9, near the end of his abdomen, to his secondary genitalia on segments 2–3, near the base of his abdomen. The male then grasps the female by the head with the claspers at the end of his abdomen; the structure of the claspers varies between species, and may help to prevent interspecific mating. The pair fly in tandem with the male in front, typically perching on a twig or plant stem. The female then curls her abdomen downwards and forwards under her body to pick up the sperm from the male's secondary genitalia, while the male uses his "tail" claspers to grip the female behind the head: this distinctive posture is called the "heart" or "wheel"; the pair may also be described as being "in cop". Males may transfer the sperm to their secondary genitalia either before a female is held, in the early stage when the female is held by the legs or after the female is held between the terminal claspers. This can lead to variations in the tandem postures. The spermatophore may also contain nutrients in addition to sperm, as a "nuptial gift". Some cases of sexual cannibalism exist where females (of Ischnura graellsii) eat males while in copula.
Parthenogenesis (reproduction from unfertilised eggs) is exceptional, and has only been recorded in nature in female Ischnura hastata on the Azores Islands.
Egg-laying (ovipositing) involves not only the female darting over floating or waterside vegetation to deposit eggs on a suitable substrate, but the male hovering above her, mate-guarding, or in some species continuing to clasp her and flying in tandem. The male attempts to prevent rivals from removing his sperm and inserting their own, a form of sperm competition (the sperm of the last male to mate has the greatest chance of fertilizing the eggs, also known as sperm precedence) made possible by delayed fertilisation and driven by sexual selection. If successful, a rival male uses his penis to compress or scrape out the sperm inserted previously; this activity takes up much of the time that a copulating pair remain in the heart posture. Flying in tandem has the advantage that less effort is needed by the female for flight and more can be expended on egg-laying, and when the female submerges to deposit eggs, the male may help to pull her out of the water.
All damselflies lay their eggs inside plant tissues; those that lay eggs underwater may submerge themselves for 30 minutes at a time, climbing along the stems of aquatic plants and laying eggs at intervals. For example, the red-eyed damselfly Erythromma najas lays eggs, in tandem, into leaves or stems of floating or sometimes emergent plants; in contrast, the scarce bluetail Ischnura pumilio oviposits alone, the female choosing mostly emergent grasses and rushes, and laying her eggs in their stems either above or just below the waterline. The willow emerald Chalcolestes viridis (a spreadwing) is unusual in laying eggs only in woody plant tissue, choosing thin twigs of trees that hang over water, and scarring the bark in the process. A possible exception is an apparent instance of ovo-viviparity, in which Heliocypha perforata was filmed in western China depositing young larvae (presumably hatched from eggs inside the female's body) onto a partly submerged branch of a tree.
Many damselflies are able to produce more than one brood per year (voltinism); this is negatively correlated with latitude, becoming more common towards the equator, except in the Lestidae.
Life cycle
Damselflies are hemimetabolous insects that have no pupal stage in their development. The female inserts the eggs by means of her ovipositor into slits made in water plants or other underwater substrates and the larvae, known as naiads or nymphs, are almost all completely aquatic. Exceptions include the Hawaiian Megalagrion oahuense and an unidentified Megapodagrionid from New Caledonia, which are terrestrial in their early stages. The spreadwings lay eggs above the waterline late in the year and the eggs overwinter, often covered by snow. In spring they hatch out in the meltwater pools and the nymphs complete their development before these temporary pools dry up.
The nymphs are voracious predators and feed by means of a flat labium (a toothed mouthpart on the lower jaw) that forms the so-called mask; it is rapidly extended to seize and pierce the Daphnia (water fleas), mosquito larvae, and other small aquatic organisms on which damselfly nymphs feed. They breathe by means of three large external, fin-like gills on the tip of the abdomen, and these may also serve for locomotion in the same manner as a fish's tail. Compared to dragonfly larvae, the nymphs show little variation in form. They tend to be slender and elongate, many having morphological adaptations for holding their position in fast flowing water. They are more sensitive than dragonfly nymphs to oxygen levels and suspended fine particulate matter, and do not bury themselves in the mud.
The nymphs proceed through about a dozen moults as they grow. In the later stages, the wing pads become visible. When fully developed, the nymphs climb out of the water and take up a firm stance, the skin on the thorax splits and the adult form wriggles out. This has a soft body at first and hangs or stands on its empty larval case. It pumps haemolymph into its small limp wings, which expand to their full extent. The haemolymph is then pumped back into the abdomen, which also expands fully. The exoskeleton hardens and the colours become more vivid over the course of the next few days. Most damselflies emerge in daytime, and in cool conditions the process takes several hours. On a hot day, the cuticle hardens rapidly and the adult can be flying away within half an hour.
Conservation
Conservation of Odonata has usually concentrated on the more iconic suborder Anisoptera, the dragonflies. However, the two suborders largely have the same needs, and what is good for dragonflies is also good for damselflies. The main threats experienced by odonates are the clearance of forests, the pollution of waterways, the lowering of groundwater levels, the damming of rivers for hydroelectric schemes and the general degradation of wetlands and marshes. The clearance of tropical rainforests is of importance because the rate of erosion increases, streams and pools dry up and waterways become clogged with silt. The presence of alien species can also have unintended consequences. In Hawaii, the introduction of the mosquitofish (Gambusia affinis) was effective in controlling mosquitoes but nearly exterminated the island's endemic damselflies. The ancient greenling Hemiphlebia mirabilis has been an important flagship species for conservation action in preserving its habitat in Australia.
In culture
Fishing flies that mimic damselfly nymphs are sometimes used in wet-fly fishing, where the hook and line are allowed to sink below the surface.
Damselflies have formed subjects for personal jewellery such as brooches since at least 1880.
Damselfly is a 2005 short film directed by Ben O'Connor.
Damselfly is the title of a 2012 novel in the Faeble series by S. L. Naeole and of a 2018 novel by Chandra Prasad.
Modern poems with the damselfly as a subject include a 1994 poem by August Kleinzahler, which contains the lines "And that blue there, cobalt / a moment, then iridescent, / fragile as a lady's pin / hovering above the nasturtium?" The poet John Engels published “Damselfly, Trout, Heron” in his 1983 collection Weather-Fear: New and Selected Poems.
| Biology and health sciences | Odonata | null |
65037 | https://en.wikipedia.org/wiki/Tire | Tire | A tire (British spelling: tyre) is a ring-shaped component that surrounds a wheel's rim to transfer a vehicle's load from the axle through the wheel to the ground and to provide traction on the surface over which the wheel travels. Most tires, such as those for automobiles and bicycles, are pneumatically inflated structures, providing a flexible cushion that absorbs shock as the tire rolls over rough features on the surface. Tires provide a footprint, called a contact patch, designed to match the vehicle's weight and the bearing on the surface that it rolls over by exerting a pressure that will avoid deforming the surface.
The materials of modern pneumatic tires are synthetic rubber, natural rubber, fabric, and wire, along with carbon black and other chemical compounds. They consist of a tread and a body. The tread provides traction while the body provides containment for a quantity of compressed air. Before rubber was developed, tires were metal bands fitted around wooden wheels to hold the wheel together under load and to prevent wear and tear. Early rubber tires were solid (not pneumatic). Pneumatic tires are used on many vehicles, including cars, bicycles, motorcycles, buses, trucks, heavy equipment, and aircraft. Metal tires are used on locomotives and railcars, and solid rubber (or other polymers) tires are also used in various non-automotive applications, such as casters, carts, lawnmowers, and wheelbarrows.
Unmaintained tires can lead to severe hazards for vehicles and people, ranging from flat tires making the vehicle inoperable to blowouts, where tires explode during operation and possibly damage vehicles and injure people. The manufacture of tires is often highly regulated for this reason. Because of the widespread use of tires for motor vehicles, tire waste is a substantial portion of global waste. There is a need for tire recycling through mechanical recycling and reuse, such as for crumb rubber and other tire-derived aggregate, and pyrolysis for chemical reuse, such as for tire-derived fuel. If not recycled properly or burned, waste tires release toxic chemicals into the environment. Moreover, the regular use of tires produces micro-plastic particles containing these chemicals, which both enter the environment and affect human health.
Etymology and spelling
The word tire is a short form of attire, from the idea that a wheel with a tire is a dressed wheel.
Tyre is the oldest spelling, and both tyre and tire were used during the 15th and 16th centuries. During the 17th and 18th centuries, tire became more common in print. The spelling tyre did not reappear until the 1840s when the English began shrink-fitting railway car wheels with malleable iron. Nevertheless, many publishers continued using tire. The Times newspaper in London was still using tire as late as 1905. The spelling tyre began to be commonly used in the 19th century for pneumatic tires in the UK. The 1911 edition of the Encyclopædia Britannica states that "The spelling 'tyre' is not now accepted by the best English authorities, and is unrecognized in the US", while Fowler's Modern English Usage of 1926 describes that "there is nothing to be said for 'tyre', which is etymologically wrong, as well as needlessly divergent from our own [sc. British] older & the present American usage". However, over the 20th century, tyre became established as the standard British spelling.
History
The earliest tires were bands of leather, then iron (later steel) placed on wooden wheels used on carts and wagons. A skilled worker, known as a wheelwright, would cause the tire to expand by heating it in a forge fire, placing it over the wheel, and quenching it, causing the metal to contract back to its original size to fit tightly on the wheel.
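As an order-of-magnitude illustration of why heating the tire works (the wheel size, temperature rise, and expansion coefficient below are assumed values for the example, not figures from this article), linear thermal expansion gives

\Delta C = \alpha \, C \, \Delta T \approx (12 \times 10^{-6}\ \mathrm{K^{-1}}) \times (4.7\ \mathrm{m}) \times (500\ \mathrm{K}) \approx 0.028\ \mathrm{m},

so an iron tire forged to fit a wheel roughly 1.5 m in diameter (circumference about 4.7 m) gains roughly 28 mm of circumference, or about 9 mm of diameter, when heated in the forge; quenching removes that allowance and clamps the tire tightly onto the wheel.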
The first patent for what appears to be a standard pneumatic tire appeared in 1847 and was lodged by Scottish inventor Robert William Thomson. However, this idea never went into production. The first practical pneumatic tire was made in 1888 on May Street, Belfast, by Scots-born John Boyd Dunlop, owner of one of Ireland's most prosperous veterinary practices. It was an effort to prevent the headaches of his 10-year-old son Johnnie while riding his tricycle on rough pavements. His doctor, John, later Sir John Fagan, had prescribed cycling as an exercise for the boy and was a regular visitor. Fagan participated in designing the first pneumatic tires. Cyclist Willie Hume demonstrated the supremacy of Dunlop's tires in 1889, winning the tire's first-ever races in Ireland and then England. In Dunlop's tire patent specification dated 31 October 1888, his interest is only in its use in cycles and light vehicles. In September 1890, he was made aware of an earlier development, but the company kept the information to itself.
In 1892, Dunlop's patent was declared invalid because of the prior art by forgotten fellow Scot Robert William Thomson of London (patents London 1845, France 1846, USA 1847). However, Dunlop is credited with "realizing rubber could withstand the wear and tear of being a tire while retaining its resilience". John Boyd Dunlop and Harvey du Cros worked through the ensuing considerable difficulties. They employed inventor Charles Kingston Welch and acquired other rights and patents, which allowed them some limited protection of their Pneumatic Tyre business's position. Pneumatic Tyre would become Dunlop Rubber and Dunlop Tyres. The development of this technology hinged on myriad engineering advances, including the vulcanization of natural rubber using sulfur, as well as the development of the "clincher" rim for holding the tire in place laterally on the wheel rim.
Synthetic rubbers were invented in the laboratories of Bayer in the 1920s. Rubber shortages in the United Kingdom during WWII prompted research on alternatives to rubber tires with suggestions including leather, compressed asbestos, rayon, felt, bristles, and paper.
In 1946, Michelin developed the radial tire method of construction. Michelin had bought the bankrupt Citroën automobile company in 1934 to utilize this new technology. Because of its superiority in handling and fuel economy, use of this technology quickly spread throughout Europe and Asia. In the US, the outdated bias-ply tire construction persisted until the Ford Motor Company adopted radial tires in the early 1970s, following a 1968 article in an influential American magazine, Consumer Reports, highlighting the superiority of radial construction. The US tire industry lost its market share to Japanese and European manufacturers, which bought out US companies.
Applications
Tires may be classified according to the type of vehicle they serve. They may be distinguished by the load they carry and by their application, e.g. to a motor vehicle, aircraft, or bicycle.
Automotive
Light–medium duty
Light-duty tires for passenger vehicles carry loads in the range of on the drive wheel. Light-to-medium duty trucks and vans carry loads in the range of on the drive wheel. They are differentiated by speed rating for different vehicles, including (starting from the lowest speed to the highest): winter tires, light truck tires, entry-level car tires, sedans and vans, sport sedans, and high-performance cars. Apart from road tires, there are special categories:
Snow tires are designed for use on snow and ice. They have a tread design with larger gaps than those on summer tires, increasing traction on snow and ice. Such tires that have passed a specific winter traction performance test are entitled to display a "Three-Peak Mountain Snow Flake" symbol on their sidewalls. Tires designed for winter conditions are optimized to drive at temperatures below . Some snow tires have metal or ceramic studs that protrude from the tire to increase traction on hard-packed snow or ice. Studs abrade dry pavement, causing dust and creating wear in the wheel path. Regulations that require the use of snow tires or permit the use of studs vary by country in Asia and Europe, and by state or province in North America.
All-season tires are typically rated for mud and snow (M+S). These tires have tread gaps that are smaller than snow tires and larger than conventional tires. They are quieter than snow tires on clear roads, but less capable on snow or ice.
All-terrain tires are designed to have adequate traction off-road, yet have benign handling and noise characteristics for highway driving. Such tires are rated better on snow and rain than street tires and "good" on ice, rock, and sand.
Mud-terrain tires have a deeper, more open tread for good grip in mud, than all-terrain tires, but perform less well on pavement.
High-performance tires are rated for speeds up to and ultra-high-performance tires are rated for speeds up to , but have harsher ride characteristics and durability.
Electric vehicles have unique demands on tires due to the combination of weight (resulting in new load index), higher torque, and requirements for lower rolling resistance.
Other types of light-duty automotive tires include run-flat tires and race car tires:
Run-flat tires eliminate the need for a spare tire because they can be driven on at a reduced speed in the event of a puncture, using a stiff sidewall to prevent damage to the tire rim. Vehicles without run-flat tires rely on a spare tire, which may be a compact tire, to replace a damaged tire.
Race car tires come in three main categories, DOT (street-legal), slick, and rain. Race car tires are designed to maximize cornering and acceleration friction at the expense of longevity. Racing slicks have no tread to maximize contact with the pavement and rain tires have channels to eject water to avoid hydroplaning.
Heavy duty
Heavy-duty tires for large trucks and buses come in a variety of profiles and carry loads in the range of on the drive wheel. These are typically mounted in tandem on the drive axle.
Truck tires come in a variety of profiles that include "low profile" with a section height that is 70 to 45% of the tread width, "wide-base" for heavy vehicles, and a "super-single" tire that has the same total contact pressure as a dual-mounted tire combination.
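As a small sketch of what those profile percentages mean (the tire width used below is a hypothetical example, not a figure from this article), the section height is simply the width multiplied by the profile ratio:

def section_height_mm(tread_width_mm: float, profile_percent: float) -> float:
    # Section (sidewall) height expressed as a percentage of the tread width.
    return tread_width_mm * profile_percent / 100.0

# The same hypothetical 295 mm wide truck tire at a conventional and a "low profile" ratio.
for profile in (70, 45):
    print(profile, section_height_mm(295.0, profile))  # 206.5 mm vs 132.75 mm

Lowering the profile at a given width therefore shortens the sidewall, giving a wider, flatter tire cross-section.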
Off-road tires are used on construction vehicles, agricultural and forestry equipment, and other applications that take place on soft terrain. The category also includes machinery that travels over hardened surfaces at industrial sites, ports, and airports. Tires designed for soft terrain have a deep, wide tread to provide traction in loose dirt, mud, sand, or gravel.
Other
Aircraft, bicycles, and a variety of industrial applications have distinct design requirements.
Aircraft tires are designed for landing on paved surfaces and rely on their landing gear to absorb the shock of landing. To conserve the weight and space required, they are typically small in proportion to the vehicle that they support. Most are radial-ply construction. They are designed for a peak load when the aircraft is stationary, although side loads upon landing are an important factor. Although hydroplaning is a concern for aircraft tires, they typically have radial grooves and no lateral grooves or sipes. Some light aircraft employ large-diameter, low-pressure tundra tires for landing on unprepared surfaces in wilderness areas.
Bicycle tires may be designed for riding on roads or over unimproved terrain and may be mounted on vehicles with more than two wheels. There are three main types: clincher, wired and tubular. Most bicycle tires are clincher and have a bead that presses against the wheel rim. An inner tube provides the air pressure and the contact pressure between the bead and wheel rim.
Industrial tires support such vehicles as forklifts, tractors, excavators, road rollers, and bucket loaders. Those used on smooth surfaces have a smooth tread, whereas those used on soft surfaces typically have large tread features. Some industrial tires are solid or filled with foam.
Motorcycle tires provide traction, resist wear, absorb surface irregularities, and allow the motorcycle to turn via countersteering. The two tires' contact with the ground affects safety, braking, fuel economy, noise, and rider comfort.
Construction types
Tire construction spans pneumatic tires used on cars, trucks, and aircraft, but also includes non-automotive applications with slow-moving, light-duty, or railroad applications, which may have non-pneumatic tires.
Automotive
Following the 1968 Consumer Reports announcement of the superiority of the radial design, radial tires began an inexorable climb in market share, reaching 100% of the North American market in the 1980s. Radial tire technology is now the standard design for essentially all automotive tires, but other methods have been used.
Radial (or radial-ply) tire construction utilizes body ply cords extending straight across the tread from bead to bead—so that the cords are laid at approximately right angles to the centerline of the tread, and parallel to one another—as well as stabilizer belts directly beneath the tread. The plies are generally made of nylon, polyester, or steel, and the belts of steel, fiberglass, or Kevlar. The tire’s footprint, wider than a bias tire’s, and flexible sidewalls provide a better grip in turns, and its circumferential belts stabilize it. The advantages of this construction over that of a bias tire are many, including longer tread life, better steering control, lower rolling resistance, improved fuel economy, more uniform wear, higher heat resistance, fewer blowouts, and a steadier, more comfortable ride at speed. Disadvantages, besides a higher cost than that of bias tires, are a harder ride at low speeds and generally worse performance on rough terrain. Radial tires are also seldom seen in diameters of greater than 42 inches, as such tires are difficult to make.
A bias tire (bias-ply, or cross-ply) construction utilizes body ply cords that extend diagonally from bead to bead, usually at angles in the range of 30 to 40 degrees from the direction of travel. Successive plies are laid at opposing angles, forming a crisscross pattern to which the tread is applied. Such a design is resistant to sidewall deformation and punctures (and to punctures' expansion, or "torque splitting") and is therefore durable in severe use. Since the tread and sidewalls share their casing plies, the tire body flexes as a whole, providing the main advantage of this construction: better traction and smoother motion on uneven surfaces, with a greater tendency to conform to rocky ground and throw off mud and clay, especially because the rubber is usually of a softer compound than that used on radial tires. However, this conformity increases a bias tire's rolling resistance, and its stiffness allows less control, traction, and comfort at higher speeds, while shear between its overlapping plies causes friction that generates heat. Still, bias tires benefit from a simpler structure and so cost less than like-size radials, and they remain in use on heavy equipment and off-road vehicles, although the earthmoving market has shifted to radials.
A belted bias tire starts with two or more bias plies to which stabilizer belts are bonded directly beneath the tread. This construction provides a smooth ride similar to that of a bias tire while lessening rolling resistance, because the belts increase tread stiffness. The design was introduced by Armstrong, while Goodyear made it popular with the "Polyglas" trademark tire, featuring a polyester carcass with belts of fiberglass. The belted tire starts with two main plies of polyester, rayon, or nylon, annealed as in conventional tires; circumferential belts at different angles are then placed on top, improving performance compared with non-belted bias tires. The belts may be fiberglass or steel.
Other
Tubeless tires are pneumatic tires that do not require a separate inner tube.
Semi-pneumatic tires have a hollow center, but they are not pressurized. They are lightweight, low-cost, puncture-proof, and provide cushioning. These tires often come as a complete assembly with the wheel and even integral ball bearings. They are used on lawn mowers, wheelchairs, and wheelbarrows. They can also be rugged, typically used in industrial applications, and are designed to not pull off their rim under use.
An airless tire is a non-pneumatic tire that is not supported by air pressure. They are most commonly used on small vehicles, such as golf carts, and on utility vehicles in situations where the risk of puncture is high, such as on construction equipment. Many tires used in industrial and commercial applications are non-pneumatic, and are manufactured from solid rubber and plastic compounds via molding operations. Solid tires include those used for lawnmowers, skateboards, golf carts, scooters, and many types of light industrial vehicles, carts, and trailers. One of the most common applications for solid tires is for material handling equipment (forklifts). Such tires are installed utilizing a hydraulic tire press.
Wooden wheels for horse-drawn vehicles usually have a wrought iron tire. This construction was extended to wagons on horse-drawn tramways, rolling on granite setts or cast iron rails.
The wheels of some railway engines and older types of rolling stock are fitted with railway tires to prevent the need to replace the entirety of a wheel. The tire, usually made of steel, surrounds the wheel and is primarily held in place by interference fit.
Aircraft tires may operate at pressures that exceed . Some aircraft tires are inflated with nitrogen to "eliminate the possibility of a chemical reaction between atmospheric oxygen and volatile gases from the tire inner liner producing a tire explosion".
Manufacturing
Pneumatic tires are manufactured in about 450 tire factories around the world. Tire production starts with bulk raw materials such as rubber, carbon black, and chemicals and produces numerous specialized components that are assembled and cured. Many kinds of rubber are used, the most common being styrene-butadiene copolymer.
Forecasts for the global automotive tire market indicate continued growth through 2027. Estimates put worldwide sales at about $126 billion in 2022, a figure expected to exceed $176 billion by 2027. Tire production is also growing. In 2015, the US manufactured almost 170 million tires, and over 2.5 billion tires are manufactured annually worldwide, making the tire industry a major consumer of natural rubber. It was estimated that from 2019 onwards at least 3 billion tires would be sold globally every year; however, other estimates put worldwide production at 2,268 million tires in 2021, predicted to reach 2,665 million by 2027.
As of 2011, the top three tire manufacturing companies by revenue were Bridgestone (manufacturing 190 million tires), Michelin (184 million), and Goodyear (181 million); they were followed by Continental and Pirelli. The Lego Group produced over 318 million toy tires in 2011 and was recognized by Guinness World Records as having the highest annual production of tires by any manufacturer.
Components
A tire comprises several components: the tread, bead, sidewall, shoulder, and ply.
Tread
The tread is the part of the tire that comes in contact with the road surface. The portion that is in contact with the road at a given instant in time is the contact patch. The tread is a thick rubber, or rubber/composite compound formulated to provide an appropriate level of traction that does not wear away too quickly.
The tread pattern is characterized by a system of circumferential grooves, lateral sipes, and slots for road tires or a system of lugs and voids for tires designed for soft terrain or snow. Grooves run circumferentially around the tire and are needed to channel away water. Lugs are that portion of the tread design that contacts the road surface. Grooves, sipes, and slots allow tires to evacuate water.
The design of treads and the interaction of specific tire types with the roadway surface affect roadway noise, a source of noise pollution emanating from moving vehicles. These sound intensities increase with higher vehicle speeds. Tire treads may incorporate a variety of distances between slots (pitch lengths) to minimize noise levels at discrete frequencies. Sipes are slits cut across the tire, usually perpendicular to the grooves, which allow the water from the grooves to escape sideways and mitigate hydroplaning.
Different tread designs address a variety of driving conditions. As the ratio of tire tread area to groove area increases, so does tire friction on dry pavement, as seen on Formula One tires, some of which have no grooves. High-performance tires often have smaller void areas to provide more rubber in contact with the road for higher traction, but may be compounded with softer rubber that provides better traction, but wears quickly. Mud and snow (M&S) tires employ larger and deeper slots to engage mud and snow. Snow tires have still larger and deeper slots that compact snow and create shear strength within the compacted snow to improve braking and cornering performance.
Wear bars (or wear indicators) are raised features located at the bottom of the tread grooves that indicate the tire has reached its wear limit. When the tread lugs are worn to the point that the wear bars connect across the lugs, the tires are fully worn and should be taken out of service, typically at a remaining tread depth of .
Other
The tire bead is the part of the tire that contacts the rim on the wheel. It is constructed of high-strength steel cables encased in a rubber compound formulated to resist stretching. A precise fit is crucial: the bead seals the tire against the wheel, maintaining air pressure, and grips the rim so that the tire cannot rotate independently of the wheel during vehicle motion. The relationship between the bead's dimensions and the wheel's width also influences steering response and stability, as it helps maintain the tire's intended shape and contact with the road.
The sidewall is that part of the tire, or bicycle tire, that bridges between the tread and bead. The sidewall is largely rubber but reinforced with fabric or steel cords that provide for tensile strength and flexibility. The sidewall contains air pressure and transmits the torque applied by the drive axle to the tread to create traction but supports little of the weight of the vehicle, as is clear from the total collapse of the tire when punctured.
Sidewalls are molded with manufacturer-specific detail, government-mandated warning labels, and other consumer information.
Sidewalls may also carry decorative ornamentation, such as whitewall or red-line inserts, as well as tire lettering.
The shoulder is that part of the tire at the edge of the tread as it makes the transition to the sidewall.
Plies are layers of relatively inextensible cords embedded in the rubber to hold its shape by preventing the rubber from stretching in response to the internal pressure. The orientations of the plies play a large role in the performance of the tire and are one of the main ways that tires are categorized.
Blems
Blem (short for "blemished") is a term for a tire that failed inspection during manufacturing for purely superficial or cosmetic reasons. For example, a tire with smudged or incomplete white painted lettering might be classified as a "blem". Blem tires are fully functional, generally carry the same warranty as flawless tires, and are sold at a discount.
Materials
The materials of modern pneumatic tires can be divided into two groups, the cords that make up the ply and the elastomer which encases them.
Cords
The cords, which form the ply and bead and provide the tensile strength necessary to contain the inflation pressure, can be composed of steel, natural fibers such as cotton or silk, or synthetic fibers such as nylon or Kevlar. Good adhesion between the cords and the rubber is important. To achieve this, steel cords are coated with a thin layer of brass, and various additives, such as resorcinol/HMMM mixtures, are added to the rubber to improve binding.
Elastomer
The elastomer, which forms the tread and encases the cords to protect them from abrasion and hold them in place, is a key component of pneumatic tire design. It can be composed of various composites of rubber material – the most common being styrene-butadiene copolymer – with other chemical compounds such as silica and carbon black.
Optimizing rolling resistance in the elastomer material is a key challenge for reducing fuel consumption in the transportation sector. It is estimated that passenger vehicles consume approximately 5 to 15% of their fuel to overcome rolling resistance, and the figure is understood to be higher for heavy trucks. However, there is a trade-off between rolling resistance and wet traction and grip: while low rolling resistance can be achieved by reducing the viscoelastic losses of the rubber compound (low loss tangent, tan δ), it comes at the cost of wet traction and grip, which require hysteresis and energy dissipation (high tan δ). A low tan δ value at 60 °C is used as an indicator of low rolling resistance, while a high tan δ value at 0 °C is used as an indicator of high wet traction. Designing an elastomer material that achieves both high wet traction and low rolling resistance is key to achieving safety and fuel efficiency in the transportation sector.
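As a rough mathematical restatement of this trade-off, the loss tangent used as the indicator above is the standard ratio of the loss modulus to the storage modulus of the compound; the reference temperatures are those quoted in the text, used here for illustration rather than as fixed industry thresholds.

```latex
% Loss tangent of a viscoelastic tread compound (standard definition)
\tan\delta(\omega, T) = \frac{E''(\omega, T)}{E'(\omega, T)}
% E'': loss (dissipative) modulus, E': storage (elastic) modulus.
% Reading the indicator described above:
%   low  \tan\delta near 60\,^{\circ}\mathrm{C}: little hysteresis at rolling conditions, hence low rolling resistance
%   high \tan\delta near 0\,^{\circ}\mathrm{C}:  strong hysteresis at wet-skid conditions, hence better wet grip
```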
The most common elastomer material used today is a styrene-butadiene copolymer. It combines the properties of polybutadiene, which is a highly rubbery polymer (Tg = -100 °C) having high hysteresis and thus offering good wet grip properties, with the properties of polystyrene, which is a glassy polymer (Tg = 100 °C) having low hysteresis and thus offering low rolling resistance in addition to wear resistance. Therefore, the ratio of the two monomers in the styrene-butadiene copolymer is considered key in determining the glass transition temperature of the material, which is correlated to its grip and resistance properties.
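As an illustrative estimate only (real tread compounds contain oil, filler, and other ingredients that shift the result), the glass transition temperature of a styrene-butadiene copolymer can be approximated from the monomer weight fractions with the Fox equation, using the homopolymer values quoted above.

```latex
% Fox equation for the glass transition of a random copolymer (temperatures in kelvin)
\frac{1}{T_g} = \frac{w_1}{T_{g,1}} + \frac{w_2}{T_{g,2}}
% Example with the homopolymer values from the text (weight fractions are assumed for illustration):
%   polybutadiene: T_{g,1} \approx 173\,\mathrm{K}\ (-100\,^{\circ}\mathrm{C}),\ w_1 = 0.75
%   polystyrene:   T_{g,2} \approx 373\,\mathrm{K}\ (+100\,^{\circ}\mathrm{C}),\ w_2 = 0.25
% \frac{1}{T_g} = \frac{0.75}{173} + \frac{0.25}{373} \approx 5.0\times10^{-3}\,\mathrm{K}^{-1}
% \Rightarrow T_g \approx 200\,\mathrm{K} \approx -73\,^{\circ}\mathrm{C}
% Raising the styrene fraction raises T_g, trading rolling resistance for grip, as described above.
```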
Non-exhaust emissions of particulate matter, generated by the wearing down of brakes, clutches, tires, and road surfaces, as well as by the suspension of road dust, constitute a little-known but rising share of emissions from road traffic and significantly harm public health.
On the wheel
Associated components of tires include the wheel on which it is mounted, the valve stem through which air is introduced, and, for some tires, an inner tube that provides the airtight means for maintaining tire pressure.
Wheel: Pneumatic tires are mounted onto wheels that most often have integral rims on their outer edges to hold the tire. Automotive wheels are typically made from pressed and welded steel, or a composite of lightweight metal alloys, such as aluminum or magnesium. There are two aspects to how pneumatic tires support the rim of the wheel on which they are mounted. First, the tension in the cords pulls on the bead uniformly around the wheel, except where it is reduced above the contact patch. Second, the bead transfers that net force to the rim. Tires are mounted on the wheel by forcing the beads into the channel formed by the wheel's inner and outer rims.
Valve stem: Pneumatic tires receive their air through a valve stem—a tube made of metal or rubber, with a check valve, typically a Schrader valve on automobiles and most bicycle tires, or a Presta valve on high-performance bicycles. They mount directly to the rim, in the case of tubeless tires, or are an integral part of the inner tube. Most modern passenger vehicles are now required to have a tire pressure monitoring system which usually consists of a valve stem attached to an electronic module.
Inner tube: Most bicycle tires, many motorcycle tires, and many tires for large vehicles such as buses, heavy trucks, and tractors are designed for use with inner tubes. Inner tubes are torus-shaped balloons made from an impermeable material, such as soft, elastic synthetic rubber, to prevent air leakage. The inner tubes are inserted into the tire and inflated to retain air pressure. Large inner tubes can be reused for other purposes, such as swimming and rafting (see swim ring), tubing (recreation), sledding, and skitching. Purpose-built inflatable tori are also manufactured for these uses, offering a choice of colors, fabric covering, handles, decks, and other accessories, and eliminating the protruding valve stem.
Performance characteristics
The interactions of a tire with the pavement are complex. A commonly used (empirical) model of tire properties is Pacejka's "Magic Formula". Some of these performance characteristics are explained below, alphabetically, by section.
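A minimal sketch of the basic "Magic Formula" in Python, for illustration only: the coefficient values below are placeholders of a plausible order of magnitude, not parameters for any real tire (real coefficients are fitted to measured tire data).

```python
import math

def magic_formula_lateral_force(slip_angle_deg, Fz,
                                B=10.0, C=1.9, D_mu=1.0, E=0.97):
    """Basic form of Pacejka's Magic Formula:
        F = D * sin(C * atan(B*x - E*(B*x - atan(B*x))))
    slip_angle_deg : slip angle in degrees (converted to radians below)
    Fz             : vertical load on the tire, in newtons
    B, C, E        : stiffness, shape, and curvature factors (placeholder values)
    D_mu           : assumed peak friction coefficient, so the peak force D = D_mu * Fz
    Returns the lateral force in newtons.
    """
    x = math.radians(slip_angle_deg)
    D = D_mu * Fz
    Bx = B * x
    return D * math.sin(C * math.atan(Bx - E * (Bx - math.atan(Bx))))

# Example: lateral force at a 4-degree slip angle under a 4000 N wheel load.
print(round(magic_formula_lateral_force(4.0, 4000.0)))  # roughly 3500 N with these placeholders
```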
Dynamics
Balance: Wheel-tire combinations require an even distribution of mass around their circumferences to maintain tire balance, while turning at speed. Tires are checked at the point of manufacture for excessive static imbalance and dynamic imbalance using automatic tire balance machines. Tires are checked again in the auto assembly plant or tire retail shop after mounting the tire to the wheel. Assemblies that exhibit excessive imbalance are corrected by applying balance weights to the wheels to counteract the tire/wheel imbalance. An alternative method to tire balancing is the use of internal tire balancing agents. These agents take advantage of centrifugal force and inertia to counteract the tire imbalance. To facilitate proper balancing, most high-performance tire manufacturers place red and yellow marks on the sidewalls to enable the best possible match-mounting of the tire/wheel assembly. There are two methods of match-mounting high-performance tire-to-wheel assemblies using these red (uniformity) or yellow (weight) marks.
Centrifugal growth: A tire rotating at higher speeds tends to develop a larger diameter, due to centrifugal forces that force the tread rubber away from the axis of rotation. This may cause speedometer error. As the tire diameter grows, the tire width decreases. This centrifugal growth can cause rubbing of the tire against the vehicle at high speeds. Motorcycle tires are often designed with reinforcements aimed at minimizing centrifugal growth.
Pneumatic trail: Pneumatic trail of a tire is the trail-like effect generated by compliant tires rolling on a hard surface and subject to side loads, as in a turn. More technically, it is the distance that the resultant force of side-slip occurs behind the geometric center of the contact patch.
Slip angle: Slip angle or sideslip angle is the angle between a rolling wheel's actual direction of travel and the direction towards which it is pointing (i.e., the angle of the vector sum of wheel translational velocity and sideslip velocity).
Relaxation length: Relaxation length is the delay between when a slip angle is introduced and when the cornering force reaches its steady-state value.
Spring rate: Vertical stiffness, or spring rate, is the ratio of vertical force to vertical deflection of the tire, and it contributes to the overall suspension performance of the vehicle. In general, the spring rate increases with inflation pressure.
Stopping distance: Performance-oriented tires have a tread pattern and rubber compounds designed to grip the road surface, and so usually have a slightly shorter stopping distance. However, specific braking tests are necessary for data beyond generalizations.
Forces
Camber thrust: Camber thrust and camber force are the force generated perpendicular to the direction of travel of a rolling tire due to its camber angle and finite contact patch.
Circle of forces: The circle of forces, traction circle, friction circle, or friction ellipse is a useful way to think about the dynamic interaction between a vehicle's tire and the road surface.
Contact patch: The contact patch, or footprint, of the tire, is the area of the tread that is in contact with the road surface. This area transmits forces between the tire and the road via friction. The length-to-width ratio of the contact patch affects steering and cornering behavior.
Cornering force: Cornering force or side force is the lateral (i.e. parallel to the road surface) force produced by a vehicle tire during cornering.
Dry traction: Dry traction is the measure of the tire's ability to deliver traction, or grip, under dry conditions. Dry traction is a function of the tackiness of the rubber compound.
Force variation: The tire tread and sidewall elements undergo deformation and recovery as they enter and exit the footprint. Since the rubber is elastomeric, it is deformed during this cycle. As the rubber deforms and recovers, it imparts cyclical forces into the vehicle. These variations are collectively referred to as tire uniformity. Tire uniformity is characterized by radial force variation (RFV), lateral force variation (LFV), and tangential force variation. Radial and lateral force variation is measured on a force variation machine at the end of the manufacturing process. Tires outside the specified limits for RFV and LFV are rejected. Geometric parameters, including radial runout, lateral runout, and sidewall bulge, are measured using a tire uniformity machine at the tire factory at the end of the manufacturing process as a quality check.
Rolling resistance: Rolling resistance is the resistance to rolling caused by deformation of the tire in contact with the road surface. As the tire rolls, the tread enters the contact area and is deformed flat to conform to the roadway. The energy required to make the deformation depends on the inflation pressure, rotating speed, and numerous physical properties of the tire structure, such as spring force and stiffness. Tire makers seek lower rolling resistance tire constructions to improve fuel economy in cars and especially trucks, where rolling resistance accounts for a high proportion of fuel consumption. Pneumatic tires also have a much lower rolling resistance than solid tires. Because the internal air pressure acts in all directions, a pneumatic tire is able to "absorb" bumps in the road as it rolls over them without experiencing a reaction force opposite to the direction of travel, as is the case with a solid (or foam-filled) tire.
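A back-of-the-envelope sketch of the rolling resistance just described, assuming the common first-order model in which the resisting force is a dimensionless rolling resistance coefficient multiplied by the vertical load; the coefficient used is only a typical order of magnitude for a passenger car tire, not data for any specific product.

```python
G = 9.81  # gravitational acceleration, m/s^2

def rolling_resistance_power(vehicle_mass_kg, speed_kmh, crr=0.010):
    """First-order rolling resistance model: F = crr * m * g.
    crr ~ 0.006-0.015 is an assumed typical range for passenger car tires.
    Returns (force in newtons, power dissipated in kilowatts) at the given speed.
    """
    force_n = crr * vehicle_mass_kg * G
    speed_ms = speed_kmh / 3.6
    power_kw = force_n * speed_ms / 1000.0
    return force_n, power_kw

# Example: a 1500 kg car at 100 km/h with crr = 0.010
force, power = rolling_resistance_power(1500, 100)
print(f"force ~ {force:.0f} N, power ~ {power:.1f} kW")  # ~147 N, ~4.1 kW
```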
Self aligning torque: Self-aligning torque, also known as the aligning torque, SAT or Mz, is the torque that a tire creates as it rolls along that tends to steer it, i.e. rotate it around its vertical axis.
Wet traction: Wet traction is the tire's traction, or grip, under wet conditions. Wet traction is improved by the tread design's ability to channel water out of the tire footprint and reduce hydroplaning. However, tires with a circular cross-section, such as those found on racing bicycles, when properly inflated have a sufficiently small footprint to not be susceptible to hydroplaning. For such tires, it is observed that fully slick tires will give superior traction on both wet and dry pavement.
Load
Load sensitivity: Load sensitivity is the behavior of tires under load. Conventional pneumatic tires do not behave as classical friction theory would suggest. Namely, the load sensitivity of most real tires in their typical operating range is such that the coefficient of friction decreases as the vertical load, Fz, increases.
Work load: The work load of a tire is monitored so that it is not put under undue stress, which may lead to its premature failure. Work load is measured in ton kilometers per hour (TKPH); the measure and its unit share the same name. The recent shortage and increasing cost of tires for heavy equipment have made TKPH an important parameter in tire selection and equipment maintenance for the mining industry. For this reason, manufacturers of tires for large earth-moving and mining vehicles assign TKPH ratings to their tires based on their size, construction, tread type, and rubber compound. The rating is based on the weight and speed that the tire can handle without overheating and deteriorating prematurely. The equivalent measure used in the United States is ton mile per hour (TMPH).
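A minimal sketch of how a site TKPH figure is commonly computed, assuming the simplified definition published by several earthmover tire manufacturers (mean tire load multiplied by mean shift speed); the figures in the example are invented for illustration.

```python
def site_tkph(load_empty_t, load_laden_t, distance_per_cycle_km,
              cycles_per_shift, shift_hours):
    """Simplified site TKPH (tonne-kilometres per hour) for one tire position.
    mean tire load = (tire load empty + tire load laden) / 2, in tonnes
    mean speed     = total distance travelled in the shift / shift length, in km/h
    TKPH           = mean tire load * mean speed
    """
    mean_load_t = (load_empty_t + load_laden_t) / 2.0
    mean_speed_kmh = distance_per_cycle_km * cycles_per_shift / shift_hours
    return mean_load_t * mean_speed_kmh

# Example (invented figures): a haul-truck tire carrying 15 t empty and 30 t laden,
# on a 5 km round trip repeated 20 times in a 10-hour shift.
print(site_tkph(15, 30, 5, 20, 10))  # 22.5 t * 10 km/h = 225 TKPH
```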
Wear
Tire wear is a major source of rubber pollution. A particular concern is that tire wear pollution, unlike exhaust emissions, is unregulated.
Tread wear: This occurs through normal contact with roads or terrain; there are several types of abnormal tread wear. Poor wheel alignment can cause excessive wear of the innermost or outermost ribs. Gravel roads, rocky terrain, and other rough terrain cause accelerated wear. Over-inflation above the sidewall maximum can cause excessive wear to the center of the tread. Modern tires have steel belts built in to prevent this. Under-inflation causes excessive wear to the outer ribs. Unbalanced wheels can cause uneven tire wear, as the rotation may not be perfectly circular. Tire manufacturers and car companies have mutually established standards for tread wear testing that include measurement parameters for tread loss profile, lug count, and heel-toe wear.
Tread wear indicators (T.W.I.): Raised bars in the tread channels, which indicate that the tread is becoming worn and therefore unsafe. Indicators have been required on all new tires since 1968 in the US. In many countries the Highway Code forbids driving on public roads when the contact surface is flush with any of these bars; this is often defined as a groove depth of approximately 1.5 or 1.6 mm (2/32 inch). TWI can also refer to small arrows or icons on the tire sidewall indicating the location of the raised wear bars.
Damage by aging: Tire aging, or "thermo-oxidative degradation", can be caused by time, ambient and operating temperatures, the partial pressure of O2 in a tire, flex fatigue, or construction and compounding characteristics. For example, prolonged UV exposure degrades the rubber compound chemically, potentially causing dry rot. Various storage methods may slow the aging process, but will not eliminate tire degradation.
Sizes, codes, standards, and regulatory agencies
Automotive tires have a variety of identifying markings molded onto the sidewall as a tire code. They denote size, rating, and other information pertinent to that individual tire.
Americas
The National Highway Traffic Safety Administration (NHTSA) is a U.S. government body within the Department of Transportation (DOT) tasked with regulating automotive safety in the United States. NHTSA established the Uniform Tire Quality Grading System (UTQG), a system for comparing the performance of tires under the Code of Federal Regulations, 49 CFR 575.104; it requires labeling of tires for tread wear, traction, and temperature. The DOT Code is an alphanumeric character sequence molded into the sidewall of the tire that allows identification of the tire and its age. The code is mandated by the U.S. Department of Transportation but is used worldwide. The DOT Code is also useful in identifying tires subject to product recall or at the end of life due to age. The Tire and Rim Association (T&RA) is a voluntary U.S. standards organization that promotes the interchangeability of tires, rims, and allied parts. Of particular interest, it publishes key tire dimensions, rim contour dimensions, tire valve dimension standards, and load/inflation standards.
The National Institute of Metrology Standardization and Industrial Quality (INMETRO) is the Brazilian federal body responsible for automotive wheel and tire certification.
Europe
The European Tyre and Rim Technical Organisation (ETRTO) is the European standards organization "to establish engineering dimensions, load/pressure characteristics and operating guidelines". All tires sold for road use in Europe after July 1997 must carry an E-mark. The mark itself is either an upper case "E" or lower case "e" – followed by a number in a circle or rectangle, followed by a further number. An (upper case) "E" indicates that the tire is certified to comply with the dimensional, performance, and marking requirements of ECE regulation 30. A (lowercase) "e" indicates that the tire is certified to comply with the dimensional, performance, and marking requirements of Directive 92/23/EEC. The number in the circle or rectangle denotes the country code of the government that granted the type approval. The last number outside the circle or rectangle is the number of the type approval certificate issued for that particular tire size and type.
The British Rubber Manufacturers Association (BRMA) recommended practice, issued June 2001, states, "BRMA members strongly recommend that unused tires should not be put into service if they are over six years old and that all tires should be replaced ten years from the date of their manufacture."
Asia
The Japanese Automobile Tire Manufacturers Association (JATMA) is the Japanese standards organization for tires, rims, and valves. It performs similar functions as the T&RA and ETRTO.
The China Compulsory Certification (CCC) is a mandatory certification system concerning product safety in China that went into effect in August 2002. The CCC certification system is operated by the State General Administration for Quality Supervision and Inspection and Quarantine of the People's Republic of China (AQSIQ) and the Certification and Accreditation Administration of the People's Republic of China (CNCA).
Maintenance
To maintain tire health, several actions are appropriate: tire rotation, wheel alignment, and, sometimes, retreading.
Rotation: Tires may exhibit irregular wear patterns once installed on a vehicle and partially worn. Front-wheel drive vehicles tend to wear the front tires at a greater rate compared to the rear tires. Tire rotation is moving the tires to different car positions, such as front-to-rear, in order to even out the wear, with the objective of extending the life of the tire.
Alignment: Wheel alignment helps prevent wear due to rotation in a direction other than the path of the vehicle. When mounted on the vehicle, the wheel and tire may not be perfectly aligned to the direction of travel, and therefore may exhibit irregular wear. If the discrepancy in alignment is large, then the irregular wear will become substantial if left uncorrected. Wheel alignment is the procedure for checking and correcting this condition through adjustment of camber, caster, and toe angles. The adjustment of the angles should be done as per the OEM specifications.
Inflation
Inflation is key to proper wear and rolling resistance of pneumatic tires. Many vehicles have monitoring systems to assure proper inflation. Most passenger cars are advised to maintain a tire pressure within the range of when the tires are not warmed by driving.
Specification: Vehicle manufacturers provide tire specifications, including a recommended cold inflation pressure, to ensure safe operation within the designated load rating and vehicle loading capacity. While many tires feature a maximum pressure rating stamped on them, passenger vehicles and light trucks typically include inflation guidance on a decal located just inside the driver's door and in the vehicle owner's handbook.
Ground contact: The tire contact patch is readily changed by both over- and underinflation. Overinflation may increase wear on the center of the contact patch, and underinflation will cause a concave tread, resulting in less center contact, though the overall contact patch will still be larger. Most modern tires will wear evenly at high tire pressures, but will degrade prematurely if underinflated. Increased tire pressure may decrease rolling resistance and may also result in shorter stopping distances. If tire pressure is too low, the tire contact patch is greatly increased, which increases rolling resistance, tire flexing, and friction between the road and the tire. Under-inflation can lead to tire overheating, premature tread wear, and, in severe cases, tread separation.
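As a first-order illustration of why the contact patch grows when pressure drops, the patch area can be approximated by dividing the load carried by the tire by the inflation pressure, ignoring the contribution of carcass stiffness; the figures below are round numbers chosen only for the example.

```latex
% First-order contact patch estimate (neglects carcass stiffness)
A \approx \frac{F_z}{p}
% Example: F_z = 4500\,\mathrm{N} per tire at p = 220\,\mathrm{kPa}
% A \approx \frac{4500}{220\,000} \approx 0.020\,\mathrm{m}^2 \approx 200\,\mathrm{cm}^2
% Dropping the pressure to 180\,\mathrm{kPa} raises the estimate to roughly 250\,\mathrm{cm}^2,
% consistent with the larger footprint and higher rolling resistance described above.
```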
Monitoring: Tire pressure monitoring systems (TPMS) are electronic systems that monitor the tire pressures on individual wheels on a vehicle and alert the driver when the pressure goes below a warning limit. There are several types of designs to monitor tire pressure. Some actually measure the air pressure, and some make indirect measurements, such as gauging when the relative size of the tire changes due to lower air pressure.
Hazards
Tire hazards may arise from failure of the tire itself or from loss of traction on the surface over which it is rolling. Structural failures of a tire can result in flat tires or more dangerous blowouts. Some of these failures are caused by manufacturing error and may lead to recalls, such as the widespread Firestone tire failures on Ford vehicles that led to the Firestone and Ford tire controversy in the 1990s.
Tire failure
Tires may fail for any of a variety of reasons, including:
Belt separation, which may be belt-to-belt, tread and belt, or separation of the edge of the belt. Belt-to-belt separation may occur when the tire deflects too much, from high pavement temperatures, from road hazard impacts, or from other causes related to maintenance and storage.
Non-belt separations include those at the tire tread, in the bead area, in the lower sidewall, between reinforcing plies, and of the reinforcing steel or fabric materials.
Other types of failure include run-flat damage, chemical degradation, cracking, indentations and bulges.
Vehicle operation failures
Melting rubber: As tire rubber compounds heat, owing to the friction of stopping, cornering, or accelerating, they may begin to melt, lubricate the tire-road contact area, and become deposited on the pavement. This effect is stronger with increased ambient temperature.
Hydroplaning: Motor vehicle or aircraft tires passing over wet pavement may lose contact at sufficient speed or water depth for a given tread design. In this case, the tire contact area rides on a film of water, loses the friction needed for braking or cornering, and begins to hydroplane (or aquaplane). Hydroplaning may occur as dynamic hydroplaning, where standing water is present with a depth of at least above the texture of the pavement and speed is sustained above a threshold level. It may also occur as viscous hydroplaning, whereby tire rubber melts for a brief interval and causes slippage; this may leave deposits of rubber on a runway as airplanes land. Dynamic hydroplaning causes decreased friction and contact with increased tire speed.
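For aircraft tires, a widely cited empirical rule of thumb attributed to NASA work by Horne, quoted here as an approximation rather than a design value, relates the onset speed of dynamic hydroplaning to the inflation pressure.

```latex
% Empirical dynamic hydroplaning onset speed for a rotating aircraft tire (Horne's rule of thumb)
V_p \approx 9\sqrt{p}
% with V_p in knots and the inflation pressure p in psi.
% Example: p = 150\,\mathrm{psi} \Rightarrow V_p \approx 9\sqrt{150} \approx 110\ \text{knots}.
% Lower inflation pressure therefore lowers the speed at which hydroplaning can begin.
```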
Snow: The degree to which a tire can maintain traction in snow depends on its ability to compact snow; the compacted material then develops strength against slippage along a shear plane parallel to the contact area of the tire on the ground. At the same time, the bottoms of the tire treads compress the snow on which they are bearing, also creating friction. The process of compacting snow within the treads requires it to be expelled in time for the tread to compact snow anew on the next rotation. The compaction/contact process works both in the direction of travel, for propulsion and braking, and laterally, for cornering.
Ice: Ice is typically close to its melting point when a tire travels over it. This, combined with a smooth texture, promotes a low coefficient of friction and reduced traction during braking, cornering or acceleration.
Soft ground: Soil can become lubricated with water, which reduces its ability to maintain shear strength when a tire tries to apply force in acceleration, braking, or cornering. Dry sand also has low shear strength, owing to poor cohesiveness among sand particles.
Health impacts
Tires contain a number of trace toxic chemicals, including heavy metals and chemical agents used to increase the durability of the tires. These typically include polycyclic aromatic hydrocarbons, benzothiazoles, isoprene, and heavy metals such as zinc and lead.
As tires are used for vehicle operations, the natural wear of the tire leaves microfine particles equivalent to PM0.1, PM2.5, and PM10 as tire residue. This residue accumulates near roadways and vehicle use areas, but also travels into the environment through surface runoff. Both humans and animals are exposed to these chemicals at the site of accumulation (e.g., walking on the road surface) and through accumulation in natural environments and food chains. A 2023 literature review from Imperial College London warned that both the toxic chemicals and microplastics produced from tire wear have potentially widespread, serious environmental and health consequences.
Moreover, burning tires releases these chemicals as air pollutants and leaves toxic residues, which can have significant effects on local communities and first responders.
End of use
Once tires are discarded, they are considered scrap tires. Scrap tires are often re-used for purposes ranging from bumper car barriers to weights for holding down tarps. Tires are not desired at landfills because of their large volume and 75% void space, which quickly consumes valuable space. Rubber tires are likely to contain some traces of heavy metals or other serious pollutants, but these are tightly bonded within the rubber compound, so they are unlikely to be hazardous unless the tire structure is seriously damaged by fire or strong chemicals. Some facilities are permitted to recycle scrap tires by chipping and processing them into new products or selling the material to licensed power plants for fuel. Some tires may also be retreaded for re-use.
Environmental issues
Americans generate about 285 million scrap tires per year. Many states regulate the number of scrap tires that can be held on-site, due to concerns with dumping, fire hazards, and mosquitoes. In the past, millions of tires have been discarded into open fields. This creates a breeding ground for mosquitoes, since the tires often hold water inside and remain warm enough for mosquito breeding. Mosquitoes create a nuisance and may increase the likelihood of spreading disease. Tire piles also create a fire danger, since they contain a large amount of fuel; some tire fires have burned for months, because water does not adequately penetrate or cool the burning tires. Under the extreme heat of a fire, tires have been known to liquefy, releasing hydrocarbons and other contaminants into the ground and even groundwater. The black smoke from a tire fire causes air pollution and is a hazard to downwind properties.
The use of scrap tire chips for landscaping has become controversial, due to the leaching of metals and other contaminants from the tire pieces. Zinc is concentrated (up to 2% by weight) to levels high enough to be highly toxic to aquatic life and plants. Of particular concern is evidence that some of the compounds that leach from tires into the water contain hormone disruptors and cause liver lesions.
Tires are a major source of microplastic pollution, found in a 2020 study to contribute 78% of the total mass of microplastics found in the ocean. The commonly-used compound 6PPD-quinone, found entering stormwater runoff via tire-wear particles, has been identified as toxic to coho salmon, brook trout, and rainbow trout.
Retreading
Tires that are fully worn can be retreaded, that is, re-manufactured to replace the worn tread. This is known as retreading or recapping, a process of buffing away the worn tread and applying a new tread. There are two main processes used for retreading tires, called the mold-cure and pre-cure methods. Both processes start with inspection of the tire, followed by non-destructive inspection methods such as shearography to locate non-visible damage and embedded debris and nails. Some casings are repaired and some are discarded. Tires can be retreaded multiple times if the casing is in usable condition. Over the life of the tire body, tires used for short delivery vehicles are retreaded more often than long-haul tires. Casings fit for retreading have the old tread buffed away to prepare for retreading.
During the retreading process, retread technicians must ensure the casing is in the best condition possible to minimize the possibility of a casing failure. Casings with problems such as capped tread, tread separation, irreparable cuts, corroded belts or sidewall damage, or any run-flat or skidded tires, will be rejected. The mold cure method involves the application of raw rubber on the previously buffed and prepared casing, which is later cured in matrices. During the curing period, vulcanization takes place, and the raw rubber bonds to the casing, taking the tread shape of the matrix. On the other hand, the pre-cure method involves the application of a ready-made tread band on the buffed and prepared casing, which later is cured in an autoclave so that vulcanization can occur.
Recycling
Tires can be recycled into, among other things, hot-melt asphalt, typically as a crumb rubber modifier in recycled asphalt pavement (CRM-RAP), and as an aggregate in portland cement concrete. Shredded tires can be used as rubber mulch on playgrounds to diminish fall injuries. Some "green" buildings, both private and public, are being made from old tires.
The tire pyrolysis method for recycling used tires is a technique that heats whole or shredded tires in a reactor vessel containing an oxygen-free atmosphere and a heat source. In the reactor, the rubber is softened, after which the rubber polymers break down into smaller molecules.
Other uses
Other downstream uses have been developed for worn-out tires, including:
Building elements: Tires filled with earth have been used as garden containers, house foundations, and bullet-proof walls, and to prevent soil erosion in flood plains. Tire walls are a common feature of motor racing circuits for safety.
Recreational equipment: Used tires are employed as exercise equipment for athletic programs such as American football. One classic conditioning drill that hones players' speed and agility is the "tire run", in which tires are laid out side by side, with each tire on the left a few inches ahead of the tire on the right in a zigzag pattern. Athletes then run through the tire pattern by stepping in the center of each tire; the drill forces athletes to lift their feet higher than normal to avoid tripping on the tires. Old tires are sometimes converted into swings for play.
Burning tires as protest: Protestors worldwide have burned tires to create black smoke.
Necklacing is the use of tires to kill people, typically by lynch mobs. A tire is soaked in gasoline, placed around the victim's neck, and set on fire.
Triticale
Triticale (× Triticosecale) is a hybrid of wheat (Triticum) and rye (Secale) first bred in laboratories during the late 19th century in Scotland and Germany. Commercially available triticale is almost always a second-generation hybrid, i.e., a cross between two kinds of primary (first-cross) triticales. As a rule, triticale combines the yield potential and grain quality of wheat with the disease and environmental tolerance (including soil conditions) of rye. Only recently has it been developed into a commercially viable crop. Depending on the cultivar, triticale can more or less resemble either of its parents. It is grown mostly for forage or fodder, although some triticale-based foods can be purchased at health food stores and can be found in some breakfast cereals.
When crossing wheat and rye, wheat is used as the female parent and rye as the male parent (pollen donor). The resulting hybrid is sterile and must be treated with colchicine to induce polyploidy and thus the ability to reproduce itself.
The primary producers of triticale are Poland, Germany, Belarus, France and Russia. In 2014, according to the Food and Agriculture Organization (FAO), 17.1 million tons were harvested in 37 countries across the world.
The triticale hybrids are all amphidiploid, which means the plant is diploid for two genomes derived from different species. In other words, triticale is an allotetraploid. In earlier years, most work was done on octoploid triticale. Different ploidy levels have been created and evaluated over time. The tetraploids showed little promise, but hexaploid triticale was successful enough to find commercial application.
The CIMMYT (International Maize and Wheat Improvement Center) triticale improvement program was intended to improve food production and nutrition in developing countries. Triticale was thought to have potential in the production of bread and other food products, such as cookies, pasta, pizza dough and breakfast cereals. The protein content is higher than that of wheat, although the glutenin fraction is less. The grain has also been stated to have higher levels of lysine than wheat. Acceptance would require the milling industry to adapt to triticale, as the milling techniques employed for wheat are unsuited to triticale. Past research indicated that triticale could be used as a feed grain, and later research found that its starch is readily digested. As a feed grain, triticale is already well established and of high economic importance. It has received attention as a potential energy crop, and research is currently being conducted on the use of the crop's biomass in bioethanol production. Triticale has also been used to produce vodka.
History
In the 19th century, crossing cultivars or species became better understood, allowing the controlled hybridization of more plants and animals. In 1873, Alexander Wilson first managed to manually fertilize the female organs of wheat flowers with rye pollen (male gametes), but found that the resulting plants were sterile, much the way the offspring of a horse and donkey is an infertile mule. Fifteen years later, in 1888, a partially fertile hybrid, "Triticosecale rimpaui Wittmack", was produced by the German breeder Wilhelm Rimpau. Such hybrids are fertile only when the chromosomes spontaneously double.
Unfortunately, "partially fertile" was all that was produced until 1937. In that year, it was discovered that the chemical colchicine, which is used both for general plant germination and as a treatment for gout, would force chromosome doubling by keeping them from pulling apart during cell division. Triticale had become viable, though at that point the cost of producing the seeds was disproportionate to the yield.
By the 1960s, triticale was being produced that was far more nutritious than normal wheat. However, it was a poorly producing crop: it sometimes yielded shriveled kernels, germinated poorly or prematurely, and did not bake well.
Modern triticale has overcome most of these problems, after decades of additional breeding and gene transfer with wheat and rye. Millions of acres/hectares of the crop are grown around the world, slowly increasing toward becoming a significant source of food-calories.
Species
Triticale hybrids are currently classified by ploidy into three nothospecies:
× Triticosecale semisecale (Mackey) K.Hammer & Filat. – tetraploid triticale. Unstable, but used as a bridge in breeding. Includes the following crosses:
Triticum monococcum × Secale cereale, genome AARR;
Alternative crosses, genome ABRR (mixogenome A/B).
× Triticosecale neoblaringhemii A.Camus – hexaploid triticale. Stable, currently very successful in agriculture. May be produced by Secale cereale × Triticum turgidum, genome AABBRR.
× Triticosecale rimpaui Wittm. – octoploid triticale. Not completely stable, mainly of historical importance. May be produced by Secale cereale × Triticum aestivum, genome AABBDDRR.
The current treatment follows the Mac Key 2005 treatment of Triticum using a broad species concept based on genome composition. Traditional classifications used a narrow species concept based on the treatment of wheats by Dorofeev et al., 1979, and hence produced many more species names. The genome notation follows , with the rye genome notated as R.
Biology and genetics
Earlier work with wheat-rye crosses was difficult due to low survival of the resulting hybrid embryo and spontaneous chromosome doubling. These two factors were difficult to predict and control. To improve the viability of the embryo and thus avoid its abortion, in vitro culture techniques were developed (Laibach, 1925). Colchicine was used as a chemical agent to double the chromosomes. After these developments, a new era of triticale breeding was introduced. Earlier triticale hybrids had four reproductive disorders, namely meiotic instability, high aneuploid frequency, low fertility and shriveled seed (Muntzing 1939; Krolow 1966). Cytogenetical studies were encouraged and well funded to overcome these problems.
It is especially difficult to see the expression of rye genes in the background of wheat cytoplasm and the predominant wheat nuclear genome. This makes it difficult to realise the potential of rye in disease resistance and ecological adaptation.
Triticale is essentially a self-fertilizing, or naturally inbred crop. This mode of reproduction results in a more homozygous genome. The crop is, however, adapted to this form of reproduction from an evolutionary point of view. Cross-fertilization is also possible, but it is not the primary form of reproduction.
Sr27 is a stem rust resistance gene commonly found in triticale. Originally from 'Imperial' rye, it is now widely found in triticale. It is located on the 3A chromosome arm, having originally come from 3R. Virulence has been observed in the field in Puccinia graminis f. sp. secalis (Pgs) and in an artificial Pgs × Puccinia graminis f. sp. tritici (Pgt) cross. When successful, Sr27 is among the few Sr genes that do not even allow the underdeveloped uredinia and slight degree of sporulation commonly permitted by most Sr genes; instead there are necrotic or chlorotic flecks. Deployment in triticale in New South Wales and Queensland, Australia, however, rapidly revealed virulence between 1982 and 1984, the first virulence against this gene in the world. (This was especially associated with the cultivar Coorong.) Therefore, the International Maize and Wheat Improvement Center's triticale offerings were tested, and many were found to depend solely on Sr27. Four years later, in 1988, virulence was found in South Africa. Sr27 has become less common in CIMMYT triticales since the mid-1980s.
Conventional breeding approaches
A triticale breeding programme mainly focuses on the improvement of quantitative traits, such as grain yield, nutritional quality and plant height, as well as traits which are more difficult to improve, such as earlier maturity and improved test weight (a measure of bulk density). These traits are controlled by more than one gene. Problems arise, however, because such polygenic traits involve the integration of several physiological processes in their expression. Thus the lack of single-gene control (or simple inheritance) results in low trait heritability (Zumelzú et al. 1998).
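For readers unfamiliar with the term, heritability here can be read as the standard quantitative-genetics ratio sketched below; saying that polygenic, physiologically integrated traits show "low heritability" means that relatively little of the observed variation is due to additive genetic effects that selection can act on.

```latex
% Narrow-sense heritability (standard quantitative-genetics definition)
h^2 = \frac{V_A}{V_P} = \frac{V_A}{V_A + V_D + V_I + V_E}
% V_A: additive genetic variance, V_D: dominance variance,
% V_I: epistatic (interaction) variance, V_E: environmental variance, V_P: phenotypic variance.
% Response to selection (breeder's equation): R = h^2 S, where S is the selection differential,
% so a low h^2 for grain yield or test weight implies slow progress from direct selection.
```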
Since the inception of the International Maize and Wheat Improvement Center triticale breeding programme in 1964, the improvement in realised grain yield has been remarkable. In 1968, at Ciudad Obregón, Sonora, in northwest Mexico, the highest yielding triticale line produced 2.4 t/ha. Today, CIMMYT has released high-yielding spring triticale lines (e.g. Pollmer-2) which have surpassed the 10 t/ha yield barrier under optimum production conditions.
Based on the commercial success of other hybrid crops, the use of hybrid triticales as a strategy for enhancing yield in favourable, as well as marginal, environments has proven successful over time. Earlier research conducted by CIMMYT made use of a chemical hybridising agent to evaluate heterosis in hexaploid triticale hybrids. To select the most promising parents for hybrid production, test crosses conducted in various environments are required, because the variance of their specific combining ability under differing environmental conditions is the most important component in evaluating their potential as parents to produce promising hybrids. The prediction of general combining ability of any triticale plant from the performance of its parents is only moderate with respect to grain yield. Commercially exploitable yield advantages of hybrid triticale cultivars is dependent on improving parent heterosis and on advances in inbred-line development.
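The notions of heterosis and combining ability used here can be made concrete with the standard definitions below; these are textbook quantitative-genetics expressions, not formulas specific to the CIMMYT programme.

```latex
% Mid-parent heterosis of an F1 hybrid
H_{MP} = \frac{\bar{F}_1 - \overline{MP}}{\overline{MP}} \times 100\%, \qquad \overline{MP} = \frac{\bar{P}_1 + \bar{P}_2}{2}
% Decomposition of a hybrid's expected performance into combining abilities:
% y_{ij} = \mu + GCA_i + GCA_j + SCA_{ij}
% GCA_i, GCA_j: general combining abilities of the two parents (average effects over many crosses)
% SCA_{ij}:     specific combining ability of this particular pair, whose variance across
%               environments is the component highlighted in the text.
```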
Triticale is useful as an animal feed grain. However, it is necessary to improve its milling and bread-making quality to increase its potential for human consumption. The relationship between the constituent wheat and rye genomes was noted to produce meiotic irregularities, and genome instability and incompatibility presented numerous problems when attempts were made to improve triticale. This led to two alternative approaches to studying and improving its reproductive performance, namely improving the number of grains per floral spikelet and improving its meiotic behaviour. The number of grains per spikelet has an associated low heritability value (de Zumelzú et al. 1998). In improving yield, indirect selection (the selection of correlated/related traits other than the one to be improved) is not necessarily as effective as direct selection (Gallais 1984).
Lodging (the toppling over of the plant stem, especially under windy conditions) resistance is a polygenically inherited (expression is controlled by many genes) trait, and has thus been an important breeding aim in the past. The use of dwarfing genes, known as Rht genes, which have been incorporated from both Triticum and Secale, has resulted in a decrease of up to in plant height without causing any adverse effects.
A 2013 study found that hybrids have better yield stability under yield stress than do inbred lines.
Application of newer techniques
Abundant information exists concerning R-genes (for disease resistance) in wheat, and a continuously updated on-line catalogue, the Catalogue of Gene Symbols, of these genes can be found at . Another online database of cereal rust resistance genes is available at . Unfortunately, less is known about rye and particularly triticale R-genes. Many R-genes have been transferred to wheat from its wild relatives, and appear in such papers and catalogues, thus making them available for triticale breeding. The two mentioned databases are significant contributors to improving the genetic variability of the triticale gene pool through gene (or more specifically, allele) provision. Genetic variability is essential for progress in breeding. In addition, genetic variability can also be achieved by producing new primary triticales, which essentially means the reconstitution of triticale, and the development of various hybrids involving triticale, such as triticale-rye hybrids. In this way, some chromosomes from the R genome have been replaced by some from the D genome. The resulting so-called substitution and translocation triticale facilitates the transfer of R-genes.
Introgression
Introgression involves the crossing of closely related plant relatives, and results in the transfer of 'blocks' of genes, i.e. larger segments of chromosomes compared to single genes. R-genes are generally introduced within such blocks, which are usually incorporated/translocated/introgressed into the distal (extreme) regions of chromosomes of the crop being introgressed. Genes located in the proximal areas of chromosomes may be completely linked (very closely spaced), thus preventing or severely hampering recombination, which is necessary to incorporate such blocks. Molecular markers (small lengths of DNA of a characterized/known sequence) are used to 'tag' and thus track such translocations. A weak colchicine solution has been employed to increase the probability of recombination in the proximal chromosome regions, and thus the introduction of the translocation to that region. The resultant translocation of smaller blocks that indeed carry the R-gene(s) of interest has decreased the probability of introducing unwanted genes.
The resistance gene was introgressed into wheat from the 2R chromosome of rye. However, this was actually done through triticale, which has served as the amphiploid for several such rye-to-wheat introgressions.
A 2014 study found that a dwarfing gene from the rye 5R chromosome also provides Fusarium head blight (FHB) resistance in this host.
Production of doubled haploids
Doubled haploid (DH) plants have the potential to save much time in the development of inbred lines. This is achieved in a single generation, as opposed to many, which would otherwise occupy much physical space/facilities. DHs also express deleterious recessive alleles otherwise masked by dominance effects in a genome containing more than one copy of each chromosome (and thus more than one copy of each gene). Various techniques exist to create DHs. The in vitro culture of anthers and microspores is most often used in cereals, including triticale. These two techniques are referred to as androgenesis, which refers to the development of pollen. Many plant species and cultivars within species, including triticale, are recalcitrant in that the success rate of achieving whole newly generated (diploid) plants is very low. Genotype by culture medium interaction is responsible for varying success rates, as is a high degree of microspore abortion during culturing. The response of parental triticale lines to anther culture is known to be correlated to the response of their progeny. Chromosome elimination is another method of producing DHs, and involves hybridisation of wheat with maize (Zea mays L.), followed by auxin treatment and the artificial rescue of the resultant haploid embryos before they naturally abort. This technique is applied rather extensively to wheat. Its success is in large part due to the insensitivity of maize pollen to the crossability inhibitor genes known as Kr1 and Kr2 that are expressed in the floral style of many wheat cultivars. The technique is unfortunately less successful in triticale. However, Imperata cylindrica (a grass) was found to be just as effective as maize with respect to the production of DHs in both wheat and triticale.
Application of molecular markers
An important advantage of biotechnology applied to plant breeding is the speeding up of cultivar release that would otherwise take 8–12 years. It is the process of selection that is actually enhanced, i.e., retaining that which is desirable or promising and discarding that which is not. This carries with it the aim of changing the genetic structure of the plant population. The website is a valuable resource for marker-assisted selection (MAS) protocols relating to R-genes in wheat. MAS is a form of indirect selection. The Catalogue of Gene Symbols mentioned earlier is an additional source of molecular and morphological markers. Again, triticale has not been well characterised with respect to molecular markers, although an abundance of rye molecular markers makes it possible to track rye chromosomes and segments thereof within a triticale background.
Yield improvements of up to 20% have been achieved in hybrid triticale cultivars due to heterosis. This raises the question of which inbred lines should be crossed with each other as parents to maximize yield in their hybrid progeny. This is termed the 'combining ability' of the parental lines. The identification of good combining ability at an early stage in the breeding programme can reduce the costs associated with 'carrying' a large number of plants (literally thousands) through it, and thus forms part of efficient selection. Combining ability is assessed by taking into consideration all available information on descent (genetic relatedness), morphology, qualitative (simply inherited) traits and biochemical and molecular markers. Very little information exists on the use of molecular markers to predict heterosis in triticale. Molecular markers are generally accepted as better predictors than morphological markers (of agronomic traits) due to their insensitivity to variation in environmental conditions.
Simple sequence repeats (SSRs) are molecular markers widely used for selection in breeding. They are segments of a genome composed of tandem repeats of a short sequence of nucleotides, usually two to six base pairs. They are popular tools in genetics and breeding because of their relative abundance compared to other marker types, a high degree of polymorphism (number of variants), and easy assaying by polymerase chain reaction. However, they are expensive to identify and develop. Comparative genome mapping has revealed a high degree of similarity in terms of sequence colinearity between closely related crop species. This allows the exchange of such markers within a group of related species, such as wheat, rye and triticale. One study established a 58% and 39% transferability rate to triticale from wheat and rye, respectively. Transferability refers to the phenomenon where the sequence of DNA nucleotides flanking the SSR locus (position on the chromosome) is sufficiently homologous (similar) between genomes of closely related species. Thus, DNA primers (generally, a short sequence of nucleotides used to direct the copying reaction during PCR) designed for one species can be used to detect SSRs in related species. SSR markers are available in wheat and rye, but very few, if any, are available for triticale.
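The idea of a tandem repeat can be made concrete with a short illustration. The following Python sketch, which is purely illustrative and not taken from any study cited here, scans a made-up DNA string for motifs of two to six base pairs repeated in tandem; the example sequence, motif lengths and repeat threshold are all assumptions chosen for demonstration.

# Minimal sketch: locate simple sequence repeats (SSRs), i.e. tandem repeats
# of short motifs (2-6 bp), in a hypothetical DNA sequence.
def find_ssrs(seq, min_motif=2, max_motif=6, min_repeats=4):
    """Return (start, motif, repeat_count) tuples for tandem repeats in seq."""
    hits = []
    for motif_len in range(min_motif, max_motif + 1):
        i = 0
        while i + motif_len <= len(seq):
            motif = seq[i:i + motif_len]
            count = 1
            while seq[i + count * motif_len:i + (count + 1) * motif_len] == motif:
                count += 1
            if count >= min_repeats:
                hits.append((i, motif, count))
                i += count * motif_len  # skip past this repeat tract
            else:
                i += 1
    return hits

# Hypothetical sequence containing a (GA)6 and an (ATT)5 repeat
example = "CCGTGAGAGAGAGAGACCGTATTATTATTATTATTGGC"
for start, motif, count in find_ssrs(example):
    print(f"SSR at position {start}: ({motif}) x {count}")

In practice, SSR genotyping is done by designing PCR primers against the conserved flanking sequence and sizing the amplified products, not by scanning an assembled sequence as in this toy example.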
Genetic transformation
The genetic transformation of crops involves the incorporation of 'foreign' genes or, rather, very small DNA fragments compared to introgression discussed earlier. Amongst other uses, transformation is a useful tool to introduce new traits or characteristics into the transformed crop. Two methods are commonly employed: infectious bacterial-mediated (usually Agrobacterium) transfer and biolistics, with the latter being most commonly applied to allopolyploid cereals such as triticale. Agrobacterium-mediated transformation, however, holds several advantages, such as a low level of DNA rearrangement in the transgenic plant, a low number of introduced copies of the transforming DNA, stable integration of an a-priori characterized T-DNA fragment (containing the DNA expressing the trait of interest) and an expected higher level of transgene expression. Triticale has, until recently, only been transformed via biolistics, with a 3.3% success rate. Little has been documented on Agrobacterium-mediated transformation of wheat: while no data existed with respect to triticale until 2005, the success rate in later work was nevertheless low.
Research
Triticale holds much promise as a commercial crop, as it has the potential to address specific problems within the cereal industry. Research is being conducted worldwide at institutions such as Stellenbosch University in South Africa.
Conventional plant breeding has helped establish triticale as a valuable crop, especially where conditions are less favourable for wheat cultivation. Although triticale is a synthesized grain, many of its initial limitations, such as infertility, seed shrivelling, low yield and poor nutritional value, have been largely eliminated.
Tissue culture techniques with respect to wheat and triticale have seen continuous improvements, but the isolation and culturing of individual microspores seems to hold the most promise. Many molecular markers can be applied to marker-assisted gene transfer, but the expression of R-genes in the new genetic background of triticale remains to be investigated. More than 750 wheat microsatellite primer pairs are available in public wheat breeding programmes, and could be exploited in the development of SSRs in triticale. Another type of molecular marker, single nucleotide polymorphism (SNP), is likely to have a significant impact on the future of triticale breeding.
Health concerns
Like both its hybrid parents – wheat and rye – triticale contains gluten and is therefore unsuitable for people with gluten-related disorders, such as celiac disease, non-celiac gluten sensitivity and wheat allergy, among others.
In fiction
An episode of the popular TV series Star Trek, "The Trouble with Tribbles", revolved around the protection of a grain developed from triticale. This grain, which had four distinct lobes per kernel, was named "quadro-triticale" by writer David Gerrold at the suggestion of producer Gene Coon. In that episode Mr. Spock correctly attributes the ancestry of the nonfictional grain to 20th-century Canada.
Indeed, in 1953 the University of Manitoba began the first North American triticale breeding program. Early breeding efforts concentrated on developing a high-yield, drought-tolerant human food crop species suitable for marginal wheat-producing areas. (Later in the episode, Chekov claims that the fictional quadro-triticale was a "Russian invention".)
A later episode titled "More Tribbles, More Troubles", in the animated series, also written by Gerrold, dealt with "quinto-triticale", an improvement on the original, having apparently five lobes per kernel.
Three decades later the spinoff series Star Trek: Deep Space Nine revisited quadro-triticale and the depredations of the Tribbles in the episode "Trials and Tribble-ations".
| Biology and health sciences | Grains | Plants |
65091 | https://en.wikipedia.org/wiki/Pika | Pika | A pika ( , or ) is a small, mountain-dwelling mammal native to Asia and North America. With short limbs, a very round body, an even coat of fur, and no external tail, they resemble their close relative, the rabbit, but with short, rounded ears. The large-eared pika of the Himalayas and nearby mountains lives at elevations of more than .
The name pika appears to be derived from the Tungus pika, and the scientific name Ochotona is derived from the Mongolian word ogotno, оготно, which means pika. It is used for any member of the Ochotonidae (), a family within the order of lagomorphs, the order which also includes the Leporidae (rabbits and hares). They are the smallest animals in the lagomorph group. Only one genus, Ochotona ( or ), is extant within the family, covering 37 species, though many fossil genera are known. Another species, the Sardinian pika, belonging to the separate genus Prolagus, has become extinct within the last 2,000 years owing to human activity.
Pikas prefer rocky slopes and graze on a range of plants, primarily grasses, flowers, and young stems. In the autumn, they pull hay, soft twigs, and other stores of food under rocks to eat during the long, cold winter. The pika is also known as the whistling hare because of the high-pitched alarm call it gives when it senses danger. The two species found in North America are the American pika, found primarily in the mountains of the western United States and far southwestern Canada, and the collared pika of northern British Columbia, the Yukon, western Northwest Territories and Alaska.
Habitat
Pikas are native to cold climates in Asia and North America. Most species live on rocky mountainsides, where numerous crevices are available for their shelter, although some pikas also construct crude burrows. A few burrowing species are native to open steppe land. In the mountains of Eurasia, pikas often share their burrows with snowfinches, which build their nests there. Changing temperatures have forced some pika populations to restrict their ranges to even higher elevations.
Characteristics
Pikas are small mammals, with short limbs and rounded ears. They are about in body length and weigh between , depending on species.
These animals are herbivores and feed on a wide variety of plant matter, including forbs, grasses, sedges, shrub twigs, moss and lichens. Easily digestible food is processed in the gastrointestinal tract and expelled as regular feces. But in order to extract nutrients from hard-to-digest fiber, pikas ferment the fiber in the cecum (in the GI tract) and then expel the contents as cecotropes, which are reingested (cecotrophy). The nutrients in the cecotropes are then absorbed in the small intestine.
Collared pikas have been known to store dead birds in their burrows for food during winter and eat the feces of other animals.
As with other lagomorphs, pikas have gnawing incisors and no canines, although they have fewer molars than rabbits, giving a dental formula with a total of 26 teeth. Another similarity that pikas share with other lagomorphs is that the bottoms of their paws are covered with fur and lack paw pads.
Rock-dwelling pikas have small litters of fewer than five young, whilst the burrowing species tend to give birth to more young and to breed more frequently, possibly owing to a greater availability of resources in their native habitats. The young are born altricial (eyes and ears closed, no fur) after a gestation period of between 25 and 30 days.
Activity
Pikas are active during daylight (diurnal) or twilight hours (crepuscular), with higher-elevation species generally being more active during the daytime. They show their peak activity just before the winter season. Pikas do not hibernate and remain active throughout the winter by traveling in tunnels under rocks and snow and eating dried plants that they have stored. Rock-dwelling pikas exhibit two methods of foraging: the first involves direct consumption of food, and the second is characterized by the gathering of plants to store in a "haypile" of cached plants.
The impact of human activity on the tundra ecosystems where pikas live has been recorded dating back to the 1970s. Rather than hibernate during winter, pikas forage for grasses and other forms of plant matter and stash these findings in protected dens in a process called "haying". They eat the dried plants during the winter. When pikas mistake humans for predators, they may respond to humans as they do to other species that do prey on pikas. Such interactions with humans have been linked to pikas having reduced amounts of foraging time, consequently limiting the amount of food they can stockpile for winter months. Pikas prefer foraging in temperatures below , so they generally spend their time in shaded regions and out of direct sunlight when temperatures are high. A link has also been found between temperature increases and lost foraging time, where for every increase of to the ambient temperature in alpine landscapes home to pikas, those pikas lose 3% of their foraging time.
Eurasian pikas commonly live in family groups and share duties of gathering food and keeping watch. Some species are territorial. North American pikas (O. princeps and O. collaris) are asocial, leading solitary lives outside the breeding season.
Vocalization
Pikas have distinct calls, which vary in duration. Calls can be short and quick, somewhat longer and more drawn out, or long songs. The short calls are an example of geographic variation. The pikas determine the appropriate time to make short calls by listening for cues for sound localization. The calls are used for individual recognition, predator warning signals, territory defense, or as a way to attract potential mates. There are also different calls depending on the season. In the spring the songs become more frequent during the breeding season. In late summer the vocalizations become short calls. Various studies have shown that the acoustic characteristics of the vocalizations can be a useful taxonomic tool.
Lifespan
The average lifespan of pikas in the wild is roughly seven years. A pika's age may be determined by the number of adhesion lines on the periosteal bone on the lower jaw. The lifespan does not differ between the sexes.
Species
The 34 species currently recognized are:
Order Lagomorpha
Family Ochotonidae: pikas
Genus Ochotona
Subgenus Conothoa: mountain pikas
Chinese red pika, O. erythrotis
Forrest's pika, O. forresti
Gaoligong pika (O. gaoligongensis) and black pika (O. nigritia) are now thought to be conspecific with O. forresti
Glover's pika, O. gloveri
Muli pika (O. muliensis) is now thought to be conspecific with O. gloveri
Ili pika, O. iliensis
Koslov's pika, O. koslowi
Ladak pika, O. ladacensis
Large-eared pika, O. macrotis
Royle's pika, O. roylei
Himalayan pika (O. himalayana) is now thought to be conspecific with O. roylei
Turkestan red pika, O. rutila
Subgenus Alienauroa
Yellow pika, O. huanglongensis
Sacred pika, O. sacraria
Flat-headed pika, O. flatcalvariam
Subgenus Ochotona: shrub-steppe pikas
Gansu pika or gray pika, O. cansus
Plateau pika or black-lipped pika, O. curzoniae
Daurian pika, O. dauurica
Nubra pika, O. nubrica
Steppe pika, O. pusilla
Qionglai pika, O. qionglaiensis
Afghan pika, O. rufescens
Sijin pika, O. sikimaria
Tsing-ling pika, O. syrinx
Moupin pika, O. thibetana
Thomas's pika, O. thomasi
Subgenus Pika: northern pikas
Alpine pika or Altai pika, O. alpina
Helan Shan pika or silver pika, O. argentata
Collared pika, O. collaris
Korean pika, O. coreana
Hoffmann's pika, O. hoffmanni
Northern pika or Siberian pika, O. hyperborea
Manchurian pika, O. mantchurica
Kazakh pika, O. opaca
Pallas's pika, O. pallasii
American pika, O. princeps
Turuchan pika, O. turuchanensis
Extinct species
Many fossil forms of Ochotona are described in the literature, from the Miocene epoch to the early Holocene (extinct species) and present (16.4–0 Ma). They lived in Europe, Asia, and North America. Some species listed below are common to Eurasia and North America (O. gromovi, O. tologoica, O. zazhigini, and probably O. whartoni).
Eurasia
large forms
†Ochotona chowmincheni (China: Baode area, late Miocene)
†Ochotona gromovi (Asia, Pliocene, see also North America)
†Ochotona gudrunae (China: Shanxi, early Pleistocene)
†Ochotona guizhongensis (Tibet, late Miocene)
†Ochotona lagreli (China: Inner Mongolia, late Miocene to late Pliocene)
†Ochotona magna (China, early Pleistocene)
†Ochotona tologoica (Transbaikalia, Pliocene, see also North America)
†Ochotona transcaucasica (Transcaucasia: eastern Georgia and Azerbaijan, Transbaikal and probably southern Europe, early to late Pleistocene)
†Ochotona ursui (Romania, Pliocene)
†Ochotona zasuchini (Transbaikalia, Pleistocene)
†Ochotona zazhigini (Asia, Pliocene, see also North America)
†Ochotona zhangi (China, Pleistocene)
medium-sized forms
†Ochotona agadjianiani (Asia, Pliocene)
†Ochotona antiqua (Moldavia, Ukraine, and the Russian Plain, Caucasus, and probably Rhodes, late Miocene to Pliocene)
†Ochotona azerica (Transcaucasia: Azerbaijan, middle Pliocene)
†Ochotona lingtaica (Asia, Pliocene)
†Ochotona dodogolica (Asia: western Transbaikalia, Pleistocene)
†Ochotona nihewanica (China: Hebei, early Pleistocene)
†Ochotona plicodenta (Asia, Pliocene)
†Ochotona polonica (Europe: Poland, Germany, France, Pliocene)
small-sized forms
†Ochotona bazarovi (Asia, upper Pliocene)
†Ochotona dehmi (Germany: Schernfeld, Pleistocene)
†Ochotona filippovi (Siberia, Pleistocene)
†Ochotona gracilis (Asia, Pliocene)
†Ochotona horaceki (Slovakia: Honce, Pleistocene)
†Ochotona minor (China, late Miocene)
†Ochotona sibirica (Asia, Pliocene)
†Ochotona valerotae (France: Valerots site, Pleistocene)
†Ochotona youngi (Asia, Pliocene), and others.
other examples
†Ochotona agadzhaniani (Transcaucasia: Armenia, Pliocene)
†Ochotona alaica (Asia: Kyrgyzstan, Pleistocene)
†Ochotona (Proochotona) eximia (Moldova, Ukraine, Russia, Kazakhstan, Miocene to Pliocene)
†Ochotona (Proochotona) gigas (Ukraine, Pliocene)
†Ochotona gureevi (Transbaikalia, middle Pliocene)
†Ochotona hengduanshanensis (China, Pleistocene)
†Ochotona intermedia (Asia, Pliocene)
†Ochotona (Proochotona) kalfaense (Europe: Moldova, Miocene)
†Ochotona (Proochotona) kirgisica (Asia: Kyrgyzstan, Pliocene)
†Ochotona kormosi (Hungary, Pleistocene)
†Ochotona (Proochotona) kurdjukovi (Asia: Kyrgyzstan, Pliocene)
†Ochotona largerli (Georgia, Pleistocene)
†Ochotona lazari (Ukraine, Pleistocene)
†Ochotona mediterranensis (Turkey, Pliocene)
†Ochotona ozansoyi (Turkey, Miocene)
†Ochotona pseudopusilla (Ukraine and Russian Plain, Pleistocene)
†Ochotona spelaeus (Ukraine, late Pleistocene)
†Ochotona tedfordi (China: Yushe Basin, late Miocene)
†Ochotona cf. whartoni (Irkutsk Oblast and Yakutia, Pleistocene, see also North America)
†Ochotona zabiensis (southern Poland, early Pleistocene)
†Ochotona sp. (Greece: Maritsa, Pliocene)
†Ochotona sp. (Hungary: Ostramos, Pleistocene)
†Ochotona sp. (Siberia, Pleistocene)
†Ochotona sp. (Yakutia, Pleistocene)
North America
†Ochotona gromovi (US: Colorado, Pliocene, see also Eurasia)
†Ochotona spanglei (US, late Miocene or early Pliocene)
†Ochotona tologoica (US: Colorado, Pliocene, see also Eurasia)
†Ochotona whartoni (giant pika, US, Canada, Pleistocene to early Holocene, see also Eurasia)
†Ochotona wheatleyi (US: Alaska, Pliocene, late Pleistocene)
†Ochotona zazhigini (US: Colorado, Pleistocene, see also Eurasia)
extinct small pikas similar to the O. pusilla group (Pleistocene)
Paleontologists have also described multiple forms of pika not referred to specific species (Ochotona indet.) or not certainly identified (O. cf. antiqua, O. cf. cansus, O. cf. daurica, O. cf. eximia, O. cf. gromovi, O. cf. intermedia, O. cf. koslowi, O. cf. lagrelii, O. cf. nihewanica). The statuses of Ochotona (Proochotona) kirgisica and O. spelaeus are uncertain.
The "pusilla" group of pikas is characterized by archaic (plesiomorphic) cheek teeth and small size.
The North American species migrated from Eurasia. They invaded the New World twice:
O. spanglei during the latest Miocene or early Pliocene, followed by a roughly three-million-year-long gap in the known North American pika record
O. whartoni (giant pika) and small pikas via the Bering Land Bridge during the earliest Pleistocene
Ochotona cf. whartoni and small pikas of the O. pusilla group are also known from Siberia. The extant, endemic North American species appeared in the Pleistocene. The North American collared pika (O. collaris) and American pika (O. princeps) have been suggested to have descended from the same ancestor as the steppe pika (O. pusilla).
The range of Ochotona was larger in the past, with both extinct and extant species inhabiting Western Europe and Eastern North America, areas that are currently free of pikas. Pleistocene fossils of the extant steppe pika O. pusilla, currently native to Asia, have also been found in many European countries, from the United Kingdom to Russia and from Italy to Poland, and the extant Asiatic northern pika O. hyperborea has been found at one middle Pleistocene site in the United States.
While Ochotona is the only currently living genus of Ochotonidae, extinct genera of ochotonids include †Albertona, †Alloptox, †Amphilagus, †Australagomys, †Austrolagomys, †Bellatona, †Bellatonoides, †Bohlinotona, †Cuyamalagus, †Desmatolagus, †Eurolagus, †Gripholagomys, †Gymnesicolagus, †Hesperolagomys, †Heterolagus, †Kenyalagomys, †Lagopsis, †Marcuinomys, †Ochotonoides, †Ochotonoma, †Oklahomalagus, †Oreolagus, †Paludotona, †Piezodus, †Plicalagus, †Pliolagomys, †Prolagus, †Proochotona (syn. Ochotona), †Pseudobellatona, †Ptychoprolagus, †Russellagus, †Sinolagomys, †Titanomys and †Tonomochota. The earliest one is Desmatolagus (middle Eocene to Miocene, 42.5–14.8 Ma), usually included in the Ochotonidae, sometimes in Leporidae or in neither ochotonid nor leporid stem-lagomorphs.
Ochotonids appeared in Asia between the late Eocene and the early Oligocene, and continued to develop along with the increased distribution of C3 grasses in previously forest-dominated areas under the "climatic optimum" from the late Oligocene to the middle Miocene. They thrived in Eurasia, North America, and even Africa. The peak of their diversity occurred during the period from the early Miocene to the middle Miocene. Most of them became extinct during the transition from the Miocene to the Pliocene, which was accompanied by an increase in the diversity of the leporids. It has been proposed that this switch between ochotonids and larger leporids was caused by the expansion of C4 plants (particularly the Poaceae) related to global cooling in the late Miocene, since extant pikas show a strong preference for C3 plants (Asteraceae, Rosaceae, and Fabaceae, many of them C3). The replacement of large areas of forest by open grassland probably first started in North America and is sometimes called "nature's green revolution".
| Biology and health sciences | Lagomorphs | Animals |
65192 | https://en.wikipedia.org/wiki/Three%20Gorges%20Dam | Three Gorges Dam | The Three Gorges Dam () is a hydroelectric gravity dam that spans the Yangtze River near Sandouping in Yiling District, Yichang, Hubei province, central China, downstream of the Three Gorges. The world's largest power station by installed capacity (22,500 MW), the Three Gorges Dam generates 95±20 TWh of electricity per year on average, depending on the amount of precipitation in the river basin. After the extensive monsoon rainfalls of 2020, the dam produced nearly 112 TWh in a year, breaking the previous world record of ~103 TWh set by Itaipu Dam in 2016.
The dam's body was completed in 2006; the power plant became fully operational in 2012, when the last of the main water turbines in the underground plant began production. The last major component of the project, the ship lift, was completed in 2015.
Each of the main water turbines, state-of-the-art at their installation, has a capacity of 700 MW. Combining the capacity of the dam's 32 main turbines with the two smaller generators (50 MW each) that provide power to the plant itself, the total electric generating capacity of the Three Gorges Dam is 22,500 MW; this power is produced with minimal greenhouse gas emissions.
The dam increases the Yangtze River's shipping capacity and reduces the likelihood of the sort of downstream flooding that has killed millions of people on the Yangtze Plain. As a result, the Chinese government regards the project as a monumental social and economic success.
However, it is controversial domestically and abroad. The dam's construction displaced more than 1.3 million people and inundated ancient and culturally significant sites. In operation, the dam has caused some ecological changes, including an increased risk of landslides.
History
Sun Yat-sen envisioned a large dam across the Yangtze River in The International Development of China (1919). He wrote that a dam capable of generating 30 million horsepower (22 GW) was possible downstream of the Three Gorges. In 1932, the Nationalist government, led by Chiang Kai-shek, began preliminary work on plans in the Three Gorges. In 1939, during the Second Sino-Japanese War, Japanese military forces occupied Yichang and surveyed the area.
In 1944, the United States Bureau of Reclamation's head design engineer, John L. Savage, surveyed the area and drew up a dam proposal for a "Yangtze River Project". Some 54 Chinese engineers went to the US for training. The original plans called for the dam to employ a unique method for moving ships: the ships would enter locks at the dam's lower and upper ends and then cranes would move them from each lock to the next. Groups of craft would be lifted together for efficiency. It is not known whether this solution was considered for its water-saving performance or because the engineers thought the difference in height between the river above and below the dam too great for alternative methods. No construction work was performed because of the Nationalists' worsening situation in the Chinese Civil War.
After the 1949 Communist Revolution, Mao Zedong supported the project, but began the Gezhouba Dam project nearby first, and economic problems including the Great Leap Forward and the Cultural Revolution slowed progress. After the 1954 Yangtze River Floods, in 1956, Mao wrote "Swimming", a poem about his fascination with a dam on the Yangtze River. In 1958, after the Hundred Flowers Campaign, some engineers who spoke out against the project were imprisoned.
During China's emphasis on the Four Modernizations in its early period of Reform and Opening Up, the Communist Party revived plans for the dam and proposed to start construction in 1986, emphasizing the need to develop hydroelectric power.
The Chinese People's Political Consultative Conference became a center of opposition to the proposed dam. It convened panels of experts who recommended delaying the project.
The National People's Congress approved the dam in 1992: of 2,633 delegates, 1,767 voted in favour, 177 voted against, 664 abstained, and 25 members did not vote, giving the legislation an unusually low 67.75% approval rate. Construction started on December 14, 1994. The dam was expected to be fully operational in 2009, but additional projects, such as the underground power plant with six additional generators, delayed full operation until 2012. The ship lift was completed in 2015. The dam raised the water level in the reservoir to above sea level by 2008 and to the designed maximum level of by 2010.
Composition and dimensions
Made of concrete and steel, the dam is long and above sea level at its top. The project used of concrete (mainly for the dam wall) and 463,000 tonnes of steel (enough to build 63 Eiffel Towers), and moved about of earth. The concrete dam wall is high above the rock base.
When the water level is at its maximum of above sea level, higher than the river level downstream, the dam reservoir is on average about in length and in width. It contains of water and has a total surface area of . On completion, the reservoir flooded a total area of of land, compared to the of reservoir created by the Itaipu Dam.
Economics
The Chinese government estimated that the Three Gorges Dam project would cost 180 billion yuan (US$22.5 billion). By the end of 2008, spending had reached 148.365 billion yuan, of which 64.613 billion yuan was spent on construction, 68.557 billion yuan on relocating affected residents, and 15.195 billion yuan on financing. It was estimated in 2009 that the cost of construction would be fully recouped when the dam had generated of electricity, yielding 250 billion yuan; total cost recovery was thus expected to be completed ten years after the dam became fully operational. In fact, the entire cost of the Three Gorges Dam was recovered by December 20, 2013.
Funding sources include the Three Gorges Dam Construction Fund, profits from the Gezhouba Dam, loans from the China Development Bank, loans from domestic and foreign commercial banks, corporate bonds, and revenue from both before and after the dam had become fully operational. Additional charges were assessed as follows: every province receiving power from the Three Gorges Dam had to pay an extra ¥7.00 per MWh, and the other provinces had to pay an additional charge of ¥4.00 per MWh. No surcharge was imposed on the Tibet Autonomous Region.
Power generation and distribution
Generating capacity
Power generation is managed by China Yangtze Power, a listed subsidiary of China Three Gorges Corporation (CTGC), a Central Enterprise administered by SASAC. The Three Gorges Dam is the world's largest capacity hydroelectric power station, with 34 generators: 32 main generators, each with a capacity of 700 MW, and two plant power generators, each with a capacity of 50 MW, for a total of 22,500 MW. Among the 32 main generators, 14 are installed on the dam's north side, 12 on the south side, and the remaining six in the underground power plant in the mountain south of the dam. Annual electricity generation in 2018 was 101.6 TWh, which is roughly 20 times that of the Hoover Dam.
Generators
The main generators each weigh about 6,000 tonnes and are designed to produce more than 700 MW of power each. The designed hydraulic head of the generators is . The flow rate varies between depending on the head available; the greater the head, the less water needed to reach full power. Three Gorges uses Francis turbines with a diameter of 9.7/10.4 m (VGS design/Alstom's design) and a rotation speed of 75 revolutions per minute. This means that in order to generate power at 50 Hz, the generator rotors have 80 poles. Rated power is 778 MVA, with a maximum of 840 MVA and a power factor of 0.9. The generator produces electrical power at 20 kV. The electricity generated is then stepped up to 500 kV for transmission at 50 Hz. The generator's stator, the biggest of its kind, is 3.1/3 m in height; the outer diameter of the stator is 21.4/20.9 m, the inner diameter is 18.5/18.8 m, and the bearing load is 5,050/5,500 tonnes. Average efficiency is over 94%, with a maximum efficiency of 96.5% reached.
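The electrical figures quoted above are internally consistent and can be checked with the standard synchronous-machine relation, frequency = poles × rpm / 120, together with the definition of real power as apparent power times power factor. The short Python sketch below simply restates the numbers given in this section; it is a consistency check, not operational data.

# Sketch: check the generator figures quoted above.
poles = 80
rpm = 75
frequency_hz = poles * rpm / 120        # 80 * 75 / 120 = 50 Hz

rated_mva = 778
power_factor = 0.9
rated_mw = rated_mva * power_factor     # about 700 MW of real power

main_units, unit_mw = 32, 700
plant_units, plant_mw = 2, 50
total_mw = main_units * unit_mw + plant_units * plant_mw  # 22,500 MW

print(frequency_hz, round(rated_mw), total_mw)  # 50.0 700 22500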
The generators were manufactured by two joint ventures: Alstom, ABB, Kvaerner, and the Chinese company Harbin Motor; and Voith, General Electric, Siemens (abbreviated as VGS), and the Chinese company Oriental Motor. The technology transfer agreement was signed together with the contract. Most of the generators are water-cooled. Some of the newer ones are air-cooled, making them simpler in design and easier to manufacture and maintain.
Generator installation progress
The first north-side main generator (No. 2) started up on July 10, 2003. The north side became completely operational on September 7, 2005, with the implementation of generator No. 9. Full power (9,800 MW) was eventually achieved on October 18, 2006, after the water level reached 156 meters.
On the south side, main generator No. 22 started up on June 11, 2007, and No. 15 became operational on October 30, 2008. The sixth (No. 17) began operation on December 18, 2007, raising capacity to 14.1 GW, exceeding that of Itaipu dam (14.0 GW) to become the world's largest hydro power plant by capacity.
When the last main generator (No. 27) finished its final test on May 23, 2012, the six underground main generators were all operational, raising the capacity to 22.5 GW. After nine years of construction, installation and testing, the power plant was fully operational by July 2012.
Output milestones
By August 16, 2011, the plant had generated 500 TWh of electricity. In July 2008 it generated 10.3 TWh of electricity, its first month over 10 TWh. On June 30, 2009, after the river flow rate increased to over 24,000 m³/s, all 28 generators were switched on, producing only 16,100 MW because the head available during flood season is insufficient. During an August 2009 flood, the plant first reached its maximum output for a short period.
During the November to May dry season, power output is limited by the river's flow rate. When there is enough flow, power output is limited by plant generating capacity. The maximum power-output curves were calculated based on the average flow rate at the dam site, assuming the water level is 175 m and the plant gross efficiency is 90.15%. The actual power output in 2008 was obtained based on the monthly electricity sent to the grid.
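As a rough illustration of how such power-output curves are derived, hydroelectric output can be estimated from the flow through the turbines and the net head using P = η·ρ·g·Q·H. The Python sketch below uses the 90.15% gross efficiency stated above, but the flow rate and head in the example call are illustrative assumptions rather than measured operating data for the dam.

# Sketch: hydro power from flow and head, P = eta * rho * g * Q * H.
RHO = 1000.0           # water density, kg/m^3
G = 9.81               # gravitational acceleration, m/s^2
EFFICIENCY = 0.9015    # gross efficiency quoted above

def hydro_power_mw(flow_m3s, head_m, efficiency=EFFICIENCY):
    """Electrical power in MW for a given flow (m^3/s) and net head (m)."""
    return efficiency * RHO * G * flow_m3s * head_m / 1e6

# Assumed values: 30,000 m^3/s through the turbines at an 80 m net head
print(round(hydro_power_mw(30_000, 80)))   # roughly 21,200 MW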
The Three Gorges Dam reached its design-maximum reservoir water level of for the first time on October 26, 2010, in which the intended annual power-generation capacity of 84.7 TWh was realized. It has a combined generating capacity of 22.5 gigawatts and a designed annual generation capacity of 88.2 TWh. In 2012, the dam's 32 generating units generated a record 98.1 TWh of electricity, which accounts for 14% of China's total hydro generation. Between 2012 (first year with all 32 generating units operating) and 2021, the dam generated an average of 97.22 TWh of electricity per year, higher than Itaipu dam's average of 89.22 TWh of electricity per year during the same period. Due to the extensive 2020 monsoon season rainfall, the annual production reached ~112 TWh that year, which broke the previous world record of annual production by Itaipu Dam equal to ~103 TWh.
Distribution
The State Grid Corporation and China Southern Power Grid paid a flat rate of ¥250 per MWh (US$35.7) until July 2, 2008. Since then, the price has varied by province, from ¥228.7 to ¥401.8 per MWh. Higher-paying customers, such as Shanghai, receive priority. Nine provinces and two cities consume power from the dam.
Power distribution and transmission infrastructure cost about 34.387 billion yuan. Construction was completed in December 2007, one year ahead of schedule.
Power is distributed over multiple 500 kV transmission lines. Three direct current (DC) lines to the East China Grid carry 7,200 MW: Three Gorges – Shanghai (3,000 MW), HVDC Three Gorges – Changzhou (3,000 MW), and HVDC Gezhouba – Shanghai (1,200 MW). The alternating current (AC) lines to the Central China Grid have a total capacity of 12,000 MW. The DC transmission line HVDC Three Gorges – Guangdong to the South China Grid has a capacity of 3,000 MW.
The dam was expected to provide 10% of China's power. However, electricity demand has increased more quickly than previously projected. Even fully operational, and despite its size, it supplied on average only about 1.7% of electricity demand in China in 2011, when Chinese electricity demand reached 4,692.8 TWh.
Environmental impact
Emissions
According to the National Development and Reform Commission, 366 grams of coal would produce 1 kWh of electricity during 2006. From 2003 to 2007, power production equaled that of 84 million tonnes of standard coal.
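The 366 g/kWh figure allows generation totals to be expressed as standard-coal equivalents. The sketch below shows the conversion; the 100 TWh input is an illustrative assumption, not a reported generation figure.

# Sketch: convert generation to standard-coal equivalent at 366 g/kWh.
COAL_G_PER_KWH = 366

def coal_equivalent_mt(generation_twh):
    """Million tonnes of standard coal equivalent for generation given in TWh."""
    kwh = generation_twh * 1e9      # 1 TWh = 1e9 kWh
    grams = kwh * COAL_G_PER_KWH
    return grams / 1e12             # grams -> million tonnes (1 Mt = 1e12 g)

print(round(coal_equivalent_mt(100), 1))  # about 36.6 Mt of standard coal for 100 TWh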
Erosion and sedimentation
Two hazards are uniquely identified with the dam: that sedimentation projections are not agreed upon, and that the dam sits on a seismic fault. At current levels, 80% of the land in the area is eroding, depositing about 40 million tons of sediment into the Yangtze annually. Because the flow is slower above the dam, much of this sediment settles there instead of flowing downstream, and there is less sediment downstream.
The absence of silt downstream has three effects:
Some hydrologists expect downstream riverbanks to become more vulnerable to flooding.
Shanghai, more than away, rests on a massive sedimentary plain. The "arriving silt – so long as it does arrive – strengthens the bed on which Shanghai is built ... the less the tonnage of arriving sediment the more vulnerable is this biggest of Chinese cities to inundation".
Benthic sediment buildup causes biological damage and reduces aquatic biodiversity.
Landslides
Erosion in the reservoir, induced by rising water, causes frequent major landslides that have led to noticeable disturbance in the reservoir surface, including two incidents in May 2009 when somewhere between of material plunged into the flooded Wuxia Gorge of the Wu River. In the first four months of 2010, there were 97 significant landslides.
Waste management
The dam catalyzed improved upstream wastewater treatment around Chongqing and its suburban areas. According to the Ministry of Environmental Protection, as of April 2007, more than 50 new plants could treat 1.84 million tonnes per day, 65% of the total need. About 32 landfills were added, which could handle 7,664.5 tonnes of solid waste every day. Over one billion tons of wastewater are released into the river annually; before the reservoir was created, this waste was more likely to be swept away downstream, but it now leaves the water stagnant, polluted and murky.
Forest cover
In 1997, the Three Gorges area had 10% forestation, down from 20% in the 1950s.
Research by the United Nations Food and Agriculture Organization suggested that the Asia-Pacific region would gain about of forest by 2008. That is a significant change from the net loss of forest each year in the 1990s. This is largely due to China's large reforestation effort. This accelerated after the 1998 Yangtze River floods convinced the government that it should restore tree cover, especially in the Yangtze's basin upstream of the Three Gorges Dam.
Wildlife
Concerns about the dam's impact on wildlife predate the National People's Congress's approval in 1992. This region has long been known for its rich biodiversity. It is home to 6,388 plant species, which belong to 238 families and 1,508 genera. Of these species, 57 are endangered. These rare species are also used as ingredients in traditional Chinese medicines. The proportion of forested area in the region surrounding the Three Gorges Dam dropped from 20% in 1950 to less than 10% as of 2002, adversely affecting all plant species there. The region also provides habitats to hundreds of freshwater and terrestrial animal species. Freshwater fish are especially affected by dams due to changes in the water temperature and flow regime. Many other fish are injured in the hydroelectric plants' turbine blades. This is particularly detrimental to the region's ecosystem because the Yangtze River basin is home to 361 different fish species and accounts for 27% of China's endangered freshwater fish species. Other aquatic species have been endangered by the dam, particularly the baiji, or Chinese river dolphin, now extinct. In fact, Chinese Government scholars even claim that the Three Gorges Dam directly caused the extinction of the baiji.
Of the 3,000 to 4,000 remaining critically endangered Siberian cranes, many spend the winter in wetlands that the Three Gorges Dam will destroy. Populations of the Yangtze sturgeon are guaranteed to be "negatively affected" by the dam. In 2022 the Chinese paddlefish was declared extinct, with the last confirmed sighting in 2003.
Terrestrial impact
In 2005, NASA scientists calculated that the shift of water mass stored by the dams would increase the total length of the Earth's day by 0.06 microseconds and make the Earth slightly more round in the middle and flat on the poles. A study published in 2022 in the journal Open Geosciences suggests that the change of reservoir water level affects the gravity field in western Sichuan, which in turn affects the seismicity in that area.
Floods, agriculture, industry
An important function of the dam is to control flooding, which is a major problem for the seasonal river of the Yangtze. Millions of people live downstream of the dam, with many large, important cities like Wuhan, Nanjing, and Shanghai located adjacent to the river. Large areas of farmland and China's most important industrial area are situated beside the river.
The reservoir's flood storage capacity is . This capacity will reduce the frequency of major downstream flooding from once every 10 years to once every 100 years. The dam is expected to minimize the effect of even a "super" flood. The river flooded in 1954 over an area of , killing 33,169 people and forcing almost 18.9 million people to move. The flood waters covered Wuhan, a city of eight million people, for over three months, and the Jingguang Railway was out of service for more than 100 days. The 1954 flood carried of water. The dam could only divert the water above Chenglingji, leaving to be diverted. The dam cannot protect against some of the large tributaries downstream, including the Xiang, Zishui, Yuanshui, Lishui, Hanshui, and Gan.
In 1998, a flood in the same area caused billions of dollars worth of damage, when of farmland were flooded. The flood affected more than 2.3 million people, killing 1,526. In early August 2009, the largest flood in five years passed through the dam site. During this flood, the dam limited the water flow to less than per second, raising the upstream water level from on August 1, to on August 8. A full of flood water was captured and the river flow was cut by as much as per second.
The dam discharges its reservoir during the dry season every year, between December and March. This increases the flow rate of the river downstream, providing fresh water for agricultural and industrial usage, and improving shipping conditions. The water level upstream drops from , in preparation for the rainy season. The water also powers the Gezhouba Dam downstream.
Since the filling of the reservoir in 2003, the Three Gorges Dam has supplied an extra of fresh water to downstream cities and farms over the course of the dry season.
During the South China floods in July 2010, inflows at the Three Gorges Dam reached a peak of , exceeding the peak inflow during the 1998 Yangtze River floods. The dam's reservoir rose nearly in 24 hours and reduced the outflow to in discharges downstream, preventing any significant impact on the middle and lower river.
Navigating the dam
Locks
The installation of ship locks is intended to increase river shipping from ten million to 100 million tonnes annually; as a result, transportation costs are expected to be cut by 30 to 37%. Shipping will become safer, since the gorges are notoriously dangerous to navigate.
There are two series of ship locks installed near the dam (). Each of them is made up of five stages, with transit time at around four hours. Maximum vessel size is 10,000 tons. The locks are 280 m long, 35 m wide, and 5 m deep (918 × 114 × 16.4 ft). That is longer than those on the St Lawrence Seaway, but half as deep. Before the dam was constructed, the maximum freight capacity at the Three Gorges site was 18.0 million tonnes per year. From 2004 to 2007, a total of 198 million tonnes of freight passed through the locks. The freight capacity of the river increased six times and the cost of shipping was reduced by 25%. Originally, the total capacity of the ship locks was expected to reach 100 million tonnes per year. In 2022, their cargo turnover reached 159.65 million tons, with an annual increase of 6% over the previous few years.
These locks are staircase locks, whereby inner lock gate pairs serve as both the upper gate of the chamber below and the lower gate of the chamber above. The gates are the vulnerable hinged type, which, if damaged, could temporarily render the entire flight unusable. As there are separate sets of locks for upstream and downstream traffic, this system is more water efficient than bi-directional staircase locks.
Ship lift
In addition to the canal locks, there is a ship lift, a kind of elevator for vessels. The ship lift can lift ships of up to 3,000 tons. The vertical distance traveled is , and the size of the ship lift's basin is . The ship lift takes 30 to 40 minutes to transit, as opposed to the three to four hours for stepping through the locks. One complicating factor is that the water level can vary dramatically. The ship lift must work even if water levels vary by on the lower side, and on the upper side.
The ship lift's design uses a helical gear system to climb or descend a toothed rack.
The ship lift was not yet complete when the rest of the project was officially opened on May 20, 2006.
In November 2007, it was reported in the local media that construction of the ship lift started in October 2007.
In February 2012, Xinhua reported that the four towers intended to support the ship lift were nearly complete. The report said that by that time, the towers had reached of the anticipated .
As of May 2014, the ship lift was expected to be completed by July 2015. It was tested in December 2015 and announced complete in January 2016. Lahmeyer, the German firm that designed the ship lift, said it will take a vessel less than an hour to transit the lift. An article in Steel Construction says the actual time of the lift will be 21 minutes. It says that the expected dimensions of the passenger vessels the ship lift's basin was designed to carry will be . The moving mass (including counterweights) is 34,000 tonnes.
Trials of the ship lift finished in July 2016; the first cargo ship was lifted on July 15, with a lift time of 8 minutes. Shanghai Daily reported that the first operational use of the lift was on September 18, 2016, when limited "operational testing" of the lift began.
Portage railways
Plans also exist for the construction of short portage railways bypassing the dam area altogether. Two short rail lines, one on each side of the river, are to be constructed. The northern portage railway () will run from the Taipingxi port facility () on the northern side of the Yangtze, just upstream from the dam, via Yichang East Railway Station to the Baiyang Tianjiahe port facility in Baiyang Town (白洋镇), below Yichang. The southern portage railway () will run from Maoping (upstream of the dam) via Yichang South Railway Station to Zhicheng (on the Jiaozuo–Liuzhou Railway).
In late 2012, preliminary work started along both future railway routes.
Displacement of residents
During planning, it was estimated that 13 cities, 140 towns and 1,350 villages would be partially or completely flooded by the reservoir, amounting to roughly 1.5% of Hubei's 60.3 million people and Chongqing Municipality's 31.44 million people. These people were moved to new homes by the Chinese government, which considered the displacement justified by the flood protection provided for the communities downstream of the dam.
Between 2002 and 2005, Canadian photographer Edward Burtynsky documented the impact of the project on the surrounding areas, including the town of Wanzhou. Other photographers who recorded the change include Chengdu-based Muge, Paris-based Zeng Nian (originally from Jiangsu), and Israeli Nadav Kander. Living conditions deteriorated for many, and hundreds of thousands of people could not find work. The older generation was particularly affected, but younger generations benefited from the educational and career opportunities afforded by moving to large cities with new, modern companies and schools.
Some 2007 reports claimed that Chongqing Municipality would encourage four million more people to move away from the dam to Chongqing's main urban area by 2020. The municipal government asserted that the relocation was driven by urbanization rather than being a direct result of the dam project, and that the people involved included residents from other areas of the municipality.
By June 2008, China had moved 1.24 million residents as far as Gaoyang in Hebei Province, and the moves concluded the following month.
Other effects
Cultural and history
The area which would fill with water behind the dam included locations with significant cultural history. The State Council authorized a ¥505 million archaeology salvage effort. Over the course of several years, archaeologists excavated 723 sites and conducted surface archaeology recovery missions at an additional 346 sites. Archaeologists recovered 200,000 artifacts, of which 13,000 were considered particularly historically or culturally notable. As part of this effort, the old Chongqing City Museum was replaced by the Chongqing China Sanxia Museum to house many of the recovered artifacts.
Recovered structures that were too large for museums were moved upland to reconstruction districts (fu jian qu), which are outdoor museum parks. Recovered structures placed in such parks include temples, pavilions, houses, and bridges, among others.
Some sites could not be moved because of their location, size, or design, such as the hanging coffins site high in the Shen Nong Gorge, part of the cliffs.
National security
The United States Department of Defense reported that in Taiwan, "proponents of strikes against the mainland apparently hope that merely presenting credible threats to China's urban population or high-value targets, such as the Three Gorges Dam, will deter Chinese military coercion". Destroying the Three Gorges Dam has been a tactic discussed and debated in Taiwan since the early 1990s, when the dam was still in the planning phase. The notion that the military in Taiwan would seek to destroy the dam provoked an angry response from the mainland Chinese media. People's Liberation Army General Liu Yuan was quoted in the China Youth Daily saying that the People's Republic of China would be "seriously on guard against threats from Taiwan independence terrorists". Former Taiwanese Ministry of Defense advisor Sung Chao-wen called the notion of using cruise missiles to destroy the Three Gorges Dam "ridiculous", saying missiles would deliver minimal damage to the reinforced concrete, and any attack attempts would have to go through multiple layers of ground and air defenses.
The Three Gorges Dam is a steel-concrete gravity dam. The water is held back by the innate mass of the individual dam sections. As a result, damage to an individual section should not affect other parts of the dam. Zhang Boting, deputy secretary-general of China Society for Hydropower Engineering, suggested that concrete gravity dams such as the Three Gorges Dam are resistant to nuclear strikes.
Debate among Chinese scholars and analysts about the basic principles of China's no first use of nuclear weapons policy includes whether to include narrow exceptions, such as acts that produce catastrophic consequences equivalent to that of a nuclear attack, including attacks intended to destroy the Three Gorges Dam.
Structural integrity
Immediately after the reservoir was first filled, around 80 hairline cracks were observed in the dam's structure. Still, an expert group gave the project an overall good-quality rating. The 163,000 concrete units all passed quality testing, with normal deformation within design limits.
Upstream dams
In order to maximize the utility of the Three Gorges Dam and cut down on sedimentation from the Jinsha River, the upper course of the Yangtze River, authorities are building a series of dams on the Jinsha, including the now completed Wudongde, Baihetan, Xiluodu, and Xiangjiaba dams. The total capacity of those four dams is 38,500 MW, almost double the capacity of the Three Gorges.
Baihetan became fully operational in 2022. Wudongde was opened in June 2021. Another eight dams are in the midstream of the Jinsha and eight more upstream of it.
| Technology | Hydraulic infrastructure | null |
65195 | https://en.wikipedia.org/wiki/Kimura%20spider | Kimura spider | Heptathela kimurai, the Kimura spider, or kimura-gumo (in Japanese), is an Old World spider, found primarily in Japan and named after Arika Kimura, who collected it in 1920. It belongs to the sub-order Mesothelae (primitive burrowing spiders) and can reach up to 3 cm in length. Its burrows are covered by a camouflaged "pill box" flap.
Taxonomy
Heptathela kimurai was first described by Kyukichi Kishida in 1920, when it was placed in the genus Liphistius as L. kimurai. In 1923, Kishida erected the genus Heptathela for the species, as on re-examination he decided it was sufficiently distinct from spiders of the genus Liphistius. The species name kimurai honours the collector, Arika Kimura.
| Biology and health sciences | Spiders | Animals |
65316 | https://en.wikipedia.org/wiki/Canna%20%28plant%29 | Canna (plant) | Canna or canna lily is the only genus of flowering plants in the family Cannaceae, consisting of 10 species. All of the genus's species are native to the American tropics and were naturalized in Europe, India and Africa in the 1860s. Although they grow native to the tropics, most cultivars have been developed in temperate climates and are easy to grow in most countries of the world, as long as they receive at least 6–8 hours average sunlight during the summer, and are moved to a warm location for the winter. See the Canna cultivar gallery for photographs of Canna cultivars.
Cannas are not true lilies, but have been assigned by the APG II system of 2003 to the order Zingiberales in the monocot clade Commelinids, together with their closest relatives, the gingers, spiral gingers, bananas, arrowroots, heliconias, and birds of paradise.
The plants have large foliage, so horticulturists have developed selected forms as large-flowered garden plants. Cannas are also used in agriculture as a source of starch for human and animal consumption. C. indica and C. glauca have been grown into many cultivars in India and Africa.
Description
The plants are large tropical and subtropical herbaceous perennials with a rhizomatous rootstock. The broad, flat, alternate leaves that are such a feature of these plants grow out of a stem in a long, narrow roll and then unfurl. The leaves are typically solid green, but some cultivars have glaucous, brownish, maroon, or even variegated leaves.
The flowers are asymmetric and composed of three sepals and three petals that are small, inconspicuous, and hidden under extravagant stamens. What appear to be petals are the highly modified stamens or staminodes. The staminodes number (1–)3(–4), with at least one staminodal member, called the labellum, always being present. A specialized staminode, the stamen, bears pollen from a half-anther. A somewhat narrower "petal" is the pistil, which is connected down to a three-chambered ovary.
The flowers are typically red, orange, or yellow, or any combination of those colours, and are aggregated in inflorescences that are spikes or panicles (thyrses). The main pollinators of the flowers are bees, hummingbirds, sunbirds, and bats. The pollination mechanism is conspicuously specialized. Pollen is shed on the style while still in the bud, and in the species and early hybrids, some is also found on the stigma because of the high position of the anther, which means that they are self-pollinating. Later cultivars have a lower anther, and rely on pollinators alighting on the labellum and touching first the terminal stigma, and then the pollen.
The wild species often grow to at least in height, but wide variation in size exists among cultivated plants; numerous cultivars have been selected for smaller stature.
Cannas grow from swollen underground stems, correctly known as rhizomes, which store starch, and this is the main attraction of the plant to agriculture, having the largest starch grains of all plant life.
Canna is the only member of the Liliopsida class (monocot group) in which hibernation of seed is known to occur, due to its hard, impenetrable seed covering.
Taxonomy
History
The name Canna originates from the Latin word for a cane or reed.
Canna indica, commonly called achira in Latin America, has been cultivated by Native Americans in tropical America for thousands of years, and was one of the earliest domesticated plants in the Americas. The starchy root is edible.
The first species of Canna introduced to Europe was C. indica, which was imported from the East Indies, though the species originated from the Americas. Charles de l'Ecluse, who first described and sketched C. indica, indicated this origin, and stated that it was given the name indica, not because the plant is from India, in Asia, but because this species was originally transported from America: Quia ex America primum delata sit; at that time, the tropical areas of that part of the globe were described as the West Indies.
Much later, in 1658, Willem Piso made reference to another species that he documented under the vulgar or common name of 'Albara' and 'Pacivira', which resided, he said, in the "shaded and damp places, between the tropics"; this species is C. angustifolia L. (later reclassified as C. glauca L. by taxonomists).
Phylogeny
Species
Although most cannas grown these days are cultivars (see below), about 20 known species are of the wild form, and in the last three decades of the 20th century, Canna species have been categorized by two different taxonomists, Paul Maas from the Netherlands and Nobuyuki Tanaka from Japan. Both reduced the number of species from the 50–100 accepted previously, assigning most as synonyms.
This reduction in species is also confirmed by work done by Kress and Prince at the Smithsonian Institution, but this only covers a subset of the species range.
Distribution
The genus is native to tropical and subtropical regions of the New World, from the Southern United States (southern South Carolina west to southern Texas) and south to northern Argentina.
C. indica has become naturalized in many tropical areas around the world, is a difficult plant to remove, and is invasive in some places.
Canna cultivars are grown in most countries, even those with territory above the Arctic Circle, which have short summers but long days; the rapid growth rate of cannas makes them feasible garden plants there, as long as they receive 6–8 hours of sunlight each day during the growing season and are protected from the cold of winter.
Ecology
Pests
Cannas are largely free of pests. However, in the eastern and southern United States, plants sometimes fall victim to the canna leaf roller moth; the resultant leaf damage, while not fatal to the plant, can be most distressing to a keen gardener's eye.
Slugs and snails are quite fond of cannas and their large, juicy leaves, potentially leaving unsightly holes where they have chewed on the plant—particularly during and after rainy periods (when mollusks become active). Slugs and snails tend to prefer tender, younger foliage, however. Red spider mites may also be a potential pest for cannas grown indoors, in dry areas, or with poor airflow.
For cannas grown outside (in California or Texas, for example), mealybugs and scale insects are most drawn to the dense folds and creases between the leaves and the stem or petiole, where the foliage attaches to the plant. If left unchecked, these sucking insects can remain effectively concealed in these tight areas until older or dead leaves are peeled off to reveal a small colony of white, fuzzy mealybugs. Mealybugs are particularly prevalent in drier climates, such as the Southwestern US. Japanese beetles can also ravage the leaves if left uncontrolled.
These pests, while certainly able to drain a plant of its energy over time and cause its eventual decline, are generally not lethal to the plant when dealt with promptly. Most insect pests on canna plants can be treated with a slightly diluted 70% isopropyl alcohol mist, applied during non-sunny periods, as the alcohol may otherwise cause sunburn on the plant. Other effective options include insecticidal soap, neem and horticultural oils, and other commercially available spray treatments. Granulated systemic insecticides are also useful and generally safe; when applied to the soil every few months, granulated or powdered systemics prevent nearly all pest infestations for the duration of their effectiveness. Non-scented baby wipes or paper towels moistened with rubbing alcohol or apple cider vinegar may be used to wipe unseen eggs or larvae from the leaves.
Disease
Cannas are remarkably free of diseases compared to many genera. However, they may fall victim to canna rust, a fungal disease encouraged by over-moist soil that produces orange spots on the plant's leaves. They are also susceptible to certain plant viruses, some of which are Canna-specific, which may result in spotted or streaked leaves in mild cases, but can ultimately cause stunted growth and twisted, distorted blooms and foliage.
The flowers are sometimes affected by a grey, fuzzy mold called botrytis. Under humid conditions, it is often found growing on the older flowers. Treatment is to simply remove the old flowers, so the mold does not spread to the new flowers.
Cultivation
Cannas grow best in full sun with moderate water in well-drained, rich or sandy soil. They grow from perennial rhizomes, but are frequently grown as annuals in temperate zones for an exotic or tropical look in the garden. In arid regions, cannas are often grown in the water garden, with the lower inch of pot submerged. In all areas, high winds tear the leaves, so shelter is advised.
The rhizomes are sensitive to frost and will rot if left unprotected in freezing conditions. In areas with winter temperatures below (< USDA Zone 8b), the rhizomes can be dug up before freezing and stored (above ) for replanting in the spring. Otherwise, they should be protected by a thick layer of mulch over winter.
Uses
Some species and many cultivars are widely grown in the garden in temperate and subtropical regions. Sometimes, they are also grown as potted plants. A large number of ornamental cultivars have been developed. They can be used in herbaceous borders, tropical plantings, and as a patio or decking plant.
Internationally, cannas are one of the most popular garden plants, and a large horticultural industry depends on the plant.
The rhizomes of cannas are rich in starch, and have many uses in agriculture. All of the plant material has commercial value, rhizomes for starch (consumption by humans and livestock), stems and foliage for animal fodder, young shoots as a vegetable, and young seeds as an addition to tortillas.
The seeds are used as beads in jewelry.
The seeds are used as the mobile elements of the kayamb, a musical instrument from Réunion, as well as the hosho, a gourd rattle from Zimbabwe, where the seeds are known as hota seeds.
In more remote regions of India, cannas are fermented to produce alcohol.
The plant yields a fibre from the stem, which is used as a jute substitute.
A fibre obtained from the leaves is used for making paper. The leaves are harvested in late summer after the plant has flowered; they are scraped to remove the outer skin and are then soaked in water for two hours prior to cooking. The fibres are cooked for 24 hours with lye and then beaten in a blender. They make a light tan to brown paper.
A purple dye is obtained from the seed.
Smoke from the burning leaves is said to be insecticidal.
Cannas are used to extract many undesirable pollutants in a wetland environment as they have a high tolerance to contaminants.
In Thailand, cannas are a traditional gift for Father's Day.
In Vietnam, canna starch is used to make cellophane noodles known as miến dong.
Cannas attract hummingbirds, so can be part of a pollinator and wildlife habitat strategy.
Horticultural varieties (cultivars)
Cannas became very popular in Victorian times as garden plants, and were grown widely in France, Germany, Hungary, India, Italy, the United Kingdom, and the United States. Some cultivars from this time, including a sterile hybrid, usually referred to as Canna × ehemannii, are still commercially available. C. × ehemannii is tall and green-leafed with terminal drooping panicles of hot pink iris-like flowers, looking somewhat like a cross between a banana and a fuchsia.
As tender perennials in northern climates, they suffered severe setbacks when two world wars sent the young gardening staff off to war. The genus Canna has recently experienced a renewed interest and revival in popularity. Once, hundreds of cultivars existed, but many are now extinct. In 1910, Árpád Mühle, from Hungary, published his Canna book, written in German. It contained descriptions of over 500 cultivars.
In recent years, many new cultivars have been created, but the genus suffers severely from having many synonyms for many popular cultivars. Most of the synonyms arose when old varieties resurfaced without viable names as the genus regained popularity from the 1960s onwards. Research has accumulated over 2,800 Canna cultivar names, but many of these are simply synonyms. See List of Canna hybridists for details of the people and firms that created the current Canna legacy.
In the early 20th century, Professor Liberty Hyde Bailey defined, in detail, two "garden species" (C. × generalis and C. × orchiodes) to categorise the floriferous cannas being grown at that time, namely the Crozy hybrids and the orchid-like hybrids introduced by Carl Ludwig Sprenger in Italy and Luther Burbank in the U.S., at about the same time (1894). The definition was based on the genotype, rather than the phenotype, of the two cultivar groups. Inevitably over time, those two floriferous groups were interbred, the distinctions became blurred and overlapped, and the Bailey species names became redundant. Pseudo-species names are now deprecated by the International Code of Nomenclature for Cultivated Plants which, instead, provides Cultivar Groups for categorising cultivars (see groups at List of Canna cultivars).
AGM cultivars
These canna cultivars have gained the Royal Horticultural Society's Award of Garden Merit:
'Alaska' (cream flushed yellow)
'Annaeei' (large blue-green leaves)
C. × ehemannii (deep pink)
'Erebus' (coral pink)
'General Eisenhower' (bronze leaves, orange flowers)
'Louis Cayeux' (salmon pink)
'Musifolia' (large leaves flushed bronze)
'Mystique' (bronze leaves)
'Phasion' (bronze leaves, orange flowers)
'Picasso' (yellow spotted red flowers)
'Russian Red' (bronze leaves)
'Shenandoah' (flesh pink)
'Verdi' (bright orange)
'Whithelm Pride' (bright pink)
'Wyoming' (bright orange)
Agricultural varieties
The Canna Agriculture Group contains all of the varieties of Canna grown in agriculture. "Canna achira" is a generic term used in South America to describe the cannas that have been selectively bred for agricultural purposes, normally derived from C. discolor. It is grown especially for its edible rootstock from which starch is obtained, but the leaves and young seeds are also edible, and achira was once a staple food crop in Peru and Ecuador. Trials in Ecuador using a wide range of varieties have shown that achira can yield on average 56 tons of rhizomes and 7.8 tons of extractable starch per hectare. However, the crop needs 9–12 months to mature to full productivity.
Many more traditional kinds exist worldwide; they have all involved human selection, so are classified as agricultural cultivars. Traditionally, Canna edulis Ker Gawl. has been reputed to be the species grown for food in South America, but C. edulis probably is simply a synonym of C. discolor, which is also grown for agricultural purposes throughout Asia.
Propagation
Sexual propagation
Seeds are produced from sexual reproduction, involving the transfer of pollen from the stamen of the pollen parent onto the stigma of the seed parent. In the case of Canna, the same plant can usually play the roles of both pollen and seed parents, technically referred to as a hermaphrodite. However, the cultivars of the Italian group and triploids are almost always seed sterile, and their pollen has a low fertility level. Mutations are almost always totally sterile.
Canna seeds have a very hard seed coat, which contributes to their dormancy. Germination is facilitated by scarification of the seed coat, which can be accomplished by several techniques.
Pollination
The species are capable of self-pollination, but most cultivars require an outside pollinator. All cannas produce nectar, so attract nectar-consuming insects, bats, and hummingbirds, that act as the transfer agent, spreading pollen between stamens and stigmas on the same or different flowers.
Genetic changes
Since genetic recombination has occurred, a cultivar grown from seed will have different characteristics from its parent(s), thus should never be given a parent's name. The wild species have evolved in the absence of other Canna genes and are usually true to type when the parents are of the same species, but a degree of variance still occurs. The species C. indica is an aggregate species, having many different and extreme forms ranging from the giant to miniature, from large foliage to small foliage, both green and dark foliage, and many differently coloured blooms of red, orange, pink, or yellow, and combinations of those colours.
Asexual propagation
Division of plant parts
Outside of a laboratory, the only effective asexual propagation method is rhizome division. This uses material from a single parent, and as no exchange of genetic material occurs, it almost always produces plants that are identical to the parent. After a summer's growth, the horticultural cultivars can be separated into typically four or five separate smaller rhizomes, each with a growing nodal point (growing eye). Without the growing point, which is composed of meristem material, the rhizome will not grow.
Micropropagation
Micropropagation, also known as tissue culture, is the practice of rapidly multiplying stock plant material to produce a large number of progeny plants. Micropropagation uses in vitro division of small pieces in a sterile environment, where they first produce proliferations of tissue, which are then separated into small pieces that are treated differently so that they produce roots and new stem tissue. The steps in the process are regulated by different ratios of plant growth regulators. Many commercial organizations have produced cannas this way, and specifically the "Island Series" of cannas was introduced by means of mass-produced plants using this technique. However, cannas have a reputation for being difficult micropropagation candidates.
Micropropagation techniques can be employed to disinfest plants of a virus. In the growing tip of a plant, cell division is so rapid that the younger cells may not have had time to be infected with the virus. The rapidly growing region of meristem cells producing the shoot tip is cut off and placed in vitro, with a very high probability of being uncontaminated by virus.
Citations
General and cited references
External links
Cannaceae in Flora of North America
Indian Canna
Canna × generalis from Floridata
Canna indica hybrids
Canna indica: Indian Shot
Reappraisal of Edible Canna as a High-Value Starch Crop in Vietnam
Crop Growth and Starch Productivity of Edible Canna
The utilization of edible Canna plants in southeastern Asia and southern China
Constructed wetland for on-site septic treatment.
Zingiberales genera | Biology and health sciences | Monocots | null |
65344 | https://en.wikipedia.org/wiki/SIDS | SIDS | Sudden infant death syndrome (SIDS), sometimes known as cot death or crib death, is the sudden unexplained death of a child of less than one year of age. Diagnosis requires that the death remain unexplained even after a thorough autopsy and detailed death scene investigation. SIDS usually occurs during sleep. Typically death occurs between the hours of midnight and 9:00a.m. There is usually no noise or evidence of struggle. SIDS remains the leading cause of infant mortality in Western countries, constituting half of all post-neonatal deaths.
The exact cause of SIDS is unknown. The requirement of a combination of factors including a specific underlying susceptibility, a specific time in development, and an environmental stressor has been proposed. These environmental stressors may include sleeping on the stomach or side, overheating, and exposure to tobacco smoke. Accidental suffocation from bed sharing (also known as co-sleeping) or soft objects may also play a role. Another risk factor is being born before 37 weeks of gestation. Between 1% and 5% of SIDS cases are estimated to be misidentified infanticides caused by intentional suffocation. SIDS makes up about 80% of sudden and unexpected infant deaths (SUIDs). The other 20% of cases are often caused by infections, genetic disorders, and heart problems.
The most effective method of reducing the risk of SIDS is putting a child less than one year old on their back to sleep. Other measures include a firm mattress separate from but close to caregivers, no loose bedding, a relatively cool sleeping environment, using a pacifier, and avoiding exposure to tobacco smoke. Breastfeeding and immunization may also be preventative. Measures not shown to be useful include positioning devices and baby monitors. Evidence is not sufficient for the use of fans. Grief support for families affected by SIDS is important, as the death of the infant is unexpected, unexplained, and can cause suspicion that the infant may have been intentionally harmed.
Rates of SIDS vary nearly tenfold in developed countries from one in a thousand to one in ten thousand. Globally, it resulted in about 19,200 deaths in 2015, down from 22,000 deaths in 1990. SIDS was the third leading cause of death in children less than one year old in the United States in 2011. It is the most common cause of death between one month and one year of age. About 90% of cases happen before six months of age, with it being most frequent between two months and four months of age. It is more common in boys than girls. Rates of SIDS have decreased by up to 80% in areas with "Safe to Sleep" campaigns.
Definition
The syndrome applies only to infants under one year of age. SIDS is a diagnosis of exclusion and should be applied to only those cases in which an infant's death is sudden and unexpected, and remains unexplained after the performance of an adequate postmortem investigation, including:
an autopsy (by an experienced pediatric pathologist, if possible);
investigation of the death scene and circumstances of the death; and
exploration of the medical history of the infant and family.
After investigation, some of these infant deaths are found to be caused by suffocation, hyperthermia or hypothermia, neglect or some other defined cause.
Australia and New Zealand shifted to the term sudden unexpected death in infancy (SUDI) for professional, scientific, and coronial clarity.
In addition, the US Centers for Disease Control and Prevention have proposed that such deaths be called sudden unexpected infant deaths (SUID) and that SIDS is a subset of SUID.
Age
SIDS has a four-parameter lognormal age distribution that spares infants shortly after birth—the time of maximal risk for almost all other causes of non-trauma infant death.
By definition, SIDS deaths occur under the age of one year, with the peak incidence occurring when the infant is two to four months old. This is considered a critical period because the infant's ability to rouse from sleep is not yet mature.
Risk factors
The exact cause of SIDS is unknown. Although studies have identified risk factors for SIDS, such as putting infants to bed on their bellies, there has been little understanding of the syndrome's biological process or its potential causes. Deaths from SIDS are unlikely to be due to a single cause, but rather to multiple risk factors. The frequency of SIDS does appear to be influenced by social, economic, or cultural factors, such as maternal education, race or ethnicity, or poverty. SIDS is believed to occur when an infant with an underlying biological vulnerability, who is at a critical development age, is exposed to an external trigger. The following risk factors generally contribute either to the underlying biological vulnerability or represent an external trigger:
Tobacco smoke
SIDS rates are higher in babies of mothers who smoke during pregnancy. Between no smoking and smoking one cigarette a day, on average, the risk doubles. About 22% of SIDS in the United States is related to maternal smoking. SIDS correlates with levels of nicotine and its derivatives in the baby. Nicotine and derivatives cause alterations in neurodevelopment.
Sleeping
Placing an infant to sleep while lying on the belly or side rather than on the back increases the risk for SIDS. This increased risk is greatest at two to three months of age. Elevated or reduced room temperature also increases the risk, as does excessive bedding, clothing, soft sleep surfaces, and stuffed animals in the bed. Bumper pads may increase the risk of SIDS due to the risk of suffocation. They are not recommended for children under one year of age, as this risk of suffocation greatly outweighs the risk of head bumping or limbs getting stuck in the bars of the crib.
Sharing a bed with parents or siblings increases the risk for SIDS. This risk is greatest in the first three months of life, when the mattress is soft, and when one or more persons share the infant's bed, especially when the bed partners are using drugs or alcohol or are smoking. The risk remains, however, even with parents who do not smoke or use drugs. The American Academy of Pediatrics thus recommends "room-sharing without bed-sharing", stating that such an arrangement can decrease the risk of SIDS by up to 50%. Furthermore, the academy has recommended against devices marketed to make bed-sharing "safe", such as "in-bed co-sleepers".
Room sharing as opposed to solitary sleeping is known to decrease the risk of SIDS.
Breastfeeding
Breastfeeding is associated with a lower risk of SIDS. It is not clear if co-sleeping among mothers who breastfeed without any other risk factors increases SIDS risk.
Pregnancy and infant factors
SIDS rates decrease with increasing maternal age, with teenage mothers at greatest risk. Delayed or inadequate prenatal care also increases risk. Low birth weight is a significant risk factor. In the United States from 1995 to 1998, the SIDS death rate for infants weighing 1000–1499 g was 2.89/1000, while for a birth weight of 3500–3999 g, it was only 0.51/1000. Premature birth increases the risk of SIDS death roughly fourfold. From 1995 to 1998, the U.S. SIDS rate for births at 37–39 weeks of gestation was 0.73/1000, while the SIDS rate for births at 28–31 weeks of gestation was 2.39/1000.
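The relative risks implied by these rates are simple ratios; the short Python sketch below reproduces that arithmetic using only the figures quoted above (the function name and structure are illustrative, not from the source).

```python
# Relative-risk arithmetic for the US 1995-1998 rates quoted above.
# All rates are in SIDS deaths per 1,000 live births, taken directly from the text.

def relative_risk(rate_exposed: float, rate_reference: float) -> float:
    """Ratio of two incidence rates."""
    return rate_exposed / rate_reference

low_birth_weight = relative_risk(2.89, 0.51)  # 1000-1499 g vs 3500-3999 g
preterm_birth = relative_risk(2.39, 0.73)     # 28-31 weeks vs 37-39 weeks gestation

print(f"Low birth weight: {low_birth_weight:.1f}x the reference rate")  # ~5.7x
print(f"Preterm birth: {preterm_birth:.1f}x the reference rate")        # ~3.3x
```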
Anemia has also been linked to SIDS (however, per item 6 in the list of epidemiologic characteristics below, extent of anemia cannot be evaluated at autopsy because an infant's total hemoglobin can only be measured during life). SIDS incidence rises from zero at birth, is highest from two to four months of age, and declines toward zero after the infant's first year.
Genetics
Genetics plays a role, as SIDS is more prevalent in males. There is a consistent 50% male excess in SIDS per 1000 live births of each sex. Given a 5% male excess birth rate, there appear to be 3.15 male SIDS cases per 2 female cases, for a male fraction of 0.61. This value of 61% in the US is an average of 57% black male SIDS, 62.2% white male SIDS and 59.4% for all other races combined. Note that when multiracial parentage is involved, infant race is arbitrarily assigned to one category or the other; most often it is chosen by the mother. The X-linkage hypothesis for SIDS and the male excess in infant mortality suggests that the 50% male excess might be related to a dominant X-linked allele, occurring with a frequency of 1/3, that is protective against transient cerebral anoxia. An unprotected male would occur with a frequency of 2/3 and an unprotected female would occur with a frequency of 4/9.
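The 61% male fraction and the allele frequencies above follow from elementary arithmetic; the sketch below makes the calculation explicit (the Hardy–Weinberg-style reasoning is supplied here as an illustration, not quoted from the source).

```latex
% Male fraction implied by 3.15 male SIDS cases per 2 female cases:
\[ \frac{3.15}{3.15 + 2} \approx 0.61 \]
% X-linkage sketch: a dominant protective allele of frequency p leaves a male
% (one X chromosome) unprotected with probability q = 1 - p, and a female
% (two X chromosomes) unprotected with probability q^2. A 50% male excess in
% risk then requires
\[ \frac{q}{q^{2}} = 1.5 \quad\Longrightarrow\quad q = \tfrac{2}{3}, \qquad p = \tfrac{1}{3}, \qquad q^{2} = \tfrac{4}{9}. \]
```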
About 10 to 20% of SIDS cases are believed to be due to channelopathies, which are inherited defects in the ion channels which play an important role in the contraction of the heart.
Genetic evidence published in November 2020 concerning the case of Kathleen Folbigg, who was imprisoned for the deaths of her children, showed that at least two of the children had genetic mutations in the CALM2 gene that predisposed them to heart complications. Folbigg was pardoned on 5 June 2023 after spending 20 years in jail.
Alcohol
Drinking of alcohol by parents is linked to SIDS. One study found a positive correlation between the two during New Year celebrations and weekends. Another found that alcohol use disorder was linked to a more than doubling of risk.
Other
A 2022 study found that infants who died of SIDS exhibited significantly lower specific activity of butyrylcholinesterase, an enzyme involved in the brain's arousal pathway, shortly after birth. This can serve as a biomarker to identify infants with a potential autonomic cholinergic dysfunction and elevated risk for SIDS.
SIDS has been linked to cold weather, with this association believed to be due to over-bundling and thus, overheating. Premature babies are at four times the risk of SIDS, possibly related to an underdeveloped ability to automatically control the cardiovascular system.
A 2-part edition of The Cook Report from 1994 claimed that antimony- and phosphorus-containing compounds used as fire retardants in PVC and other cot mattress materials were a cause of SIDS. Subsequent investigation by an Expert Panel led by Lady Limerick found that there was no evidence to support this claim. The report also states that toxic gas cannot be generated from antimony in mattresses and that babies had SIDS on mattresses that did not contain the compound.
It has been suggested that some cases of SIDS may be related to Staphylococcus aureus and Escherichia coli infections.
Diagnosis
Differential diagnosis
Some conditions that are often undiagnosed and could be confused with or comorbid with SIDS include:
medium-chain acyl-coenzyme A dehydrogenase deficiency (MCAD deficiency);
infant botulism;
long QT syndrome (accounting for less than 2% of cases);
Helicobacter pylori bacterial infections;
shaken baby syndrome and other forms of child abuse;
overlaying (smothering of a child during a carer's sleep)
For example, an infant with MCAD deficiency might die by "classical SIDS" if found swaddled and prone, with its head covered, in an overheated room where parents were smoking. Genes indicating susceptibility to MCAD and Long QT syndrome do not protect an infant from dying of classical SIDS. Therefore, the presence of a susceptibility gene, such as for MCAD, means the infant might have died either from SIDS or from MCAD deficiency. It is currently impossible for a pathologist to distinguish between them.
A 2010 study looked at 554 autopsies of infants in North Carolina that listed SIDS as the cause of death, and suggested that many of these deaths may have been due to accidental suffocation. The study found that 69% of autopsies listed other possible risk factors that could have led to death, such as unsafe bedding or sleeping with adults.
Several instances of infanticide have been uncovered in which the diagnosis was originally SIDS. Since an autopsy is often unable to determine whether asphyxiation is caused intentionally, medical practitioners rely on patient and family history and evidence of prior abuse to identify cases of infanticide. Some estimates in the 1980s and 1990s placed the potential rate of SIDS deaths caused by maltreatment around 10% and as high as 40%, but data from interventions such as the Safe to Sleep campaign suggests that these figures were substantially inflated. In 2006 the American Academy of Pediatrics estimated that between 1% and 5% of SIDS cases were potentially attributable to undiagnosed infanticide.
Some have underestimated the risk of two SIDS deaths occurring in the same family; the Royal Statistical Society issued a media release refuting expert testimony in one UK case, in which the conviction was subsequently overturned.
Prevention
A number of measures have been found to be effective in preventing SIDS, including changing the sleeping position to supine, breastfeeding, limiting soft bedding, immunizing the infant and using pacifiers. The use of electronic monitors has not been found to be useful as a preventative strategy. The effect that fans might have on the risk of SIDS has not been studied well enough to make any recommendation about them. Evidence regarding swaddling and SIDS risk is unclear. A 2016 review found tentative evidence that swaddling increases the risk of SIDS, especially among babies placed on their bellies or sides while sleeping.
Measures not shown to be useful include positioning devices and baby monitors. In the United States, companies that sell the monitors do not have FDA approval for them as medical devices.
Sleep positioning
Sleeping on the back has been found to reduce the risk of SIDS. It is thus recommended by the American Academy of Pediatrics and promoted as a best practice by the US National Institute of Child Health and Human Development (NICHD) "Safe to Sleep" campaign. The incidence of SIDS has fallen in a number of countries in which this recommendation has been widely adopted. Sleeping on the back does not appear to increase the risk of choking, even in those with gastroesophageal reflux disease. While infants in this position may sleep more lightly, this is not harmful. Sharing the same room as the parents but in a different bed may decrease the SIDS risk by half.
Pacifiers
The use of pacifiers appears to decrease the risk of SIDS, although the reason is unclear. The American Academy of Pediatrics considers pacifier use to prevent SIDS to be reasonable. Pacifiers do not appear to affect breastfeeding in the first four months, even though this is a common misconception.
Bedding
Product safety experts advise against using pillows, overly soft mattresses, sleep positioners, bumper pads (crib bumpers), stuffed animals, or fluffy bedding in the crib, and recommend instead dressing the child warmly and keeping the crib "naked."
Due to the obvious dangers, experts also warn that blankets or other coverings should not be placed over a baby's head.
The use of a "baby sleep bag" or "sleep sack", a soft bag with holes for the baby's arms and head, can be used as a type of bedding that warms the baby without covering its head.
Vaccination
Infants typically receive several vaccinations between the ages of 2 and 4 months, which is also the peak age for SIDS. Due to this coincidence, a number of studies have investigated the possible role of vaccinations as a cause of SIDS. These have found either no relation between vaccinations and SIDS, or a reduction of the risk of SIDS following vaccination. A 2007 meta-analysis found that vaccinations were associated with a halving of the risk of SIDS, and argued that immunisation should be a part of SIDS prevention campaigns.
Epidemiology
Globally, SIDS resulted in about 22,000 deaths, down from 30,000 deaths in 1990. Rates vary significantly by population, from 0.05 per 1000 in Hong Kong to 6.7 per 1000 in Native Americans.
SIDS was responsible for 0.54 deaths per 1,000 live births in the US in 2005. It is responsible for far fewer deaths than congenital disorders and disorders related to short gestation, though it is the leading cause of death in healthy infants after one month of age.
SIDS deaths in the US decreased from 4,895 in 1992 to 2,247 in 2004, a 54% decrease. During a similar time period, 1989 to 2004, SIDS as the listed cause of sudden infant death (SID) decreased from 80% to 55%, a 31% decrease. According to John Kattwinkel, chairman of the Centers for Disease Control and Prevention (CDC) Special Task Force on SIDS, "A lot of us are concerned that the rate (of SIDS) isn't decreasing significantly, but that a lot of it is just code shifting".
Race
In 2013, there were persistent disparities in SIDS deaths among racial and ethnic groups in the U.S. In 2009, the rates of death ranged from 20.3 per 100,000 live births for Asians and Pacific Islanders to 119.2 per 100,000 live births for Native Americans and Alaska Natives. African American infants have a 24% greater risk (100.7 per 100,000 live births) of having a SIDS-related death, compared to the U.S. population as a whole, and experience a 2.5 times greater incidence of SIDS than white infants. Rates are calculated per 100,000 live births to enable more accurate comparison across groups of different total population size.
Research suggests that factors which contribute more directly to SIDS risk—maternal age, exposure to smoking, safe sleep practices, etc.—vary by racial and ethnic group and therefore risk exposure also varies by these groups. Risk factors associated with prone sleeping patterns of African American families include mother's age, household poverty index, rural/urban status of residence, and infant's age. More than 50% of African American infants were placed in non-recommended sleeping positions, according to a 2012 study completed in South Carolina, indicating that cultural factors can be protective as well as problematic.
The rate of SIDS per 1000 births varies among ethnic groups in the United States:
Central Americans and South Americans: 0.20
Asian/Pacific Islanders: 0.28
Mexicans: 0.24
Puerto Ricans: 0.53
Whites: 0.51
African Americans: 1.08
Native American: 1.24
Society and culture
Many popular media portrayals of infants show them in non-recommended sleeping positions.
| Biology and health sciences | Specific diseases | Health |
65424 | https://en.wikipedia.org/wiki/Hydraulics | Hydraulics | Hydraulics () is a technology and applied science using engineering, chemistry, and other sciences involving the mechanical properties and use of liquids. At a very basic level, hydraulics is the liquid counterpart of pneumatics, which concerns gases. Fluid mechanics provides the theoretical foundation for hydraulics, which focuses on applied engineering using the properties of fluids. In its fluid power applications, hydraulics is used for the generation, control, and transmission of power by the use of pressurized liquids. Hydraulic topics range through some parts of science and most of engineering modules, and they cover concepts such as pipe flow, dam design, fluidics, and fluid control circuitry. The principles of hydraulics are in use naturally in the human body within the vascular system and erectile tissue.
Free surface hydraulics is the branch of hydraulics dealing with free surface flow, such as occurring in rivers, canals, lakes, estuaries, and seas. Its sub-field open-channel flow studies the flow in open channels.
History
Ancient and medieval eras
Early uses of water power date back to Mesopotamia and ancient Egypt, where irrigation has been used since the 6th millennium BC and water clocks had been used since the early 2nd millennium BC. Other early examples of water power include the Qanat system in ancient Persia and the Turpan water system in ancient Central Asia.
Persian Empire and Urartu
In the Persian Empire or previous entities in Persia, the Persians constructed an intricate system of water mills, canals and dams known as the Shushtar Historical Hydraulic System. The project, commenced by Achaemenid king Darius the Great and finished by a group of Roman engineers captured by Sassanian king Shapur I, has been referred to by UNESCO as "a masterpiece of creative genius". They were also the inventors of the Qanat, an underground aqueduct, around the 9th century BC. Several of Iran's large, ancient gardens were irrigated thanks to Qanats.
The Qanat spread to neighboring areas, including the Armenian highlands. There, starting in the early 8th century BC, the Kingdom of Urartu undertook significant hydraulic works, such as the Menua canal.
The earliest evidence of water wheels and watermills dates back to the ancient Near East in the 4th century BC, specifically to the Persian Empire before 350 BC, in the regions of Iraq, Iran, and Egypt.
China
In ancient China there was Sunshu Ao (6th century BC), Ximen Bao (5th century BC), Du Shi (circa 31 AD), Zhang Heng (78 – 139 AD), and Ma Jun (200 – 265 AD), while medieval China had Su Song (1020 – 1101 AD) and Shen Kuo (1031–1095). Du Shi employed a waterwheel to power the bellows of a blast furnace producing cast iron. Zhang Heng was the first to employ hydraulics to provide motive power in rotating an armillary sphere for astronomical observation.
Sri Lanka
In ancient Sri Lanka, hydraulics were widely used in the ancient kingdoms of Anuradhapura and Polonnaruwa. The discovery of the principle of the valve tower, or valve pit (Bisokotuwa in Sinhalese), for regulating the escape of water is credited to ingenuity more than 2,000 years ago. By the first century AD, several large-scale irrigation works had been completed. Macro- and micro-hydraulics to provide for domestic horticultural and agricultural needs, surface drainage and erosion control, ornamental and recreational water courses and retaining structures, and also cooling systems were in place in Sigiriya, Sri Lanka. The coral on the massive rock at the site includes cisterns for collecting water. Large ancient reservoirs of Sri Lanka include Kalawewa (King Dhatusena), Parakrama Samudra (King Parakrama Bahu), Tisa Wewa (King Dutugamunu) and Minneriya (King Mahasen).
Greco-Roman world
In Ancient Greece, the Greeks constructed sophisticated water and hydraulic power systems. An example is a construction by Eupalinos, under a public contract, of a watering channel for Samos, the Tunnel of Eupalinos. An early example of the usage of hydraulic wheel, probably the earliest in Europe, is the Perachora wheel (3rd century BC).
In Greco-Roman Egypt, the construction of the first hydraulic machine automata by Ctesibius (flourished c. 270 BC) and Hero of Alexandria (c. 10 – 80 AD) is notable. Hero describes several working machines using hydraulic power, such as the force pump, which is known from many Roman sites as having been used for raising water and in fire engines.
In the Roman Empire, different hydraulic applications were developed, including public water supplies, innumerable aqueducts, power using watermills and hydraulic mining. They were among the first to make use of the siphon to carry water across valleys, and used hushing on a large scale to prospect for and then extract metal ores. They used lead widely in plumbing systems for domestic and public supply, such as feeding thermae.
Hydraulic mining was used in the gold-fields of northern Spain, which was conquered by Augustus in 25 BC. The alluvial gold-mine of Las Medulas was one of the largest of their mines. At least seven long aqueducts worked it, and the water streams were used to erode the soft deposits, and then wash the tailings for the valuable gold content.
Arabic-Islamic world
In the Muslim world during the Islamic Golden Age and Arab Agricultural Revolution (8th–13th centuries), engineers made wide use of hydropower as well as early uses of tidal power, and large hydraulic factory complexes. A variety of water-powered industrial mills were used in the Islamic world, including fulling mills, gristmills, paper mills, hullers, sawmills, ship mills, stamp mills, steel mills, sugar mills, and tide mills. By the 11th century, every province throughout the Islamic world had these industrial mills in operation, from Al-Andalus and North Africa to the Middle East and Central Asia. Muslim engineers also used water turbines, employed gears in watermills and water-raising machines, and pioneered the use of dams as a source of water power, used to provide additional power to watermills and water-raising machines.
Al-Jazari (1136–1206) described designs for 50 devices, many of them water-powered, in his book, The Book of Knowledge of Ingenious Mechanical Devices, including water clocks, a device to serve wine, and five devices to lift water from rivers or pools. These include an endless belt with jugs attached and a reciprocating device with hinged valves.
The earliest programmable machines were water-powered devices developed in the Muslim world. A music sequencer, a programmable musical instrument, was the earliest type of programmable machine. The first music sequencer was an automated water-powered flute player invented by the Banu Musa brothers, described in their Book of Ingenious Devices, in the 9th century. In 1206, Al-Jazari invented water-powered programmable automata/robots. He described four automaton musicians, including drummers operated by a programmable drum machine, where they could be made to play different rhythms and different drum patterns.
Modern era (c. 1600–1870)
Benedetto Castelli and Italian Hydraulics
In 1619 Benedetto Castelli, a student of Galileo Galilei, published the book Della Misura dell'Acque Correnti or "On the Measurement of Running Waters," one of the foundations of modern hydrodynamics. He served as a chief consultant to the Pope on hydraulic projects, i.e., management of rivers in the Papal States, beginning in 1626.
The science and engineering of water in Italy from 1500 to 1800, as recorded in books and manuscripts, is presented in an illustrated catalog published in 2022.
Blaise Pascal
Blaise Pascal (1623–1662) studied fluid hydrodynamics and hydrostatics, centered on the principles of hydraulic fluids. His discoveries on the theory behind hydraulics led to his invention of the hydraulic press, which multiplies a smaller force acting on a smaller area into a larger force acting over a larger area, both transmitted through the same pressure (or the same change of pressure) at the two locations. Pascal's law or principle states that for an incompressible fluid at rest, the difference in pressure is proportional to the difference in height, and this difference remains the same whether or not the overall pressure of the fluid is changed by applying an external force. This implies that by increasing the pressure at any point in a confined fluid, there is an equal increase at every other point in the container, i.e., any change in pressure applied at any point of the liquid is transmitted undiminished throughout the fluid.
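As an illustration of the force multiplication implied by Pascal's principle, the following sketch computes the output force of an idealized hydraulic press; the piston sizes and input force are arbitrary example values, not taken from the text, and all losses are ignored.

```python
# Idealized hydraulic press: the same pressure acts on both pistons, so the
# output force scales with the ratio of the piston areas (friction and other
# losses are ignored in this sketch).
import math

def press_output_force(input_force_n: float, small_piston_diameter_m: float,
                       large_piston_diameter_m: float) -> float:
    area_small = math.pi * (small_piston_diameter_m / 2) ** 2
    area_large = math.pi * (large_piston_diameter_m / 2) ** 2
    pressure = input_force_n / area_small   # same pressure at both pistons (Pa)
    return pressure * area_large            # force multiplied by the area ratio

# Example with arbitrary values: 100 N on a 2 cm piston driving a 20 cm piston.
print(press_output_force(100.0, 0.02, 0.20))  # ~10,000 N (area ratio of 100)
```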
Jean Léonard Marie Poiseuille
A French physician, Poiseuille (1797–1869) researched the flow of blood through the body and discovered an important law relating the rate of flow to the diameter of the tube in which the flow occurs, now known as Poiseuille's law.
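The law referred to here is the Hagen–Poiseuille equation for laminar flow in a cylindrical tube; a standard statement of it is added below for reference (the symbols are the conventional ones, not taken from the text).

```latex
\[ Q \;=\; \frac{\pi\, r^{4}\, \Delta P}{8\, \mu\, L} \]
% Q: volumetric flow rate, r: tube radius, \Delta P: pressure drop along the tube,
% \mu: dynamic viscosity of the fluid, L: tube length.
% The flow rate scales with the fourth power of the tube radius.
```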
In the UK
Several cities developed citywide hydraulic power networks in the 19th century, to operate machinery such as lifts, cranes, capstans and the like. Joseph Bramah (1748–1814) was an early innovator and William Armstrong (1810–1900) perfected the apparatus for power delivery on an industrial scale. In London, the London Hydraulic Power Company was a major supplier, its pipes serving large parts of the West End of London, the City and the Docks, but there were also schemes restricted to single enterprises such as docks and railway goods yards.
Hydraulic models
After students understand the basic principles of hydraulics, some teachers use a hydraulic analogy to help students learn other things.
For example:
The MONIAC Computer uses water flowing through hydraulic components to help students learn about economics.
The thermal-hydraulic analogy uses hydraulic principles to help students learn about thermal circuits.
The electronic–hydraulic analogy uses hydraulic principles to help students learn about electronics.
The conservation of mass requirement combined with fluid compressibility yields a fundamental relationship between pressure, fluid flow, and volumetric expansion, as shown below:
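The equation referred to by "as shown below" does not survive in this text; a plausible reconstruction is the standard fluid-power pressure-rise relation, written here with an assumed bulk modulus β and contained fluid volume V (symbols introduced for illustration, not taken from the source).

```latex
\[ \frac{dP}{dt} \;=\; \frac{\beta}{V}\left( \sum Q_{\text{in}} - \sum Q_{\text{out}} - \frac{dV}{dt} \right) \]
% P: pressure, t: time, \beta: fluid bulk modulus, V: contained fluid volume,
% Q_in, Q_out: volumetric flow rates into and out of the volume,
% dV/dt: rate of volumetric expansion of the containing boundary.
```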
Assuming an incompressible fluid, or a very large ratio of fluid stiffness (bulk modulus) to contained fluid volume, a finite rate of pressure rise requires that any net flow into the contained fluid volume create a volumetric change.
| Technology | Hydraulics and pneumatics | null |
65452 | https://en.wikipedia.org/wiki/Galangal | Galangal | Galangal () is a common name for several tropical rhizomatous spices.
Differentiation
The word galangal, or its variant galanga or archaically galingale, can refer in common usage to the aromatic rhizome of any of four plant species in the Zingiberaceae (ginger) family, namely:
Alpinia galanga, also called greater galangal, lengkuas, Siamese ginger or laos
Alpinia officinarum, or lesser galangal
Boesenbergia rotunda, also called Chinese ginger or fingerroot
Kaempferia galanga, also called kencur, black galangal or sand ginger
The term galingale is sometimes also used for the rhizome of the unrelated sweet cyperus (Cyperus longus), traditionally used as a folk medicine in Europe.
Uses
Culinary
Various galangal rhizomes are used in traditional Southeast Asian cuisine, such as Khmer kroeung (Cambodian paste), Thai and Lao tom yum and tom kha gai soups, Vietnamese Huế cuisine (tré) and throughout Indonesian cuisine, as in soto and opor. Polish Żołądkowa Gorzka vodka is flavoured with galangal. While all species of galangal are closely related to common ginger, each is unique in its own right. Due to their unique taste and 'hotness' profiles, the individual varieties are usually distinguished from ginger, and from each other, in traditional Asian dishes. The taste of galangal has been variously described as "flowery", "like ginger with cardamom" and "like peppery cinnamon". Lesser galangal was popular in European medieval cooking.
Galangals are commonly available in Asian markets in a variety of forms: as whole fresh rhizomes; dried and sliced; and powdered.
Medical
In ethnobotany, galangal has been used for its purported merits in promoting digestion and alleviating respiratory diseases and stomach problems. Specific medical virtues have been attributed to each galangal variety.
| Biology and health sciences | Herbs and spices | Plants |
65465 | https://en.wikipedia.org/wiki/Petroleum%20engineering | Petroleum engineering | Petroleum engineering is a field of engineering concerned with the activities related to the production of hydrocarbons, which can be either crude oil or natural gas. Exploration and production are deemed to fall within the upstream sector of the oil and gas industry. Exploration, by earth scientists, and petroleum engineering are the oil and gas industry's two main subsurface disciplines, which focus on maximizing economic recovery of hydrocarbons from subsurface reservoirs. Petroleum geology and geophysics focus on provision of a static description of the hydrocarbon reservoir rock, while petroleum engineering focuses on estimation of the recoverable volume of this resource using a detailed understanding of the physical behavior of oil, water and gas within porous rock at very high pressure.
The combined efforts of geologists and petroleum engineers throughout the life of a hydrocarbon accumulation determine the way in which a reservoir is developed and depleted, and usually they have the highest impact on field economics. Petroleum engineering requires a good knowledge of many other related disciplines, such as geophysics, petroleum geology, formation evaluation (well logging), drilling, economics, reservoir simulation, reservoir engineering, well engineering, artificial lift systems, completions and petroleum production engineering.
Recruitment to the industry has historically been from the disciplines of physics, mechanical engineering, chemical engineering and mining engineering. Subsequent development training has usually been done within oil companies.
Overview
The profession got its start in 1914 within the American Institute of Mining, Metallurgical and Petroleum Engineers (AIME). The first Petroleum Engineering degree was conferred in 1915 by the University of Pittsburgh. Since then, the profession has evolved to solve increasingly difficult situations. Improvements in computer modeling, materials and the application of statistics, probability analysis, and new technologies like horizontal drilling and enhanced oil recovery, have drastically improved the toolbox of the petroleum engineer in recent decades. Automation, sensors, and robots are being used to propel the industry to more efficiency and safety.
Petroleum engineers must often contend with deep-water, arctic, and desert conditions. High temperature and high pressure (HTHP) environments have become increasingly commonplace in operations and require the petroleum engineer to be savvy in topics as wide-ranging as thermo-hydraulics, geomechanics, and intelligent systems.
The Society of Petroleum Engineers (SPE) is the largest professional society for petroleum engineers and publishes much technical information and other resources to support the oil and gas industry. It provides free online education (webinars), mentoring, and access to SPE Connect, an exclusive platform for members to discuss technical issues, best practices, and other topics. SPE members also are able to access the SPE Competency Management Tool to find knowledge and skill strengths and opportunities for growth. SPE publishes peer-reviewed journals, books, and magazines. SPE members receive a complimentary subscription to the Journal of Petroleum Technology and discounts on SPE's other publications. SPE members also receive discounts on registration fees for SPE organized events and training courses. SPE provides scholarships and fellowships to undergraduate and graduate students.
According to the United States Department of Labor's Bureau of Labor Statistics, petroleum engineers are required to have a bachelor's degree in engineering, generally a degree focused on petroleum engineering is preferred, but degrees in mechanical, chemical, and civil engineering are satisfactory as well. Petroleum engineering education is available at many universities in the United States and throughout the world - primarily in oil producing regions. U.S. News & World Report maintains a list of the Best Undergraduate Petroleum Engineering Programs. SPE and some private companies offer training courses. Some oil companies have considerable in-house petroleum engineering training classes.
Petroleum engineering salaries
Petroleum engineering has historically been one of the highest-paid engineering disciplines, although there is a tendency for mass layoffs when oil prices decline and waves of hiring as prices rise. In 2020, the United States Department of Labor's Bureau of Labor Statistics reported the median pay for petroleum engineers was US$137,330, or roughly $66.02 per hour. The same summary projects there will be 3% job growth in this field from 2019 to 2029.
SPE annually conducts a salary survey. In 2017, SPE reported that the average SPE professional member reported earning US$194,649 (including salary and bonus). The average base pay reported in 2016 was US$143,006. Base pay and other compensation were, on average, highest in the United States, where the base pay was US$174,283. Drilling and production engineers tended to earn the highest base pay: US$160,026 for drilling engineers and US$158,964 for production engineers. Average base pay ranged from US$96,382 to US$174,283. Significant gender pay gaps persist, within about 5% of the US average pay gap, which was an 18% difference in 2017.
Also in 2016, U.S. News & World Report named petroleum engineering the top college major in terms of highest median annual wages of college-educated workers (age 25–59). The 2010 National Association of Colleges and Employers survey showed petroleum engineers as the highest paid 2010 graduates, at an average annual salary of $125,220. For individuals with experience, salaries can range from $170,000 to $260,000. They make an average of $112,000 a year and about $53.75 per hour. In a 2007 article, Forbes.com reported that petroleum engineering was the 24th best paying job in the United States.
Sub-disciplines
Petroleum engineers divide themselves into several types:
Reservoir engineers work to optimize production of oil and gas via proper placement, production rates, and enhanced oil recovery techniques.
Drilling engineers manage the technical aspects of drilling exploratory, production and injection wells.
Drilling fluid engineers (also called mud engineers, or informally the "mud man") work on an oil or gas well drilling rig and are responsible for ensuring that the properties of the drilling fluid, also known as drilling mud, are within designed specifications.
Completion engineers (also known as subsurface engineers) work to design and oversee the implementation of techniques aimed at ensuring wells are drilled stably and with the maximum opportunity for oil and gas production.
Production engineers manage the interface between the reservoir and the well, including perforations, sand control, downhole flow control, and downhole monitoring equipment; evaluate artificial lift methods; and select surface equipment that separates the produced fluids (oil, natural gas, and water).
Petrophysicists gather information about subsurface properties to build wellbore stability models and study rock properties.
Education
Petroleum Engineering, like most forms of engineering, requires a strong foundation in physics, chemistry, and mathematics. Other fields pertinent to petroleum engineering include geology, formation evaluation, fluid flow in porous media, well drilling technology, economics, geostatistics, etc.
Petroleum Geostatistics
Geostatistics as applied to petroleum engineering uses statistical analysis to characterize reservoirs and create flow simulations that quantify uncertainties of the location of oil and gas.
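A minimal illustration of the kind of uncertainty quantification described here is a Monte Carlo sweep over a simple volumetric estimate; the sketch below uses entirely hypothetical input distributions (none of the numbers come from the text) and is not a substitute for a real geostatistical workflow.

```python
# Monte Carlo sketch of in-place volume uncertainty for a single reservoir.
# Volumetric estimate: STOIIP = area * thickness * porosity * (1 - Sw) / Bo.
# All input distributions are illustrative placeholders, not real field data.
import random

random.seed(1)

def stoiip_sample() -> float:
    area = random.uniform(4.0e6, 6.0e6)      # reservoir area, m^2
    thickness = random.uniform(10.0, 20.0)   # net pay thickness, m
    porosity = random.gauss(0.22, 0.03)      # fraction
    sw = random.gauss(0.30, 0.05)            # water saturation, fraction
    bo = random.uniform(1.1, 1.3)            # oil formation volume factor, rm3/sm3
    return area * thickness * porosity * (1.0 - sw) / bo

samples = sorted(stoiip_sample() for _ in range(10_000))
# Petroleum convention: P90 is exceeded with 90% probability (10th percentile).
p90, p50, p10 = (samples[int(len(samples) * q)] for q in (0.10, 0.50, 0.90))
print(f"P90 = {p90:.3g} m3, P50 = {p50:.3g} m3, P10 = {p10:.3g} m3")
```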
Petroleum Geology
Petroleum geology is an interdisciplinary field composed of geophysics, geochemistry, and paleontology. The main focus of petroleum geology is the exploration and appraisal of reservoirs containing hydrocarbons via technical forms of analysis.
Well Drilling Technology
Well drilling technology is primarily the focus for drilling engineers. The two forms of well drilling are percussion and rotary drilling, rotary being the most common of the two. An important aspect of drilling is the drill bit, which creates a borehole of approximately three and a half to thirty inches in diameter. The three classes of drill bits, roller cone, fixed cutter, and hybrid, each use teeth to break up the rock. To optimize drilling efficiency and cost, drilling engineers make use of drilling simulators that allow them to identify drilling conditions. Drilling technologies including horizontal drilling and directional drilling have been developed to obtain hydrocarbons profitably from impermeable and coal-bed methane accumulations.
Professional associations
Society of Petroleum Engineers
American Institute of Mining, Metallurgical and Petroleum Engineers
| Technology | Disciplines | null |
65546 | https://en.wikipedia.org/wiki/Boric%20acid | Boric acid | Boric acid, more specifically orthoboric acid, is a compound of boron, oxygen, and hydrogen with formula B(OH)3. It may also be called hydrogen orthoborate, trihydroxidoboron or boracic acid. It is usually encountered as colorless crystals or a white powder that dissolves in water, and occurs in nature as the mineral sassolite. It is a weak acid that yields various borate anions and salts, and can react with alcohols to form borate esters.
Boric acid is often used as an antiseptic, insecticide, flame retardant, neutron absorber, or precursor to other boron compounds.
The term "boric acid" is also used generically for any oxyacid of boron, such as metaboric acid and tetraboric acid .
History
Orthoboric acid was first prepared by Wilhelm Homberg (1652–1715) from borax, by the action of mineral acids, and was given the name sal sedativum Hombergi ("sedative salt of Homberg"). However, boric acid and borates have been used since the time of the ancient Greeks for cleaning, preserving food, and other activities.
Molecular and crystal structure
The three oxygen atoms form a trigonal planar geometry around the boron. The B-O bond length is 136 pm and the O-H is 97 pm. The molecular point group is C3h.
Two crystalline forms of orthoboric acid are known: triclinic with space group P, and trigonal with space group P32. The former is the most common; the second, which is a bit more stable thermodynamically, can be obtained with a special preparation method.
The triclinic form of boric acid consists of layers of molecules held together by hydrogen bonds with an O...O separation of 272 pm. The distance between two adjacent layers is 318 pm. While the layers of the triclinic phase are nearly trigonal with , , and (compared to for the trigonal form), the stacking of the layers is somewhat offset in the triclinic phase, with and . The triclinic phase has and the trigonal one has .
Preparation
Boric acid may be prepared by reacting borax (sodium tetraborate decahydrate) with a mineral acid, such as hydrochloric acid:
Na2B4O7·10H2O + 2 HCl → 4 B(OH)3 + 2 NaCl + 5 H2O
It is also formed as a by-product of hydrolysis of boron trihalides and diborane:
B2H6 + 6 H2O → 2 B(OH)3 + 6 H2
BX3 + 3 H2O → B(OH)3 + 3 HX (X = Cl, Br, I)
Reactions
Pyrolysis
When heated, orthoboric acid undergoes a three-step dehydration. The reported transition temperatures vary substantially from source to source.
When heated above 140 °C, orthoboric acid yields metaboric acid (HBO2) with loss of one water molecule:
B(OH)3 → HBO2 + H2O
Heating metaboric acid above about 180 °C eliminates another water molecule, forming tetraboric acid, also called pyroboric acid (H2B4O7):
4 HBO2 → H2B4O7 + H2O
Further heating (to about 530 °C) leads to boron trioxide:
H2B4O7 → 2 B2O3 + H2O
Aqueous solution
When orthoboric acid is dissolved in water, it partially dissociates to give metaboric acid:
B(OH)3 ⇌ HBO2 + H2O
The solution is mildly acidic due to ionization of the acids:
H3BO3 + H2O ⇌ H3O+ + H2BO3−
HBO2 + H2O ⇌ H3O+ + BO2−
However, Raman spectroscopy of strongly alkaline solutions has shown the presence of B(OH)4− ions, leading some to conclude that the acidity is exclusively due to the abstraction of OH− from water:
B(OH)3 + OH− ⇌ B(OH)4−
Equivalently,
B(OH)3 + H2O ⇌ B(OH)4− + H+ (Ka = 7.3×10−10; pKa = 9.14)
Or, more properly,
B(OH)3 + 2 H2O ⇌ B(OH)4− + H3O+
This reaction occurs in two steps, with the neutral complex aquatrihydroxyboron as an intermediate:
B(OH)3 + H2O → B(OH)3(OH2)
B(OH)3(OH2) + H2O → [B(OH)4]− + H3O+
This reaction may be characterized as Lewis acidity of boron toward OH−, rather than as Brønsted acidity. However, its behaviour in some chemical reactions suggests that it can also act as a tribasic acid in the Brønsted–Lowry sense.
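Using the Ka quoted above (7.3×10−10), the pH of a dilute boric acid solution can be estimated with the usual weak-acid approximation. The Python sketch below treats boric acid as a simple monoprotic acid and neglects polyborate formation, so the chosen 0.1 mol/L concentration and the result are purely illustrative.

import math

KA = 7.3e-10        # acid dissociation constant quoted above
C = 0.10            # mol/L, an arbitrary example concentration

# weak-acid approximation: [H+] ~ sqrt(Ka * C) when the degree of dissociation is small
h = math.sqrt(KA * C)
print(f"pH = {-math.log10(h):.2f}")   # about 5.1 for a 0.1 mol/L solution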
Boric acid mixed with borax in the weight ratio of 4:5 is highly soluble in water, though neither is particularly soluble on its own.
Sulfuric acid solution
Boric acid also dissolves in anhydrous sulfuric acid according to the equation:
B(OH)3 + 6 H2SO4 → [B(HSO4)4]− + 2 HSO4− + 3 H3O+
The product is an extremely strong acid, even stronger than the original sulfuric acid.
Esterification
Boric acid reacts with alcohols to form borate esters B(OR)3, where R is alkyl or aryl. The reaction is typically driven by a dehydrating agent, such as concentrated sulfuric acid:
B(OH)3 + 3 ROH → B(OR)3 + 3 H2O
With vicinal diols
The acidity of boric acid solutions is greatly increased in the presence of cis-vicinal diols (organic compounds containing similarly oriented hydroxyl groups on adjacent carbon atoms) such as glycerol and mannitol.
The tetrahydroxyborate anion formed in the dissolution spontaneously reacts with these diols to form relatively stable anionic esters containing one or two five-membered rings. For example, the reaction with mannitol, whose two middle hydroxyls are in cis orientation, can be written as
+ +
+ + 2
+ + 2
Giving the overall reaction
+ 2 + 3 +
The stability of these mannitoborate ester anions shifts the equilibrium to the right and thus increases the acidity of the solution by 5 orders of magnitude compared to that of pure boric acid, lowering the pKa from 9 to below 4 for a sufficient concentration of mannitol. The resulting solution has been called mannitoboric acid.
The addition of mannitol to an initially neutral solution containing boric acid or simple borates lowers its pH enough for it to be titrated with a strong base such as NaOH, including with an automated potentiometric titrator. This property is used in analytical chemistry to determine the borate content of aqueous solutions, for example to monitor the depletion of boric acid by neutrons in the water of the primary circuit of a light-water reactor when the compound is added as a neutron poison during refueling operations.
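The practical effect of this pKa shift can be illustrated with the Henderson–Hasselbalch relation: at a pKa near 9 almost none of the boron is ionized at neutral pH, while at an effective pKa of about 4 (the two endpoint values quoted above; the exact figures depend on the mannitol concentration) it is almost fully ionized and titrates like an ordinary monoprotic acid. The Python sketch below simply evaluates that relation for both cases.

def ionized_fraction(ph, pka):
    """Fraction of the acid present as its conjugate base (Henderson-Hasselbalch)."""
    return 1.0 / (1.0 + 10 ** (pka - ph))

for pka in (9.0, 4.0):                    # without and with excess mannitol
    frac = ionized_fraction(ph=7.0, pka=pka)
    print(f"pKa {pka}: {100 * frac:.1f}% ionized at pH 7")
# roughly 1% without mannitol versus about 99.9% with it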
Toxicology
Based on its mammalian median lethal dose (LD50) of 2,660 mg/kg body mass, boric acid is only poisonous if taken internally or inhaled in large quantities. The Fourteenth Edition of the Merck Index indicates that the LD50 of boric acid is 5.14 g/kg for oral dosages given to rats, and that 5 to 20 g/kg has produced death in adult humans. For a 70 kg adult, at the lower 5 g/kg limit, 350 g could produce death. For comparison's sake, the LD50 of salt is reported to be 3.75 g/kg in rats according to the Merck Index. According to the Agency for Toxic Substances and Disease Registry, "The minimal lethal dose of ingested boron (as boric acid) was reported to be 2–3 g in infants, 5–6 g in children, and 15–20 g in adults. [...] However, a review of 784 human poisonings with boric acid (10–88 g) reported no fatalities, with 88% of cases being asymptomatic." Human studies in three borate-exposure-rich comparison groups (U.S. Borax mine and production facility workers, Chinese boron workers, Turkish residents living near boron-rich regions) produced no indicators of developmental toxicity in blood and semen tests. The highest estimated exposure was 5 mg B/kg/day, likely due to eating in contaminated workplaces, more than 100 times the average daily exposure.
Long-term exposure to boric acid may be of more concern, causing kidney damage and eventually kidney failure (see links below). Although it does not appear to be carcinogenic, studies in dogs have reported testicular atrophy after exposure to 32 mg/(kg⋅day) for 90 days. This level, were it applicable to humans at like dose, would equate to a cumulative dose of 202 g over 90 days for a 70 kg adult, not far lower than the above LD50.
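The dose figures quoted in the two paragraphs above are straightforward arithmetic and can be reproduced in a couple of lines; the snippet below is only a check of those numbers, using the 70 kg adult assumed in the text.

body_mass_kg = 70
print(5 * body_mass_kg)              # 5 g/kg lower lethal limit       -> 350 g
print(32e-3 * 90 * body_mass_kg)     # 32 mg/kg/day over 90 days       -> 201.6, i.e. about 202 g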
According to the CLH report for boric acid published by the Bureau for Chemical Substances Lodz, Poland, boric acid in high doses shows significant developmental toxicity and teratogenicity in rabbit, rat, and mouse fetuses, as well as cardiovascular defects, skeletal variations, and mild kidney lesions. As a consequence, in the 30th ATP to EU directive 67/548/EEC of August 2008, the European Commission decided to amend its classification as reprotoxic category 2 and to apply the risk phrases R60 (may impair fertility) and R61 (may cause harm to the unborn child).
At a 2010 European Diagnostics Manufacturing Association (EDMA) Meeting, several new additions to the substance of very high concern (SVHC) candidate list in relation to the Registration, Evaluation, Authorisation and Restriction of Chemicals Regulations 2007 (REACH) were discussed. Following the registration and review completed as part of REACH, the classification of boric acid (CAS 10043-35-3 / 11113-50-1), listed from 1 December 2010, is H360FD (May damage fertility. May damage the unborn child).
Uses
Industrial
The primary industrial use of boric acid is in the manufacture of monofilament fiberglass, usually referred to as textile fiberglass. Textile fiberglass is used to reinforce plastics in applications ranging from boats to industrial piping to computer circuit boards.
In the jewelry industry, boric acid is often used in combination with denatured alcohol to reduce surface oxidation and formation of firescale on metals during annealing and soldering operations.
Boric acid is used in the production of the glass in LCD flat panel displays.
In electroplating, boric acid is used as part of some proprietary formulas. One known formula uses about a 1 to 10 ratio of to , a very small portion of sodium lauryl sulfate and a small portion of .
A solution of orthoboric acid and borax in a 4:5 ratio is used as a fire-retarding agent for wood, applied by impregnation.
It is also used in the manufacturing of ramming mass, a fine silica-containing powder used for producing induction furnace linings and ceramics.
Boric acid is added to borax for use as welding flux by blacksmiths.
Boric acid, in combination with polyvinyl alcohol (PVA) or silicone oil, is used to manufacture Silly Putty.
Boric acid is also present in the list of chemical additives used for hydraulic fracturing (fracking) in the Marcellus Shale in Pennsylvania. It is often used in conjunction with guar gum as a cross-linking and gelling agent to control the viscosity and rheology of the fracking fluid injected at high pressure into the well. Controlling the fluid viscosity is important for keeping the grains of propping agent in suspension over long transport distances; the proppant keeps the cracks in the shale sufficiently open to facilitate gas extraction after the hydraulic pressure is relieved. The rheological properties of borate cross-linked guar gum hydrogel depend mainly on the pH value.
Boric acid is used in some expulsion-type electrical fuses as a de-ionization/extinguishing agent. During an electrical fault in an expulsion-type fuse, a plasma arc is generated by the disintegration and rapid spring-loaded separation of the fusible element, which is typically a specialized metal rod that passes through a compressed mass of boric acid within the fuse assembly. The high-temperature plasma causes the boric acid to rapidly decompose into water vapor and boric anhydride, and in turn, the vaporization products de-ionize the plasma, helping to interrupt the electrical fault.
Medical
Boric acid can be used as an antiseptic for minor burns or cuts and is sometimes used in salves and dressings, such as boracic lint. Boric acid is applied in a very dilute solution as an eye wash. Boric acid vaginal suppositories can be used for recurrent candidiasis due to non-albicans candida as a second line treatment when conventional treatment has failed. It is less effective than conventional treatment overall. Boric acid largely spares lactobacilli within the vagina. As TOL-463, it is under development as an intravaginal medication for the treatment for vulvovaginal candidiasis.
As an antibacterial compound, boric acid can also be used as an acne treatment. It is also used to prevent athlete's foot, by inserting the powder into socks or stockings. Various preparations can be used to treat some kinds of otitis (ear infection) in both humans and animals. The preservative in urine sample bottles in the UK is boric acid.
Boric acid solutions used as an eye wash or on abraded skin are known to be toxic, particularly to infants, especially after repeated use; this is because of its slow elimination rate.
Boric acid is one of the most commonly used substances that can counteract the harmful effects of reactive hydrofluoric acid (HF) after accidental contact with the skin. It works by forcing the free fluoride anions into the inert tetrafluoroborate anion. This process defeats the extreme toxicity of hydrofluoric acid, particularly its ability to sequester ionic calcium from blood serum, which can lead to cardiac arrest and bone decomposition; such an event can occur from just minor skin contact with HF.
Insecticidal
Boric acid was first registered in the US as an insecticide in 1948 for control of cockroaches, termites, fire ants, fleas, silverfish, and many other insects. The product is generally considered to be safe to use in household kitchens to control cockroaches and ants. It acts as a stomach poison affecting the insects' metabolism, and the dry powder is abrasive to the insects' exoskeletons. It is in non-specific IRAC group 8D. Boric acid also has the reputation as "the gift that keeps on killing" in that cockroaches that cross over lightly dusted areas do not die immediately, but that the effect is like shards of glass cutting them apart. This often allows a roach to go back to the nest where it soon dies. Cockroaches, being cannibalistic, eat others killed by contact or consumption of boric acid, consuming the powder trapped in the dead roach and killing them, too.
Boric acid has also been widely used in the treatment of wood for protection against termites. The full complexity of its mechanism is not fully understood, but aside from causing dose-dependent mortality, boric acid causes dysbiosis in the Eastern Subterranean termite, leading to the opportunistic rise of insect pathogens that could be contributing to mortality.
Preservation
In combination with its use as an insecticide, boric acid also prevents and destroys existing wet and dry rot in timbers. It can be used in combination with an ethylene glycol carrier to treat external wood against fungal and insect attack. It is possible to buy borate-impregnated rods for insertion into wood via drill holes where dampness and moisture are known to collect and sit. It is available in a gel form and injectable paste form for treating rot-affected wood without the need to replace the timber. Concentrates of borate-based treatments can be used to prevent slime, mycelium, and algae growth, even in marine environments.
Boric acid is added to salt in the curing of cattle hides, calfskins, and sheepskins. This helps to control bacterial development, and helps to control insects.
pH buffer
Boric acid in equilibrium with its conjugate base the borate ion is widely used (in the concentration range 50–100 ppm boron equivalents) as a primary or adjunct pH buffer system in swimming pools. Boric acid is a weak acid, with pKa (the pH at which buffering is strongest because the free acid and borate ion are in equal concentrations) of 9.24 in pure water at 25 °C. But the apparent pKa is substantially lower in swimming pool or ocean waters because of interactions with various other molecules in solution. It will be around 9.0 in a salt-water pool. No matter which form of soluble boron is added, within the acceptable range of pH and boron concentration for swimming pools, boric acid is the predominant form in aqueous solution, as shown in the accompanying figure. The boric acid – borate system can be useful as a primary buffer system (substituting for the bicarbonate system with pK1 = 6.0 and pK2 = 9.4 under typical salt-water pool conditions) in pools with salt-water chlorine generators that tend to show upward drift in pH from a working range of pH 7.5–8.2. Buffer capacity is greater against rising pH (towards the pKa around 9.0), as illustrated in the accompanying graph. The use of boric acid in this concentration range does not allow any reduction in the free HOCl concentration needed for pool sanitation, but it may add marginally to the photo-protective effects of cyanuric acid and confer other benefits through anti-corrosive activity or perceived water softness, depending on overall pool solute composition.
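The buffering behaviour described above follows directly from the acid–base equilibrium. A minimal Python sketch, assuming an apparent pKa of 9.0 (the salt-water pool value quoted above) and ignoring every other component of pool chemistry, shows how the borate fraction, and with it the capacity to resist a further rise in pH, grows as the pH climbs toward the pKa:

def borate_fraction(ph, pka=9.0):
    """Fraction of total dissolved boron present as borate ion; the rest is boric acid."""
    return 1.0 / (1.0 + 10 ** (pka - ph))

for ph in (7.5, 8.0, 8.5, 9.0):
    f = borate_fraction(ph)
    print(f"pH {ph}: {100 * f:.0f}% borate, {100 * (1 - f):.0f}% boric acid")
# boric acid dominates across the normal pool range, as stated above; the growing
# borate share toward pH 9 is what buffers against further upward drift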
Lubrication
Colloidal suspensions of nanoparticles of boric acid dissolved in petroleum or vegetable oil can form a remarkable lubricant on ceramic or metal surfaces with a coefficient of sliding friction that decreases with increasing pressure to a value ranging from 0.10 to 0.02. Self-lubricating films result from a spontaneous chemical reaction between water molecules and coatings in a humid environment. In bulk-scale, an inverse relationship exists between friction coefficient and Hertzian contact pressure induced by applied load.
Boric acid is used to lubricate carrom and novuss boards, allowing for faster play.
Nuclear power
Boric acid is used in some nuclear power plants as a neutron poison. The boron in boric acid reduces the probability of thermal fission by absorbing some thermal neutrons. Fission chain reactions are generally driven by the probability that free neutrons will result in fission, which is determined by the material and geometric properties of the reactor. Natural boron consists of approximately 20% boron-10 and 80% boron-11 isotopes. Boron-10 has a high cross-section for absorption of low energy (thermal) neutrons. By increasing the boric acid concentration in the reactor coolant, the probability that a neutron will cause fission is reduced. Changes in boric acid concentration can effectively regulate the rate of fission taking place in the reactor. During normal at-power operation, boric acid is used only in pressurized water reactors (PWRs), whereas boiling water reactors (BWRs) employ control rod patterns and coolant flow for power control, although BWRs can use an aqueous solution of boric acid and borax or sodium pentaborate for an emergency shutdown system if the control rods fail to insert. Boric acid may be dissolved in spent fuel pools used to store spent fuel elements. The concentration is high enough to keep neutron multiplication at a minimum. Boric acid was dumped over Reactor 4 of the Chernobyl nuclear power plant after its meltdown to prevent another reaction from occurring.
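The neutron-absorbing effect of dissolved boron can be roughed out from the thermal absorption cross-section of boron-10 (about 3840 barns) and its roughly 20% natural abundance; these are standard nuclear-data values, and the Python sketch below is only an order-of-magnitude illustration of how absorption scales with boron concentration in water, not a reactor-physics model. The concentrations used are illustrative, not quoted from the text.

AVOGADRO = 6.022e23
SIGMA_B10 = 3840e-24       # cm^2, thermal-neutron absorption cross-section of boron-10
B10_FRACTION = 0.20        # approximate natural abundance of boron-10
M_BORON = 10.81            # g/mol

def macroscopic_absorption(ppm_boron, water_density=1.0):
    """Macroscopic absorption cross-section (1/cm) contributed by boron dissolved in water."""
    grams_boron_per_cm3 = ppm_boron * 1e-6 * water_density
    b10_atoms_per_cm3 = grams_boron_per_cm3 / M_BORON * AVOGADRO * B10_FRACTION
    return b10_atoms_per_cm3 * SIGMA_B10

for ppm in (500, 1000, 2000):
    print(f"{ppm} ppm boron: Sigma_a = {macroscopic_absorption(ppm):.3f} per cm")
# absorption scales linearly with concentration, which is why adjusting the dissolved
# boron level is an effective way to trim reactivity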
Pyrotechnics
Boric acid is used in pyrotechnics to prevent the amide-forming reaction between aluminium and nitrates. A small amount of boric acid is added to the composition to neutralize alkaline amides that can react with the aluminium.
Boric acid can be used as a colorant to make a fire burn green. For example, when dissolved in methanol it is popularly used by fire jugglers and fire spinners to create a deep green flame much stronger than that produced by copper sulfate.
Agriculture
Boric acid is used to treat or prevent boron deficiencies in plants. It is also used in preservation of grains such as rice and wheat.
| Physical sciences | Inorganic compounds | null |
65555 | https://en.wikipedia.org/wiki/Antiseptic | Antiseptic | An antiseptic ( and ) is an antimicrobial substance or compound that is applied to living tissue to reduce the possibility of sepsis, infection, or putrefaction. Antiseptics are generally distinguished from antibiotics by the latter's ability to safely destroy bacteria within the body, and from disinfectants, which destroy microorganisms found on non-living objects.
Antibacterials include antiseptics that have the proven ability to act against bacteria. Microbicides which destroy virus particles are called viricides or antivirals. Antifungals, also known as antimycotics, are pharmaceutical fungicides used to treat and prevent mycosis (fungal infection).
Surgery
Antiseptic practices evolved in the 19th century through the work of multiple individuals. Ignaz Semmelweis showed as early as 1847–1848 that hand washing prior to delivery reduced puerperal fever. Despite this, many hospitals continued to practice surgery in unsanitary conditions, with some surgeons taking pride in their bloodstained operating gowns. Only a decade later did the situation start to change, when some French surgeons began to adopt carbolic acid as an antiseptic, reducing surgical infection rates, followed by their Italian colleagues in the 1860s. In 1867 Joseph Lister published his seminal paper Antiseptic Principle of the Practice of Surgery, in which he explained this reduction in terms of Louis Pasteur's germ theory. He was thus able to popularize antiseptic surgical methods in the English-speaking world.
Some of this work was anticipated by:
Ancient Greek physicians Galen and Hippocrates, as well as Sumerian clay tablets dating from 2150 BC that advocate the use of similar techniques.
Florence Nightingale, who contributed substantially to the report of the Royal Commission on the Health of the Army (1856–1857), based on her earlier work
Medieval surgeons Hugh of Lucca, Theoderic of Cervia, and his pupil Henri de Mondeville were opponents of Galen's opinion that pus was important to healing, which had led ancient and medieval surgeons to let pus remain in wounds. They advocated draining and cleaning the wound edges with wine, dressing the wound after suturing if necessary, and leaving the dressing on for ten days, soaking it in warm wine all the while, before changing it. Their theories were bitterly opposed by the Galenist Guy de Chauliac and others trained in the classical tradition.
Oliver Wendell Holmes Sr., who published The Contagiousness of Puerperal Fever in 1843
Some common antiseptics
Antiseptics can be subdivided into about eight classes of materials. These classes can be grouped according to their mechanism of action: small molecules that indiscriminately react with organic compounds and kill microorganisms (peroxides, iodine, phenols), and more complex molecules that disrupt the cell walls of bacteria.
Alcohols, including ethanol and 2-propanol/isopropanol, are sometimes referred to as surgical spirit. They are used to disinfect the skin before injections, among other uses.
Diguanides including chlorhexidine gluconate, a bactericidal antiseptic which (with an alcoholic solvent) is considered a safe and effective antiseptic for reducing the risk of infection after clean surgery, including tourniquet-controlled upper limb surgery. It is also used in mouthwashes to treat inflammation of the gums (gingivitis). Polyhexanide (polyhexamethylene biguanide, PHMB) is an antimicrobial compound suitable for clinical use in critically colonized or infected acute and chronic wounds. The physicochemical action on the bacterial envelope prevents or impedes the development of resistant bacterial strains.
Iodine, especially in the form of povidone-iodine, is widely used because it is well tolerated; does not negatively affect wound healing; leaves a deposit of active iodine, thereby creating the so-called "remnant", or persistent effect; and has wide scope of antimicrobial activity. The traditional iodine antiseptic is an alcohol solution (called tincture of iodine) or as Lugol's iodine solution. Some studies do not recommend disinfecting minor wounds with iodine because of concern that it may induce scar tissue formation and increase healing time. However, concentrations of 1% iodine or less have not been shown to increase healing time and are not otherwise distinguishable from treatment with saline. Iodine will kill all principal pathogens and, given enough time, even spores, which are considered to be the most difficult form of microorganisms to be inactivated by disinfectants and antiseptics.
Octenidine dihydrochloride, currently increasingly used in continental Europe, often as a chlorhexidine substitute.
Peroxides, such as hydrogen peroxide and benzoyl peroxide. Commonly, 3% solutions of hydrogen peroxide have been used in household first aid for scrapes, etc. However, the strong oxidization causes scar formation and increases healing time during fetal development.
Phenols such as phenol itself (as introduced by Lister) and triclosan, hexachlorophene, chlorocresol, and chloroxylenol. The fact that the more substituted and more lipophilic phenols are less toxic, less irritant and more powerful was gradually discovered in the late 19th century. Nowadays comparatively more water-soluble phenols such as chlorocresol are commonly used as preservatives in personal care products, while less soluble ones such as chloroxylenol are used as topical antiseptics. Both can be encountered in household disinfectants.
Quat salts such as benzalkonium chloride/lidocaine (trade name Bactine among others), cetylpyridinium chloride, or cetrimide. These surfactants disrupt cell walls.
Quinolines such as hydroxyquinoline, dequalinium chloride, or chlorquinaldol.
4-Hexylresorcinol, or S.T.37
| Biology and health sciences | Anti-infectives | Health |
65559 | https://en.wikipedia.org/wiki/Borate | Borate | A borate is any of a range of boron oxyanions, anions containing boron and oxygen, such as orthoborate , metaborate , or tetraborate ; or any salt of such anions, such as sodium metaborate, and borax . The name also refers to esters of such anions, such as trimethyl borate but they are alkoxides.
Natural occurrence
Borate ions occur, alone or with other anions, in many borate and borosilicate minerals such as borax, boracite, ulexite (boronatrocalcite) and colemanite. Borates also occur in seawater, where they make an important contribution to the absorption of low frequency sound in seawater.
Borates also occur in plants, including almost all fruits.
Anions
The main borate anions are:
tetrahydroxyborate , found in sodium tetrahydroxyborate .
orthoborate , found in trisodium orthoborate
, found in the calcium yttrium borosilicate oxyapatite
perborate , as in sodium perborate
metaborate or its cyclic trimer , found in sodium metaborate
diborate , found in magnesium diborate (suanite) ,
triborate , found in calcium aluminium triborate (johachidolite) ,
tetraborate , found in anhydrous borax
tetrahydroxytetraborate , found in borax "decahydrate"
tetraborate(6-) found in lithium tetraborate(6-)
pentaborate or , found in sodium pentaborate
octaborate found in disodium octaborate
Preparation
In 1905, Burgess and Holt observed that fusing mixtures of boric oxide and sodium carbonate yielded on cooling two crystalline compounds with definite compositions, consistent with anhydrous borax (which can be written ) and sodium octaborate (which can be written ).
Structures
Borate anions (and functional groups) consist of trigonal planar and/or tetrahedral structural units, joined together via shared oxygen atoms (corners) or atom pairs (edges) into larger clusters so as to construct various ions such as , , , , , etc. These anions may be cyclic or linear in structure, and can further polymerize into infinite chains, layers, and tridimensional frameworks. The terminal (unshared) oxygen atoms in the borate anions may be capped with hydrogen atoms () or may carry a negative charge ().
The planar units may be stacked in the crystal lattice so as to have π-conjugated molecular orbitals, which often results in useful optical properties such as strong harmonics generation, birefringence, and UV transmission.
Polymeric borate anions may have linear chains of 2, 3 or 4 trigonal structural units, each sharing oxygen atoms with adjacent unit(s). as in , contain chains of trigonal structural units. Other anions contain cycles; for instance, and contain the cyclic ion, consisting of a six-membered ring of alternating boron and oxygen atoms with one extra oxygen atom attached to each boron atom.
The thermal expansion of crystalline borates is dominated by the fact that and polyhedra and rigid groups consisting of these polyhedra practically do not change their configuration and size upon heating, but sometimes rotate like hinges, which results in greatly anisotropic thermal expansion including linear negative expansion.
Reactions
Aqueous solution
In aqueous solution, boric acid can act as a weak Brønsted acid, that is, a proton donor, with pKa ~ 9. However, it more often acts as a Lewis acid, accepting an electron pair from a hydroxide ion produced by the water autoprotolysis:
B(OH)3 + 2 H2O ⇌ [B(OH)4]− + H3O+ (pK = 8.98)
This reaction is very fast, with a characteristic time of less than 10 μs. Polymeric boron oxoanions are formed in aqueous solutions of boric acid at pH 7–10 if the boron concentration is higher than about 0.025 mol/L. The best known of these is the tetraborate ion [B4O5(OH)4]2−, found in the mineral borax:
4 [B(OH)4]− + 2 H+ ⇌ [B4O5(OH)4]2− + 7 H2O
Other anions observed in solution are triborate(1−) and pentaborate(1−), in equilibrium with boric acid and tetrahydroxyborate according to the following overall reactions:
2 + + 3 (fast, pK = —1.92)
4 + + 6 (slow, pK = —2.05)
In the pH range 6.8 to 8.0, any alkali salts of "boric oxide" anions with general formula [BxOy(OH)z]q−, where 3x + q = 2y + z, will eventually equilibrate in solution to a mixture of , , , and .
These ions, similarly to the complexed borates mentioned above, are more acidic than boric acid itself. As a result of this, the pH of a concentrated polyborate solution will increase more than expected when diluted with water.
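The constants quoted above are enough for a rough speciation estimate: if the pH and the free boric acid concentration are fixed, each anion concentration follows directly from its mass-action expression. The Python sketch below does exactly that, using pK = 8.98 for tetrahydroxyborate and the pK values −1.92 and −2.05 given above for the triborate(1−) and pentaborate(1−) equilibria; it neglects the tetraborate ion and all activity corrections, and the chosen concentrations are arbitrary, so the numbers are only indicative.

def speciation(ph, free_boric_acid):
    """Rough polyborate speciation (mol/L) at a fixed pH and free B(OH)3 concentration."""
    h = 10 ** (-ph)
    b = free_boric_acid
    borate = 10 ** (-8.98) * b / h               # [B(OH)4]- from B(OH)3 + 2 H2O <-> [B(OH)4]- + H3O+
    triborate = 10 ** 1.92 * b ** 2 * borate     # 2 B(OH)3 + [B(OH)4]- <-> triborate(1-) + 3 H2O
    pentaborate = 10 ** 2.05 * b ** 4 * borate   # 4 B(OH)3 + [B(OH)4]- <-> pentaborate(1-) + 6 H2O
    return {"tetrahydroxyborate": borate, "triborate(1-)": triborate, "pentaborate(1-)": pentaborate}

for conc in (0.01, 0.1, 0.4):                    # free B(OH)3 in mol/L
    print(conc, speciation(ph=9.0, free_boric_acid=conc))
# the polyborate species only become significant at the higher concentrations,
# consistent with the ~0.025 mol/L threshold mentioned above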
Borate salts
A number of metal borates are known. They can be obtained by treating boric acid or boron oxides with metal oxides.
Mixed anion salts
Some chemicals contain another anion in addition to borate. These include borate chlorides, borate carbonates, borate nitrates, borate sulfates, borate phosphates.
Complex oxyanions containing boron
More complex anions can be formed by condensing borate triangles or tetrahedra with other oxyanions to yield materials such as borosulfates, boroselenates, borotellurates, boroantimonates, borophosphates, or boroselenites.
Borosilicate glass, also known as Pyrex, can be viewed as a silicate in which some [SiO4]4− units are replaced by [BO4]5− centers, together with additional cations to compensate for the difference in valence states of Si(IV) and B(III). Because this substitution leads to imperfections, the material is slow to crystallise and forms a glass with a low coefficient of thermal expansion, which is thus resistant to cracking when heated, unlike soda glass.
Uses
Common borate salts include sodium metaborate (NaBO2) and borax. Borax is soluble in water, so mineral deposits only occur in places with very low rainfall. Extensive deposits were found in Death Valley and shipped with twenty-mule teams from 1883 to 1889. In 1925, deposits were found at Boron, California on the edge of the Mojave Desert. The Atacama Desert in Chile also contains mineable borate concentrations.
Lithium metaborate, lithium tetraborate, or a mixture of both, can be used in borate fusion sample preparation of various samples for analysis by XRF, AAS, ICP-OES and ICP-MS. Borate fusion and energy dispersive X-ray fluorescence spectrometry with polarized excitation have been used in the analysis of contaminated soils.
Disodium octaborate tetrahydrate (commonly abbreviated DOT) is used as a wood preservative or fungicide. Zinc borate is used as a flame retardant.
Some borates with large anions and multiple cations, like and have been considered for applications in nonlinear optics.
Borate esters
Borate esters are organic compounds, which are conveniently prepared by the stoichiometric condensation reaction of boric acid with alcohols (or their chalcogen analogs).
Thin films
Metal borate thin films have been grown by a variety of techniques, including liquid-phase epitaxy (e.g. FeBO3, β-BaB2O4), electron-beam evaporation (e.g. CrBO3, β-BaB2O4), pulsed laser deposition (e.g. β-BaB2O4, Eu(BO2)3), and atomic layer deposition (ALD). Growth by ALD was achieved using precursors composed of the tris(pyrazolyl)borate ligand and either ozone or water as the oxidant to deposit CaB2O4, SrB2O4, BaB2O4, Mn3(BO3)2, and CoB2O4 films.
Physiology
Borate anions are largely in the form of the undissociated acid in aqueous solution at physiological pH. No further metabolism occurs in either animals or plants. In animals, boric acid/borate salts are essentially completely absorbed following oral ingestion. Absorption occurs via inhalation, although quantitative data are unavailable. Limited data indicate that boric acid/salts are not absorbed through intact skin to any significant extent, although absorption occurs through skin that is severely abraded. It distributes throughout the body and is not retained in tissues, except for bone, and is rapidly excreted in the urine.
| Physical sciences | Boric oxyanions | Chemistry |
65560 | https://en.wikipedia.org/wiki/Borax | Borax | Borax (also referred to as sodium borate, tincal () and tincar ()) is a salt (ionic compound), a hydrated or anhydrous borate of sodium, with the chemical formula .
It is a colorless crystalline solid that dissolves in water to make a basic solution.
It is commonly available in powder or granular form and has many industrial and household uses, including as a pesticide, as a metal soldering flux, as a component of glass, enamel, and pottery glazes, for tanning of skins and hides, for artificial aging of wood, as a preservative against wood fungus, and as a pharmaceutic alkalizer. In chemical laboratories, it is used as a buffering agent.
The terms tincal and tincar refer to native borax, historically mined from dry lake beds in various parts of Asia.
History
Borax was first discovered in dry lake beds in Tibet. Native tincal from Tibet, Persia, and other parts of Asia was traded via the Silk Road to the Arabian Peninsula in the 8th century AD.
Borax first came into common use in the late 19th century when Francis Marion Smith's Pacific Coast Borax Company began to market and popularize a large variety of applications under the 20 Mule Team Borax trademark, named for the method by which borax was originally hauled out of the California and Nevada deserts.
Etymology
The English word borax is Latinized: the Middle English form was , from Old French . That may have been from Medieval Latin (another English spelling), , along with Spanish (> ) and Italian , in the 9th century, and from Arabic ()
The words tincal and tincar were adopted into English in the 17th century from Malay and from Urdu/Persian/Arabic ; thus the two forms in English. These all appear to be related to the Sanskrit .
Chemistry
From a chemical perspective, borax contains the [B4O5(OH)4]2− ion. In this structure, there are two four-coordinate boron centers and two three-coordinate boron centers.
It is a proton conductor at temperatures above 21 °C. Conductivity is maximum along the b-axis.
Borax is also easily converted to boric acid and other borates, which have many applications. Its reaction with hydrochloric acid to form boric acid is:
Na2B4O7·10H2O + 2 HCl → 4 H3BO3 + 2 NaCl + 5 H2O
Borax is sufficiently stable to find use as a primary standard for acid-base titrimetry.
Molten borax dissolves many metal oxides to form glasses. This property is important for its uses in metallurgy and for the borax bead test of qualitative chemical analysis.
Borax is soluble in a variety of solvents; however, it is notably insoluble in ethanol.
The term borax properly refers to the so-called "decahydrate" , but that name is not consistent with its structure. It is actually an octahydrate. The anion is not tetraborate but tetrahydroxy tetraborate , so the more correct formula should be . However, the term may also be applied to the related compounds. Borax "pentahydrate" has the formula , which is actually a trihydrate . It is a colorless solid with a density of 1.880 g/cm3 that crystallizes from water solutions above 60.8 °C in the rhombohedral crystal system. It occurs naturally as the mineral tincalconite. It can be obtained by heating the "decahydrate" above 61 °C. Borax "dihydrate" has the formula , which is actually anhydrous, with the correct formula . It can be obtained by heating the "decahydrate" or "pentahydrate" to above 116–120 °C. Anhydrous borax is sodium tetraborate proper, with formula . It can be obtained by heating any hydrate to 300 °C. It has one amorphous (glassy) form and three crystalline forms – α, β, and γ, with melting points of 1015, 993 and 936 K respectively. α- is the stable form.
Natural sources
Borax occurs naturally in evaporite deposits produced by the repeated evaporation of seasonal lakes. The most commercially important deposits are found in: Turkey; Boron, California; and Searles Lake, California. Also, borax has been found at many other locations in the Southwestern United States, the Atacama Desert in Chile, newly discovered deposits in Bolivia, and in Tibet and Romania. Borax can also be produced synthetically from other boron compounds.
Naturally occurring borax (known by the trade name Rasorite–46 in the United States and many other countries) is refined by a process of recrystallization.
Uses
Borax is used in pest control solutions because it is toxic to ants and rats. Because it is slow-acting, worker ants will carry the borax to their nests and poison the rest of the colony. Borax is more effective than zinc borate for termite control but a 1997 paper concluded that exposing at least 10% of the total colony population was needed for effective treatment. In Japan the practice of laying newspapers treated with o-boric acid and borax under buildings has been effective in controlling Coptotermes formosanus and Reticulitermes speratus populations. Decaying wood treated with 0.25 to 0.5 percent DOT was also found to be effective for baiting Heterotermes aureus populations. The paper concluded: "Borate baits would undoubtably be helpful in the long-term, but do not appear sufficient as a sole method of structural protection."
Borax is used in various household laundry and cleaning products, including the 20 Mule Team Borax laundry booster, Boraxo powdered hand soap, and some tooth bleaching formulas.
Borate ions (commonly supplied as boric acid) are used in biochemical and chemical laboratories to make buffers, e.g. for polyacrylamide gel electrophoresis of DNA and RNA, such as TBE buffer (borate-buffered tris(hydroxymethyl)aminomethane) or the newer SB buffer, or BBS buffer (borate-buffered saline) in coating procedures. Borate buffers (usually at pH 8) are also used as preferential equilibration solutions in dimethyl pimelimidate (DMP) based crosslinking reactions.
Borax as a source of borate has been used to take advantage of the co-complexing ability of borate with other agents in water to form complex ions with various substances. Borate and a suitable polymer bed are used to chromatograph non-glycated hemoglobin differentially from glycated hemoglobin (chiefly HbA1c), which is an indicator of long-term hyperglycemia in diabetes mellitus.
Borax alone does not have a high affinity for hardness cations, although it has been used for water softening. Its chemical equation for water softening is given below:
Ca2+(aq) + Na2B4O7(aq) → CaB4O7(s) + 2 Na+(aq)
The sodium ions introduced do not make water "hard". This method is suitable for removing both temporary and permanent types of hardness.
A mixture of borax and ammonium chloride is used as a flux when welding iron and steel. It lowers the melting point of the unwanted iron oxide (scale), allowing it to run off. Borax is also mixed with water as a flux when soldering jewelry metals such as gold or silver, where it allows the molten solder to wet the metal and flow evenly into the joint. Borax is also a good flux for "pre-tinning" tungsten with zinc, making the tungsten soft-solderable. Borax is often used as a flux for forge welding.
In artisanal gold mining, borax is sometimes used as part of a process known as the borax method (as a flux) meant to eliminate the need for toxic mercury in the gold extraction process, although it cannot directly replace mercury. Borax was reportedly used by gold miners in parts of the Philippines in the 1900s. There is evidence that, in addition to reducing the environmental impact, this method achieves better gold recovery for suitable ores and is less expensive. This borax method is used in northern Luzon in the Philippines, but miners have been reluctant to adopt it elsewhere for reasons that are not well understood. The method has also been promoted in Bolivia and Tanzania.
A rubbery polymer sometimes called Slime, Flubber, 'gluep' or 'glurch' (or erroneously called Silly Putty, which is based on silicone polymers), can be made by cross-linking polyvinyl alcohol with borax. Making flubber from polyvinyl acetate-based glues, such as Elmer's Glue, and borax is a common elementary science demonstration.
Borax, given the E number E285, is used as a food additive, but this use is banned in some countries, such as Australia, China, Thailand and the United States. As a consequence, certain foods, such as caviar, produced for sale in the United States contain higher levels of salt to assist preservation. In addition to its use as a preservative, borax imparts a firm, rubbery texture to food. In China, borax has been found in foods including wheat and rice noodles named lamian, shahe fen, char kway teow, and chee cheong fun. In Indonesia, it is a common, but forbidden, additive to such foods as noodles, bakso (meatballs), and steamed rice. Numerous studies have demonstrated a negative association between borax, when consumed with boric acid, and various types of cancers. Boric acid and borax are low in toxicity for acute oral exposures, with approximately the same acute toxicity as salt. The average dose in asymptomatic ingestion cases, which account for 88% of all ingestions, is around 0.9 grams. However, the range of reported asymptomatic doses is wide, from 0.01 to 88.8 g.
Other uses include:
Ingredient in enamel glazes
Component of glass, pottery, and ceramics
Used as an additive in ceramic slips and glazes to improve fit on wet, greenware, and bisque
Fire retardant
Anti-fungal compound for cellulose insulation
Mothproofing 10% solution for wool
Pulverized for the prevention of stubborn pests (e.g. German cockroaches) in closets, pipe and cable inlets, wall panelling gaps, and inaccessible locations where ordinary pesticides are undesirable
Precursor for sodium perborate monohydrate that is used in detergents, as well as for boric acid and other borates
Tackifier ingredient in casein, starch and dextrin-based adhesives
Precursor for boric acid, a tackifier ingredient in polyvinyl acetate, polyvinyl alcohol-based adhesives
To make indelible ink for dip pens by dissolving shellac into heated borax
Curing agent for snake skins
Curing agent for salmon eggs, for use in sport fishing for salmon
Swimming pool buffering agent to control pH
Neutron absorber, used in nuclear reactors and spent fuel pools to control reactivity and to shut down a nuclear chain reaction
As a micronutrient fertilizer to correct boron-deficient soils
Preservative in taxidermy
To color fires with a green tint
Traditionally used to coat dry-cured meats such as hams to improve the appearance and discourage flies
Used by blacksmiths in forge welding
Used as a flux for melting metals and alloys in casting to draw out impurities and prevent oxidation
Used as a woodworm treatment (diluted in water)
In particle physics as an additive to nuclear emulsion, to extend the latent image lifetime of charged particle tracks. The first observation of the pion, which was awarded the 1950 Nobel Prize, used this type of emulsion.
Toxicity
According to one study, borax is not acutely toxic. Its LD50 (median lethal dose) is 2.66 g/kg in rats, meaning that a significant dose of the chemical is needed to cause severe symptoms or death. The lethal dose is not necessarily the same for humans. On pesticide information websites it is listed as a non-lethal compound of no hazardous concern. Borax is absorbed poorly through intact skin, although fatalities have been recorded after persistent treatment of rashes and open wounds with boric acid-containing ointments and bath solutions. Borax is readily absorbed orally, well above 90%, and mostly excreted through the urine. Fatal cases attributed to ingestion include small children mistakenly drinking pesticides and suicide attempts with large volumes of crystals. No genotoxicity or carcinogenicity has been recorded in studies.
Borax has been in use as an insecticide in the United States with various restrictions since 1946. All restrictions were removed in February 1986 due to the low toxicity of borax, as reported in two EPA documents relating to boric acid and borax.
Although it cited inconclusive data, a re-evaluation in 2006 by the EPA still found that "There were no signs of toxicity observed during the study and no evidence of cytotoxicity to the target organ." In the reevaluation, a study of toxicity due to overexposure was checked and the findings were that "The residential handler inhalation risks due to boric acid and its sodium salts as active ingredients are not a risk concern and do not exceed the level of concern..." but that there could be some risk of irritation to children inhaling it if used as a powder for cleaning rugs.
Overexposure to borax dust can cause respiratory irritation, while no skin irritation is known to exist due to external borax exposure. Ingestion may cause gastrointestinal distress including nausea, persistent vomiting, abdominal pain, and diarrhea. Effects on the vascular system and human brain include headaches and lethargy but are less frequent. In severe cases, a "beefy" red rash affecting the palms, soles, buttocks and scrotum has occurred.
The Indonesian Directorate of Consumer Protection warns of the risk of liver cancer with high consumption of borax over a period of 5–10 years.
Borax was added to the Substance of Very High Concern (SVHC) candidate list on December 16, 2010. The SVHC candidate list is part of the EU Regulations on the Registration, Evaluation, Authorisation and Restriction of Chemicals 2006 (REACH), and the addition was based on the revised classification of borax as toxic for reproduction category 1B under the CLP Regulations. Substances and mixtures imported into the EU which contain borax are now required to be labelled with the warnings "May damage fertility" and "May damage the unborn child". It was proposed for addition to REACH Annex XIV by the ECHA on July 1, 2015. If this recommendation is approved, all imports and uses of borax in the EU will have to be authorized by the ECHA.
A review of the boron toxicity (as boric acid and borates) published in 2012 in the Journal of Toxicology and Environmental Health concluded: "It clearly appears that human B [boron] exposures, even in the highest exposed cohorts, are too low to reach the blood (and target tissue) concentrations that would be required to exert adverse effects on reproductive functions." A draft risk assessment released by Health Canada in July 2016 has found that overexposure to boric acid has the potential to cause developmental and reproductive health effects. Since people are already exposed to boric acid naturally through their diets and water, Health Canada advised that exposure from other sources should be reduced as much as possible, especially for children and pregnant women.
The concern is not with any one product, but rather multiple exposures from a variety of sources. With this in mind, the department also announced that certain pesticides that contain boric acid, which are commonly used in homes, will have their registrations cancelled and be phased out of the marketplace. As well, new, more protective label directions are being introduced for other boric acid pesticides that continue to be registered in Canada (for example, enclosed bait stations and spot treatments using gel formulations).
| Physical sciences | Boric oxyanions | Chemistry |
65574 | https://en.wikipedia.org/wiki/M4%20Sherman | M4 Sherman | The M4 Sherman, officially medium tank, M4, was the medium tank most widely used by the United States and Western Allies in World War II. The M4 Sherman proved to be reliable, relatively cheap to produce, and available in great numbers. It was also the basis of several other armored fighting vehicles including self-propelled artillery, tank destroyers, and armored recovery vehicles. Tens of thousands were distributed through the Lend-Lease program to the British Commonwealth, Soviet Union, and other Allied Nations. The tank was named by the British after the American Civil War General William Tecumseh Sherman.
The M4 Sherman tank evolved from the M3 Lee, a medium tank developed by the United States during the early years of World War II. The M3, also known by its service names "Grant" and "Lee," was characterized by a unique design that featured the main armament mounted in a side sponson. The Grant variant, used by British forces, employed a lower-profile turret based on British designs, while the Lee variant, used by the United States, retained the original American turret design. Despite the M3's effectiveness, the tank's unconventional layout and the limitations of its hull-mounted gun prompted the need for a more efficient and versatile design, leading to the development of the M4 Sherman.
The M4 Sherman retained much of the mechanical design of the M3, but it addressed several shortcomings and incorporated improvements in mobility, firepower, and ergonomics. One of the most significant changes was the relocation of the main armament—initially a 75 mm gun—into a fully traversing turret located at the center of the vehicle. This design allowed for more flexible and accurate fire control, enabling the crew to engage targets with greater precision than was possible on the M3. Additionally, the M4 featured a one-axis gyrostabilizer, which, while not precise enough to allow for accurate firing while in motion, helped keep the gun roughly aimed in the direction of the target when the tank came to a stop. This feature proved useful in ensuring the tank could quickly take aim after halting to fire, enhancing its effectiveness in combat. However, by modern standards, this system was relatively rudimentary compared to more advanced stabilizers.
The development of the M4 Sherman emphasized key factors such as reliability, ease of production, and standardization. The U.S. Army and the designers prioritized durability and maintenance ease, which ensured the tank could be quickly repaired in the field. A critical aspect of the design process was the standardization of parts, allowing for streamlined production and the efficient supply of replacement components. Additionally, the tank's size and weight were kept within moderate limits, which facilitated easier shipping and compatibility with existing logistical and engineering equipment, including bridges and transport vehicles. These design principles were essential for meeting the demands of mass production and quick deployment.
The M4 Sherman was designed to be more versatile and easier to produce than previous models, which proved vital as the United States entered World War II. It became the most-produced American tank of the conflict, with a total of 49,324 units built, including various specialized variants. Its production volume surpassed that of any other American tank, and it played a pivotal role in the success of the Allied forces. In terms of tank production, the only World War II-era tank to exceed the M4's production numbers was the Soviet T-34, with approximately 84,070 units built.
On the battlefield, the Sherman was particularly effective against German light and medium tanks during the early stages of the war. Its 75 mm gun and relatively superior armor provided an edge over the tanks fielded by Nazi Germany during this period. The M4 Sherman saw widespread use across various theaters of combat, including North Africa, Italy, and Western Europe. It was instrumental in the success of several Allied offensives, particularly after 1942, when the Allies began to gain momentum following the Allied landings in North Africa (Operation Torch) and the subsequent campaigns in Italy and France. The ability to produce the Sherman in large numbers, combined with its operational flexibility and effectiveness, made it a key component of the Allied war effort.
The Sherman's role as the backbone of U.S. armored forces in World War II cemented its legacy as one of the most influential tank designs of the 20th century. Despite its limitations—such as relatively thin armor compared to German heavy tanks like the Tiger and Panther—the M4 was designed to be both affordable and adaptable. Its widespread deployment, durability, and ease of maintenance ensured it remained in service throughout the war, and it continued to see action even in the years following World War II in various conflicts and regions. The M4 Sherman remains one of the most iconic tanks in military history, symbolizing the industrial might and innovation of the United States during the war.
When the M4 tank went into combat in North Africa with the British Army at the Second Battle of El Alamein in late 1942, it increased the advantage of Allied armor over Axis armor and was superior to the lighter German and Italian tank designs. For this reason, the US Army believed that the M4 would be adequate to win the war, and relatively little pressure was initially applied for further tank development. Logistical and transport restrictions, such as limitations imposed by roads, ports, and bridges, also complicated the introduction of a more capable but heavier tank. Tank destroyer battalions using vehicles built on the M4 hull and chassis, but with open-topped turrets and more potent high-velocity guns, also entered widespread use in the Allied armies. Even by 1944, most M4 Shermans kept their dual-purpose 75 mm gun. By then, the M4 was inferior in firepower and armor to increasing numbers of German upgraded medium tanks and heavy tanks but was able to fight on with the help of considerable numerical superiority, greater mechanical reliability, better logistical support, and support from growing numbers of fighter-bombers and artillery pieces. Later in the war, a more effective armor-piercing gun, the 76 mm gun M1, was incorporated into production vehicles. For anti-tank work, the British refitted Shermans with a 76.2 mm Ordnance QF 17-pounder gun (as the Sherman Firefly). Some were fitted with a 105 mm howitzer to act as infantry support vehicles.
The relative ease of production allowed large numbers of the M4 to be manufactured, and significant investment in tank recovery and repair units allowed disabled vehicles to be repaired and returned to service quickly. These factors combined to give the Allies numerical superiority in most battles, and many infantry divisions were provided with M4s and tank destroyers. By 1944, a typical U.S. infantry division had attached for armor support an M4 Sherman battalion, a tank destroyer battalion, or both.
After World War II, the Sherman, particularly the many improved and upgraded versions, continued to see combat service in many conflicts around the world, including the UN forces in the Korean War, with Israel in the Arab–Israeli wars, briefly with South Vietnam in the Vietnam War, and on both sides of the Indo-Pakistani War of 1965.
U.S. design prototype
The United States Army Ordnance Department designed the M4 medium tank as a replacement for the M3 medium tank. The M3 was an up-gunned development of the M2 medium tank of 1939, in turn derived from the M2 light tank of 1935. The M3 was developed as a stopgap measure until a new turret mounting a 75 mm gun could be devised. While it was a big improvement when used by the British in Africa against German forces, the placement of a 37 mm gun turret on top gave it a very high profile, and the unusual side-sponson mounted main gun, with limited traverse, could not be aimed across the other side of the tank. Though reluctant to adopt British weapons into their arsenal, the American designers were prepared to accept proven British ideas. These ideas, as embodied in a tank designed by the Canadian General Staff, also influenced the development of the American Sherman tank. Before long, American military agencies and designers had accumulated sufficient experience to forge ahead on several points. In the field of tank armament, the American 75 mm and 76 mm dual-purpose tank guns won the acknowledgment of British tank experts. Detailed design characteristics for the M4 were submitted by the Ordnance Department on 31 August 1940, but the development of a prototype was delayed while the final production designs of the M3 were finished and the M3 entered full-scale production. On 18 April 1941, the U.S. Armored Force Board chose the simplest of five designs. Known as the T6, the design was a modified M3 hull and chassis, carrying a newly designed turret mounting the M3's 75 mm gun. This would later become the Sherman.
The Sherman's reliability resulted from many features developed for U.S. light tanks during the 1930s, including vertical volute spring suspension, rubber-bushed tracks, and a rear-mounted radial engine with drive sprockets in front. The goals were to produce a fast, dependable medium tank able to support infantry, provide breakthrough striking capacity, and defeat any tank then in use by the Axis nations.
The T6 prototype was completed on 2 September 1941. The upper hull of the T6 was a single large casting. It featured a single overhead hatch for the driver and a hatch in the side of the hull. In the later M4A1 production model, this large casting was maintained, although the side hatch was eliminated, and a second overhead hatch was added for the assistant driver. The modified T6 was standardized as the M4, and first production completed in February 1942. The cast-hull models would later be re-standardized as M4A1, with the first welded-hull models receiving the designation M4. In August 1942, a variant of the M4 was put forth by the Detroit Arsenal to have angled, rather than rounded hull and turret armor. The changes were intended to improve the tank's protection without increasing weight or degrading other technical characteristics.
Doctrine
As the United States approached entry into World War II, armored employment was doctrinally governed by Field Manual 100–5, Operations (published May 1941, the month following selection of the M4 tank's final design). That field manual stated:
The M4 was, therefore, not originally intended primarily as an infantry support tank. It placed tanks in the "striking echelon" of the armored division and placed the infantry in the "support echelon", without directing that tanks should only seek to attack other tanks, thus leaving target selection up to the field commander based on what types of units were available to him to attack. A field manual covering the use of the Sherman (FM 17–33, "The Tank Battalion, Light and Medium" of September 1942) described fighting enemy tanks, when necessary, as one of the many roles of the Sherman, but devoted only one page of text and four diagrams to tank-versus-tank action out of 142 pages. This early armored doctrine was heavily influenced by the sweeping early war successes of German blitzkrieg tactics. By the time M4s reached combat in significant numbers, battlefield demands for infantry support and tank-versus-tank action far outnumbered the occasional opportunities of rear-echelon exploitation.
United States doctrine held that the most critical anti-tank work stopping massed enemy tank attacks was primarily to be done by towed and self-propelled anti-tank guns, operated by "Tank Destroyer" battalions, with friendly tanks being used in support if possible. Speed was essential to bring the tank destroyers from the rear to destroy incoming tanks. This doctrine was rarely followed in combat, as it was found to be impractical. Commanders were reluctant to leave tank destroyers in reserve; if they were, it was also easier for an opposing armored force to achieve a breakthrough against an American tank battalion, which would not have all of its anti-tank weapons at the front during the beginning of any attack.
U.S. production history
The first production of the Sherman took place at the Lima Locomotive Works and was first used in 1941, with many early vehicles reserved for British use under Lend-Lease; the first production Sherman was given to the U.S. Army for evaluation, and the second tank of the British order went to London. Nicknamed Michael, probably after Michael Dewar, head of the British tank mission in the U.S., the tank was displayed in London and is now an exhibit at The Tank Museum, Bovington, UK.
In World War II, the U.S. Army ultimately fielded 16 armored divisions, along with 70 separate tank battalions, while the U.S. Marine Corps fielded six tank battalions. A third of all Army tank battalions, and all six Marine tank battalions, were deployed to the Pacific Theater of Operations (PTO). Before September 1942, President Franklin D. Roosevelt had announced a production program calling for 120,000 tanks for the Allied war effort. Although the American industrial complex was not affected by enemy aerial bombing or submarine warfare as were Japan, Germany, and, to a lesser degree, Great Britain, an enormous amount of steel for tank production was diverted to the construction of warships and other naval vessels. Steel used in naval construction amounted to the equivalent of approximately 67,000 tanks; consequently, only about 53,500 tanks were produced during 1942 and 1943.
The Army had seven main sub-designations for M4 variants during production: M4, M4A1, M4A2, M4A3, M4A4, M4A5, and M4A6. These designations did not indicate linear improvement: an "M4A4" was not necessarily better than an "M4A3". These sub-types indicated standardized production variations, which were often manufactured concurrently at different locations. The sub-types differed mainly in engines, although the M4A1 differed from the other variants by its fully cast upper hull, with a distinctive rounded appearance. The M4A4 had a longer engine that required a longer hull and more track blocks, and thus the most distinguishing feature of the M4A4 was the wider longitudinal spacing between the bogies. "M4A5" was an administrative placeholder designation for the Canadian Ram tank. The M4A6 had a radial diesel engine as well as the elongated chassis of the M4A4, but only 75 of these were ever produced.
Most Sherman sub-types ran on gasoline. The air-cooled Continental-produced Wright R-975 Whirlwind 9-cylinder radial gasoline engine in the M4 and M4A1 produced . The M4A3 used the liquid-cooled Ford GAA V8 gasoline engine, and the M4A4 used the liquid-cooled 30-cylinder Chrysler A57 multibank gasoline engine. There were also two diesel-engined variants. The M4A2 was powered by a pair of liquid-cooled GMC Detroit Diesel 6–71 two-stroke inline engines that produced a total of , while the M4A6 used an RD-1820 (a redesigned Caterpillar D-200A air-cooled radial diesel engine, adapted from Wright Aeronautical's Wright R-1820 Cyclone 9 nine-cylinder radial aircraft engine) that produced . A 24-volt electrical system was used in the M4. The M4A2 and M4A4 were mostly supplied to other Allied countries under Lend-Lease.
The term "M4" can refer specifically to the initial sub-type with its Continental radial engine, or generically, to the entire family of seven Sherman sub-types, depending on context. Many details of production, shape, strength, and performance improved while in production, without a change to the tank's basic model number. These included stronger suspension units, safer "wet" (W) ammunition stowage, and stronger or more effective armor arrangements, such as the M4 "Composite", which had a cheaper to produce cast front hull section mated to a regular welded rear hull. British nomenclature for Shermans was by mark numbers for the different hulls with letters for differences in armament and suspension: A for a vehicle with the 76 mm gun, B for the 105 mm howitzer, C for the 17-pounder gun, and Y for any vehicle equipped with horizontal volute spring suspension (HVSS), e.g. British operated M4A1(76) was known as Sherman IIA.
Early Shermans mounted a 75 mm medium-velocity general-purpose gun. Although Ordnance began work on the T20/22/23 series as Sherman replacements, the Army Ground Forces were satisfied with the M4, and the Armored Force Board considered some features of the experimental tanks unsatisfactory. Continuing with the M4 minimized production disruption, but elements of the experimental designs were incorporated into the Sherman. Later M4A1, M4A2, and M4A3 models received the larger turret with the high-velocity 76 mm gun trialed on the T23 tank. The first standard-production 76 mm gun-armed Sherman was an M4A1, accepted in January 1944, which first saw combat in July 1944 during Operation Cobra. Variants of the M4 and M4A3 were factory-produced with a 105 mm howitzer and a distinctive rounded gun mantlet, which surrounded the main gun, on the turret. The first Sherman variant to be armed with the 105 mm howitzer was the M4, first accepted in February 1944.
From May to July 1944, the Army accepted a limited run of 254 M4A3E2 "Jumbo" Shermans, which had very thick hull armor and the 75 mm gun in a new, better-protected T23-style turret ("Jumbos" could mount the 76 mm M1 cannon), to assault fortifications, lead convoys, and spearhead armored columns. The M4A3 model was the first to be factory-produced with the HVSS system with wider tracks to distribute weight, beginning in August 1944. With the smooth ride of the HVSS, it gained the nickname "Easy Eight" from its experimental "E8" designation. The M4 and M4A3 105 mm-armed tanks, as well as the M4A1 and M4A2 76 mm-armed tanks, were also eventually equipped with HVSS. Both the Americans and the British developed a wide array of special attachments for the Sherman, although few saw combat, remaining experimental. Those that saw action included a bulldozer blade, the Duplex Drive system, flamethrowers for Zippo flame tanks, and various rocket launchers such as the T34 Calliope. British variants (DDs and mine flails) formed part of the group of specialized vehicles collectively known as "Hobart's Funnies" (after Percy Hobart, commander of the 79th Armoured Division).
The M4 Sherman's basic chassis was used for all the sundry roles of a modern mechanized force. These included the M10 and M36 tank destroyers; M7B1, M12, M40, and M43 self-propelled artillery; the M32 and M74 "tow truck"-style recovery tanks with winches, booms, and an 81 mm mortar for smoke screens; and the M34 (from M32B1) and M35 (from M10A1) artillery prime movers.
Service history
Allocation
During World War II, approximately 19,247 Shermans were issued to the U.S. Army and about 1,114 to the U.S. Marine Corps. The U.S. also supplied 17,184 to the United Kingdom (some of which in turn went to the Canadians and the Free Poles), while the Soviet Union received 4,102 and an estimated 812 were transferred to China. These numbers were distributed further to the respective countries' allied nations.
The U.S. Marine Corps used the diesel M4A2 and gasoline-powered M4A3 in the Pacific. However, the Chief of the Army's Armored Force, Lt. Gen. Jacob L. Devers, ordered that no diesel-engined Shermans be used by the Army outside the Zone of Interior (the continental U.S.). The Army used all types for either training or testing within the United States but intended the M4A2 and M4A4 (with the A57 Multibank engine) to be the primary Lend-Lease exports.
First combat
Shermans were being issued in small numbers for familiarization to U.S. armored divisions when there was a turn of events in the Western Desert campaign. On 21 June 1942, Axis forces captured Tobruk, threatening Egypt and Britain's supply line through the Suez Canal. British Prime Minister Winston Churchill was at the Second Washington Conference when news of the defeat broke; President Franklin D. Roosevelt asked what he could do to help and Churchill replied at once, "Give us as many Sherman tanks as you can spare and ship them to the Middle East as quickly as possible." The US considered collecting all Shermans together to be able to send the 2nd Armored Division under Patton to reinforce Egypt, but delivering the Shermans directly to the British was quicker and over 300 – mostly M4A1s, but also including M4A2s – had arrived there by September 1942.
The Shermans were modified for desert warfare with shields over the tracks and additional stowage. The Sherman first saw combat at the Second Battle of El Alamein in October 1942 with the British 8th Army. At the start of the offensive, there were 252 tanks fit for action. These equipped the British 9th Armoured Brigade (for the battle under the New Zealand Division), 2nd Armoured Brigade (1st Armoured Division), and 8th and 20th Armoured Brigades (10th Armoured Division). Their first encounter with enemy tanks was against German Panzer III and IV tanks armed with long 50 mm and 75 mm guns, which engaged them at . There were losses to both sides.
The first U.S. Shermans in battle were M4s and M4A1s in Operation Torch the following month. On 6 December, near Tebourba, Tunisia, a platoon from the 2nd Battalion, 13th Armored Regiment was lost to enemy tanks and anti-tank guns.
Additional M4s and M4A1s replaced M3s in U.S. tank battalions over the course of the North African campaign.
The M4 and M4A1 were the main types in U.S. units until the fall of 1944 when the Army began replacing them with the preferred M4A3 with its more powerful engine. Some M4s and M4A1s continued in U.S. service for the rest of the war. The first Sherman to enter combat with the 76 mm gun in July 1944 was the M4A1, then the M4A2, closely followed by the M4A3. By the end of the war, roughly half the U.S. Army Shermans in Europe had the 76 mm gun. The first HVSS-equipped Sherman to see combat was the M4A3(76)W in December 1944.
Eastern Front
Under Lend-Lease, 4,102 M4A2 medium tanks were sent to the Soviet Union. Of these, 2,007 were equipped with the original 75 mm main gun, with 2,095 mounting the more-capable 76 mm gun. The total number of Sherman tanks sent to the USSR under Lend-Lease represented 18.6% of all Lend-Lease Shermans. The first 76 mm-armed M4A2 Shermans started to arrive in the Soviet Union in the late summer of 1944. Soviet records reported the receipt of 3,664 tanks; the difference is mainly due to deliveries sunk in transit and to discrepancies between United States and Soviet archives.
The Red Army considered the M4A2 to be much less prone to catch fire due to ammunition detonation than the T-34/76, but the M4A2 had a higher tendency to overturn in road accidents and collisions or because of rough terrain than the T-34 due to its higher center of gravity.
By 1945, some Red Army armored units were equipped entirely with the Sherman. Such units included the 1st Guards Mechanized Corps, the 3rd Guards Mechanized Corps and the 9th Guards Mechanized Corps, amongst others. According to Soviet tanker Dmitriy Loza, the Sherman was held in good regard and viewed positively by many Soviet tank crews, with compliments given to its reliability, ease of maintenance, generally good firepower (referring especially to the 76 mm gun version), as well as its auxiliary power unit (APU), which kept the tank's batteries charged without having to run the main engine, as was required on the T-34. However, according to Soviet tank crews, the Sherman also had disadvantages, the greatest being its high center of gravity and the ease with which it could be hit by enemy fire. The Sherman's relatively narrow-set tracks struggled to negotiate muddy terrain compared to the wider-set tracks of the T-34 or German Panther tank.
David M. Glantz wrote: "[The Sherman’s] narrow treads made it much less mobile on mud than its German and Soviet counterparts, and it consumed great quantities of fuel..." Glantz noted that Soviet tankers preferred the American tanks to the British ones, but preferred Soviet ones most of all.
Pacific Theater
While combat in the European theater often consisted of high-profile armored warfare, the mainly naval nature of the Pacific Theater of Operations (PTO) relegated it to secondary status for both the Allies and the Japanese. While the U.S. Army fielded 16 armored divisions and 70 separate tank battalions during the war, only a third of the battalions and none of the divisions were deployed to the Pacific Theater. The Imperial Japanese Army (IJA) deployed only their 1st Tank Division and 2nd Tank Division to the Pacific during the war, with the 3rd Tank Division being deployed in Burma, China and Manchukuo's border with the Soviet Union and the 4th Tank Division remaining on the Japanese home islands in preparation for an Allied invasion that never came. Armor from both sides mostly operated in jungle terrain that was poorly suited to armored warfare. For this type of terrain, the Japanese and the Allies found light tanks easier to transport and deploy.
During the early stages of combat in the Pacific, specifically, the Guadalcanal Campaign, the U.S. Marine Corps' M2A4 light tank fought against the equally matched Type 95 Ha-Go light tank; both were armed with a 37 mm main gun. However, the M2 (produced in 1940) was newer by five years. By 1943, the IJA still used the Type 95 and Type 97 Chi-Ha medium tanks, while Allied forces were quickly replacing their light tanks with 75 mm-armed M4s. The Chinese in India received 100 M4 Shermans and used them to great effect in the subsequent 1944 and 1945 offensives in the China Burma India Theater.
To counter the Sherman, the Japanese developed the Type 3 Chi-Nu and the heavier Type 4 Chi-To; both tanks were armed with 75 mm guns, albeit of different types. Only 166 Type 3s and two Type 4s were built, and none saw combat; they were saved for the defense of the Japanese home islands, leaving 1930s-era light and medium armor to do battle against 1940s-built Allied light and medium armor.
During the later years of the war, general purpose high explosive ammunition was preferred for fighting Japanese tanks because armor-piercing rounds, which had been designed for penetrating thicker steel, often went through the thin armor of the Type 95 Ha-Go (the most commonly encountered Japanese tank) and out the other side without stopping. Although the high-velocity guns of tank destroyers were useful for penetrating fortifications, M4s armed with flamethrowers were often deployed, as direct fire seldom destroyed Japanese fortifications.
Korean War
During the Korean War, the M4A3E8 Easy Eight formed the bulk of the U.S. military's tank force until the signing of the armistice agreement.
At the outbreak of the war, the U.S. military tried to deploy the M4A3E8, a medium tank of the same class as the North Korean T-34-85, to counter it, but few tanks were available for rapid deployment from the Far East due to post-World War II demobilization. The U.S. Far East Command collected 58 M4A3E8s scattered throughout Japan, created the 8072nd Temporary Tank Battalion (later renamed the 89th Tank Battalion) on 17 July, and landed them at Busan on 1 August. The 8072nd Temporary Tank Battalion was immediately committed to the Battle of Masan to support the 25th U.S. Infantry Division.
In all, a total of 679 M4A3E8s were deployed on the Korean Peninsula in 1950. The M4A3E8 and T-34-85 were comparable and could destroy each other at normal combat ranges, although the use of High-Velocity Armor Piercing ammunition, advanced optics, and better crew training gave the Sherman an advantage. The M4A3E8, using 76 mm HVAP ammunition, destroyed 41 enemy tanks from July to November 1950.
The M4A3E8 had weaker anti-tank capability than the larger-caliber M26 Pershing and M46 Patton operated at the same time. However, the lighter M4A3E8 became the preferred U.S. tank in the later phases of the war, considered more advantageous for its maneuverability on rough terrain and, owing to its mechanical reliability, its ease of maintenance. Because of these qualities, the M4A3E8 was widely used for providing close support to infantry units, particularly during battles for high ground and mountains.
From December 1951, around 20 M4A3E8s saw service with the Republic of Korea Marine Corps during the war, while the Republic of Korea Army operated M36 gun motor carriages as its main armored asset.
Other uses
After World War II, the U.S. kept the M4A3E8 Easy Eight in service, with either the 76 mm gun or a 105 mm M4 howitzer. The U.S. Army replaced the M4 in 1957 in favor of the M47 Patton, M48 Patton, and M60 Patton. The U.S. continued to transfer Shermans to its allies, which contributed to widespread foreign use.
The Israel Defense Forces (IDF) used Shermans from its creation in 1948 until the 1980s, having first acquired a single M4A2 lacking the main armament from British forces as they withdrew from Israel. The popularity of the tank (having now been re-armed) compared to the outdated, 1934-origin French Renault R35 interwar light tanks with their 37 mm short-barreled guns, which made up the bulk of the IDF's tank force, led to the purchase of 30 unarmed M4(105 mm)s from Italian scrapyards. Three of these, plus the original M4A2, saw extensive service in the 1948–49 War of Independence. The remainder were then serviced and rearmed with 75 mm guns and components whenever these became available, forming a large part of Israeli tank forces for the next eight years. The 75 mm-armed Shermans were replaced by M4A1 (76 mm) Shermans imported from France before the 1956 Suez Crisis after it was realized that their armor penetration was insufficient for combat against newer tanks such as the IDF Centurions as well as the T-34-85s being delivered to Egyptian forces. During further upgrades, the French military helped develop a conversion kit to upgrade about 300 Shermans to the long high-velocity 75 mm gun CN 75-50 used in the AMX-13. These were designated Sherman M-50 by the Israelis. Before the Six-Day War in 1967, the Israeli Army upgraded about 180 M4A1(76)W HVSS Shermans with the French 105 mm Modèle F1 gun, re-engined them with Cummins diesel engines, and designated the upgraded tank Sherman M-51. The Sherman tanks, fighting alongside the 105 mm Centurion Shot Kal and M48 Patton tanks, were able to defeat the T-34-85, T-54/55/62 series, and IS-3 tanks used by the Egyptian and Syrian forces in the 1967 Six-Day War.
M4A3s were also used by British forces in Indonesia during the Indonesian National Revolution until 1946, when they were passed on to the KNIL, which used them until 1949 before transferring them to the Indonesian National Armed Forces.
Armament
Gun development
As the Sherman was being designed, provisions were made so that multiple types of main armament (specified as a 75 mm gun, a 3-inch gun, or a 105 mm howitzer) could be mounted in the turret. The possibility of mounting the main gun of the M6 heavy tank, the 3-inch gun M7, in the turret of the M4 Sherman was explored first, but its size and weight (the weapon was modified from a land-based antiaircraft gun) made it too large to fit in the turret of the Sherman. Development on a new 76 mm gun better suited to the Sherman began in fall 1942.
In early 1942, tests began on the feasibility of mounting a 105 mm howitzer into the turret of the Sherman. The basic 105 mm howitzer M2A1 was found to be ill-suited to mounting in a tank turret, so it was completely redesigned and re-designated the 105 mm howitzer M4. After modifications to the turret (concerning the balancing of the gun and the strength of the power traverse) and interior of the hull (concerning the stowage of the 105 mm ammunition), the Ordnance Department expressed its approval of the project, and production of M4 tanks armed with 105 mm howitzers began in February 1944.
The Sherman would enter combat in 1942 equipped with the 75 mm gun M3, a 40-caliber gun that could penetrate an estimated of rolled homogeneous armor (RHA) at 90 degrees, a range of and at firing the usual M61 APCBC round, and equipped with an M38A2 telescopic gunsight. Facing the early Panzer III and Panzer IV in North Africa, the Sherman's gun could penetrate the frontal armor of these tanks at normal combat ranges, within . U.S. Army Intelligence discounted the arrival of the Tiger I in 1942 and the Panther tank in 1943, predicting that the Panther would be a heavy tank like the Tiger I, and doubted that many would be produced. There were also reports of British QF 6-pounder (57 mm) guns being able to destroy the Tiger I. However, this only happened at very close ranges and against the thinner side armor. Due to these misconceptions, and also due to tests that seemed to prove that the 76 mm gun was able to destroy both the Tiger and the Panther, the leadership of the Army Ground Forces was not especially concerned by the Tiger I. The criteria and results of the 76 mm gun tests were later ruled to have been inaccurate when compared to real-world conditions (tests against sections of American armor plate configured to resemble those found on a Panther tank suggested that the new M1A1 gun would be adequate, but testing against actual captured Panther tanks was never done), with Eisenhower even remarking that he was wrongly told by Ordnance that the 76 mm could knock out any German tank. The Army also failed to anticipate that the Germans would attempt to make the Panther the standard tank of their panzer divisions in 1944, supported by small numbers of Tiger I and IIs.
When the newly designed 76 mm gun, known as the T1, was first installed in the M4 in spring 1943, it was found to unbalance the turret, and the gun barrel also protruded too far forward, making it more difficult to transport and susceptible to hitting the ground when the tank traveled over undulating terrain. The barrel length was reduced by (from 57 calibers to 52), resulting in the M1 variant. Mounting this gun in the original M4 turret proved problematic, so the turret for the aborted T23 tank project was used instead for the definitive production version of the 76 mm M4 Shermans, along with a modified version of the gun known as the M1A1.
Despite the Ordnance Department's development of new 76 mm and 90 mm anti-tank guns, the Army Ground Forces rejected their deployment as unnecessary. An attempt to upgrade the M4 Sherman by installing the 90 mm-armed turret from the T26 tank project on an M4 hull in April 1944 (referred to as the M4/T26) was halted after realizing it could not go into production sooner than the T26 and would likely delay T26 development. Even in 1943, most German armored fighting vehicles (later models of the Panzer IV tank, StuG III assault gun and Marder III panzerjaeger self-propelled anti-tank gun) mounted the 7.5 cm KwK 40. As a result, even weakly armored light German tank destroyers such as the Marder III, which was meant to be a stop-gap measure to fight Soviet tanks in 1942, could destroy Shermans from a distance. The disparity in firepower between the German armored fighting vehicles that began to be fielded in 1943 and the 75 mm-armed M4 was the impetus to begin production of 76 mm-armed M4s in January 1944. In testing before the invasion of Normandy, the 76 mm gun was found to have an undesirably large muzzle blast that kicked up dust from the ground and obscured vision for further firing. The M1A1C gun, which entered production lines in March 1944, was threaded for a muzzle brake, but as the brakes were still in development, the threads were protected with a cap. The addition of a muzzle brake on the new M1A2 gun (which also incorporated a faster rifling twist leading to a slight accuracy increase at longer ranges) beginning in October 1944 finally solved this problem by directing the blast sideways.
Army doctrine at the time emphasized the multirole ability of the tank, and the capability of the high-explosive shell was considered important. Being a dedicated anti-tank gun, the 76 mm had a much weaker high-explosive shell than the existing 75 mm and was not initially accepted by various U.S. armored division commanders, even though many had already been produced and were available. All of the U.S. Army M4s deployed initially in Normandy in June 1944 had the 75 mm gun. Fighting against Panther tanks in Normandy quickly demonstrated the need for better anti-tank firepower, and the 76 mm M4s were deployed to First Army units in July 1944. Operation Cobra was the combat debut of the 76 mm gun-armed Sherman, in the form of the M4A1(76)W. General George S. Patton's Third Army was initially issued 75 mm M4s and accepted 76 mm-armed M4s only after the Battle of Arracourt against Panther tanks in late September 1944.
The higher-velocity 76 mm gun gave Shermans anti-tank firepower equal to many of the German vehicles they encountered, particularly the Panzer IV and StuG III, but its gun was inferior to that of the Tiger or the Panther. The 76 mm could penetrate of unsloped RHA at and at using the usual M62 round. The M1 helped to equalize the Sherman and the Panzer IV in terms of firepower; the 48-caliber 7.5 cm KwK 40 (75 mm L/48) of the Panzer IV could penetrate of unsloped RHA at and at . The 76 mm gun was still inferior to the much more powerful 70-caliber 7.5 cm KwK 42 (75 mm L/70) of the Panther, which could penetrate of unsloped RHA at and at using the usual PzGr.39/42 round. The 76 mm was capable of knocking out a Panther at normal combat ranges from the flanks or rear but could not overcome the glacis plate. Due to its 55-degree slope, the Panther's glacis had a line-of-sight thickness of with actual effectiveness being even greater. An M4 might only knock out a Panther frontally from point-blank range by aiming for its turret front and transverse-cylindrical shaped mantlet, the lower edge of which on most Panthers (especially the earlier Ausf. D and A versions) constituted a vulnerable shot trap. A 76 mm-armed Sherman could penetrate the upper frontal hull superstructure of a Tiger I tank from normal combat ranges. Although the new gun lessened the gap between the two tanks, the Tiger I was still capable of knocking an M4 out frontally from over .
In late summer 1944, after breaking out of the bocage and moving into open country, U.S. tank units that engaged German defensive positions at longer ranges sometimes took 50% casualties before spotting where the fire was coming from. The average combat range noted by the Americans for tank-versus-tank action was . Sherman crews also had concerns about firing from longer ranges, as the Sherman's high-flash powder made their shots easier to spot. This, and the U.S. Army's usual offensive tactical situation, often contributed to losses suffered by the U.S. Army in Europe. Even though the various gunsights fitted to the Sherman had fewer magnification settings than those fitted to German tanks, their gunners were able to use a secondary periscope that featured a far larger field of view than that of their German counterparts.
T4 High-Velocity Armor Piercing (HVAP) ammunition became available in September 1944 for the 76 mm gun. The projectile contained a tungsten penetrator surrounded by a lightweight aluminum body and ballistic windshield, which gave it a higher velocity and more penetrating power. The increased penetration of HVAP allowed the 76 mm gun to match the Panther's 7.5 cm KwK 42 APCR shot. However, its performance was heavily degraded by sloped armor such as the Panther's glacis. Because of tungsten shortages, HVAP rounds were constantly in short supply. Priority was given to U.S. tank destroyer units and over half of the 18,000 projectiles received were not compatible with the 76 mm gun M1, being fitted into the cartridge case of the M10 tank destroyer's 3-inch gun M7. Most Shermans carried only a few rounds at any one time, and some units never received any.
The British anticipated future developments in German armor and began development of a more powerful anti-tank gun even before its 57 mm predecessor entered service. Out of expediency and also driven by delays in their new tank designs, they mounted the powerful 76.2 mm Ordnance QF 17-pounder gun in a standard 75 mm M4 Sherman turret. This conversion became the Sherman Firefly. The U.S. M1 gun and the 17-pounder had nearly identical bore diameters, but the British piece used a more voluminous cartridge case containing a much bigger propellant charge. This allowed it to penetrate of unsloped RHA at and at using APCBC ammunition. The 17-pounder still could not penetrate the steeply sloped glacis plate of the Panther but it was expected to be able to pierce its gun mantlet at over ; moreover it was estimated it would defeat the Tiger I's frontal armor from . However, British Army tests conducted with two Fireflies against a Panther turret-sized target demonstrated relatively poor accuracy at long range; a hit probability of 25.4% at with APCBC, and only 7.4% with APDS. In late 1943, the British offered the 17-pounder to the U.S. Army for use in their M4 tanks. General Devers insisted on comparison tests between the 17-pounder and the U.S. 90 mm gun. The tests were finally conducted from 25 March to 23 May 1944; they seemed to show the 90 mm gun was equal to or better than the 17-pounder. By then, production of the 76 mm-armed M4 and the 90 mm-armed M36 were both underway and U.S. Army interest in the 17-pounder waned. Late in 1944, the British began to produce tungsten sabot rounds for the 17-pounder, which could readily breach the armor of even the Tiger II; these were not as accurate as standard rounds and not generally available.
After the heavy tank losses of the Battle of the Bulge, in January 1945, General Eisenhower asked that no more 75 mm M4s be sent to Europe: only 76 mm M4s were wanted. Interest in mounting the British 17-pounder in U.S. Shermans flared anew. In February 1945, the U.S. Army began sending 75 mm M4s to England for conversion to the 17-pounder. Approximately 100 conversions were completed by the beginning of May. By then, the end of the war in Europe was clearly in sight, and the U.S. Army decided that adding a new ammunition caliber to the supply system was not worth the logistical difficulties. None of the converted 17-pounder M4s was deployed in combat by the U.S., and it is unclear what happened to most of them, although some were given to the British as part of Lend-Lease post-war.
Tank destroyer doctrine
General Lesley J. McNair was head of the Army Ground Forces from 1942 to 1944. McNair, a former artilleryman, advocated for the role of the tank destroyer (TD) within the U.S. Army. In McNair's opinion, tanks were to exploit breakthroughs and support infantry, while masses of attacking hostile tanks were to be engaged by tank destroyer units, which were composed of a mix of self-propelled and towed anti-tank guns. Self-propelled tank destroyers, called "gun motor carriages" (as were any U.S. Army self-propelled armored vehicles mounting an artillery piece of heavy caliber), were similar to tanks but were lightly armored with open-topped turrets. The tank destroyers were supposed to be faster and carry a more powerful anti-tank gun than tanks (although in reality tanks often received more powerful guns before tank destroyers did) and armor was sacrificed for speed. Armored Force and Tank Destroyer Force doctrine were developed separately, and it was not against Armored Force doctrine for friendly tanks to engage hostile tanks that appeared while attacking or defending; tank destroyers were to engage numbers of enemy tanks that broke through friendly lines.
McNair approved the 76 mm upgrade to the M4 Sherman and production of the 90 mm gun-armed M36 tank destroyer, but he at first staunchly opposed mass production of the T20 medium tank series and its descendants, the T25 and T26 (which would eventually become the M26 Pershing) during the crucial period of 1943 because they did not meet the two criteria of the Army Ground Forces for accepting new equipment: they were not "battle worthy," and he saw no "battle need" for them. In fall 1943, Lieutenant General Devers, commander of U.S. forces in the European Theater of Operations (ETO), asked for 250 T26 tanks for use in the invasion of France; McNair refused, citing his belief that the M4 was adequate. Devers appealed all the way to the War Department, and Major General Russell L. Maxwell, the Assistant Chief of Staff G-4 of the War Department General Staff, ordered the 250 tanks built in December 1943. McNair finally relented in his opposition, but still opposed mass production; his Army Ground Forces even asked for the tanks to be "down-gunned" from 90 mm to 75 or 76 mm in April 1944, believing the 76 mm gun was capable of performing satisfactorily. General George C. Marshall then summarily ordered the tanks to be provided to the ETO as soon as possible. Soon after the Normandy invasion in June 1944, General Dwight D. Eisenhower urgently requested heavy tanks, but McNair's continued opposition to mass production due to persistent serious mechanical problems with the vehicles delayed their procurement. That same month, the War Department reversed course and completely overruled the Army Ground Forces when making their tank production plan for 1945. A total of 7,800 tanks were to be built, of which 2,060 were to be T26s armed with 90 mm guns, 2,728 were to be T26s armed with 105 mm howitzers and 3,000 were to be M4A3 Sherman tanks armed with 105 mm howitzers. As a part of the plan, the British requested 750 90 mm-armed T26s and 200 105 mm-armed T26s. General McNair was killed in a botched air support mission in July 1944, and the path to production for the T26 tank became somewhat clearer. General Marshall intervened again and the tanks were eventually brought into full production. However, only a few T26 tanks (by then designated M26) saw combat beginning in February 1945, too late to have any effect on the battlefield.
Variants
The Sherman, like its M3 predecessor, was one of the first tanks to feature a gyroscopically stabilized gun and sight. The stabilization was only in the vertical plane; the mechanism could not slew the turret. The stabilizer was sufficient to keep the gun's elevation setting within 1/8th of a degree, or 2 mils, while crossing moderately rough terrain at . This gave a hit probability of 70% on enemy tanks at ranges of to . The utility of the stabilization is debatable, with some saying it was useful for its intended purpose, others that it was useful only for using the sights for stabilized viewing on the move. Some operators disabled the stabilizer.
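As a rough consistency check (not a figure from the source, only the standard U.S. convention of 6,400 mils to a full circle), the two tolerances quoted above agree:

$$\tfrac{1}{8}^{\circ} \times \frac{6400\ \text{mils}}{360^{\circ}} \approx 2.2\ \text{mils}$$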
The 75 mm gun also had an effective canister round that functioned as a large shotgun. In the close fighting of the Normandy bocage, the U.S. Army's 2nd Armored Division fitted Culin hedgerow cutters to its tanks and pushed three tanks abreast through a hedgerow. The flank tanks would clear the back of the hedgerow on their side with canister rounds while the center tank would engage and suppress known or suspected enemy positions on the next hedgerow. This approach permitted surprisingly fast progress through the very tough and well-defended hedgerows in Normandy. Over 500 sets of these were fitted to US armored vehicles, and many were fitted to various British tanks (where they were called "prongs").
The 75 mm gun had a white phosphorus shell originally intended for use as an artillery marker to help with targeting. M4 tank crews discovered that the shell could also be used against the Tiger and Panther—when the burning white phosphorus adhered to the German tanks, their excellent optics would be blinded and the acrid smoke would get sucked inside the vehicle, making it difficult or impossible for the crew to breathe. This, and the fear of fire starting or spreading inside the tank, would sometimes cause the crew to abandon the tank. There were several recorded instances where white phosphorus shells defeated German tanks in this fashion.
M4 Shermans armed with the 105 mm M4 howitzer were employed as a three-vehicle "assault gun" platoon under the tank battalion headquarters company along with another one in each medium tank company (a total of six tanks in the battalion) to provide close fire support and smoke. Armored infantry battalions were also eventually issued three 105 mm Shermans in the headquarters company. The 105 mm-armed variants were issued the M67 high-explosive anti-tank (HEAT) round; although very effective, the round's low muzzle velocity made hitting enemy armor difficult. The 105 mm Shermans were not equipped with a power-traversing turret, and this resulted in complaints from soldiers in the field. An upgrade was not available before the end of the war.
Secondary armament
The standard secondary armament comprised a coaxial .30 caliber M1919A5 Browning machine gun with 4,750 rounds of ammunition, a ball-mounted M1919A4 in the front hull operated by the assistant driver, and a pintle-mounted .50 caliber M2 Browning HB machine gun with 300 rounds on the turret roof for anti-aircraft protection. Early production models of the M4 and M4A1 also had a pair of fixed, forward-firing M1919 machine guns mounted in the front hull and operated by the driver; this arrangement was inherited from the M2 and M3 medium tanks and was a result of a World War I requirement to be able to sweep the ground in front of an advancing tank with unaimed fire.
Armor
Turret
The turret armor of the 75 mm and 105 mm armed M4 ranged from to thick. The turret front armor was 76.2 mm thick, angled at 30 degrees from the vertical, giving an effective thickness of . The opening in the front of the M4's turret for the main gun was covered by a rounded thick rotor shield. Early Shermans that had a periscopic sight for the main gun mounted in the turret roof possessed a small thick mantlet that only covered the hole where the main gun barrel protruded; the exposed barrel of the coaxial machine gun was vulnerable to bullet splash or shrapnel and a small armored cover was manufactured to protect it. When the Sherman was later fitted with a telescopic sight next to the main gun, a larger thick gun mantlet that covered the entire rotor shield including the sight and coaxial machine gun barrel was produced. 105 mm-armed Sherman tanks did not have a rotor shield, possessing only the mantlet to cover the opening in the turret front. The turret side armor was thick at a 5-degree angle from the vertical. The turret rear armor was thick and vertical, while the turret roof armor was thick and flat.
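The effective (line-of-sight) thicknesses quoted throughout this section follow from simple geometry: a plate of nominal thickness t inclined at an angle θ from the vertical presents t/cos θ of steel along a horizontal shot line. As an illustrative check, using only the turret-front figures given above (76.2 mm at 30 degrees from the vertical):

$$t_{\text{eff}} = \frac{t}{\cos\theta} = \frac{76.2\ \text{mm}}{\cos 30^{\circ}} \approx 88\ \text{mm}$$

The same relation explains why the 47-degree and 56-degree glacis slopes discussed below add substantially to the plate's nominal thickness against horizontal fire.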
Later models of the M4A1, M4A2 and M4A3 Sherman tanks were equipped with the T80 turret developed for the T23 tank and the new 76 mm gun. This turret's armor was thick on the sides and rear, angled from 0 to 13 degrees from the vertical. It had a thick roof, which sat at 0 to 45 degrees from the vertical. The front of the T23 turret, which, like the 105 mm-armed Sherman's turret, did not have a rotor shield, was protected by an unsloped thick cast gun mantlet. Combat experience indicated that the single hatch in the three-man 75 mm gun turret was inadequate for timely evacuation, so Ordnance added a loader's hatch beside the commander's beginning in late 1943. All 76 mm gun turrets had two roof hatches.
Hull
The Sherman's glacis plate was originally thick and angled at 56 degrees from the vertical, providing an effective thickness of . The M4, M4A1, early production M4A2, and early production M4A3 possessed protruding cast "hatchway" structures that allowed the driver and assistant driver's hatches to fit in front of the turret ring. In these areas, the effect of the glacis plate's slope was greatly reduced. Later Shermans had an upgraded glacis plate that was uniformly thick and sloped at 47 degrees from the vertical, providing an effective thickness of over the entire plate. The new design improved overall ballistic protection by eliminating the "hatchways", while also allowing for larger hatches for the driver and bow gunner. The cast hull M4A1 for the most part retained its previous glacis shape even after the larger hatches were introduced; the casting, irrespective of the larger hatches, sat 37 to 55 degrees from the vertical, with the large majority of the piece sitting closer to a 55-degree angle.
The transmission housing was rounded, made of three cast sections bolted together or cast as one piece. It ranged from thick. The upper and lower hull sides were thick and vertical, while the upper hull rear was also thick, either vertical or sloped at 10 degrees from the vertical. The lower hull rear, which protected the engine, was thick, sloped at 0 to 22 degrees from the vertical depending upon the variant. The hull roof was . The hull floor ranged from thick under the driver and assistant driver's positions to thick at the rear. The M4 had a hatch on the hull bottom to dispose of spent shell casings and to provide an emergency escape route. In the Pacific, Marines often used this Sherman feature in reverse to recover wounded infantry under fire.
Effectiveness
The armor of the Sherman proved ineffective on multiple occasions early in the war against Axis tanks armed with 7.5 cm and larger guns (such as later models of the Panzer IV) and against anti-tank weapon fire, and it was decided that the armor needed a compound angle to resist later German tank and anti-tank guns. The distinctive protruding "hatchways" of the early Sherman compromised the 56 degree-angled glacis plate, making them weak points where the effect of the glacis plate's slope was greatly reduced. In 1943, to make the thickness of these areas equal with the rest of the glacis plate, appliqué armor plates were fitted in front of them.
A Waffenamt-Prüfwesen 1 report estimated that, with the M4 angled 30 degrees sideways and engaged with APCBC rounds, the Tiger I's 8.8 cm KwK 36 L/56 gun would be capable of penetrating the differential case of an American M4 Sherman from and the turret front from , but that the Tiger's 88 mm gun would not penetrate the upper glacis plate at any range, and that the Panther, with its long-barreled 7.5 cm KwK 42 L/70, would have to close in to to achieve a penetration in the same situation. However, other German documents suggested that the glacis of a Sherman could be penetrated at a range of by the Tiger I. The Tiger I was estimated to be able to penetrate the Sherman in most other armor plates at a range of or above, far exceeding the ranges at which the tank itself was vulnerable to fire from the Sherman.
Although the later-model German medium and heavy tanks were greatly feared, the historian John Buckley opined that "The vast majority of German tanks encountered in Normandy were either inferior or merely equal to the Sherman" (Panzer IIIs or Panzer IVs).
Research on tank casualties in Normandy from 6 June to 10 July 1944 conducted by the British No. 2 Operational Research Section concluded that from a sample of 40 Sherman tanks, 33 tanks burned (82 percent) and 7 tanks remained unburned following an average of 1.89 penetrations. In comparison, from a sample of five Panzer IVs, four tanks burned (80 percent) and one tank remained unburned, following an average of 1.5 penetrations. The Panther tank burned 14 times (63 percent) from a sample of 22 tanks and following 3.24 penetrations, while the Tiger burned four times (80 percent) out of a sample of five tanks following 3.25 penetrations. John Buckley, using a case study of the British 8th and 29th Armoured Brigades, found that of their 166 Shermans knocked out in combat during the Normandy campaign, 94 (56.6 percent) burned out. Buckley also notes that an American survey concluded that 65% of tanks burned out after being penetrated. United States Army research proved that the major reason for this was the stowage of main gun ammunition in the vulnerable sponsons above the tracks. A U.S. Army study in 1945 concluded that only 10–15 percent of wet stowage Shermans burned when penetrated, compared to 60–80 percent of the older dry-stowage Shermans. As a burned tank was unrecoverable, it was prudent in combat to continue to fire at a tank until it caught fire.
At first, a partial remedy to ammunition fires in the M4 was found in 1943 by welding appliqué armor plates to the sponson sides over the ammunition stowage bins, though there was doubt that these had any effect. Later models moved ammunition stowage to the hull floor, with water jackets surrounding each storage bin. The practice, known as "wet stowage", reduced the chance of fire after a hit to about 15 percent. The Sherman allegedly gained the grim nickname "Tommy Cooker" (from the Germans, who referred to British soldiers as "Tommies"; a tommy cooker was a World War I-era trench stove), though no evidence for this appears to exist beyond post-war anecdote on the Allied side. Conversely, it was also allegedly called "Ronson" or "Zippo" due to the flamethrower version of the tank, and not because "it lights the first time, every time"; this nickname story has been almost conclusively proven to be a fabrication as the Ronson company did not begin using the slogan until the 1950s and the average soldier did not have a Ronson lighter. Fuel fires occasionally occurred, but such fires were far less common and less deadly than ammunition fires. In many cases, the fuel tank of the Sherman was found intact after a fire. Tankers described "fierce, blinding jets of flame", which is consistent with burning pressurized hydraulic fluid, but not gasoline-related fires.
Overview
Comparisons can be drawn between the T-34 and the U.S. M4 Sherman tank. Both tanks were the backbone of the armored units in their respective armies, both nations distributed these tanks to their allies, who also used them as the mainstay of their own armored formations, and both were upgraded extensively and fitted with more powerful guns. Both were designed for mobility and ease of manufacture and maintenance, sacrificing some performance for these goals. Both chassis were used as the foundation for a variety of support vehicles, such as armor recovery vehicles, tank destroyers, and self-propelled artillery. Both were an approximately even match for the standard German medium tank, the Panzer IV, though each of these three tanks had particular advantages and weaknesses compared with the other two. Neither the T-34 nor the M4 was a match for Germany's heavier tanks, the Panther (technically a medium tank) or the Tiger I; the Soviets used the IS-2 heavy tank and the U.S. used the M26 Pershing as the heavy tanks of their forces instead.
Upgrades
Upgrades included the rectangular armor patches protecting ammunition stowage mentioned above, and smaller armor patches in front of each of the protruding hatchway structures in the glacis in an attempt to mitigate their ballistic weakness. Field improvisations included placing sandbags, spare track links, concrete, wire mesh, or even wood for increased protection against shaped-charge rounds. While mounting sandbags around a tank had little effect against high-velocity anti-tank gunfire, it was thought to provide standoff protection against HEAT weapons, primarily the German Panzerfaust anti-tank grenade launcher and the 88 mm caliber Panzerschreck anti-tank rocket launcher. In the only study known to have been done to test the use of sandbags, on 9 March 1945, officers of the 1st Armored Group tested standard Panzerfaust 60s against sandbagged M4s; shots against the side blew away the sandbags and still penetrated the side armor, whereas shots fired at an angle against the front plate blew away some of the sandbags but failed to penetrate the armor. Earlier, in the summer of 1944, General Patton, informed by his ordnance officers that sandbags were useless, and that the machines' chassis suffered from the extra weight, had forbidden the use of sandbags. Following the clamor for better armor and firepower after the losses of the Battle of the Bulge, Patton ordered extra armor plates salvaged from knocked-out American and German tanks welded to the turrets and hulls of tanks of his command. Approximately 36 of these up-armored M4s were supplied to each of the three armored divisions of the Third Army in the spring of 1945.
M4A3E2
The M4A3E2 Sherman "Jumbo" assault tank variant, based upon a standard M4A3(75)W hull, had an additional plate welded to the glacis, giving a total thickness of , which resulted in a glacis of line-of-sight thickness, and over effective thickness. The sponson sides had thick plates welded on, to make them thick. The transmission cover was significantly thicker, and a new, more massive T23-style turret with of armor on the sides and rear and a thick flat roof. The gun mantlet had an additional of armor welded on giving a total thickness of 177.8 mm. It was originally to be armed with the 76 mm gun, but the 75 mm was preferred for infantry support and was used, although some were later upgraded to use the 76 mm. The higher weight required changing the transmission gear ratios to reduce maximum speed to 22 mph, and crews were warned not to let the suspension "bottom" too violently. 254 were built at the Fisher Tank Arsenal from May to July 1944, and arrived in Europe in the fall of 1944, being employed throughout the remainder of the fighting in various roles. They were considered "highly successful".
Mobility
In its initial specifications for a replacement for the M3 medium tank, the U.S. Army restricted the Sherman's height, width, and weight so that it could be transported via typical bridges, roads, railroads, and landing craft without special accommodation. Army Regulation 850-15 initially restricted the width of a tank to 103 inches (2.62 m) and its weight to 30 tons (27.2 t). This greatly aided the strategic, logistical, and tactical flexibility and mobility of all Allied armored forces using the Sherman.
A long-distance service trial conducted in Britain in 1943 compared diesel and gasoline Shermans to Cromwell tanks (Rolls-Royce Meteor engine) and Centaur (Liberty L-12). The British officer commanding the trial concluded:
The Sherman had good speed both on and off-road. Off-road performance varied. In the desert, the Sherman's rubber-block tracks performed well, and in the confined, hilly terrain of Italy, the smaller, nimbler Sherman could often cross terrain that some heavy German tanks could not. Albert Speer recounted in his autobiography Inside the Third Reich:
However, while this may have held compared with the first-generation German tanks, such as the Panzer III and Panzer IV, comparative testing with the second-generation wide-tracked German tanks (Panther and Tiger) conducted by the Germans at their Kummersdorf testing facility, as well as by the U.S. 2nd Armored Division, proved otherwise. The M4's initial tracks were 16.5 inches (42 cm) wide, producing a ground pressure of 14 pounds per square inch (97 kPa). U.S. crews found that on soft ground, the narrow tracks of the Sherman gave higher ground pressure, and thus poorer flotation, compared to the Panther and Tiger.
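Nominal ground pressure is simply the vehicle's weight divided by the track contact area. A minimal worked sketch, assuming an illustrative combat weight of roughly 66,800 lb and a ground-contact length of about 147 in per track (neither figure is from the source; both are assumptions chosen for the arithmetic):

$$p = \frac{W}{2\,w\,L} \approx \frac{66{,}800\ \text{lb}}{2 \times 16.5\ \text{in} \times 147\ \text{in}} \approx 13.8\ \text{psi}$$

which is consistent with the figure of about 14 psi quoted above; widening the tracks (larger w) lowers the pressure, which is the effect of the duckbill extensions and HVSS tracks discussed below.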
Because of their wider tracks and use of the characteristic Schachtellaufwerk interleaved and overlapped road wheels (as used on pre-war origin German half-track vehicles), the Panther and Tiger had greater mobility on soft ground because of their greater flotation (i.e., lower ground pressure). Lieutenant Colonel Wilson M. Hawkins of the 2nd Armored Division wrote the following comparing the U.S. M4 Sherman and the German Panther in a report to Allied headquarters:
This was backed up in an interview with Technical Sergeant Willard D. May of the 2nd Armored Division who commented: "I have taken instructions on the Mark V [Panther] and have found, first, it is easily as maneuverable as the Sherman; second the flotation exceeds that of the Sherman."
Staff Sergeant and tank platoon sergeant Charles A. Carden completes the comparison in his report:
The U.S. Army issued extended end connectors ("duckbills") to add width to the standard tracks as a stopgap solution. Duckbills began to reach front-line tank battalions in fall 1944 but were original factory equipment for the heavy M4A3E2 Jumbo to compensate for the extra weight of armor. The M4A3(76)W HVSS Shermans and other late models with wider-tracked suspensions corrected these problems but formed only a small proportion of the tanks in service even in 1945.
Reliability
M4A1
In September 1942 the British developed some potential improvements and tested the tanks.
The springs of the left front bogie broke, which was considered typical for this type of suspension. Oil accumulated on the floor of the engine compartment during driving, and the engine periodically stalled under high load due to an interrupted fuel supply. It was found that the engine had been built and installed incorrectly. Upon disassembly, carbon deposits were found on the working surfaces of the cylinders, which were badly worn after only 65 hours of operation. In the absence of a replacement by 10 October, the engine was put back in the tank; a revised fuel system was expected to remedy the stalling.
In November 1943, several M4A1 Shermans were tested at the American proving ground to evaluate British innovations. Thirty-seven experimental changes were made on one, 47 on the second, and 53 on the third. In total, 60 changes were developed and implemented for the Shermans, most of which were considered successful after a 600-mile (970 km) run and firing trials.
M4A2
In Africa, the engines of British M4A2s ran for 700–900 miles (1,130–1,450 km), or 180–200 hours. Inspecting and repairing the engine after 100 hours significantly extended its service life, but there was not enough time for such work, and crews believed there was little benefit in the procedure. The engine left much to be desired, as evidenced by attempts at modification in the Eighth Army, which did not affect the reliability of the tank. The Shermans also had other defects, including broken wiring, failing ignition coils, and broken clutch rods.
The improved return roller design performed much better than the one that early Sherman production had inherited from the M3. A February 1943 report described a unit with no broken bogie coil springs even after a 1,000-mile (1,600 km) march. The tracks, however, suffered: the rubber flaked off, and after a run of 600 miles (970 km) the tracks were unusable. Some units rode on tracks without the rubber pads, but the rubber tires of the rollers then wore down faster. The introduction of radially grooved tires helped to cope with overheating when driving fast in the desert, but de-lamination of the tires still led to cracks in the rollers after 300 miles (480 km).
The M4A2 performed very well in hot climates in general. The British sent as many of these as possible to the Mediterranean theater, retaining a minimum of vehicles for training in the UK. Complaints began to come in about carbon fouling of the injectors due to oil getting into the combustion chamber.
Other mechanical problems were rare, and those that did occur were most common in the left engine. Shermans suffered from wear to the rubber track pads, which was mitigated by changing to all-metal tracks and ventilated rollers. The tanks proved to be very reliable with proper operation. In June 1943, the average service life was estimated at 1,500 miles (2,400 km). The M4A2 was rated "very high", while the M4A1 was rated "high".
The Soviet 6th Guards Tank Army determined the lifespan of their M4A2 Shermans to be or 250–300 hours, comparable to the T-34.
M4A3
The Ford V8-engined M4A3s took part in the 1943 'survival' race. On average, the engines worked for 255 hours, though one failed after 87 hours of running. Three tanks were taken out of the test at 187, 247, and 295 operating hours for reasons unrelated to the engine. The report noted that even disqualified motors could be returned to service by replacing only one part; the rest were still in excellent condition. Of all the Ford engines, it turned out to be the most service-friendly. The M4A3 tanks covered a greater distance than other Shermans: ten vehicles covered a total of 20,346 miles (32,743 km), half on-road and half off-road, over 2,388 hours, an impressive achievement.
The M4A3 continued to lead in reliability through further testing. In tests in the winter and spring of 1944, one tank covered in 203 hours and 25 minutes. An M4 failed after only 15 hours and 10 minutes and was replaced by another. The M4A1 lasted 27 hours 15 minutes, and the M4A4 covered 1,343 miles (2,161 km) in 149 hours and 35 minutes.
Around the same time, another reliability test began, albeit on a smaller scale of 20 Shermans of various types, including four M4A3s. The time spent on repairs was carefully measured: on average, the M4A3 took 110 hours to service the engine, which was better than the M4A1 (132 hours) or M4A2 (143 hours), but more than double the average of 45 hours of maintenance on the Chrysler multibank by M4A4 crews. The M4A3 remained superior in transmission maintenance time: 112 hours versus 340 hours for the M4A4. In terms of suspension, the tanks turned out to be approximately equal. None of the tanks with Ford engines passed the entire route: they dropped out after 293, 302, 347, and 350 hours of running. Only three Chrysler engines and one General Motors diesel engine coped with the task.
Although M4A3s were not in service with other armies, some were supplied to the Allies for review. In early January 1943, a new M4A3 was provided to the British Fighting Vehicle Proving Establishment. By 16 January, it began trials. The engine failed after 495 miles (800 km). A new engine was delivered by the end of February. This gave more power and better performance and, despite multiple problems, the tank achieved 2,000 miles (3,220 km). The British considered the M4A3 a very reliable tank but far from perfect. An upgraded vehicle was tested in the spring of 1944; it covered over 3,000 miles (4,863 km), though several defects accumulated over the course of the run. The M4A3 was considered an outstanding vehicle for its reliability.
M4A4
In October 1942, five M3A4s and five M4A4s were tested in the California desert, a punishing trial for vehicles with an unsatisfactory cooling system. Constant breakdowns of auxiliary engine units put an end to the tank's combat career in the US Army. By the spring of 1943, the recommendations given by the Armored Council had been implemented, and 10 M4A4 tanks had been driven over a 4,000-mile (6,440 km) course. The average service life of the A57 engine reached 240 hours. The M4A4 took second place in reliability after the M4A3 with its Ford GAA engine (255 hours), ahead of the diesel M4A2 (225 hours) and the radial-engined M4A1 (218 hours). The M4A4 was the easiest to maintain.
Additional tests of four M4A4s from 8 October 1943 to 14 February 1944 showed even better results: one engine broke down after 339 hours, while the other three worked 400 hours with less than 10% power loss. Three of the four M4A4s were able to finish the Armored Council test and drive the full 4,000 miles (6,440 km).
Despite the positive outcomes of additional testing, oil and fuel consumption was still too high for the engine to be recommended for service in the American army. Production of the M4A4 was discontinued on 10 October 1943, and it was declared obsolete in 1945.
Engine
The diesel engines of the M4A2 were significantly superior to the R975 gasoline engine. The first M4A3 tank with a Ford GAA V8 gasoline engine, which surpassed the R975 in all respects, was assembled in May 1942, and even the M4A4 had a more reliable engine.
The R975 engine began to lose relevance once the vehicle was put into service. The R975 initially ran on high-octane aviation gasoline, but with the entry of the United States into the war it was necessary to change to a lower-grade fuel. To maintain performance, the maximum octane number of fuel for the new engine was limited to 80. In April 1942, an engine with a compression ratio of 5.7 was tested and considered acceptable. The nominal engine speed increased from 1,200 to 1,800 revolutions per minute. The new engine used a richer fuel mixture and had a larger combustion chamber.
Engines were compared in large-scale tests at the Aberdeen Proving Grounds in the winter of 1943–1944 with four examples each of M4A1, M4A2, M4A3, and M4A4. The endpoint was 4,000 miles or 400 hours run time. Faults with anything except the motor were repaired and testing resumed; only critical damage or loss of a third of its original power took the engine out of the competition.
During the tests, servicing took 132 hours for the R975 in the M4A1, 143 hours for the GM diesel in the M4A2, 110 hours for the Ford GAA in the M4A3, and 45 hours for the multibank in the M4A4. None of the R975 engines reached the 200-hour mark, failing on average after 166 hours. A great deal of time was spent servicing the air filters of the R975; over 23 days of testing, 446 man-hours went to cleaning and repairing them.
An M4 with the R975-C1 engine was tested a year later over a 5,000-mile (8,050 km) course, during which the engine had to be replaced three times. In addition, there were transmission and suspension problems. The filters performed poorly: it was noted that sand and dust severely damaged the engine and other components.
Work to improve the reliability of the R975 engine led to quite significant changes, resulting in the R975-C4. Engine power increased, and fuel consumption decreased by 10%. Torque rose from 1,800 N⋅m at 1,900 rpm to 2,040 N⋅m. Older engines were upgraded to the later model during major overhauls.
The new engine was approved for production on 17 June 1943, with 200 units ordered for the T70 gun motor carriage tank destroyer. In October 1943 the British demanded that it be provided for their Shermans. Tests on an M4A1 in February 1944 showed that, as well as increased power, oil consumption dropped by 35% and cylinder temperature by 50 °C.
Speed also increased: the M4A1 with the new engine covered 1.5 miles (2.4 km) of paved track in 4 minutes and 45 seconds, 47 seconds faster than the tank with the R975-C1 engine. The tests also showed increased reliability. The three new R975-C4s installed in M4A1s were withdrawn from testing after 177, 219, and 231 hours respectively, and an R975-C1 upgraded to the C4 standard worked 222 hours in an M4 tank. Compared to its predecessor, the service life of the engine had increased, albeit only slightly.
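As a quick check of the arithmetic (an illustrative sketch only, not from the test report), the quoted 1.5 miles in 4 minutes 45 seconds corresponds to an average speed of roughly 19 mph, or about 30 km/h:

```python
MILE_KM = 1.609344                   # 1 statute mile in kilometres (exact)

distance_miles = 1.5
time_hours = (4 * 60 + 45) / 3600.0  # 4 min 45 s expressed in hours

avg_mph = distance_miles / time_hours
print(round(avg_mph, 1))             # ~18.9 mph
print(round(avg_mph * MILE_KM, 1))   # ~30.5 km/h
```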
In 1943, the Americans conducted large-scale trials of all types of Shermans. In total, 40 tanks took part: ten each of the M4A1, M4A2, M4A3, and M4A4. The target was 400 hours or 4,000 miles before the engine failed; the rest of the tank's components could be repaired an unlimited number of times.
By 23 April 1943, ten M4A2s had covered a total of 16,215 miles (8,229 miles on-road and 7,986 miles off-road), operating for 1,825 hours. Fuel consumption of the M4A2 was lower than that of other Shermans: 1.1 mpg (214 liters per 100 km) on the highway, and 0.5 miles per gallon (470 liters per 100 km) off-road. On average, tanks consumed 0.81 quarts (0.76 liters) of oil per engine hour. The tests ended on 11 May. By that time, the M4A2 had covered 22,126 miles, running 2,424 hours. The average speed of the M4A2 was the fastest at . The M4A1 and M4A4 both made , while the M4A3 made .
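The fuel-economy figures above mix US customary and metric units; the short sketch below (illustrative only, using the exact definitions of the statute mile and the US gallon) shows how the miles-per-gallon figures convert to the quoted litres per 100 km:

```python
MILE_KM = 1.609344           # 1 statute mile in kilometres (exact)
US_GALLON_L = 3.785411784    # 1 US gallon in litres (exact)

def mpg_to_l_per_100km(mpg: float) -> float:
    """Convert fuel economy in miles per US gallon to consumption in L/100 km."""
    km_per_litre = mpg * MILE_KM / US_GALLON_L
    return 100.0 / km_per_litre

# Figures quoted for the M4A2 above:
print(round(mpg_to_l_per_100km(1.1)))   # -> 214 L/100 km on the highway
print(round(mpg_to_l_per_100km(0.5)))   # -> 470 L/100 km off-road
```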
In terms of reliability, the M4A2 was in third place. The first engine failed after 75 hours of operation. Two engines worked the full 400 hours; one was still in good condition, while the other was on its last legs. On average, the engines worked for 225 hours before internal components broke down. Only the R975 engines performed worse than the GM 6–71 (average service life of 218 hours); the Ford GAA (255 hours) and Chrysler A57 (240 hours) proved more reliable. In terms of time spent on maintenance, the M4A2 came in second.
The tanks continued to race for survival. At the end of 1943, 20 vehicles entered trials at once: four each of the M4A1, M4A2, M4A3, and M4A4, plus four of the new M4E1 with an experimental engine. The Shermans drove on three types of surface: fine loose sand, clayey stony ground, and highways. As in previous tests, during the run the repairmen could change any units, and only the breakdown of internal components and engine parts disqualified a tank.
By 27 December, all the M4A1s (average running time of 166 hours) and one M4A3 were out of order, but not a single tank with a diesel engine. By 18 February, the tests of the M4A2 had ended. Three tanks failed after 276, 278, and 353 hours respectively, while one covered 4,295 miles in 403 hours and was still on the move. One M4A3 also remained running, but with rather modest mileage, since it had been under repair for a long time. Of the four M4A4s, one tank broke down, and the M4E1 was removed from testing, as it had been decided that the RD1820 engine would not enter large-scale production anyway.
By 18 March, the tanks had finished testing. The M4A4 again turned out to be the most reliable: three of the four tanks reached the finish line. The M4A4 engine also took the least time to service: 45 hours per tank. The M4A2 was in second place, as the last M4A3 broke down without covering the required distance. However, maintenance of the GM 6–71 engine took 143 hours, more than for the M4A3 (110 hours) or M4A1 (132 hours). The M4A2 also did not shine in transmission servicing: it took 220 hours per tank (only the M4A4, at 340 hours, took more). In suspension servicing time the tank was on a par with the other Shermans: 205 hours. In total, 327 hours of running by the average diesel Sherman took 594.5 hours of mechanics' work.
US variants
Vehicles that used the M4 chassis or hull derived from M4:
M10, also known as 3-in gun motor carriage M10 – tank destroyer
90 mm gun motor carriage M36 also known as Jackson – tank destroyer
105 mm howitzer motor carriage M7B1 also known as Priest – self-propelled artillery
155 mm gun motor carriage M12 – self-propelled gun, paired in service with the cargo carrier M30 (also derived from the Sherman)
155 mm gun motor carriage M40 – 155 mm self-propelled artillery (armed with the Long Tom artillery piece). Other artillery vehicles that share the same chassis include: HMC M43, MMC T94, and cargo carrier T30
Flame Tank Sherman
M4A2 with bow mounted E4-5 flamethrower
POA-CWS-H1-H2 (US Army) M4A3R5 (USMC) "Mark 1" CWS in theater modifications
POA-CWS-H5 (US Army), M4A3R8 (USMC) with coaxial H1A-H5A flamethrower.
M4-2B1E9
Rocket Artillery Sherman – T34 Calliope, T40 Whizbang, and other Sherman rocket launchers
Engineer tanks – D-8, M1, and M1A1 dozers, M4 Doozit, Mobile Assault Bridge, and T1E3 Aunt Jemima mine roller and other mine-clearers
Armored recovery vehicle – M32 tank recovery vehicle and M74 tank recovery vehicle
Artillery tractors – M34 and M35 prime movers
Foreign variants and use
The Sherman was extensively supplied through Lend-Lease to Britain, the Soviet Union, China, and Free France. Britain received 17,181 in various models, mostly M4A2s and M4A4s (5,041 Sherman III and 7,167 Sherman V, respectively), of which over 2,000 were re-equipped with a more powerful gun to become the Sherman Firefly. The Soviet Union was shipped 4,065 M4s (M4A2s: 1,990 with the 75 mm and 2,073 with the 76 mm gun, plus 2 M4A4s), or by another count 4,102 (2,007 with the 75 mm and 2,095 with the 76 mm gun), of which 3,664 were taken into service. The Free French were the third largest recipient, being given 755 during 1943 and 1944. At least 57 (or 157) Shermans were also delivered to other U.S. allies.
A similar vehicle was developed in Canada from January 1941, known as the Ram tank. Like the Sherman, this was based on the M3 Lee's chassis and powertrain upgraded to have a turret, although it used a new turret of Canadian design. One improvement was the use of all-steel 'CDP' (Canadian Dry Pin) tracks, which although an inch narrower than the early M4 steel and rubber pad tracks, were cheaper to produce and gave better traction. Suspension units and roadwheels remained the M3 vertical volute pattern, with the idler above the mounting bracket, rather than the M4 development with the idler moved behind the mounting bracket to give more room for suspension travel. The Ram had a distinctive turret with a bolted flat-faced mantlet and the UK 6-pounder gun, with the hull machine gunner housed in a rotating turret based on the M3 'Lee' cupola, rather than the simpler ball-mount that was becoming universal for tank hull guns. Production facilities for the Ram were constructed at the Montreal Locomotive Works, with the aid of Alco, but the large armor castings for turret and hull were supplied by General Steel Castings in the US. Greater Sherman production and availability meant that the Ram was never used in action as a gun tank, being either used for training or converted to Kangaroo armored personnel carriers.
A later Canadian medium tank, produced in late 1943, was the Grizzly, an adaptation of the Sherman M4A1. It differed only in details, such as the CDP tracks, British radio equipment, and the British 2-inch smoke mortar in the turret roof. 188 were produced.
After World War II, Shermans were supplied to some NATO armies; Shermans were used by the U.S. and allied forces in the Korean War.
After World War II, quite a few Shermans also went to Israel. The Israeli Ordnance Corps, seeking an upgrade, up-gunned them with the 75 mm CN-75-50 L/61.5 from the French AMX-13/75 light tank and the 105 mm Modèle F1 from the French AMX-30 main battle tank, designated the M-50 and M-51 respectively. These Super Shermans, as they were often called, were remarkable examples of how a long-obsolete design can be upgraded for front-line use. They saw combat in the 1967 Six-Day War, fighting Soviet World War II-era armor such as the T-34-85, and again in the 1973 Yom Kippur War. The M-50 and M-51 Super Shermans were retired from Israeli service in 1980, replaced by the much more modern Merkava.
Paraguay retired three Shermans from the Regimiento Escolta Presidencial (REP, "Presidential Escort Regiment") in 2018, which marked the end of service of the final Sherman tanks in use anywhere in the world.
Former operators
: For testing purposes only.
: M4A3E4 Sherman was used.
80 M4, M4A1 Shermans received
: Obtained through Lend-Lease.
: M4A3E4 Sherman supplied by the US.
: 755
: From post-WWII.
: Inherited from the Netherlands following independence in 1949.
: From post-WWII; M4A3E8 Sherman supplied by the US.
: 20 M4A3E8 (Marine Corps, 1951), 388 M4A3E8 (Army, 1954). Retired (1971, replaced by M48 Patton).
: As Beutepanzer (captured vehicles).
: The Royal Netherlands Army received 44 Sherman tanks in January 1952.
: M4A3 (105)
: Received M4A1E6 Shermans from the US.
: Retired in April 2018.
: M4A3E4 Shermans used.
: 3,664.
: For testing purposes only.
: One turretless M4A1 Sherman.
: 34 delivered in January 1945.
: 17,181.
: Original operator, retired in 1957.
: 599 M4A3E4 Shermans received during the Informbiro period.
| Technology | Specific armor | null |
65593 | https://en.wikipedia.org/wiki/Ruler | Ruler | A ruler, sometimes called a rule, scale or a line gauge or metre/meter stick, is an instrument used to make length measurements, whereby a length is read from a series of markings called "rules" along an edge of the device. Usually, the instrument is rigid and the edge itself is a straightedge ("ruled straightedge"), which additionally allows one to draw straighter lines.
Variants
Rulers have long been made from different materials and in multiple sizes. Historically they were mainly wooden, but plastics have also been used; these can be created with length markings instead of being scribed. Metal is also used for more durable rulers for use in the workshop; sometimes a metal edge is embedded into a wooden desk ruler to preserve the edge when used for straight-line cutting. Rulers kept on a desk to help in drawing are typically in length, although some can go up to 100 cm. Shorter rulers are convenient for keeping in a pocket. Longer rulers, e.g., , are necessary in some cases. Rigid wooden or plastic yardsticks, 1 yard long, and meter sticks, 1 meter long, are also used. Classically, long measuring rods were used for larger projects, now superseded by the tape measure, the surveyor's wheel or laser rangefinders.
Use in geometry
In geometry, straight lines between points may be drawn using a straightedge (a ruler without any markings on it). Rulers are also used to draw accurate graphs and tables.
A ruler-and-compass construction is a construction that uses only a ruler and a compass. It is possible to bisect an angle into two equal parts with a ruler and compass. It can be proven, though, that it is impossible to divide an angle into three equal parts using only a compass and straightedge (the problem of angle trisection). However, if two marks are allowed on the ruler, the problem becomes solvable.
History
In the history of measurement many distance units have been used which were based on human body parts such as the cubit, hand and foot and these units varied in length by era and location. In the late 18th century the metric system came into use and has been adopted to varying degrees in almost all countries in the world.
The oldest preserved measuring rod is a copper-alloy bar that dates from 2650 BC and was found by the German Assyriologist Eckhard Unger while excavating at the Sumerian city of Nippur (present day Iraq).
Rulers made of ivory were in use by the Indus Valley civilization period prior to 1500 BC. Excavations at Lothal (2400 BC) have yielded one such ruler calibrated to about . Ian Whitelaw holds that the Mohenjo-Daro ruler is divided into units corresponding to and these are marked out in decimal subdivisions with amazing accuracy, to within . Ancient bricks found throughout the region have dimensions that correspond to these units.
Anton Ullrich invented the folding ruler in 1851. Frank Hunt later made the flexible ruler in 1902.
Curved and flexible rulers
The equivalent of a ruler for drawing or reproducing a smooth curve, where it takes the form of a rigid template, is known as a French curve. A flexible device that can be bent to the desired shape is known as a flat spline, or (in its more modern incarnation) a flexible curve. Historically, a flexible lead rule used by masons that could be bent to the curves of a molding was known as a lesbian rule.
Philosophy
Ludwig Wittgenstein famously used rulers as an example in his discussion of language games in the Philosophical Investigations (1953). He pointed out that the standard meter bar in Paris was the criterion against which all other rulers were determined to be one meter long. However, there was no analytical way to demonstrate that the standard meter bar itself was one meter long. It could only be asserted as one meter as part of a language game.
| Technology | Measuring instruments | null |
65637 | https://en.wikipedia.org/wiki/Metrology | Metrology | Metrology is the scientific study of measurement. It establishes a common understanding of units, crucial in linking human activities. Modern metrology has its roots in the French Revolution's political motivation to standardise units in France when a length standard taken from a natural source was proposed. This led to the creation of the decimal-based metric system in 1795, establishing a set of standards for other types of measurements. Several other countries adopted the metric system between 1795 and 1875; to ensure conformity between the countries, the Bureau International des Poids et Mesures (BIPM) was established by the Metre Convention. This has evolved into the International System of Units (SI) as a result of a resolution at the 11th General Conference on Weights and Measures (CGPM) in 1960.
Metrology is divided into three basic overlapping activities:
The definition of units of measurement
The realisation of these units of measurement in practice
Traceability—linking measurements made in practice to the reference standards
These overlapping activities are used in varying degrees by the three basic sub-fields of metrology:
Scientific or fundamental metrology, concerned with the establishment of units of measurement
Applied, technical or industrial metrology—the application of measurement to manufacturing and other processes in society
Legal metrology, covering the regulation and statutory requirements for measuring instruments and methods of measurement
In each country, a national measurement system (NMS) exists as a network of laboratories, calibration facilities and accreditation bodies which implement and maintain its metrology infrastructure. The NMS affects how measurements are made in a country and their recognition by the international community, which has a wide-ranging impact in its society (including economics, energy, environment, health, manufacturing, industry and consumer confidence). The effects of metrology on trade and economy are some of the easiest-observed societal impacts. To facilitate fair trade, there must be an agreed-upon system of measurement.
History
The ability to measure alone is insufficient; standardisation is crucial for measurements to be meaningful. The first record of a permanent standard was in 2900 BC, when the royal Egyptian cubit was carved from black granite. The cubit was decreed to be the length of the Pharaoh's forearm plus the width of his hand, and replica standards were given to builders. The success of a standardised length for the building of the pyramids is indicated by the lengths of their bases differing by no more than 0.05 percent.
In China, weights and measures had a semi-religious significance: they were used in the various crafts by artificers and in ritual utensils, and are mentioned in the Book of Rites along with the steelyard balance and other tools.
Other civilizations produced generally accepted measurement standards, with Roman and Greek architecture based on distinct systems of measurement. The collapse of the empires and the Dark Ages that followed lost much measurement knowledge and standardisation. Although local systems of measurement were common, comparability was difficult since many local systems were incompatible. England established the Assize of Measures to create standards for length measurements in 1196, and the 1215 Magna Carta included a section for the measurement of wine and beer.
Modern metrology has its roots in the French Revolution. With a political motivation to harmonise units throughout France, a length standard based on a natural source was proposed. In March 1791, the metre was defined. This led to the creation of the decimal-based metric system in 1795, establishing standards for other types of measurements. Several other countries adopted the metric system between 1795 and 1875; to ensure international conformity, the International Bureau of Weights and Measures (, or BIPM) was formed by the Metre Convention. Although the BIPM's original mission was to create international standards for units of measurement and relate them to national standards to ensure conformity, its scope has broadened to include electrical and photometric units and ionizing radiation measurement standards. The metric system was modernised in 1960 with the creation of the International System of Units (SI) as a result of a resolution at the 11th General Conference on Weights and Measures (, or CGPM).
Subfields
Metrology is defined by the International Bureau of Weights and Measures (BIPM) as "the science of measurement, embracing both experimental and theoretical determinations at any level of uncertainty in any field of science and technology". It establishes a common understanding of units, crucial to human activity. Metrology is a wide reaching field, but can be summarized through three basic activities: the definition of internationally accepted units of measurement, the realisation of these units of measurement in practice, and the application of chains of traceability (linking measurements to reference standards). These concepts apply in different degrees to metrology's three main fields: scientific metrology; applied, technical or industrial metrology, and legal metrology.
Scientific metrology
Scientific metrology is concerned with the establishment of units of measurement, the development of new measurement methods, the realisation of measurement standards, and the transfer of traceability from these standards to users in a society. This type of metrology is considered the top level of metrology which strives for the highest degree of accuracy. BIPM maintains a database of the metrological calibration and measurement capabilities of institutes around the world. These institutes, whose activities are peer-reviewed, provide the fundamental reference points for metrological traceability. In the area of measurement, BIPM has identified nine metrology areas, which are acoustics, electricity and magnetism, length, mass and related quantities, photometry and radiometry, ionizing radiation, time and frequency, thermometry, and chemistry.
As of May 2019 no physical objects define the base units. The motivation for changing the base units was to make the entire system derivable from physical constants, which required the removal of the prototype kilogram, the last artefact on which the unit definitions depended. Scientific metrology plays an important role in this redefinition of the units, as precise measurements of the physical constants are required to define the base units accurately. To redefine the value of a kilogram without an artefact, the value of the Planck constant had to be known to twenty parts per billion. Scientific metrology, through the development of the Kibble balance and the Avogadro project, produced a value of the Planck constant with low enough uncertainty to allow the redefinition of the kilogram.
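The Kibble balance mentioned above links mechanical and electrical power. A minimal sketch of the principle (not a description of any particular instrument, and with invented placeholder values): in the weighing phase the weight m·g is balanced by the force B·L·I on a current-carrying coil, and in the moving phase the same coil moving at velocity v generates a voltage U = B·L·v; eliminating the geometric factor B·L gives m = U·I / (g·v), tying mass to electrical quantities that are themselves realised in terms of the Planck constant.

```python
def kibble_mass(U: float, I: float, g: float, v: float) -> float:
    """Kibble-balance principle: m = U*I / (g*v).

    Weighing phase:  m*g = B*L*I
    Moving phase:    U   = B*L*v
    Dividing the two equations eliminates the geometric factor B*L.
    """
    return (U * I) / (g * v)

# Hypothetical placeholder values, chosen only so the result is 1 kg:
g = 9.80665      # local gravitational acceleration, m/s^2
v = 0.002        # coil velocity in the moving phase, m/s
U = 0.980665     # induced voltage, V
I = 0.02         # balancing current, A
print(kibble_mass(U, I, g, v))   # -> 1.0 (kg)
```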
Applied, technical or industrial metrology
Applied, technical or industrial metrology is concerned with the application of measurement to manufacturing and other processes and their use in society, ensuring the suitability of measurement instruments, their calibration and quality control. Producing good measurements is important in industry as it has an impact on the value and quality of the end product, and a 10–15% impact on production costs. Although the emphasis in this area of metrology is on the measurements themselves, traceability of the measuring-device calibration is necessary to ensure confidence in the measurement. Recognition of the metrological competence in industry can be achieved through mutual recognition agreements, accreditation, or peer review. Industrial metrology is important to a country's economic and industrial development, and the condition of a country's industrial-metrology program can indicate its economic status.
Legal metrology
Legal metrology "concerns activities which result from statutory requirements and concern measurement, units of measurement, measuring instruments and methods of measurement and which are performed by competent bodies". Such statutory requirements may arise from the need for protection of health, public safety, the environment, enabling taxation, protection of consumers and fair trade. The International Organization for Legal Metrology (OIML) was established to assist in harmonising regulations across national boundaries to ensure that legal requirements do not inhibit trade. This harmonisation ensures that certification of measuring devices in one country is compatible with another country's certification process, allowing the trade of the measuring devices and the products that rely on them. WELMEC was established in 1990 to promote cooperation in the field of legal metrology in the European Union and among European Free Trade Association (EFTA) member states. In the United States legal metrology is under the authority of the Office of Weights and Measures of National Institute of Standards and Technology (NIST), enforced by the individual states.
Concepts
Definition of units
The International System of Units (SI) defines seven base units: length, mass, time, electric current, thermodynamic temperature, amount of substance, and luminous intensity. By convention, each of these units are considered to be mutually independent and can be constructed directly from their defining constants. All other SI units are constructed as products of powers of the seven base units.
Since the base units are the reference points for all measurements taken in SI units, if the reference value changed all prior measurements would be incorrect. Before 2019, if a piece of the international prototype of the kilogram had been snapped off, it would have still been defined as a kilogram; all previous measured values of a kilogram would be heavier. The importance of reproducible SI units has led the BIPM to complete the task of defining all SI base units in terms of physical constants.
By defining SI base units with respect to physical constants, and not artefacts or specific substances, they are realisable with a higher level of precision and reproducibility. As of the revision of the SI on 20 May 2019 the kilogram, ampere, kelvin, and mole are defined by setting exact numerical values for the Planck constant (), the elementary electric charge (), the Boltzmann constant (), and the Avogadro constant (), respectively. The second, metre, and candela have previously been defined by physical constants (the caesium standard (ΔνCs), the speed of light (), and the luminous efficacy of visible light radiation (Kcd)), subject to correction to their present definitions. The new definitions aim to improve the SI without changing the size of any units, thus ensuring continuity with existing measurements.
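For reference, the exact numerical values fixed for the seven defining constants by the 2019 revision can be collected in a small lookup table. The sketch below simply records the published values (in SI coherent units) and is not an official data source:

```python
# Exact defining constants of the SI since the revision of 20 May 2019.
SI_DEFINING_CONSTANTS = {
    "delta_nu_Cs": (9_192_631_770, "Hz"),      # caesium-133 hyperfine transition frequency
    "c":           (299_792_458, "m/s"),       # speed of light in vacuum
    "h":           (6.626_070_15e-34, "J s"),  # Planck constant
    "e":           (1.602_176_634e-19, "C"),   # elementary charge
    "k":           (1.380_649e-23, "J/K"),     # Boltzmann constant
    "N_A":         (6.022_140_76e23, "1/mol"), # Avogadro constant
    "K_cd":        (683, "lm/W"),              # luminous efficacy of 540 THz radiation
}

for symbol, (value, unit) in SI_DEFINING_CONSTANTS.items():
    print(f"{symbol:>12} = {value} {unit}")
```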
Realisation of units
The realisation of a unit of measure is its conversion into reality. Three possible methods of realisation are defined by the international vocabulary of metrology (VIM): a physical realisation of the unit from its definition, a highly-reproducible measurement as a reproduction of the definition (such as the quantum Hall effect for the ohm), and the use of a material object as the measurement standard.
Standards
A standard (or etalon) is an object, system, or experiment with a defined relationship to a unit of measurement of a physical quantity. Standards are the fundamental reference for a system of weights and measures by realising, preserving, or reproducing a unit against which measuring devices can be compared. There are three levels of standards in the hierarchy of metrology: primary, secondary, and working standards. Primary standards (the highest quality) do not reference any other standards. Secondary standards are calibrated with reference to a primary standard. Working standards, used to calibrate (or check) measuring instruments or other material measures, are calibrated with respect to secondary standards. The hierarchy preserves the quality of the higher standards. An example of a standard would be gauge blocks for length. A gauge block is a block of metal or ceramic with two opposing faces ground precisely flat and parallel, a precise distance apart. The length of the path of light in vacuum during a time interval of 1/299,792,458 of a second is embodied in an artefact standard such as a gauge block; this gauge block is then a primary standard which can be used to calibrate secondary standards through mechanical comparators.
Traceability and calibration
Metrological traceability is defined as the "property of a measurement result whereby the result can be related to a reference through a documented unbroken chain of calibrations, each contributing to the measurement uncertainty". It permits the comparison of measurements, whether the result is compared to the previous result in the same laboratory, a measurement result a year ago, or to the result of a measurement performed anywhere else in the world. The chain of traceability allows any measurement to be referenced to higher levels of measurements back to the original definition of the unit.
Traceability is obtained directly through calibration, establishing the relationship between an indication on a traceable measuring instrument and the value of the comparator (or comparative measuring instrument). The process determines the measurement value and uncertainty of the device being calibrated (the comparator) and creates a traceability link to the measurement standard. The four primary reasons for calibration are to provide traceability, to ensure that the instrument (or standard) is consistent with other measurements, to determine accuracy, and to establish reliability. Traceability works as a pyramid: at the top are the international standards; at the next level, national metrology institutes hold primary standards traceable to the international standards; these national standards are used to establish traceable links to local laboratory standards, which in turn are used to calibrate the standards of industry and testing laboratories. Through these successive calibrations between national metrology institutes, calibration laboratories, and industry and testing laboratories, the realisation of the unit definition is propagated down the pyramid. The traceability chain works upwards from the bottom, so that a measurement made by an industry or testing laboratory can be related directly to the unit definition at the top.
Uncertainty
Measurement uncertainty is a value associated with a measurement which expresses the spread of possible values associated with the measurand—a quantitative expression of the doubt existing in the measurement. There are two components to the uncertainty of a measurement: the width of the uncertainty interval and the confidence level. The uncertainty interval is a range of values within which the measurement value is expected to fall, while the confidence level is how likely the true value is to fall within the uncertainty interval. Uncertainty is generally expressed as follows:
y ± U (coverage factor: k = 2)
Where y is the measurement value, U is the uncertainty value, and k is the coverage factor, which indicates the confidence level. The upper and lower limits of the uncertainty interval are determined by adding and subtracting the uncertainty value from the measurement value. A coverage factor of k = 2 generally indicates 95% confidence that the measured value falls inside the uncertainty interval. Other values of k can be used to indicate greater or lower confidence in the interval; for example, k = 1 and k = 3 generally indicate about 68% and 99.7% confidence respectively. The uncertainty value is determined through a combination of statistical analysis of the calibration and uncertainty contributions from other errors in the measurement process, which can be evaluated from sources such as the instrument's history, the manufacturer's specifications, or published information.
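A minimal sketch of the expression above, assuming the conventional interpretation of the coverage factor for a normal distribution; the gauge-block numbers are hypothetical, chosen only for illustration:

```python
def expanded_uncertainty(y: float, u: float, k: float = 2.0):
    """Report a measurement as (value, expanded uncertainty, interval).

    y : measured value
    u : combined standard uncertainty
    k : coverage factor (k = 2 corresponds to ~95% confidence for a normal distribution)
    """
    U = k * u
    return y, U, (y - U, y + U)

# Hypothetical example: a gauge block measured as 25.0003 mm with u = 0.0001 mm
value, U, interval = expanded_uncertainty(25.0003, 0.0001, k=2.0)
print(f"{value} mm +/- {U} mm (k=2), interval {interval}")
```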
International infrastructure
Several international organizations maintain and standardise metrology.
Metre Convention
The Metre Convention created three main international organizations to facilitate standardisation of weights and measures. The first, the General Conference on Weights and Measures (CGPM), provided a forum for representatives of member states. The second, the International Committee for Weights and Measures (CIPM), was an advisory committee of metrologists of high standing. The third, the International Bureau of Weights and Measures (BIPM), provided secretarial and laboratory facilities for the CGPM and CIPM.
General Conference on Weights and Measures
The General Conference on Weights and Measures (, or CGPM) is the convention's principal decision-making body, consisting of delegates from member states and non-voting observers from associate states. The conference usually meets every four to six years to receive and discuss a CIPM report and endorse new developments in the SI as advised by the CIPM. The last meeting was held on 13–16 November 2018. On the last day of this conference there was a vote on the redefinition of four base units, which the International Committee for Weights and Measures (CIPM) had proposed earlier that year. The new definitions came into force on 20 May 2019.
International Committee for Weights and Measures
The International Committee for Weights and Measures (, or CIPM) is made up of eighteen (originally fourteen) individuals from a member state of high scientific standing, nominated by the CGPM to advise the CGPM on administrative and technical matters. It is responsible for ten consultative committees (CCs), each of which investigates a different aspect of metrology; one CC discusses the measurement of temperature, another the measurement of mass, and so forth. The CIPM meets annually in Sèvres to discuss reports from the CCs, to submit an annual report to the governments of member states concerning the administration and finances of the BIPM and to advise the CGPM on technical matters as needed. Each member of the CIPM is from a different member state, with France (in recognition of its role in establishing the convention) always having one seat.
International Bureau of Weights and Measures
The International Bureau of Weights and Measures (, or BIPM) is an organisation based in Sèvres, France which has custody of the international prototype of the kilogram, provides metrology services for the CGPM and CIPM, houses the secretariat for the organisations and hosts their meetings. Over the years, prototypes of the metre and of the kilogram have been returned to BIPM headquarters for recalibration. The BIPM director is an ex officio member of the CIPM and a member of all consultative committees.
International Organization of Legal Metrology
The International Organization of Legal Metrology (, or OIML), is an intergovernmental organization created in 1955 to promote the global harmonisation of the legal metrology procedures facilitating international trade. This harmonisation of technical requirements, test procedures and test-report formats ensure confidence in measurements for trade and reduces the costs of discrepancies and measurement duplication. The OIML publishes a number of international reports in four categories:
Recommendations: Model regulations to establish metrological characteristics and conformity of measuring instruments
Informative documents: To harmonise legal metrology
Guidelines for the application of legal metrology
Basic publications: Definitions of the operating rules of the OIML structure and system
Although the OIML has no legal authority to impose its recommendations and guidelines on its member countries, it provides a standardised legal framework for those countries to assist the development of appropriate, harmonised legislation for certification and calibration. The OIML provides a mutual acceptance arrangement (MAA) for measuring instruments that are subject to legal metrological control, which upon approval allows the evaluation and test reports of the instrument to be accepted in all participating countries. Issuing participants in the agreement issue MAA Type Evaluation Reports or MAA Certificates upon demonstration of compliance with ISO/IEC 17065, with a peer evaluation system to determine competency. This ensures that certification of measuring devices in one country is compatible with the certification process in other participating countries, allowing the trade of the measuring devices and the products that rely on them.
International Laboratory Accreditation Cooperation
The International Laboratory Accreditation Cooperation (ILAC) is an international organisation for accreditation agencies involved in the certification of conformity-assessment bodies. It standardises accreditation practices and procedures, recognising competent calibration facilities and assisting countries in developing their own accreditation bodies. ILAC originally began as a conference in 1977 to develop international cooperation on accredited testing and calibration results in order to facilitate trade. In 2000, 36 members signed the ILAC mutual recognition agreement (MRA), allowing members' work to be automatically accepted by other signatories; in 2012 the arrangement was expanded to include the accreditation of inspection bodies. Through this standardisation, work done in laboratories accredited by signatories is automatically recognised internationally through the MRA. Other work done by ILAC includes promotion of laboratory and inspection body accreditation, and supporting the development of accreditation systems in developing economies.
Joint Committee for Guides in Metrology
The Joint Committee for Guides in Metrology (JCGM) is a committee which created and maintains two metrology guides: Guide to the expression of uncertainty in measurement (GUM) and International vocabulary of metrology – basic and general concepts and associated terms (VIM). The JCGM is a collaboration of eight partner organisations:
International Bureau of Weights and Measures (BIPM)
International Electrotechnical Commission (IEC)
International Federation of Clinical Chemistry and Laboratory Medicine (IFCC)
International Organization for Standardization (ISO)
International Union of Pure and Applied Chemistry (IUPAC)
International Union of Pure and Applied Physics (IUPAP)
International Organization of Legal Metrology (OIML)
International Laboratory Accreditation Cooperation (ILAC)
The JCGM has two working groups: JCGM-WG1 and JCGM-WG2. JCGM-WG1 is responsible for the GUM, and JCGM-WG2 for the VIM. Each member organization appoints one representative and up to two experts to attend each meeting, and may appoint up to three experts for each working group.
National infrastructure
A national measurement system (NMS) is a network of laboratories, calibration facilities and accreditation bodies which implement and maintain a country's measurement infrastructure. The NMS sets measurement standards, ensuring the accuracy, consistency, comparability, and reliability of measurements made in the country. The measurements of member countries of the CIPM Mutual Recognition Arrangement (CIPM MRA), an agreement of national metrology institutes, are recognized by other member countries. As of March 2018, there are 102 signatories of the CIPM MRA, consisting of 58 member states, 40 associate states, and 4 international organizations.
Metrology institutes
A national metrology institute's (NMI) role in a country's measurement system is to conduct scientific metrology, realise base units, and maintain primary national standards. An NMI provides traceability to international standards for a country, anchoring its national calibration hierarchy. For a national measurement system to be recognized internationally by the CIPM Mutual Recognition Arrangement, an NMI must participate in international comparisons of its measurement capabilities. BIPM maintains a comparison database and a list of calibration and measurement capabilities (CMCs) of the countries participating in the CIPM MRA. Not all countries have a centralised metrology institute; some have a lead NMI and several decentralised institutes specialising in specific national standards. Some examples of NMIs are the National Institute of Standards and Technology (NIST) in the United States, the National Research Council (NRC) in Canada, the Physikalisch-Technische Bundesanstalt (PTB) in Germany, and the National Physical Laboratory (NPL) in the United Kingdom.
Calibration laboratories
Calibration laboratories are generally responsible for calibrating industrial instrumentation. Because they are accredited, the calibration services they provide to industrial firms give those companies a traceability link back to the national metrology institute and national metrology standards.
Accreditation bodies
An organisation is accredited when an authoritative body determines, by assessing the organisation's personnel and management systems, that it is competent to provide its services. For international recognition, a country's accreditation body must comply with international requirements and is generally the product of international and regional cooperation. A laboratory is evaluated according to international standards such as ISO/IEC 17025 general requirements for the competence of testing and calibration laboratories. To ensure objective and technically-credible accreditation, the bodies are independent of other national measurement system institutions. The National Association of Testing Authorities in Australia and the United Kingdom Accreditation Service are examples of accreditation bodies.
Impacts
Metrology has wide-ranging impacts on a number of sectors, including economics, energy, the environment, health, manufacturing, industry, and consumer confidence. The effects of metrology on trade and the economy are two of its most-apparent societal impacts. To facilitate fair and accurate trade between countries, there must be an agreed-upon system of measurement. Accurate measurement and regulation of water, fuel, food, and electricity are critical for consumer protection and promote the flow of goods and services between trading partners. A common measurement system and quality standards benefit consumer and producer; production at a common standard reduces cost and consumer risk, ensuring that the product meets consumer needs. Transaction costs are reduced through an increased economy of scale. Several studies have indicated that increased standardisation in measurement has a positive impact on GDP. In the United Kingdom, an estimated 28.4 per cent of GDP growth from 1921 to 2013 was the result of standardisation; in Canada between 1981 and 2004 an estimated nine per cent of GDP growth was standardisation-related, and in Germany the annual economic benefit of standardisation is an estimated 0.72% of GDP.
Legal metrology has reduced accidental deaths and injuries by improving the efficiency and reliability of measuring devices such as radar guns and breathalyzers. Measuring the human body is challenging, with poor repeatability and reproducibility, and advances in metrology help develop new techniques to improve health care and reduce costs. Environmental policy is based on research data, and accurate measurements are important for assessing climate change and for environmental regulation. Aside from regulation, metrology is essential in supporting innovation: the ability to measure provides a technical infrastructure and tools that can then be used to pursue further innovation. By providing a technical platform on which new ideas can be built, easily demonstrated, and shared, measurement standards allow new ideas to be explored and expanded upon.
| Physical sciences | Measurement: General | null |
65672 | https://en.wikipedia.org/wiki/Spinning%20wheel | Spinning wheel | A spinning wheel is a device for spinning thread or yarn from fibres. It was fundamental to the cotton textile industry prior to the Industrial Revolution. It laid the foundations for later machinery such as the spinning jenny and spinning frame, which displaced the spinning wheel during the Industrial Revolution.
Function
The basic spinning of yarn involves taking a clump of fibres and teasing a bit of them out, then twisting it into a basic string shape. The spinner continues pulling and twisting the yarn in this manner to make it longer and longer while also controlling its thickness. Thousands of years ago, people began doing this by hand onto a stick called a spindle, a very lengthy process.
The actual wheel part of a spinning wheel does not take the place of the spindle; instead, it automates the twisting process, allowing one to "twist" the thread without having to constantly do so manually, and also the size of the wheel lets one more finely control the amount of twist. The thread still ends up on a spindle, just as it did before the invention of the wheel.
The wheel itself was originally free-moving, spun by a hand or foot reaching out and turning it directly. Eventually, simple mechanisms were created that let a person simply push at a pedal and keep the wheel turning at an even more constant rate. This mechanism was the main source of technological progress for the spinning wheel before the 18th century.
History
The history of the spinning wheel is disputed, with:
J. M. Kenoyer, involved in the study of the Indus Valley Civilization, speculating that the uniformity of the thread and the tight weave seen in a clay impression indicate the use of a spinning wheel rather than drop spindles; according to Mukhtar Ahmed, however, the spinning whorls used since prehistoric times by the Indus Valley people produce a tight weave.
Dieter Kuhn and Weiji Cheng propose that the spinning wheel originated in Zhou dynasty China in the first millennium BCE; spinning wheels are mentioned in Chinese dictionaries of the 2nd century CE and were in widespread use by , with the earliest clear Chinese illustration of the machine dated to around 1270.
C. Wayne Smith and J. Tom Cothren, propose the spinning wheel was invented in India as early as 500–1000 AD.
Arnold Pacey and Irfan Habib propose the spinning wheel was most likely invented in the Middle East by the early 11th century. There is evidence pointing to the spinning wheel being known in the Middle East by 1030, and the earliest clear illustration of the spinning wheel is from Baghdad, drawn in 1237. Pacey and Habib suggest that early references to cotton spinning in India are vague and do not clearly identify a wheel, but more likely refer to hand spinning; they offer as the earliest unambiguous reference to a spinning wheel in India a passage, dated 1350, in Abdul Malik Isami's work Futuh-us-Salatin, which proposes that a woman's place is at her charkha.
K. A. Nilakanta Sastri and Vijaya Ramaswamy suggest there is clear reference to the use of a spinning wheel (with a description of its parts) by the 12th century in India, by Kannada poet, Remmavve.
The spinning wheel spread from the Middle-East to Europe by the 13th century, with the earliest European illustration dated to around 1280. In France, the spindle and distaff were not displaced until the mid 18th century.
The spinning wheel replaced the earlier method of hand spinning with a spindle. The first stage in mechanizing the process was mounting the spindle horizontally so it could be rotated by a cord encircling a large, hand-driven wheel. The great wheel is an example of this type, where the fibre is held in the left hand and the wheel slowly turned with the right. Holding the fibre at a slight angle to the spindle produced the necessary twist. The spun yarn was then wound onto the spindle by moving it so as to form a right angle with the spindle. This type of wheel, while known in Europe by the 14th century, was not in general use until later. The construction of the Great Wheel made it very good at creating long drawn soft fuzzy wools, but very difficult to create the strong smooth yarns needed to create warp for weaving. Spinning wheels ultimately did not develop the capability to spin a variety of yarns until the beginning of the 19th century and the mechanization of spinning.
In general, the spinning technology was known for a long time before being adopted by the majority of people, thus making it hard to fix dates of the improvements. In 1533, a citizen of Brunswick is said to have added a treadle, by which the spinner could rotate her spindle with one foot and have both hands free to spin. Leonardo da Vinci drew a picture of the flyer, which twists the yarn before winding it onto the spindle. During the 16th century a treadle wheel with flyer was in common use, and gained such names as the Saxony wheel and the flax wheel. It sped up production, as one need not stop spinning to wind up the yarn.
According to Mark Elvin, 14th-century Chinese technical manuals describe an automatic water-powered spinning wheel. Comparable devices were not developed in Europe until the 18th century. However, it fell into disuse when fibre production shifted from hemp to cotton. It was forgotten by the 17th century. The decline of the automatic spinning wheel in China is an important part of Elvin's high level equilibrium trap theory to explain why there was no indigenous Industrial Revolution in China despite its high levels of wealth and scientific knowledge.
On the eve of the Industrial Revolution it took at least five spinners to supply one weaver. Lewis Paul and John Wyatt first worked on the problem in 1738, patenting the Roller Spinning machine and the flyer-and-bobbin system for drawing wool to a more even thickness. Using two sets of rollers that travelled at different speeds, yarn could be twisted and spun quickly and efficiently. However, they did not have much financial success. In 1771, Richard Arkwright used waterwheels to power spinning frames for the production of cotton yarn, his invention becoming known as the water frame.
More modern spinning machines use a mechanical means to rotate the spindle, as well as an automatic method to draw out fibres, and devices to work many spindles together at speeds previously unattainable. Newer technologies that offer even faster yarn production include friction spinning, an open-end system, and air jets.
Types
Numerous types of spinning wheels exist, including:
the great wheel also known as walking wheel or wool wheel for rapid long draw spinning of woolen-spun yarns;
the flax wheel, which is a double-drive wheel used with a distaff for spinning flax fibres for making linen;
saxony and upright wheels, all-purpose treadle driven wheels used to spin both woolen and worsted-spun yarns; and
the charkha, native to Asia.
Spinning yarn on any spinning wheel requires prepared fibre; excepting silk, which can be spun directly from unwound cocoons, fibres must be prepared for spinning, usually by combing or carding. At the very least, foreign matter (dirt, plant stalks, or animal manure) must be removed before spinning. Most handspinners spin from 'a fluffy mass of aligned fibers' to more easily produce a consistent yarn.
Charkha
The tabletop or floor charkha is one of the oldest known forms of the spinning wheel. The charkha works similarly to the great wheel, with a drive wheel being turned by one hand, while the yarn is spun off the tip of the spindle with the other. The floor charkha and the great wheel closely resemble each other. With both, yarn must be held off the end of the spindle in order to twist it, and off-axis to wind the yarn onto the spindle.
The word charkha is linked with the Persian word for wheel and is related to the Sanskrit word for "circle". The charkha was both a tool and a symbol of the Indian independence movement. The charkha, a small, portable, hand-cranked wheel, is ideal for spinning cotton and other fine, short-staple fibres, though it can be used to spin other fibres as well. The size varies, from that of a hardbound novel to the size of a briefcase, up to a floor charkha. Leaders of India's independence struggle brought the charkha into wider use with their teachings. They hoped the charkha would help the people of India achieve self-sufficiency and independence, and therefore used it as a symbol of the Indian independence movement and included it on earlier versions of the Flag of India.
Great wheel
The great wheel was one of the earlier types of spinning wheel. The fibre is held in the left hand and the wheel slowly turned with the right. Yarn is spun on a great wheel with the long-draw spinning technique, which requires only one active hand most of the time, thus freeing a hand to turn the wheel. The great wheel is usually used to spin short-staple fibres (this includes both cotton and wool), and can only be used with fibre preparations that are suited to long-draw spinning.
The great wheel is usually over in height. The large drive wheel turns the much smaller spindle assembly, with the spindle revolving many times for each turn of the drive wheel. The yarn is spun at an angle off the tip of the spindle, and is then stored on the spindle. To begin spinning on a great wheel, first a leader (a length of waste yarn) is tied onto the base of the spindle and spiraled up to the tip. Then the spinner overlaps a handful of fibre with the leader, holding both gently together with the left hand, and begins to slowly turn the drive wheel clockwise with the right hand, while simultaneously walking backward and drawing the fibre in the left hand away from the spindle at an angle. The left hand must control the tension on the wool to produce an even result. Once a sufficient amount of yarn has been made, the spinner turns the wheel backward a short distance to unwind the spiral on the spindle, then turns it clockwise again, and winds the newly made yarn onto the spindle, finishing the wind-on by spiralling back out to the tip again to make another draw.
Treadle wheel
This type of wheel is powered by the spinner's foot rather than their hand or a motor. The spinner sits and pumps a foot treadle that turns the drive wheel via a crankshaft and a connecting rod. This leaves both hands free for drafting the fibres, which is necessary in the short draw spinning technique, which is often used on this type of wheel. The old-fashioned pointed driven spindle is not a common feature of the treadle wheel. Instead, most modern wheels employ a flyer-and-bobbin system which twists the yarn and winds it onto a spool simultaneously. These wheels can be single- or double-treadle; which is a matter of personal preference and ergonomics and does not materially affect the operation of the wheel.
Double drive
The double drive wheel is named after its drive band, which goes around the spinning wheel twice. The drive band turns the flyer, which is the horse-shoe shaped piece of wood surrounding the bobbin, as well as the bobbin. Due to a difference in the size of the whorls (the round pieces or pulleys around which the drive band runs) the bobbin whorl, which has a smaller radius than the flyer whorl, turns slightly faster. Thus both the flyer and bobbin rotate to twist the yarn, and the difference in speed winds the yarn onto the bobbin when the spinner lets up tension on the newly spun yarn, and spins the bobbin and flyer together to add spin to the yarn when the spinner keeps the new yarn under tension (in this case, the drive band will slip slightly in the groove in the bobbin, flyer whorl, or both). Generally the speed difference or "ratio" is adjusted by the size of the whorls and the tension of the drive band.
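As an illustration of how the whorl sizes set the ratio described above (the dimensions are invented, not taken from any particular wheel): each whorl turns at the drive-wheel speed multiplied by the ratio of the drive-wheel diameter to its own whorl diameter, so the smaller bobbin whorl spins slightly faster than the flyer, and that difference in revolutions is what winds the yarn onto the bobbin.

```python
def revolutions(drive_wheel_dia: float, whorl_dia: float, treadle_turns: float = 1.0) -> float:
    """Turns of a whorl per turn of the drive wheel (ignoring drive-band slippage)."""
    return treadle_turns * drive_wheel_dia / whorl_dia

# Hypothetical dimensions, in millimetres:
drive_wheel = 560.0
flyer_whorl = 56.0    # larger whorl -> turns more slowly
bobbin_whorl = 50.0   # smaller whorl -> turns faster

flyer_rev = revolutions(drive_wheel, flyer_whorl)     # 10.0 turns per drive-wheel turn
bobbin_rev = revolutions(drive_wheel, bobbin_whorl)   # 11.2 turns per drive-wheel turn
print(flyer_rev, bobbin_rev, bobbin_rev - flyer_rev)  # the difference winds yarn onto the bobbin
```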
The drive band on the double drive wheel is generally made from a non-stretch cotton or hemp yarn or twine.
Single drive
A single drive wheel set up in Scotch tension has one drive band connecting the drive wheel to the flyer. The spinning drive wheel turns the flyer and, via friction with the flyer shaft, the bobbin. A short tension band, or brake band, adds drag to the bobbin such that when the spinner loosens their tension on the newly spun yarn, the bobbin and flyer spin relative to each other and the yarn is wound onto the bobbin. A tighter tension band increases relative torque and 'pulls' the yarn onto the bobbin more forcefully; a looser tension band 'pulls' the yarn more gently. Generally, the tension band is tighter for spinning thicker yarn or yarn with less twist, and looser for spinning thinner yarn or yarn with more twist.
For a single drive wheel set up in Irish tension, or 'bobbin lead', the drive band drives the bobbin and the tension band brakes the flyer. Some wheels can be set up in either single drive configuration, others only one. Additionally some wheels can be set up either in double drive or single drive.
Upright style
When the spindle or flyer is located above the wheel, rather than off to one side, the wheel is called an upright wheel or castle wheel. This type of wheel is often more compact, thus easier to store and transport. Some upright wheels are even made to fold up small enough that they fit in carry-on luggage at the airport. An Irish castle wheel is a type of upright in which the flyer is located below the drive wheel.
Electric spinning wheel
An electric spinning wheel, or e-spinner, is powered by an electric motor rather than via a treadle. Some require mains power while others may be powered by a low-voltage source, such as a rechargeable battery. Most e-spinners are smaller, more portable, and quieter than treadle wheels. One of the attractions of an e-spinner is that it is not necessary to coordinate treadling with handling the fibre (drafting), so it can be easier to learn to spin on an e-spinner than on a traditional treadle-style spinning wheel. E-spinners are also suitable for spinners who have trouble treadling or keeping their treadling speed consistent.
Friction drive
This type of treadle-driven wheel does not use a drive band; instead the flyer is directly friction-driven via a rubber ring in contact at a right angle to the flat surface of a solid drive wheel. One example from New Zealand dates to 1918, and a very few other models using this drive method have been manufactured since 1970. These wheels are extremely compact and less fouled by outdoor dirt than drive-band wheels, but they are quite uncommon.
Importance
The spinning wheel increased the productivity of thread making by a factor of greater than 10. Medieval historian Lynn Townsend White Jr. credited the spinning wheel with increasing the supply of rags, which led to cheap paper, which in turn was a factor in the development of printing.
The spinning wheel was fundamental to the cotton textile industry prior to the Industrial Revolution. It laid the foundations for later machinery such as the spinning jenny and spinning frame, which displaced it during the Industrial Revolution; the spinning jenny, widely used in that period, was essentially an adaptation of the spinning wheel.
Culture
The ubiquity of the spinning wheel has led to its inclusion in the art, literature and other expressions of numerous cultures around the world, and in the case of South Asia it has become a powerful political symbol.
Political symbolism
Starting in 1931, the traditional spinning wheel became the primary symbol on the flag of the Provisional Government of Free India.
Mahatma Gandhi’s manner of dress and commitment to hand spinning were essential elements of his philosophy and politics. He chose the traditional loincloth as a rejection of Western culture and a symbolic identification with the poor of India. His personal choice became a powerful political gesture as he urged his more privileged followers to copy his example and discard—or even burn—their European-style clothing and return with pride to their ancient, pre-colonial culture. Gandhi claimed that spinning thread in the traditional manner also had material advantages, as it would create the basis for economic independence and the possibility of survival for India’s impoverished rural areas. This commitment to traditional cloth making was also part of a larger swadeshi movement, which aimed for the boycott of all British goods. As Gandhi explained to Charlie Chaplin in 1931, the return to spinning did not mean a rejection of all modern technology but of the exploitative and controlling economic and political system in which textile manufacture had become entangled. Gandhi said, “Machinery in the past has made us dependent on England, and the only way we can rid ourselves of the dependence is to boycott all goods made by machinery. This is why we have made it the patriotic duty of every Indian to spin his own cotton and weave his own cloth."
Literature and folk tales
The Golden Spinning Wheel (Zlatý kolovrat) is a Czech poem by Karel Jaromír Erben that was included in his classic collection of folk ballads, Kytice.
Rumpelstiltskin, one of the tales collected by the Brothers Grimm, revolves around a woman who is imprisoned under threat of execution unless she can spin straw into gold. Rumpelstiltskin helps her with this task, ultimately at the cost of her first-born child; however, she makes a new bargain with him and is able to keep her child after successfully guessing his name.
Another folk tale that incorporates spinning wheels is the classic fairy tale Sleeping Beauty, in which the main character pricks her hand or finger on the poisoned spindle of a spinning wheel and falls into a deep sleep following a wicked fairy or witch's curse. Numerous variations of the tale exist (the Brothers Grimm had one in their collection entitled Little Briar Rose), and in only some of them is the spindle actually attached to/associated with a spinning wheel.
Perhaps surprisingly, a traditional spindle does not have a sharp end that could prick a person's finger (unlike that of the walking wheel, often used for wool spinning). Despite this, the narrative idea persists that Sleeping Beauty or Briar Rose or Dornröschen pricks her finger on the spindle—a device she has never seen before, as such devices have been banned from the kingdom in a forlorn attempt to prevent the curse of the wicked godmother-fairy.
Walt Disney's studio included the Saxony or flax wheel in its animated film version of Perrault's tale, and Rose pricks her finger on the distaff (which holds the plant fibre waiting to be spun). In Tchaikovsky's ballet The Sleeping Beauty, by contrast, only a spindle is used, which is closer to the direct translation of the French "un fuseau".
Spinning wheels are also integral to the plot or characterization in the Scottish folk tale Habitrot and the German tales The Three Spinners and The Twelve Huntsmen.
Louisa May Alcott, most famous as the author of Little Women, wrote a collection of short stories called Spinning-Wheel Stories, which were not about spinning wheels but instead meant to be read while engaging in the rather tedious act of using a spinning wheel.
Music
Classical and symphonic
In 1814, Franz Schubert composed "Gretchen am Spinnrade", a lied for piano and voice based on a poem from Goethe's Faust. The piano part depicts Gretchen's restlessness as she spins on a spinning wheel while waiting by a window for her love to return.
Antonín Dvořák composed The Golden Spinning Wheel, a symphonic poem based on the folk ballad from Kytice by Karel Jaromír Erben.
Camille Saint-Saëns wrote Le Rouet d'Omphale (Omphale's Spinning Wheel), a symphonic poem in A major, Op. 31, a musical treatment of the classical story of Omphale and Heracles.
A favorite piano work for students is Albert Ellmenreich's Spinnliedchen (Spinning Song), from his 1863 Musikalische Genrebilder, Op. 14. An ostinato of repeating melodic fifths represents the spinning wheel.
Folk and ballad
The Spinning Wheel is also the title/subject of a classic Irish folk song by John Francis Waller.
A traditional Irish folk song, Túirne Mháire, is generally sung in praise of the spinning wheel, but was regarded by Mrs Costelloe, who collected it, as "much corrupted", and may have had a darker narrative. It is widely taught in junior schools in Ireland.
Sun Charkhe Di Mithi Mithi Kook is a Sufi song in the Punjabi language inspired by the traditional spinning wheel. It is an ode by a lover as she remembers her beloved with the sound of every spin of her Charkha.
Opera
Spinning wheels also feature prominently in the Wagner opera The Flying Dutchman; the second act begins with local girls sitting at their wheels and singing about the act of spinning.
Gilbert and Sullivan's The Yeomen of the Guard begins with a solitary character singing while spinning at her wheel, the first of their operettas not to open with a chorus.
Art
Spinning wheels may be found as motifs in art around the world, ranging from their status as domestic/utilitarian items to their more symbolic role (such as in India, where they may have political implications).
| Technology | Spinning | null |
65685 | https://en.wikipedia.org/wiki/Volkswagen%20Beetle | Volkswagen Beetle | The Volkswagen Beetle, officially the Volkswagen Type 1, is a small family car produced by the German company Volkswagen from 1938 to 2003. One of the most iconic cars in automotive history, the Beetle is noted for its distinctive shape. Its production period of 65 years is the longest of any single generation of automobile, and its total production of over 21.5 million is the most of any car of a single platform.
The Beetle was conceived in the early 1930s. The leader of Nazi Germany, Adolf Hitler, decided there was a need for a people's car—an inexpensive, simple, mass-produced car—to serve Germany's new road network, the Reichsautobahn. The German engineer Ferdinand Porsche and his design team began developing and designing the car in the early 1930s, but the fundamental design concept can be attributed to Béla Barényi in 1925, predating Porsche's claims by almost ten years. The result was the Volkswagen Type 1 and the introduction of the Volkswagen brand. Volkswagen initially slated production for the late 1930s, but the outbreak of war in 1939 meant that production was delayed until the war had ended. The car was originally called the Volkswagen Type 1 and marketed simply as the Volkswagen. It was not until 1968 that it was officially named the "Beetle".
Volkswagen implemented designations for the Beetle in the 1960s, including 1200, 1300, 1500, 1600, 1302, and 1303. Volkswagen introduced a series of large luxury models throughout the 1960s and 1970s—comprising the Type 3, Type 4 and the K70—to supplement the Beetle, but none of these models achieved the level of success that it did. Rapidly changing consumer preferences toward front-wheel drive compact hatchbacks in Europe prompted Volkswagen's gradual shift away from rear-wheel drive, starting with the Golf in 1974. In the late 1970s and '80s, Japanese automakers began to dominate the market, which contributed to the Beetle's declining popularity.
Over its lifespan, the Beetle's design remained consistent, yet Volkswagen implemented over 78,000 incremental updates. These modifications were often subtle, involving minor alterations to its exterior, interior, colours, and lighting. Some more noteworthy changes included the introduction of new engines, models and systems, such as improved technology or comfort.
History
KdF-Wagen
In May 1934, at a meeting at Berlin's Kaiserhof Hotel, the leader of Nazi Germany, Adolf Hitler, insisted on the development of a vehicle that could accommodate two adults and three children while not using more than seven litres of fuel per 100 km (33.6 mpg US/40.4 mpg UK). All components were designed for a quick and inexpensive part exchange. As Hitler explained, the reason for choosing an air-cooled engine was that not every country doctor had a garage. On 22 June 1934, Ferdinand Porsche received a development contract from the Verband der Automobilindustrie (German Association of the Automotive Industry) for the prototype of an inexpensive and economical passenger car after Hitler decided there was a need for a people's car—a car affordable and practical enough for lower-class people to own—to serve the country's new road network, the Reichsautobahn. Although the Volkswagen Beetle was primarily the conception of Porsche and Hitler, the idea of a "people's car" is much older than Nazism and has existed since the introduction of automotive mass-production.
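As a check on the parenthetical fuel-economy figures (a standard unit conversion, not an additional specification from the source): consumption in litres per 100 km converts to miles per gallon via mpg (US) ≈ 235.2 / (L/100 km) and mpg (imperial) ≈ 282.5 / (L/100 km), so seven litres per 100 km corresponds to about 235.2 / 7 ≈ 33.6 mpg US and 282.5 / 7 ≈ 40.4 mpg UK.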
Originally designated as the Type 60 by Porsche, the Beetle project involved a team of designers and engineers comprising Erwin Komenda, who specialised in the bodywork; Josef Kales, responsible for the engine design; Karl Rabe, serving as the chief engineer; and Josef Mickl and Franz Xaver Reimspiess, the latter credited for devising the iconic Volkswagen badge. The project saw significant milestones in October 1935 with the completion of the first two Type-60 prototypes, identified as cars V1 (sedan) and V2 (convertible), denoted with a "V" (for Versuchs – "prototype") signifying their status as a test car. The testing of three additional V3 prototypes began on 11 July 1936, the first of which was driven to Obersalzberg and inspected by Hitler. Two V3s were delivered to Berlin in August for examination by other Nazi Party officials, who showed great interest in them. By June 1936, the V3s underwent over of testing across various terrains. A series of thirty W30 development models, commissioned by Porsche and manufactured by Daimler-Benz, underwent testing in early April 1937, covering a total distance of over . All vehicles featured the characteristic rounded design and included air-cooled, rear-mounted engines. A further batch of 44 VW38 pre-production cars produced in 1938 introduced split rear windows, and subsequently, Volkswagen introduced fifty VW39 cars, completed in July 1939.
Kraft durch Freude (Strength Through Joy, a political organisation aimed at providing the populace with leisure activities) was in charge of this project. Robert Ley, the Nazi official heading Kraft durch Freude (KdF), announced in 1938 that every German would own a Volkswagen within ten years. However, there were challenges. Gasoline prices in Germany were high due to taxes, making it expensive for private car ownership. Gasoline was also primarily used for the military in the Nazi regime. Despite that, the Nazi leaders saw the mass-produced car as a way to promote their system. It symbolised a shift from cars being a privilege for the wealthy to a dream that lower-class Germans could now fulfil. Hitler was particularly enthusiastic about it because the car could easily be adapted for military use.
The KdF-Wagen was not series-produced before the Second World War because the Volkswagen plant at Fallersleben (now Wolfsburg), founded in May 1938, was not yet finished. During the war, other German manufacturers were concurrently producing military vehicles and armaments, so the series production of the then-called Volkswagen car could not begin until peacetime; nevertheless, 210 KdF-Wagens were manufactured by the end of the war in May 1945. Following the cessation of hostilities the British occupying forces brought the factory into operation and by the close of 1945, 1,785 Volkswagens had been built, delivered to the occupying powers and the postal service.
Design
The Beetle featured a rear-located, air-cooled four-cylinder boxer engine and rear-wheel drive in a two-door body with a flat front windscreen, accommodating four passengers and providing luggage storage under the front bonnet and behind the rear seat; it has a drag coefficient of 0.48. The bodywork, attached with eighteen bolts to the Beetle's nearly flat platform chassis, featured a central structural tunnel. The front and rear suspension incorporated torsion bars and a front stabiliser bar, providing independent suspension at all wheels, although the front axle used double longitudinal trailing arms while the rear axle was a swing axle. The engine, transmission and cylinder heads were made of light alloy.
Design controversies
German-Bohemian Ferdinand Porsche (1875–1951) and his team were generally known as the original designers of the Volkswagen. However, there has been debate over whether he was the original designer. Rumours circulated, suggesting that other designers, such as Béla Barényi, Paul Jaray, Josef Ganz and Hans Ledwinka, may have influenced its design.
Béla Barényi
In 1925, Austro-Hungarian automotive engineer Béla Barényi designed a car similarly shaped to the Beetle, more than five years before Porsche unveiled his initial "People's Car" design. Through a court ruling in 1953, Barényi successfully asserted his authorship and associated claims. He explained that he had previously elucidated the concept of the Beetle, which was already formulated in the 1920s, to Porsche in great detail. However, this concept was not protected sufficiently by patents. Key elements of this concept included the air-cooled four-cylinder boxer engine at the rear, the transmission positioned in front of the rear axle, and the distinctive roundish shape. Dieter Landenberger, the head of Porsche's historical archive, later affirmed that Barényi played a "decisive role in the authorship of the later VW Beetle". Since then, he has been known for conceiving the original car design.
Paul Jaray
Many assume that Paul Jaray shaped the car's body design through his aerodynamics calculations. According to a November 2021 update of research mentioned in the fifteenth report by the German newspaper Frankfurter Allgemeine Zeitung, Jaray's findings influenced the design of "Hitler's streamlined KdF car", later known as the 'beetle', which became the best-selling car globally post-war. Jaray's research results in fluid mechanics for ground-bound vehicles extended beyond the VW Beetle, impacting the Tatra 77 and other vehicles. His initial patents and publications date back to the early 1920s. The engineer Christian Binnebesel scientifically presented Jaray's significant contribution to streamline form in his 2008 dissertation.
Josef Ganz
Josef Ganz's potential early contributions to the original Beetle's development remained controversial for years and lacked clear clarification. Research suggests that his idea and the concept of a compact car played a significant role in the VW Beetle's development and its prototypes. Ganz personally drove a Hanomag Kommissbrot and a swing-axle Tatra—both featuring elements such as a central tubular frame, independent wheel suspension, and a rear/mid-engine design. Ganz incorporated these technical features into his proposed vehicle designs. Hitler reportedly saw cars designed by Josef Ganz at the 1933 Berlin Auto Show. The Standard Superior, designed by Ganz for the Standard vehicle factory, featured an implied teardrop-shaped body on a central tubular frame with a rear swing axle, yet the engine was transversely installed in front of the axle, not longitudinally as a rear engine.
Hans Ledwinka
The Austrian automobile designer Hans Ledwinka, a contemporary of Porsche, worked at the Czechoslovakian company Tatra. In 1931, Tatra built the V570 prototype, which featured an air-cooled flat-twin engine mounted at the rear. Hitler and Porsche both were influenced by the Tatras. Hitler, an avid automotive enthusiast, rode in Tatras multiple times during political tours of Czechoslovakia and had frequent dinners with Ledwinka. Following one such tour, Hitler remarked to Porsche, "This is the car for my roads". From 1933 onwards, Ledwinka and Porsche met regularly to discuss their designs, and Porsche admitted, "Well, sometimes I looked over his shoulder, and sometimes he looked over mine" while designing the Volkswagen. The Tatra 97 of 1936 had a 1,749 cc, rear-located, rear-wheel drive, air-cooled four-cylinder boxer engine. It accommodated five passengers in its compact four-door body, which provided luggage storage under the front bonnet and behind the rear seats.
Just before the outbreak of World War II, Tatra filed numerous legal claims against VW for patent infringement. Tatra launched a lawsuit, halted only by Germany's invasion of Czechoslovakia in 1938, leading to the Nazi administration of the Tatra factory in October. Hitler instructed Tatra to focus exclusively on heavy trucks and diesel engines, discontinuing all car models except the V8-engined Tatra 87. The issue resurfaced post-World War II, and in 1965, Volkswagen paid Ringhoffer-Tatra 1,000,000 Deutsche Marks in an out-of-court settlement.
Tenure
World War II and military production: 1938–1945
The name Volkswagen was officially substituted by the term KdF (Kraft durch Freude; German for 'Strength Through Joy') derived from the Nazi organisation once Hitler ceremoniously laid the foundation stone for the Volkswagen factory on 26 May 1938. As part of this organisation, Volkswagen urged workers to "save five marks a week and get your car". Before the completion of the KdF factory, many Germans had already signed up for a savings plan to buy a car. At that time, Germany had fewer cars than other European countries. In 1930, there were only about 500,000 registered cars in Germany, while France and Great Britain had over 1 million each, and the USA had more than 26 million. However, the onset of the Second World War hindered the distribution of the cars, and there was a lack of time for series production. With the Volkswagen facility dedicated solely to wartime requirements, the over 330,000 KdF savers could not acquire their vehicles. Following the war, numerous KdF savers pressed for the receipt of a Volkswagen. When their request was denied, a legal dispute between the savers and Volkswagen ensued, spanning several years.
During the war, the factory predominantly built the Kübelwagen (Type 82), the Schwimmwagen (Type 166) and numerous other light utility vehicles. These vehicles were derived mechanically from the Type 1 and used by the Wehrmacht. These vehicles, including several hundred Kommandeurswagen (Type 87), featured a Type 1 Beetle body mounted on the robust chassis of the four-wheel-drive Type 86 Kübelwagen prototype. The Kommandeurswagen included a portal axle, a Schwimmwagen drivetrain, wider fenders, and oversized Kronprinz all-terrain tyres, reminiscent of the later Baja Bugs. The production of the Kommandeurswagen persisted until 1944 when the production at the plant halted due to the extensive damage inflicted by the Allied air raids. Due to gasoline shortages late in the war, a few "Holzbrenner" (wood-burner) Beetles were built fueled with wood logs.
Kraft durch Freude arranged an event, planned for September 1939, to showcase Germany's Autobahn highway system and to promote the purported beginning of the production of the KdF-Wagen, involving a 1,500-kilometre (930 mi) journey from Berlin to Rome. Erwin Komenda supervised the development process, while Karl Froelich was responsible for creating official plans that they subsequently used to form a wooden scale model. The model was wind tunnel tested at Stuttgart University by Josef Mickl. Porsche AG's engineers designed the Type 60 K 10, officially known as the Porsche 64 and dubbed the "Berlin-Rome car". Although the engineers produced three vehicles, they never made it to the race due to the outbreak of war before the scheduled date; two of them disappeared during the conflict. Austrian Otto Mathé acquired the third Berlin-Rome car and raced it throughout the 1950s, becoming the fastest in its class during the 1950 Alpine Cup. He continued to use it until his death in 1995.
Post-war production and success: 1945–1970
Following the war, the Beetle experienced a significant growth in success. On 11 April 1945, Fallersleben, where 17,000 people lived, was officially designated "Wolfsburg". Official series manufacture of the saloon began on 27 December 1945; Volkswagen made fifty-five vehicles by the end of the year. The Volkswagen facility, initially slated for dismantling and transportation to Britain under American control in 1945, faced a lack of interest from British car manufacturers; an official report included the phrase, "The vehicle does not meet the fundamental technical requirement of a motor-car [...] it is quite unattractive to the average buyer [...] To build the car commercially would be a completely uneconomic enterprise." Instead, the factory remained operational by producing cars for the British Army. Allied dismantling policy changed between late 1946 and mid-1947, although the dismantling of heavy industry in Germany continued until 1951. In March 1947, a statement by Herbert Hoover helped to change the policy.
Major Ivan Hirst (1916–2000), a British Army officer, has been widely acknowledged for the reopening of the factory. Hirst was ordered to take control of the heavily bombed factory, which the Americans had captured. Recognising the scarcity of occupations in Germany and the shortage of vehicles in the British Army, Hirst persuaded the British military to order 20,000 cars, stating that it "was the limit set by the availability of materials". By March 1946, production capacity was rated at approximately 1,000 units per month. Based on an eight-hour shift in mid-1946, production was around 2,500 per month. At the time, about 1,800 machine tools were in operation, of which 200 were used exclusively for the key components.
Once Heinrich Nordhoff assumed management at Volkswagenwerk, manufacturing capacity increased significantly. Production in 1946 and 1947 was rated at 9,878 and 8,973 examples, respectively, but in Nordhoff's first year, 1948, manufacture doubled to approximately 19,244 units. On 6 August 1955, the millionth example was assembled and by 1959, production capacity was rated at 700,000 units per year. By mid-1948, the Forces of Occupation received 20,991 cars, leaving less than 10,000 for export or domestic consumption. The number of employees increased from 6,033 by the end of 1945 to almost 57,000 in 1957. After the war, over 10,000 apartments were built to house the workers in Wolfsburg, which then had a population of nearly 60,000. In 1959, Volkswagen invested more than DM 500 million to increase daily production by 1,000, reaching a final target of 3,000 per day. During 1960, the company occasionally increased production by around 100; by the end of 1960, Volkswagen planned to produce 4,000 examples daily. Nordhoff stated, "Then we believe we shall have reached a balance between supply and demand so that we can finally deliver Volkswagens to customers without a waiting period".
By the early 1960s, the Wolfsburg facility was massive. It accommodated about 10,000 production machines and covered 10.8 million square feet of roofed area, more than the combined residential area in Wolfsburg. Daily production increased to approximately 5,091, and the plant employed over 43,500 workers. By 1962, Nordhoff had spent over DM 675 million in expanding the factory. At that time, Volkswagen sales constituted 34.5 percent of the total West German automotive market and 42.3 percent of the sub- commercial vehicle market there. Nordhoff's recurring encouragement proved to be highly effective. He consistently urged the team to work harder, reduce expenses and avoid complacency and corporate inefficiencies, and he reiterated these themes in January 1960.
The Emden facility represented an expenditure exceeding DM 154.4 million, with Beetle operations beginning there on 1 December 1964. By late 1965, Volkswagen's annual production exceeded 1,600,000 units, averaging 6,800 units per day. Volkswagen's share of all cars produced in West Germany reached 48.6 percent, representing a 3.3% increase from the previous year. When including Audis produced at Ingolstadt, the combined output from Volkswagen and its Auto-Union company constituted 50.4% of all West German cars produced that year. In 1968, the Type 1 was officially given the name "Beetle" (from "der Käfer", German for beetle).
Decline and end of West German production: 1970–1990
While it was largely successful in the 1960s, recording its highest sales growth in North America from 1960 to 1965, the Beetle started facing competition from more contemporary designs worldwide in the 1970s. The decade started out well for Volkswagen, which sold 569,000 Beetles in 1970. In 1970, fifteen Volkswagen dealerships in Washington convened to form the Volkswagen American Dealers Association, which was created to preserve a free market of imported international automobiles through political pressure and lobbying. On 17 February 1972, the world car production record was broken by the Beetle, with a total of 15,007,034 units produced worldwide, thereby surpassing the production figure that had been held by the American Ford Model T for nearly fifty years. Volkswagen donated the car to the Smithsonian Institution for permanent exhibition in its industrial history section. By 1973, over 16 million Beetles had been manufactured. On 1 July 1974, the final Beetle was produced at the Wolfsburg plant after 11,916,519 examples were made there. Following its discontinuation, Volkswagen ceased the ongoing development of the Beetle in Germany. On 19 January 1978, the last Beetle sedan manufactured in Europe rolled off the production line at the Emden plant with the chassis number 1182034030. After its discontinuation in Germany, production of the Volkswagen Beetle continued in Australia, Mexico and Nigeria.
In the 1960s and '70s, Volkswagen augmented its product portfolio with several models to supplement the Type 1—the Type 3, the Type 4 and the NSU-based K70 sedan. None of these models achieved the level of success of the Beetle. The overdependence on a singular model, which was experiencing a decline in popularity, meant that Volkswagen was in a financial crisis and needed German government funding to produce the Beetle's replacement. Consequently, the company introduced a new generation of water-cooled, front-engined, front-wheel-drive models, including the Golf, the Passat, the Polo and the Scirocco, all of which were styled by the Italian automotive designer Giorgetto Giugiaro. By 1979, the Golf constituted over 50 per cent of Volkswagen sales, and it eventually became Volkswagen's most successful model since the Beetle. As opposed to the Beetle, the Golf was substantially redesigned over its lifetime, with only a few components carried over between generations. On 10 January 1980, the last of 330,281 Beetle convertibles rolled off the production line at the Karmann facility in Osnabrück. For many years it had been the most successful convertible ever made; the first Golf cabriolet, which replaced it, had been introduced in 1979.
Beetle sales reached their lowest level in the 1980s. The Beetle faced competition from Japanese automakers such as Toyota and Honda, whose cars offered improved reliability and performance. The closure of Volkswagen's Pennsylvania factory was due to high costs, subpar quality, and poor sales. In the United States, Volkswagen introduced the Rabbit and Corrado, both of which had little success. Overall sales suffered a significant downturn, leading to the loss of many dealerships for the company.
New Beetle and end of production: 1990–2003
In 1991, the planning of a new car began once J Mays and Freeman Thomas returned to California to open Volkswagen's Design Centre at Simi Valley. Recognising that Japanese manufacturers dominated the market in the 1970s and '80s, Volkswagen needed to introduce a vehicle to regain popularity. Before this, the company began the development of a city car, codenamed "Chico", in which they invested millions of Deutsche Marks. In 1993, the brand stated that the Chico was intended to begin production in 1995. However, this plan was abandoned once Volkswagen realised that the project was commercially infeasible. Mays and Thomas recognised the difficulties faced by the brand and suggested the need for a vehicle that included the recognisable design of the Beetle as a potential solution to improve customer appeal. During development, this car was known as the "Concept One" project. The prototype version of the project was revealed at the 1994 Detroit Motor Show, and a red convertible variant of the model was showcased at the 1995 Geneva Motor Show.
It took a year for Volkswagen to officially confirm the production of the concept in its final form, which was slated for completion by the end of the century. To help gauge public demand for the forthcoming automobile in the United States, Volkswagen implemented a free-access telephone line to allow members of the public to express their thoughts on the car. The line quickly became inundated with calls, with many saying, "You build it, I'll buy it!" Work on the Concept One continued, with further redesigns on its front fascia. To reduce production investments and expenses, Volkswagen initially planned to use the platform of the Polo. However, in 1995, at the Tokyo Motor Show, the company unveiled another prototype, sharing its wheelbase and its broader range of engine options with the Golf. Simultaneously, Volkswagen announced that it would be named the "New Beetle". After over six years of planning and development, Volkswagen introduced the New Beetle in 1997.
On 30 July 2003 at 9:05 a.m., at the Puebla plant in Mexico, Volkswagen produced the final Type 1, after 21,529,464 examples were produced globally during its tenure. Its production span of 65 years is the longest of any single generation of automobile, and its total production of over 21.5 million is the most of any car of a single platform. To celebrate the occasion, Volkswagen marketed a series of 3,000 Beetles as "Última Edición" (Final Edition).
Models and history of design
While the design of the Beetle changed little over its lifespan, Volkswagen implemented over 78,000 incremental updates. Typically subtle, these alterations usually involved minor updates to the exterior, interior, colours and lighting. More noteworthy changes have comprised new engines, models and systems, such as updated dashboards and hydraulic braking.
Initial and successful models: 1946–1974
The Type 11 standard limousine, initially designated as the Type 60 before 1946, earned the nickname "Pretzel Beetle" due to its distinctive oval-shaped, vertically divided rear window. On 1 July 1949, the Volkswagen lineup was expanded to include the "export" model featuring enhanced interiors, chrome bumpers, and trim. It was offered in a variety of colours to distinguish it from the preceding "Standard" model. Starting in 1950, an optional sunroof with a textile cover could be added at an extra cost. By March of the same year, the export model began to be equipped with a hydraulic brake system, which became a standard feature from April onwards. In 1952, the equipment was enhanced with the addition of vent windows in the doors, and the wheels were reduced to a diameter of from the previous . On 10 March 1953, the split rear window was replaced with a one-piece rear window. Starting in 1954, the Type 122 engine had a cylinder bore, increased by , and an engine displacement of , surpassing the previous . This engine produced , a improvement over its predecessor. In 1955, the traditional VW semaphore turn signals were replaced by conventional flashing directional indicator lamps for North America, followed by their worldwide replacement in 1961. In 1958, the Beetle received a revised instrument panel, and a larger rectangular rear window replaced the previous oval design.
In 1960, Volkswagen introduced a series of technical alterations. The front indicators were relocated to the front bonnet within chrome housings, and the rear indicators were integrated into the tail lamps. In January, the valve-clearance adjusting nut was slightly enlarged and resistor-type ignition leads were adopted. In March, Volkswagen made several modifications to its front trailing arm and the steering damper. In May 1960, Volkswagen added plastic warm air ducts to decrease noise.
In the mid-1960s, the traditional labels "standard" and "export" for the Beetle's model variants were superseded by numerical designations, approximately correlating with the engine displacements. In the October 1961 issue of Motor Trend, Don Werner noted, "Five years ago, out of every ten imported cars sold, six were Volkswagens. [The] latest figures show the ratio is now down to about every four [Volkswagens] out of every ten. If the current VW starts to slip, the new [Type 3]—soon to be introduced—probably will be imported to justify the [company's] more than 600 [Volkswagen] dealerships and the $100 million investment in facilities". He continued by expressing that the Type 3 had failed to leave a positive impression on industry executives in both Europe and North America. The new engine essentially possessed identical specifications to the previous model; it was a horizontally opposed, overhead valve, four-cylinder air-cooled engine. It generated at 3,900 rpm and produced at 2,000 rpm.
The 1961 Beetle introduced a fully synchronised four-speed manual transmission, replacing the former non-synchronised first gear. The Volkswagen facility implemented 27 alterations to the new model, some of which were minor. Noteworthy changes comprised an automatic choke, an anti-icing carburettor heater, a redesigned fuel tank that increased boot capacity, an external gas tank vent to prevent odours in the car, standard windshield wipers and a new ignition switch. Stylistic improvements included new paint colours and interior design options, a coloured steering wheel and a 90-mile-per-hour speedometer. On 30 July 1962, Volkswagen made several updates for the 1963 model year, including the incorporation of an air filter into the oil filter, the introduction of larger-diameter cylinder head induction ports and the adoption of plastic for the headliner and window guides. Volkswagen replaced the Wolfsburg crest on the hood, which had been present since 1951, with the company's lettering. A heating system was also introduced. In 1965, the 1200A designation was introduced for the standard Beetle with the engine.
Volkswagen introduced the 1300 in August 1965, equipped with a 1.3-litre engine producing . Although the engine design was otherwise identical, the increase was achieved through the adoption of the crankshaft from the Type 3. This extended the stroke from to , resulting in an engine displacement of approximately . 1965 also marked the Beetle's most extensive design change, when its body stampings were extensively revised. These revisions allowed for significantly larger windows, a departure from previous designs. The windshield increased by 11% in area and adopted a slight curvature, replacing its flat configuration. Door windows also expanded by 6%, with a slight backward canting of door vent window edges. Rear side windows increased by 17.5%, and the rear window by 19.5%.
In 1967, updates comprised shortened front and rear bonnets, box-profile bumpers with a railway rails design that were installed at a higher position, vertically oriented scattering discs for the headlights and larger rear lights with an iron design. Volkswagen introduced an external fuel filler flap, eliminating the need to open the front bonnet for refuelling. In September 1967, the 1500 Beetle was introduced. Its engine displacement was approximately , its power output was and it featured a three-speed semi-automatic transmission. In 1968, the 1200 received fully independent suspension, some stylistic improvements and an external fuel cap. The 1300 transitioned from six-volt to twelve-volt electrics and received dual circuit braking and a fuel gauge. The 1500 also received these alterations, as well as carburettor enhancements. In 1969, the 1200 received twelve-volt electrics, hazard warning lamps and a locking fuel cap. The 1300 was available with a semi-automatic transmission and radial-ply tyres. In 1970, the 1500 received a new carburettor and dual circuit braking before Volkswagen discontinued it.
In 1971, the 1200 received a larger windscreen, while the 1300 received a power increase to and larger brakes, effectively replacing the 1500. Volkswagen replaced the 1300 with the 1300A "economy version" in 1973 for the 1974 model year, possessing the same specifications as the 1300 but maintaining the same overall design as the 1200.
Mid-life and declining models: 1970–1986
The VW 1302, introduced in August 1970, featured a redesigned front end. It incorporated a new front axle featuring MacPherson suspension struts, wishbones and a stabiliser. The enlargement of the front trunk became possible as a result. Unlike its predecessor, the spare wheel was no longer positioned diagonally at the front under the hood but instead rested horizontally under a cover in the trunk area. The company initially intended to designate the car as the "1301", but a trademark already held by the French company Simca compelled Volkswagen to use "1302" instead. Volkswagen produced the more powerful 1302S alongside the 1302. The latter has an engine displacement of , while the former has a capacity of . In English-speaking countries, the name "Super Beetle"—alongside "1600"—was included on the written description but not the engine cover.
The 1302 possessed the same output as the 1300, whereas the 1302S saw an increase to . This was facilitated by a twin-port cylinder head, enabling the engines to breathe more effortlessly. The British automotive magazine Autocar expressed disappointment in its power increase, noting, "Even with 14 [percent] more power, the total output of 50bhp is very modest for the size of the engine". The Super Beetle had a increase in wheelbase, but the extra space was in front of the windshield. For 1971, the overall length increased by , doubling the front trunk capacity and adding of luggage space. Volkswagen also implemented a new fresh-air ventilation system, drawing its air from the rear quarter panels.
In August 1972, the 1303 range, which featured a curved windshield, superseded the 1302 model. This design change elicited mixed opinions; some favoured it, while others expressed dissatisfaction. Despite the effort to infuse the Beetle with a modernised design, this did not resonate with consumers, resulting in declining Beetle sales. In 1975, the 1303 and 1303S received rack and pinion steering, but in July of that year, Volkswagen discontinued both of them. The long-serving 1200 was renamed the "1200L" in 1976, with the additional deluxe features incorporated into the car's interior. In July 1984, Volkswagen eliminated the engine lid louvres.
Final models: 1986–2003
Starting in 1986, for the 1987 model year, the sole model available was the single-carburettor version with . From late 1992 for the 1993 model year, Volkswagen standardised catalytic converters, the Bosch Digifant engine management system, a lambda probe and electric ignition. This fuel-injection system proved much more straightforward and reliable than previous injection systems used on German-assembled Volkswagens since 1967. Vehicles with these modifications can be identified externally by the reintroduced louvred engine lid, heavier and larger bumper bars, four-stud wheels with twenty ventilation holes and a "1600i" badge on the engine lid. The 1993 model also featured a third-generation Golf-style steering wheel and front seats, a protection alarm, handbrake and engine compartment lamps and an optional ZF limited-slip differential. The engine received hydraulic tappets, a full-flow oil filter, a 6.6:1 compression ratio—allowing for the use of unleaded fuel—and an electric fuel pump. A standard version was also released in 1993, featuring painted bumper bars and many minor removals.
From 1997, front disc brakes and an immobiliser became available, and the De Luxe model featured small traffic indicator side lamps ahead of the top door hinge. The steering wheel's centre boss was restyled to resemble that of the contemporary Golf and Polo. In 1998, Volkswagen removed the small through-flow ventilation slots behind the rear side windows and standardised front disc brakes. Furthermore, Volkswagen included a security alarm as standard and removed the "1600i" inscription from the engine lid.
Markets and assembly
Over its 65-year tenure, Volkswagen produced the Volkswagen Type 1 in numerous locations worldwide. The following list encompasses all the locations in which it was manufactured.
Specific markets
Brazil
Official exportation of the Beetle to the Brazilian market began on 23 March 1953, with its parts imported from Germany. For the local market, the Type 1 was officially known as the "Volkswagen Fusca". In January 1959, Volkswagen shifted assembly to the new São Bernardo do Campo plant, initially maintaining 60 percent of its German parts. However, by the mid-1960s, the cars had about 99.93 percent Brazilian-made components. Production persisted until 1986, after over 3.3 million examples were produced there, and resumed in 1992, extending until 1996.
Mexico
The production of the Beetle was possible through agreements with companies like Chrysler in Mexico and the Studebaker-Packard Corporation, which assembled cars imported in complete knock-down form. The Beetle was introduced to the Mexican market in 1954, and began official production ten years later. The local market referred to the Beetle as the "Vocho". The introduction of a new taxi regulation in Mexico City, permitting only four-door vehicles in order to prevent robberies, influenced Volkswagen's decision to end production of the Beetle in 2003.
Australia
Formal introduction of the Volkswagen Beetle to the Australian market took place in 1953, followed by local assembly operations at the Clayton, Victoria facility in the subsequent year. The establishment of Volkswagen Australia Ltd took place in 1957, and by 1960, locally manufactured body panels were integrated for the first time. Despite the introduction of larger windows for the European Type One body in 1965, Volkswagen Australia opted to maintain production of the smaller-windowed bodies with features tailored for Australian models. This decision was influenced by the constraints of the market size and the expenses associated with retooling. By this juncture, Australian content had surged to nearly 95 percent. The final Australian-assembled Beetle rolled off the production line in July 1976.
Retrofit program
Volkswagen entered into partnership with eClassics, enabling Beetle owners to electrify their vehicles. The electric conversion kit includes a battery with a capacity of 36.8 kWh, providing an estimated range of . The converted Beetle can achieve a top speed of , and an hour of charging can store sufficient energy for a journey exceeding .
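As a rough, illustrative estimate only (the energy-consumption figure below is an assumption made for the sake of the arithmetic, not a figure published by eClassics or Volkswagen): if a converted Beetle used on the order of 15 kWh per 100 km, the 36.8 kWh pack would correspond to roughly 36.8 / 15 × 100 ≈ 245 km of range, with the real-world figure depending on driving style, speed and conditions.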
| Technology | Specific automobiles | null |
65827 | https://en.wikipedia.org/wiki/Silicone | Silicone | In organosilicon and polymer chemistry, a silicone or polysiloxane is a polymer composed of repeating units of siloxane (, where R = organic group). They are typically colorless oils or rubber-like substances. Silicones are used in sealants, adhesives, lubricants, medicine, cooking utensils, thermal insulation, and electrical insulation. Some common forms include silicone oil, grease, rubber, resin, and caulk.
Silicone is often confused with silicon, but they are distinct substances. Silicon is a chemical element, a hard dark-grey semiconducting metalloid, which in its crystalline form is used to make integrated circuits ("electronic chips") and solar cells. Silicones are compounds that contain silicon, carbon, hydrogen, oxygen, and perhaps other kinds of atoms as well, and have many very different physical and chemical properties.
History
F. S. Kipping coined the word silicone in 1901 to describe the formula of polydiphenylsiloxane, (Ph = phenyl, ), by analogy with the formula of the ketone benzophenone, (his term was originally silicoketone). Kipping was well aware that polydiphenylsiloxane is polymeric whereas benzophenone is monomeric and noted the contrasting properties of and . The discovery of the structural differences between Kipping's molecules and the ketones means that silicone is no longer the correct term (though it remains in common usage) and that the term siloxane is preferred according to the nomenclature of modern chemistry.
James Franklin Hyde (born 11 March 1903) was an American chemist and inventor. He has been called the "Father of Silicones" and is credited with the launch of the silicone industry in the 1930s. His most notable contributions include his creation of silicone from silicon compounds and his method of making fused silica, a high-quality glass later used in aeronautics, advanced telecommunications, and computer chips. His work led to the formation of Dow Corning, an alliance between the Dow Chemical Company and Corning Glass Works that was specifically created to produce silicone products.
Chemistry
Alfred Stock and Carl Somiesky examined the hydrolysis of dichlorosilane, a reaction that was proposed to initially give the monomer :
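In outline, and on the assumption that the proposed monomer is the transient species [H2SiO] (a reconstruction consistent with the description above rather than a formula taken from the original), the initial step of the hydrolysis can be written as:

H2SiCl2 + H2O → [H2SiO] + 2 HCl

with the monomer then condensing into (H2SiO)n chains and rings.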
When the hydrolysis was performed by treating a solution of the dichloride in benzene with water, the product was determined to have the approximate formula . Higher polymers were proposed to form with time.
Most polysiloxanes feature organic substituents, e.g., and . All silicones (polymerized siloxanes or polysiloxanes) consist of an inorganic silicon–oxygen backbone chain () with two groups attached to each silicon center. The materials can be cyclic or polymeric. By varying the chain lengths, side groups, and crosslinking, silicones can be synthesized with a wide variety of properties and compositions. They can vary in consistency from liquid to gel to rubber to hard plastic. The most common siloxane is linear polydimethylsiloxane (PDMS), a silicone oil. The second-largest group of silicone materials is based on silicone resins, which are formed by branched and cage-like oligosiloxanes.
Synthesis
Most common are materials based on polydimethylsiloxane, which is derived by hydrolysis of dimethyldichlorosilane. This dichloride reacts with water as follows:
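In idealised overall form (neglecting end groups and cyclic by-products, so this is a summary rather than a complete mechanism), the hydrolysis and condensation of dimethyldichlorosilane to polydimethylsiloxane can be written as:

n Si(CH3)2Cl2 + n H2O → [Si(CH3)2O]n + 2n HCl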
The polymerization typically produces linear chains capped with or (silanol) groups. Under different conditions, the polymer is cyclic rather than a chain.
For consumer applications such as caulks, silyl acetates are used instead of silyl chlorides. The hydrolysis of the acetates produces the less dangerous acetic acid (the acid found in vinegar) as the reaction product of a much slower curing process. This chemistry is used in many consumer applications, such as silicone caulk and adhesives.
Branches or crosslinks in the polymer chain can be introduced by using organosilicone precursors with fewer alkyl groups, such as methyl trichlorosilane and methyltrimethoxysilane. Ideally, each molecule of such a compound becomes a branch point. This process can be used to produce hard silicone resins. Similarly, precursors with three methyl groups can be used to limit molecular weight, since each such molecule has only one reactive site and so forms the end of a siloxane chain.
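As a rough stoichiometric sketch of the chain-limiting effect (an idealisation that assumes complete reaction and that every chain end is capped by a monofunctional unit): if nD difunctional units such as dimethyldichlorosilane are co-hydrolysed with nM monofunctional end-cappers such as trimethylchlorosilane, each linear chain carries two end-cappers, so the average number of silicon atoms per chain is approximately

(nD + nM) / (nM / 2) = 2(nD / nM) + 2

and increasing the proportion of the monofunctional precursor therefore shortens the chains.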
Combustion
When silicone is burned in air or oxygen, it forms solid silica (silicon dioxide, ) as a white powder, char, and various gases. The readily dispersed powder is sometimes called silica fume. The pyrolysis of certain polysiloxanes under an inert atmosphere is a valuable pathway towards the production of amorphous silicon oxycarbide ceramics, also known as polymer derived ceramics. Polysiloxanes terminated with functional ligands such as vinyl, mercapto or acrylate groups have been cross linked to yield preceramic polymers, which can be photopolymerised for the additive manufacturing of polymer derived ceramics by stereolithography techniques.
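For the common dimethyl silicone, the complete combustion described above can be idealised (assuming full oxidation of the carbon and hydrogen content) as:

[Si(CH3)2O]n + 4n O2 → n SiO2 + 2n CO2 + 3n H2O

which accounts for the white silica powder together with the gaseous products.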
Properties
Silicones exhibit many useful characteristics, including:
Low thermal conductivity
Low chemical reactivity
Low toxicity
Thermal stability (constancy of properties over a wide temperature range of )
The ability to repel water and form watertight seals.
Does not stick to many substrates, but adheres very well to others, e.g. glass
Does not support microbiological growth
Resistance to creasing and wrinkling
Resistance to oxygen, ozone, and ultraviolet (UV) light. This property has led to the widespread use of silicones in the construction industry (e.g. coatings, fire protection, glazing seals) and the automotive industry (external gaskets, external trim).
Electrical insulation properties. Because silicone can be formulated to be electrically insulative or conductive, it is suitable for a wide range of electrical applications.
High gas permeability: at room temperature (25 °C), the permeability of silicone rubber for such gases as oxygen is approximately 400 times that of butyl rubber, making silicone useful for medical applications in which increased aeration is desired. Conversely, silicone rubbers cannot be used where gas-tight seals are necessary, such as seals for high-pressure gases or high vacuum.
Silicone can be developed into rubber sheeting, where it has other properties, such as being FDA compliant. This extends the uses of silicone sheeting to industries that demand hygiene, for example, food and beverage, and pharmaceuticals.
Applications
Silicones are used in many products. Ullmann's Encyclopedia of Industrial Chemistry lists the following major categories of application: Electrical (e.g. insulation), electronics (e.g., coatings), household (e.g., sealants and cooking utensils), automobile (e.g. gaskets), airplane (e.g., seals), office machines (e.g. keyboard pads), medicine and dentistry (e.g. tooth impression molds), textiles and paper (e.g. coatings). For these applications, an estimated 400,000 tonnes of silicones were produced in 1991. Specific examples, both large and small are presented below.
Automotive
In the automotive field, silicone grease is typically used as a lubricant for brake components since it is stable at high temperatures, is not water-soluble, and is far less likely than other lubricants to foul. DOT 5 brake fluids are based on liquid silicones.
Automotive spark plug wires are insulated by multiple layers of silicone to prevent sparks from jumping to adjacent wires, causing misfires. Silicone tubing is sometimes used in automotive intake systems (especially for engines with forced induction).
Sheet silicone is used to manufacture gaskets used in automotive engines, transmissions, and other applications.
Automotive body manufacturing plants and paint shops avoid silicones, as trace contamination may cause "fish eyes", which are small, circular craters which mar a smooth finish.
Additionally, silicone compounds such as silicone rubber are used as coatings and sealants for airbags; the high strength of silicone rubber makes it an optimal adhesive and sealant for high impact airbags. Silicones in combination with thermoplastics provide improvements in scratch and mar resistance and lowered coefficient of friction.
Aerospace
Silicone is a widely used material in the aerospace industry due to its sealing properties, stability across an extreme temperature range, durability, sound dampening and anti-vibration qualities, and naturally flame retardant properties. Maintaining extreme functionality is paramount for passenger safety in the aerospace industry, so each component on an aircraft requires high-performance materials.
Specially developed aerospace grades of silicone are stable from ; these grades can be used in the construction of gaskets for windows and cabin doors. During operation, aircraft go through large temperature fluctuations in a relatively short period of time, from the ambient temperatures when on the ground in hot countries to sub-zero temperatures when flying at high altitude. Silicone rubber can be molded with tight tolerances ensuring gaskets form airtight seals both on the ground and in the air, where atmospheric pressure decreases.
Silicone rubber's resistance to heat corrosion enables it to be used for gaskets in aircraft engines where it will outlast other types of rubber, both improving aircraft safety and reducing maintenance costs. The silicone acts to seal instrument panels and other electrical systems in the cockpit, protecting printed circuit boards from the risks of extreme altitude such as moisture and extremely low temperature. Silicone can be used as a sheath to protect wires and electrical components from any dust or ice that may creep into a plane's inner workings.
Air travel involves considerable noise and vibration: powerful engines, landings, and high speeds all need to be considered to ensure passenger comfort and safe operation of the aircraft. As silicone rubber has exceptional noise-reduction and anti-vibration properties, it can be formed into small components and fitted into small gaps, protecting equipment such as overhead lockers, vent ducts, hatches, entertainment system seals, and LED lighting systems from unwanted vibration.
Solid propellant
Polydimethylsiloxane (PDMS)-based binders, together with ammonium perchlorate (NH4ClO4), are used as fast-burning solid propellants in rockets.
Building construction
The strength and reliability of silicone rubber are widely acknowledged in the construction industry. One-part silicone sealants and caulks are in common use to seal gaps, joints and crevices in buildings. One-part silicones cure by absorbing atmospheric moisture, which simplifies installation. In plumbing, silicone grease is typically applied to O-rings in brass taps and valves, preventing lime from sticking to the metal.
Structural silicone has also been used in curtain wall building façades since 1974, when the Art Institute of Chicago became the first building to receive exterior glass fixed only with the material. Silicone membranes have been used to cover and restore industrial roofs, thanks to their extreme UV resistance and ability to maintain waterproof performance for decades.
3D printing
Silicone rubber can be 3D printed (liquid deposition modelling, LDM) using pump-nozzle extrusion systems. Standard silicone formulations are optimized to be used by extrusion and injection moulding machines and are not applicable in LDM-based 3D printing. The rheological behavior and the pot life need to be adjusted for use with LDM.
3D printing also requires the use of a removable support material that is compatible with the silicone rubber.
Coatings
Silicone films can be applied to such silica-based substrates as glass to form a covalently bonded hydrophobic coating. Such coatings were developed for use on aircraft windshields to repel water and to preserve visibility, without requiring mechanical windshield wipers which are impractical at supersonic speeds. Similar treatments were eventually adapted to the automotive market in products marketed by Rain-X and others.
Many fabrics can be coated or impregnated with silicone to form a strong, waterproof composite such as silnylon.
A silicone polymer can be suspended in water by using stabilizing surfactants. This allows water-based formulations to be used to deliver many ingredients that would otherwise require a stronger solvent, or be too viscous to use effectively. For example, a waterborne formulation using a silane's reactivity and penetration ability into a mineral-based surface can be combined with water-beading properties from a siloxane to produce a more-useful surface protection product.
Cookware
As a low-taint, non-toxic material, silicone can be used where contact with food is required. Silicone is becoming an important product in the cookware industry, particularly bakeware and kitchen utensils.
Silicone is used as an insulator in heat-resistant potholders and similar items; however, it is more conductive of heat than similar less dense fiber-based products. Silicone oven gloves are able to withstand temperatures up to , making it possible to reach into boiling water.
Other products include molds for chocolate, ice, cookies, muffins, and various other foods; non-stick bakeware and reusable mats used on baking sheets; steamers, egg boilers or poachers; cookware lids, pot holders, trivets, and kitchen mats.
Defoaming
Silicones are used as active compounds in defoamers due to their low water solubility and good spreading properties.
Dry cleaning
Liquid silicone can be used as a dry cleaning solvent, providing an alternative to the traditional chlorine-containing perchloroethylene (perc) solvent. The use of silicones in dry cleaning reduces the environmental effect of a typically high-polluting industry.
Electronics
Electronic components are sometimes encased in silicone to increase stability against mechanical and electrical shock, radiation and vibration, a process called "potting". Silicones are used where durability and high performance are demanded of components under extreme environmental conditions, such as in space (satellite technology). They are selected over polyurethane or epoxy encapsulation when a wide operating temperature range is required (−65 to 315 °C). Silicones also have the advantage of little exothermic heat rise during cure, low toxicity, good electrical properties, and high purity.
Silicones are often components of thermal pastes used to improve heat transfer from power-dissipating electronic components to heat sinks.
The use of silicones in electronics is not without problems, however. Silicones are relatively expensive and can be attacked by certain solvents. Silicone easily migrates as either a liquid or vapor onto other components. Silicone contamination of electrical switch contacts can lead to failures by causing an increase in contact resistance, often late in the life of the contact, well after any testing is completed. Use of silicone-based spray products in electronic devices during maintenance or repairs can cause later failures.
Firestops
Silicone foam has been used in North American buildings in an attempt to firestop openings within the fire-resistance-rated wall and floor assemblies to prevent the spread of flames and smoke from one room to another. When properly installed, silicone-foam firestops can be fabricated for building code compliance. Advantages include flexibility and high dielectric strength. Disadvantages include combustibility (hard to extinguish) and significant smoke development.
Silicone-foam firestops have been the subject of controversy and press attention due to smoke development from pyrolysis of combustible components within the foam, hydrogen gas escape, shrinkage, and cracking. These problems have led to reportable events among licensees (operators of nuclear power plants) of the Nuclear Regulatory Commission (NRC).
Silicone firestops are also used in aircraft.
Jewelry
Silicone is a popular alternative to traditional metals (such as silver and gold) for jewelry, specifically rings. Silicone rings are commonly worn in professions where metal rings can lead to injuries, for example through electrical conduction or ring avulsion. During the mid-2010s, some professional athletes began wearing silicone rings as an alternative during games.
Lubricants
Silicone greases are used for many purposes, such as bicycle chains, airsoft gun parts, and a wide range of other mechanisms. Typically, a dry-set lubricant is delivered with a solvent carrier to penetrate the mechanism. The solvent then evaporates, leaving a clear film that lubricates but does not attract dirt and grit as much as an oil-based or other traditional "wet" lubricant.
Silicone personal lubricants are also available for use in medical procedures or sexual activity.
Medicine and cosmetic surgery
Silicone is used in microfluidics, seals, gaskets, shrouds, and other applications requiring high biocompatibility. Additionally, the gel form is used in bandages and dressings, breast implants, testicle implants, pectoral implants, contact lenses, and a variety of other medical uses.
Scar treatment sheets are often made of medical grade silicone due to its durability and biocompatibility. Polydimethylsiloxane (PDMS) is often used for this purpose, since its specific crosslinking results in a flexible and soft silicone with high durability and tack. It has also been used as the hydrophobic block of amphiphilic synthetic block copolymers used to form the vesicle membrane of polymersomes.
Illicit cosmetic silicone injections may induce chronic and permanent diffusion of silicone into the blood, with dermatologic complications.
Ophthalmology uses many products such as silicone oil used to replace the vitreous humor following vitrectomy, silicone intraocular lenses following cataract extraction, silicone tubes to keep a nasolacrimal passage open following dacryocystorhinostomy, canalicular stents for canalicular stenosis, punctal plugs for punctal occlusion in dry eyes, silicone rubber and bands as an external tamponade in tractional retinal detachment, and anteriorly-located break in rhegmatogenous retinal detachment.
Addition-cure and condensation-cure silicones (e.g. polyvinyl siloxane) find wide application as dental impression materials due to their hydrophobic properties and thermal stability.
Moldmaking
Two-part silicone systems are used as rubber molds to cast resins, foams, rubber, and low-temperature alloys. A silicone mold generally requires little or no mold-release or surface preparation, as most materials do not adhere to silicone. For experimental uses, ordinary one-part silicone can be used to make molds or to mold into shapes. If needed, common vegetable cooking oils or petroleum jelly can be used on mating surfaces as a mold-release agent.
Silicone cooking molds used as bakeware do not require coating with cooking oil; in addition, the flexibility of the rubber allows the baked food to be easily removed from the mold after cooking.
Personal care
Silicones are ingredients widely used in skincare, color cosmetic and hair care applications. Some silicones, notably the amine-functionalized amodimethicones, are excellent hair conditioners, providing improved combability, feel, and softness, and lessening frizz. The phenyl dimethicones, in another silicone family, are used in reflection-enhancing and color-correcting hair products, where they increase shine and glossiness (and possibly impart subtle color changes). Phenyltrimethicones, unlike the conditioning amodimethicones, have refractive indices (typically 1.46) close to that of a human hair (1.54). However, if included in the same formulation, amodimethicone and phenyltrimethicone interact and dilute each other, making it difficult to achieve both high shine and excellent conditioning in the same product.
Silicone rubber is commonly used in baby bottle nipples (teats) for its cleanliness, aesthetic appearance, and low extractable content.
Silicones are used in shaving products and personal lubricants.
Toys and hobbies
Silly Putty and similar materials are composed of the silicones dimethyl siloxane, polydimethylsiloxane, and decamethyl cyclopentasiloxane, along with other ingredients. This substance is noted for its unusual characteristics, e.g., that it bounces but breaks when given a sharp blow; it will also flow like a liquid and form a puddle given enough time.
Silicone "rubber bands" are a long-lasting, popular replacement for real rubber bands in the "rubber band loom" toys that became a fad in 2013, at two to four times the price (as of 2014). Silicone bands also come in bracelet sizes that can be custom embossed with a name or message. Large silicone bands are also sold as utility tie-downs.
Formerol is a silicone rubber (marketed as Sugru) used as an arts-and-crafts material, as its plasticity allows it to be molded by hand like modeling clay. It hardens at room temperature and adheres to various substances, including glass and aluminum.
Oogoo is an inexpensive silicone clay, which can be used as a substitute for Sugru.
In making aquariums, manufacturers now commonly use 100% silicone sealant to join glass plates. Glass joints made with silicone sealant can withstand great pressure, making obsolete the original aquarium construction method of angle-iron and putty. This same silicone is used to make hinges in aquarium lids or for minor repairs. However, not all commercial silicones are safe for aquarium manufacture, nor is silicone used for the manufacture of acrylic aquariums as silicones do not have long-term adhesion to plastics.
Special effects
Silicone is used in special effects as a material for simulating realistic skin, either for prosthetic makeup, prop body parts, or rubber masks. Platinum silicones are ideal for simulating flesh and skin due to their strength, firmness, and translucency, creating a convincing effect.
Silicone masks have an advantage over latex masks in that, because of the material properties, the mask hugs the wearer's face and moves in a realistic manner with the wearer's facial expressions. Silicone is often used as a hypoallergenic substitute for foam latex prosthetics.
Marketing
The leading global manufacturers of silicone base materials belong to three regional organizations: the European Silicone Center (CES) in Brussels, Belgium; the Silicones Environmental, Health, and Safety Center (SEHSC) in Herndon, Virginia, US; and the Silicone Industry Association of Japan (SIAJ) in Tokyo, Japan. Dow Corning Silicones, Evonik Industries, Momentive Performance Materials, Milliken and Company (SiVance Specialty Silicones), Shin-Etsu Silicones, Wacker Chemie, Bluestar Silicones, JNC Corporation, Wacker Asahikasei Silicone, and Dow Corning Toray represent the collective membership of these organizations. A fourth organization, the Global Silicone Council (GSC) acts as an umbrella structure over the regional organizations. All four are non-profit, having no commercial role; their primary missions are to promote the safety of silicones from a health, safety, and environmental perspective. As the European chemical industry is preparing to implement the Registration, Evaluation, and Authorisation of Chemicals (REACH) legislation, CES is leading the formation of a consortium of silicones, silanes, and siloxanes producers and importers to facilitate data and cost-sharing.
Safety and environmental considerations
Silicone compounds are pervasive in the environment. Particular silicone compounds, cyclic siloxanes D4 and D5, are air and water pollutants and have negative health effects on test animals. They are used in various personal care products. The European Chemicals Agency found that "D4 is a persistent, bioaccumulative and toxic (PBT) substance and D5 is a very persistent, very bioaccumulative (vPvB) substance". Other silicones biodegrade readily, a process that is accelerated by a variety of catalysts, including clays. Cyclic silicones have been shown to give rise to silanols during biodegradation in mammals. The resulting silanediols and silanetriols are capable of inhibiting hydrolytic enzymes such as thermolysin and acetylcholinesterase. However, the doses required for inhibition are orders of magnitude higher than those resulting from the accumulated exposure to consumer products containing cyclomethicone.
At around in an oxygen-containing atmosphere, polydimethylsiloxane releases traces of formaldehyde (but lesser amounts than other common materials such as polyethylene). At this temperature, silicones were found to have lower formaldehyde generation than mineral oil and plastics (less than 3 to 48 μg CH2O/(g·hr) for a high consistency silicone rubber, versus around 400 μg CH2O/(g·hr) for plastics and mineral oil). By , copious amounts of formaldehyde have been found to be produced by all silicones (1,200 to 4,600 μg CH2O/(g·hr)).
Some people have been found to develop silicone allergies or extreme sensitivity, particularly after prolonged exposure to certain types of silicone products such as cosmetics, medical equipment (including CPAP masks), and implanted medical devices.
Similar substances
Compounds containing silicon–oxygen double bonds, now called silanones, but which could deserve the name "silicone", have long been identified as intermediates in gas-phase processes such as chemical vapor deposition in microelectronics production, and in the formation of ceramics by combustion. However, they have a strong tendency to polymerize into siloxanes. The first stable silanone was obtained in 2014 by A. Filippou and others.
| Physical sciences | Organic compounds | null |
65845 | https://en.wikipedia.org/wiki/Hypothyroidism | Hypothyroidism | Hypothyroidism (also called underactive thyroid, low thyroid or hypothyreosis) is a disorder of the endocrine system in which the thyroid gland does not produce enough thyroid hormones. It can cause a number of symptoms, such as poor ability to tolerate cold, extreme fatigue, muscle aches, constipation, slow heart rate, depression, and weight gain. Occasionally there may be swelling of the front part of the neck due to goiter. Untreated cases of hypothyroidism during pregnancy can lead to delays in growth and intellectual development in the baby or congenital iodine deficiency syndrome.
Worldwide, too little iodine in the diet is the most common cause of hypothyroidism. Hashimoto's thyroiditis is the most common cause of hypothyroidism in countries with sufficient dietary iodine. Less common causes include previous treatment with radioactive iodine, injury to the hypothalamus or the anterior pituitary gland, certain medications, a lack of a functioning thyroid at birth, or previous thyroid surgery. The diagnosis of hypothyroidism, when suspected, can be confirmed with blood tests measuring thyroid-stimulating hormone (TSH) and thyroxine (T4) levels.
Salt iodization has prevented hypothyroidism in many populations. Thyroid hormone replacement with levothyroxine treats hypothyroidism. Medical professionals adjust the dose according to symptoms and normalization of the thyroxine and TSH levels. Thyroid medication is safe in pregnancy. Although an adequate amount of dietary iodine is important, too much may worsen specific forms of hypothyroidism.
Worldwide about one billion people are estimated to be iodine-deficient; however, it is unknown how often this results in hypothyroidism. In the United States, hypothyroidism occurs in approximately 5% of people. Subclinical hypothyroidism, a milder form of hypothyroidism characterized by normal thyroxine levels and an elevated TSH level, is thought to occur in 4.3–8.5% of people in the United States. Subclinical hypothyroidism has been associated with an increased risk of atrial fibrillation (AF) and is the most frequent thyroid abnormality in acute new-onset AF. Hypothyroidism is more common in women than in men. People over the age of 60 are more commonly affected. Dogs are also known to develop hypothyroidism, as are cats and horses, albeit more rarely. The word hypothyroidism is from Greek hypo- 'reduced', thyreos 'shield', and eidos 'form', where the two latter parts refer to the thyroid gland.
Signs and symptoms
People with hypothyroidism often have no or only mild symptoms. Numerous symptoms and signs are associated with hypothyroidism and can be related to the underlying cause, or a direct effect of not having enough thyroid hormones. Hashimoto's thyroiditis may present with the mass effect of a goiter (enlarged thyroid gland). In middle-aged women, the symptoms may be mistaken for those of menopause.
Delayed relaxation after testing the ankle jerk reflex is a characteristic sign of hypothyroidism and is associated with the severity of the hormone deficit.
Myxedema coma
Myxedema coma is a rare but life-threatening state of extreme hypothyroidism. It may occur in those with established hypothyroidism when they develop an acute illness. Myxedema coma can be the first presentation of hypothyroidism. People with myxedema coma typically have a low body temperature without shivering, confusion, a slow heart rate and reduced breathing effort. There may be physical signs suggestive of hypothyroidism, such as skin changes or enlargement of the tongue.
Pregnancy
Even mild or subclinical hypothyroidism may lead to infertility and an increased risk of miscarriage. Hypothyroidism in early pregnancy, even with limited or no symptoms, may increase the risk of pre-eclampsia, offspring with lower intelligence, and the risk of infant death around the time of birth. Women are affected by hypothyroidism in 0.3–0.5% of pregnancies. Subclinical hypothyroidism during pregnancy is associated with gestational diabetes, low birth-weight, placental abruption, and the birth of the baby before 37 weeks of pregnancy.
Children
Newborn children with hypothyroidism may have normal birth weight and height (although the head may be larger than expected and the posterior fontanelle may be open). Some may have drowsiness, decreased muscle tone, poor weight gain, a hoarse-sounding cry, feeding difficulties, constipation, an enlarged tongue, umbilical hernia, dry skin, a decreased body temperature, and jaundice. A goiter is rare, although it may develop later in children who have a thyroid gland that does not produce functioning thyroid hormone. A goiter may also develop in children growing up in areas with iodine deficiency. Normal growth and development may be delayed, and not treating infants may lead to an intellectual impairment (IQ 6–15 points lower in severe cases). Other problems include the following: difficulty with large scale and fine motor skills and coordination, reduced muscle tone, squinting, decreased attention span, and delayed speaking. Tooth eruption may be delayed.
In older children and adolescents, the symptoms of hypothyroidism may include fatigue, cold intolerance, sleepiness, muscle weakness, constipation, a delay in growth, overweight for height, pallor, coarse and thick skin, increased body hair, irregular menstrual cycles in girls, and delayed puberty. Signs may include delayed relaxation of the ankle reflex and a slow heartbeat. A goiter may be present with a completely enlarged thyroid gland; sometimes only part of the thyroid is enlarged and it can be knobby.
Causes
Hypothyroidism is caused by inadequate function of the gland itself (primary hypothyroidism), inadequate stimulation by thyroid-stimulating hormone from the pituitary gland (secondary hypothyroidism), or inadequate release of thyrotropin-releasing hormone from the brain's hypothalamus (tertiary hypothyroidism). Primary hypothyroidism is about a thousandfold more common than central hypothyroidism. Central hypothyroidism is the name used for secondary and tertiary hypothyroidism since the hypothalamus and pituitary gland are at the center of thyroid hormone control.
Iodine deficiency is the most common cause of primary hypothyroidism and endemic goiter worldwide. In areas of the world with sufficient dietary iodine, hypothyroidism is most commonly caused by the autoimmune disease Hashimoto's thyroiditis (chronic autoimmune thyroiditis). Hashimoto's may be associated with a goiter. It is characterized by infiltration of the thyroid gland with T lymphocytes and autoantibodies against specific thyroid antigens such as thyroid peroxidase, thyroglobulin and the TSH receptor.
A more uncommon cause of hypothyroidism is estrogen dominance. It is one of the most common hormone imbalance problems in women, though it is not always linked with hypothyroidism. There are three different types of estrogens: estrone (E1), estradiol (E2), and estriol (E3), with estradiol being the most potent of the three. Women with estrogen dominance have estradiol levels of 115 pg/ml on day 3 of their cycle. However, estrogen dominance is not just about an overabundance of estradiol but is more likely correlated with an imbalance between estradiol and progesterone. Women who have too much unopposed estrogen, that is, estrogen that does not have enough counterbalancing progesterone in their bodies, commonly have unbalanced thyroid levels, in addition to excess growths within their uteri.
Estradiol disrupts thyroid hormone availability because high blood levels of estrogen signal the liver to increase the production of thyroxine-binding globulin (TBG). This binding protein attaches to thyroid hormone, reducing the amount of free T3 and T4 available for use by cells. Without enough free T3 and T4, the body's cellular function begins to slow down.
After women give birth, about 5% develop postpartum thyroiditis which can occur up to nine months afterwards. This is characterized by a short period of hyperthyroidism followed by a period of hypothyroidism; 20–40% remain permanently hypothyroid.
Autoimmune thyroiditis (Hashimoto's) is associated with other immune-mediated diseases such as diabetes mellitus type 1, pernicious anemia, myasthenia gravis, celiac disease, rheumatoid arthritis and systemic lupus erythematosus. It may occur as part of autoimmune polyendocrine syndrome (type 1 and type 2).
Iatrogenic hypothyroidism can be surgical (a result of thyroidectomy, usually for thyroid nodules or cancer) or following radioiodine ablation (usually for Graves' disease).
Pathophysiology
Thyroid hormone is required for the normal functioning of numerous tissues in the body. In healthy individuals, the thyroid gland predominantly secretes thyroxine (T4), which is converted into triiodothyronine (T3) in other organs by the selenium-dependent enzyme iodothyronine deiodinase. Triiodothyronine binds to the thyroid hormone receptor in the nucleus of cells, where it stimulates the turning on of particular genes and the production of specific proteins. Additionally, the hormone binds to integrin αvβ3 on the cell membrane, thereby stimulating the sodium–hydrogen antiporter and processes such as formation of blood vessels and cell growth. In blood, almost all thyroid hormone (99.97%) is bound to plasma proteins such as thyroxine-binding globulin; only the free unbound thyroid hormone is biologically active.
Electrocardiograms are abnormal in both primary overt hypothyroidism and subclinical hypothyroidism. T3 and TSH are essential for the regulation of cardiac electrical activity. Prolonged ventricular repolarization and atrial fibrillation are often seen in hypothyroidism.
The thyroid gland is the only source of thyroid hormone in the body; the process requires iodine and the amino acid tyrosine. The gland takes up iodine in the bloodstream and incorporates it into thyroglobulin molecules. The process is controlled by the thyroid-stimulating hormone (TSH, thyrotropin), which is secreted by the pituitary. Not enough iodine, or not enough TSH, can decrease thyroid hormone production.
The hypothalamic–pituitary–thyroid axis plays a key role in maintaining thyroid hormone levels within normal limits. Production of TSH by the anterior pituitary gland is stimulated in turn by thyrotropin-releasing hormone (TRH), released from the hypothalamus. Production of TSH and TRH is decreased by thyroxine by a negative feedback process. Not enough TRH, which is uncommon, can lead to insufficient TSH release and therefore insufficient thyroid hormone production.
Pregnancy leads to marked changes in thyroid hormone physiology. The gland increases in size by 10%, thyroxine production increases by 50%, and iodine requirements increase. Many women have normal thyroid function but have immunological evidence of thyroid autoimmunity (as evidenced by autoantibodies) or are iodine deficient, and develop evidence of hypothyroidism before or after giving birth.
Diagnosis
Laboratory testing of thyroid stimulating hormone levels in the blood is considered the best initial test for hypothyroidism; a second TSH level is often obtained several weeks later for confirmation. Levels may be abnormal in the context of other illnesses, and TSH testing in hospitalized people is discouraged unless thyroid dysfunction is strongly suspected as the cause of the acute illness. An elevated TSH level indicates that the thyroid gland is not producing enough thyroid hormone, and free T4 levels are then often obtained. Measuring T3 is discouraged by the AACE in the assessment for hypothyroidism. In England and Wales, the National Institute for Health and Care Excellence (NICE) recommends routine T4 testing in children, and T3 testing in both adults and children if central hypothyroidism is suspected and the TSH is low. There are several symptom rating scales for hypothyroidism; they provide a degree of objectivity but have limited use for diagnosis.
Many cases of hypothyroidism are associated with mild elevations in creatine kinase and liver enzymes in the blood. They typically return to normal when hypothyroidism has been fully treated. Levels of cholesterol, low-density lipoprotein and lipoprotein (a) can be elevated; the impact of subclinical hypothyroidism on lipid parameters is less well-defined.
Very severe hypothyroidism and myxedema coma are characteristically associated with low sodium levels in the blood together with elevations in antidiuretic hormone, as well as acute worsening of kidney function due to several causes. For most causes, however, it is unclear if the relationship is causal.
A diagnosis of hypothyroidism without any lumps or masses felt within the thyroid gland does not require thyroid imaging; however, if the thyroid feels abnormal, diagnostic imaging is then recommended. The presence of antibodies against thyroid peroxidase (TPO) makes it more likely that thyroid nodules are caused by autoimmune thyroiditis, but if there is any doubt, a needle biopsy may be required.
Central
If the TSH level is normal or low and serum free T4 levels are low, this is suggestive of central hypothyroidism (not enough TSH or TRH secretion by the pituitary gland or hypothalamus). There may be other features of hypopituitarism, such as menstrual cycle abnormalities and adrenal insufficiency. There might also be symptoms of a pituitary mass such as headaches and vision changes. Central hypothyroidism should be investigated further to determine the underlying cause.
Overt
In overt primary hypothyroidism, TSH levels are high and T4 and T3 levels are low. Overt hypothyroidism may also be diagnosed in those who have a TSH on multiple occasions of greater than 5 mIU/L, appropriate symptoms, and only a borderline low T4. It may also be diagnosed in those with a TSH of greater than 10 mIU/L.
Subclinical
Subclinical hypothyroidism is a biochemical diagnosis characterized by an elevated serum TSH level, but with a normal serum free thyroxine level. The incidence of subclinical hypothyroidism is estimated to be 3–15%, and a higher incidence is seen in elderly people, females and those with lower iodine levels. Subclinical hypothyroidism is most commonly caused by autoimmune thyroid diseases, especially Hashimoto's thyroiditis. The presentation of subclinical hypothyroidism is variable and classic signs and symptoms of hypothyroidism may not be observed. Of people with subclinical hypothyroidism, a proportion will develop overt hypothyroidism each year. In those with detectable antibodies against thyroid peroxidase (TPO), this occurs in 4.3%, while in those with no detectable antibodies, this occurs in 2.6%. In addition to detectable anti-TPO antibodies, other risk factors for conversion from subclinical to overt hypothyroidism include female sex, higher TSH levels, and lower levels of free T4 within the normal range. Those with subclinical hypothyroidism and detectable anti-TPO antibodies who do not require treatment should have repeat thyroid function testing more frequently (e.g. every 6 months) compared with those who do not have antibodies.
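As a rough illustration of how the biochemical patterns described in this section fit together, the following minimal Python sketch classifies a single TSH/free T4 pair. It is not a clinical tool: the function name, the assumed free T4 reference range, and the TSH upper limit are illustrative assumptions, and real laboratory ranges and guideline cut-offs vary.

# Illustrative sketch only, not medical advice. The reference ranges and
# cut-offs below are simplified assumptions based on the thresholds quoted
# in this article; real laboratory ranges vary.

def classify_thyroid_status(tsh_miu_l, free_t4_ng_dl,
                            t4_low=0.8, t4_high=1.8, tsh_high=4.0):
    """Classify a TSH/free T4 pair into the rough patterns described above."""
    if tsh_miu_l > 10:
        return "overt hypothyroidism (TSH > 10 mIU/L)"
    if tsh_miu_l > tsh_high and free_t4_ng_dl < t4_low:
        return "overt primary hypothyroidism (high TSH, low free T4)"
    if tsh_miu_l > tsh_high and t4_low <= free_t4_ng_dl <= t4_high:
        return "subclinical hypothyroidism (high TSH, normal free T4)"
    if tsh_miu_l <= tsh_high and free_t4_ng_dl < t4_low:
        return "possible central hypothyroidism (normal/low TSH, low free T4)"
    return "no hypothyroid pattern on these two values"

# Example: a mildly elevated TSH with a normal free T4 gives the
# subclinical pattern described in this section.
print(classify_thyroid_status(6.2, 1.1))

In practice, as noted above, a repeat TSH measurement and the clinical context are required before any of these labels is applied.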
Pregnancy
During pregnancy, the thyroid gland must produce 50% more thyroid hormone to provide enough thyroid hormone for the developing fetus and the expectant mother. In pregnancy, free thyroxine levels may be lower than anticipated due to increased binding to thyroid binding globulin and decreased binding to albumin. They should either be corrected for the stage of pregnancy, or total thyroxine levels should be used instead for diagnosis. TSH values may also be lower than normal (particularly in the first trimester) and the normal range should be adjusted for the stage of pregnancy.
In pregnancy, subclinical hypothyroidism is defined as a TSH between 2.5 and 10 mIU/L with a normal thyroxine level, while those with TSH above 10 mIU/L are considered to be overtly hypothyroid even if the thyroxine level is normal. Antibodies against TPO may be important in making treatment decisions, and should, therefore, be determined in women with abnormal thyroid function tests.
Determination of TPO antibodies may be considered as part of the assessment of recurrent miscarriage, as subtle thyroid dysfunction can be associated with pregnancy loss, but this recommendation is not universal, and the presence of thyroid antibodies may not predict future outcomes.
Prevention
Hypothyroidism may be prevented in a population by adding iodine to commonly used foods. This public health measure has eliminated endemic childhood hypothyroidism in countries where it was once common. In addition to promoting the consumption of iodine-rich foods such as dairy and fish, many countries with moderate iodine deficiency have implemented universal salt iodization. Encouraged by the World Health Organization, 70% of the world's population across 130 countries are receiving iodized salt. In some countries, iodized salt is added to bread. Despite this, iodine deficiency has reappeared in some Western countries due to attempts to reduce salt intake.
Pregnant and breastfeeding women, who require 66% more daily iodine than non-pregnant women, may still not be getting enough iodine. The World Health Organization recommends a daily intake of 250 μg for pregnant and breastfeeding women. As many women will not achieve this from dietary sources alone, the American Thyroid Association recommends a 150 μg daily supplement by mouth.
Screening
Screening for hypothyroidism is performed in the newborn period in many countries, generally using TSH. This has led to the early identification of many cases and thus the prevention of developmental delay. It is the most widely used newborn screening test worldwide. While TSH-based screening will identify the most common causes, the addition of T4 testing is required to pick up the rarer central causes of neonatal hypothyroidism. If T4 determination is included in the screening done at birth, this will identify cases of congenital hypothyroidism of central origin in 1:16,000 to 1:160,000 children. Considering that these children usually have other pituitary hormone deficiencies, early identification of these cases may prevent complications.
In adults, widespread screening of the general population is debated. Some organizations (such as the United States Preventive Services Task Force) state that evidence is insufficient to support routine screening, while others (such as the American Thyroid Association) recommend either intermittent testing above a certain age in all sexes or only in women. Targeted screening may be appropriate in a number of situations where hypothyroidism is common: other autoimmune diseases, a strong family history of thyroid disease, those who have received radioiodine or other radiation therapy to the neck, those who have previously undergone thyroid surgery, those with an abnormal thyroid examination, those with psychiatric disorders, people taking amiodarone or lithium, and those with a number of health conditions (such as certain heart and skin conditions). Yearly thyroid function tests are recommended in people with Down syndrome, as they are at higher risk of thyroid disease. Guidelines for England and Wales from the National Institute for Health and Care Excellence (NICE) recommend testing for thyroid disease in people with type 1 diabetes and new-onset atrial fibrillation, and suggest testing in those with depression or unexplained anxiety (all ages), and in children with abnormal growth or an unexplained change in behavior or school performance. NICE also recommends screening for celiac disease in people with a diagnosis of autoimmune thyroid disease.
Management
Hormone replacement
Most people with hypothyroidism symptoms and confirmed thyroxine deficiency are treated with a synthetic long-acting form of thyroxine, known as levothyroxine (L-thyroxine). In young and otherwise healthy people with overt hypothyroidism, a full replacement dose (adjusted by weight) can be started immediately; in the elderly and people with heart disease a lower starting dose is recommended to prevent oversupplementation and the risk of complications. Lower doses may be sufficient in those with subclinical hypothyroidism, while people with central hypothyroidism may require a higher than average dose.
Blood free thyroxine and TSH levels are monitored to help determine whether the dose is adequate. This is done 4–8 weeks after the start of treatment or a change in levothyroxine dose. Once the adequate replacement dose has been established, the tests can be repeated after 6 and then 12 months, unless there is a change in symptoms. Normalization of TSH does not mean that other abnormalities associated with hypothyroidism improve entirely, such as elevated cholesterol levels.
In people with central/secondary hypothyroidism, TSH is not a reliable marker of hormone replacement and decisions are based mainly on the free T4 level. Levothyroxine is best taken 30–60 minutes before breakfast, or four hours after food, as certain substances such as food and calcium can inhibit the absorption of levothyroxine. There is no direct way of increasing thyroid hormone secretion by the thyroid gland.
Liothyronine
Treatment with liothyronine (synthetic T3) alone has not received enough study to make a recommendation as to its use; due to its shorter half-life it would need to be taken more often than levothyroxine.
Adding liothyronine to levothyroxine has been suggested as a measure to provide better symptom control, but this has not been confirmed by studies. In 2007, the British Thyroid Association stated that combined T4 and T3 therapy carried a higher rate of side effects and no benefit over T4 alone. Similarly, American guidelines discourage combination therapy due to a lack of evidence, although they acknowledge that some people feel better when receiving combination treatment. Guidelines by NICE for England and Wales discourage liothyronine.
People with hypothyroidism who do not feel well despite optimal levothyroxine dosing may request adjunctive treatment with liothyronine. A 2012 guideline from the European Thyroid Association recommends that support should be offered concerning the chronic nature of the disease and that other causes of the symptoms should be excluded. The addition of liothyronine should be regarded as experimental, initially only for a trial period of 3 months, and in a set ratio to the current dose of levothyroxine. The guideline explicitly aims to enhance the safety of this approach and to counter its indiscriminate use.
Desiccated animal thyroid
Desiccated thyroid extract is an animal-based thyroid gland extract, most commonly from pigs. It is a combination therapy, containing forms of T4 and T3. It also contains calcitonin (a hormone produced in the thyroid gland involved in the regulation of calcium levels), T1 and T2; these are not present in synthetic hormone medication. This extract was once a mainstream hypothyroidism treatment, but its use today is unsupported by evidence; British Thyroid Association and American professional guidelines discourage its use, as does NICE.
Subclinical hypothyroidism
There is no evidence of a benefit from treating subclinical hypothyroidism in those who are not pregnant, and there are potential risks of overtreatment. Untreated subclinical hypothyroidism may be associated with a modest increase in the risk of coronary artery disease when the TSH is over 10 mIU/L. There may be an increased risk for cardiovascular death. A 2007 review found no benefit of thyroid hormone replacement except for "some parameters of lipid profiles and left ventricular function". There is no association between subclinical hypothyroidism and an increased risk of bone fractures, nor is there a link with cognitive decline.
American guidelines recommend that treatment should be considered in people who have symptoms of hypothyroidism, detectable antibodies against thyroid peroxidase, a history of heart disease, or an increased risk for heart disease, if the TSH is elevated but below 10 mIU/L. American guidelines further recommend universal treatment (independent of risk factors) in those with markedly elevated TSH levels, above 10 mIU/L, because of an increased risk of heart failure or death due to cardiovascular disease. NICE recommends that those with a TSH above 10 mIU/L should be treated in the same way as overt hypothyroidism. Those with an elevated TSH below 10 mIU/L who have symptoms suggestive of hypothyroidism should have a trial of treatment, with the intention of stopping it if the symptoms persist despite normalization of the TSH.
Myxedema coma
Myxedema coma or severe decompensated hypothyroidism usually requires admission to intensive care, close observation, and treatment of abnormalities in breathing, temperature control, blood pressure, and sodium levels. Mechanical ventilation may be required, as well as fluid replacement, vasopressor agents, careful rewarming, and corticosteroids (for possible adrenal insufficiency which can occur together with hypothyroidism). Careful correction of low sodium levels may be achieved with hypertonic saline solutions or vasopressin receptor antagonists. For rapid treatment of hypothyroidism, levothyroxine or liothyronine may be administered intravenously, particularly if the level of consciousness is too low to be able to safely swallow medication. While administration through a nasogastric tube is possible, this may be unsafe and is discouraged.
Pregnancy
In women with known hypothyroidism who become pregnant, it is recommended that serum TSH levels are closely monitored. Levothyroxine should be used to keep TSH levels within the normal range for that trimester. The first-trimester normal range is below 2.5 mIU/L, and the normal range for the second and third trimesters is below 3.0 mIU/L. Treatment should be guided by total (rather than free) thyroxine or by the free T4 index. Similarly to TSH, the thyroxine results should be interpreted according to the appropriate reference range for that stage of pregnancy. The levothyroxine dose often needs to be increased after pregnancy is confirmed, although this is based on limited evidence and some recommend that it is not always required; decisions may need to be based on TSH levels.
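A similarly minimal sketch of the trimester-specific TSH targets quoted above is shown below; the function and its limits are illustrative only and simply encode the upper limits given in this paragraph (2.5 mIU/L in the first trimester, 3.0 mIU/L thereafter), not a clinical protocol.

# Illustrative sketch only, not medical advice; it encodes the trimester-
# specific TSH upper limits quoted in this article.

def tsh_within_pregnancy_target(tsh_miu_l, trimester):
    """Return True if TSH is below the trimester-specific upper limit."""
    upper_limits = {1: 2.5, 2: 3.0, 3: 3.0}  # mIU/L
    if trimester not in upper_limits:
        raise ValueError("trimester must be 1, 2, or 3")
    return tsh_miu_l < upper_limits[trimester]

print(tsh_within_pregnancy_target(2.8, 1))  # False: above the first-trimester limit
print(tsh_within_pregnancy_target(2.8, 2))  # True: within the second-trimester limit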
Women with anti-TPO antibodies who are trying to become pregnant (naturally or by assisted means) may require thyroid hormone supplementation even if the TSH level is normal. This is particularly true if they have had previous miscarriages or have been hypothyroid in the past. Supplementary levothyroxine may reduce the risk of preterm birth and possibly miscarriage. The recommendation is stronger in pregnant women with subclinical hypothyroidism (defined as TSH 2.5–10 mIU/L) who are anti-TPO positive, in view of the risk of overt hypothyroidism. If a decision is made not to treat, close monitoring of the thyroid function (every 4 weeks in the first 20 weeks of pregnancy) is recommended. If anti-TPO is not positive, treatment for subclinical hypothyroidism is not currently recommended. It has been suggested that many of the aforementioned recommendations could lead to unnecessary treatment, in the sense that the TSH cutoff levels may be too restrictive in some ethnic groups; there may be little benefit from treatment of subclinical hypothyroidism in certain cases.
Alternative medicine
The effectiveness and safety of using Chinese herbal medicines to treat hypothyroidism is not known.
Epidemiology
Hypothyroidism is the most frequent endocrine disorder. Worldwide about one billion people are estimated to be iodine deficient; however, it is unknown how often this results in hypothyroidism. In large population-based studies in Western countries with sufficient dietary iodine, 0.3–0.4% of the population have overt hypothyroidism. A larger proportion, 4.3–8.5%, have subclinical hypothyroidism. Undiagnosed hypothyroidism is estimated to affect about 4–7% of community-derived populations in the US and Europe. Of people with subclinical hypothyroidism, 80% have a TSH level below the 10 mIU/L mark regarded as the threshold for treatment. Children with subclinical hypothyroidism often return to normal thyroid function, and a small proportion develops overt hypothyroidism (as predicted by evolving antibody and TSH levels, the presence of celiac disease, and the presence of a goiter).
Women are more likely to develop hypothyroidism than men. In population-based studies, women were seven times more likely than men to have TSH levels above 10 mIU/L. Each year, 2–4% of people with subclinical hypothyroidism will progress to overt hypothyroidism. The risk is higher in those with antibodies against thyroid peroxidase. Subclinical hypothyroidism is estimated to affect approximately 2% of children; in adults, subclinical hypothyroidism is more common in the elderly, and in White people. There is a much higher rate of thyroid disorders, the most common of which is hypothyroidism, in individuals with Down syndrome and Turner syndrome.
Very severe hypothyroidism and myxedema coma are rare, with it estimated to occur in 0.22 per million people a year. The majority of cases occur in women over 60 years of age, although it may happen in all age groups.
Most hypothyroidism is primary in nature. Central/secondary hypothyroidism affects 1:20,000 to 1:80,000 of the population or about one out of every thousand people with hypothyroidism.
History
In 1811, Bernard Courtois discovered iodine was present in seaweed, and iodine intake was linked with goiter size in 1820 by Jean-Francois Coindet. Gaspard Adolphe Chatin proposed in 1852 that endemic goiter was the result of not enough iodine intake, and Eugen Baumann demonstrated iodine in thyroid tissue in 1896.
The first cases of myxedema were recognized in the mid-19th century (the 1870s), but its connection to the thyroid was not discovered until the 1880s when myxedema was observed in people following the removal of the thyroid gland (thyroidectomy). The link was further confirmed in the late 19th century when people and animals who had had their thyroid removed showed improvement in symptoms with transplantation of animal thyroid tissue. The severity of myxedema, and its associated risk of mortality and complications, created interest in discovering effective treatments for hypothyroidism. Transplantation of thyroid tissue demonstrated some efficacy, but recurrences of hypothyroidism were relatively common and sometimes required multiple repeat transplantations of thyroid tissue.
In 1891, the English physician George Redmayne Murray introduced subcutaneously injected sheep thyroid extract, followed shortly after by an oral formulation. Purified thyroxine was introduced in 1914 and in the 1930s synthetic thyroxine became available, although desiccated animal thyroid extract remained widely used. Liothyronine was identified in 1952.
Early attempts at titrating therapy for hypothyroidism proved difficult. After hypothyroidism was found to cause a lower basal metabolic rate, this was used as a marker to guide adjustments in therapy in the early 20th century (around 1915). However, a low basal metabolic rate was known to be non-specific, as it is also present in malnutrition. The first laboratory test to help assess thyroid status was the serum protein-bound iodine, which came into use around the 1950s.
In 1971, the thyroid stimulating hormone (TSH) radioimmunoassay was developed, which was the most specific marker for assessing thyroid status in patients. Many people who were being treated based on basal metabolic rate, minimizing hypothyroid symptoms, or based on serum protein-bound iodine, were found to have excessive thyroid hormone. The following year, in 1972, a T3 radioimmunoassay was developed, and in 1974, a T4 radioimmunoassay was developed.
Other animals
In veterinary practice, dogs are the species most commonly affected by hypothyroidism. The majority of cases occur as a result of primary hypothyroidism, of which two types are recognized: lymphocytic thyroiditis, which is probably immune-driven and leads to destruction and fibrosis of the thyroid gland, and idiopathic atrophy, which leads to the gradual replacement of the gland by fatty tissue. There is often lethargy, cold intolerance, exercise intolerance, and weight gain. Furthermore, skin changes and fertility problems are seen in dogs with hypothyroidism, as well as many other symptoms. The signs of myxedema can be seen in dogs, with prominence of skin folds on the forehead, and cases of myxedema coma are encountered. The diagnosis can be confirmed by a blood test, as the clinical impression alone may lead to overdiagnosis. Lymphocytic thyroiditis is associated with detectable antibodies against thyroglobulin, although they typically become undetectable in advanced disease. Treatment is with thyroid hormone replacement.
Other species that are less commonly affected include cats and horses, as well as other large domestic animals. In cats, hypothyroidism is usually the result of other medical treatments such as surgery or radiation. In young horses, congenital hypothyroidism has been reported predominantly in Western Canada and has been linked with the mother's diet.
| Biology and health sciences | Specific diseases | Health |
65847 | https://en.wikipedia.org/wiki/Vitiligo | Vitiligo | Vitiligo is a chronic autoimmune disorder that causes patches of skin to lose pigment or color. The cause of vitiligo is unknown, but it may be related to immune system changes, genetic factors, stress, or sun exposure. Treatment options include topical medications, light therapy, surgery and cosmetics. The condition can appear on any skin type as lighter, peach-toned patches and can occur anywhere on the body, in patches of any size. The patches can change as areas lose and regain pigment; they tend to remain in roughly the same locations, although they may shift gradually over the years, and large patches do not disappear overnight.
Signs and symptoms
The only sign of vitiligo is the presence of pale patchy areas of depigmented skin which tend to occur on the extremities. Some people may experience itching before a new patch appears. The patches are initially small, but often grow and change shape. When skin lesions occur, they are most prominent on the face, hands and wrists. The loss of skin pigmentation is particularly noticeable around body orifices, such as the mouth, eyes, nostrils, genitalia and umbilicus. Some lesions have increased skin pigment around the edges. Those affected by vitiligo who are stigmatized for their condition may experience depression and similar mood disorders.
Causes
Although multiple hypotheses have been suggested as potential triggers that cause vitiligo, studies strongly imply that changes in the immune system are responsible for the condition. Vitiligo has been proposed to be a multifactorial disease, with genetic susceptibility and environmental factors both thought to play a role. It is hypothesized that damaging environmental factors can disrupt redox reactions necessary for protein folding, so skin cells may initiate the unfolded protein response, which releases cytokines, thus mounting an immune response.
The National Institutes of Health states that sometimes an event, like a sunburn, emotional distress, or exposure to a chemical, can trigger or exacerbate the condition. Skin depigmentation in particular areas in vitiligo can also be triggered by mechanical trauma: this is an example of the Koebner phenomenon. Unlike in other skin diseases, this can be caused by daily activities, especially chronic friction on particular areas of the body.
Immune
Melanin is the pigment that gives skin its color; it is produced by skin cells called melanocytes.
Variations in genes that are part of the immune system or part of melanocytes have both been associated with vitiligo. It is also thought to be caused by the immune system attacking and destroying the melanocytes of the skin. A genome wide association study found approximately 36 independent susceptibility loci for generalized vitiligo.
The TYR gene encodes the protein tyrosinase, which is not a component of the immune system but is an enzyme of the melanocyte that catalyzes melanin biosynthesis, and a major autoantigen in generalized vitiligo.
Autoimmune associations
Vitiligo is sometimes associated with autoimmune and inflammatory diseases such as Hashimoto's thyroiditis, scleroderma, rheumatoid arthritis, type 1 diabetes mellitus, psoriasis, Addison's disease, pernicious anemia, alopecia areata, systemic lupus erythematosus, and celiac disease.
Among the inflammatory products of NLRP1 are caspase 1 and caspase 7, which activate the inflammatory cytokine interleukin-1β. Interleukin-1β and interleukin-18 are expressed at high levels in people with vitiligo. In one of the mutations, the amino acid leucine in the NALP1 protein was replaced by histidine (Leu155 → His). The original protein and sequence is highly conserved in evolution, and is found in humans, chimpanzee, rhesus monkey, and the bush baby. Addison's disease (typically an autoimmune destruction of the adrenal glands) may also be seen in individuals with vitiligo.
Oxidative stress
Numerous whole-exome sequencing studies have demonstrated that vitiligo is associated with polymorphisms in genes involved in the response to oxidative stress such as CAT, SOD1, SOD2, SOD3, NFE2L2, HMOX1, GST-M1 or GST-T1 supporting the association of elevated levels of reactive oxygen species in melanocytes with the induction of an auto-immune response.
Thus, diseases presenting with altered mitochondrial function, such as MELAS, Vogt-Koyanagi-Harada syndrome, and Kabuki syndrome, are associated with increased risk of vitiligo.
In line with these observations, genetic alterations in mitochondrial DNA (mtDNA) of melanocytes associated with altered mitochondrial function lead to a release of mtDNA that can be detected in the skin of vitiligo patients. This mtDNA can be sensed by the cGAS-STING pathway resulting in pro-inflammatory cytokine and chemokines production promoting the recruitment of cytotoxic CD8+ T cells. The use of mitochondrial antioxidants, NRF2 inhibitors, and TBK1 inhibitors is emerging as potential therapeutic options to block this cascade of events.
Diagnosis
An ultraviolet light can be used in the early phase of this disease for identification and to determine the effectiveness of treatment. Using a Wood's light, skin will change colour (fluoresce) when it is affected by certain bacteria, fungi, and changes to pigmentation of the skin.
Classification
Attempts to classify vitiligo have been somewhat inconsistent, although recent consensus has agreed to a system of segmental vitiligo (SV) and non-segmental vitiligo (NSV). NSV is the most common type of vitiligo.
Non-segmental
In non-segmental vitiligo (NSV), there is usually some form of symmetry in the location of the patches of depigmentation. New patches also appear over time and can be generalized over large portions of the body or localized to a particular area. Extreme cases of vitiligo, to the extent that little pigmented skin remains, are referred to as vitiligo universalis. NSV can come about at any age (unlike segmental vitiligo, which is far more prevalent in teenage years).
Classes of non-segmental vitiligo include the following:
Generalized vitiligo: the most common pattern, wide and randomly distributed areas of depigmentation
Universal vitiligo: depigmentation encompasses most of the body
Focal vitiligo: one or a few scattered macules in one area, most common in children
Acrofacial vitiligo: fingers and periorificial areas
Mucosal vitiligo: depigmentation of only the mucous membranes
Segmental
Segmental vitiligo (SV) differs in appearance, cause, and frequency of associated illnesses. Its treatment is different from that of NSV. It tends to affect areas of skin that are associated with dorsal roots from the spinal cord and is most often unilateral. It is much more stable/static in its course and its association with autoimmune diseases appears to be weaker than that of generalized vitiligo. SV does not improve with topical therapies or UV light; however, surgical treatments such as cellular grafting can be effective.
Differential diagnosis
Chemical leukoderma is a similar condition caused by multiple exposures to chemicals; vitiligo, however, is a risk factor. Triggers may include inflammatory skin conditions, burns, intralesional steroid injections, and abrasions.
Other conditions with similar symptoms include the following:
albinism
halo nevus
idiopathic guttate hypomelanosis (white sunspots)
piebaldism
pityriasis alba
postinflammatory hypopigmentation
primary adrenal insufficiency
progressive macular hypomelanosis
tinea versicolor
tuberculoid leprosy
Treatment
There is no cure for vitiligo but several treatment options are available. The best evidence is for applied steroids and ultraviolet light in combination with creams. Due to the higher risks of skin cancer, the United Kingdom's National Health Service suggests phototherapy be used only if primary treatments are ineffective. Lesions located on the hands, feet, and joints are the most difficult to repigment; those on the face are easiest to return to the natural skin color as the skin is thinner.
Immune mediators
Topical preparations of immune-suppressing medications including glucocorticoids (such as 0.05% clobetasol or 0.10% betamethasone) and calcineurin inhibitors (such as tacrolimus or pimecrolimus) are considered to be first-line vitiligo treatments.
In July 2022, ruxolitinib cream (sold under the brand name Opzelura) was approved for medical use in the United States for the treatment of vitiligo.
Phototherapy
Phototherapy is considered a second-line treatment for vitiligo. Exposing the skin to light from UVB lamps is the most common treatment for vitiligo. The treatments can be done at home with a UVB lamp or in a clinic. The exposure time is managed so that the skin does not suffer overexposure. Treatment can take a few weeks if the spots are on the neck and face and have been present for no more than three years; if the spots are on the hands and legs and have been there for more than three years, it can take a few months. Phototherapy sessions are done 2–3 times a week. Spots on a large area of the body may require full-body treatment in a clinic or hospital. UVB broadband and narrowband lamps can be used, but narrowband ultraviolet peaking around 311 nm is the preferred choice. It has been consistently reported that a combination of UVB phototherapy with other topical treatments improves re-pigmentation. However, some people with vitiligo may not see any changes to skin or re-pigmentation occurring. A serious potential side effect involves the risk of developing skin cancer, the same risk as from overexposure to natural sunlight.
Ultraviolet light (UVA) treatments are normally carried out in a hospital clinic. Psoralen and ultraviolet A light (PUVA) treatment involves taking a drug that increases the skin's sensitivity to ultraviolet light and then exposing the skin to high doses of UVA light. Treatment is required twice a week for 6–12 months or longer. Because of the high doses of UVA and psoralen, PUVA may cause side effects such as sunburn-type reactions or skin freckling.
Narrowband ultraviolet B (NBUVB) phototherapy lacks the side effects caused by psoralens and is as effective as PUVA. As with PUVA, treatment is carried out twice weekly in a clinic or every day at home, and there is no need to use psoralen. Longer treatment is often recommended, and at least 6 months may be required before the effects of phototherapy become apparent. NBUVB phototherapy appears better than PUVA therapy, with the most effective response on the face and neck.
With respect to improved repigmentation: topical calcineurin inhibitors plus phototherapy are better than phototherapy alone, hydrocortisone plus laser light is better than laser light alone, ginkgo biloba is better than placebo, and oral mini-pulse of prednisolone (OMP) plus NB-UVB is better than OMP alone.
Skin camouflage
In mild cases, vitiligo patches can be hidden with makeup or other cosmetic camouflage solutions. If the affected person is pale-skinned, the patches can be made less visible by avoiding tanning of unaffected skin.
Depigmenting
In cases of extensive vitiligo the option to depigment the unaffected skin with topical drugs like monobenzone, mequinol, or hydroquinone may be considered to render the skin an even color. The removal of all the skin pigment with monobenzone is permanent and vigorous. Sun safety must be adhered to for life to avoid severe sunburn and melanomas. Depigmentation takes about a year to complete.
History
Descriptions of a disease believed to be vitiligo date back to a passage in the medical text Ebers Papyrus in ancient Egypt. Also, the Hebrew word "Tzaraath" from the Old Testament book of Leviticus, dating to 1280 BC (or 1312 BC), described a group of skin diseases associated with white spots, and a subsequent translation into Greek led to the continued conflation of people with vitiligo with leprosy and spiritual uncleanliness.
Medical sources in the ancient world such as Hippocrates often did not differentiate between vitiligo and leprosy, often grouping these diseases together. The name "vitiligo" was first used by the Roman physician Aulus Cornelius Celsus in his classic medical text De Medicina.
The term vitiligo is believed to be derived from "vitium", meaning "defect" or "blemish".
Society and culture
The change in appearance caused by vitiligo can affect a person's emotional and psychological well-being and may create difficulty in becoming or remaining employed, particularly if vitiligo develops on visible areas of the body, such as the face, hands or arms. Participating in a vitiligo support group may improve social coping skills and emotional resilience.
Notable people with vitiligo
Notable cases include American pop singer Michael Jackson, Canadian fashion model Winnie Harlow, New Zealand singer-songwriter Kimbra, American actor David Dastmalchian and Argentine musician Charly García. Professional wrestler Bryan Danielson and French actor Michaël Youn are also affected, as are former French Prime Minister Édouard Philippe, Miss Universe Egypt 2024 Logina Salah, former Roman Catholic priest, Governor of Pampanga and TV host Eddie Panlilio, and model and former Miss Colombia 2007 Taliana Vargas.
In popular culture
The Adult Swim animated sitcom The Boondocks satirizes the idea of vitiligo in Uncle Ruckus, one of the show's characters. Ruckus, who is black, frequently claims to be white, often stating that he has "Re-vitiligo, the opposite of what Michael Jackson had." He frequently uses this argument to maintain that he is actually white, leading him to commit delusional and racist antics in nearly every episode.
Research
Afamelanotide is in phase II and III clinical trials for vitiligo and other skin diseases.
A medication for rheumatoid arthritis, tofacitinib, has been tested for the treatment of vitiligo.
In October 1992, a scientific report was published of successfully transplanting melanocytes to vitiligo-affected areas, effectively repigmenting the region. The procedure involved taking a thin layer of pigmented skin from the person's gluteal region. Melanocytes were then separated out to a cellular suspension that was expanded in culture. The area to be treated was then denuded with a dermabrader and the melanocytes graft applied. Between 70 and 85 percent of people with vitiligo experienced nearly complete repigmentation of their skin. The longevity of the repigmentation differed from person to person.
Current research suggests that the Janus kinase/signal transducer and activator of transcription (JAK/STAT) pathway plays a crucial role in the loss of epidermal melanocytes. This pathway is activated by CXCR3+ CD8+ T cells, creating a positive feedback loop with interferon-gamma (IFN-γ) chemokines from keratinocytes, potentially contributing to vitiligo. JAK inhibitors like ruxolitinib show promise in targeting the IFN-γ-chemokine signaling axis implicated in vitiligo pathogenesis, and improving nonsegmental vitiligo.
| Biology and health sciences | Specific diseases | Health |
65880 | https://en.wikipedia.org/wiki/Cassette%20tape | Cassette tape | The Compact Cassette, also commonly called a cassette tape, audio cassette, or simply tape or cassette, is an analog magnetic tape recording format for audio recording and playback. Invented by Lou Ottens and his team at the Dutch company Philips, the Compact Cassette was released in August 1963.
Compact Cassettes come in two forms, either containing content as a prerecorded cassette (Musicassette), or as a fully recordable "blank" cassette. Both forms have two sides and are reversible by the user.
Although other tape cassette formats have also existed—for example the Microcassette—the generic term cassette tape is normally used to refer to the Compact Cassette because of its ubiquity.
Compact Cassettes contain two miniature spools, between which the magnetically coated, polyester-type plastic film (magnetic tape) is passed and wound—essentially miniaturizing reel-to-reel audio tape and enclosing it, with its reels, in a small case (cartridge)—hence "cassette". These spools and their attendant parts are held inside a protective plastic shell which is at its largest dimensions. The tape itself is commonly referred to as "eighth-inch" tape, supposedly wide, but actually slightly larger, at . Two stereo pairs of tracks (four total) or two monaural audio tracks are available on the tape; one stereo pair or one monophonic track is played or recorded when the tape is moving in one direction and the second (pair) when moving in the other direction. This reversal is achieved either by manually flipping the cassette when the tape comes to an end, or by the reversal of tape movement, known as "auto-reverse", when the mechanism detects that the tape has ended.
History
Precursors
After the Second World War, magnetic tape recording technology proliferated across the world. In the United States, Ampex, using equipment obtained in Germany as a starting point, began commercial production of reel-to-reel tape recorders. First used by broadcast studios to pre-record radio programs, tape recorders quickly found their way into schools and homes. By 1953, one million US homes had tape machines, and several major record labels were releasing select titles on prerecorded reel-to-reel tapes.
In 1958, following four years of development, RCA introduced the RCA tape cartridge, which enclosed 60 minutes (30 minutes per side) of stereo quarter-inch reel-to-reel tape within a plastic cartridge that could be utilized on a compatible tape recorder/player without having to thread the tape through the machine. This format was not very successful, and RCA discontinued it in 1964.
Development and release
In the early 1960s Philips tasked two different teams to design a high-quality tape cartridge for home use, utilizing thinner and narrower tape compared to what was used in reel-to-reel tape recorders. A team at its Vienna factory, which had experience with dictation machines, developed the Einloch-Kassette, or single-hole cassette with involvement from Grundig. At the same time, a team in Hasselt led by Lou Ottens developed a two-hole cassette under the name Pocket Recorder.
Philips selected the two-spool cartridge as a winner and introduced the 2-track 2-direction mono version in Europe on 28 August 1963 at the Berlin Radio Show, and in the United States (under the Norelco brand) in November 1964. The same year, mass production of blank compact cassettes began in Hanover. Philips also offered a machine to play and record the cassettes, the Philips Typ EL 3300. An updated model, Typ EL 3301 was offered in the US in November 1964 as Norelco Carry-Corder 150. The trademark name Compact Cassette came a year later.
Following rejection of the Einloch-Kassette, Grundig developed the DC-International (DC standing for "Double Cassette") based on drawings of the Compact Cassette, introducing it in 1965 as companies were competing to establish their format as the worldwide standard. After Philips yielded to pressure from Sony and licensed the Compact Cassette format to them free of charge, Philips' format achieved market dominance, and the DC-International cassette format was discontinued in 1967, just two years after its introduction.
Philips improved on the Compact Cassette's original design to release a stereo version. By 1966 over 250,000 compact cassette recorders had been sold in the US alone and Japanese manufacturers soon became the leading source of recorders. By 1968, 85 manufacturers had sold over 2.4 million mono and stereo units. By the end of the 1960s, the cassette business was worth an estimated 150 million dollars, and by the early 1970s compact cassette machines were outselling other types of tape machines by a large margin.
Popularity of music cassettes
Prerecorded music cassettes (also known as Music-Cassettes, and later just Musicassettes) were launched in Europe in late 1965. The Mercury Record Company, a US affiliate of Philips, introduced Musicassettes to the US in July 1966. The initial offering consisted of 49 titles.
The compact cassette format, however, was initially designed for dictation and portable use, and the audio quality of early players was not well-suited for music. In 1971, the Advent Corporation introduced their Model 201 tape deck that combined Dolby type B noise reduction and chromium(IV) oxide (CrO2) tape, with a commercial-grade tape transport mechanism supplied by the Wollensak camera division of 3M Corporation. This resulted in the format being taken more seriously for musical use, and started the era of high fidelity cassettes and players.
British record labels began releasing Musicassettes in October 1967, and they exploded as a mass-market medium after the first Walkman, the TPS-L2, went on sale on 1 July 1979, as cassettes provided portability, which vinyl records could not. While portable radios and boom boxes had been around for some time, the Walkman was the first truly personal portable music player, one that not only allowed users to listen to music away from home, but to do so in private. According to the technology news website The Verge, "the world changed" on the day the TPS-L2 was released. Stereo tape decks and boom boxes became some of the most highly sought-after consumer products of both decades, as the ability of users to take their music with them anywhere with ease led to its popularity around the globe.
Like the transistor radio in the 1950s and 1960s, the portable CD player in the 1990s, and the MP3 player in the 2000s, the Walkman defined the portable music market for the decade of the '80s, with cassette sales overtaking those of LPs. Total vinyl record sales remained higher well into the 1980s due to greater sales of singles, although cassette singles achieved popularity for a period in the 1990s. Another barrier to cassettes overtaking vinyl in sales was shoplifting; compact cassettes were small enough that a thief could easily place one inside a pocket and walk out of a shop without being noticed. To prevent this, retailers in the US would place cassettes inside oversized "spaghetti box" containers or locked display cases, either of which would significantly inhibit browsing, thus reducing cassette sales.
During the early 1980s some record labels sought to solve this problem by introducing new, larger packages for cassettes which would allow them to be displayed alongside vinyl records and compact discs, or giving them a further market advantage over vinyl by adding bonus tracks. Willem Andriessen wrote that the development in technology allowed "hardware designers to discover and satisfy one of the collective desires of human beings all over the world, independent of region, climate, religion, culture, race, sex, age and education: the desire to enjoy music at any time, at any place, in any desired sound quality and almost at any wanted price". Critic Robert Palmer, writing in The New York Times in 1981, cited the proliferation of personal stereos as well as extra tracks not available on LP as reasons for the surge in popularity of cassettes.
Cassettes' ability to allow users to record content in public also led to a boom in bootleg cassettes made at live shows in the 1980s. The Walkman dominated the decade, selling up to 350 million units. So synonymous did the name "Walkman" become with all portable music players—with a German dictionary at one point defining the term as such without reference to Sony—that the Austrian Supreme Court ruled in 2002 that Sony, which had not sought to have the publisher of that dictionary retract that definition, could not prevent other companies from using that name, as it had now become genericized. As a result of this, a number of Sony's competitors produced their own version of the Walkman. Others made their own branded tape players, like JVC, Panasonic, Sharp, and Aiwa, the second-largest producer of the devices.
Between 1985, when cassettes overtook vinyl, and 1992, when they were overtaken by CDs (introduced in 1983 as a format that offered greater storage capacity and more accurate sound), the cassette tape was the most popular format in the United States and the UK. Record labels experimented with innovative packaging designs. A designer during the era explained: "There was so much money in the industry at the time, we could try anything with design." The introduction of the cassette single, called a "cassingle", was also part of this era and featured a music single in Compact Cassette form. Until 2005, cassettes remained the dominant medium for purchasing and listening to music in some developing countries, but compact disc (CD) technology had superseded the Compact Cassette in the vast majority of music markets throughout the world by this time.
Cassette culture
Compact cassettes served as catalysts for social change. Their small size, durability and ease of copying helped bring underground rock and punk music behind the Iron Curtain, creating a foothold for Western culture among the younger generations. Likewise, in Egypt cassettes empowered an unprecedented number of people to create culture, circulate information, and challenge ruling regimes before the internet became publicly accessible.
One of the political uses of cassette tapes was the dissemination of sermons by the exiled Ayatollah Khomeini throughout Iran before the 1979 Iranian Revolution, in which Khomeini urged the overthrow of the regime of the Shah, Mohammad Reza Pahlavi. During the military dictatorship of Chile (1973–1990), a "cassette culture" emerged in which blacklisted music, or music that was for other reasons not available on records, was shared. Some pirate cassette producers created brands such as Cumbre y Cuatro that have in retrospect received praise for their contributions to popular music. Armed groups such as the Manuel Rodríguez Patriotic Front (FPMR) and the Revolutionary Left Movement (MIR) made use of cassettes to spread their messages.
Cassette technology was a booming market for pop music in India, drawing criticism from conservatives while at the same time creating a huge market for legitimate recording companies, as well as pirated tapes. Some sales channels were particularly associated with cassettes: in Spain, filling stations often featured a display selling cassettes. While these displays also offered mainstream music, they became associated with genres such as Gipsy rhumba, light music and joke tapes that were common in the 1970s and 1980s.
Decline
Despite sales of CDs overtaking those of prerecorded cassettes in the early 1990s in the U.S., the format remained popular for specific applications, such as car audio, personal stereos, boomboxes, telephone answering machines, dictation, field recording, home recording, and mixtapes well into the decade. Cassette players were typically more resistant to shocks than CD players, and their lower fidelity was not considered a serious drawback in mobile use. With the introduction of electronic skip protection it became possible to use portable CD players on the go, and automotive CD players became viable. CD-R drives and media also became affordable for consumers around the same time.
By 1993, annual shipments of CD players had reached 5 million, up 21% from the year before; while cassette player shipments had dropped 7% to approximately 3.4 million. Sales of pre-recorded music cassettes in the US dropped from 442 million in 1990 to 274,000 by 2007. For audiobooks, the final year that cassettes represented more than 50% of total market sales was 2002 when they were replaced by CDs as the dominant media.
The last new car with an available cassette player was a 2014 TagAZ AQUiLA. Four years prior, Sony had stopped the production of personal cassette players. In 2011, the Oxford English Dictionary removed the phrase "cassette player" from its 12th edition Concise version, which prompted some media sources to mistakenly report that the term "cassette tape" was being removed.
In India, music continued to be released on the cassette format due to its low cost until 2009.
21st century
Although portable digital recorders are most common today, analog tape remains a desirable option for certain artists and consumers. Underground and DIY communities release regularly, and sometimes exclusively, on cassette format, particularly in experimental music circles and to a lesser extent in hardcore punk, death metal, and black metal circles, out of a fondness for the format. Even among major-label stars, the form has at least one devotee: Thurston Moore stated in 2009, "I only listen to cassettes." By 2019, few companies still made cassettes. Among those are National Audio Company, from the US, and Mulann, also known as Recording The Masters, from France.
Sony announced the end of cassette Walkman production on 22 October 2010, a result of the emergence of MP3 players such as Apple's iPod. As of 2022, Sony uses the Walkman brand solely for its line of digital media players.
In 2010, Botswana-based Diamond Studios announced plans for establishing a plant to mass-produce cassettes in a bid to combat piracy. It opened in 2011.
In South Korea, the early English education boom for toddlers encourages a continuous demand for English language cassettes, due to the affordable cost.
National Audio Company in Missouri, the largest of the few remaining manufacturers of audio cassettes in the US, oversaw the mass production of the "Awesome Mix #1" cassette from the film Guardians of the Galaxy in 2014. They reported that they had produced more than 10 million tapes in 2014 and that sales were up 20 percent the following year, their best year since they opened in 1969. In 2016, cassette sales in the United States rose by 74% to 129,000. In 2018, following several years of shortage, National Audio Company began producing their own magnetic tape, becoming the world's first known manufacturer of an all-new tape stock. Mulann, a company which acquired Pyral/RMGI in 2015 and originates from BASF, also started production of its new cassette tape stock in 2018, based on its reel tape formula.
In Japan and South Korea, the pop acts Seiko Matsuda, SHINee, and NCT 127 released their material on limited-run cassettes. In Reiwa era Japan, the revived popularity of cassette tapes is an example of Showa retro. As of 2021, Maxell was selling 8 million cassette tapes per year in Japan.
In the mid-to-late 2010s, cassette sales saw a modest resurgence concurrent with the vinyl revival. As early as 2015, the retail chain Urban Outfitters, which had long sold LPs, started selling new pre-recorded cassettes (both new and old albums), blank cassettes, and players. In 2016, cassette sales increased, a trend that continued in 2017 and 2018. In the UK, sales of cassette tapes in 2021 reached its highest number since 2003.
Cassettes are favored by some artists and listeners, including those of older genres of music such as dansband, as well as independent and underground artists, some of whom were releasing new music on tape by the 2020s, including Britney Spears and Busta Rhymes. Reasons cited for this include tradition, low cost, the DIY ease of use, and a nostalgic fondness for how the format's imperfections lend greater vibrancy to low-fi, experimental music, despite the lack of the "full-bodied richness" of vinyl.
Tape types
Cassette tapes are made of a polyester-type plastic film with a magnetic coating. The original magnetic material was based on gamma ferric oxide (Fe2O3). 3M Company later developed a cobalt volume-doping process combined with a double-coating technique to enhance overall tape output levels. This product was marketed as "High Energy" under its Scotch brand of recording tapes. Inexpensive cassettes commonly are labeled "low-noise", but typically are not optimized for high frequency response. For this reason, some low-grade IEC Type I tapes have been marketed specifically as better suited for data storage than for sound recording.
In 1968, DuPont, the inventor of a chromium dioxide (CrO2) manufacturing process, began commercialization of CrO2 media. The first CrO2 cassette was introduced in 1970 by Advent, and later strongly backed by BASF, the inventor and longtime manufacturer of magnetic recording tape. Next, coatings using magnetite (Fe3O4), such as TDK's Audua, were produced in an attempt to approach or exceed the sound quality of vinyl records. Cobalt-adsorbed iron oxide (Avilyn) was introduced by TDK in 1974 and proved very successful. "Type IV" tapes using pure metal particles (as opposed to oxide formulations) were introduced in 1979 by 3M under the trade name Metafine. The tape coating on most cassettes sold as of 2024, whether labeled "normal" or "chrome", consists of ferric oxide and cobalt mixed in varying ratios (and using various processes); there are very few cassettes on the market that use a pure CrO2 coating.
Simple voice recorders and earlier cassette decks are designed to work with standard ferric formulations. Newer tape decks usually are built with switches, and later automatic detectors, for the different bias and equalization requirements of higher-grade tapes. The most common are iron oxide tapes (as defined by the IEC 60094 standard).
Notches on top of the cassette shell indicate the type of tape. Type I cassettes have only write-protect notches, Type II have an additional pair next to the write protection ones, and Type IV (metal) have a third set near the middle of the top of the cassette shell. These allow later cassette decks to detect the tape type automatically and select the proper bias and equalization.
Features
The cassette was the next step following reel-to-reel audio tape recording, although, because of the limitations of the cassette's size and speed, it initially compared poorly in quality. Unlike the 4-track stereo open-reel format, the two stereo tracks of each side lie adjacent to each other, rather than being interleaved with the tracks of the other side. This permitted monaural cassette players to play stereo recordings "summed" as mono tracks and permitted stereo players to play mono recordings through both speakers. The tape is wide, with each mono track wide, plus an unrecorded guard band between each track. In stereo, each track is further divided into a left and a right channel of each, with a gap of . The tape moves past the playback head at , the speed being a continuation of the increasingly slower speed series in open-reel machines operating at 30, 15, , or inches per second. For comparison, the typical open-reel -inch 4-track consumer format used tape that is wide, each track wide, and running at either twice or four times the speed of a cassette.
Very simple cassette recorders for dictation purposes did not tightly control tape speed and relied on playback on a similar device to maintain intelligible recordings. For accurate reproduction of music, a tape transport incorporating a capstan and pinch roller system was used, to ensure tape passed over the record/playback heads at a constant speed.
Locating write-protect notches
If the cassette is held with one of the labels facing the user and the tape opening at the bottom, the write-protect notch for the corresponding side is at the top-left.
Tape length
Tape length usually is measured in minutes of total playing time. Many of the varieties of blank tape were C60 (30 minutes per side), C90 (45 minutes per side) and C120 (60 minutes per side). Maxell makes 150-minute cassettes (UR-150) - 75 minutes per side. The C46 and C60 lengths typically are thick, but C90s are and (the less common) C120s are just thick, rendering them more susceptible to stretching or breakage. Even C180 tapes were available at one point.
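As an illustrative aside (not from the original article), the nominal playing times above translate into physical tape lengths via the standard cassette speed of 4.76 cm/s quoted under "Head gap" below; the sketch assumes that speed and ignores leader tape.

```python
# A minimal sketch, assuming the standard Compact Cassette speed of
# 4.76 cm/s (1 7/8 ips); real cassettes include a little extra tape
# plus leader, so actual lengths vary slightly.
TAPE_SPEED_M_PER_S = 0.0476  # metres of tape passing the head per second

def tape_length_metres(c_rating: int) -> float:
    """Approximate tape length for a 'C<rating>' cassette.

    The rating (e.g. 90 for a C90) is the total playing time in minutes;
    one full pass of the tape lasts half that, since the two sides use
    the same physical tape run in opposite directions.
    """
    seconds_per_side = (c_rating / 2) * 60
    return seconds_per_side * TAPE_SPEED_M_PER_S

for rating in (60, 90, 120):
    print(f"C{rating}: about {tape_length_metres(rating):.0f} m of tape")
# C60: about 86 m, C90: about 129 m, C120: about 171 m
```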
Other lengths are (or were) also available from some vendors, including C10, C12 and C15 (useful for saving data from early home computers and in telephone answering machines), C30, C40, C50, C54, C64, C70, C74, C80, C84, C94, C100, C105, C110, and C150. As late as 2010, Thomann still offered C10, C20, C30 and C40 IEC Type II tape cassettes for use with 4- and 8-track portastudios.
Track width
The full tape width is 3.8 mm. For mono recording, the track width is 1.5 mm. In stereo mode, each channel has a width of 0.6 mm, with a 0.3 mm separation to avoid crosstalk.
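These figures can be cross-checked with a little arithmetic; the sketch below is illustrative only, and the guard-band figure it prints is derived from the quoted widths rather than stated in the article.

```python
# Illustrative arithmetic check of the track widths quoted above.
FULL_TAPE_MM = 3.8
MONO_TRACK_MM = 1.5
STEREO_CHANNEL_MM = 0.6
CHANNEL_GAP_MM = 0.3

# Two stereo channels plus their separation span one mono track width.
stereo_pair_mm = 2 * STEREO_CHANNEL_MM + CHANNEL_GAP_MM
assert abs(stereo_pair_mm - MONO_TRACK_MM) < 1e-9

# The remainder of the tape width between the two sides (a derived figure,
# not one quoted in the article) serves as a guard band.
guard_band_mm = FULL_TAPE_MM - 2 * MONO_TRACK_MM
print(f"stereo pair: {stereo_pair_mm} mm, remaining guard band: {guard_band_mm:.1f} mm")
```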
Head gap
The head-gap width is 2 μm, which gives a theoretical maximum frequency of about 12 kHz (at the standard speed of 1 7/8 ips, or 4.76 cm/s). A narrower gap would give a higher frequency limit but also weaker magnetization.
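As a rough, hedged illustration of where the ~12 kHz figure comes from, the sketch below applies the common rule of thumb that reproduction falls off sharply once the head gap spans about half of the wavelength recorded on the tape; this is an approximation, not the article's own derivation.

```python
# Minimal sketch: head-gap width and tape speed versus upper frequency limit,
# using the approximation f_max ~= v / (2 * gap).
TAPE_SPEED_M_PER_S = 0.0476   # 1 7/8 ips
HEAD_GAP_M = 2e-6             # 2 micrometres

f_max_hz = TAPE_SPEED_M_PER_S / (2 * HEAD_GAP_M)
wavelength_at_12khz_m = TAPE_SPEED_M_PER_S / 12_000

print(f"approximate upper limit: {f_max_hz / 1000:.1f} kHz")                   # ~11.9 kHz
print(f"recorded wavelength at 12 kHz: {wavelength_at_12khz_m * 1e6:.1f} um")  # ~4.0 um
```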
Cassette tape adapter
Cassette tape adapters allow external audio sources to be played back from any tape player, but were typically used for car audio systems. An attached audio cable with a phone connector converts the electrical signals to be read by the tape head, while mechanical gears simulate reel to reel movement without actual tapes when driven by the player mechanism.
Optional mechanical elements
To wind the tape more reliably, in the early 1970s the former BASF (from 1998, EMTEC) patented the Special Mechanism, or Security Mechanism, advertised with the abbreviation SM; it was used for a time under license by Agfa. The mechanism includes a rail on each side to guide the tape onto the spool and prevent an uneven roll from forming.
Flaws
Magnetic tape is not an ideal medium for long-term archival storage, as it begins to degrade after 10–20 years, with some experts estimating its lifespan to be no more than 30 years.
A common mechanical problem occurs when a defective player or resistance in the tape path causes insufficient tension on the take-up spool. This would cause the magnetic tape to be fed out through the bottom of the cassette and become tangled in the mechanism of the player. In these cases, the player was said to have "eaten" or "chewed" the tape, often destroying the playability of the cassette.
Cassette players and recorders
The first cassette machines (e.g. the Philips EL 3300, introduced in August 1963)
One innovation was the front-loading arrangement. Pioneer's angled cassette bay and the exposed bays of some Sansui models eventually were standardized as a front-loading door into which a cassette would be loaded. Later models would adopt electronic buttons, and replace conventional meters (which could be driven over full scale when overloaded, a condition called "pegging the needle" or simply "pegging") with electronic LED or vacuum fluorescent displays, with level controls typically being controlled by either rotary controls or side-by-side sliders. BIC and Marantz briefly offered models that could be run at double speeds, but Nakamichi was widely recognized as one of the first companies to create decks that rivaled reel-to-reel decks with full 20–20,000 Hz frequency response, low noise, and very low wow and flutter.
Different interpretations of the cassette standard resulted in a 4 dB ambiguity at 16 kHz. Technically, both camps in this debate were still within the original cassette specification, as no tolerance for frequency response is provided above 12.5 kHz and all calibration tones above 12.5 kHz are considered optional. Decreasing noise at 16 kHz also decreases the maximum signal level at 16 kHz, so the high-frequency dynamic range stays almost constant.
A third company, Bang & Olufsen of Denmark, created the Dolby HX "head room extension" system for reliably reducing tape saturation effects at high frequencies while maintaining higher bias levels.
Applications for car stereos varied widely. Auto manufacturers in the US typically would fit a cassette slot into their standard large radio faceplates. Europe and Asia would standardize on DIN and double DIN sized faceplates. In the 1980s, a high-end installation would have a Dolby AM/FM cassette deck, and they rendered the 8-track player obsolete in car installations because of space, performance, and audio quality. In the 1990s and 2000s, as the cost of building CD players declined, many manufacturers offered a CD player. The CD player eventually supplanted the cassette deck as standard equipment, but some cars, especially those targeted at older drivers, were offered with the option of a cassette player, either by itself or sometimes in combination with a CD slot. Most new cars can still accommodate aftermarket cassette players, and the auxiliary jack advertised for MP3 players can be used also with portable cassette players, but 2011 was the first model year for which no American manufacturer offered factory-installed cassette players.
Applications
Audio
The Compact Cassette originally was intended for use in dictation machines. In this capacity, some later-model cassette-based dictation machines could also run the tape at half speed (15⁄16 in/s) as playback quality was not critical. The cassette soon became a medium for distributing prerecorded music—initially through the Philips Record Company (and subsidiary labels Mercury and Philips in the US). As of 2009, one still found cassettes used for a variety of purposes, such as journalism, oral history, meeting and interview transcripts, audio-books, and so on. Police are still big buyers of cassette tapes, as some lawyers "don't trust digital technology for interviews". However, they are starting to give way to Compact Discs and more "compact" digital storage media. Prerecorded cassettes were also employed as a way of providing chemotherapy information to recently diagnosed cancer patients as studies found anxiety and fear often gets in the way of the information processing.
The cassette quickly found use in the commercial music industry. One artifact found on some commercially produced music cassettes was a sequence of test tones, called SDR (Super Dynamic Range, also called XDR, or eXtended Dynamic Range) soundburst tones, at the beginning and end of the tape, heard in order of low frequency to high. These were used during SDR/XDR's duplication process to gauge the quality of the tape medium. Many consumers objected to these tones since they were not part of the recorded music.
Multitrack recording
Multitrack recorders utilizing the compact cassette were introduced beginning in 1979 with the TEAC 144 Portastudio. In the simplest configuration, rather than playing a pair of stereo channels on each side of the cassette, the typical portastudio used a four-track tape head assembly to access four tracks on the cassette at once (with the tape playing in one direction). Each track could be recorded to, erased, or played back individually, allowing musicians to overdub themselves and create simple multitrack recordings easily, which could then be mixed down to a finished stereo version on an external machine. To increase audio quality in these recorders, the tape speed sometimes was doubled to 3¾ inches per second, in comparison to the standard 1⅞ ips; additionally, dbx, Dolby B or Dolby C noise reduction provided compansion (compression of the signal during recording with equal and opposite expansion of the signal during playback), which yields increased dynamic range by lowering the noise level and increasing the maximum signal level before distortion occurs. Multi-track cassette recorders with built-in mixer and signal routing features ranged from easy-to-use beginner units up to professional-level recording systems. Cassette-based multitrack recorders are credited with launching the home recording revolution.
Home dubbing
Most cassettes were sold blank, and used for recording (dubbing) the owner's records (as backup, to play in the car, or to make mixtape compilations), their friends' records, or music from the radio. This practice was condemned by the music industry with such alarmist slogans as "Home Taping Is Killing Music". However, many claimed that the medium was ideal for spreading new music and would increase sales, and strongly defended their right to copy at least their own records onto tape. For a limited time in the early 1980s, Island Records sold chromium dioxide "One Plus One" cassettes, with a prerecorded album on one side and the other side left blank for the purchaser's own recording.
Various legal cases arose surrounding the dubbing of cassettes. In the UK, in the case of CBS Songs v. Amstrad (1988), the House of Lords found in favor of Amstrad that producing equipment that facilitated the dubbing of cassettes, in this case a high-speed twin cassette deck that allowed one cassette to be copied directly onto another, did not constitute copyright infringement by the manufacturer. In a similar case, a shop owner who rented cassettes and sold blank tapes was not liable for copyright infringement even though it was clear that his customers likely were dubbing them at home. In both cases, the courts held that manufacturers and retailers could not be held accountable for the actions of consumers.
As an alternative to home dubbing, in the late 1980s, the Personics company installed booths in record stores across America that allowed customers to make personalized mixtapes from a digitally encoded back-catalogue with customised printed covers.
Data recording
Floppy disk storage had become the standard data storage medium in the United States by the mid-1980s; for example, by 1983 the majority of software sold by Atari Program Exchange was on floppy. Cassette remained more popular for 8-bit computers such as the Commodore 64, ZX Spectrum, MSX, and Amstrad CPC 464 in many countries such as the United Kingdom (where 8-bit software was mostly sold on cassette until that market disappeared altogether in the early 1990s). Reliability of cassettes for data storage is inconsistent, with many users recalling repeated attempts to load video games; the Commodore Datasette used very reliable, but slow, digital encoding. In some countries, including the United Kingdom, Poland, Hungary, and the Netherlands, cassette data storage was so popular that some radio stations would broadcast computer programs that listeners could record onto cassette and then load into their computer. See BASICODE.
The cassette was adapted into what is called a streamer cassette (also known as a "D/CAS" cassette), a version dedicated solely for data storage, and used chiefly for hard disk backups and other types of data. Streamer cassettes look almost exactly the same as a standard cassette, with the exception of having a notch about one quarter-inch wide and deep situated slightly off-center at the top edge of the cassette. Streamer cassettes also have a re-usable write-protect tab on only one side of the top edge of the cassette, with the other side of the top edge having either only an open rectangular hole, or no hole at all. This is due to the entire one-eighth inch width of the tape loaded inside being used by a streamer cassette drive for the writing and reading of data, hence only one side of the cassette being used. Streamer cassettes can hold anywhere from 250 kilobytes to 600 megabytes of data.
Rivals and successors
Technical development of the cassette effectively ceased when digital recordable media, such as DAT and MiniDisc, were introduced in the late 1980s and early-to-mid 1990s, with Dolby S recorders marking the peak of Compact Cassette technology. Anticipating the switch from analog to digital format, major companies, such as Sony, shifted their focus to new media. In 1992, Philips introduced the Digital Compact Cassette (DCC), a DAT-like tape in almost the same shell as a Compact Cassette. It was aimed primarily at the consumer market. A DCC deck could play back both types of cassettes. Unlike DAT, which was accepted in professional usage because it could record without lossy compression effects, DCC failed in home, mobile and professional environments, and was discontinued in 1996.
The microcassette largely supplanted the full-sized cassette in situations where voice-level fidelity is all that is required, such as in dictation machines and answering machines. Microcassettes have in turn given way to digital recorders of various descriptions. Since the rise of cheap CD-R discs, and flash memory-based digital audio players, the phenomenon of "home taping" has effectively switched to recording to a Compact Disc or downloading from commercial or music-sharing websites.
Because of consumer demand, the cassette has remained influential on design, more than a decade after its decline as a media mainstay. As the Compact Disc grew in popularity, cassette-shaped audio adapters were developed to provide an economical and clear way to obtain CD functionality in vehicles equipped with cassette decks but no CD player. A portable CD player would have its analog line-out connected to the adapter, which in turn fed the signal to the head of the cassette deck. These adapters continue to function with MP3 players and smartphones, and generally are more reliable than the FM transmitters that must be used to adapt CD players and digital audio players to car stereo systems. Digital audio players shaped as cassettes have also become available, which can be inserted into any cassette player and communicate with the head as if they were normal cassettes.
| Technology | Media and communication: Basics | null |
65886 | https://en.wikipedia.org/wiki/Microphone | Microphone | A microphone, colloquially called a mic (), or mike, is a transducer that converts sound into an electrical signal. Microphones are used in many applications such as telephones, hearing aids, public address systems for concert halls and public events, motion picture production, live and recorded audio engineering, sound recording, two-way radios, megaphones, and radio and television broadcasting. They are also used in computers and other electronic devices, such as mobile phones, for recording sounds, speech recognition, VoIP, and other purposes, such as ultrasonic sensors or knock sensors.
Several types of microphone are used today, which employ different methods to convert the air pressure variations of a sound wave to an electrical signal. The most common are the dynamic microphone, which uses a coil of wire suspended in a magnetic field; the condenser microphone, which uses the vibrating diaphragm as a capacitor plate; and the contact microphone, which uses a crystal of piezoelectric material. Microphones typically need to be connected to a preamplifier before the signal can be recorded or reproduced.
History
In order to speak to larger groups of people, a need arose to increase the volume of the human voice. The earliest devices used to achieve this were acoustic megaphones. Some of the first examples, from fifth-century-BC Greece, were theater masks with horn-shaped mouth openings that acoustically amplified the voice of actors in amphitheaters. In 1665, the English physicist Robert Hooke was the first to experiment with a medium other than air with the invention of the "lovers' telephone" made of stretched wire with a cup attached at each end.
In 1856, Italian inventor Antonio Meucci developed a dynamic microphone based on the generation of electric current by moving a coil of wire to various depths in a magnetic field. This method of modulation was also the most enduring method for the technology of the telephone as well. Speaking of his device, Meucci wrote in 1857, "It consists of a vibrating diaphragm and an electrified magnet with a spiral wire that wraps around it. The vibrating diaphragm alters the current of the magnet. These alterations of current, transmitted to the other end of the wire, create analogous vibrations of the receiving diaphragm and reproduce the word."
In 1861, German inventor Johann Philipp Reis built an early sound transmitter (the "Reis telephone") that used a metallic strip attached to a vibrating membrane that would produce intermittent current. Better results were achieved in 1876 with the "liquid transmitter" design in early telephones from Alexander Graham Bell and Elisha Gray – the diaphragm was attached to a conductive rod in an acid solution. These systems, however, gave a very poor sound quality.
The first microphone that enabled proper voice telephony was the (loose-contact) carbon microphone. This was independently developed by David Edward Hughes in England and Emile Berliner and Thomas Edison in the US. Although Edison was awarded the first patent in mid-1877 (after a long legal dispute), Hughes had demonstrated his working device in front of many witnesses some years earlier, and most historians credit him with its invention. The Berliner microphone found commercial success through the use by Alexander Graham Bell for his telephone and Berliner became employed by Bell. The carbon microphone was critical in the development of telephony, broadcasting and the recording industries. Thomas Edison refined the carbon microphone into his carbon-button transmitter of 1886. This microphone was employed at the first radio broadcast ever, a performance at the New York Metropolitan Opera House in 1910.
In 1916, E.C. Wente of Western Electric developed the next breakthrough with the first condenser microphone. In 1923, the first practical moving coil microphone was built. The Marconi-Sykes magnetophone, developed by Captain H. J. Round, became the standard for BBC studios in London. This was improved in 1930 by Alan Blumlein and Herbert Holman, who released the HB1A, which was the best standard of the day.
Also in 1923, the ribbon microphone was introduced, another electromagnetic type, believed to have been developed by Harry F. Olson, who applied the concept used in a ribbon speaker to making a microphone. Over the years these microphones were developed by several companies, most notably RCA, which made large advancements in pattern control to give the microphone directionality. With television and film technology booming, there was a demand for high-fidelity microphones and greater directionality. Electro-Voice responded with their Academy Award-winning shotgun microphone in 1963.
During the second half of the 20th century, development advanced quickly with the Shure Brothers bringing out the SM58 and SM57.
Varieties
Microphones are categorized by their transducer principle (condenser, dynamic, etc.) and by their directional characteristics (omni, cardioid, etc.). Sometimes other characteristics such as diaphragm size, intended use or orientation of the principal sound input to the principal axis (end- or side-address) of the microphone are used to describe the microphone.
Condenser
The condenser microphone, invented at Western Electric in 1916 by E. C. Wente, is also called a capacitor microphone or electrostatic microphone—capacitors were historically called condensers. The diaphragm acts as one plate of a capacitor, and audio vibrations produce changes in the distance between the plates. Because the capacitance of the plates is inversely proportional to the distance between them, the vibrations produce changes in capacitance. These changes in capacitance are used to measure the audio signal. The assembly of fixed and movable plates is called an element or capsule.
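To make the inverse relationship concrete, the sketch below uses the ideal parallel-plate formula C = ε0·A/d with made-up capsule dimensions, chosen only so the result lands in the tens-of-picofarads range typical of condenser capsules; it is an illustration of the principle, not a model of any particular microphone.

```python
# Illustrative only: ideal parallel-plate capacitor, C = eps0 * A / d.
# The area and spacing below are hypothetical values, not taken from the article.
EPS0 = 8.854e-12   # permittivity of free space, F/m
AREA_M2 = 2.0e-4   # hypothetical diaphragm area (roughly a 16 mm diameter disc)
GAP_M = 3.5e-5     # hypothetical plate spacing at rest, 35 micrometres

def capacitance_f(gap_m: float) -> float:
    """Capacitance of an ideal parallel-plate capacitor with the given spacing."""
    return EPS0 * AREA_M2 / gap_m

c_rest = capacitance_f(GAP_M)
c_pressed = capacitance_f(GAP_M * 0.99)   # diaphragm pushed 1% closer by a sound wave
print(f"at rest: {c_rest * 1e12:.1f} pF, 1% closer: {c_pressed * 1e12:.1f} pF")
# Because C varies as 1/d, a 1% reduction in spacing raises the capacitance by
# roughly 1%; it is this small change that the electronics turn into an audio signal.
```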
Condenser microphones span the range from telephone mouthpieces through inexpensive karaoke microphones to high-fidelity recording microphones. They generally produce a high-quality audio signal and are now the popular choice in laboratory and recording studio applications. The inherent suitability of this technology is due to the very small mass that must be moved by the incident sound wave compared to other microphone types that require the sound wave to do more work.
Condenser microphones require a power source, provided either via microphone inputs on equipment as phantom power or from a small battery. Power is necessary for establishing the capacitor plate voltage and is also needed to power the microphone electronics. Condenser microphones are also available with two diaphragms that can be electrically connected to provide a range of polar patterns, such as cardioid, omnidirectional, and figure-eight. It is also possible to vary the pattern continuously with some microphones, for example, the Røde NT2000 or CAD M179.
There are two main categories of condenser microphones, depending on the method of extracting the audio signal from the transducer: DC-biased microphones, and radio frequency (RF) or high frequency (HF) condenser microphones.
DC-biased condenser
With a DC-biased condenser microphone, the plates are biased with a fixed charge (Q). The voltage maintained across the capacitor plates changes with the vibrations in the air, according to the capacitance equation (C = Q/V), where Q = charge in coulombs, C = capacitance in farads and V = potential difference in volts. A nearly constant charge is maintained on the capacitor. As the capacitance changes, the charge across the capacitor does change very slightly, but at audible frequencies it is sensibly constant. The capacitance of the capsule (around 5 to 100 pF) and the value of the bias resistor (100 MΩ to tens of GΩ) form a filter that is high-pass for the audio signal, and low-pass for the bias voltage. Note that the time constant of an RC circuit equals the product of the resistance and capacitance.
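A short numerical sketch of that filter, using values from the ranges just quoted (50 pF and 1 GΩ are arbitrary picks within them), shows why the audio band sees an essentially constant charge: the high-pass corner frequency f_c = 1/(2πRC) lands at only a few hertz.

```python
# Minimal sketch: corner frequency of the RC network formed by the capsule
# capacitance and bias resistor.  Component values are illustrative picks
# from the ranges given in the text.
import math

CAPSULE_C_F = 50e-12   # 50 pF, within the 5-100 pF range
BIAS_R_OHM = 1e9       # 1 gigaohm, within the 100 Mohm to tens of Gohm range

time_constant_s = BIAS_R_OHM * CAPSULE_C_F
corner_freq_hz = 1 / (2 * math.pi * BIAS_R_OHM * CAPSULE_C_F)

print(f"RC time constant: {time_constant_s * 1000:.0f} ms")  # 50 ms
print(f"high-pass corner: {corner_freq_hz:.1f} Hz")          # ~3.2 Hz
# Well below the audio band, so audio-frequency capacitance changes appear as
# voltage changes across the resistor, while the DC bias passes through unaffected.
```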
Within the time frame of the capacitance change (as much as 50 ms at 20 Hz audio signal), the charge is practically constant and the voltage across the capacitor changes instantaneously to reflect the change in capacitance. The voltage across the capacitor varies above and below the bias voltage. The voltage difference between the bias and the capacitor is seen across the series resistor. The voltage across the resistor is amplified for performance or recording. In most cases, the electronics in the microphone itself contribute no voltage gain as the voltage differential is quite significant, up to several volts for high sound levels.
RF condenser
RF condenser microphones use a comparatively low RF voltage, generated by a low-noise oscillator. The signal from the oscillator may either be amplitude modulated by the capacitance changes produced by the sound waves moving the capsule diaphragm, or the capsule may be part of a resonant circuit that modulates the frequency of the oscillator signal. Demodulation yields a low-noise audio frequency signal with a very low source impedance. The absence of a high bias voltage permits the use of a diaphragm with looser tension, which may be used to achieve wider frequency response due to higher compliance. The RF biasing process results in a lower electrical impedance capsule, a useful by-product of which is that RF condenser microphones can be operated in damp weather conditions that could create problems in DC-biased microphones with contaminated insulating surfaces. The Sennheiser MKH series of microphones use the RF biasing technique. A covert, remotely energized application of the same physical principle called the Thing was devised by Soviet Russian inventor Leon Theremin and used to bug the US Ambassador's residence in Moscow between 1945 and 1952.
Electret condenser
An electret microphone is a type of condenser microphone invented by Gerhard Sessler and Jim West at Bell laboratories in 1962. The externally applied charge used for a conventional condenser microphone is replaced by a permanent charge in an electret material. An electret is a ferroelectric material that has been permanently electrically charged or polarized. The name comes from electrostatic and magnet; a static charge is embedded in an electret by the alignment of the static charges in the material, much the way a permanent magnet is made by aligning the magnetic domains in a piece of iron.
Due to their good performance and ease of manufacture, hence low cost, the vast majority of microphones made today are electret microphones; a semiconductor manufacturer estimates annual production at over one billion units. They are used in many applications, from high-quality recording and lavalier (lapel mic) use to built-in microphones in small sound recording devices and telephones. Prior to the proliferation of MEMS microphones, nearly all cell-phone, computer, PDA and headset microphones were electret types.
Unlike other capacitor microphones, they require no polarizing voltage, but often contain an integrated preamplifier that does require power. This preamplifier is frequently phantom powered in sound reinforcement and studio applications. Monophonic microphones designed for personal computers (PCs), sometimes called multimedia microphones, use a 3.5 mm plug as usually used for stereo connections; the ring, instead of carrying the signal for a second channel, carries power.
Valve microphone
A valve microphone is a condenser microphone that uses a vacuum tube (valve) amplifier. They remain popular with enthusiasts of tube sound.
Dynamic
The dynamic microphone (also known as the moving-coil microphone) works via electromagnetic induction. They are robust, relatively inexpensive and resistant to moisture. This, coupled with their potentially high gain before feedback, makes them popular for on-stage use.
Dynamic microphones use the same dynamic principle as in a loudspeaker, only reversed. A small movable induction coil, positioned in the magnetic field of a permanent magnet, is attached to the diaphragm. When sound enters through the windscreen of the microphone, the sound wave moves the diaphragm which moves the coil in the magnetic field, producing a varying voltage across the coil through electromagnetic induction.
Ribbon
Ribbon microphones use a thin, usually corrugated metal ribbon suspended in a magnetic field. The ribbon is electrically connected to the microphone's output, and its vibration within the magnetic field generates the electrical signal. Ribbon microphones are similar to moving coil microphones in the sense that both generate a signal by means of magnetic induction. Basic ribbon microphones detect sound in a bi-directional (also called figure-eight) pattern because the ribbon is open on both sides. Also, because the ribbon has much less mass, it responds to the air velocity rather than the sound pressure. Though the symmetrical front and rear pickup can be a nuisance in normal stereo recording, the high side rejection can be used to advantage by positioning a ribbon microphone horizontally, for example above cymbals, so that the rear lobe picks up sound only from the cymbals. The figure-eight response of a ribbon microphone is ideal for Blumlein pair stereo recording.
Other directional patterns are produced by enclosing one side of the ribbon in an acoustic trap or baffle, allowing sound to reach only one side. The classic RCA Type 77-DX microphone has several externally adjustable positions of the internal baffle, allowing the selection of several response patterns ranging from "figure-eight" to "unidirectional". Such older ribbon microphones, some of which still provide high-quality sound reproduction, were once valued for this reason, but a good low-frequency response could be obtained only when the ribbon was suspended very loosely, which made them relatively fragile. Modern ribbon materials, including new nanomaterials, have now been introduced that eliminate those concerns and even improve the effective dynamic range of ribbon microphones at low frequencies. Protective wind screens can reduce the danger of damaging a vintage ribbon, and also reduce plosive artifacts in the recording. Properly designed wind screens produce negligible treble attenuation. In common with other classes of dynamic microphone, ribbon microphones do not require phantom power; in fact, this voltage can damage some older ribbon microphones. Some new modern ribbon microphone designs incorporate a preamplifier and, therefore, do require phantom power, and circuits of modern passive ribbon microphones (i.e. those without the aforementioned preamplifier) are specifically designed to resist damage to the ribbon and transformer by phantom power. Also there are new ribbon materials available that are immune to wind blasts and phantom power.
Carbon
The carbon microphone was the earliest type of microphone. The carbon button microphone (or sometimes just a button microphone), uses a capsule or button containing carbon granules pressed between two metal plates like the Berliner and Edison microphones. A voltage is applied across the metal plates, causing a small current to flow through the carbon. One of the plates, the diaphragm, vibrates in sympathy with incident sound waves, applying a varying pressure to the carbon. The changing pressure deforms the granules, causing the contact area between each pair of adjacent granules to change, and this causes the electrical resistance of the mass of granules to change. The changes in resistance cause a corresponding change in the current flowing through the microphone, producing the electrical signal. Carbon microphones were once commonly used in telephones; they have extremely low-quality sound reproduction and a very limited frequency response range but are very robust devices. The Boudet microphone, which used relatively large carbon balls, was similar to the granule carbon button microphones.
Unlike other microphone types, the carbon microphone can also be used as a type of amplifier, using a small amount of sound energy to control a larger amount of electrical energy. Carbon microphones found use as early telephone repeaters, making long-distance phone calls possible in the era before vacuum tubes. Called a Brown's relay, these repeaters worked by mechanically coupling a magnetic telephone receiver to a carbon microphone: the faint signal from the receiver was transferred to the microphone, where it modulated a stronger electric current, producing a stronger electrical signal to send down the line.
Piezoelectric
A crystal microphone or piezo microphone uses the phenomenon of piezoelectricity—the ability of some materials to produce a voltage when subjected to pressure—to convert vibrations into an electrical signal. An example of this is potassium sodium tartrate, which is a piezoelectric crystal that works as a transducer, both as a microphone and as a slimline loudspeaker component. Crystal microphones were once commonly supplied with vacuum tube (valve) equipment, such as domestic tape recorders. Their high output impedance matched the high input impedance (typically about 10 MΩ) of the vacuum tube input stage well. They were difficult to match to early transistor equipment and were quickly supplanted by dynamic microphones for a time, and later small electret condenser devices. The high impedance of the crystal microphone made it very susceptible to handling noise, both from the microphone itself and from the connecting cable.
Piezoelectric transducers are often used as contact microphones to amplify sound from acoustic musical instruments, to sense drum hits, for triggering electronic samples, and to record sound in challenging environments, such as underwater under high pressure. Saddle-mounted pickups on acoustic guitars are generally piezoelectric devices that contact the strings passing over the saddle. This type of microphone is different from magnetic coil pickups commonly visible on typical electric guitars, which use magnetic induction, rather than mechanical coupling, to pick up vibration.
Fiber-optic
A fiber-optic microphone converts acoustic waves into electrical signals by sensing changes in light intensity, instead of sensing changes in capacitance or magnetic fields as with conventional microphones.
During operation, light from a laser source travels through an optical fiber to illuminate the surface of a reflective diaphragm. Sound vibrations of the diaphragm modulate the intensity of light reflecting off the diaphragm in a specific direction. The modulated light is then transmitted over a second optical fiber to a photodetector, which transforms the intensity-modulated light into analog or digital audio for transmission or recording. Fiber-optic microphones possess high dynamic and frequency range, similar to the best high fidelity conventional microphones.
Fiber-optic microphones do not react to or influence any electrical, magnetic, electrostatic or radioactive fields (this is called EMI/RFI immunity). The fiber-optic microphone design is therefore ideal for use in areas where conventional microphones are ineffective or dangerous, such as inside industrial turbines or in magnetic resonance imaging (MRI) equipment environments.
Fiber-optic microphones are robust, resistant to environmental changes in heat and moisture, and can be produced for any directionality or impedance matching. The distance between the microphone's light source and its photodetector may be up to several kilometers without need for any preamplifier or another electrical device, making fiber-optic microphones suitable for industrial and surveillance acoustic monitoring.
Fiber-optic microphones are used in very specific application areas such as for infrasound monitoring and noise cancellation. They have proven especially useful in medical applications, such as allowing radiologists, staff and patients within the powerful and noisy magnetic field to converse normally, inside the MRI suites as well as in remote control rooms. Other uses include industrial equipment monitoring and audio calibration and measurement, high-fidelity recording and law enforcement.
Laser
Laser microphones are often portrayed in movies as spy gadgets because they can be used to pick up sound at a distance from the microphone equipment. A laser beam is aimed at the surface of a window or other plane surface that is affected by sound. The vibrations of this surface change the angle at which the beam is reflected, and the motion of the laser spot from the returning beam is detected and converted to an audio signal.
In a more robust and expensive implementation, the returned light is split and fed to an interferometer, which detects movement of the surface by changes in the optical path length of the reflected beam. The former implementation is a tabletop experiment; the latter requires an extremely stable laser and precise optics.
A new type of laser microphone is a device that uses a laser beam and smoke or vapor to detect sound vibrations in free air. On August 25, 2009, U.S. patent 7,580,533 was issued for a Particulate Flow Detection Microphone based on a laser-photocell pair with a moving stream of smoke or vapor in the laser beam's path. Sound pressure waves cause disturbances in the smoke that in turn cause variations in the amount of laser light reaching the photodetector. A prototype of the device was demonstrated at the 127th Audio Engineering Society convention in New York City from October 9 through 12, 2009.
Liquid
Early microphones did not produce intelligible speech until Alexander Graham Bell made improvements, including a variable-resistance microphone/transmitter. Bell's liquid transmitter consisted of a metal cup filled with water with a small amount of sulfuric acid added. A sound wave caused the diaphragm to move, forcing a needle to move up and down in the water. The electrical resistance between the wire and the cup was then inversely proportional to the size of the water meniscus around the submerged needle. Elisha Gray filed a caveat for a version using a brass rod instead of the needle. Other minor variations and improvements were made to the liquid microphone by Majorana, Chambers, Vanni, Sykes, and Elisha Gray, and one version was patented by Reginald Fessenden in 1903. These were the first working microphones, but they were not practical for commercial application. The famous first phone conversation between Bell and Watson took place using a liquid microphone.
MEMS
The MEMS (microelectromechanical systems) microphone is also called a microphone chip or silicon microphone. A pressure-sensitive diaphragm is etched directly into a silicon wafer by MEMS processing techniques and is usually accompanied with an integrated preamplifier. Most MEMS microphones are variants of the condenser microphone design. Digital MEMS microphones have built-in analog-to-digital converter (ADC) circuits on the same CMOS chip making the chip a digital microphone and so more readily integrated with modern digital products. Major manufacturers producing MEMS silicon microphones are Wolfson Microelectronics (WM7xxx) now Cirrus Logic, InvenSense (product line sold by Analog Devices), Akustica (AKU200x), Infineon (SMM310 product), Knowles Electronics, Memstech (MSMx), NXP Semiconductors (division bought by Knowles), Sonion MEMS, Vesper, AAC Acoustic Technologies, and Omron.
More recently, since the 2010s, there has been increased interest and research into making piezoelectric MEMS microphones which are a significant architectural and material change from existing condenser style MEMS designs.
Plasma
In a plasma microphone, a plasma arc of ionized gas is used. The sound waves cause variations in the pressure around the plasma in turn causing variations in temperature which alter the conductance of the plasma. These variations in conductance can be picked up as variations superimposed on the electrical supply to the plasma. This is an experimental form of microphone.
Speakers as microphones
A loudspeaker, a transducer that turns an electrical signal into sound waves, is the functional opposite of a microphone. Since a conventional speaker is similar in construction to a dynamic microphone (with a diaphragm, coil and magnet), speakers can actually work "in reverse" as microphones. Reciprocity applies, so the resulting microphone has the same impairments as a single-driver loudspeaker: limited low- and high-end frequency response, poorly controlled directivity, and low sensitivity. In practical use, speakers are sometimes used as microphones in applications where high bandwidth and sensitivity are not needed such as intercoms, walkie-talkies or video game voice chat peripherals, or when conventional microphones are in short supply.
However, there is at least one practical application that exploits those weaknesses: the use of a medium-size woofer placed closely in front of a "kick drum" (bass drum) in a drum set to act as a microphone. A commercial product example is the Yamaha Subkick, a woofer shock-mounted into a 10" drum shell used in front of kick drums. Since a relatively massive membrane is unable to transduce high frequencies while being capable of tolerating strong low-frequency transients, the speaker is often ideal for picking up the kick drum while reducing bleed from the nearby cymbals and snare drums.
Capsule design and directivity
The inner elements of a microphone are the primary source of differences in directivity. A pressure microphone uses a diaphragm between a fixed internal volume of air and the environment and responds uniformly to pressure from all directions, so it is said to be omnidirectional. A pressure-gradient microphone uses a diaphragm that is at least partially open on both sides. The pressure difference between the two sides produces its directional characteristics. Other elements such as the external shape of the microphone and external devices such as interference tubes can also alter a microphone's directional response. A pure pressure-gradient microphone is equally sensitive to sounds arriving from front or back but insensitive to sounds arriving from the side because sound arriving at the front and back at the same time creates no gradient between the two. The characteristic directional pattern of a pure pressure-gradient microphone is like a figure-8. Other polar patterns are derived by creating a capsule that combines these two effects in different ways. The cardioid, for instance, features a partially closed backside, so its response is a combination of pressure and pressure-gradient characteristics.
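Since every first-order pattern is described above as a weighted mix of a pressure (omnidirectional) component and a pressure-gradient (figure-eight) component, a short Python sketch can make the combination concrete. The weights used are the textbook values for omni, cardioid, and figure-eight responses, not measurements of any particular capsule.

```python
import math

def polar_response(theta_deg, pressure_weight):
    """First-order microphone response: a mix of an omnidirectional
    (pressure) component and a figure-eight (pressure-gradient) component.
    pressure_weight = 1.0 -> omnidirectional, 0.5 -> cardioid,
    0.0 -> pure figure-eight."""
    theta = math.radians(theta_deg)
    return pressure_weight + (1.0 - pressure_weight) * math.cos(theta)

for name, w in [("omni", 1.0), ("cardioid", 0.5), ("figure-eight", 0.0)]:
    front = polar_response(0, w)
    side = polar_response(90, w)
    rear = polar_response(180, w)
    print(f"{name:13s} front={front:+.2f} side={side:+.2f} rear={rear:+.2f}")
```

The printout shows, for example, that the cardioid keeps full sensitivity at the front, half at the sides, and none at the rear, exactly the combination of pressure and pressure-gradient behaviour described in the text.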
Polar patterns
| Technology | Media and communication | null |
65888 | https://en.wikipedia.org/wiki/Electromagnetic%20induction | Electromagnetic induction | Electromagnetic or magnetic induction is the production of an electromotive force (emf) across an electrical conductor in a changing magnetic field.
Michael Faraday is generally credited with the discovery of induction in 1831, and James Clerk Maxwell mathematically described it as Faraday's law of induction. Lenz's law describes the direction of the induced field. Faraday's law was later generalized to become the Maxwell–Faraday equation, one of the four Maxwell equations in his theory of electromagnetism.
Electromagnetic induction has found many applications, including electrical components such as inductors and transformers, and devices such as electric motors and generators.
History
Electromagnetic induction was discovered by Michael Faraday, published in 1831. It was discovered independently by Joseph Henry in 1832.
In Faraday's first experimental demonstration (August 29, 1831), he wrapped two wires around opposite sides of an iron ring or "torus" (an arrangement similar to a modern toroidal transformer). Based on his understanding of electromagnets, he expected that, when current started to flow in one wire, a sort of wave would travel through the ring and cause some electrical effect on the opposite side. He plugged one wire into a galvanometer, and watched it as he connected the other wire to a battery. He saw a transient current, which he called a "wave of electricity", when he connected the wire to the battery and another when he disconnected it. This induction was due to the change in magnetic flux that occurred when the battery was connected and disconnected. Within two months, Faraday found several other manifestations of electromagnetic induction. For example, he saw transient currents when he quickly slid a bar magnet in and out of a coil of wires, and he generated a steady (DC) current by rotating a copper disk near the bar magnet with a sliding electrical lead ("Faraday's disk").
Faraday explained electromagnetic induction using a concept he called lines of force. However, scientists at the time widely rejected his theoretical ideas, mainly because they were not formulated mathematically. An exception was James Clerk Maxwell, who used Faraday's ideas as the basis of his quantitative electromagnetic theory. In Maxwell's model, the time varying aspect of electromagnetic induction is expressed as a differential equation, which Oliver Heaviside referred to as Faraday's law even though it is slightly different from Faraday's original formulation and does not describe motional emf. Heaviside's version (see Maxwell–Faraday equation below) is the form recognized today in the group of equations known as Maxwell's equations.
In 1834 Heinrich Lenz formulated the law named after him to describe the "flux through the circuit". Lenz's law gives the direction of the induced emf and current resulting from electromagnetic induction.
Theory
Faraday's law of induction and Lenz's law
Faraday's law of induction makes use of the magnetic flux ΦB through a region of space enclosed by a wire loop. The magnetic flux is defined by a surface integral:

ΦB = ∬Σ B · dA

where dA is an element of the surface Σ enclosed by the wire loop and B is the magnetic field. The dot product B·dA corresponds to an infinitesimal amount of magnetic flux. In more visual terms, the magnetic flux through the wire loop is proportional to the number of magnetic field lines that pass through the loop.
When the flux through the surface changes, Faraday's law of induction says that the wire loop acquires an electromotive force (emf). The most widespread version of this law states that the induced electromotive force in any closed circuit is equal to the rate of change of the magnetic flux enclosed by the circuit:

ε = −dΦB/dt

where ε is the emf and ΦB is the magnetic flux. The direction of the electromotive force is given by Lenz's law, which states that an induced current will flow in the direction that will oppose the change which produced it. This is due to the negative sign in the previous equation. To increase the generated emf, a common approach is to exploit flux linkage by creating a tightly wound coil of wire, composed of N identical turns, each with the same magnetic flux going through them. The resulting emf is then N times that of one single wire.
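As a minimal numeric illustration of this flux rule, the sketch below applies ε = −N·ΔΦ/Δt to a coil; the turn count, flux values, and time interval are assumed example numbers.

```python
# Minimal sketch of Faraday's law for a tightly wound coil:
# emf = -N * dPhi/dt, approximated here with a finite difference.
# All numbers are illustrative assumptions.
N_TURNS = 500            # identical turns in the coil
FLUX_BEFORE = 2.0e-3     # Wb through each turn at t = 0 s
FLUX_AFTER = 5.0e-3      # Wb through each turn at t = 0.1 s
DT = 0.1                 # s

d_phi = FLUX_AFTER - FLUX_BEFORE
emf = -N_TURNS * d_phi / DT
print(f"average induced emf = {emf:.2f} V")   # -> -15.00 V
```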
Generating an emf through a variation of the magnetic flux through the surface of a wire loop can be achieved in several ways:
the magnetic field B changes (e.g. an alternating magnetic field, or moving a wire loop towards a bar magnet where the B field is stronger),
the wire loop is deformed and the surface Σ changes,
the orientation of the surface dA changes (e.g. spinning a wire loop into a fixed magnetic field),
any combination of the above
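The third case in the list above (spinning a wire loop in a fixed magnetic field) can be sketched numerically. For a coil of N turns and area A rotating at angular frequency w in a uniform field B, the flux through the coil is B·A·cos(wt), so the induced emf is N·B·A·w·sin(wt); the values below are illustrative assumptions.

```python
import math

# Rotating-loop sketch: Phi(t) = B*A*cos(w*t), so emf(t) = N*B*A*w*sin(w*t).
N, B, A = 100, 0.5, 0.01          # turns, tesla, square metres (assumed)
w = 2 * math.pi * 50              # rad/s, 50 Hz rotation

def emf(t):
    return N * B * A * w * math.sin(w * t)

peak = N * B * A * w
print(f"peak emf = {peak:.1f} V")             # about 157.1 V
print(f"emf at t = 5 ms: {emf(0.005):.1f} V") # sin is at its maximum here
```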
Maxwell–Faraday equation
In general, the relation between the emf ε in a wire loop encircling a surface Σ, and the electric field E in the wire, is given by

ε = ∮∂Σ E · dℓ

where dℓ is an element of the contour ∂Σ of the surface Σ. Combining this with the definition of flux

ΦB = ∬Σ B · dA,

we can write the integral form of the Maxwell–Faraday equation

∮∂Σ E · dℓ = −(d/dt) ∬Σ B · dA
It is one of the four Maxwell's equations, and therefore plays a fundamental role in the theory of classical electromagnetism.
Faraday's law and relativity
Faraday's law describes two different phenomena: the motional emf generated by a magnetic force on a moving wire (see Lorentz force), and the transformer emf that is generated by an electric force due to a changing magnetic field (described by the differential form of the Maxwell–Faraday equation). James Clerk Maxwell drew attention to the separate physical phenomena in 1861. This is believed to be a unique example in physics of a single fundamental law being invoked to explain two such different phenomena.
Albert Einstein noticed that the two situations both corresponded to a relative movement between a conductor and a magnet, and the outcome was unaffected by which one was moving. This was one of the principal paths that led him to develop special relativity.
Applications
The principles of electromagnetic induction are applied in many devices and systems, including:
Electrical generator
The emf generated by Faraday's law of induction due to relative movement of a circuit and a magnetic field is the phenomenon underlying electrical generators. When a permanent magnet is moved relative to a conductor, or vice versa, an electromotive force is created. If the wire is connected through an electrical load, current will flow, and thus electrical energy is generated, converting the mechanical energy of motion to electrical energy. For example, the drum generator is based upon the figure to the bottom-right. A different implementation of this idea is the Faraday's disc, shown in simplified form on the right.
In the Faraday's disc example, the disc is rotated in a uniform magnetic field perpendicular to the disc, causing a current to flow in the radial arm due to the Lorentz force. Mechanical work is necessary to drive this current. When the generated current flows through the conducting rim, a magnetic field is generated by this current through Ampère's circuital law (labelled "induced B" in the figure). The rim thus becomes an electromagnet that resists rotation of the disc (an example of Lenz's law). On the far side of the figure, the return current flows from the rotating arm through the far side of the rim to the bottom brush. The B-field induced by this return current opposes the applied B-field, tending to decrease the flux through that side of the circuit, opposing the increase in flux due to rotation. On the near side of the figure, the return current flows from the rotating arm through the near side of the rim to the bottom brush. The induced B-field increases the flux on this side of the circuit, opposing the decrease in flux due to the rotation. The energy required to keep the disc moving, despite this reactive force, is exactly equal to the electrical energy generated (plus energy wasted due to friction, Joule heating, and other inefficiencies). This behavior is common to all generators converting mechanical energy to electrical energy.
Electrical transformer
When the electric current in a loop of wire changes, the changing current creates a changing magnetic field. A second wire in reach of this magnetic field will experience this change in magnetic field as a change in its coupled magnetic flux, . Therefore, an electromotive force is set up in the second loop called the induced emf or transformer emf. If the two ends of this loop are connected through an electrical load, current will flow.
Current clamp
A current clamp is a type of transformer with a split core which can be spread apart and clipped onto a wire or coil to either measure the current in it or, in reverse, to induce a voltage. Unlike conventional instruments the clamp does not make electrical contact with the conductor or require it to be disconnected during attachment of the clamp.
Magnetic flow meter
Faraday's law is used for measuring the flow of electrically conductive liquids and slurries. Such instruments are called magnetic flow meters. The induced voltage ε generated in the magnetic field B due to a conductive liquid moving at velocity v is given by:

ε = B ℓ v

where ℓ is the distance between electrodes in the magnetic flow meter.
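In practice the relation is rearranged to recover the flow velocity from the measured electrode voltage, v = ε / (B ℓ). The Python sketch below applies this with assumed example numbers rather than figures from any particular instrument.

```python
# Magnetic flow meter sketch: induced voltage eps = B * l * v, so the flow
# velocity is v = eps / (B * l). All numbers below are assumptions.
B = 0.1                 # tesla, applied field
L_ELECTRODES = 0.05     # metres between the electrodes
EPS_MEASURED = 1.5e-3   # volts read from the electrodes

v = EPS_MEASURED / (B * L_ELECTRODES)
print(f"flow velocity = {v:.2f} m/s")   # -> 0.30 m/s
```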
Eddy currents
Electrical conductors moving through a steady magnetic field, or stationary conductors within a changing magnetic field, will have circular currents induced within them by induction, called eddy currents. Eddy currents flow in closed loops in planes perpendicular to the magnetic field. They have useful applications in eddy current brakes and induction heating systems. However eddy currents induced in the metal magnetic cores of transformers and AC motors and generators are undesirable since they dissipate energy (called core losses) as heat in the resistance of the metal. Cores for these devices use a number of methods to reduce eddy currents:
Cores of low frequency alternating current electromagnets and transformers, instead of being solid metal, are often made of stacks of metal sheets, called laminations, separated by nonconductive coatings. These thin plates reduce the undesirable parasitic eddy currents, as described below.
Inductors and transformers used at higher frequencies often have magnetic cores made of nonconductive magnetic materials such as ferrite or iron powder held together with a resin binder.
Electromagnet laminations
Eddy currents occur when a solid metallic mass is rotated in a magnetic field, because the outer portion of the metal cuts more magnetic lines of force than the inner portion; hence the induced electromotive force is not uniform; this tends to cause electric currents between the points of greatest and least potential. Eddy currents consume a considerable amount of energy and often cause a harmful rise in temperature.
Only five laminations or plates are shown in this example, so as to show the subdivision of the eddy currents. In practical use, the number of laminations or punchings ranges from 40 to 66 per inch (16 to 26 per centimetre), and brings the eddy current loss down to about one percent. While the plates can be separated by insulation, the voltage is so low that the natural rust/oxide coating of the plates is enough to prevent current flow across the laminations.
This is a rotor approximately 20 mm in diameter from a DC motor. Note the laminations of the electromagnet pole pieces, used to limit parasitic inductive losses.
Parasitic induction within conductors
In this illustration, a solid copper bar conductor on a rotating armature is just passing under the tip of the pole piece N of the field magnet. Note the uneven distribution of the lines of force across the copper bar. The magnetic field is more concentrated and thus stronger on the left edge of the copper bar (a,b) while the field is weaker on the right edge (c,d). Since the two edges of the bar move with the same velocity, this difference in field strength across the bar creates whorls or current eddies within the copper bar.
High current power-frequency devices, such as electric motors, generators and transformers, use multiple small conductors in parallel to break up the eddy flows that can form within large solid conductors. The same principle is applied to transformers used at higher than power frequency, for example, those used in switch-mode power supplies and the intermediate frequency coupling transformers of radio receivers.
| Physical sciences | Electrodynamics | null |
65890 | https://en.wikipedia.org/wiki/Magnetic%20flux | Magnetic flux | In physics, specifically electromagnetism, the magnetic flux through a surface is the surface integral of the normal component of the magnetic field B over that surface. It is usually denoted or . The SI unit of magnetic flux is the weber (Wb; in derived units, volt–seconds or V⋅s), and the CGS unit is the maxwell. Magnetic flux is usually measured with a fluxmeter, which contains measuring coils, and it calculates the magnetic flux from the change of voltage on the coils.
Description
The magnetic interaction is described in terms of a vector field, where each point in space is associated with a vector that determines what force a moving charge would experience at that point (see Lorentz force). Since a vector field is quite difficult to visualize, introductory physics instruction often uses field lines to visualize this field. The magnetic flux through some surface, in this simplified picture, is proportional to the number of field lines passing through that surface (in some contexts, the flux may be defined to be precisely the number of field lines passing through that surface; although technically misleading, this distinction is not important). The magnetic flux is the net number of field lines passing through that surface; that is, the number passing through in one direction minus the number passing through in the other direction (see below for deciding in which direction the field lines carry a positive sign and in which they carry a negative sign).
More sophisticated physical models drop the field line analogy and define magnetic flux as the surface integral of the normal component of the magnetic field passing through a surface. If the magnetic field is constant, the magnetic flux passing through a surface of vector area S is

ΦB = B · S = B S cos θ

where B is the magnitude of the magnetic field (the magnetic flux density) having the unit of Wb/m2 (tesla), S is the area of the surface, and θ is the angle between the magnetic field lines and the normal (perpendicular) to S. For a varying magnetic field, we first consider the magnetic flux through an infinitesimal area element dS, where we may consider the field to be constant:

dΦB = B · dS

A generic surface, S, can then be broken into infinitesimal elements and the total magnetic flux through the surface is then the surface integral

ΦB = ∬S B · dS
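A short Python sketch can mirror these two definitions numerically: the uniform-field case uses ΦB = B·S·cos θ directly, and the varying-field case approximates the surface integral with a Riemann sum over small elements. The field profile and dimensions are invented purely for illustration.

```python
import math

# Uniform field through a flat, tilted surface: Phi = B * S * cos(theta)
B0, S, theta_deg = 0.2, 0.01, 60.0      # tesla, m^2, degrees (assumed)
phi_uniform = B0 * S * math.cos(math.radians(theta_deg))
print(f"uniform-field flux: {phi_uniform:.4f} Wb")

# Non-uniform field B(x) = B0*(1 + x), normal to a 0.1 m x 0.1 m square,
# approximated by summing B * dS over thin strips (a Riemann sum).
n = 1000
dx = 0.1 / n
width = 0.1
phi_sum = sum(B0 * (1 + (i + 0.5) * dx) * width * dx for i in range(n))
print(f"non-uniform-field flux: {phi_sum:.6f} Wb")   # about 0.0021 Wb
```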
From the definition of the magnetic vector potential A and the fundamental theorem of the curl, the magnetic flux may also be defined as:

ΦB = ∮∂S A · dℓ

where the line integral is taken over the boundary of the surface S, which is denoted ∂S.
Magnetic flux through a closed surface
Gauss's law for magnetism, which is one of the four Maxwell's equations, states that the total magnetic flux through a closed surface is equal to zero. (A "closed surface" is a surface that completely encloses a volume(s) with no holes.) This law is a consequence of the empirical observation that magnetic monopoles have never been found.
In other words, Gauss's law for magnetism is the statement:

∯S B · dA = 0

for any closed surface S.
Magnetic flux through an open surface
While the magnetic flux through a closed surface is always zero, the magnetic flux through an open surface need not be zero and is an important quantity in electromagnetism.
When determining the total magnetic flux through a surface only the boundary of the surface needs to be defined, the actual shape of the surface is irrelevant and the integral over any surface sharing the same boundary will be equal. This is a direct consequence of the closed surface flux being zero.
Changing magnetic flux
For example, a change in the magnetic flux passing through a loop of conductive wire will cause an electromotive force (emf), and therefore an electric current, in the loop. The relationship is given by Faraday's law:

ε = ∮∂Σ (E + v × B) · dℓ = −dΦB/dt

where:
ε is the electromotive force (EMF),
the minus sign represents Lenz's law,
ΦB is the magnetic flux through the open surface Σ,
∂Σ is the boundary of the open surface Σ; the surface, in general, may be in motion and deforming, and so is generally a function of time. The electromotive force is induced along this boundary.
dℓ is an infinitesimal vector element of the contour ∂Σ,
v is the velocity of the boundary ∂Σ,
E is the electric field, and
B is the magnetic field.
The two equations for the EMF are, firstly, the work per unit charge done against the Lorentz force in moving a test charge around the (possibly moving) surface boundary ∂Σ and, secondly, the change of magnetic flux through the open surface Σ. This equation is the principle behind an electrical generator.
Comparison with electric flux
By way of contrast, Gauss's law for electric fields, another of Maxwell's equations, is

∯S E · dA = Q/ε0
where
E is the electric field,
S is any closed surface,
Q is the total electric charge inside the surface S,
ε0 is the electric constant (a universal constant, also called the "permittivity of free space").
The flux of E through a closed surface is not always zero; this indicates the presence of "electric monopoles", that is, free positive or negative charges.
| Physical sciences | Magnetostatics | Physics |
65894 | https://en.wikipedia.org/wiki/Electromotive%20force | Electromotive force | In electromagnetism and electronics, electromotive force (also electromotance, abbreviated emf, denoted ) is an energy transfer to an electric circuit per unit of electric charge, measured in volts. Devices called electrical transducers provide an emf by converting other forms of energy into electrical energy. Other types of electrical equipment also produce an emf, such as batteries, which convert chemical energy, and generators, which convert mechanical energy. This energy conversion is achieved by physical forces applying physical work on electric charges. However, electromotive force itself is not a physical force, and ISO/IEC standards have deprecated the term in favor of source voltage or source tension instead (denoted ).
An electronic–hydraulic analogy may view emf as the mechanical work done to water by a pump, which results in a pressure difference (analogous to voltage).
In electromagnetic induction, emf can be defined around a closed loop of a conductor as the electromagnetic work that would be done on an elementary electric charge (such as an electron) if it travels once around the loop.
For two-terminal devices modeled as a Thévenin equivalent circuit, an equivalent emf can be measured as the open-circuit voltage between the two terminals. This emf can drive an electric current if an external circuit is attached to the terminals, in which case the device becomes the voltage source of that circuit.
Although an emf gives rise to a voltage and can be measured as a voltage and may sometimes informally be called a "voltage", they are not the same phenomenon (see ).
Overview
Devices that can provide emf include electrochemical cells, thermoelectric devices, solar cells, photodiodes, electrical generators, inductors, transformers and even Van de Graaff generators. In nature, emf is generated when magnetic field fluctuations occur through a surface. For example, the shifting of the Earth's magnetic field during a geomagnetic storm induces currents in an electrical grid as the lines of the magnetic field are shifted about and cut across the conductors.
In a battery, the charge separation that gives rise to a potential difference (voltage) between the terminals is accomplished by chemical reactions at the electrodes that convert chemical potential energy into electromagnetic potential energy. A voltaic cell can be thought of as having a "charge pump" of atomic dimensions at each electrode, that is:
In an electrical generator, a time-varying magnetic field inside the generator creates an electric field via electromagnetic induction, which creates a potential difference between the generator terminals. Charge separation takes place within the generator because electrons flow away from one terminal toward the other, until, in the open-circuit case, an electric field is developed that makes further charge separation impossible. The emf is countered by the electrical voltage due to charge separation. If a load is attached, this voltage can drive a current. The general principle governing the emf in such electrical machines is Faraday's law of induction.
History
In 1801, Alessandro Volta introduced the term "force motrice électrique" to describe the active agent of a battery (which he had invented around 1798).
This is called the "electromotive force" in English.
Around 1830, Michael Faraday established that chemical reactions at each of two electrode–electrolyte interfaces provide the "seat of emf" for the voltaic cell. That is, these reactions drive the current and are not an endless source of energy as the earlier obsolete theory thought. In the open-circuit case, charge separation continues until the electrical field from the separated charges is sufficient to arrest the reactions. Years earlier, Alessandro Volta, who had measured a contact potential difference at the metal–metal (electrode–electrode) interface of his cells, held the incorrect opinion that contact alone (without taking into account a chemical reaction) was the origin of the emf.
Notation and units of measurement
Electromotive force is often denoted by ℰ.
In a device without internal resistance, if an electric charge Q passing through that device gains an energy W via work, the net emf for that device is the energy gained per unit charge: ℰ = W/Q. Like other measures of energy per charge, emf uses the SI unit volt, which is equivalent to a joule (SI unit of energy) per coulomb (SI unit of charge).
Electromotive force in electrostatic units is the statvolt (in the centimeter gram second system of units equal in amount to an erg per electrostatic unit of charge).
Formal definitions
Inside a source of emf (such as a battery) that is open-circuited, a charge separation occurs between the negative terminal N and the positive terminal P.
This leads to an electrostatic field that points from P to N, whereas the emf of the source must be able to drive current from N to P when connected to a circuit.
This led Max Abraham to introduce the concept of a nonelectrostatic field that exists only inside the source of emf.
In the open-circuit case, , while when the source is connected to a circuit the electric field inside the source changes but remains essentially the same.
In the open-circuit case, the conservative electrostatic field created by separation of charge exactly cancels the forces producing the emf.
Mathematically:
where is the conservative electrostatic field created by the charge separation associated with the emf, is an element of the path from terminal N to terminal P, '⋅' denotes the vector dot product, and is the electric scalar potential.
This emf is the work done on a unit charge by the source's nonelectrostatic field when the charge moves from N to P.
When the source is connected to a load, its emf is just
and no longer has a simple relation to the electric field inside it.
In the case of a closed path in the presence of a varying magnetic field, the integral of the electric field around the (stationary) closed loop may be nonzero.
Then, the "induced emf" (often called the "induced voltage") in the loop is:
where is the entire electric field, conservative and non-conservative, and the integral is around an arbitrary, but stationary, closed curve through which there is a time-varying magnetic flux , and is the vector potential.
The electrostatic field does not contribute to the net emf around a circuit because the electrostatic portion of the electric field is conservative (i.e., the work done against the field around a closed path is zero, see Kirchhoff's voltage law, which is valid, as long as the circuit elements remain at rest and radiation is ignored).
That is, the "induced emf" (like the emf of a battery connected to a load) is not a "voltage" in the sense of a difference in the electric scalar potential.
If the loop is a conductor that carries current I in the direction of integration around the loop, and the magnetic flux is due to that current, we have that ΦB = L I, where L is the self-inductance of the loop.
If in addition, the loop includes a coil that extends from point 1 to 2, such that the magnetic flux is largely localized to that region, it is customary to speak of that region as an inductor, and to consider that its emf is localized to that region.
Then, we can consider a different loop that consists of the coiled conductor from 1 to 2, and an imaginary line down the center of the coil from 2 back to 1.
The magnetic flux, and emf, in loop is essentially the same as that in loop :
For a good conductor, is negligible, so we have, to a good approximation,
where is the electric scalar potential along the centerline between points 1 and 2.
Thus, we can associate an effective "voltage drop" with an inductor (even though our basic understanding of induced emf is based on the vector potential rather than the scalar potential), and consider it as a load element in Kirchhoff's voltage law,
where now the induced emf is not considered to be a source emf.
This definition can be extended to arbitrary sources of emf and paths moving with velocity v through the electric field E and magnetic field B:
which is a conceptual equation mainly, because the determination of the "effective forces" is difficult.
The term ∮ (v × B) · dℓ is often called a "motional emf".
In (electrochemical) thermodynamics
When multiplied by an amount of charge dQ, the emf ℰ yields a thermodynamic work term ℰ dQ that is used in the formalism for the change in Gibbs energy when charge is passed in a battery:
where G is the Gibbs free energy, S is the entropy, V is the system volume, P is its pressure and T is its absolute temperature.
The combination is an example of a conjugate pair of variables. At constant pressure the above relationship produces a Maxwell relation that links the change in open cell voltage with temperature (a measurable quantity) to the change in entropy when charge is passed isothermally and isobarically. The latter is closely related to the reaction entropy of the electrochemical reaction that lends the battery its power. This Maxwell relation is:
If a mole of ions goes into solution (for example, in a Daniell cell, as discussed below) the charge through the external circuit is:
where n0 is the number of electrons/ion, F is the Faraday constant, and the minus sign indicates discharge of the cell. Assuming constant pressure and volume, the thermodynamic properties of the cell are related strictly to the behavior of its emf by:
where ΔH is the enthalpy of reaction. The quantities on the right are all directly measurable. Assuming constant temperature and pressure:
which is used in the derivation of the Nernst equation.
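To make the thermodynamic link concrete, the sketch below applies ΔG = −nFℰ together with the Nernst equation, ℰ = ℰ° − (RT/nF)·ln Q, in Python. The standard emf and the reaction quotient are assumed example values, not figures taken from the text.

```python
import math

# Electrochemical thermodynamics sketch (all cell-specific values assumed):
# Delta_G_standard = -n*F*emf_standard, and the Nernst equation
# emf = emf_standard - (R*T)/(n*F) * ln(Q).
R = 8.314            # J/(mol*K)
F = 96485.0          # C/mol, Faraday constant
T = 298.15           # K
n = 2                # electrons transferred per formula unit
emf_standard = 1.10  # V, assumed standard cell emf
Q = 0.01             # assumed reaction quotient

delta_G_standard = -n * F * emf_standard
emf = emf_standard - (R * T / (n * F)) * math.log(Q)
print(f"standard Gibbs energy change: {delta_G_standard/1000:.1f} kJ/mol")
print(f"cell emf at Q = {Q}: {emf:.3f} V")
```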
Distinction with potential difference
Although an electrical potential difference (voltage) is sometimes called an emf, they are formally distinct concepts:
Potential difference is a more general term that includes emf.
Emf is the cause of a potential difference.
In a circuit of a voltage source and a resistor, the sum of the source's applied voltage plus the ohmic voltage drop through the resistor is zero. But the resistor provides no emf, only the voltage source does:
For a circuit using a battery source, the emf is due solely to the chemical forces in the battery.
For a circuit using an electric generator, the emf is due solely to a time-varying magnetic forces within the generator.
Both a 1 volt emf and a 1 volt potential difference correspond to 1 joule per coulomb of charge.
In the case of an open circuit, the electric charge that has been separated by the mechanism generating the emf creates an electric field opposing the separation mechanism. For example, the chemical reaction in a voltaic cell stops when the opposing electric field at each electrode is strong enough to arrest the reactions. A larger opposing field can reverse the reactions in what are called reversible cells.
The electric charge that has been separated creates an electric potential difference that can (in many cases) be measured with a voltmeter between the terminals of the device, when not connected to a load. The magnitude of the emf for the battery (or other source) is the value of this open-circuit voltage.
When the battery is charging or discharging, the emf itself cannot be measured directly using the external voltage because some voltage is lost inside the source.
It can, however, be inferred from a measurement of the current I and potential difference V, provided that the internal resistance r has already been measured: ℰ = V + I r (with I taken as the discharge current).
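A minimal numeric sketch of that inference, with assumed values for the measured quantities:

```python
# Inferring a battery's emf while it is discharging: with internal resistance
# r, terminal voltage V and discharge current I, emf = V + I*r.
# The numbers below are assumptions for illustration only.
r = 0.5      # ohms, previously measured internal resistance
V = 11.5     # volts, terminal potential difference under load
I = 1.2      # amperes, discharge current

emf = V + I * r
print(f"inferred emf = {emf:.2f} V")   # -> 12.10 V
```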
"Potential difference" is not the same as "induced emf" (often called "induced voltage").
The potential difference (difference in the electric scalar potential) between two points A and B is independent of the path we take from A to B.
If a voltmeter always measured the potential difference between A and B, then the position of the voltmeter would make no difference.
However, it is quite possible for the measurement by a voltmeter between points A and B to depend on the position of the voltmeter, if a time-dependent magnetic field is present.
For example, consider an infinitely long solenoid using an AC current to generate a varying flux in the interior of the solenoid.
Outside the solenoid we have two resistors connected in a ring around the solenoid.
The resistor on the left is 100 Ω and the one on the right is 200 Ω; they are connected at the top and bottom at points A and B.
The induced voltage, by Faraday's law, is V = dΦB/dt, so the current is I = V/(100 Ω + 200 Ω). The voltage across the 100 Ω resistor is therefore V/3 and the voltage across the 200 Ω resistor is 2V/3, yet the two resistors are connected on both ends; the value measured with the voltmeter to the left of the solenoid is not the same as the value measured with the voltmeter to the right of the solenoid.
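The following Python sketch puts numbers on this example; the induced loop voltage is an assumed value, and the point is only that the two voltmeter readings differ even though both meters connect to the same points A and B.

```python
# Two-resistor ring around a solenoid with changing flux. With induced
# voltage V around the loop and series resistors R1 and R2, the circulating
# current is I = V/(R1 + R2). A voltmeter routed to the left of the solenoid
# reads the drop across R1, while one routed to the right reads the drop
# across R2. The induced voltage is an assumed example value.
V_INDUCED = 3.0            # volts, assumed emf around the ring
R1, R2 = 100.0, 200.0      # ohms

I = V_INDUCED / (R1 + R2)
print(f"current in the ring: {I * 1000:.1f} mA")              # 10.0 mA
print(f"voltmeter to the left  (across R1): {I * R1:.2f} V")  # 1.00 V
print(f"voltmeter to the right (across R2): {I * R2:.2f} V")  # 2.00 V
```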
Generation
Chemical sources
The question of how batteries (galvanic cells) generate an emf occupied scientists for most of the 19th century. The "seat of the electromotive force" was eventually determined in 1889 by Walther Nernst to be primarily at the interfaces between the electrodes and the electrolyte.
Atoms in molecules or solids are held together by chemical bonding, which stabilizes the molecule or solid (i.e. reduces its energy). When molecules or solids of relatively high energy are brought together, a spontaneous chemical reaction can occur that rearranges the bonding and reduces the (free) energy of the system. In batteries, coupled half-reactions, often involving metals and their ions, occur in tandem, with a gain of electrons (termed "reduction") by one conductive electrode and loss of electrons (termed "oxidation") by another (reduction-oxidation or redox reactions). The spontaneous overall reaction can only occur if electrons move through an external wire between the electrodes. The electrical energy given off is the free energy lost by the chemical reaction system.
As an example, a Daniell cell consists of a zinc anode (an electron collector) that is oxidized as it dissolves into a zinc sulfate solution. The dissolving zinc leaves behind its electrons in the electrode according to the oxidation reaction (s = solid electrode; aq = aqueous solution):
The zinc sulfate is the electrolyte in that half cell. It is a solution which contains zinc cations , and sulfate anions with charges that balance to zero.
In the other half cell, the copper cations in a copper sulfate electrolyte move to the copper cathode to which they attach themselves as they adopt electrons from the copper electrode by the reduction reaction:
which leaves a deficit of electrons on the copper cathode. The difference of excess electrons on the anode and deficit of electrons on the cathode creates an electrical potential between the two electrodes. (A detailed discussion of the microscopic process of electron transfer between an electrode and the ions in an electrolyte may be found in Conway.) The electrical energy released by this reaction (213 kJ per 65.4 g of zinc) can be attributed mostly to the 207 kJ weaker bonding (smaller magnitude of the cohesive energy) of zinc, which has filled 3d- and 4s-orbitals, compared to copper, which has an unfilled orbital available for bonding.
If the cathode and anode are connected by an external conductor, electrons pass through that external circuit (light bulb in figure), while ions pass through the salt bridge to maintain charge balance until the anode and cathode reach electrical equilibrium of zero volts as chemical equilibrium is reached in the cell. In the process the zinc anode is dissolved while the copper electrode is plated with copper. The salt bridge has to close the electrical circuit while preventing the copper ions from moving to the zinc electrode and being reduced there without generating an external current. It is not made of salt but of material able to wick cations and anions (a dissociated salt) into the solutions. The flow of positively charged cations along the bridge is equivalent to the same number of negative charges flowing in the opposite direction.
If the light bulb is removed (open circuit) the emf between the electrodes is opposed by the electric field due to the charge separation, and the reactions stop.
For this particular cell chemistry, at 298 K (room temperature), the emf ℰ = 1.0934 V, with a temperature coefficient dℰ/dT = −4.53×10−4 V/K.
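Applying the Maxwell relation mentioned above to these quoted figures (and taking n = 2 electrons per zinc atom) gives the reaction entropy and enthalpy; the short Python sketch below only illustrates the arithmetic and makes no claim beyond the stated relations.

```python
# Using the quoted figures for this cell chemistry:
# Delta_S = n*F*d(emf)/dT and Delta_H = -n*F*emf + T*Delta_S.
F = 96485.0           # C/mol, Faraday constant
T = 298.0             # K
n = 2                 # electrons per zinc atom oxidized
emf = 1.0934          # V, quoted cell emf
demf_dT = -4.53e-4    # V/K, quoted temperature coefficient

delta_S = n * F * demf_dT
delta_G = -n * F * emf
delta_H = delta_G + T * delta_S
print(f"reaction entropy:  {delta_S:.1f} J/(mol*K)")
print(f"Gibbs energy:      {delta_G/1000:.1f} kJ/mol")
print(f"reaction enthalpy: {delta_H/1000:.1f} kJ/mol")
```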
Voltaic cells
Volta developed the voltaic cell about 1792, and presented his work March 20, 1800. Volta correctly identified the role of dissimilar electrodes in producing the voltage, but incorrectly dismissed any role for the electrolyte. Volta ordered the metals in a 'tension series', "that is to say in an order such that any one in the list becomes positive when in contact with any one that succeeds, but negative by contact with any one that precedes it." A typical symbolic convention in a schematic of this circuit ( –||– ) would have a long electrode 1 and a short electrode 2, to indicate that electrode 1 dominates. Volta's law about opposing electrode emfs implies that, given ten electrodes (for example, zinc and nine other materials), 45 unique combinations of voltaic cells (10 × 9/2) can be created.
Typical values
The electromotive force produced by primary (single-use) and secondary (rechargeable) cells is usually of the order of a few volts. The figures quoted below are nominal, because emf varies according to the size of the load and the state of exhaustion of the cell.
Other chemical sources
Other chemical sources include fuel cells.
Electromagnetic induction
Electromagnetic induction is the production of a circulating electric field by a time-dependent magnetic field. A time-dependent magnetic field can be produced either by motion of a magnet relative to a circuit, by motion of a circuit relative to another circuit (at least one of these must be carrying an electric current), or by changing the electric current in a fixed circuit. The effect on the circuit itself, of changing the electric current, is known as self-induction; the effect on another circuit is known as mutual induction.
For a given circuit, the electromagnetically induced emf is determined purely by the rate of change of the magnetic flux through the circuit according to Faraday's law of induction.
An emf is induced in a coil or conductor whenever there is change in the flux linkages. Depending on the way in which the changes are brought about, there are two types: When the conductor is moved in a stationary magnetic field to procure a change in the flux linkage, the emf is statically induced. The electromotive force generated by motion is often referred to as motional emf. When the change in flux linkage arises from a change in the magnetic field around the stationary conductor, the emf is dynamically induced. The electromotive force generated by a time-varying magnetic field is often referred to as transformer emf.
Contact potentials
When solids of two different materials are in contact, thermodynamic equilibrium requires that one of the solids assume a higher electrical potential than the other. This is called the contact potential. Dissimilar metals in contact produce what is known also as a contact electromotive force or Galvani potential. The magnitude of this potential difference is often expressed as a difference in Fermi levels in the two solids when they are at charge neutrality, where the Fermi level (a name for the chemical potential of an electron system) describes the energy necessary to remove an electron from the body to some common point (such as ground). If there is an energy advantage in taking an electron from one body to the other, such a transfer will occur. The transfer causes a charge separation, with one body gaining electrons and the other losing electrons. This charge transfer causes a potential difference between the bodies, which partly cancels the potential originating from the contact, and eventually equilibrium is reached. At thermodynamic equilibrium, the Fermi levels are equal (the electron removal energy is identical) and there is now a built-in electrostatic potential between the bodies.
The original difference in Fermi levels, before contact, is referred to as the emf.
The contact potential cannot drive steady current through a load attached to its terminals because that current would involve a charge transfer. No mechanism exists to continue such transfer and, hence, maintain a current, once equilibrium is attained.
One might inquire why the contact potential does not appear in Kirchhoff's law of voltages as one contribution to the sum of potential drops. The customary answer is that any circuit involves not only a particular diode or junction, but also all the contact potentials due to wiring and so forth around the entire circuit. The sum of all the contact potentials is zero, and so they may be ignored in Kirchhoff's law.
Solar cell
Operation of a solar cell can be understood from its equivalent circuit. Photons with energy greater than the bandgap of the semiconductor create mobile electron–hole pairs. Charge separation occurs because of a pre-existing electric field associated with the p-n junction. This electric field is created from a built-in potential, which arises from the contact potential between the two different materials in the junction. The charge separation between positive holes and negative electrons across the p–n diode yields a forward voltage, the photo voltage, between the illuminated diode terminals, which drives current through any attached load. Photo voltage is sometimes referred to as the photo emf, distinguishing between the effect and the cause.
Solar cell current–voltage relationship
Two internal current losses limit the total current available to the external circuit. The light-induced charge separation eventually creates a forward current through the cell's internal resistance in the direction opposite the light-induced current IL. In addition, the induced voltage tends to forward bias the junction, which at high enough voltages will cause a recombination current in the diode opposite the light-induced current.
When the output is short-circuited, the output voltage is zeroed, and so the voltage across the diode is smallest. Thus, short-circuiting results in the smallest losses and consequently the maximum output current, which for a high-quality solar cell is approximately equal to the light-induced current IL. Approximately this same current is obtained for forward voltages up to the point where the diode conduction becomes significant.
The current delivered by the illuminated diode to the external circuit can be simplified (based on certain assumptions) to:

I = IL − I0 (e^(V/(m·VT)) − 1)

where I0 is the reverse saturation current. Two parameters that depend on the solar cell construction and to some degree upon the voltage itself are the ideality factor m and the thermal voltage VT, which is about 26 millivolts at room temperature.
Solar cell photo emf
Solving the illuminated diode's above simplified current–voltage relationship for output voltage yields:

V = m·VT ln((IL − I)/I0 + 1)
which is plotted against in the figure.
The solar cell's photo emf has the same value as the open-circuit voltage VOC, which is determined by zeroing the output current I:

VOC = m·VT ln(IL/I0 + 1)

It has a logarithmic dependence on the light-induced current IL and is where the junction's forward bias voltage is just enough that the forward current completely balances the light-induced current. For silicon junctions, it is typically not much more than 0.5 volts, while for high-quality silicon panels it can exceed 0.7 volts in direct sunlight.
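The sketch below evaluates these two simplified relations in Python with assumed parameter values (light-induced current, saturation current, ideality factor, thermal voltage); it illustrates the equations rather than modelling any specific cell.

```python
import math

# Simplified illuminated-diode relations:
# I = I_L - I_0*(exp(V/(m*V_T)) - 1)  and  V_oc = m*V_T*ln(I_L/I_0 + 1).
# Parameter values are illustrative assumptions.
I_L = 3.0        # A, light-induced current
I_0 = 1.0e-6     # A, reverse saturation current
m = 1.5          # ideality factor
V_T = 0.026      # V, thermal voltage at room temperature

def output_current(V):
    return I_L - I_0 * (math.exp(V / (m * V_T)) - 1.0)

V_oc = m * V_T * math.log(I_L / I_0 + 1.0)
print(f"open-circuit voltage: {V_oc:.3f} V")           # about 0.58 V here
print(f"current at V = 0.3 V: {output_current(0.3):.3f} A")
```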
When driving a resistive load, the output voltage can be determined using Ohm's law and will lie between the short-circuit value of zero volts and the open-circuit voltage . When that resistance is small enough such that (the near-vertical part of the two illustrated curves), the solar cell acts more like a current generator rather than a voltage generator, since the current drawn is nearly fixed over a range of output voltages. This contrasts with batteries, which act more like voltage generators.
Other sources that generate emf
A transformer coupling two circuits may be considered a source of emf for one of the circuits, just as if it were caused by an electrical generator; this is the origin of the term "transformer emf".
For converting sound waves into voltage signals:
a microphone generates an emf from a moving diaphragm.
a magnetic pickup generates an emf from a varying magnetic field produced by an instrument.
a piezoelectric sensor generates an emf from strain on a piezoelectric crystal.
Devices that use temperature to produce emfs include thermocouples and thermopiles.
Any electrical transducer which converts a physical energy into electrical energy.
| Physical sciences | Electrodynamics | null |
65905 | https://en.wikipedia.org/wiki/Ideal%20gas | Ideal gas | An ideal gas is a theoretical gas composed of many randomly moving point particles that are not subject to interparticle interactions. The ideal gas concept is useful because it obeys the ideal gas law, a simplified equation of state, and is amenable to analysis under statistical mechanics. The requirement of zero interaction can often be relaxed if, for example, the interaction is perfectly elastic or regarded as point-like collisions.
Under various conditions of temperature and pressure, many real gases behave qualitatively like an ideal gas where the gas molecules (or atoms for monatomic gas) play the role of the ideal particles. Many gases such as nitrogen, oxygen, hydrogen, noble gases, some heavier gases like carbon dioxide and mixtures such as air, can be treated as ideal gases within reasonable tolerances over a considerable parameter range around standard temperature and pressure. Generally, a gas behaves more like an ideal gas at higher temperature and lower pressure, as the potential energy due to intermolecular forces becomes less significant compared with the particles' kinetic energy, and the size of the molecules becomes less significant compared to the empty space between them. One mole of an ideal gas has a volume of about 22.71 litres (the exact value follows from the 2019 revision of the SI) at standard temperature and pressure (a temperature of 273.15 K and an absolute pressure of exactly 10^5 Pa).
The ideal gas model tends to fail at lower temperatures or higher pressures, when intermolecular forces and molecular size becomes important. It also fails for most heavy gases, such as many refrigerants, and for gases with strong intermolecular forces, notably water vapor. At high pressures, the volume of a real gas is often considerably larger than that of an ideal gas. At low temperatures, the pressure of a real gas is often considerably less than that of an ideal gas. At some point of low temperature and high pressure, real gases undergo a phase transition, such as to a liquid or a solid. The model of an ideal gas, however, does not describe or allow phase transitions. These must be modeled by more complex equations of state. The deviation from the ideal gas behavior can be described by a dimensionless quantity, the compressibility factor, .
The ideal gas model has been explored in both the Newtonian dynamics (as in "kinetic theory") and in quantum mechanics (as a "gas in a box"). The ideal gas model has also been used to model the behavior of electrons in a metal (in the Drude model and the free electron model), and it is one of the most important models in statistical mechanics.
If the pressure of an ideal gas is reduced in a throttling process the temperature of the gas does not change. (If the pressure of a real gas is reduced in a throttling process, its temperature either falls or rises, depending on whether its Joule–Thomson coefficient is positive or negative.)
Types of ideal gas
There are three basic classes of ideal gas:
the classical or Maxwell–Boltzmann ideal gas,
the ideal quantum Bose gas, composed of bosons, and
the ideal quantum Fermi gas, composed of fermions.
The classical ideal gas can be separated into two types: The classical thermodynamic ideal gas and the ideal quantum Boltzmann gas. Both are essentially the same, except that the classical thermodynamic ideal gas is based on classical statistical mechanics, and certain thermodynamic parameters such as the entropy are only specified to within an undetermined additive constant. The ideal quantum Boltzmann gas overcomes this limitation by taking the limit of the quantum Bose gas and quantum Fermi gas in the limit of high temperature to specify these additive constants. The behavior of a quantum Boltzmann gas is the same as that of a classical ideal gas except for the specification of these constants. The results of the quantum Boltzmann gas are used in a number of cases including the Sackur–Tetrode equation for the entropy of an ideal gas and the Saha ionization equation for a weakly ionized plasma.
Classical thermodynamic ideal gas
The classical thermodynamic properties of an ideal gas can be described by two equations of state:
Ideal gas law
The ideal gas law is the equation of state for an ideal gas, given by:
PV = nRT
where
P is the pressure
V is the volume
n is the amount of substance of the gas (in moles)
T is the absolute temperature
R is the gas constant, which must be expressed in units consistent with those chosen for pressure, volume and temperature. For example, in SI units R = 8.3145 J⋅K⁻¹⋅mol⁻¹ when pressure is expressed in pascals, volume in cubic meters, and absolute temperature in kelvin.
The ideal gas law is an extension of experimentally discovered gas laws. It can also be derived from microscopic considerations.
Real fluids at low density and high temperature approximate the behavior of a classical ideal gas. However, at lower temperatures or a higher density, a real fluid deviates strongly from the behavior of an ideal gas, particularly as it condenses from a gas into a liquid or as it deposits from a gas into a solid. This deviation is expressed as a compressibility factor.
This equation is derived from
Boyle's law: PV = C₁ (at constant T and n);
Charles's law: V/T = C₂ (at constant P and n);
Avogadro's law: V/n = C₃ (at constant P and T).
After combining the three laws we get V ∝ nT/P.
That is:
PV = nRT, where the constant of proportionality R is the gas constant.
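A minimal numerical illustration of the combined law, with symbols as defined above (the vessel volume is chosen so the result lands back at standard pressure):

```python
# Pressure of 1 mol of an ideal gas in a 22.711 L vessel at 273.15 K: P = n*R*T/V.
R = 8.314      # J K^-1 mol^-1
n = 1.0        # mol
T = 273.15     # K
V = 0.022711   # m^3

P = n * R * T / V
print(f"P = {P:.0f} Pa")  # ~1.0e5 Pa, recovering standard pressure
```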
Internal energy
The other equation of state of an ideal gas must express Joule's second law, that the internal energy of a fixed mass of ideal gas is a function only of its temperature, with U = U(n, T). For the present purposes it is convenient to postulate an exemplary version of this law by writing:
U = ĉ_V nRT
where
U is the internal energy
ĉ_V is the dimensionless specific heat capacity at constant volume, approximately 3/2 for a monatomic gas, 5/2 for a diatomic gas, and 3 for non-linear molecules if we treat translations and rotations classically and ignore quantum vibrational contribution and electronic excitation. These formulas arise from application of the classical equipartition theorem to the translational and rotational degrees of freedom.
That U for an ideal gas depends only on temperature is a consequence of the ideal gas law, although in the general case ĉ_V depends on temperature and an integral is needed to compute U.
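For example, under the classical equipartition values quoted above, the internal energy of one mole at room temperature can be evaluated directly. A sketch, assuming temperature-independent ĉ_V:

```python
# U = c_V * n * R * T with the dimensionless equipartition heat capacities.
R = 8.314      # J K^-1 mol^-1
n = 1.0        # mol
T = 298.15     # K

for label, c_v in [("monatomic", 3/2), ("diatomic", 5/2), ("non-linear", 3.0)]:
    U = c_v * n * R * T
    print(f"{label:10s}: U = {U/1000:.2f} kJ")
```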
Microscopic model
In order to switch from macroscopic quantities (left hand side of the following equation) to microscopic ones (right hand side), we use
PV = nRT = N k_B T
where
N is the number of gas particles
k_B is the Boltzmann constant (1.380649×10⁻²³ J⋅K⁻¹).
The probability distribution of particles by velocity or energy is given by the Maxwell speed distribution.
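As an illustration of the microscopic picture, the mean speed predicted by the Maxwell speed distribution, ⟨v⟩ = √(8 k_B T / (π m)), can be evaluated for nitrogen at room temperature. A sketch; the molar mass of N₂ is an assumed input:

```python
import math

k_B = 1.380649e-23             # Boltzmann constant, J/K
T = 298.15                     # K
m = 28.0e-3 / 6.02214076e23    # mass of one N2 molecule, kg (assumed molar mass 28 g/mol)

v_mean = math.sqrt(8 * k_B * T / (math.pi * m))
print(f"mean speed of N2 at {T} K: {v_mean:.0f} m/s")  # roughly 475 m/s
```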
The ideal gas model depends on the following assumptions:
The molecules of the gas are indistinguishable, small, hard spheres
All collisions are elastic and all motion is frictionless (no energy loss in motion or collision)
Newton's laws apply
The average distance between molecules is much larger than the size of the molecules
The molecules are constantly moving in random directions with a distribution of speeds
There are no attractive or repulsive forces between the molecules apart from those that determine their point-like collisions
The only forces between the gas molecules and the surroundings are those that determine the point-like collisions of the molecules with the walls
In the simplest case, there are no long-range forces between the molecules of the gas and the surroundings.
The assumption of spherical particles is necessary so that there are no rotational modes allowed, unlike in a diatomic gas. The following three assumptions are closely related: molecules are hard, collisions are elastic, and there are no inter-molecular forces. The assumption that the space between particles is much larger than the particles themselves is of paramount importance, and explains why the ideal gas approximation fails at high pressures.
Heat capacity
The dimensionless heat capacity at constant volume is generally defined by
ĉ_V = (1/(nR)) T (∂S/∂T)_V = (1/(nR)) (∂U/∂T)_V
where S is the entropy. This quantity is generally a function of temperature due to intermolecular and intramolecular forces, but for moderate temperatures it is approximately constant. Specifically, the Equipartition Theorem predicts that the constant for a monatomic gas is ĉ_V = 3/2 while for a diatomic gas it is ĉ_V = 5/2 if vibrations are neglected (which is often an excellent approximation). Since the heat capacity depends on the atomic or molecular nature of the gas, macroscopic measurements on heat capacity provide useful information on the microscopic structure of the molecules.
The dimensionless heat capacity at constant pressure of an ideal gas is:
ĉ_P = (1/(nR)) T (∂S/∂T)_P = (1/(nR)) (∂H/∂T)_P = ĉ_V + 1
where H = U + PV is the enthalpy of the gas.
Sometimes, a distinction is made between an ideal gas, where ĉ_V and ĉ_P could vary with temperature, and a perfect gas, for which this is not the case.
The ratio of the constant-pressure to the constant-volume heat capacity is the adiabatic index
γ = ĉ_P / ĉ_V
For air, which is a mixture of gases that are mainly diatomic (nitrogen and oxygen), this ratio is often assumed to be 7/5, the value predicted by the classical Equipartition Theorem for diatomic gases.
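The relation ĉ_P = ĉ_V + 1 and the resulting adiabatic index can be tabulated in a couple of lines. A sketch using the equipartition values quoted above:

```python
# Dimensionless ideal-gas heat capacities and adiabatic index gamma = c_P / c_V.
for label, c_v in [("monatomic", 3/2), ("diatomic", 5/2)]:
    c_p = c_v + 1               # ideal-gas relation between the two heat capacities
    gamma = c_p / c_v
    print(f"{label}: c_V = {c_v}, c_P = {c_p}, gamma = {gamma:.3f}")
# diatomic gamma = 1.4 = 7/5, the value commonly assumed for air
```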
Entropy
Using the results of thermodynamics only, we can go a long way in determining the expression for the entropy of an ideal gas. This is an important step since, according to the theory of thermodynamic potentials, if we can express the entropy as a function of U (U is a thermodynamic potential), the volume V and the number of particles N, then we will have a complete statement of the thermodynamic behavior of the ideal gas. We will be able to derive both the ideal gas law and the expression for internal energy from it.
Since the entropy is an exact differential, using the chain rule, the change in entropy when going from a reference state 0 to some other state with entropy S may be written as ΔS, where:
ΔS = ∫ dS = ∫ (∂S/∂T)_V dT + ∫ (∂S/∂V)_T dV
where the reference variables may be functions of the number of particles N. Using the definition of the heat capacity at constant volume for the first differential and the appropriate Maxwell relation for the second we have:
ΔS = ∫ (C_V/T) dT + ∫ (∂P/∂T)_V dV
Expressing C_V in terms of ĉ_V as developed in the above section, differentiating the ideal gas equation of state, and integrating yields:
ΔS = ĉ_V N k_B ln(T/T₀) + N k_B ln(V/V₀)
which implies that the entropy may be expressed as:
S = N k_B ln(V T^(ĉ_V) / f(N))
where all constants have been incorporated into the logarithm as f(N), which is some function of the particle number N having the same dimensions as V T^(ĉ_V) in order that the argument of the logarithm be dimensionless. We now impose the constraint that the entropy be extensive. This will mean that when the extensive parameters (V and N) are multiplied by a constant, the entropy will be multiplied by the same constant. Mathematically:
S(T, aV, aN) = a S(T, V, N)
From this we find an equation for the function f(N):
a f(N) = f(aN)
Differentiating this with respect to a, setting a equal to 1, and then solving the differential equation yields f(N):
f(N) = Φ N
where Φ may vary for different gases, but will be independent of the thermodynamic state of the gas. It will have the dimensions of V T^(ĉ_V) / N. Substituting into the equation for the entropy:
S = N k_B ln(V T^(ĉ_V) / (N Φ))
and using the expression for the internal energy of an ideal gas (U = ĉ_V N k_B T), the entropy may be written:
S = N k_B ln[(V/N) · (U / (ĉ_V k_B N))^(ĉ_V) · (1/Φ)]
Since this is an expression for entropy in terms of U, V, and N, it is a fundamental equation from which all other properties of the ideal gas may be derived.
This is about as far as we can go using thermodynamics alone. Note that the above equation is flawed – as the temperature approaches zero, the entropy approaches negative infinity, in contradiction to the third law of thermodynamics. In the above "ideal" development, there is a critical point, not at absolute zero, at which the argument of the logarithm becomes unity, and the entropy becomes zero. This is unphysical. The above equation is a good approximation only when the argument of the logarithm is much larger than unity – the concept of an ideal gas breaks down at low values of V/N. Nevertheless, there will be a "best" value of the constant Φ in the sense that the predicted entropy is as close as possible to the actual entropy, given the flawed assumption of ideality. A quantum-mechanical derivation of this constant is developed in the derivation of the Sackur–Tetrode equation which expresses the entropy of a monatomic (ĉ_V = 3/2) ideal gas. In the Sackur–Tetrode theory the constant depends only upon the mass of the gas particle. The Sackur–Tetrode equation also suffers from a divergent entropy at absolute zero, but is a good approximation for the entropy of a monatomic ideal gas for high enough temperatures.
An alternative way of expressing the change in entropy, in terms of pressure and volume rather than temperature and volume:
ΔS = ĉ_V N k_B ln(P/P₀) + ĉ_P N k_B ln(V/V₀)
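For instance, the entropy change of one mole of a monatomic ideal gas that is simultaneously heated and expanded can be evaluated from ΔS = ĉ_V nR ln(T/T₀) + nR ln(V/V₀). A sketch; the initial and final states are made up for illustration:

```python
import math

# Entropy change of 1 mol of monatomic ideal gas heated 300 K -> 600 K
# while doubling its volume: dS = c_V*n*R*ln(T/T0) + n*R*ln(V/V0).
R, n, c_v = 8.314, 1.0, 3/2
T0, T = 300.0, 600.0
V0, V = 1.0, 2.0   # only the ratio matters

dS = c_v * n * R * math.log(T / T0) + n * R * math.log(V / V0)
print(f"dS = {dS:.2f} J/K")  # ~14.4 J/K
```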
Thermodynamic potentials
Expressing the entropy as a function of T, V, and N:
S / (N k_B) = ln(V T^(ĉ_V) / (N Φ))
The chemical potential of the ideal gas is calculated from the corresponding equation of state (see thermodynamic potential):
μ = (∂G/∂N)_(T,P)
where G is the Gibbs free energy and is equal to U + PV − TS, so that:
μ(T, V, N) = k_B T (ĉ_P − ln(V T^(ĉ_V) / (N Φ)))
The chemical potential is usually referenced to the potential at some standard pressure P⁰ so that, with μ⁰(T) = μ(T, P⁰):
μ(T, P) = μ⁰(T) + k_B T ln(P/P⁰)
For a mixture (j = 1, 2, ...) of ideal gases, each at partial pressure P_j, it can be shown that the chemical potential μ_j will be given by the above expression with the pressure P replaced by P_j.
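As an illustration, the per-particle chemical potential shift of each component of air relative to the standard pressure follows directly from the partial-pressure form above. A sketch; μ⁰ is left out since it depends on the gas, and the composition values are assumed:

```python
import math

# mu_j - mu0_j = k_B*T*ln(P_j / P0) for each component of an ideal gas mixture.
k_B = 1.380649e-23   # J/K
T = 298.15           # K
P0 = 1.0e5           # standard pressure, Pa
partial_pressures = {"N2": 0.78e5, "O2": 0.21e5, "Ar": 0.01e5}  # assumed composition

for gas, Pj in partial_pressures.items():
    shift = k_B * T * math.log(Pj / P0)   # mu_j - mu0_j, in joules per particle
    print(f"{gas}: mu - mu0 = {shift:.3e} J")
```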
The thermodynamic potentials for an ideal gas can now be written as functions of T, V, and N as:
{|
|-
| U(T, N)
|
| = ĉ_V N k_B T
|-
| A(T, V, N)
| = U − TS
| = N k_B T (ĉ_V − ln(V T^(ĉ_V) / (N Φ)))
|-
| H(T, N)
| = U + PV
| = ĉ_P N k_B T
|-
| G(T, P, N)
| = U + PV − TS
| = N μ
|}
where, as before,
ĉ_P = ĉ_V + 1.
The most informative way of writing the potentials is in terms of their natural variables, since each of these equations can be used to derive all of the other thermodynamic variables of the system. In terms of their natural variables, the thermodynamic potentials of a single-species ideal gas are:
U(S, V, N) = ĉ_V N k_B ((N Φ / V) e^(S/(N k_B)))^(1/ĉ_V)
A(T, V, N) = N k_B T (ĉ_V − ln(V T^(ĉ_V) / (N Φ)))
H(S, P, N) = ĉ_P N k_B ((P Φ / k_B) e^(S/(N k_B)))^(1/ĉ_P)
G(T, P, N) = N k_B T (ĉ_P − ln(k_B T^(ĉ_P) / (P Φ)))
In statistical mechanics, the relationship between the Helmholtz free energy and the partition function is fundamental, and is used to calculate the thermodynamic properties of matter; see configuration integral for more details.
Speed of sound
The speed of sound in an ideal gas is given by the Newton–Laplace formula:
c_sound = √(K_s / ρ)
where the isentropic bulk modulus K_s = ρ (∂P/∂ρ)_s.
For an isentropic process of an ideal gas, P V^γ = constant, and therefore
c_sound = √(γ P / ρ) = √(γ R T / M)
Here,
γ is the adiabatic index (ĉ_P / ĉ_V)
s is the entropy per particle of the gas.
ρ is the mass density of the gas.
P is the pressure of the gas.
R is the universal gas constant
T is the temperature
M is the molar mass of the gas.
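Putting the formula to work, the speed of sound in dry air at room temperature follows from c = √(γRT/M). A sketch; γ = 7/5 and M ≈ 0.0289 kg/mol are the usual assumed values for air:

```python
import math

# Speed of sound in an ideal gas: c = sqrt(gamma * R * T / M).
gamma = 7 / 5        # adiabatic index for a diatomic gas
R = 8.314            # J K^-1 mol^-1
T = 293.15           # K
M = 0.0289           # molar mass of air, kg/mol (assumed)

c = math.sqrt(gamma * R * T / M)
print(f"speed of sound: {c:.0f} m/s")  # ~343 m/s
```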
Table of ideal gas equations
Ideal quantum gases
In the above-mentioned Sackur–Tetrode equation, the best choice of the entropy constant was found to be proportional to the quantum thermal wavelength of a particle, and the point at which the argument of the logarithm becomes unity is roughly equal to the point at which the average distance between particles becomes equal to the thermal wavelength. In fact, quantum theory itself predicts the same thing. Any gas behaves as an ideal gas at high enough temperature and low enough density, but at the point where the Sackur–Tetrode equation begins to break down, the gas will begin to behave as a quantum gas, composed of either bosons or fermions. (See the gas in a box article for a derivation of the ideal quantum gases, including the ideal Boltzmann gas.)
Gases tend to behave as an ideal gas over a wider range of pressures when the temperature reaches the Boyle temperature.
Ideal Boltzmann gas
The ideal Boltzmann gas yields the same results as the classical thermodynamic gas, but makes the following identification for the undetermined constant Φ:
Φ = T^(3/2) Λ³ / g
where Λ is the thermal de Broglie wavelength of the gas and g is the degeneracy of states.
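The thermal de Broglie wavelength itself, Λ = h / √(2π m k_B T), is easy to evaluate and makes the classical/quantum crossover concrete. A sketch for helium; the temperatures are chosen only for illustration:

```python
import math

# Thermal de Broglie wavelength: Lambda = h / sqrt(2*pi*m*k_B*T).
h = 6.62607015e-34            # Planck constant, J s
k_B = 1.380649e-23            # Boltzmann constant, J/K
m = 4.0e-3 / 6.02214076e23    # mass of a helium atom, kg (assumed molar mass 4 g/mol)

for T in (300.0, 4.0, 0.1):
    lam = h / math.sqrt(2 * math.pi * m * k_B * T)
    print(f"T = {T:6.1f} K: Lambda = {lam:.3e} m")
# the gas behaves classically while Lambda is much smaller than the interparticle spacing
```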
Ideal Bose and Fermi gases
An ideal gas of bosons (e.g. a photon gas) will be governed by Bose–Einstein statistics and the distribution of energy will be in the form of a Bose–Einstein distribution. An ideal gas of fermions will be governed by Fermi–Dirac statistics and the distribution of energy will be in the form of a Fermi–Dirac distribution.