Dataset fields: id (string, 2–8 characters), url (string, 31–117 characters), title (string, 1–71 characters), text (string, 153–118k characters), topic (string, 4 classes), section (string, 4–49 characters), sublist (string, 9 classes).
6432
https://en.wikipedia.org/wiki/Caelum
Caelum
Caelum is a faint constellation in the southern sky, introduced in the 1750s by Nicolas Louis de Lacaille and counted among the 88 modern constellations. Its name means "chisel" in Latin, and it was formerly known as Caelum Sculptorium ("Engraver's Chisel"); it is a rare word, unrelated to the far more common Latin caelum, meaning "sky", "heaven", or "atmosphere". It is the eighth-smallest constellation, and subtends a solid angle of around 0.038 steradians, just less than that of Corona Australis. Due to its small size and location away from the plane of the Milky Way, Caelum is a rather barren constellation, with few objects of interest. The constellation's brightest star, Alpha Caeli, is only of magnitude 4.45, and only one other star, (Gamma) γ1 Caeli, is brighter than magnitude 5. Other notable objects in Caelum are RR Caeli, a binary star with one known planet approximately away; X Caeli, a Delta Scuti variable that forms an optical double with γ1 Caeli; and HE0450-2958, a Seyfert galaxy that at first appeared as just a jet, with no host galaxy visible. History Caelum was introduced as one of fourteen southern constellations in the 18th century by Nicolas Louis de Lacaille, a celebrated French astronomer of the Age of Enlightenment. It retains its name Burin among French speakers, latinized in his catalogue of 1763 as Caelum Sculptoris ("Engraver's Chisel"). Francis Baily shortened this name to Caelum, as suggested by John Herschel. In Lacaille's original chart, it was shown as a pair of engraver's tools: a standard burin and a more specific shape-forming échoppe tied by a ribbon, but it came to be depicted as a simple chisel. Johann Elert Bode stated the name as plural with a singular possessor, Caela Scalptoris – in German (die) Grabstichel ("the Engraver's Chisels") – but this did not stick. Characteristics Caelum is bordered by Dorado and Pictor to the south, Horologium and Eridanus to the east, Lepus to the north, and Columba to the west. Covering only 125 square degrees, it ranks 81st of the 88 modern constellations in size. Its main asterism consists of four stars, and twenty stars in total are brighter than magnitude 6.5. The constellation's boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are a 12-sided polygon. In the equatorial coordinate system, the right ascension coordinates of these borders lie between and and declinations of to . The International Astronomical Union (IAU) adopted the three-letter abbreviation "Cae" for the constellation in 1922. Its main stars are visible, in favourable conditions and with a clear southern horizon, for part of the year as far north as about the 41st parallel. These stars avoid being engulfed by daylight for some of every day (when above the horizon) to viewers in mid- and well-inhabited higher latitudes of the Southern Hemisphere. This is because Caelum, like Taurus, Eridanus and Orion to its north, culminates at midnight in December, at the height of the southern summer. In the southern winter (around June), when it culminates near midday, the constellation can still be observed well clear of the horizon as it rises before dawn or sets after dusk.
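As a quick check, the two area figures quoted for Caelum (about 0.038 steradians and 125 square degrees) are consistent: one square degree is (π/180)² steradians. A minimal Python sketch of the arithmetic, using only the numbers given above:

    import math

    # One square degree expressed in steradians is (pi / 180)^2.
    SQ_DEG_TO_SR = (math.pi / 180.0) ** 2

    area_sq_deg = 125                     # Caelum's area as quoted in the text
    area_sr = area_sq_deg * SQ_DEG_TO_SR
    print(round(area_sr, 3))              # 0.038, matching the steradian figure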
In South Africa, Argentina, their subtropical neighbouring areas and some of Australia, the key stars may be traced before dawn in the east in high June; near the equator the stars lose their night-time visibility in May to June; and in the northern tropics and subtropics they compete poorly with the Sun from late February to mid-September, with March being unfavourable for post-sunset viewing due to the light of the Milky Way. Notable features Stars Caelum is a faint constellation: it has no star brighter than magnitude 4 and only two stars brighter than magnitude 5. Lacaille gave six stars Bayer designations, labeling them Alpha (α) to Zeta (ζ) in 1756, but omitted Epsilon (ε) and designated two adjacent stars as Gamma (γ). Bode extended the designations to Rho (ρ) for other stars, but most of these have fallen out of use. Caelum is too far south for any of its stars to bear Flamsteed designations. The brightest star, (Alpha) α Caeli, is a double star, containing an F-type main-sequence star of magnitude 4.45 and a red dwarf of magnitude 12.5, from Earth. (Beta) β Caeli, another F-type star of magnitude 5.05, is further away, being located from Earth. Unlike α, β Caeli is a subgiant star, slightly evolved from the main sequence. (Delta) δ Caeli, also of magnitude 5.05, is a B-type subgiant and is much farther from Earth, at . (Gamma) γ1 Caeli is a double star with a red giant primary of magnitude 4.58 and a secondary of magnitude 8.1. The primary is from Earth. The two components are difficult to resolve with small amateur telescopes because of their difference in visual magnitude and their close separation. This star system forms an optical double with the unrelated X Caeli (previously named γ2 Caeli), a Delta Scuti variable located from Earth. These are a class of short-period (six hours at most) pulsating stars that have been used as standard candles and as subjects to study asteroseismology. The only other variable star in Caelum visible to the naked eye is RV Caeli, a pulsating red giant of spectral type M1III, which varies between magnitudes 6.44 and 6.56. Three other stars in Caelum are still occasionally referred to by their Bayer designations, although they are only on the edge of naked-eye visibility. (Nu) ν Caeli is another double star, containing a white giant of magnitude 6.07 and a star of magnitude 10.66, with unknown spectral type. The system is approximately away. (Lambda) λ Caeli, at magnitude 6.24, is much redder and farther away, being a red giant around from Earth. (Zeta) ζ Caeli is even fainter, being only of magnitude 6.36. This star, located away, is a K-type subgiant of spectral type K1. The other twelve naked-eye stars in Caelum are no longer referred to by Bode's Bayer designations, including RV Caeli. One of the nearest stars in Caelum is the eclipsing binary star RR Caeli, at a distance of . This star system consists of a dim red dwarf and a white dwarf. Despite its closeness to the Earth, the system's apparent magnitude is only 14.40 due to the faintness of its components, and thus it cannot be easily seen with amateur equipment. The system is a post-common-envelope binary and is losing angular momentum over time, which will eventually cause mass transfer from the red dwarf to the white dwarf. In approximately 9–20 billion years, this will cause the system to become a cataclysmic variable. In 2012, the system was found to contain a giant planet, and there is evidence for a second substellar body; two planets are now thought to orbit RR Caeli.
Another nearby star is LHS 1678, an astrometric binary located some 65 light-years away. The primary star is a red dwarf hosting three close-in exoplanets, all smaller than Earth; the secondary component is likely a brown dwarf. This system is notable as the closest star to Alpha Caeli, just 3.3 light-years distant. Due to its closeness, α Caeli would appear brighter in the sky of LHS 1678 than Sirius does in ours. Deep-sky objects Due to its small size and location away from the plane of the Milky Way, Caelum is rather devoid of deep-sky objects, and contains no Messier objects. The only deep-sky object in Caelum to receive much attention is HE0450-2958, an unusual Seyfert galaxy. Originally, the host galaxy of its jet proved elusive, and the jet appeared to be emanating from nothing. Although it has been suggested that the object is an ejected supermassive black hole, the host is now agreed to be a small galaxy that is difficult to see due to light from the jet and a nearby starburst galaxy. The 13th-magnitude planetary nebula PN G243-37.1 is also in the eastern regions of the constellation. It is one of only a few planetary nebulae found in the galactic halo, being light-years below the Milky Way's 1000 light-year-thick disk. Galaxies NGC 1595, NGC 1598, and the Carafe galaxy are known as the Carafe group. The Carafe galaxy is a Seyfert galaxy with a ring. Its position is right ascension 4h 28m, declination −47° 54′ (epoch 2000.0).
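The statement that α Caeli would outshine Sirius as seen from LHS 1678 can be sanity-checked with the standard inverse-square relation between apparent brightness and distance. The sketch below is only indicative: the magnitude 4.45 and the 3.3 light-year separation are taken from the text, while the roughly 66 light-year distance of α Caeli from Earth is an assumed figure, since the text does not give it.

    import math

    def apparent_mag_at(m_observed, d_observed, d_new):
        # Inverse-square law in magnitude form:
        # m_new = m_observed + 5 * log10(d_new / d_observed).
        # Distances may be in any consistent unit (light-years here).
        return m_observed + 5 * math.log10(d_new / d_observed)

    # 4.45 and 3.3 ly come from the text; 66 ly is an assumed distance from Earth.
    m_from_lhs1678 = apparent_mag_at(m_observed=4.45, d_observed=66.0, d_new=3.3)
    print(round(m_from_lhs1678, 2))       # roughly -2, brighter than Sirius (-1.46)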
Physical sciences
Other
Astronomy
6435
https://en.wikipedia.org/wiki/Canes%20Venatici
Canes Venatici
Canes Venatici is one of the 88 constellations designated by the International Astronomical Union (IAU). It is a small northern constellation that was created by Johannes Hevelius in the 17th century. Its name is Latin for 'hunting dogs', and the constellation is often depicted in illustrations as representing the dogs of Boötes the Herdsman, a neighboring constellation. Cor Caroli is the constellation's brightest star, with an apparent magnitude of 2.9. La Superba (Y CVn) is one of the reddest naked-eye stars and one of the brightest carbon stars. The Whirlpool Galaxy is a spiral galaxy tilted face-on to observers on Earth, and was the first galaxy whose spiral nature was discerned. In addition, the quasar TON 618 contains one of the most massive known black holes, with a mass of 66 billion solar masses. History The stars of Canes Venatici are not bright. In classical times, they were listed by Ptolemy as unfigured stars below the constellation Ursa Major in his star catalogue. In medieval times, the identification of these stars with the dogs of Boötes arose through a mistranslation: some of Boötes's stars were traditionally described as representing his club. When the Greek astronomer Ptolemy's Almagest was translated from Greek to Arabic, the translator Hunayn ibn Ishaq did not know the Greek word and rendered it as a similar-sounding compound Arabic word for a kind of weapon, meaning 'the staff having a hook'. When the Arabic text was later translated into Latin, the translator, Gerard of Cremona, mistook the Arabic word for 'hook' for the word for 'dogs'; the two written words look the same in Arabic text without diacritics. Gerard therefore rendered the phrase in Latin as 'spearshaft-having dogs'. In 1533, the German astronomer Peter Apian depicted Boötes as having two dogs with him. These spurious dogs floated about the astronomical literature until Hevelius decided to make them a separate constellation in 1687. Hevelius chose the name Asterion for the northern dog and Chara for the southern dog, presenting them as Canes Venatici, 'the hunting dogs', in his star atlas. In his star catalogue, the Czech astronomer Antonín Bečvář assigned the names Asterion to β CVn and Chara to α CVn. Although the International Astronomical Union dropped several constellations in 1930 that were medieval and Renaissance innovations, Canes Venatici survived to become one of the 88 IAU-designated constellations. Neighbors and borders Canes Venatici is bordered by Ursa Major to the north and west, Coma Berenices to the south, and Boötes to the east. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "CVn". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of 14 sides. In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between +27.84° and +52.36°. Covering 465 square degrees, it ranks 38th of the 88 constellations in size. Prominent stars and deep-sky objects Stars Canes Venatici contains no very bright stars. The Bayer-designated stars Alpha and Beta Canum Venaticorum are only of third and fourth magnitude, respectively. Flamsteed catalogued 25 stars in the constellation, labelling them 1 to 25 Canum Venaticorum (CVn); however, 1 CVn turned out to be in Ursa Major, 13 CVn was in Coma Berenices, and 22 CVn did not exist.
Alpha Canum Venaticorum, also known as Cor Caroli ('heart of Charles'), is the constellation's brightest star, named by Sir Charles Scarborough in memory of King Charles I, the executed king of Britain. The English astronomer William Henry Smyth wrote in 1844 that α CVn was brighter than usual during the Restoration, as Charles II returned to England to take the throne, but gave no source for this statement, which seems to be apocryphal. Cor Caroli is a wide double star, with a primary of magnitude 2.9 and a secondary of magnitude 5.6; the primary is 110 light-years from Earth. The primary also has an unusually strong variable magnetic field. Beta Canum Venaticorum, or Chara, is a yellow-hued main sequence star of magnitude 4.25, 27 light-years from Earth. Its common name comes from the Greek word for 'joy'. It has been listed as an astrobiologically interesting star because of its proximity and similarity to the Sun. However, no exoplanets have been discovered around it so far. Y Canum Venaticorum (La Superba) is a semiregular variable star that varies between magnitudes 5.0 and 6.5 over a period of around 158 days. It is a carbon star and is deep red in color, with a spectral type of C54J(N3). AM Canum Venaticorum, a very blue star of magnitude 14, is the prototype of a special class of cataclysmic variable stars, in which the companion star is a white dwarf, rather than a main sequence star. It is 143 parsecs distant from the Sun. RS Canum Venaticorum is the prototype of a special class of binary stars with chromospherically active and optically variable components. R Canum Venaticorum is a Mira variable that ranges between magnitudes 6.5 and 12.9 over a period of approximately 329 days. Supervoid The Giant Void, an extremely large void (a part of the universe containing very few galaxies), lies in the vicinity of this constellation. It is regarded as the second-largest void ever discovered, slightly larger than the Eridanus Supervoid, smaller than the proposed KBC Void, and about 1,200 times the volume of a typical void. It was discovered in 1988 in a deep-sky survey. Its centre is approximately 1.5 billion light-years away. Deep-sky objects Canes Venatici contains five Messier objects, including four galaxies. One of the more significant galaxies in Canes Venatici is the Whirlpool Galaxy (M51, NGC 5194), a spiral galaxy seen face-on, together with its small barred companion NGC 5195. M51 was the first galaxy recognised as having a spiral structure, this structure being first observed by Lord Rosse in 1845. It is a face-on spiral galaxy 37 million light-years from Earth. Widely considered to be one of the most beautiful galaxies visible, M51 has many star-forming regions and nebulae in its arms, coloring them pink and blue in contrast to the older yellow core. M51's smaller companion, NGC 5195, has very few star-forming regions and thus appears yellow. It is passing behind M51 and may be the cause of the larger galaxy's prodigious star formation. Other notable spiral galaxies in Canes Venatici are the Sunflower Galaxy (M63, NGC 5055), M94 (NGC 4736), and M106 (NGC 4258). M63, the Sunflower Galaxy, was named for its appearance in large amateur telescopes. It is a spiral galaxy with an integrated magnitude of 9.0. M94 (NGC 4736) is a small face-on spiral galaxy with approximate magnitude 8.0, about 15 million light-years from Earth. NGC 4631 is a barred spiral galaxy, which is one of the largest and brightest edge-on galaxies in the sky. M3 (NGC 5272) is a globular cluster 32,000 light-years from Earth.
It is 18′ in diameter, and at magnitude 6.3 is bright enough to be seen with binoculars. It can even be seen with the naked eye under particularly dark skies. M94, also cataloged as NGC 4736, is a face-on spiral galaxy 15 million light-years from Earth. It has very tight spiral arms and a bright core. The outskirts of the galaxy are incredibly luminous in the ultraviolet because of a ring of new stars surrounding the core, 7,000 light-years in diameter. Though astronomers are not sure what has caused this ring of new stars, some hypothesize that it is from shock waves caused by a bar that is thus far invisible. TON 618 is a hyperluminous quasar and blazar in this constellation, near its border with the neighboring Coma Berenices. It possesses a black hole with a mass 66 billion times that of the Sun, making it one of the most massive black holes ever measured. The quasar is also associated with a Lyman-alpha blob.
Physical sciences
Other
Astronomy
6436
https://en.wikipedia.org/wiki/Chamaeleon
Chamaeleon
Chamaeleon is a small constellation in the deep southern sky. It is named after the chameleon, a kind of lizard. It was first defined in the 16th century. History Chamaeleon was one of twelve constellations created by Petrus Plancius from the observations of Pieter Dirkszoon Keyser and Frederick de Houtman. It first appeared on a 35-cm diameter celestial globe published in 1597 (or 1598) in Amsterdam by Plancius and Jodocus Hondius. Johann Bayer was the first uranographer to put Chamaeleon in a celestial atlas. It was one of many constellations created by European explorers in the 15th and 16th centuries out of unfamiliar Southern Hemisphere stars. Features Stars There are four bright stars in Chamaeleon that form a compact diamond shape approximately 10 degrees from the south celestial pole and about 15 degrees south of Acrux, along the axis formed by Acrux and Gamma Crucis. Alpha Chamaeleontis is a white-hued star of magnitude 4.1, 63 light-years from Earth. Beta Chamaeleontis is a blue-white hued star of magnitude 4.2, 271 light-years from Earth. Gamma Chamaeleontis is a red-hued giant star of magnitude 4.1, 413 light-years from Earth. The other bright star in Chamaeleon is Delta Chamaeleontis, a wide double star. The brighter star is Delta2 Chamaeleontis, a blue-hued star of magnitude 4.4. Delta1 Chamaeleontis, the dimmer component, is an orange-hued giant star of magnitude 5.5. They both lie about 350 light years away. Chamaeleon is also the location of Cha 110913, a unique dwarf star or proto solar system. Deep-sky objects In 1999, a nearby open cluster was discovered centered on the star η Chamaeleontis. The cluster, known as either the Eta Chamaeleontis cluster or Mamajek 1, is 8 million years old, and lies 316 light years from Earth. The constellation contains a number of molecular clouds (the Chamaeleon dark clouds) that are forming low-mass T Tauri stars. The cloud complex lies some 400 to 600 light years from Earth, and contains tens of thousands of solar masses of gas and dust. The most prominent cluster of T Tauri stars and young B-type stars is in the Chamaeleon I cloud, and is associated with the reflection nebula IC 2631. Chamaeleon contains one planetary nebula, NGC 3195, which is fairly faint. It appears in a telescope at about the same apparent size as Jupiter. Equivalents In Chinese astronomy, the stars that form Chamaeleon were classified as the Little Dipper among the Southern Asterisms by Xu Guangqi. Chamaeleon is sometimes also called the Frying Pan in Australia.
Physical sciences
Other
Astronomy
6437
https://en.wikipedia.org/wiki/Cholesterol
Cholesterol
Cholesterol is the principal sterol of all higher animals, distributed in body tissues, especially the brain and spinal cord, and in animal fats and oils. Cholesterol is biosynthesized by all animal cells and is an essential structural and signaling component of animal cell membranes. In vertebrates, hepatic cells typically produce the greatest amounts. In the brain, astrocytes produce cholesterol and transport it to neurons. It is absent among prokaryotes (bacteria and archaea), although there are some exceptions, such as Mycoplasma, which require cholesterol for growth. Cholesterol also serves as a precursor for the biosynthesis of steroid hormones, bile acid and vitamin D. Elevated levels of cholesterol in the blood, especially when bound to low-density lipoprotein (LDL, often referred to as "bad cholesterol"), may increase the risk of cardiovascular disease. François Poulletier de la Salle first identified cholesterol in solid form in gallstones in 1769. In 1815, chemist Michel Eugène Chevreul named the compound "cholesterine". Etymology The word cholesterol comes from Ancient Greek chole- 'bile' and stereos 'solid', followed by the chemical suffix -ol for an alcohol. Physiology Cholesterol is essential for all animal life. While most cells are capable of synthesizing it, the majority of cholesterol is ingested or synthesized by hepatocytes and transported in the blood to peripheral cells. The levels of cholesterol in peripheral tissues are dictated by a balance of uptake and export. Under normal conditions, brain cholesterol is separate from peripheral cholesterol, i.e., the dietary and hepatic cholesterol do not cross the blood brain barrier. Rather, astrocytes produce and distribute cholesterol in the brain. De novo synthesis, both in astrocytes and hepatocytes, occurs by a complex 37-step process. This begins with the mevalonate or HMG-CoA reductase pathway, the target of statin drugs, which encompasses the first 18 steps. This is followed by 19 additional steps to convert the resulting lanosterol into cholesterol. A human male weighing 68 kg (150 lb) normally synthesizes about 1 gram (1,000 mg) of cholesterol per day, and his body contains about 35 g, mostly contained within the cell membranes. Typical daily cholesterol dietary intake for a man in the United States is 307 mg. Most ingested cholesterol is esterified, which causes it to be poorly absorbed by the gut. The body also compensates for absorption of ingested cholesterol by reducing its own cholesterol synthesis. For these reasons, cholesterol in food, seven to ten hours after ingestion, has little, if any effect on concentrations of cholesterol in the blood. Surprisingly, in rats, blood cholesterol is inversely correlated with cholesterol consumption. The more cholesterol a rat eats the lower the blood cholesterol. During the first seven hours after ingestion of cholesterol, as absorbed fats are being distributed around the body within extracellular water by the various lipoproteins (which transport all fats in the water outside cells), the concentrations increase. Plants make cholesterol in very small amounts. In larger quantities they produce phytosterols, chemically similar substances which can compete with cholesterol for reabsorption in the intestinal tract, thus potentially reducing cholesterol reabsorption. When intestinal lining cells absorb phytosterols, in place of cholesterol, they usually excrete the phytosterol molecules back into the GI tract, an important protective mechanism. 
The intake of naturally occurring phytosterols, which encompass plant sterols and stanols, ranges between ≈200–300 mg/day depending on eating habits. Specially designed vegetarian experimental diets have been produced yielding upwards of 700 mg/day. Function Membranes Cholesterol is present in varying degrees in all animal cell membranes, but is absent in prokaryotes. It is required to build and maintain membranes and modulates membrane fluidity over the range of physiological temperatures. The hydroxyl group of each cholesterol molecule interacts with water molecules surrounding the membrane, as do the polar heads of the membrane phospholipids and sphingolipids, while the bulky steroid and the hydrocarbon chain are embedded in the membrane, alongside the nonpolar fatty-acid chain of the other lipids. Through the interaction with the phospholipid fatty-acid chains, cholesterol increases membrane packing, which both alters membrane fluidity and maintains membrane integrity so that animal cells do not need to build cell walls (like plants and most bacteria). The membrane remains stable and durable without being rigid, allowing animal cells to change shape and animals to move. The structure of the tetracyclic ring of cholesterol contributes to the fluidity of the cell membrane, as the molecule is in a trans conformation making all but the side chain of cholesterol rigid and planar. In this structural role, cholesterol also reduces the permeability of the plasma membrane to neutral solutes, hydrogen ions, and sodium ions. Substrate presentation Cholesterol regulates the biological process of substrate presentation and the enzymes that use substrate presentation as a mechanism of their activation. Phospholipase D2 (PLD2) is a well-defined example of an enzyme activated by substrate presentation. The enzyme is palmitoylated causing the enzyme to traffic to cholesterol dependent lipid domains sometimes called "lipid rafts". The substrate of phospholipase D is phosphatidylcholine (PC) which is unsaturated and is of low abundance in lipid rafts. PC localizes to the disordered region of the cell along with the polyunsaturated lipid phosphatidylinositol 4,5-bisphosphate (PIP2). PLD2 has a PIP2 binding domain. When PIP2 concentration in the membrane increases, PLD2 leaves the cholesterol-dependent domains and binds to PIP2 where it then gains access to its substrate PC and commences catalysis based on substrate presentation. Signaling Cholesterol is also implicated in cell signaling processes, assisting in the formation of lipid rafts in the plasma membrane, which brings receptor proteins in close proximity with high concentrations of second messenger molecules. In multiple layers, cholesterol and phospholipids, both electrical insulators, can facilitate speed of transmission of electrical impulses along nerve tissue. For many neuron fibers, a myelin sheath, rich in cholesterol since it is derived from compacted layers of Schwann cell or oligodendrocyte membranes, provides insulation for more efficient conduction of impulses. Demyelination (loss of myelin) is believed to be part of the basis for multiple sclerosis. Cholesterol binds to and affects the gating of a number of ion channels such as the nicotinic acetylcholine receptor, GABAA receptor, and the inward-rectifier potassium channel. Cholesterol also activates the estrogen-related receptor alpha (ERRα), and may be the endogenous ligand for the receptor. 
The constitutively active nature of the receptor may be explained by the fact that cholesterol is ubiquitous in the body. Inhibition of ERRα signaling by reduction of cholesterol production has been identified as a key mediator of the effects of statins and bisphosphonates on bone, muscle, and macrophages. On the basis of these findings, it has been suggested that the ERRα should be de-orphanized and classified as a receptor for cholesterol. As a chemical precursor Within cells, cholesterol is also a precursor molecule for several biochemical pathways. For example, it is the precursor molecule for the synthesis of vitamin D in the calcium metabolism and all steroid hormones, including the adrenal gland hormones cortisol and aldosterone, as well as the sex hormones progesterone, estrogens, and testosterone, and their derivatives. Epidermis The stratum corneum is the outermost layer of the epidermis. It is composed of terminally differentiated and enucleated corneocytes that reside within a lipid matrix, like "bricks and mortar." Together with ceramides and free fatty acids, cholesterol forms the lipid mortar, a water-impermeable barrier that prevents evaporative water loss. As a rule of thumb, the epidermal lipid matrix is composed of an equimolar mixture of ceramides (≈50% by weight), cholesterol (≈25% by weight), and free fatty acids (≈15% by weight), with smaller quantities of other lipids also being present. Cholesterol sulfate reaches its highest concentration in the granular layer of the epidermis. Steroid sulfate sulfatase then decreases its concentration in the stratum corneum, the outermost layer of the epidermis. The relative abundance of cholesterol sulfate in the epidermis varies across different body sites with the heel of the foot having the lowest concentration. Metabolism Cholesterol is recycled in the body. The liver excretes cholesterol into biliary fluids, which are then stored in the gallbladder, which then excretes them in a non-esterified form (via bile) into the digestive tract. Typically, about 50% of the excreted cholesterol is reabsorbed by the small intestine back into the bloodstream. Biosynthesis and regulation Biosynthesis Almost all animal tissues synthesize cholesterol from acetyl-CoA. All animal cells (exceptions exist within the invertebrates) manufacture cholesterol, for both membrane structure and other uses, with relative production rates varying by cell type and organ function. About 80% of total daily cholesterol production occurs in the liver and the intestines; other sites of higher synthesis rates include the brain, the adrenal glands, and the reproductive organs. Synthesis within the body starts with the mevalonate pathway where two molecules of acetyl CoA condense to form acetoacetyl-CoA. This is followed by a second condensation between acetyl CoA and acetoacetyl-CoA to form 3-hydroxy-3-methylglutaryl CoA (HMG-CoA). This molecule is then reduced to mevalonate by the enzyme HMG-CoA reductase. Production of mevalonate is the rate-limiting and irreversible step in cholesterol synthesis and is the site of action for statins (a class of cholesterol-lowering drugs). Mevalonate is finally converted to isopentenyl pyrophosphate (IPP) through two phosphorylation steps and one decarboxylation step that requires ATP. Three molecules of isopentenyl pyrophosphate condense to form farnesyl pyrophosphate through the action of geranyl transferase. 
Two molecules of farnesyl pyrophosphate then condense to form squalene by the action of squalene synthase in the endoplasmic reticulum. Oxidosqualene cyclase then cyclizes squalene to form lanosterol. Finally, lanosterol is converted to cholesterol via either of two pathways, the Bloch pathway or the Kandutsch–Russell pathway. The final 19 steps to cholesterol involve NADPH and oxygen to help oxidize methyl groups for removal of carbons, mutases to move alkene groups, and NADH to help reduce ketones. Konrad Bloch and Feodor Lynen shared the Nobel Prize in Physiology or Medicine in 1964 for their discoveries concerning some of the mechanisms and methods of regulation of cholesterol and fatty acid metabolism. Regulation of cholesterol synthesis Biosynthesis of cholesterol is directly regulated by the cholesterol levels present, though the homeostatic mechanisms involved are only partly understood. A higher intake of cholesterol from food leads to a net decrease in endogenous production, whereas a lower intake from food has the opposite effect. The main regulatory mechanism is the sensing of intracellular cholesterol in the endoplasmic reticulum by the proteins SREBP (sterol regulatory element-binding proteins 1 and 2). In the presence of cholesterol, SREBP is bound to two other proteins: SCAP (SREBP cleavage-activating protein) and INSIG-1. When cholesterol levels fall, INSIG-1 dissociates from the SREBP-SCAP complex, which allows the complex to migrate to the Golgi apparatus. Here SREBP is cleaved by S1P and S2P (site-1 protease and site-2 protease), two enzymes that are activated by SCAP when cholesterol levels are low. The cleaved SREBP then migrates to the nucleus and acts as a transcription factor to bind to the sterol regulatory element (SRE), which stimulates the transcription of many genes. Among these are the low-density lipoprotein (LDL) receptor and HMG-CoA reductase. The LDL receptor scavenges circulating LDL from the bloodstream, whereas HMG-CoA reductase leads to an increase in endogenous production of cholesterol. A large part of this signaling pathway was clarified by Dr. Michael S. Brown and Dr. Joseph L. Goldstein in the 1970s. In 1985, they received the Nobel Prize in Physiology or Medicine for their work. Their subsequent work shows how the SREBP pathway regulates the expression of many genes that control lipid formation and metabolism and body fuel allocation. Cholesterol synthesis can also be turned off when cholesterol levels are high. HMG-CoA reductase contains both a cytosolic domain (responsible for its catalytic function) and a membrane domain. The membrane domain senses signals for its degradation. Increasing concentrations of cholesterol (and other sterols) cause a change in this domain's oligomerization state, which makes it more susceptible to destruction by the proteasome. This enzyme's activity can also be reduced by phosphorylation by an AMP-activated protein kinase. Because this kinase is activated by AMP, which is produced when ATP is hydrolyzed, it follows that cholesterol synthesis is halted when ATP levels are low. Plasma transport and regulation of absorption As an isolated molecule, cholesterol is only minimally soluble in water; it is largely hydrophobic. Because of this, it dissolves in blood at exceedingly small concentrations. To be transported effectively, cholesterol is instead packaged within lipoproteins, complex discoidal particles with exterior amphiphilic proteins and lipids, whose outward-facing surfaces are water-soluble and inward-facing surfaces are lipid-soluble.
This allows it to travel through the blood via emulsification. Unbound cholesterol, being amphipathic, is transported in the monolayer surface of the lipoprotein particle along with phospholipids and proteins. Cholesterol esters bound to fatty acid, on the other hand, are transported within the fatty hydrophobic core of the lipoprotein, along with triglyceride. There are several types of lipoproteins in the blood. In order of increasing density, they are chylomicrons, very-low-density lipoprotein (VLDL), intermediate-density lipoprotein (IDL), low-density lipoprotein (LDL), and high-density lipoprotein (HDL). Lower protein/lipid ratios make for less dense lipoproteins. Cholesterol within different lipoproteins is identical, although some is carried as its native "free" alcohol form (the cholesterol-OH group facing the water surrounding the particles), while others as fatty acyl esters, known also as cholesterol esters, within the particles. Lipoprotein particles are organized by complex apolipoproteins, typically 80–100 different proteins per particle, which can be recognized and bound by specific receptors on cell membranes, directing their lipid payload into specific cells and tissues currently ingesting these fat transport particles. These surface receptors serve as unique molecular signatures, which then help determine fat distribution delivery throughout the body. Chylomicrons, the least dense cholesterol transport particles, contain apolipoprotein B-48, apolipoprotein C, and apolipoprotein E (the principal cholesterol carrier in the brain) in their shells. Chylomicrons carry fats from the intestine to muscle and other tissues in need of fatty acids for energy or fat production. Unused cholesterol remains in more cholesterol-rich chylomicron remnants and is taken up from here to the bloodstream by the liver. VLDL particles are produced by the liver from triacylglycerol and cholesterol which was not used in the synthesis of bile acids. These particles contain apolipoprotein B100 and apolipoprotein E in their shells and can be degraded by lipoprotein lipase on the artery wall to IDL. This arterial wall cleavage allows absorption of triacylglycerol and increases the concentration of circulating cholesterol. IDL particles are then consumed in two processes: half is metabolized by HTGL and taken up by the LDL receptor on the liver cell surfaces, while the other half continues to lose triacylglycerols in the bloodstream until they become cholesterol-laden LDL particles. LDL particles are the major blood cholesterol carriers. Each one contains approximately 1,500 molecules of cholesterol ester. LDL particle shells contain just one molecule of apolipoprotein B100, recognized by LDL receptors in peripheral tissues. Upon binding of apolipoprotein B100, many LDL receptors concentrate in clathrin-coated pits. Both LDL and its receptor form vesicles within a cell via endocytosis. These vesicles then fuse with a lysosome, where the lysosomal acid lipase enzyme hydrolyzes the cholesterol esters. The cholesterol can then be used for membrane biosynthesis or esterified and stored within the cell, so as to not interfere with the cell membranes. LDL receptors are used up during cholesterol absorption, and its synthesis is regulated by SREBP, the same protein that controls the synthesis of cholesterol de novo, according to its presence inside the cell. A cell with abundant cholesterol will have its LDL receptor synthesis blocked, to prevent new cholesterol in LDL particles from being taken up. 
Conversely, LDL receptor synthesis proceeds when a cell is deficient in cholesterol. When this process becomes unregulated, LDL particles without receptors begin to appear in the blood. These LDL particles are oxidized and taken up by macrophages, which become engorged and form foam cells. These foam cells often become trapped in the walls of blood vessels and contribute to atherosclerotic plaque formation. Differences in cholesterol homeostasis affect the development of early atherosclerosis (carotid intima-media thickness). These plaques are the main causes of heart attacks, strokes, and other serious medical problems, leading to the association of so-called LDL cholesterol (actually a lipoprotein) with "bad" cholesterol. HDL particles are thought to transport cholesterol back to the liver, either for excretion or for other tissues that synthesize hormones, in a process known as reverse cholesterol transport (RCT). Large numbers of HDL particles correlates with better health outcomes, whereas low numbers of HDL particles is associated with atheromatous disease progression in the arteries. Metabolism, recycling and excretion Cholesterol is susceptible to oxidation and easily forms oxygenated derivatives called oxysterols. Three different mechanisms can form these: autoxidation, secondary oxidation to lipid peroxidation, and cholesterol-metabolizing enzyme oxidation. A great interest in oxysterols arose when they were shown to exert inhibitory actions on cholesterol biosynthesis. This finding became known as the "oxysterol hypothesis". Additional roles for oxysterols in human physiology include their participation in bile acid biosynthesis, function as transport forms of cholesterol, and regulation of gene transcription. In biochemical experiments, radiolabelled forms of cholesterol, such as tritiated-cholesterol, are used. These derivatives undergo degradation upon storage, and it is essential to purify cholesterol prior to use. Cholesterol can be purified using small Sephadex LH-20 columns. Cholesterol is oxidized by the liver into a variety of bile acids. These, in turn, are conjugated with glycine, taurine, glucuronic acid, or sulfate. A mixture of conjugated and nonconjugated bile acids, along with cholesterol itself, is excreted from the liver into the bile. Approximately 95% of the bile acids are reabsorbed from the intestines, and the remainder are lost in the feces. The excretion and reabsorption of bile acids forms the basis of the enterohepatic circulation, which is essential for the digestion and absorption of dietary fats. Under certain circumstances, when more concentrated, as in the gallbladder, cholesterol crystallises and is the major constituent of most gallstones (lecithin and bilirubin gallstones also occur, but less frequently). Every day, up to 1 g of cholesterol enters the colon. This cholesterol originates from the diet, bile, and desquamated intestinal cells, and it can be metabolized by the colonic bacteria. Cholesterol is converted mainly into coprostanol, a nonabsorbable sterol that is excreted in the feces. Although cholesterol is a steroid generally associated with mammals, the human pathogen Mycobacterium tuberculosis is able to completely degrade this molecule and contains a large number of genes that are regulated by its presence. Many of these cholesterol-regulated genes are homologues of fatty acid β-oxidation genes, but have evolved in such a way as to bind large steroid substrates like cholesterol. 
Dietary sources Animal fats are complex mixtures of triglycerides, with lesser amounts of both the phospholipids and cholesterol molecules from which all animal (and human) cell membranes are constructed. Since all animal cells manufacture cholesterol, all animal-based foods contain cholesterol in varying amounts. Major dietary sources of cholesterol include red meat, egg yolks and whole eggs, liver, kidney, giblets, fish oil, shellfish, and butter. Human breast milk also contains significant quantities of cholesterol. Plant cells synthesize cholesterol as a precursor for other compounds, such as phytosterols and steroidal glycoalkaloids, with cholesterol remaining in plant foods only in minor amounts or absent. Some plant foods, such as avocado, flax seeds and peanuts, contain phytosterols, which compete with cholesterol for absorption in the intestines and reduce the absorption of both dietary and bile cholesterol. A typical diet contributes on the order of 0.2 gram of phytosterols, which is not enough to have a significant impact on blocking cholesterol absorption. Phytosterol intake can be supplemented through the use of phytosterol-containing functional foods or dietary supplements that are recognized as having potential to reduce levels of LDL-cholesterol. Medical guidelines and recommendations In 2015, the scientific advisory panel of the U.S. Department of Health and Human Services and the U.S. Department of Agriculture for the 2015 iteration of the Dietary Guidelines for Americans dropped the previously recommended limit of 300 mg per day on dietary cholesterol consumption, replacing it with a new recommendation to "eat as little dietary cholesterol as possible", thereby acknowledging an association between a diet low in cholesterol and reduced risk of cardiovascular disease. A 2013 report by the American Heart Association and the American College of Cardiology recommended focusing on healthy dietary patterns rather than specific cholesterol limits, as the latter are hard for clinicians and consumers to implement. They recommend the DASH and Mediterranean diets, which are low in cholesterol. A 2017 review by the American Heart Association recommends replacing saturated fats with polyunsaturated fats to reduce cardiovascular disease risk. Some supplemental guidelines have recommended doses of phytosterols in the 1.6–3.0 grams per day range (Health Canada, EFSA, ATP III, FDA). A meta-analysis demonstrated a 12% reduction in LDL-cholesterol at a mean dose of 2.1 grams per day. The benefits of a diet supplemented with phytosterols have also been questioned. Clinical significance Hypercholesterolemia According to the lipid hypothesis, elevated levels of cholesterol in the blood lead to atherosclerosis, which may increase the risk of heart attack, stroke, and peripheral artery disease. Since higher blood LDL – especially higher LDL concentrations and smaller LDL particle size – contributes to this process more than the cholesterol content of the HDL particles, LDL particles are often termed "bad cholesterol". High concentrations of functional HDL, which can remove cholesterol from cells and atheromas, offer protection and are commonly referred to as "good cholesterol". These balances are mostly genetically determined, but can be changed by body composition, medications, diet, and other factors. A 2007 study demonstrated that blood total cholesterol levels have an exponential effect on cardiovascular and total mortality, with the association more pronounced in younger subjects.
Because cardiovascular disease is relatively rare in the younger population, the impact of high cholesterol on health is larger in older people. Elevated levels of the lipoprotein fractions, LDL, IDL and VLDL, rather than the total cholesterol level, correlate with the extent and progress of atherosclerosis. Conversely, the total cholesterol can be within normal limits, yet be made up primarily of small LDL and small HDL particles, under which conditions atheroma growth rates are high. A post hoc analysis of the IDEAL and the EPIC prospective studies found an association between high levels of HDL cholesterol (adjusted for apolipoprotein A-I and apolipoprotein B) and increased risk of cardiovascular disease, casting doubt on the cardioprotective role of "good cholesterol". About one in 250 individuals carries a genetic mutation in the LDL cholesterol receptor that causes familial hypercholesterolemia. Inherited high cholesterol can also involve genetic mutations in the PCSK9 gene and the gene for apolipoprotein B. Elevated cholesterol levels are treatable by a diet that reduces or eliminates saturated fat and trans fats, often followed by one of various hypolipidemic agents, such as statins, fibrates, cholesterol absorption inhibitors, monoclonal antibody therapy (PCSK9 inhibitors), nicotinic acid derivatives or bile acid sequestrants. There are several international guidelines on the treatment of hypercholesterolemia. Human trials using HMG-CoA reductase inhibitors, known as statins, have repeatedly confirmed that changing lipoprotein transport patterns from unhealthy to healthier patterns significantly lowers cardiovascular disease event rates, even for people with cholesterol values currently considered low for adults. Studies have shown that reducing LDL cholesterol levels by about 38.7 mg/dL with the use of statins can reduce cardiovascular disease and stroke risk by about 21%. Studies have also found that statins reduce atheroma progression. As a result, people with a history of cardiovascular disease may derive benefit from statins irrespective of their cholesterol levels (total cholesterol below 5.0 mmol/L [193 mg/dL]), and in men without cardiovascular disease, there is benefit from lowering abnormally high cholesterol levels ("primary prevention"). Primary prevention in women was originally practiced only by extension of the findings in studies on men, since, in women, none of the large statin trials conducted prior to 2007 demonstrated a significant reduction in overall mortality or in cardiovascular endpoints. Meta-analyses have demonstrated significant reductions in all-cause and cardiovascular mortality, without significant heterogeneity by sex. The 1987 report of the National Cholesterol Education Program's Adult Treatment Panel suggests that total blood cholesterol should be interpreted as follows: below 200 mg/dL is normal, 200–239 mg/dL is borderline-high, and 240 mg/dL or above is high. The American Heart Association provides a similar set of guidelines for total (fasting) blood cholesterol levels and risk for heart disease. Statins are effective in lowering LDL cholesterol and are widely used for primary prevention in people at high risk of cardiovascular disease, as well as in secondary prevention for those who have developed cardiovascular disease.
The global mean total cholesterol for humans has remained at about 4.6 mmol/L (178 mg/dL) for men and women, both crude and age-standardized, for nearly 40 years, from 1980 to 2018, with some regional variation and a reduction of total cholesterol in Western nations. More current testing methods determine LDL ("bad") and HDL ("good") cholesterol separately, allowing cholesterol analysis to be more nuanced. The desirable LDL level is considered to be less than 100 mg/dL (2.6 mmol/L). Total cholesterol is defined as the sum of HDL, LDL, and VLDL. Usually, only the total, HDL, and triglycerides are measured. For cost reasons, the VLDL is usually estimated as one-fifth of the triglycerides and the LDL is estimated using the Friedewald formula (or a variant): estimated LDL = [total cholesterol] − [total HDL] − [estimated VLDL]. Direct LDL measures are used when triglycerides exceed 400 mg/dL. The estimated VLDL and LDL have more error when triglycerides are above 400 mg/dL. In the Framingham Heart Study, each 10 mg/dL (0.26 mmol/L) increase in total cholesterol levels increased 30-year overall mortality by 5% and CVD mortality by 9%. Subjects over the age of 50, however, had an 11% increase in overall mortality and a 14% increase in cardiovascular disease mortality per 1 mg/dL (0.026 mmol/L) per year drop in total cholesterol levels. The researchers attributed this phenomenon to a different correlation, whereby the disease itself increases risk of death, as well as changing a myriad of factors, such as weight loss and the inability to eat, which lower serum cholesterol. This effect was also shown in men of all ages and women over 50 in the Vorarlberg Health Monitoring and Promotion Programme. These groups were more likely to die of cancer, liver diseases, and mental diseases with very low total cholesterol, of 186 mg/dL (4.8 mmol/L) and lower. This result indicates the low-cholesterol effect occurs even among younger respondents, contradicting the previous assessment among cohorts of older people that this is a marker for frailty occurring with age. Hypocholesterolemia Abnormally low levels of cholesterol are termed hypocholesterolemia. Research into the causes of this state is relatively limited, but some studies suggest a link with depression, cancer, and cerebral hemorrhage. In general, the low cholesterol levels seem to be a consequence, rather than a cause, of an underlying illness. A genetic defect in cholesterol synthesis causes Smith–Lemli–Opitz syndrome, which is often associated with low plasma cholesterol levels. Hyperthyroidism, or any other endocrine disturbance which causes upregulation of the LDL receptor, may result in hypocholesterolemia. Testing The American Heart Association recommends testing cholesterol every 4–6 years for people aged 20 years or older. A separate set of American Heart Association guidelines issued in 2013 indicates that people taking statin medications should have their cholesterol tested 4–12 weeks after their first dose and then every 3–12 months thereafter. For men ages 45 to 65 and women ages 55 to 65, a cholesterol test should occur every 1–2 years, and for seniors over age 65, an annual test should be performed. To measure a lipid profile, a blood sample is taken by a healthcare professional from an arm vein after 12 hours of fasting and used to determine a) total cholesterol, b) HDL cholesterol, c) LDL cholesterol, and d) triglycerides. Results for LDL may be expressed as "calculated", indicating that the value is derived from the measured total cholesterol, HDL, and triglycerides.
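The Friedewald estimate described above can be written out directly. The following is a minimal Python sketch with illustrative input values (the function name and the sample numbers are hypothetical; the 400 mg/dL triglyceride cutoff is the one given in the text):

    def friedewald_ldl(total_chol_mgdl, hdl_mgdl, triglycerides_mgdl):
        # Estimated LDL = total cholesterol - HDL - estimated VLDL,
        # where VLDL is approximated as triglycerides / 5 (all values in mg/dL).
        if triglycerides_mgdl > 400:
            raise ValueError("Estimate unreliable; use a direct LDL measurement")
        estimated_vldl = triglycerides_mgdl / 5.0
        return total_chol_mgdl - hdl_mgdl - estimated_vldl

    # Illustrative numbers, not taken from the article:
    print(friedewald_ldl(total_chol_mgdl=190, hdl_mgdl=55, triglycerides_mgdl=120))  # 111.0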
Cholesterol test results are considered "normal" or "desirable" if a person has a total cholesterol of 5.2 mmol/L (200 mg/dL) or less, an HDL value of more than 1 mmol/L (40 mg/dL; "the higher, the better"), an LDL value of less than 2.6 mmol/L (100 mg/dL), and a triglyceride level of less than 1.7 mmol/L (150 mg/dL). Blood cholesterol in people with lifestyle, aging, or cardiovascular risk factors, such as diabetes mellitus, hypertension, family history of coronary artery disease, or angina, is evaluated against different thresholds. Cholesteric liquid crystals Some cholesterol derivatives (among other simple cholesteric lipids) are known to generate the liquid crystalline "cholesteric phase". The cholesteric phase is, in fact, a chiral nematic phase, and it changes colour when its temperature changes. This makes cholesterol derivatives useful for indicating temperature in liquid-crystal display thermometers and in temperature-sensitive paints. Stereoisomers Cholesterol has 256 stereoisomers that arise from its eight stereocenters (2^8 = 256), although only two of the stereoisomers have biochemical significance (nat-cholesterol and ent-cholesterol, for natural and enantiomer, respectively), and only one occurs naturally (nat-cholesterol).
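The test thresholds above are quoted in both mmol/L and mg/dL. For cholesterol, the conversion follows from its molar mass of roughly 387 g/mol, so 1 mmol/L is about 38.67 mg/dL; for triglycerides the conventional factor is about 88.57. A small Python sketch (the factors are the standard ones; the rounding is approximate) showing that the quoted figures line up across the two unit systems:

    CHOL_MMOL_TO_MGDL = 38.67    # cholesterol: mg/dL per mmol/L
    TRIG_MMOL_TO_MGDL = 88.57    # triglycerides: mg/dL per mmol/L

    def chol_mmol_to_mgdl(mmol_per_l):
        return mmol_per_l * CHOL_MMOL_TO_MGDL

    print(round(chol_mmol_to_mgdl(5.2)))   # ~200 mg/dL desirable total cholesterol
    print(round(chol_mmol_to_mgdl(2.6)))   # ~100 mg/dL desirable LDL ceiling
    print(round(1.7 * TRIG_MMOL_TO_MGDL))  # ~150 mg/dL desirable triglyceride ceiling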
Biology and health sciences
Biochemistry and molecular biology
null
6438
https://en.wikipedia.org/wiki/Chromosome
Chromosome
A chromosome is a package of DNA containing part or all of the genetic material of an organism. In most chromosomes, the very long thin DNA fibers are coated with nucleosome-forming packaging proteins; in eukaryotic cells, the most important of these proteins are the histones. Aided by chaperone proteins, the histones bind to and condense the DNA molecule to maintain its integrity. These eukaryotic chromosomes display a complex three-dimensional structure that has a significant role in transcriptional regulation. Normally, chromosomes are visible under a light microscope only during the metaphase of cell division, when all chromosomes are aligned in the center of the cell in their condensed form. Before this stage occurs, each chromosome is duplicated (S phase), and the two copies are joined by a centromere—resulting in either an X-shaped structure if the centromere is located equatorially, or a two-armed structure if the centromere is located distally; the joined copies are called 'sister chromatids'. During metaphase, the duplicated structure (called a 'metaphase chromosome') is highly condensed and thus easiest to distinguish and study. In animal cells, chromosomes reach their highest compaction level in anaphase during chromosome segregation. Chromosomal recombination during meiosis and subsequent sexual reproduction plays a crucial role in genetic diversity. If these structures are manipulated incorrectly, through processes known as chromosomal instability and translocation, the cell may undergo mitotic catastrophe. This will usually cause the cell to initiate apoptosis, leading to its own death, but the process is occasionally hampered by cell mutations that result in the progression of cancer. The term 'chromosome' is sometimes used in a wider sense to refer to the individualized portions of chromatin in cells, which may or may not be visible under light microscopy. In a narrower sense, 'chromosome' can be used to refer to the individualized portions of chromatin during cell division, which are visible under light microscopy due to high condensation. Etymology The word chromosome comes from the Greek words chroma ("colour") and soma ("body"), describing the strong staining produced by particular dyes. The term was coined by the German anatomist Heinrich Wilhelm Waldeyer, referring to the term 'chromatin', which was introduced by Walther Flemming. Some of the early karyological terms have become outdated. For example, 'chromatin' (Flemming 1880) and 'chromosom' (Waldeyer 1888) both ascribe color to a non-colored state. History of discovery Otto Bütschli was the first scientist to recognize the structures now known as chromosomes. In a series of experiments beginning in the mid-1880s, Theodor Boveri made definitive contributions to establishing that chromosomes are the vectors of heredity, with two notions that became known as 'chromosome continuity' and 'chromosome individuality'. Wilhelm Roux suggested that every chromosome carries a different genetic configuration, and Boveri was able to test and confirm this hypothesis. Aided by the rediscovery at the start of the 1900s of Gregor Mendel's earlier experimental work, Boveri identified the connection between the rules of inheritance and the behaviour of the chromosomes. Two generations of American cytologists were influenced by Boveri: Edmund Beecher Wilson, Nettie Stevens, Walter Sutton and Theophilus Painter (Wilson, Stevens, and Painter actually worked with him).
In his famous textbook, The Cell in Development and Heredity, Wilson linked together the independent work of Boveri and Sutton (both around 1902) by naming the chromosome theory of inheritance the 'Boveri–Sutton chromosome theory' (sometimes known as the 'Sutton–Boveri chromosome theory'). Ernst Mayr remarks that the theory was hotly contested by some famous geneticists, including William Bateson, Wilhelm Johannsen, Richard Goldschmidt and T.H. Morgan, all of a rather dogmatic mindset. Eventually, absolute proof came from chromosome maps in Morgan's own laboratory. The number of human chromosomes was published by Painter in 1923. By inspection through a microscope, he counted 24 pairs of chromosomes, giving 48 in total. His error was copied by others, and it was not until 1956 that the true number (46) was determined by Indonesian-born cytogeneticist Joe Hin Tjio. Prokaryotes The prokaryotes – bacteria and archaea – typically have a single circular chromosome. The chromosomes of most bacteria (also called genophores) can range in size from only 130,000 base pairs in the endosymbiotic bacteria Candidatus Hodgkinia cicadicola and Candidatus Tremblaya princeps, to more than 14,000,000 base pairs in the soil-dwelling bacterium Sorangium cellulosum. Some bacteria have more than one chromosome: Vibrios typically carry two chromosomes of very different size, and genomes of the genus Burkholderia carry one, two, or three chromosomes. Other bacteria depart from the circular norm; Spirochaetes such as Borrelia burgdorferi (the cause of Lyme disease) contain a single linear chromosome. Structure in sequences Prokaryotic chromosomes have less sequence-based structure than eukaryotes. Bacteria typically have a single point (the origin of replication) from which replication starts, whereas some archaea contain multiple replication origins. The genes in prokaryotes are often organized in operons and do not usually contain introns, unlike eukaryotes. DNA packaging Prokaryotes do not possess nuclei. Instead, their DNA is organized into a structure called the nucleoid. The nucleoid is a distinct structure and occupies a defined region of the bacterial cell. This structure is, however, dynamic and is maintained and remodeled by the actions of a range of histone-like proteins, which associate with the bacterial chromosome. In archaea, the DNA in chromosomes is even more organized, with the DNA packaged within structures similar to eukaryotic nucleosomes. Certain bacteria also contain plasmids or other extrachromosomal DNA. These are circular structures in the cytoplasm that contain cellular DNA and play a role in horizontal gene transfer. In prokaryotes and viruses, the DNA is often densely packed and organized; in the case of archaea, by homology to eukaryotic histones, and in the case of bacteria, by histone-like proteins. Bacterial chromosomes tend to be tethered to the plasma membrane of the bacteria. In molecular biology applications, this allows the chromosomal DNA to be isolated from plasmid DNA by centrifugation of lysed bacteria and pelleting of the membranes (and the attached DNA). Prokaryotic chromosomes and plasmids are, like eukaryotic DNA, generally supercoiled. The DNA must first be released into its relaxed state for access during transcription, regulation, and replication. Eukaryotes Each eukaryotic chromosome consists of a long linear DNA molecule associated with proteins, forming a compact complex of proteins and DNA called chromatin.
Chromatin contains the vast majority of the DNA in an organism, but a small amount inherited maternally can be found in the mitochondria. It is present in most cells, with a few exceptions, for example, red blood cells. Histones are responsible for the first and most basic unit of chromosome organization, the nucleosome. Eukaryotes (cells with nuclei such as those found in plants, fungi, and animals) possess multiple large linear chromosomes contained in the cell's nucleus. Each chromosome has one centromere, with one or two arms projecting from the centromere, although, under most circumstances, these arms are not visible as such. In addition, most eukaryotes have a small circular mitochondrial genome, and some eukaryotes may have additional small circular or linear cytoplasmic chromosomes. In the nuclear chromosomes of eukaryotes, the uncondensed DNA exists in a semi-ordered structure, where it is wrapped around histones (structural proteins), forming a composite material called chromatin. Interphase chromatin The packaging of DNA into nucleosomes produces a 10-nanometre fibre, which may condense further into fibres of about 30 nm. Most of the euchromatin in interphase nuclei appears to be in the form of 30-nm fibres. The more decondensed state of chromatin, i.e. the 10-nm conformation, is the one that allows transcription. During interphase (the period of the cell cycle where the cell is not dividing), two types of chromatin can be distinguished: Euchromatin, which consists of DNA that is active, e.g., being expressed as protein. Heterochromatin, which consists of mostly inactive DNA. It seems to serve structural purposes during the chromosomal stages. Heterochromatin can be further distinguished into two types: Constitutive heterochromatin, which is never expressed. It is located around the centromere and usually contains repetitive sequences. Facultative heterochromatin, which is sometimes expressed. Metaphase chromatin and division In the early stages of mitosis or meiosis (cell division), the chromatin strands become more and more condensed. They cease to function as accessible genetic material (transcription stops) and become a compact transportable form. The loops of thirty-nanometre chromatin fibres are thought to fold upon themselves further to form the compact metaphase chromosomes of mitotic cells. The DNA is thus condensed about ten-thousand-fold. The chromosome scaffold, which is made of proteins such as condensin, TOP2A and KIF4, plays an important role in holding the chromatin into compact chromosomes. Loops of the thirty-nanometre fibre condense further with the scaffold into higher-order structures. This highly compact form makes the individual chromosomes visible, and they form the classic four-arm structure, a pair of sister chromatids attached to each other at the centromere. The shorter arms are called p arms (from the French petit, small) and the longer arms are called q arms (q simply follows p in the Latin alphabet; alternatively, it is sometimes said that q is short for queue, meaning tail in French). This is the only natural context in which individual chromosomes are visible with an optical microscope. Mitotic metaphase chromosomes are best described as a linearly organized, longitudinally compressed array of consecutive chromatin loops. During mitosis, microtubules grow from centrosomes located at opposite ends of the cell and also attach to the centromere at specialized structures called kinetochores, one of which is present on each sister chromatid.
A special DNA base sequence in the region of the kinetochores provides, along with special proteins, longer-lasting attachment in this region. The microtubules then pull the chromatids apart toward the centrosomes, so that each daughter cell inherits one set of chromatids. Once the cells have divided, the chromatids are uncoiled and DNA can again be transcribed. In spite of their appearance, chromosomes are structurally highly condensed, which enables these giant DNA structures to be contained within a cell nucleus. Human chromosomes Chromosomes in humans can be divided into two types: autosomes (body chromosomes) and allosomes (sex chromosomes). Certain genetic traits are linked to a person's sex and are passed on through the sex chromosomes. The autosomes contain the rest of the genetic hereditary information. All act in the same way during cell division. Human cells have 23 pairs of chromosomes (22 pairs of autosomes and one pair of sex chromosomes), giving a total of 46 per cell. In addition to these, human cells have many hundreds of copies of the mitochondrial genome. Sequencing of the human genome has provided a great deal of information about each of the chromosomes. Statistics for the individual chromosomes have been compiled from the Sanger Institute's human genome information in the Vertebrate Genome Annotation (VEGA) database. The number of genes given for each chromosome is an estimate, as it is in part based on gene predictions. Total chromosome length is an estimate as well, based on the estimated size of unsequenced heterochromatin regions. Based on the micrographic characteristics of size, position of the centromere and sometimes the presence of a chromosomal satellite, the human chromosomes are classified into groups A to G: group A (chromosomes 1–3), B (4–5), C (6–12 and X), D (13–15), E (16–18), F (19–20) and G (21–22 and Y). Karyotype In general, the karyotype is the characteristic chromosome complement of a eukaryote species. The preparation and study of karyotypes is part of cytogenetics. Although the replication and transcription of DNA are highly standardized in eukaryotes, the same cannot be said for their karyotypes, which are often highly variable. There may be variation between species in chromosome number and in detailed organization. In some cases, there is significant variation within species. Often there is: 1. variation between the two sexes; 2. variation between the germline and soma (between gametes and the rest of the body); 3. variation between members of a population, due to balanced genetic polymorphism; 4. geographical variation between races; 5. mosaics or otherwise abnormal individuals. Also, variation in karyotype may occur during development from the fertilized egg. The technique of determining the karyotype is usually called karyotyping. Cells can be locked part-way through division (in metaphase) in vitro (in a reaction vial) with colchicine. These cells are then stained, photographed, and arranged into a karyogram, with the set of chromosomes arranged, autosomes in order of length, and sex chromosomes (here X/Y) at the end. Like many sexually reproducing species, humans have special gonosomes (sex chromosomes, in contrast to autosomes). These are XX in females and XY in males. History and analysis techniques Investigation into the human karyotype took many years to settle the most basic question: How many chromosomes does a normal diploid human cell contain? In 1912, Hans von Winiwarter reported 47 chromosomes in spermatogonia and 48 in oogonia, concluding an XX/XO sex determination mechanism.
In 1922, Painter was not certain whether the diploid number of man was 46 or 48, at first favouring 46. He revised his opinion later from 46 to 48, and he correctly insisted on humans having an XX/XY system. New techniques were needed to definitively solve the problem: using cells in culture; arresting mitosis in metaphase with a solution of colchicine; pretreating cells in a hypotonic solution, which swells them and spreads the chromosomes; squashing the preparation on the slide, forcing the chromosomes into a single plane; and cutting up a photomicrograph and arranging the result into an indisputable karyogram. It took until the mid-1950s before the human diploid number was confirmed as 46. Considering the techniques of Winiwarter and Painter, their results were quite remarkable. Chimpanzees, the closest living relatives to modern humans, have 48 chromosomes, as do the other great apes: in humans two chromosomes fused to form chromosome 2. Aberrations Chromosomal aberrations are disruptions in the normal chromosomal content of a cell. They can cause genetic conditions in humans, such as Down syndrome, although most aberrations have little to no effect. Some chromosome abnormalities do not cause disease in carriers, such as translocations or chromosomal inversions, although they may lead to a higher chance of bearing a child with a chromosome disorder. Abnormal numbers of chromosomes or chromosome sets, called aneuploidy, may be lethal or may give rise to genetic disorders. Genetic counseling is offered for families that may carry a chromosome rearrangement. The gain or loss of DNA from chromosomes can lead to a variety of genetic disorders. Human examples include: Cri du chat, caused by the deletion of part of the short arm of chromosome 5. "Cri du chat" means "cry of the cat" in French; the condition was so named because affected babies make high-pitched cries that sound like those of a cat. Affected individuals have wide-set eyes, a small head and jaw, moderate to severe mental health problems, and are very short. DiGeorge syndrome, also known as 22q11.2 deletion syndrome. Symptoms are mild learning disabilities in children, with adults having an increased risk of schizophrenia. Infections are also common in children because of problems with the immune system's T cell-mediated response due to an absent or hypoplastic thymus. Down syndrome, the most common trisomy, usually caused by an extra copy of chromosome 21 (trisomy 21). Characteristics include decreased muscle tone, stockier build, asymmetrical skull, slanting eyes, and mild to moderate developmental disability. Edwards syndrome, or trisomy-18, the second most common trisomy. Symptoms include motor retardation, developmental disability, and numerous congenital anomalies causing serious health problems. Ninety percent of those affected die in infancy. They have characteristic clenched hands and overlapping fingers. Isodicentric 15, also called idic(15), partial tetrasomy 15q, or inverted duplication 15 (inv dup 15). Jacobsen syndrome, which is very rare. It is also called the 11q terminal deletion disorder. Those affected have normal intelligence or mild developmental disability, with poor expressive language skills. Most have a bleeding disorder called Paris-Trousseau syndrome. Klinefelter syndrome (XXY). Men with Klinefelter syndrome are usually sterile, and tend to be taller than their peers, with longer arms and legs. Boys with the syndrome are often shy and quiet, and have a higher incidence of speech delay and dyslexia.
Without testosterone treatment, some may develop gynecomastia during puberty. Patau syndrome, also called D-syndrome or trisomy-13. Symptoms are somewhat similar to those of trisomy-18, without the characteristic folded hand. Small supernumerary marker chromosome. This means there is an extra, abnormal chromosome. Features depend on the origin of the extra genetic material. Cat-eye syndrome and isodicentric chromosome 15 syndrome (or Idic15) are both caused by a supernumerary marker chromosome, as is Pallister–Killian syndrome. Triple-X syndrome (XXX). XXX girls tend to be tall and thin, and have a higher incidence of dyslexia. Turner syndrome (X instead of XX or XY). In Turner syndrome, female sexual characteristics are present but underdeveloped. Females with Turner syndrome often have a short stature, low hairline, abnormal eye features and bone development, and a "caved-in" appearance to the chest. Wolf–Hirschhorn syndrome, caused by partial deletion of the short arm of chromosome 4. It is characterized by growth retardation, delayed motor skills development, "Greek Helmet" facial features, and mild to profound mental health problems. XYY syndrome. XYY boys are usually taller than their siblings. Like XXY boys and XXX girls, they are more likely to have learning difficulties. Sperm aneuploidy Exposure of males to certain lifestyle, environmental and/or occupational hazards may increase the risk of aneuploid spermatozoa. In particular, risk of aneuploidy is increased by tobacco smoking and by occupational exposure to benzene, insecticides, and perfluorinated compounds. Increased aneuploidy is often associated with increased DNA damage in spermatozoa. Number in various organisms In eukaryotes The number of chromosomes in eukaryotes is highly variable. It is possible for chromosomes to fuse or break and thus evolve into novel karyotypes. Chromosomes can also be fused artificially. For example, when the 16 chromosomes of yeast were fused into one giant chromosome, it was found that the cells were still viable with only somewhat reduced growth rates. The total number of chromosomes (including sex chromosomes) in a cell nucleus has been tabulated for many eukaryotes. Most are diploid, such as humans, who have 22 different types of autosomes—each present as a homologous pair—and two sex chromosomes, giving 46 chromosomes in total. Some other organisms have more than two copies of their chromosome types, for example bread wheat, which is hexaploid, having six copies of seven different chromosome types for a total of 42 chromosomes. Normal members of a particular eukaryotic species all have the same number of nuclear chromosomes. Other eukaryotic chromosomes, i.e., mitochondrial and plasmid-like small chromosomes, are much more variable in number, and there may be thousands of copies per cell. Asexually reproducing species have one set of chromosomes that are the same in all body cells. However, asexual species can be either haploid or diploid. Sexually reproducing species have somatic cells (body cells) that are diploid [2n], having two sets of chromosomes (23 pairs in humans), one set from the mother and one from the father. Gametes (reproductive cells) are haploid [n], having one set of chromosomes. Gametes are produced by meiosis of a diploid germline cell, during which the matching chromosomes of father and mother can exchange small parts of themselves (crossover) and thus create new chromosomes that are not inherited solely from either parent.
When a male and a female gamete merge during fertilization, a new diploid organism is formed. Some animal and plant species are polyploid [Xn], having more than two sets of homologous chromosomes. Important crops such as tobacco or wheat are often polyploid, compared to their ancestral species. Wheat has a haploid number of seven chromosomes, still seen in some cultivars as well as the wild progenitors. The more common types of pasta and bread wheat are polyploid, having 28 (tetraploid) and 42 (hexaploid) chromosomes, compared to the 14 (diploid) chromosomes in wild wheat. In prokaryotes Prokaryote species generally have one copy of each major chromosome, but most cells can easily survive with multiple copies. For example, Buchnera, a symbiont of aphids, has multiple copies of its chromosome, ranging from 10 to 400 copies per cell. However, in some large bacteria, such as Epulopiscium fishelsoni, up to 100,000 copies of the chromosome can be present. Plasmids and plasmid-like small chromosomes are, as in eukaryotes, highly variable in copy number. The number of plasmids in the cell is almost entirely determined by the rate of division of the plasmid – fast division causes high copy number.
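The chromosome counts quoted in this section follow a simple rule: the total number of nuclear chromosomes is the haploid number of chromosome types multiplied by the ploidy. The short sketch below restates that arithmetic in Python, using the figures given in the text above; the function name and layout are invented for illustration.

```python
def total_chromosomes(haploid_number: int, ploidy: int) -> int:
    """Total nuclear chromosomes = number of chromosome types x number of sets (ploidy)."""
    return haploid_number * ploidy

# Figures quoted in the text: humans are diploid with 23 chromosome types;
# wild wheat is diploid, pasta wheat tetraploid and bread wheat hexaploid,
# all with a haploid number of 7.
examples = {
    "human (diploid)": (23, 2),
    "wild wheat (diploid)": (7, 2),
    "pasta wheat (tetraploid)": (7, 4),
    "bread wheat (hexaploid)": (7, 6),
}

for name, (n, ploidy) in examples.items():
    print(f"{name}: {total_chromosomes(n, ploidy)} chromosomes")
# Prints 46, 14, 28 and 42 respectively, matching the counts in the text.
```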
Biology and health sciences
Organelles and other cell parts
null
6445
https://en.wikipedia.org/wiki/Carcinogen
Carcinogen
A carcinogen is any agent that promotes the development of cancer. Carcinogens can include synthetic chemicals, naturally occurring substances, physical agents such as ionizing and non-ionizing radiation, and biologic agents such as viruses and bacteria. Most carcinogens act by creating mutations in DNA that disrupt a cell's normal processes for regulating growth, leading to uncontrolled cellular proliferation. This occurs when the cell's DNA repair processes fail to identify DNA damage, allowing the defect to be passed down to daughter cells. The damage accumulates over time. This is typically a multi-step process during which the regulatory mechanisms within the cell are gradually dismantled, allowing for unchecked cellular division. The specific mechanism of carcinogenic activity is unique to each agent and cell type. Carcinogens can be broadly categorized, however, as activation-dependent or activation-independent, a distinction that relates to the agent's ability to engage directly with DNA. Activation-dependent agents are relatively inert in their original form, but are bioactivated in the body into metabolites or intermediaries capable of damaging human DNA. These are also known as "indirect-acting" carcinogens. Examples of activation-dependent carcinogens include polycyclic aromatic hydrocarbons (PAHs), heterocyclic aromatic amines, and mycotoxins. Activation-independent carcinogens, or "direct-acting" carcinogens, are those that are capable of directly damaging DNA without any modification to their molecular structure. These agents typically include electrophilic groups that react readily with the net negative charge of DNA molecules. Examples of activation-independent carcinogens include ultraviolet light, ionizing radiation and alkylating agents. The time from exposure to a carcinogen to the development of cancer is known as the latency period. For most solid tumors in humans the latency period is between 10 and 40 years depending on cancer type. For blood cancers, the latency period may be as short as two years. Due to prolonged latency periods, the identification of carcinogens can be challenging. A number of organizations review and evaluate the cumulative scientific evidence regarding the potential carcinogenicity of specific substances. Foremost among these is the International Agency for Research on Cancer (IARC). IARC routinely publishes monographs in which specific substances are evaluated for their potential carcinogenicity to humans and subsequently categorized into one of four groupings: Group 1: Carcinogenic to humans, Group 2A: Probably carcinogenic to humans, Group 2B: Possibly carcinogenic to humans and Group 3: Not classifiable as to its carcinogenicity to humans. Other organizations that evaluate the carcinogenicity of substances include the National Toxicology Program of the US Public Health Service, NIOSH, the American Conference of Governmental Industrial Hygienists and others. There are numerous sources of exposure to carcinogens, including ultraviolet radiation from the sun, radon gas emitted in residential basements, environmental contaminants such as chlordecone, cigarette smoke and ingestion of some types of foods such as alcohol and processed meats. Occupational exposures represent a major source of carcinogens, with an estimated 666,000 annual fatalities worldwide attributable to work-related cancers. According to NIOSH, 3–6% of cancers worldwide are due to occupational exposures.
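The IARC groupings listed above amount to a small controlled vocabulary, and the activation-dependent versus activation-independent distinction is a simple property of each agent. A minimal sketch of how such records might be represented follows; the data structure is invented for illustration, and the two example entries (drawn from substances named in this article) should not be read as an authoritative classification, for which the current IARC monographs are the reference.

```python
from dataclasses import dataclass

# IARC groupings as described in the text above.
IARC_GROUPS = {
    "1": "Carcinogenic to humans",
    "2A": "Probably carcinogenic to humans",
    "2B": "Possibly carcinogenic to humans",
    "3": "Not classifiable as to its carcinogenicity to humans",
}

@dataclass
class CarcinogenRecord:
    name: str
    iarc_group: str             # key into IARC_GROUPS
    activation_dependent: bool  # True = requires metabolic activation ("indirect-acting")

    def describe(self) -> str:
        mode = "activation-dependent" if self.activation_dependent else "activation-independent"
        return f"{self.name}: IARC Group {self.iarc_group} ({IARC_GROUPS[self.iarc_group]}), {mode}"

# Illustrative entries only.
examples = [
    CarcinogenRecord("benzo[a]pyrene (a PAH)", "1", activation_dependent=True),
    CarcinogenRecord("ionizing radiation", "1", activation_dependent=False),
]

for record in examples:
    print(record.describe())
```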
Well-established pairings of occupational carcinogen and cancer include vinyl chloride and hemangiosarcoma of the liver, benzene and leukemia, aniline dyes and bladder cancer, asbestos and mesothelioma, and polycyclic aromatic hydrocarbons and scrotal cancer among chimney sweeps, to name a few. Radiation Ionizing Radiation CERCLA identifies all radionuclides as carcinogens, although the nature of the emitted radiation (alpha, beta, gamma, or neutron), its consequent capacity to cause ionization in tissues, and the magnitude of radiation exposure determine the potential hazard. Carcinogenicity of radiation depends on the type of radiation, type of exposure, and penetration. For example, alpha radiation has low penetration and is not a hazard outside the body, but alpha emitters are carcinogenic when inhaled or ingested. For example, Thorotrast, an (incidentally radioactive) suspension previously used as a contrast medium in x-ray diagnostics, is a potent human carcinogen because of its retention within various organs and persistent emission of alpha particles. Low-level ionizing radiation may induce irreparable DNA damage (leading to replicational and transcriptional errors needed for neoplasia, or it may trigger viral interactions), leading to premature aging and cancer. Non-ionizing radiation Not all types of electromagnetic radiation are carcinogenic. Low-energy waves on the electromagnetic spectrum, including radio waves, microwaves, infrared radiation and visible light, are thought not to be, because they have insufficient energy to break chemical bonds. Evidence for carcinogenic effects of non-ionizing radiation is generally inconclusive, though there are some documented cases of radar technicians with prolonged high exposure experiencing significantly higher cancer incidence. Higher-energy radiation, including ultraviolet radiation (present in sunlight), is generally carcinogenic if received in sufficient doses. For most people, ultraviolet radiation from sunlight is the most common cause of skin cancer. In Australia, where people with pale skin are often exposed to strong sunlight, melanoma is the most common cancer diagnosed in people aged 15–44 years. Substances or foods irradiated with electrons or electromagnetic radiation (such as microwave, X-ray or gamma) do not thereby become carcinogenic. In contrast, non-electromagnetic neutron radiation produced inside nuclear reactors can produce secondary radiation through nuclear transmutation. Common carcinogens associated with food Alcohol Alcohol is a cause of cancers of the head and neck, esophagus, liver, colon and rectum, and breast. It has a synergistic effect with tobacco smoke in the development of head and neck cancers. In the United States, approximately 6% of cancers and 4% of cancer deaths are attributable to alcohol use. Processed meats Chemicals used in processed and cured meat such as some brands of bacon, sausages and ham may produce carcinogens. For example, nitrites used as food preservatives in cured meat such as bacon have been noted as carcinogenic, with demographic links, but not proven causation, to colon cancer. Meats cooked at high temperatures Cooking food at high temperatures, for example grilling or barbecuing meats, may also lead to the formation of minute quantities of many potent carcinogens that are comparable to those found in cigarette smoke (e.g., benzo[a]pyrene). Charring of food resembles coking and tobacco pyrolysis, and produces similar carcinogens.
There are several carcinogenic pyrolysis products, such as polynuclear aromatic hydrocarbons, which are converted by human enzymes into epoxides, which attach permanently to DNA. Pre-cooking meats in a microwave oven for 2–3 minutes before grilling shortens the time on the hot pan and removes heterocyclic amine (HCA) precursors, which can help minimize the formation of these carcinogens. Acrylamide in foods Frying, grilling or broiling food at high temperatures, especially starchy foods, until a toasted crust is formed generates acrylamides. This discovery in 2002 led to international health concerns. Subsequent research has however found that it is not likely that the acrylamides in burnt or well-cooked food cause cancer in humans; Cancer Research UK categorizes the idea that burnt food causes cancer as a "myth". Biologic Agents Several biologic agents are known carcinogens. Aflatoxin B1, a toxin produced by the fungus Aspergillus flavus, which is a common contaminant of stored grains and nuts, is a known cause of hepatocellular cancer. The bacterium H. pylori is known to cause stomach cancer and MALT lymphoma. Hepatitis B and C are associated with the development of hepatocellular cancer. HPV is the primary cause of cervical cancer. Cigarette smoke Tobacco smoke contains at least 70 known carcinogens and is implicated in the development of numerous types of cancers including cancers of the lung, larynx, esophagus, stomach, kidney, pancreas, liver, bladder, cervix, colon, rectum and blood. Potent carcinogens found in cigarette smoke include polycyclic aromatic hydrocarbons (PAHs, such as benzo(a)pyrene), benzene, and nitrosamines. Occupational carcinogens Given that populations of workers are more likely to have consistent, often high-level exposures to chemicals rarely encountered in normal life, much of the evidence for the carcinogenicity of specific agents is derived from studies of workers. Selected carcinogens Others Gasoline (contains aromatics); lead and its compounds; alkylating antineoplastic agents (e.g., mechlorethamine); styrene; other alkylating agents (e.g., dimethyl sulfate); ultraviolet radiation from the sun and UV lamps; other ionizing radiation (X-rays, gamma rays, etc.); lightly refined or unrefined mineral oils. Mechanisms of carcinogenicity Carcinogens can be classified as genotoxic or nongenotoxic. Genotoxins cause irreversible genetic damage or mutations by binding to DNA. Genotoxins include chemical agents like N-nitroso-N-methylurea (NMU) or non-chemical agents such as ultraviolet light and ionizing radiation. Certain viruses can also act as carcinogens by interacting with DNA. Nongenotoxins do not directly affect DNA but act in other ways to promote growth. These include hormones and some organic compounds. Classification International Agency for Research on Cancer The International Agency for Research on Cancer (IARC) is an intergovernmental agency established in 1965, which forms part of the World Health Organization of the United Nations. It is based in Lyon, France. Since 1971 it has published a series of Monographs on the Evaluation of Carcinogenic Risks to Humans that have been highly influential in the classification of possible carcinogens. Group 1: the agent (mixture) is carcinogenic to humans. The exposure circumstance entails exposures that are carcinogenic to humans. Group 2A: the agent (mixture) is probably carcinogenic to humans. The exposure circumstance entails exposures that are probably carcinogenic to humans.
Group 2B: the agent (mixture) is possibly carcinogenic to humans. The exposure circumstance entails exposures that are possibly carcinogenic to humans. Group 3: the agent (mixture or exposure circumstance) is not classifiable as to its carcinogenicity to humans. Group 4: the agent (mixture) is probably not carcinogenic to humans. Globally Harmonized System The Globally Harmonized System of Classification and Labelling of Chemicals (GHS) is a United Nations initiative to attempt to harmonize the different systems of assessing chemical risk which currently exist (as of March 2009) around the world. It classifies carcinogens into two categories, of which the first may be divided again into subcategories if so desired by the competent regulatory authority: Category 1: known or presumed to have carcinogenic potential for humans; Category 1A: the assessment is based primarily on human evidence; Category 1B: the assessment is based primarily on animal evidence; Category 2: suspected human carcinogens. U.S. National Toxicology Program The National Toxicology Program of the U.S. Department of Health and Human Services is mandated to produce a biennial Report on Carcinogens. As of August 2024, the latest edition was the 15th report (2021). It classifies carcinogens into two groups: known to be a human carcinogen, and reasonably anticipated to be a human carcinogen. American Conference of Governmental Industrial Hygienists The American Conference of Governmental Industrial Hygienists (ACGIH) is a private organization best known for its publication of threshold limit values (TLVs) for occupational exposure and monographs on workplace chemical hazards. It assesses carcinogenicity as part of a wider assessment of the occupational hazards of chemicals. Group A1: Confirmed human carcinogen; Group A2: Suspected human carcinogen; Group A3: Confirmed animal carcinogen with unknown relevance to humans; Group A4: Not classifiable as a human carcinogen; Group A5: Not suspected as a human carcinogen. European Union The European Union classification of carcinogens is contained in Regulation (EC) No 1272/2008. It consists of three categories: Category 1A: Carcinogenic; Category 1B: May cause cancer; Category 2: Suspected of causing cancer. The former European Union classification of carcinogens was contained in the Dangerous Substances Directive and the Dangerous Preparations Directive. It also consisted of three categories: Category 1: Substances known to be carcinogenic to humans. Category 2: Substances which should be regarded as if they are carcinogenic to humans. Category 3: Substances which cause concern for humans, owing to possible carcinogenic effects but in respect of which the available information is not adequate for making a satisfactory assessment. This assessment scheme is being phased out in favor of the GHS scheme (see above), to which it is very close in category definitions. Safe Work Australia Under a previous name, the NOHSC, in 1999 Safe Work Australia published the Approved Criteria for Classifying Hazardous Substances [NOHSC:1008(1999)]. Section 4.76 of this document outlines the criteria for classifying carcinogens as approved by the Australian government. This classification consists of three categories: Category 1: Substances known to be carcinogenic to humans. Category 2: Substances that should be regarded as if they were carcinogenic to humans.
Category 3: Substances that have possible carcinogenic effects in humans but about which there is insufficient information to make an assessment. Major carcinogens implicated in the four most common cancers worldwide In this section, the carcinogens implicated as the main causative agents of the four most common cancers worldwide are briefly described. These four cancers are lung, breast, colon, and stomach cancers. Together they account for about 41% of worldwide cancer incidence and 42% of cancer deaths (for more detailed information on the carcinogens implicated in these and other cancers, see references). Lung cancer Lung cancer (pulmonary carcinoma) is the most common cancer in the world, both in terms of cases (1.6 million cases; 12.7% of total cancer cases) and deaths (1.4 million deaths; 18.2% of total cancer deaths). Lung cancer is largely caused by tobacco smoke. Risk estimates for lung cancer in the United States indicate that tobacco smoke is responsible for 90% of lung cancers. Other factors are implicated in lung cancer, and these factors can interact synergistically with smoking so that total attributable risk adds up to more than 100%. These factors include occupational exposure to carcinogens (about 9–15%), radon (10%) and outdoor air pollution (1–2%). Tobacco smoke is a complex mixture of more than 5,300 identified chemicals. The most important carcinogens in tobacco smoke have been determined by a "Margin of Exposure" approach. Using this approach, the most important tumorigenic compounds in tobacco smoke were, in order of importance, acrolein, formaldehyde, acrylonitrile, 1,3-butadiene, cadmium, acetaldehyde, ethylene oxide, and isoprene. Most of these compounds cause DNA damage by forming DNA adducts or by inducing other alterations in DNA. DNA damage is subject to error-prone DNA repair or can cause replication errors. Such errors in repair or replication can result in mutations in tumor suppressor genes or oncogenes, leading to cancer. Breast cancer Breast cancer is the second most common cancer (1.4 million cases, 10.9%), but ranks fifth as a cause of death (458,000 deaths, 6.1%). Increased risk of breast cancer is associated with persistently elevated blood levels of estrogen. Estrogen appears to contribute to breast carcinogenesis by three processes: (1) the metabolism of estrogen to genotoxic, mutagenic carcinogens, (2) the stimulation of tissue growth, and (3) the repression of phase II detoxification enzymes that metabolize ROS, leading to increased oxidative DNA damage. The major estrogen in humans, estradiol, can be metabolized to quinone derivatives that form adducts with DNA. These derivatives can cause depurination, the removal of bases from the phosphodiester backbone of DNA, followed by inaccurate repair or replication of the apurinic site leading to mutation and eventually cancer. This genotoxic mechanism may interact in synergy with estrogen receptor-mediated, persistent cell proliferation to ultimately cause breast cancer. Genetic background, dietary practices and environmental factors also likely contribute to the incidence of DNA damage and breast cancer risk. Consumption of alcohol has also been linked to an increased risk for breast cancer. Colon cancer Colorectal cancer is the third most common cancer [1.2 million cases (9.4%), 608,000 deaths (8.0%)]. Tobacco smoke may be responsible for up to 20% of colorectal cancers in the United States. In addition, substantial evidence implicates bile acids as an important factor in colon cancer.
Twelve studies (summarized in Bernstein et al.) indicate that the bile acids deoxycholic acid (DCA) or lithocholic acid (LCA) induce production of DNA-damaging reactive oxygen species or reactive nitrogen species in human or animal colon cells. Furthermore, 14 studies showed that DCA and LCA induce DNA damage in colon cells. Also, 27 studies reported that bile acids cause programmed cell death (apoptosis). Increased apoptosis can result in selective survival of cells that are resistant to induction of apoptosis. Colon cells with reduced ability to undergo apoptosis in response to DNA damage would tend to accumulate mutations, and such cells may give rise to colon cancer. Epidemiologic studies have found that fecal bile acid concentrations are increased in populations with a high incidence of colon cancer. Dietary increases in total fat or saturated fat result in elevated DCA and LCA in feces and elevated exposure of the colon epithelium to these bile acids. When the bile acid DCA was added to the standard diet of wild-type mice, invasive colon cancer was induced in 56% of the mice after 8 to 10 months. Overall, the available evidence indicates that DCA and LCA are centrally important DNA-damaging carcinogens in colon cancer. Stomach cancer Stomach cancer is the fourth most common cancer [990,000 cases (7.8%), 738,000 deaths (9.7%)]. Helicobacter pylori infection is the main causative factor in stomach cancer. Chronic gastritis (inflammation) caused by H. pylori is often long-standing if not treated. Infection of gastric epithelial cells with H. pylori results in increased production of reactive oxygen species (ROS). ROS cause oxidative DNA damage including the major base alteration 8-hydroxydeoxyguanosine (8-OHdG). 8-OHdG resulting from ROS is increased in chronic gastritis. The altered DNA base can cause errors during DNA replication that have mutagenic and carcinogenic potential. Thus H. pylori-induced ROS appear to be the major carcinogens in stomach cancer because they cause oxidative DNA damage leading to carcinogenic mutations. Diet is also thought to be a contributing factor in stomach cancer: in Japan, where very salty pickled foods are popular, the incidence of stomach cancer is high. Preserved meat such as bacon, sausages, and ham increases the risk, while a diet rich in fresh fruit, vegetables, peas, beans, grains, nuts, seeds, herbs, and spices will reduce the risk. The risk also increases with age.
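The "Margin of Exposure" approach mentioned in the lung-cancer discussion above ranks compounds by dividing a toxicological reference point (such as a benchmark dose from animal or epidemiological studies) by the estimated human exposure; the smaller the margin, the higher the priority given to the compound. The sketch below illustrates only that ranking arithmetic; the compound names, reference points and exposure values are invented and are not the figures used in the tobacco-smoke analysis cited in the text.

```python
def margin_of_exposure(reference_point: float, exposure: float) -> float:
    """MOE = toxicological reference point / estimated human exposure
    (both in the same units, e.g. mg per kg body weight per day).
    A smaller MOE means exposure is closer to doses with observed effects."""
    return reference_point / exposure

# Hypothetical compounds: (reference point, estimated exposure)
compounds = {
    "compound A": (0.5, 0.01),
    "compound B": (2.0, 0.0004),
    "compound C": (1.0, 0.1),
}

# Rank with the smallest margin (highest concern) first.
for name in sorted(compounds, key=lambda c: margin_of_exposure(*compounds[c])):
    print(f"{name}: MOE = {margin_of_exposure(*compounds[name]):,.0f}")
```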
Biology and health sciences
Cancer
null
6446
https://en.wikipedia.org/wiki/Camouflage
Camouflage
Camouflage is the use of any combination of materials, coloration, or illumination for concealment, either by making animals or objects hard to see, or by disguising them as something else. Examples include the leopard's spotted coat, the battledress of a modern soldier, and the leaf-mimic katydid's wings. A third approach, motion dazzle, confuses the observer with a conspicuous pattern, making the object visible but momentarily harder to locate. The majority of camouflage methods aim for crypsis, often through a general resemblance to the background, high contrast disruptive coloration, eliminating shadow, and countershading. In the open ocean, where there is no background, the principal methods of camouflage are transparency, silvering, and countershading, while the ability to produce light is, among other things, used for counter-illumination on the undersides of cephalopods such as squid. Some animals, such as chameleons and octopuses, are capable of actively changing their skin pattern and colours, whether for camouflage or for signalling. It is possible that some plants use camouflage to evade being eaten by herbivores. Military camouflage was spurred by the increasing range and accuracy of firearms in the 19th century. In particular, the replacement of the inaccurate musket with the rifle made personal concealment in battle a survival skill. In the 20th century, military camouflage developed rapidly, especially during the First World War. On land, artists such as André Mare designed camouflage schemes and observation posts disguised as trees. At sea, merchant ships and troop carriers were painted in dazzle patterns that were highly visible, but designed to confuse enemy submarines as to the target's speed, range, and heading. During and after World War II, a variety of camouflage schemes were used for aircraft and for ground vehicles in different theatres of war. The use of radar since the mid-20th century has largely made camouflage for fixed-wing military aircraft obsolete. Non-military use of camouflage includes making cell telephone towers less obtrusive and helping hunters to approach wary game animals. Patterns derived from military camouflage are frequently used in fashion clothing, exploiting their strong designs and sometimes their symbolism. Camouflage themes recur in modern art, and both figuratively and literally in science fiction and works of literature. History In ancient Greece, Aristotle (384–322 BC) commented in his Historia animalium on the colour-changing abilities, both for camouflage and for signalling, of cephalopods including the octopus. Camouflage has been a topic of interest and research in zoology for well over a century. According to Charles Darwin's 1859 theory of natural selection, features such as camouflage evolved by providing individual animals with a reproductive advantage, enabling them to leave more offspring, on average, than other members of the same species; Darwin set out this argument in his Origin of Species. The English zoologist Edward Bagnall Poulton studied animal coloration, especially camouflage. In his 1890 book The Colours of Animals, he classified different types such as "special protective resemblance" (where an animal looks like another object), or "general aggressive resemblance" (where a predator blends in with the background, enabling it to approach prey). His experiments showed that swallow-tailed moth pupae were camouflaged to match the backgrounds on which they were reared as larvae.
Poulton's "general protective resemblance" was at that time considered to be the main method of camouflage, as when Frank Evers Beddard wrote in 1892 that "tree-frequenting animals are often green in colour. Among vertebrates numerous species of parrots, iguanas, tree-frogs, and the green tree-snake are examples". Beddard did however briefly mention other methods, including the "alluring coloration" of the flower mantis and the possibility of a different mechanism in the orange tip butterfly. He wrote that "the scattered green spots upon the under surface of the wings might have been intended for a rough sketch of the small flowerets of the plant [an umbellifer], so close is their mutual resemblance." He also explained the coloration of sea fish such as the mackerel: "Among pelagic fish it is common to find the upper surface dark-coloured and the lower surface white, so that the animal is inconspicuous when seen either from above or below." The artist Abbott Handerson Thayer formulated what is sometimes called Thayer's Law, the principle of countershading. However, he overstated the case in the 1909 book Concealing-Coloration in the Animal Kingdom, arguing that "All patterns and colors whatsoever of all animals that ever preyed or are preyed on are under certain normal circumstances obliterative" (that is, cryptic camouflage), and that "Not one 'mimicry' mark, not one 'warning color'... nor any 'sexually selected' color, exists anywhere in the world where there is not every reason to believe it the very best conceivable device for the concealment of its wearer", and using paintings such as Peacock in the Woods (1907) to reinforce his argument. Thayer was roundly mocked for these views by critics including Teddy Roosevelt. The English zoologist Hugh Cott's 1940 book Adaptive Coloration in Animals corrected Thayer's errors, sometimes sharply: "Thus we find Thayer straining the theory to a fantastic extreme in an endeavour to make it cover almost every type of coloration in the animal kingdom." Cott built on Thayer's discoveries, developing a comprehensive view of camouflage based on "maximum disruptive contrast", countershading and hundreds of examples. The book explained how disruptive camouflage worked, using streaks of boldly contrasting colour, paradoxically making objects less visible by breaking up their outlines. While Cott was more systematic and balanced in his view than Thayer, and did include some experimental evidence on the effectiveness of camouflage, his 500-page textbook was, like Thayer's, mainly a natural history narrative which illustrated theories with examples. Experimental evidence that camouflage helps prey avoid being detected by predators was first provided in 2016, when ground-nesting birds (plovers and coursers) were shown to survive according to how well their egg contrast matched the local environment. Evolution As there is a lack of evidence for camouflage in the fossil record, studying the evolution of camouflage strategies is very difficult. Furthermore, camouflage traits must be both adaptable (provide a fitness gain in a given environment) and heritable (in other words, the trait must undergo positive selection). Thus, studying the evolution of camouflage strategies requires an understanding of the genetic components and various ecological pressures that drive crypsis. 
Fossil history Camouflage is a soft-tissue feature that is rarely preserved in the fossil record, but rare fossilised skin samples from the Cretaceous period show that some marine reptiles were countershaded. The skins, pigmented with dark-coloured eumelanin, reveal that both leatherback turtles and mosasaurs had dark backs and light bellies. There is fossil evidence of camouflaged insects going back over 100 million years, for example lacewing larvae that stick debris all over their bodies much as their modern descendants do, hiding them from their prey. Dinosaurs appear to have been camouflaged, as a 120-million-year-old fossil of a Psittacosaurus has been preserved with countershading. Genetics Camouflage does not have a single genetic origin. However, studying the genetic components of camouflage in specific organisms illuminates the various ways that crypsis can evolve among lineages. Many cephalopods have the ability to actively camouflage themselves, controlling crypsis through neural activity. For example, the genome of the common cuttlefish includes 16 copies of the reflectin gene, which grants the organism remarkable control over coloration and iridescence. The reflectin gene is thought to have originated through transposition from symbiotic Aliivibrio fischeri bacteria, which provide bioluminescence to their hosts. While not all cephalopods use active camouflage, ancient cephalopods may have inherited the gene horizontally from symbiotic A. fischeri, with divergence occurring through subsequent gene duplication (such as in the case of Sepia officinalis) or gene loss (as with cephalopods with no active camouflage capabilities).[3] This is unique as an instance of camouflage arising through horizontal gene transfer from an endosymbiont. However, other methods of horizontal gene transfer are common in the evolution of camouflage strategies in other lineages. Peppered moths and walking stick insects both have camouflage-related genes that stem from transposition events. The Agouti genes are orthologous genes involved in camouflage across many lineages. They produce yellow and red coloration (phaeomelanin), and work in competition with other genes that produce black and brown colours (eumelanin). In eastern deer mice, over a period of about 8000 years the single agouti gene developed 9 mutations that each made expression of yellow fur stronger under natural selection, and largely eliminated eumelanin-based black fur coloration. On the other hand, all black domesticated cats have deletions of the agouti gene that prevent its expression, meaning no yellow or red colour is produced. The evolution, history and widespread scope of the agouti gene show that different organisms often rely on orthologous or even identical genes to develop a variety of camouflage strategies. Ecology While camouflage can increase an organism's fitness, it has genetic and energetic costs. There is a trade-off between detectability and mobility. Species camouflaged to fit a specific microhabitat are less likely to be detected when in that microhabitat, but must spend energy to reach, and sometimes to remain in, such areas. Outside the microhabitat, the organism has a higher chance of detection. Generalized camouflage allows species to avoid predation over a wide range of habitat backgrounds, but is less effective against any particular background. The development of generalized or specialized camouflage strategies is highly dependent on the biotic and abiotic composition of the surrounding environment.
There are many examples of the tradeoffs between specific and general cryptic patterning. Phestilla melanocrachia, a species of nudibranch that feeds on stony coral, utilizes specific cryptic patterning in reef ecosystems. The nudibranch siphons pigments from the consumed coral into its epidermis, adopting the same shade as its prey. This allows the nudibranch to change colour (mostly between black and orange) depending on the coral system that it inhabits. However, P. melanocrachia can only feed and lay eggs on the branches of its host coral, Platygyra carnosa, which limits the geographical range and the efficacy of this nutritional crypsis. Furthermore, the nudibranch colour change is not immediate, and switching between coral hosts when in search for new food or shelter can be costly. The costs associated with distractive or disruptive crypsis are more complex than the costs associated with background matching. Disruptive patterns distort the body outline, making it harder to precisely identify and locate. However, disruptive patterns can also result in higher predation: patterns that specifically involve visible symmetry (such as in some butterflies) reduce survivability and increase predation. Some researchers argue that because wing-shape and color pattern are genetically linked, it is genetically costly to develop asymmetric wing colorations that would enhance the efficacy of disruptive cryptic patterning. Symmetry does not carry a high survival cost for butterflies and moths that their predators view from above on a homogeneous background, such as the bark of a tree. On the other hand, natural selection drives species with variable backgrounds and habitats to move symmetrical patterns away from the centre of the wing and body, disrupting their predators' symmetry recognition. Principles Camouflage can be achieved by different methods, described below. Most of the methods help to hide against a background, but mimesis and motion dazzle protect without hiding. Methods may be applied on their own or in combination. Many mechanisms are visual, but some research has explored the use of techniques against olfactory (scent) and acoustic (sound) detection. Methods may also apply to military equipment. Background matching Some animals' colours and patterns match a particular natural background. This is an important component of camouflage in all environments. For instance, tree-dwelling parakeets are mainly green; woodcocks of the forest floor are brown and speckled; reedbed bitterns are streaked brown and buff; in each case the animal's coloration matches the hues of its habitat. Similarly, desert animals are almost all desert coloured in tones of sand, buff, ochre, and brownish grey, whether they are mammals like the gerbil or fennec fox, birds such as the desert lark or sandgrouse, or reptiles like the skink or horned viper. Military uniforms, too, generally resemble their backgrounds; for example khaki uniforms are a muddy or dusty colour, originally chosen for service in South Asia. Many moths show industrial melanism, including the peppered moth, which has coloration that blends in with tree bark. The coloration of these insects evolved between 1860 and 1940 to match the changing colour of the tree trunks on which they rest, from pale and mottled to almost black in polluted areas. This is taken by zoologists as evidence that camouflage is influenced by natural selection, as well as demonstrating that it changes where necessary to resemble the local background.
Disruptive coloration Disruptive patterns use strongly contrasting, non-repeating markings such as spots or stripes to break up the outlines of an animal or military vehicle, or to conceal telltale features, especially by masking the eyes, as in the common frog. Disruptive patterns may use more than one method to defeat visual systems such as edge detection. Predators like the leopard use disruptive camouflage to help them approach prey, while potential prey use it to avoid detection by predators. Disruptive patterning is common in military usage, both for uniforms and for military vehicles. Disruptive patterning, however, does not always achieve crypsis on its own, as an animal or a military target may be given away by factors like shape, shine, and shadow. The presence of bold skin markings does not in itself prove that an animal relies on camouflage, as that depends on its behaviour. For example, although giraffes have a high contrast pattern that could be disruptive coloration, the adults are very conspicuous when in the open. Some authors have argued that adult giraffes are cryptic, since when standing among trees and bushes they are hard to see at even a few metres' distance. However, adult giraffes move about to gain the best view of an approaching predator, relying on their size and ability to defend themselves, even from lions, rather than on camouflage. A different explanation is implied by young giraffes being far more vulnerable to predation than adults. More than half of all giraffe calves die within a year, and giraffe mothers hide their newly born calves, which spend much of the time lying down in cover while their mothers are away feeding. The mothers return once a day to feed their calves with milk. Since the presence of a mother nearby does not affect survival, it is argued that these juvenile giraffes must be very well camouflaged; this is supported by coat markings being strongly inherited. The possibility of camouflage in plants was little studied until the late 20th century. Leaf variegation with white spots may serve as camouflage in forest understory plants, where there is a dappled background; leaf mottling is correlated with closed habitats. Disruptive camouflage would have a clear evolutionary advantage in plants: they would tend to escape from being eaten by herbivores. Another possibility is that some plants have leaves differently coloured on upper and lower surfaces or on parts such as veins and stalks to make green-camouflaged insects conspicuous, and thus benefit the plants by favouring the removal of herbivores by carnivores. These hypotheses are testable. Eliminating shadow Some animals, such as the horned lizards of North America, have evolved elaborate measures to eliminate shadow. Their bodies are flattened, with the sides thinning to an edge; the animals habitually press their bodies to the ground; and their sides are fringed with white scales which effectively hide and disrupt any remaining areas of shadow there may be under the edge of the body. The theory that the body shape of the horned lizards which live in open desert is adapted to minimise shadow is supported by the one species which lacks fringe scales, the roundtail horned lizard, which lives in rocky areas and resembles a rock. When this species is threatened, it makes itself look as much like a rock as possible by curving its back, emphasizing its three-dimensional shape. 
Some species of butterflies, such as the speckled wood, Pararge aegeria, minimise their shadows when perched by closing the wings over their backs, aligning their bodies with the sun, and tilting to one side towards the sun, so that the shadow becomes a thin inconspicuous line rather than a broad patch. Similarly, some ground-nesting birds, including the European nightjar, select a resting position facing the sun. Eliminating shadow was identified as a principle of military camouflage during the Second World War. Distraction Many prey animals have conspicuous high-contrast markings which paradoxically attract the predator's gaze. These distractive markings may serve as camouflage by distracting the predator's attention from recognising the prey as a whole, for example by keeping the predator from identifying the prey's outline. Experimentally, search times for blue tits increased when artificial prey had distractive markings. Self-decoration Some animals actively seek to hide by decorating themselves with materials such as twigs, sand, or pieces of shell from their environment, to break up their outlines, to conceal the features of their bodies, and to match their backgrounds. For example, a caddisfly larva builds a decorated case and lives almost entirely inside it; a decorator crab covers its back with seaweed, sponges, and stones. The nymph of the predatory masked bug uses its hind legs and a 'tarsal fan' to decorate its body with sand or dust. There are two layers of bristles (trichomes) over the body. On these, the nymph spreads an inner layer of fine particles and an outer layer of coarser particles. The camouflage may conceal the bug from both predators and prey. Similar principles can be applied for military purposes, for instance when a sniper wears a ghillie suit designed to be further camouflaged by decoration with materials such as tufts of grass from the sniper's immediate environment. Such suits were used as early as 1916, the British army having adopted "coats of motley hue and stripes of paint" for snipers. Cott takes the example of the larva of the blotched emerald moth, which fixes a screen of fragments of leaves to its specially hooked bristles, to argue that military camouflage uses the same method, pointing out that the "device is ... essentially the same as one widely practised during the Great War for the concealment, not of caterpillars, but of caterpillar-tractors, [gun] battery positions, observation posts and so forth." Cryptic behaviour Movement catches the eye of prey animals on the lookout for predators, and of predators hunting for prey. Most methods of crypsis therefore also require suitable cryptic behaviour, such as lying down and keeping still to avoid being detected, or in the case of stalking predators such as the tiger, moving with extreme stealth, both slowly and quietly, watching its prey for any sign they are aware of its presence. As an example of the combination of behaviours and other methods of crypsis involved, young giraffes seek cover, lie down, and keep still, often for hours until their mothers return; their skin pattern blends with the pattern of the vegetation, while the chosen cover and lying position together hide the animals' shadows. The flat-tail horned lizard similarly relies on a combination of methods: it is adapted to lie flat in the open desert, relying on stillness, its cryptic coloration, and concealment of its shadow to avoid being noticed by predators. 
In the ocean, the leafy sea dragon sways mimetically, like the seaweeds amongst which it rests, as if rippled by wind or water currents. Swaying is seen also in some insects, like Macleay's spectre stick insect, Extatosoma tiaratum. The behaviour may be motion crypsis, preventing detection, or motion masquerade, promoting misclassification (as something other than prey), or a combination of the two. Motion camouflage Most forms of camouflage are ineffective when the camouflaged animal or object moves, because the motion is easily seen by the observing predator, prey or enemy. However, insects such as hoverflies and dragonflies use motion camouflage: the hoverflies to approach possible mates, and the dragonflies to approach rivals when defending territories. Motion camouflage is achieved by moving so as to stay on a straight line between the target and a fixed point in the landscape; the pursuer thus appears not to move, but only to loom larger in the target's field of vision. Some insects sway while moving to appear to be blown back and forth by the breeze. The same method can be used for military purposes, for example by missiles to minimise their risk of detection by an enemy. However, missile engineers, and animals such as bats, use the method mainly for its efficiency rather than camouflage. Changeable skin coloration Animals such as chameleons, frogs, flatfish such as the peacock flounder, squid, octopuses and even the isopod Idotea balthica actively change their skin patterns and colours using special chromatophore cells to resemble their current background, or, as in most chameleons, for signalling. However, Smith's dwarf chameleon does use active colour change for camouflage. Each chromatophore contains pigment of only one colour. In fish and frogs, colour change is mediated by a type of chromatophore known as a melanophore, which contains dark pigment. A melanophore is star-shaped; it contains many small pigmented organelles which can be dispersed throughout the cell, or aggregated near its centre. When the pigmented organelles are dispersed, the cell makes a patch of the animal's skin appear dark; when they are aggregated, most of the cell, and the animal's skin, appears light. In frogs, the change is controlled relatively slowly, mainly by hormones. In fish, the change is controlled by the brain, which sends signals directly to the chromatophores, as well as producing hormones. The skins of cephalopods such as the octopus contain complex units, each consisting of a chromatophore with surrounding muscle and nerve cells. The cephalopod chromatophore has all its pigment grains in a small elastic sac, which can be stretched or allowed to relax under the control of the brain to vary its opacity. By controlling chromatophores of different colours, cephalopods can rapidly change their skin patterns and colours. On a longer timescale, animals like the Arctic hare, Arctic fox, stoat, and rock ptarmigan have snow camouflage, changing their coat colour (by moulting and growing new fur or feathers) from brown or grey in the summer to white in the winter; the Arctic fox is the only species in the dog family to do so. However, Arctic hares which live in the far north of Canada, where summer is very short, remain white year-round. The principle of varying coloration either rapidly or with the changing seasons has military applications. Active camouflage could in theory make use of both dynamic colour change and counterillumination.
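The motion-camouflage pursuit described earlier in this section, in which the shadower stays on the straight line between the target and a fixed point in the landscape so that it seems stationary and merely looms larger, reduces to a simple geometric rule. The sketch below illustrates that rule only; the landmark position, target path, closing rate and number of steps are all invented for the example.

```python
# Minimal sketch of motion camouflage: at every step the pursuer places itself
# on the line joining a fixed landmark to the target's current position, a
# little further along that line each time.  Seen from the target, the pursuer
# therefore stays aligned with the landmark and only appears to grow in size.

landmark = (0.0, 0.0)   # fixed point in the landscape
fraction = 0.05         # pursuer's position as a fraction of the landmark-to-target line

for step in range(10):
    target = (10.0 - 0.5 * step, 8.0)      # target moving in a straight line
    fraction = min(fraction + 0.1, 1.0)    # close the gap; 1.0 means interception
    pursuer = (landmark[0] + fraction * (target[0] - landmark[0]),
               landmark[1] + fraction * (target[1] - landmark[1]))
    print(f"step {step}: pursuer at ({pursuer[0]:5.2f}, {pursuer[1]:5.2f}), fraction {fraction:.2f}")
```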
Simple methods such as changing uniforms and repainting vehicles for winter have been in use since World War II. In 2011, BAE Systems announced their Adaptiv infrared camouflage technology. It uses about 1,000 hexagonal panels to cover the sides of a tank. The Peltier plate panels are heated and cooled to match either the vehicle's surroundings (crypsis), or an object such as a car (mimesis), when viewed in infrared. Countershading Countershading uses graded colour to counteract the effect of self-shadowing, creating an illusion of flatness. Self-shadowing makes an animal appear darker below than on top, grading from light to dark; countershading 'paints in' tones which are darkest on top, lightest below, making the countershaded animal nearly invisible against a suitable background. Thayer observed that "Animals are painted by Nature, darkest on those parts which tend to be most lighted by the sky's light, and vice versa". Accordingly, the principle of countershading is sometimes called Thayer's Law. Countershading is widely used by terrestrial animals, such as gazelles and grasshoppers; marine animals, such as sharks and dolphins; and birds, such as snipe and dunlin. Countershading is less often used for military camouflage, despite Second World War experiments that showed its effectiveness. English zoologist Hugh Cott encouraged the use of methods including countershading, but despite his authority on the subject, failed to persuade the British authorities. Soldiers often wrongly viewed camouflage netting as a kind of invisibility cloak, and they had to be taught to look at camouflage practically, from an enemy observer's viewpoint. At the same time in Australia, zoologist William John Dakin advised soldiers to copy animals' methods, using their instincts for wartime camouflage. The term countershading has a second meaning unrelated to "Thayer's Law". It is that the upper and undersides of animals such as sharks, and of some military aircraft, are different colours to match the different backgrounds when seen from above or from below. Here the camouflage consists of two surfaces, each with the simple function of providing concealment against a specific background, such as a bright water surface or the sky. The body of a shark or the fuselage of an aircraft is not gradated from light to dark to appear flat when seen from the side. The camouflage methods used are the matching of background colour and pattern, and disruption of outlines. Counter-illumination Counter-illumination means producing light to match a background that is brighter than an animal's body or military vehicle; it is a form of active camouflage. It is notably used by some species of squid, such as the firefly squid and the midwater squid. The latter has light-producing organs (photophores) scattered all over its underside; these create a sparkling glow that prevents the animal from appearing as a dark shape when seen from below. Counterillumination camouflage is the likely function of the bioluminescence of many marine organisms, though light is also produced to attract or to detect prey and for signalling. Counterillumination has rarely been used for military purposes. "Diffused lighting camouflage" was trialled by Canada's National Research Council during the Second World War. It involved projecting light on to the sides of ships to match the faint glow of the night sky, requiring awkward external platforms to support the lamps. 
The Canadian concept was refined in the American Yehudi lights project, and trialled in aircraft including B-24 Liberators and naval Avengers. The planes were fitted with forward-pointing lamps automatically adjusted to match the brightness of the night sky. This enabled them to approach much closer to a target – within – before being seen. Counterillumination was made obsolete by radar, and neither diffused lighting camouflage nor Yehudi lights entered active service. Transparency Many marine animals that float near the surface are highly transparent, giving them almost perfect camouflage. However, transparency is difficult for bodies made of materials that have different refractive indices from seawater. Some marine animals such as jellyfish have gelatinous bodies, composed mainly of water; their thick mesogloea is acellular and highly transparent. This conveniently makes them buoyant, but it also makes them large for their muscle mass, so they cannot swim fast, making this form of camouflage a costly trade-off with mobility. Gelatinous planktonic animals are between 50 and 90 percent transparent. A transparency of 50 percent is enough to make an animal invisible to a predator such as cod at a depth of ; better transparency is required for invisibility in shallower water, where the light is brighter and predators can see better. For example, a cod can see prey that are 98 percent transparent in optimal lighting in shallow water. Therefore, sufficient transparency for camouflage is more easily achieved in deeper waters. Some tissues such as muscles can be made transparent, provided either they are very thin or organised as regular layers or fibrils that are small compared to the wavelength of visible light. A familiar example is the transparency of the lens of the vertebrate eye, which is made of the protein crystallin, and the vertebrate cornea which is made of the protein collagen. Other structures cannot be made transparent, notably the retinas or equivalent light-absorbing structures of eyes – they must absorb light to be able to function. The camera-type eye of vertebrates and cephalopods must be completely opaque. Finally, some structures are visible for a reason, such as to lure prey. For example, the nematocysts (stinging cells) of the transparent siphonophore Agalma okenii resemble small copepods. Examples of transparent marine animals include a wide variety of larvae, including radiata (coelenterates), siphonophores, salps (floating tunicates), gastropod molluscs, polychaete worms, many shrimplike crustaceans, and fish; whereas the adults of most of these are opaque and pigmented, resembling the seabed or shores where they live. Adult comb jellies and jellyfish obey the rule, often being mainly transparent. Cott suggests this follows the more general rule that animals resemble their background: in a transparent medium like seawater, that means being transparent. The small Amazon River fish Microphilypnus amazonicus and the shrimps it associates with, Pseudopalaemon gouldingi, are so transparent as to be "almost invisible"; further, these species appear to select whether to be transparent or more conventionally mottled (disruptively patterned) according to the local background in the environment. Silvering Where transparency cannot be achieved, it can be imitated effectively by silvering to make an animal's body highly reflective. At medium depths at sea, light comes from above, so a mirror oriented vertically makes animals such as fish invisible from the side. 
Most fish in the upper ocean such as sardine and herring are camouflaged by silvering. The marine hatchetfish is extremely flattened laterally, leaving the body just millimetres thick, and the body is so silvery as to resemble aluminium foil. The mirrors consist of microscopic structures similar to those used to provide structural coloration: stacks of between 5 and 10 crystals of guanine spaced about a quarter of a wavelength apart to interfere constructively and achieve nearly 100 per cent reflection. In the deep waters that the hatchetfish lives in, only blue light with a wavelength of 500 nanometres percolates down and needs to be reflected, so mirrors 125 nanometres apart provide good camouflage. In fish such as the herring which live in shallower water, the mirrors must reflect a mixture of wavelengths, and the fish accordingly has crystal stacks with a range of different spacings. A further complication for fish with bodies that are rounded in cross-section is that the mirrors would be ineffective if laid flat on the skin, as they would fail to reflect horizontally. The overall mirror effect is achieved with many small reflectors, all oriented vertically. Silvering is found in other marine animals as well as fish. The cephalopods, including squid, octopus and cuttlefish, have multilayer mirrors made of protein rather than guanine. Ultra-blackness Some deep sea fishes have very black skin, reflecting under 0.5% of ambient light. This can prevent detection by predators or prey fish which use bioluminescence for illumination. Oneirodes had a particularly black skin which reflected only 0.044% of 480 nm wavelength light. The ultra-blackness is achieved with a thin but continuous layer of particles in the dermis, melanosomes. These particles both absorb most of the light, and are sized and shaped so as to scatter rather than reflect most of the rest. Modelling suggests that this camouflage should reduce the distance at which such a fish can be seen by a factor of 6 compared to a fish with a nominal 2% reflectance. Species with this adaptation are widely dispersed in various orders of the phylogenetic tree of bony fishes (Actinopterygii), implying that natural selection has driven the convergent evolution of ultra-blackness camouflage independently many times. Mimesis In mimesis (also called masquerade), the camouflaged object looks like something else which is of no special interest to the observer. Mimesis is common in prey animals, for example when a peppered moth caterpillar mimics a twig, or a grasshopper mimics a dry leaf. It is also found in nest structures; some eusocial wasps, such as Leipomeles dorsata, build a nest envelope in patterns that mimic the leaves surrounding the nest. Mimesis is also employed by some predators and parasites to lure their prey. For example, a flower mantis mimics a particular kind of flower, such as an orchid. This tactic has occasionally been used in warfare, for example with heavily armed Q-ships disguised as merchant ships. The common cuckoo, a brood parasite, provides examples of mimesis both in the adult and in the egg. The female lays her eggs in nests of other, smaller species of bird, one per nest. The female mimics a sparrowhawk. The resemblance is sufficient to make small birds take action to avoid the apparent predator. The female cuckoo then has time to lay her egg in their nest without being seen to do so. The cuckoo's egg itself mimics the eggs of the host species, reducing its chance of being rejected.
Motion dazzle Most forms of camouflage are made ineffective by movement: a deer or grasshopper may be highly cryptic when motionless, but instantly seen when it moves. But one method, motion dazzle, requires rapidly moving bold patterns of contrasting stripes. Motion dazzle may degrade predators' ability to estimate the prey's speed and direction accurately, giving the prey an improved chance of escape. Motion dazzle distorts speed perception and is most effective at high speeds; stripes can also distort perception of size (and so, perceived range to the target). As of 2011, motion dazzle had been proposed for military vehicles, but never applied. Since motion dazzle patterns would make animals more difficult to locate accurately when moving, but easier to see when stationary, there would be an evolutionary trade-off between motion dazzle and crypsis. An animal that is commonly thought to be dazzle-patterned is the zebra. The bold stripes of the zebra have been claimed to be disruptive camouflage, background-blending and countershading. After many years in which the purpose of the coloration was disputed, an experimental study by Tim Caro suggested in 2012 that the pattern reduces the attractiveness of stationary models to biting flies such as horseflies and tsetse flies. However, a simulation study by Martin How and Johannes Zanker in 2014 suggests that when moving, the stripes may confuse observers, such as mammalian predators and biting insects, by two visual illusions: the wagon-wheel effect, where the perceived motion is inverted, and the barberpole illusion, where the perceived motion is in a wrong direction. Applications Military Before 1800 Ship camouflage was occasionally used in ancient times. Philostratus () wrote in his Imagines that Mediterranean pirate ships could be painted blue-gray for concealment. Vegetius () says that "Venetian blue" (sea green) was used in the Gallic Wars, when Julius Caesar sent his speculatoria navigia (reconnaissance boats) to gather intelligence along the coast of Britain; the ships were painted entirely in bluish-green wax, with sails, ropes and crew the same colour. There is little evidence of military use of camouflage on land before 1800, but two unusual ceramics show men in Peru's Mochica culture from before 500 AD, hunting birds with blowpipes which are fitted with a kind of shield near the mouth, perhaps to conceal the hunters' hands and faces. Another early source is a 15th-century French manuscript, The Hunting Book of Gaston Phebus, showing a horse pulling a cart which contains a hunter armed with a crossbow under a cover of branches, perhaps serving as a hide for shooting game. Jamaican Maroons are said to have used plant materials as camouflage in the First Maroon War (). 19th-century origins The development of military camouflage was driven by the increasing range and accuracy of infantry firearms in the 19th century. In particular the replacement of the inaccurate musket with weapons such as the Baker rifle made personal concealment in battle essential. Two Napoleonic War skirmishing units of the British Army, the 95th Rifle Regiment and the 60th Rifle Regiment, were the first to adopt camouflage in the form of a rifle green jacket, while the Line regiments continued to wear scarlet tunics. A contemporary study in 1800 by the English artist and soldier Charles Hamilton Smith provided evidence that grey uniforms were less visible than green ones at a range of 150 yards. 
In the American Civil War, rifle units such as the 1st United States Sharp Shooters (in the Federal army) similarly wore green jackets while other units wore more conspicuous colours. The first British Army unit to adopt khaki uniforms was the Corps of Guides at Peshawar, when Sir Harry Lumsden and his second in command, William Hodson introduced a "drab" uniform in 1848. Hodson wrote that it would be more appropriate for the hot climate, and help make his troops "invisible in a land of dust". Later they improvised by dyeing cloth locally. Other regiments in India soon adopted the khaki uniform, and by 1896 khaki drill uniform was used everywhere outside Europe; by the Second Boer War six years later it was used throughout the British Army. During the late 19th century camouflage was applied to British coastal fortifications. The fortifications around Plymouth, England were painted in the late 1880s in "irregular patches of red, brown, yellow and green." From 1891 onwards British coastal artillery was permitted to be painted in suitable colours "to harmonise with the surroundings" and by 1904 it was standard practice that artillery and mountings should be painted with "large irregular patches of different colours selected to suit local conditions." First World War In the First World War, the French army formed a camouflage corps, led by Lucien-Victor Guirand de Scévola, employing artists known as camoufleurs to create schemes such as tree observation posts and covers for guns. Other armies soon followed them. The term camouflage probably comes from camoufler, a Parisian slang term meaning to disguise, and may have been influenced by camouflet, a French term meaning smoke blown in someone's face. The English zoologist John Graham Kerr, artist Solomon J. Solomon and the American artist Abbott Thayer led attempts to introduce scientific principles of countershading and disruptive patterning into military camouflage, with limited success. In early 1916 the Royal Naval Air Service began to create dummy air fields to draw the attention of enemy planes to empty land. They created decoy homes and lined fake runways with flares, which were meant to help protect real towns from night raids. This strategy was not common practice and did not succeed at first, but in 1918 it caught the Germans off guard multiple times. Ship camouflage was introduced in the early 20th century as the range of naval guns increased, with ships painted grey all over. In April 1917, when German U-boats were sinking many British ships with torpedoes, the marine artist Norman Wilkinson devised dazzle camouflage, which paradoxically made ships more visible but harder to target. In Wilkinson's own words, dazzle was designed "not for low visibility, but in such a way as to break up her form and thus confuse a submarine officer as to the course on which she was heading". Second World War In the Second World War, the zoologist Hugh Cott, a protégé of Kerr, worked to persuade the British army to use more effective camouflage methods, including countershading, but, like Kerr and Thayer in the First World War, with limited success. For example, he painted two rail-mounted coastal guns, one in conventional style, one countershaded. In aerial photographs, the countershaded gun was essentially invisible. The power of aerial observation and attack led every warring nation to camouflage targets of all types. The Soviet Union's Red Army created the comprehensive doctrine of Maskirovka for military deception, including the use of camouflage. 
For example, during the Battle of Kursk, General Katukov, the commander of the Soviet 1st Tank Army, remarked that the enemy "did not suspect that our well-camouflaged tanks were waiting for him. As we later learned from prisoners, we had managed to move our tanks forward unnoticed". The tanks were concealed in previously prepared defensive emplacements, with only their turrets above ground level. In the air, Second World War fighters were often painted in ground colours above and sky colours below, attempting two different camouflage schemes for observers above and below. Bombers and night fighters were often black, while maritime reconnaissance planes were usually white, to avoid appearing as dark shapes against the sky. For ships, dazzle camouflage was mainly replaced with plain grey in the Second World War, though experimentation with colour schemes continued. As in the First World War, artists were pressed into service; for example, the surrealist painter Roland Penrose became a lecturer at the newly founded Camouflage Development and Training Centre at Farnham Castle, writing the practical Home Guard Manual of Camouflage. The film-maker Geoffrey Barkas ran the Middle East Command Camouflage Directorate during the 1941–1942 war in the Western Desert, including the successful deception of Operation Bertram. Hugh Cott was chief instructor; the artist camouflage officers, who called themselves camoufleurs, included Steven Sykes and Tony Ayrton. In Australia, artists were also prominent in the Sydney Camouflage Group, formed under the chairmanship of Professor William John Dakin, a zoologist from Sydney University. Max Dupain, Sydney Ure Smith, and William Dobell were among the members of the group, which worked at Bankstown Airport, RAAF Base Richmond and Garden Island Dockyard. In the United States, artists like John Vassos took a certificate course in military and industrial camouflage at the American School of Design with Baron Nicholas Cerkasoff, and went on to create camouflage for the Air Force. After 1945 Camouflage has been used to protect military equipment such as vehicles, guns, ships, aircraft and buildings as well as individual soldiers and their positions. Vehicle camouflage methods begin with paint, which offers at best only limited effectiveness. Other methods for stationary land vehicles include covering with improvised materials such as blankets and vegetation, and erecting nets, screens and soft covers which may suitably reflect, scatter or absorb near infrared and radar waves. Some military textiles and vehicle camouflage paints also reflect infrared to help provide concealment from night vision devices. After the Second World War, radar made camouflage generally less effective, though coastal boats are sometimes painted like land vehicles. Aircraft camouflage too came to be seen as less important because of radar, and aircraft of different air forces, such as the Royal Air Force's Lightning, were often uncamouflaged. Many camouflaged textile patterns have been developed to suit the need to match combat clothing to different kinds of terrain (such as woodland, snow, and desert). The design of a pattern effective in all terrains has proved elusive. The American Universal Camouflage Pattern of 2004 attempted to suit all environments, but was withdrawn after a few years of service. Terrain-specific patterns have sometimes been developed but are ineffective in other terrains. 
The problem of making a pattern that works at different ranges has been solved with multiscale designs, often with a pixellated appearance and designed digitally, that provide a fractal-like range of patch sizes so they appear disruptively coloured both at close range and at a distance. The first genuinely digital camouflage pattern was the Canadian Disruptive Pattern (CADPAT), issued to the army in 2002, soon followed by the American Marine pattern (MARPAT). A pixellated appearance is not essential for this effect, though it is simpler to design and to print. Hunting Hunters of game have long made use of camouflage in the form of materials such as animal skins, mud, foliage, and green or brown clothing to enable them to approach wary game animals. Field sports such as driven grouse shooting conceal hunters in hides (also called blinds or shooting butts). Modern hunting clothing makes use of fabrics that provide a disruptive camouflage pattern; for example, in 1986 the hunter Bill Jordan created cryptic clothing for hunters, printed with images of specific kinds of vegetation such as grass and branches. Civil structures Camouflage is occasionally used to make built structures less conspicuous: for example, in South Africa, towers carrying cell telephone antennae are sometimes camouflaged as tall trees with plastic branches, in response to "resistance from the community". Since this method is costly (a figure of three times the normal cost is mentioned), alternative forms of camouflage can include using neutral colours or familiar shapes such as cylinders and flagpoles. Conspicuousness can also be reduced by siting masts near, or on, other structures. Automotive manufacturers often use patterns to disguise upcoming products. This camouflage is designed to obfuscate the vehicle's visual lines, and is used along with padding, covers, and decals. The patterns' purpose is to prevent visual observation (and to a lesser degree photography), that would subsequently enable reproduction of the vehicle's form factors. Fashion, art and society Military camouflage patterns influenced fashion and art from the time of the First World War onwards. Gertrude Stein recalled the cubist artist Pablo Picasso's reaction in around 1915: In 1919, the attendants of a "dazzle ball", hosted by the Chelsea Arts Club, wore dazzle-patterned black and white clothing. The ball influenced fashion and art via postcards and magazine articles. The Illustrated London News announced: More recently, fashion designers have often used camouflage fabric for its striking designs, its "patterned disorder" and its symbolism. Camouflage clothing can be worn largely for its symbolic significance rather than for fashion, as when, during the late 1960s and early 1970s in the United States, anti-war protestors often ironically wore military clothing during demonstrations against the American involvement in the Vietnam War. Modern artists such as Ian Hamilton Finlay have used camouflage to reflect on war. His 1973 screenprint of a tank camouflaged in a leaf pattern, Arcadia, is described by the Tate as drawing "an ironic parallel between this idea of a natural paradise and the camouflage patterns on a tank". The title refers to the Utopian Arcadia of poetry and art, and the memento mori Latin phrase Et in Arcadia ego which recurs in Hamilton Finlay's work. In science fiction, Camouflage is a novel about shapeshifting alien beings by Joe Haldeman. 
The word is used more figuratively in works of literature such as Thaisa Frank's collection of stories of love and loss, A Brief History of Camouflage. In 1986, Andy Warhol began a series of monumental camouflage paintings, which helped to transform camouflage into a popular print pattern. A year later, in 1987, New York designer Stephen Sprouse used Warhol's camouflage prints as the basis for his Autumn Winter 1987 collection.
Clock
A clock or chronometer is a device that measures and displays time. The clock is one of the oldest human inventions, meeting the need to measure intervals of time shorter than the natural units such as the day, the lunar month, and the year. Devices operating on several physical processes have been used over the millennia. Some predecessors to the modern clock may be considered "clocks" that are based on movement in nature: A sundial shows the time by displaying the position of a shadow on a flat surface. There is a range of duration timers, a well-known example being the hourglass. Water clocks, along with sundials, are possibly the oldest time-measuring instruments. A major advance occurred with the invention of the verge escapement, which made possible the first mechanical clocks around 1300 in Europe, which kept time with oscillating timekeepers like balance wheels. Traditionally, in horology (the study of timekeeping), the term clock was used for a striking clock, while a clock that did not strike the hours audibly was called a timepiece. This distinction is not generally made any longer. Watches and other timepieces that can be carried on one's person are usually not referred to as clocks. Spring-driven clocks appeared during the 15th century. During the 15th and 16th centuries, clockmaking flourished. The next development in accuracy occurred after 1656 with the invention of the pendulum clock by Christiaan Huygens. A major stimulus to improving the accuracy and reliability of clocks was the importance of precise time-keeping for navigation. The mechanism of a timepiece with a series of gears driven by a spring or weights is referred to as clockwork; the term is used by extension for a similar mechanism not used in a timepiece. The electric clock was patented in 1840, and electronic clocks were introduced in the 20th century, becoming widespread with the development of small battery-powered semiconductor devices. The timekeeping element in every modern clock is a harmonic oscillator, a physical object (resonator) that vibrates or oscillates at a particular frequency. This object can be a pendulum, a balance wheel, a tuning fork, a quartz crystal, or the vibration of electrons in atoms as they emit microwaves, the last of which is so precise that it serves as the definition of the second. Clocks have different ways of displaying the time. Analog clocks indicate time with a traditional clock face and moving hands. Digital clocks display a numeric representation of time. Two numbering systems are in use: 12-hour time notation and 24-hour notation. Most digital clocks use electronic mechanisms and LCD, LED, or VFD displays. For the blind and for use over telephones, speaking clocks state the time audibly in words. There are also clocks for the blind that have displays that can be read by touch. Etymology The word clock derives from the medieval Latin word for 'bell'——and has cognates in many European languages. Clocks spread to England from the Low Countries, so the English word came from the Middle Low German and Middle Dutch . The word is also derived from the Middle English , Old North French , or Middle Dutch , all of which mean 'bell'. History of time-measuring devices Sundials The apparent position of the Sun in the sky changes over the course of each day, reflecting the rotation of the Earth. Shadows cast by stationary objects move correspondingly, so their positions can be used to indicate the time of day. 
A sundial shows the time by displaying the position of a shadow on a (usually) flat surface that has markings that correspond to the hours. Sundials can be horizontal, vertical, or in other orientations. Sundials were widely used in ancient times. With knowledge of latitude, a well-constructed sundial can measure local solar time with reasonable accuracy, within a minute or two. Sundials continued to be used to monitor the performance of clocks until the 1830s, when the use of the telegraph and trains standardized time and time zones between cities. Devices that measure duration, elapsed time and intervals Many devices can be used to mark the passage of time without respect to reference time (time of day, hours, minutes, etc.) and can be useful for measuring duration or intervals. Examples of such duration timers are candle clocks, incense clocks, and the hourglass. Both the candle clock and the incense clock work on the same principle, wherein the consumption of resources is more or less constant, allowing reasonably precise and repeatable estimates of time passages. In the hourglass, fine sand pouring through a tiny hole at a constant rate indicates an arbitrary, predetermined passage of time. The resource is not consumed, but re-used. Water clocks Water clocks, along with sundials, are possibly the oldest time-measuring instruments, with the only exception being the day-counting tally stick. Given their great antiquity, where and when they first existed is not known and is perhaps unknowable. The bowl-shaped outflow is the simplest form of a water clock and is known to have existed in Babylon and Egypt around the 16th century BC. Other regions of the world, including India and China, also have early evidence of water clocks, but the earliest dates are less certain. Some authors, however, write about water clocks appearing as early as 4000 BC in these regions of the world. The Macedonian astronomer Andronicus of Cyrrhus supervised the construction of the Tower of the Winds in Athens in the 1st century BC, which housed a large clepsydra inside as well as multiple prominent sundials outside, allowing it to function as a kind of early clocktower. The Greek and Roman civilizations advanced water clock design with improved accuracy. These advances were passed on through Byzantine and Islamic times, eventually making their way back to Europe. Independently, the Chinese developed their own advanced water clocks () by 725 AD, passing their ideas on to Korea and Japan. Some water clock designs were developed independently, and some knowledge was transferred through the spread of trade. Pre-modern societies do not have the same precise timekeeping requirements that exist in modern industrial societies, where every hour of work or rest is monitored and work may start or finish at any time regardless of external conditions. Instead, water clocks in ancient societies were used mainly for astrological reasons. These early water clocks were calibrated with a sundial. While never reaching the level of accuracy of a modern timepiece, the water clock was the most accurate and commonly used timekeeping device for millennia until it was replaced by the more accurate pendulum clock in 17th-century Europe. Islamic civilization is credited with further advancing the accuracy of clocks through elaborate engineering. 
In 797 (or possibly 801), the Abbasid caliph of Baghdad, Harun al-Rashid, presented Charlemagne with an Asian elephant named Abul-Abbas together with a "particularly elaborate example" of a water clock. Pope Sylvester II introduced clocks to northern and western Europe around 1000 AD. Mechanical water clocks The first known geared clock was invented by the great mathematician, physicist, and engineer Archimedes during the 3rd century BC. Archimedes created his astronomical clock, which was also a cuckoo clock with birds singing and moving every hour. It is the first carillon clock as it plays music simultaneously with a person blinking his eyes, surprised by the singing birds. The Archimedes clock works with a system of four weights, counterweights, and strings regulated by a system of floats in a water container with siphons that regulate the automatic continuation of the clock. The principles of this type of clock are described by the mathematician and physicist Hero, who says that some of them work with a chain that turns a gear in the mechanism. Another Greek clock probably constructed at the time of Alexander was in Gaza, as described by Procopius. The Gaza clock was probably a Meteoroskopeion, i.e., a building showing celestial phenomena and the time. It had a pointer for the time and some automations similar to the Archimedes clock. There were 12 doors opening one every hour, with Hercules performing his labors, the Lion at one o'clock, etc., and at night a lamp becomes visible every hour, with 12 windows opening to show the time. The Tang dynasty Buddhist monk Yi Xing along with government official Liang Lingzan made the escapement in 723 (or 725) to the workings of a water-powered armillary sphere and clock drive, which was the world's first clockwork escapement. The Song dynasty polymath and genius Su Song (1020–1101) incorporated it into his monumental innovation of the astronomical clock tower of Kaifeng in 1088. His astronomical clock and rotating armillary sphere still relied on the use of either flowing water during the spring, summer, and autumn seasons or liquid mercury during the freezing temperatures of winter (i.e., hydraulics). In Su Song's waterwheel linkwork device, the action of the escapement's arrest and release was achieved by gravity exerted periodically as the continuous flow of liquid-filled containers of a limited size. In a single line of evolution, Su Song's clock therefore united the concepts of the clepsydra and the mechanical clock into one device run by mechanics and hydraulics. In his memorial, Su Song wrote about this concept: According to your servant's opinion there have been many systems and designs for astronomical instruments during past dynasties all differing from one another in minor respects. But the principle of the use of water-power for the driving mechanism has always been the same. The heavens move without ceasing but so also does water flow (and fall). Thus if the water is made to pour with perfect evenness, then the comparison of the rotary movements (of the heavens and the machine) will show no discrepancy or contradiction; for the unresting follows the unceasing. Song was also strongly influenced by the earlier armillary sphere created by Zhang Sixun (976 AD), who also employed the escapement mechanism and used liquid mercury instead of water in the waterwheel of his astronomical clock tower. 
The mechanical clockworks for Su Song's astronomical tower featured a great driving-wheel that was 11 feet in diameter, carrying 36 scoops, into each of which water was poured at a uniform rate from the "constant-level tank". The main driving shaft of iron, with its cylindrical necks supported on iron crescent-shaped bearings, ended in a pinion, which engaged a gear wheel at the lower end of the main vertical transmission shaft. This great astronomical hydromechanical clock tower was about ten metres high (about 30 feet), featured a clock escapement, and was indirectly powered by a rotating wheel either with falling water or liquid mercury. A full-sized working replica of Su Song's clock exists in the Republic of China (Taiwan)'s National Museum of Natural Science, Taichung city. This full-scale, fully functional replica, approximately 12 meters (39 feet) in height, was constructed from Su Song's original descriptions and mechanical drawings. The Chinese escapement spread west and was the source for Western escapement technology. In the 12th century, Al-Jazari, an engineer from Mesopotamia (lived 1136–1206) who worked for the Artuqid king of Diyar-Bakr, Nasir al-Din, made numerous clocks of all shapes and sizes. The most reputed clocks included the elephant, scribe, and castle clocks, some of which have been successfully reconstructed. As well as telling the time, these grand clocks were symbols of the status, grandeur, and wealth of the Urtuq State. Knowledge of these mercury escapements may have spread through Europe with translations of Arabic and Spanish texts. Fully mechanical The word (from the Greek —'hour', and —'to tell') was used to describe early mechanical clocks, but the use of this word (still used in several Romance languages) for all timekeepers conceals the true nature of the mechanisms. For example, there is a record that in 1176, Sens Cathedral in France installed an 'horologe', but the mechanism used is unknown. According to Jocelyn de Brakelond, in 1198, during a fire at the abbey of St Edmundsbury (now Bury St Edmunds), the monks "ran to the clock" to fetch water, indicating that their water clock had a reservoir large enough to help extinguish the occasional fire. The word clock (via Medieval Latin from Old Irish , both meaning 'bell'), which gradually supersedes "horologe", suggests that it was the sound of bells that also characterized the prototype mechanical clocks that appeared during the 13th century in Europe. In Europe, between 1280 and 1320, there was an increase in the number of references to clocks and horologes in church records, and this probably indicates that a new type of clock mechanism had been devised. Existing clock mechanisms that used water power were being adapted to take their driving power from falling weights. This power was controlled by some form of oscillating mechanism, probably derived from existing bell-ringing or alarm devices. This controlled release of power – the escapement – marks the beginning of the true mechanical clock, which differed from the previously mentioned cogwheel clocks. The verge escapement mechanism appeared during the surge of true mechanical clock development, which did not need any kind of fluid power, like water or mercury, to work. These mechanical clocks were intended for two main purposes: for signalling and notification (e.g., the timing of services and public events) and for modeling the solar system. 
The former purpose is administrative; the latter arises naturally given the scholarly interests in astronomy, science, and astrology and how these subjects integrated with the religious philosophy of the time. The astrolabe was used both by astronomers and astrologers, and it was natural to apply a clockwork drive to the rotating plate to produce a working model of the solar system. Simple clocks intended mainly for notification were installed in towers and did not always require faces or hands. They would have announced the canonical hours or intervals between set times of prayer. Canonical hours varied in length as the times of sunrise and sunset shifted. The more sophisticated astronomical clocks would have had moving dials or hands and would have shown the time in various time systems, including Italian hours, canonical hours, and time as measured by astronomers at the time. Both styles of clocks started acquiring extravagant features, such as automata. In 1283, a large clock was installed at Dunstable Priory in Bedfordshire in southern England; its location above the rood screen suggests that it was not a water clock. In 1292, Canterbury Cathedral installed a 'great horloge'. Over the next 30 years, there were mentions of clocks at a number of ecclesiastical institutions in England, Italy, and France. In 1322, a new clock was installed in Norwich, an expensive replacement for an earlier clock installed in 1273. This had a large (2 metre) astronomical dial with automata and bells. The costs of the installation included the full-time employment of two clockkeepers for two years. Astronomical An elaborate water clock, the 'Cosmic Engine', was invented by Su Song, a Chinese polymath, designed and constructed in China in 1092. This great astronomical hydromechanical clock tower was about ten metres high (about 30 feet) and was indirectly powered by a rotating wheel with falling water and liquid mercury, which turned an armillary sphere capable of calculating complex astronomical problems. In Europe, there were the clocks constructed by Richard of Wallingford in St Albans by 1336, and by Giovanni de Dondi in Padua from 1348 to 1364. They no longer exist, but detailed descriptions of their design and construction survive, and modern reproductions have been made. They illustrate how quickly the theory of the mechanical clock had been translated into practical constructions, and also that one of the many impulses to their development had been the desire of astronomers to investigate celestial phenomena. The Astrarium of Giovanni Dondi dell'Orologio was a complex astronomical clock built between 1348 and 1364 in Padua, Italy, by the doctor and clock-maker Giovanni Dondi dell'Orologio. The Astrarium had seven faces and 107 moving gears; it showed the positions of the sun, the moon and the five planets then known, as well as religious feast days. The astrarium stood about 1 metre high, and consisted of a seven-sided brass or iron framework resting on 7 decorative paw-shaped feet. The lower section provided a 24-hour dial and a large calendar drum, showing the fixed feasts of the church, the movable feasts, and the position in the zodiac of the moon's ascending node. The upper section contained 7 dials, each about 30 cm in diameter, showing the positional data for the Primum Mobile, Venus, Mercury, the moon, Saturn, Jupiter, and Mars.
Directly above the 24-hour dial is the dial of the Primum Mobile, so called because it reproduces the diurnal motion of the stars and the annual motion of the sun against the background of stars. Each of the 'planetary' dials used complex clockwork to produce reasonably accurate models of the planets' motion. These agreed reasonably well both with Ptolemaic theory and with observations. Wallingford's clock had a large astrolabe-type dial, showing the sun, the moon's age, phase, and node, a star map, and possibly the planets. In addition, it had a wheel of fortune and an indicator of the state of the tide at London Bridge. Bells rang every hour, the number of strokes indicating the time. Dondi's clock was a seven-sided construction, 1 metre high, with dials showing the time of day, including minutes, the motions of all the known planets, an automatic calendar of fixed and movable feasts, and an eclipse prediction hand rotating once every 18 years. It is not known how accurate or reliable these clocks would have been. They were probably adjusted manually every day to compensate for errors caused by wear and imprecise manufacture. Water clocks are sometimes still used today, and can be examined in places such as ancient castles and museums. The Salisbury Cathedral clock, built in 1386, is considered to be the world's oldest surviving mechanical clock that strikes the hours. Spring-driven Clockmakers developed their art in various ways. Building smaller clocks was a technical challenge, as was improving accuracy and reliability. Clocks could be impressive showpieces to demonstrate skilled craftsmanship, or less expensive, mass-produced items for domestic use. The escapement in particular was an important factor affecting the clock's accuracy, so many different mechanisms were tried. Spring-driven clocks appeared during the 15th century, although they are often erroneously credited to Nuremberg watchmaker Peter Henlein (or Henle, or Hele) around 1511. The earliest existing spring driven clock is the chamber clock given to Phillip the Good, Duke of Burgundy, around 1430, now in the Germanisches Nationalmuseum. Spring power presented clockmakers with a new problem: how to keep the clock movement running at a constant rate as the spring ran down. This resulted in the invention of the stackfreed and the fusee in the 15th century, and many other innovations, down to the invention of the modern going barrel in 1760. Early clock dials did not indicate minutes and seconds. A clock with a dial indicating minutes was illustrated in a 1475 manuscript by Paulus Almanus, and some 15th-century clocks in Germany indicated minutes and seconds. An early record of a seconds hand on a clock dates back to about 1560 on a clock now in the Fremersdorf collection. During the 15th and 16th centuries, clockmaking flourished, particularly in the metalworking towns of Nuremberg and Augsburg, and in Blois, France. Some of the more basic table clocks have only one time-keeping hand, with the dial between the hour markers being divided into four equal parts making the clocks readable to the nearest 15 minutes. Other clocks were exhibitions of craftsmanship and skill, incorporating astronomical indicators and musical movements. The cross-beat escapement was invented in 1584 by Jost Bürgi, who also developed the remontoire. Bürgi's clocks were a great improvement in accuracy as they were correct to within a minute a day. 
These clocks helped the 16th-century astronomer Tycho Brahe to observe astronomical events with much greater precision than before. Pendulum The next development in accuracy occurred after 1656 with the invention of the pendulum clock. Galileo had the idea to use a swinging bob to regulate the motion of a time-telling device earlier in the 17th century. Christiaan Huygens, however, is usually credited as the inventor. He determined the mathematical formula that related pendulum length to time (about 99.4 cm or 39.1 inches for the one second movement) and had the first pendulum-driven clock made. The first model clock was built in 1657 in the Hague, but it was in England that the idea was taken up. The longcase clock (also known as the grandfather clock) was created to house the pendulum and works by the English clockmaker William Clement in 1670 or 1671. It was also at this time that clock cases began to be made of wood and clock faces to use enamel as well as hand-painted ceramics. In 1670, William Clement created the anchor escapement, an improvement over Huygens' crown escapement. Clement also introduced the pendulum suspension spring in 1671. The concentric minute hand was added to the clock by Daniel Quare, a London clockmaker and others, and the second hand was first introduced. Hairspring In 1675, Huygens and Robert Hooke invented the spiral balance spring, or the hairspring, designed to control the oscillating speed of the balance wheel. This crucial advance finally made accurate pocket watches possible. The great English clockmaker Thomas Tompion, was one of the first to use this mechanism successfully in his pocket watches, and he adopted the minute hand which, after a variety of designs were trialled, eventually stabilised into the modern-day configuration. The rack and snail striking mechanism for striking clocks, was introduced during the 17th century and had distinct advantages over the 'countwheel' (or 'locking plate') mechanism. During the 20th century there was a common misconception that Edward Barlow invented rack and snail striking. In fact, his invention was connected with a repeating mechanism employing the rack and snail. The repeating clock, that chimes the number of hours (or even minutes) on demand was invented by either Quare or Barlow in 1676. George Graham invented the deadbeat escapement for clocks in 1720. Marine chronometer A major stimulus to improving the accuracy and reliability of clocks was the importance of precise time-keeping for navigation. The position of a ship at sea could be determined with reasonable accuracy if a navigator could refer to a clock that lost or gained less than about 10 seconds per day. This clock could not contain a pendulum, which would be virtually useless on a rocking ship. In 1714, the British government offered large financial rewards to the value of 20,000 pounds for anyone who could determine longitude accurately. John Harrison, who dedicated his life to improving the accuracy of his clocks, later received considerable sums under the Longitude Act. In 1735, Harrison built his first chronometer, which he steadily improved on over the next thirty years before submitting it for examination. The clock had many innovations, including the use of bearings to reduce friction, weighted balances to compensate for the ship's pitch and roll in the sea and the use of two different metals to reduce the problem of expansion from heat. 
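Two of the figures quoted above lend themselves to quick checks. The following is only a sketch, using the modern small-angle pendulum formula and a spherical-Earth approximation rather than Huygens' or Harrison's own working:

```latex
% Seconds pendulum: one swing per second means a full period T = 2 s.
\[
  T = 2\pi\sqrt{\frac{L}{g}}
  \;\Longrightarrow\;
  L = g\left(\frac{T}{2\pi}\right)^{2}
    \approx 9.81\,\mathrm{m\,s^{-2}} \times \frac{1}{\pi^{2}}
    \approx 0.994\ \mathrm{m},
\]
% in agreement with the roughly 99.4 cm quoted for the one-second movement.

% Longitude from clock error: the Earth turns 360 degrees in 24 hours,
% i.e. 15 degrees per hour, or 1 degree for every 4 minutes of time, so
\[
  \Delta\lambda \approx \frac{\Delta t}{240\,\mathrm{s}}\ \text{degrees}
  \approx 0.46\ \mathrm{km}\ \text{per second of clock error at the equator}.
\]
% A clock drifting by 10 seconds per day therefore adds roughly 4.6 km of
% position uncertainty, at the equator, for each day spent at sea.
```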
The chronometer was tested in 1761 by Harrison's son and by the end of 10 weeks the clock was in error by less than 5 seconds. Mass production The British had dominated watch manufacture for much of the 17th and 18th centuries, but maintained a system of production that was geared towards high quality products for the elite. Although there was an attempt to modernise clock manufacture with mass-production techniques and the application of duplicating tools and machinery by the British Watch Company in 1843, it was in the United States that this system took off. In 1816, Eli Terry and some other Connecticut clockmakers developed a way of mass-producing clocks by using interchangeable parts. Aaron Lufkin Dennison started a factory in 1851 in Massachusetts that also used interchangeable parts, and by 1861 was running a successful enterprise incorporated as the Waltham Watch Company. Early electric In 1815, the English scientist Francis Ronalds published the first electric clock powered by dry pile batteries. Alexander Bain, a Scottish clockmaker, patented the electric clock in 1840. The electric clock's mainspring is wound either with an electric motor or with an electromagnet and armature. In 1841, Bain first patented the electromagnetic pendulum. By the end of the nineteenth century, the advent of the dry cell battery made it feasible to use electric power in clocks. Spring or weight driven clocks that use electricity, either alternating current (AC) or direct current (DC), to rewind the spring or raise the weight of a mechanical clock would be classified as an electromechanical clock. This classification would also apply to clocks that employ an electrical impulse to propel the pendulum. In electromechanical clocks the electricity serves no time keeping function. These types of clocks were made as individual timepieces but more commonly used in synchronized time installations in schools, businesses, factories, railroads and government facilities as a master clock and slave clocks. Where an AC electrical supply of stable frequency is available, timekeeping can be maintained very reliably by using a synchronous motor, essentially counting the cycles. The supply current alternates with an accurate frequency of 50 hertz in many countries, and 60 hertz in others. While the frequency may vary slightly during the day as the load changes, generators are designed to maintain an accurate number of cycles over a day, so the clock may be a fraction of a second slow or fast at any time, but will be perfectly accurate over a long time. The rotor of the motor rotates at a speed that is related to the alternation frequency. Appropriate gearing converts this rotation speed to the correct ones for the hands of the analog clock. Time in these cases is measured in several ways, such as by counting the cycles of the AC supply, vibration of a tuning fork, the behaviour of quartz crystals, or the quantum vibrations of atoms. Electronic circuits divide these high-frequency oscillations to slower ones that drive the time display. Quartz The piezoelectric properties of crystalline quartz were discovered by Jacques and Pierre Curie in 1880. The first crystal oscillator was invented in 1917 by Alexander M. Nicholson, after which the first quartz crystal oscillator was built by Walter G. Cady in 1921. In 1927 the first quartz clock was built by the Canadian-born Warren Marrison and J.W. Horton at Bell Telephone Laboratories.
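As a rough illustration of the mains-cycle counting described above for synchronous and electromechanical clocks, the sketch below (a hypothetical Python example, not taken from any real clock's circuitry or firmware; the 50 Hz figure is an assumption and would be 60 Hz in some countries) converts a running count of supply cycles into a time of day:

```python
# Minimal sketch of timekeeping by counting AC mains cycles, the scheme a
# synchronous clock embodies in its motor and gearing. Hypothetical example.

MAINS_HZ = 50  # assumed nominal supply frequency; 60 Hz in some countries


def time_from_cycles(cycle_count: int, mains_hz: int = MAINS_HZ) -> str:
    """Convert a count of mains cycles since midnight into an HH:MM:SS string.

    The utility keeps the average number of cycles per day accurate, so the
    count divided by the nominal frequency tracks true time closely even if
    the instantaneous frequency wanders slightly under changing load.
    """
    total_seconds = cycle_count // mains_hz      # mains_hz cycles == 1 second
    total_seconds %= 24 * 3600                   # wrap around at midnight
    hours, remainder = divmod(total_seconds, 3600)
    minutes, seconds = divmod(remainder, 60)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}"


# 4,521,600 cycles of a 50 Hz supply correspond to 90,432 s, i.e. one full
# day plus 1 h 7 min 12 s, so the display wraps to 01:07:12.
print(time_from_cycles(4_521_600))  # -> 01:07:12
```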
The following decades saw the development of quartz clocks as precision time measurement devices in laboratory settings—the bulky and delicate counting electronics, built with vacuum tubes at the time, limited their practical use elsewhere. The National Bureau of Standards (now NIST) based the time standard of the United States on quartz clocks from late 1929 until the 1960s, when it changed to atomic clocks. In 1969, Seiko produced the world's first quartz wristwatch, the Astron. Their inherent accuracy and low cost of production resulted in the subsequent proliferation of quartz clocks and watches. Atomic Currently, atomic clocks are the most accurate clocks in existence. They are considerably more accurate than quartz clocks as they can be accurate to within a few seconds over trillions of years. Atomic clocks were first theorized by Lord Kelvin in 1879. In the 1930s the development of magnetic resonance created a practical method for doing this. A prototype ammonia maser device was built in 1949 at the U.S. National Bureau of Standards (NBS, now NIST). Although it was less accurate than existing quartz clocks, it served to demonstrate the concept. The first accurate atomic clock, a caesium standard based on a certain transition of the caesium-133 atom, was built by Louis Essen in 1955 at the National Physical Laboratory in the UK. Calibration of the caesium standard atomic clock was carried out by the use of the astronomical time scale ephemeris time (ET). As of 2013, the most stable atomic clocks are ytterbium clocks, which are stable to within less than two parts in 1 quintillion. Operation The invention of the mechanical clock in the 13th century initiated a change in timekeeping methods from continuous processes, such as the motion of the gnomon's shadow on a sundial or the flow of liquid in a water clock, to periodic oscillatory processes, such as the swing of a pendulum or the vibration of a quartz crystal, which had the potential for more accuracy. All modern clocks use oscillation. Although the mechanisms they use vary, all oscillating clocks, mechanical, electric, and atomic, work similarly and can be divided into analogous parts. They consist of an object that repeats the same motion over and over again, an oscillator, with a precisely constant time interval between each repetition, or 'beat'. Attached to the oscillator is a controller device, which sustains the oscillator's motion by replacing the energy it loses to friction, and converts its oscillations into a series of pulses. The pulses are then counted by some type of counter, and the number of counts is converted into convenient units, usually seconds, minutes, hours, etc. Finally some kind of indicator displays the result in human readable form. Power source Oscillator The timekeeping element in every modern clock is a harmonic oscillator, a physical object (resonator) that vibrates or oscillates repetitively at a precisely constant frequency. In mechanical clocks, this is either a pendulum or a balance wheel. In some early electronic clocks and watches such as the Accutron, it is a tuning fork. In quartz clocks and watches, it is a quartz crystal. In atomic clocks, it is the vibration of electrons in atoms as they emit microwaves. In early mechanical clocks before 1657, it was a crude balance wheel or foliot which was not a harmonic oscillator because it lacked a balance spring. As a result, they were very inaccurate, with errors of perhaps an hour a day.
The advantage of a harmonic oscillator over other forms of oscillator is that it employs resonance to vibrate at a precise natural resonant frequency or "beat" dependent only on its physical characteristics, and resists vibrating at other rates. The possible precision achievable by a harmonic oscillator is measured by a parameter called its Q, or quality factor, which increases (other things being equal) with its resonant frequency. This is why there has been a long-term trend toward higher frequency oscillators in clocks. Balance wheels and pendulums always include a means of adjusting the rate of the timepiece. Quartz timepieces sometimes include a rate screw that adjusts a capacitor for that purpose. Atomic clocks are primary standards, and their rate cannot be adjusted. Synchronized or slave clocks Some clocks rely for their accuracy on an external oscillator; that is, they are automatically synchronized to a more accurate clock: Slave clocks, used in large institutions and schools from the 1860s to the 1970s, kept time with a pendulum, but were wired to a master clock in the building, and periodically received a signal to synchronize them with the master, often on the hour. Later versions without pendulums were triggered by a pulse from the master clock and certain sequences used to force rapid synchronization following a power failure. Synchronous electric clocks do not have an internal oscillator, but count cycles of the 50 or 60 Hz oscillation of the AC power line, which is synchronized by the utility to a precision oscillator. The counting may be done electronically, usually in clocks with digital displays, or, in analog clocks, the AC may drive a synchronous motor which rotates an exact fraction of a revolution for every cycle of the line voltage, and drives the gear train. Although changes in the grid line frequency due to load variations may cause the clock to temporarily gain or lose several seconds during the course of a day, the total number of cycles per 24 hours is maintained extremely accurately by the utility company, so that the clock keeps time accurately over long periods. Computer real-time clocks keep time with a quartz crystal, but can be periodically (usually weekly) synchronized over the Internet to atomic clocks (UTC), using the Network Time Protocol (NTP). Radio clocks keep time with a quartz crystal, but are periodically synchronized to time signals transmitted from dedicated standard time radio stations or satellite navigation signals, which are set by atomic clocks. Controller This has the dual function of keeping the oscillator running by giving it 'pushes' to replace the energy lost to friction, and converting its vibrations into a series of pulses that serve to measure the time. In mechanical clocks, this is the escapement, which gives precise pushes to the swinging pendulum or balance wheel, and releases one gear tooth of the escape wheel at each swing, allowing all the clock's wheels to move forward a fixed amount with each swing. In electronic clocks this is an electronic oscillator circuit that gives the vibrating quartz crystal or tuning fork tiny 'pushes', and generates a series of electrical pulses, one for each vibration of the crystal, which is called the clock signal. In atomic clocks the controller is an evacuated microwave cavity attached to a microwave oscillator controlled by a microprocessor. A thin gas of caesium atoms is released into the cavity where they are exposed to microwaves. 
A laser measures how many atoms have absorbed the microwaves, and an electronic feedback control system called a phase-locked loop tunes the microwave oscillator until it is at the frequency that causes the atoms to vibrate and absorb the microwaves. Then the microwave signal is divided by digital counters to become the clock signal. In mechanical clocks, the low Q of the balance wheel or pendulum oscillator made them very sensitive to the disturbing effect of the impulses of the escapement, so the escapement had a great effect on the accuracy of the clock, and many escapement designs were tried. The higher Q of resonators in electronic clocks makes them relatively insensitive to the disturbing effects of the drive power, so the driving oscillator circuit is a much less critical component. Counter chain This counts the pulses and adds them up to get traditional time units of seconds, minutes, hours, etc. It usually has a provision for setting the clock by manually entering the correct time into the counter. In mechanical clocks this is done mechanically by a gear train, known as the wheel train. The gear train also has a second function: to transmit mechanical power from the power source to run the oscillator. There is a friction coupling called the 'cannon pinion' between the gears driving the hands and the rest of the clock, allowing the hands to be turned to set the time. In digital clocks a series of integrated circuit counters or dividers add the pulses up digitally, using binary logic. Often pushbuttons on the case allow the hour and minute counters to be incremented and decremented to set the time. Indicator This displays the count of seconds, minutes, hours, etc. in a human readable form. The earliest mechanical clocks in the 13th century did not have a visual indicator and signalled the time audibly by striking bells. Many clocks to this day are striking clocks which strike the hour. Analog clocks display time with an analog clock face, which consists of a dial with the numbers 1 through 12 or 24, the hours in the day, around the outside. The hours are indicated with an hour hand, which makes one or two revolutions in a day, while the minutes are indicated by a minute hand, which makes one revolution per hour. In mechanical clocks a gear train drives the hands; in electronic clocks the circuit produces pulses every second which drive a stepper motor and gear train, which move the hands. Digital clocks display the time in periodically changing digits on a digital display. A common misconception is that a digital clock is more accurate than an analog wall clock; in fact, the type of indicator is independent of the accuracy of the timing source. Talking clocks and the speaking clock services provided by telephone companies speak the time audibly, using either recorded or digitally synthesized voices. Types Clocks can be classified by the type of time display, as well as by the method of timekeeping. Time display methods Analog Analog clocks usually use a clock face which indicates time using rotating pointers called "hands" on a fixed numbered dial or dials. The standard clock face, known universally throughout the world, has a short "hour hand" which indicates the hour on a circular dial of 12 hours, making two revolutions per day, and a longer "minute hand" which indicates the minutes in the current hour on the same dial, which is also divided into 60 minutes. It may also have a "second hand" which indicates the seconds in the current minute. 
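To make the counter-chain and indicator stages described above concrete, the following minimal sketch turns a raw pulse count into a displayed time. It is illustrative only: the 32,768 Hz figure is a typical quartz-watch crystal frequency rather than anything specified here, and all names are invented.

```python
# Minimal sketch of a clock's counter chain: oscillator pulses are divided
# down into seconds, then accumulated into minutes and hours, and finally
# formatted for the indicator. Constants and names are illustrative only.

OSC_FREQ_HZ = 32_768  # assumed pulses per second from the oscillator/controller

def count_pulses(total_pulses: int) -> str:
    """Convert a raw pulse count into a human-readable time display."""
    seconds_total = total_pulses // OSC_FREQ_HZ      # divider stage
    hours, remainder = divmod(seconds_total, 3600)   # counter stages
    minutes, seconds = divmod(remainder, 60)
    return f"{hours % 24:02d}:{minutes:02d}:{seconds:02d}"  # indicator

# Example: pulses accumulated since midnight
print(count_pulses(32_768 * (3 * 3600 + 25 * 60 + 7)))  # -> "03:25:07"
```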
The only other widely used clock face today is the 24 hour analog dial, because of the use of 24 hour time in military organizations and timetables. Before the modern clock face was standardized during the Industrial Revolution, many other face designs were used throughout the years, including dials divided into 6, 8, 10, and 24 hours. During the French Revolution the French government tried to introduce a 10-hour clock, as part of their decimal-based metric system of measurement, but it did not achieve widespread use. An Italian 6 hour clock was developed in the 18th century, presumably to save power (a clock or watch striking 24 times uses more power). Another type of analog clock is the sundial, which tracks the sun continuously, registering the time by the shadow position of its gnomon. Because the sun does not adjust to daylight saving time, users must add an hour during that time. Corrections must also be made for the equation of time, and for the difference between the longitudes of the sundial and of the central meridian of the time zone that is being used (i.e. 15 degrees east of the prime meridian for each hour that the time zone is ahead of GMT). Sundials use some or all of the 24 hour analog dial. There also exist clocks which use a digital display despite having an analog mechanism—these are commonly referred to as flip clocks. Alternative systems have been proposed. For example, the "Twelv" clock indicates the current hour using one of twelve colors, and indicates the minute by showing a proportion of a circular disk, similar to a moon phase. Digital Digital clocks display a numeric representation of time. Two numeric display formats are commonly used on digital clocks: the 24-hour notation with hours ranging 00–23; the 12-hour notation with AM/PM indicator, with hours indicated as 12AM, followed by 1AM–11AM, followed by 12PM, followed by 1PM–11PM (a notation mostly used in domestic environments). Most digital clocks use electronic mechanisms and LCD, LED, or VFD displays; many other display technologies are used as well (cathode-ray tubes, nixie tubes, etc.). After a reset, battery change or power failure, digital clocks without a backup battery or capacitor either start counting from 12:00, or stay at 12:00, often with blinking digits indicating that the time needs to be set. Some newer clocks will reset themselves based on radio or Internet time servers that are tuned to national atomic clocks. Since the introduction of digital clocks in the 1960s, there has been a notable decline in the use of analog clocks. Some clocks, called 'flip clocks', have digital displays that work mechanically. The digits are painted on sheets of material which are mounted like the pages of a book. Once a minute, a page is turned over to reveal the next digit. These displays are usually easier to read in brightly lit conditions than LCDs or LEDs. Also, they do not go back to 12:00 after a power interruption. Flip clocks generally do not have electronic mechanisms. Usually, they are driven by AC-synchronous motors. Hybrid (analog-digital) Hybrid clocks combine an analog dial with a digital component, usually with the hours and minutes displayed on the analog dial and the seconds shown in digital form. Auditory For convenience, distance, telephony or blindness, auditory clocks present the time as sounds. The sound is either spoken natural language (e.g. "The time is twelve thirty-five") or an auditory code (e.g. the number of sequential bell strikes on the hour represents the hour, as with the bell Big Ben). 
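Returning to the sundial corrections described above, the sketch below shows one common way to reduce a sundial reading to clock time. The sign conventions (equation of time positive when the sundial runs ahead of mean solar time, longitudes in degrees east) and all names are assumptions made for illustration, not something specified in the text.

```python
# Illustrative sketch: correcting a sundial reading to clock (zone) time.
# Assumed conventions: equation of time is positive when the sundial is ahead
# of mean solar time; longitudes are in degrees east; names are invented.

def sundial_to_clock_time(sundial_hours: float,
                          equation_of_time_min: float,
                          local_longitude_deg_east: float,
                          zone_meridian_deg_east: float,
                          dst: bool = False) -> float:
    """Return clock time in hours from a sundial reading in hours."""
    # Remove the equation-of-time offset to get local mean solar time.
    mean_solar = sundial_hours - equation_of_time_min / 60.0
    # Each degree of longitude corresponds to 4 minutes of time (360 deg / 24 h).
    longitude_correction_min = 4.0 * (zone_meridian_deg_east - local_longitude_deg_east)
    clock = mean_solar + longitude_correction_min / 60.0
    if dst:
        clock += 1.0  # sundials do not observe daylight saving time
    return clock % 24.0

# Example: a sundial reading 12:00 at 5 degrees east of its zone's central
# meridian, with the equation of time at +10 minutes, during DST.
print(sundial_to_clock_time(12.0, 10.0, 20.0, 15.0, dst=True))  # about 12.5 h
```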
Most telecommunication companies also provide a speaking clock service. Word Word clocks display the time visually using sentences, e.g. "It's about three o'clock." These clocks can be implemented in hardware or software. Projection Some clocks, usually digital ones, include an optical projector that shines a magnified image of the time display onto a screen or onto a surface such as an indoor ceiling or wall. The digits are large enough to be easily read, without using glasses, by persons with moderately imperfect vision, so the clocks are convenient for use in their bedrooms. Usually, the timekeeping circuitry has a battery as a backup source for an uninterrupted power supply to keep the clock on time, while the projection light only works when the unit is connected to an A.C. supply. Completely battery-powered portable versions resembling flashlights are also available. Tactile Auditory and projection clocks can be used by people who are blind or have limited vision. There are also clocks for the blind that have displays that can be read by using the sense of touch. Some of these are similar to normal analog displays, but are constructed so the hands can be felt without damaging them. Another type is essentially digital, and uses devices that use a code such as Braille to show the digits so that they can be felt with the fingertips. Multi-display Some clocks have several displays driven by a single mechanism, and some others have several completely separate mechanisms in a single case. Clocks in public places often have several faces visible from different directions, so that the clock can be read from anywhere in the vicinity; all the faces show the same time. Other clocks show the current time in several time-zones. Watches that are intended to be carried by travellers often have two displays, one for the local time and the other for the time at home, which is useful for making pre-arranged phone calls. Some equation clocks have two displays, one showing mean time and the other solar time, as would be shown by a sundial. Some clocks have both analog and digital displays. Clocks with Braille displays usually also have conventional digits so they can be read by sighted people. Purposes Clocks are in homes, offices and many other places; smaller ones (watches) are carried on the wrist or in a pocket; larger ones are in public places, e.g. a railway station or church. A small clock is often shown in a corner of computer displays, mobile phones and many MP3 players. The primary purpose of a clock is to display the time. Clocks may also have the facility to make a loud alert signal at a specified time, typically to waken a sleeper at a preset time; they are referred to as alarm clocks. The alarm may start at a low volume and become louder, or have the facility to be switched off for a few minutes then resume. Alarm clocks with visible indicators are sometimes used to indicate to children too young to read the time that the time for sleep has finished; they are sometimes called training clocks. A clock mechanism may be used to control a device according to time, e.g. a central heating system, a VCR, or a time bomb (see: digital counter). Such mechanisms are usually called timers. Clock mechanisms are also used to drive devices such as solar trackers and astronomical telescopes, which have to turn at accurately controlled speeds to counteract the rotation of the Earth. 
Most digital computers depend on an internal signal at constant frequency to synchronize processing; this is referred to as a clock signal. (A few research projects are developing CPUs based on asynchronous circuits.) Some equipment, including computers, also maintains time and date for use as required; this is referred to as a time-of-day clock, and is distinct from the system clock signal, although possibly based on counting its cycles. Time standards For some scientific work, timing of the utmost accuracy is essential. It is also necessary to have a standard of the maximum accuracy against which working clocks can be calibrated. An ideal clock would give the time to unlimited accuracy, but this is not realisable. Many physical processes, in particular including some transitions between atomic energy levels, occur at an exceedingly stable frequency; counting cycles of such a process can give a very accurate and consistent time—clocks which work this way are usually called atomic clocks. Such clocks are typically large, very expensive, require a controlled environment, and are far more accurate than required for most purposes; they are typically used in a standards laboratory. Navigation Until advances in the late twentieth century, navigation depended on the ability to measure latitude and longitude. Latitude can be determined through celestial navigation; the measurement of longitude requires accurate knowledge of time. This need was a major motivation for the development of accurate mechanical clocks. John Harrison created the first highly accurate marine chronometer in the mid-18th century. The Noon gun in Cape Town still fires an accurate signal to allow ships to check their chronometers. Many buildings near major ports used to have (some still do) a large ball mounted on a tower or mast arranged to drop at a pre-determined time, for the same purpose. While satellite navigation systems such as GPS require unprecedentedly accurate knowledge of time, this is supplied by equipment on the satellites; vehicles no longer need timekeeping equipment. Sports and games Clocks can be used to measure varying periods of time in games and sports. Stopwatches can be used to time the performance of track athletes. Chess clocks are used to limit the board game players' time to make a move. In various sports, game clocks measure the duration of the game or of its subdivisions, while other clocks may be used for tracking different durations; these include play clocks, shot clocks, and pitch clocks. Culture Folklore and superstition In the United Kingdom, clocks are associated with various beliefs, many involving death or bad luck. In legends, clocks have reportedly stopped of their own accord upon a nearby person's death, especially those of monarchs. The clock in the House of Lords supposedly stopped at "nearly" the hour of George III's death in 1820, the one at Balmoral Castle stopped during the hour of Queen Victoria's death, and similar legends are related about clocks associated with William IV and Elizabeth I. Many superstitions exist about clocks. One stopping before a person has died may foretell coming death. Similarly, if a clock strikes during a church hymn or a marriage ceremony, death or calamity is prefigured for the parishioners or a spouse, respectively. Death or ill events are foreshadowed if a clock strikes the wrong time. It may also be unlucky to have a clock face a fire or to speak while a clock is striking. 
In Chinese culture, giving a clock () is often taboo, especially to the elderly, as it is a homophone of the act of attending another's funeral (). Specific types Awards (GPHG)
Technology
Navigation and timekeeping
null
6458
https://en.wikipedia.org/wiki/Ceramic
Ceramic
A ceramic is any of the various hard, brittle, heat-resistant, and corrosion-resistant materials made by shaping and then firing an inorganic, nonmetallic material, such as clay, at a high temperature. Common examples are earthenware, porcelain, and brick. The earliest ceramics made by humans were fired clay bricks used for building house walls and other structures. Other pottery objects such as pots, vessels, vases and figurines were made from clay, either by itself or mixed with other materials like silica, hardened by sintering in fire. Later, ceramics were glazed and fired to create smooth, colored surfaces, decreasing porosity through the use of glassy, amorphous ceramic coatings on top of the crystalline ceramic substrates. Ceramics now include domestic, industrial, and building products, as well as a wide range of materials developed for use in advanced ceramic engineering, such as semiconductors. The word ceramic comes from the Ancient Greek word (), meaning "of or for pottery" (). The earliest known mention of the root ceram- is the Mycenaean Greek , workers of ceramic, written in Linear B syllabic script. The word ceramic can be used as an adjective to describe a material, product, or process, or it may be used as a noun, either singular or, more commonly, as the plural noun ceramics. Materials Ceramic material is an inorganic, metallic oxide, nitride, or carbide material. Some elements, such as carbon or silicon, may be considered ceramics. Ceramic materials are brittle, hard, strong in compression, and weak in shearing and tension. They withstand the chemical erosion that occurs in other materials subjected to acidic or caustic environments. Ceramics generally can withstand very high temperatures, ranging from 1,000 °C to 1,600 °C (1,800 °F to 3,000 °F). The crystallinity of ceramic materials varies widely. Most often, fired ceramics are either vitrified or semi-vitrified, as is the case with earthenware, stoneware, and porcelain. Varying crystallinity and electron composition in the ionic and covalent bonds cause most ceramic materials to be good thermal and electrical insulators (researched in ceramic engineering). With such a large range of possible options for the composition/structure of a ceramic (nearly all of the elements, nearly all types of bonding, and all levels of crystallinity), the breadth of the subject is vast, and identifiable attributes (hardness, toughness, electrical conductivity) are difficult to specify for the group as a whole. General properties such as high melting temperature, high hardness, poor conductivity, high moduli of elasticity, chemical resistance, and low ductility are the norm, with known exceptions to each of these rules (piezoelectric ceramics, low glass transition temperature ceramics, superconductive ceramics). Composites such as fiberglass and carbon fiber, while containing ceramic materials, are not considered to be part of the ceramic family. Highly oriented crystalline ceramic materials are not amenable to a great range of processing. Methods for dealing with them tend to fall into one of two categories: either making the ceramic in the desired shape by reaction in situ or "forming" powders into the desired shape and then sintering to form a solid body. Ceramic forming techniques include shaping by hand (sometimes including a rotation process called "throwing"), slip casting, tape casting (used for making very thin ceramic capacitors), injection molding, dry pressing, and other variations. 
Many ceramics experts do not consider materials with an amorphous (noncrystalline) character (i.e., glass) to be ceramics, even though glassmaking involves several steps of the ceramic process and its mechanical properties are similar to those of ceramic materials. However, heat treatments can convert glass into a semi-crystalline material known as glass-ceramic. Traditional ceramic raw materials include clay minerals such as kaolinite, whereas more recent materials include aluminium oxide, more commonly known as alumina. Modern ceramic materials, which are classified as advanced ceramics, include silicon carbide and tungsten carbide. Both are valued for their abrasion resistance and are therefore used in applications such as the wear plates of crushing equipment in mining operations. Advanced ceramics are also used in the medical, electrical, electronics, and armor industries. History Human beings appear to have been making their own ceramics for at least 26,000 years, subjecting clay and silica to intense heat to fuse and form ceramic materials. The earliest found so far were in southern central Europe and were sculpted figures, not dishes. The earliest known pottery was made by mixing animal products with clay and firing it at up to . While pottery fragments have been found up to 19,000 years old, it was not until about 10,000 years later that regular pottery became common. An early people that spread across much of Europe is named after its use of pottery: the Corded Ware culture. These early Indo-European peoples decorated their pottery by wrapping it with rope while it was still wet. When the ceramics were fired, the rope burned off but left a decorative pattern of complex grooves on the surface. The invention of the wheel eventually led to the production of smoother, more even pottery using the wheel-forming (throwing) technique on a pottery wheel. Early ceramics were porous, absorbing water easily. Pottery became useful for more purposes with the discovery of glazing techniques, which involved coating pottery with silicon, bone ash, or other materials that could melt and reform into a glassy surface, making a vessel less pervious to water. Archaeology Ceramic artifacts have an important role in archaeology for understanding the culture, technology, and behavior of peoples of the past. They are among the most common artifacts to be found at an archaeological site, generally in the form of small fragments of broken pottery called sherds. The processing of collected sherds generally follows two main types of analysis: technical and traditional. The traditional analysis involves sorting ceramic artifacts, sherds, and larger fragments into specific types based on style, composition, manufacturing, and morphology. By creating these typologies, it is possible to distinguish between different cultural styles, the purpose of the ceramic, and the technological state of the people, among other conclusions. In addition, by looking at stylistic changes in ceramics over time, it is possible to separate (seriate) the ceramics into distinct diagnostic groups (assemblages). A comparison of ceramic artifacts with known dated assemblages allows for a chronological assignment of these pieces. The technical approach to ceramic analysis involves a finer examination of the composition of ceramic artifacts and sherds to determine the source of the material and, through this, the possible manufacturing site. 
Key criteria are the composition of the clay and the temper used in the manufacture of the article under study: the temper is a material added to the clay during the initial production stage and is used to aid the subsequent drying process. Types of temper include shell pieces, granite fragments, and ground sherd pieces called 'grog'. Temper is usually identified by microscopic examination of the tempered material. Clay identification is determined by a process of refiring the ceramic and assigning a color to it using Munsell Soil Color notation. By estimating both the clay and temper compositions and locating a region where both are known to occur, an assignment of the material source can be made. Based on the source assignment of the artifact, further investigations can be made into the site of manufacture. Properties The physical properties of any ceramic substance are a direct result of its crystalline structure and chemical composition. Solid-state chemistry reveals the fundamental connection between microstructure and properties, such as localized density variations, grain size distribution, type of porosity, and second-phase content, which can all be correlated with ceramic properties such as mechanical strength σ by the Hall-Petch equation, hardness, toughness, dielectric constant, and the optical properties exhibited by transparent materials. Ceramography is the art and science of preparation, examination, and evaluation of ceramic microstructures. Evaluation and characterization of ceramic microstructures are often implemented on similar spatial scales to that used commonly in the emerging field of nanotechnology: from nanometers to tens of micrometers (µm). This is typically somewhere between the minimum wavelength of visible light and the resolution limit of the naked eye. The microstructure includes most grains, secondary phases, grain boundaries, pores, micro-cracks, structural defects, and hardness micro indentions. Most bulk mechanical, optical, thermal, electrical, and magnetic properties are significantly affected by the observed microstructure. The fabrication method and process conditions are generally indicated by the microstructure. The root cause of many ceramic failures is evident in the cleaved and polished microstructure. Physical properties which constitute the field of materials science and engineering include the following: Mechanical properties Mechanical properties are important in structural and building materials as well as textile fabrics. In modern materials science, fracture mechanics is an important tool in improving the mechanical performance of materials and components. It applies the physics of stress and strain, in particular the theories of elasticity and plasticity, to the microscopic crystallographic defects found in real materials in order to predict the macroscopic mechanical failure of bodies. Fractography is widely used with fracture mechanics to understand the causes of failures and also verify the theoretical failure predictions with real-life failures. Ceramic materials are usually ionic or covalent bonded materials. A material held together by either type of bond will tend to fracture before any plastic deformation takes place, which results in poor toughness in these materials. Additionally, because these materials tend to be porous, the pores and other microscopic imperfections act as stress concentrators, decreasing the toughness further, and reducing the tensile strength. 
These combine to give catastrophic failures, as opposed to the more ductile failure modes of metals. These materials do show plastic deformation. However, because of the rigid structure of crystalline material, there are very few available slip systems for dislocations to move, and so they deform very slowly. To overcome the brittle behavior, ceramic material development has introduced the class of ceramic matrix composite materials, in which ceramic fibers are embedded and, with specific coatings, form fiber bridges across any crack. This mechanism substantially increases the fracture toughness of such ceramics. Ceramic disc brakes are an example of using a ceramic matrix composite material manufactured with a specific process. Scientists are working on developing ceramic materials that can withstand significant deformation without breaking. A first such material that can deform at room temperature was found in 2024. Ice-templating for enhanced mechanical properties If a ceramic is subjected to substantial mechanical loading, it can undergo a process called ice-templating, which allows some control of the microstructure of the ceramic product and therefore some control of the mechanical properties. Ceramic engineers use this technique to tune the mechanical properties to their desired application. Specifically, the strength is increased when this technique is employed. Ice templating allows the creation of macroscopic pores in a unidirectional arrangement. The applications of this oxide strengthening technique are important for solid oxide fuel cells and water filtration devices. To process a sample through ice templating, an aqueous colloidal suspension is prepared to contain the dissolved ceramic powder evenly dispersed throughout the colloid, for example Yttria-stabilized zirconia (YSZ). The solution is then cooled from the bottom to the top on a platform that allows for unidirectional cooling. This forces ice crystals to grow in compliance with the unidirectional cooling, and these ice crystals force the dissolved YSZ particles to the solidification front of the solid-liquid interphase boundary, resulting in pure ice crystals lined up unidirectionally alongside concentrated pockets of colloidal particles. The sample is then heated and at the same time the pressure is reduced enough to force the ice crystals to sublime, and the YSZ pockets begin to anneal together to form macroscopically aligned ceramic microstructures. The sample is then further sintered to complete the evaporation of the residual water and the final consolidation of the ceramic microstructure. During ice-templating, a few variables can be controlled to influence the pore size and morphology of the microstructure. These important variables are the initial solids loading of the colloid, the cooling rate, the sintering temperature and duration, and the use of certain additives which can influence the microstructural morphology during the process. A good understanding of these parameters is essential to understanding the relationships between processing, microstructure, and mechanical properties of anisotropically porous materials. Electrical properties Semiconductors Some ceramics are semiconductors. Most of these are transition metal oxides that are II-VI semiconductors, such as zinc oxide. While there are prospects of mass-producing blue LEDs from zinc oxide, ceramicists are most interested in the electrical properties that show grain boundary effects. One of the most widely used of these is the varistor. 
These are devices that exhibit the property that resistance drops sharply at a certain threshold voltage. Once the voltage across the device reaches the threshold, there is a breakdown of the electrical structure in the vicinity of the grain boundaries, which results in its electrical resistance dropping from several megohms down to a few hundred ohms. The major advantage of these is that they can dissipate a lot of energy, and they self-reset; after the voltage across the device drops below the threshold, its resistance returns to being high. This makes them ideal for surge-protection applications; as there is control over the threshold voltage and energy tolerance, they find use in all sorts of applications. The best demonstration of their ability can be found in electrical substations, where they are employed to protect the infrastructure from lightning strikes. They have rapid response, are low maintenance, and do not appreciably degrade from use, making them virtually ideal devices for this application. Semiconducting ceramics are also employed as gas sensors. When various gases are passed over a polycrystalline ceramic, its electrical resistance changes. With tuning to the possible gas mixtures, very inexpensive devices can be produced. Superconductivity Under some conditions, such as extremely low temperatures, some ceramics exhibit high-temperature superconductivity (in superconductivity, "high temperature" means above 30 K). The reason for this is not understood, but there are two major families of superconducting ceramics. Ferroelectricity and supersets Piezoelectricity, a link between electrical and mechanical response, is exhibited by a large number of ceramic materials, including the quartz used to measure time in watches and other electronics. Such devices use both properties of piezoelectrics, using electricity to produce a mechanical motion (powering the device) and then using this mechanical motion to produce electricity (generating a signal). The unit of time measured is the natural interval required for electricity to be converted into mechanical energy and back again. The piezoelectric effect is generally stronger in materials that also exhibit pyroelectricity, and all pyroelectric materials are also piezoelectric. These materials can be used to inter-convert between thermal, mechanical, or electrical energy; for instance, after synthesis in a furnace, a pyroelectric crystal allowed to cool under no applied stress generally builds up a static charge of thousands of volts. Such materials are used in motion sensors, where the tiny rise in temperature from a warm body entering the room is enough to produce a measurable voltage in the crystal. In turn, pyroelectricity is seen most strongly in materials that also display the ferroelectric effect, in which a stable electric dipole can be oriented or reversed by applying an electrostatic field. Pyroelectricity is also a necessary consequence of ferroelectricity. This can be used to store information in ferroelectric capacitors, elements of ferroelectric RAM. The most common such materials are lead zirconate titanate and barium titanate. Aside from the uses mentioned above, their strong piezoelectric response is exploited in the design of high-frequency loudspeakers, transducers for sonar, and actuators for atomic force and scanning tunneling microscopes. 
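As a rough illustration of the varistor behaviour described above, a simple empirical model (a steep power law in voltage in parallel with a fixed leakage resistance) shows how the effective resistance collapses above the threshold. All constants here are assumed, broadly ZnO-like illustration values, not data from any source.

```python
# Rough empirical model of a metal-oxide varistor: a leakage path in parallel
# with a steep power-law conduction term. Constants are illustration-only.
def varistor_current(voltage: float,
                     r_leak_ohm: float = 5e6,   # leakage below threshold
                     k_amps: float = 1e-4,      # current scale at v_ref
                     v_ref: float = 200.0,      # nominal threshold voltage
                     alpha: float = 30.0) -> float:
    return voltage / r_leak_ohm + k_amps * (voltage / v_ref) ** alpha

for v in (100, 150, 200, 250, 300):
    i = varistor_current(v)
    print(f"{v:3d} V: I = {i:.2e} A, effective R = {v / i:,.0f} ohm")
```

The steep exponent is what makes such a device useful for surge protection: a modest overvoltage produces a large increase in current through the varistor rather than through the protected equipment.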
Positive thermal coefficient Temperature increases can cause grain boundaries to suddenly become insulating in some semiconducting ceramic materials, mostly mixtures of heavy metal titanates. The critical transition temperature can be adjusted over a wide range by variations in chemistry. In such materials, current will pass through the material until joule heating brings it to the transition temperature, at which point the circuit will be broken and current flow will cease. Such ceramics are used as self-controlled heating elements in, for example, the rear-window defrost circuits of automobiles. At the transition temperature, the material's dielectric response becomes theoretically infinite. While a lack of temperature control would rule out any practical use of the material near its critical temperature, the dielectric effect remains exceptionally strong even at much higher temperatures. Titanates with critical temperatures far below room temperature have become synonymous with "ceramic" in the context of ceramic capacitors for just this reason. Optical properties Work on optically transparent materials focuses on the response of a material to incoming light waves of a range of wavelengths. Frequency selective optical filters can be utilized to alter or enhance the brightness and contrast of a digital image. Guided lightwave transmission via frequency selective waveguides involves the emerging field of fiber optics and the ability of certain glassy compositions to act as a transmission medium for a range of frequencies simultaneously (multi-mode optical fiber) with little or no interference between competing wavelengths or frequencies. This resonant mode of energy and data transmission via electromagnetic (light) wave propagation, though low powered, is virtually lossless. Optical waveguides are used as components in integrated optical circuits (e.g. light-emitting diodes, LEDs) or as the transmission medium in local and long haul optical communication systems. Also of value to the emerging materials scientist is the sensitivity of materials to radiation in the thermal infrared (IR) portion of the electromagnetic spectrum. This heat-seeking ability is responsible for such diverse optical phenomena as night-vision and IR luminescence. Thus, there is an increasing need in the military sector for high-strength, robust materials which have the capability to transmit light (electromagnetic waves) in the visible (0.4 – 0.7 micrometers) and mid-infrared (1 – 5 micrometers) regions of the spectrum. These materials are needed for applications requiring transparent armor, including next-generation high-speed missiles and pods, as well as protection against improvised explosive devices (IED). In the 1960s, scientists at General Electric (GE) discovered that under the right manufacturing conditions, some ceramics, especially aluminium oxide (alumina), could be made translucent. These translucent materials were transparent enough to be used for containing the electrical plasma generated in high-pressure sodium street lamps. During the past two decades, additional types of transparent ceramics have been developed for applications such as nose cones for heat-seeking missiles, windows for fighter aircraft, and scintillation counters for computed tomography scanners. 
Other ceramic materials, generally requiring greater purity in their make-up than those above, include forms of several chemical compounds, including: Barium titanate: (often mixed with strontium titanate) displays ferroelectricity, meaning that its mechanical, electrical, and thermal responses are coupled to one another. Sialon (silicon aluminium oxynitride) has high strength; resistance to thermal shock, chemical and wear resistance, and low density. These ceramics are used in non-ferrous molten metal handling, weld pins, and the chemical industry. Silicon carbide (SiC) is used as a susceptor in microwave furnaces, a commonly used abrasive, and as a refractory material. Silicon nitride (Si3N4) is used as an abrasive powder. Steatite (magnesium silicates) is used as an electrical insulator. Titanium carbide, used in space shuttle re-entry shields and scratchproof watches. Uranium oxide (UO2), used as fuel in nuclear reactors. Yttrium barium copper oxide (YBa2Cu3O7−x), a high-temperature superconductor. Zinc oxide (ZnO), which is a semiconductor, and used in the construction of varistors. Zirconium dioxide (zirconia), which in pure form undergoes many phase changes between room temperature and practical sintering temperatures, can be chemically "stabilized" in several different forms. Its high oxygen ion conductivity recommends it for use in fuel cells and automotive oxygen sensors. In another variant, metastable structures can impart transformation toughening for mechanical applications; most ceramic knife blades are made of this material. Partially stabilised zirconia (PSZ) is much less brittle than other ceramics and is used for metal forming tools, valves and liners, abrasive slurries, kitchen knives and bearings subject to severe abrasion. Products By usage For convenience, ceramic products are usually divided into four main types; these are shown below with some examples: Structural, including bricks, pipes, floor and roof tiles, vitrified tile Refractories, such as kiln linings, gas fire radiants, steel and glass making crucibles Whitewares, including tableware, cookware, wall tiles, pottery products and sanitary ware Technical, also known as engineering, advanced, special, and fine ceramics. Such items include: gas burner nozzles ballistic protection, vehicle armor nuclear fuel uranium oxide pellets biomedical implants coatings of jet engine turbine blades ceramic matrix composite gas turbine parts reinforced carbon–carbon ceramic disc brakes missile nose cones bearings thermal insulation tiles used on the Space Shuttle orbiter Ceramics made with clay Frequently, the raw materials of modern ceramics do not include clays. Those that do have been classified as: Earthenware, fired at lower temperatures than other types Stoneware, vitreous or semi-vitreous Porcelain, which contains a high content of kaolin Bone china Classification Ceramics can also be classified into three distinct material categories: Oxides: alumina, beryllia, ceria, zirconia Non-oxides: carbide, boride, nitride, silicide Composite materials: particulate reinforced, fiber reinforced, combinations of oxides and non-oxides. Each one of these classes can be developed into unique material properties. Applications Knife blades: the blade of a ceramic knife will stay sharp for much longer than that of a steel knife, although it is more brittle and susceptible to breakage. Carbon-ceramic brake disks for vehicles: highly resistant to brake fade at high temperatures. 
Advanced composite ceramic and metal matrices have been designed for most modern Armoured fighting vehicles because they offer superior penetrating resistance against shaped charge (HEAT rounds) and kinetic energy penetrators. Ceramics such as alumina and boron carbide have been used as plates in ballistic armored vests to repel high-velocity rifle fire. Such plates are known commonly as small arms protective inserts, or SAPIs. Similar low-weight material is used to protect the cockpits of some military aircraft. Ceramic ball bearings can be used in place of steel. Their greater hardness results in lower susceptibility to wear. Ceramic bearings typically last triple the lifetime of steel bearings. They deform less than steel under load, resulting in less contact with the bearing retainer walls and lower friction. In very high-speed applications, heat from friction causes more problems for metal bearings than ceramic bearings. Ceramics are chemically resistant to corrosion and are preferred for environments where steel bearings would rust. In some applications their electricity-insulating properties are advantageous. Drawbacks to ceramic bearings include significantly higher cost, susceptibility to damage under shock loads, and the potential to wear steel parts due to ceramics' greater hardness. In the early 1980s Toyota researched production of an adiabatic engine using ceramic components in the hot gas area. The use of ceramics would have allowed temperatures exceeding 1650 °C. Advantages would include lighter materials and a smaller cooling system (or no cooling system at all), leading to major weight reduction. The expected increase of fuel efficiency (due to higher operating temperatures, demonstrated in Carnot's theorem) could not be verified experimentally. It was found that heat transfer on the hot ceramic cylinder wall was greater than the heat transfer to a cooler metal wall. This is because the cooler gas film on a metal surface acts as a thermal insulator. Thus, despite the desirable properties of ceramics, prohibitive production costs and limited advantages have prevented widespread ceramic engine component adoption. In addition, small imperfections in ceramic material along with low fracture toughness can lead to cracking and potentially dangerous equipment failure. Such engines are possible experimentally, but mass production is not feasible with current technology. Experiments with ceramic parts for gas turbine engines are being conducted. Currently, even blades made of advanced metal alloys used in the engines' hot section require cooling and careful monitoring of operating temperatures. Turbine engines made with ceramics could operate more efficiently, providing for greater range and payload. Recent advances have been made in ceramics which include bioceramics such as dental implants and synthetic bones. Hydroxyapatite, the major mineral component of bone, has been made synthetically from several biological and chemical components and can be formed into ceramic materials. Orthopedic implants coated with these materials bond readily to bone and other tissues in the body without rejection or inflammatory reaction. They are of great interest for gene delivery and tissue engineering scaffolding. Most hydroxyapatite ceramics are quite porous and lack mechanical strength and are therefore used solely to coat metal orthopedic devices to aid in forming a bond to bone or as bone fillers. 
They are also used as fillers for orthopedic plastic screws to aid in reducing inflammation and increase the absorption of these plastic materials. Work is being done to make strong, fully dense nanocrystalline hydroxyapatite ceramic materials for orthopedic weight bearing devices, replacing foreign metal and plastic orthopedic materials with a synthetic but naturally occurring bone mineral. Ultimately, these ceramic materials may be used as bone replacement or, with the incorporation of protein collagens, for the manufacture of synthetic bones. Applications for actinide-containing ceramic materials include nuclear fuels for burning excess plutonium (Pu), or a chemically inert source of alpha radiation in power supplies for uncrewed space vehicles or microelectronic devices. Use and disposal of radioactive actinides require immobilization in a durable host material. Long half-life radionuclides such as actinides are immobilized using chemically durable crystalline materials based on polycrystalline ceramics and large single crystals. High-tech ceramics are used for producing watch cases. The material is valued by watchmakers for its light weight, scratch resistance, durability, and smooth touch. IWC is one of the brands that pioneered the use of ceramic in watchmaking. Ceramics are used in the design of mobile phone bodies due to their high hardness, resistance to scratches, and ability to dissipate heat. Ceramic's thermal management properties help in maintaining optimal device temperatures during heavy use, enhancing performance. Additionally, ceramic materials can support wireless charging and offer better signal transmission compared to metals, which can interfere with antennas. Companies like Apple and Samsung have incorporated ceramic in their devices. Silicon carbide ceramics are used in pump and valve components because of their corrosion resistance. They are also used in nuclear reactors as fuel cladding material due to their ability to withstand radiation and thermal stress. Other uses of silicon carbide ceramics include paper manufacturing, ballistics, chemical production, and pipe system components.
Technology
Material and chemical
null
6513
https://en.wikipedia.org/wiki/Client%E2%80%93server%20model
Client–server model
The client–server model is a distributed application structure that partitions tasks or workloads between the providers of a resource or service, called servers, and service requesters, called clients. Often clients and servers communicate over a computer network on separate hardware, but both client and server may be on the same device. A server host runs one or more server programs, which share their resources with clients. A client usually does not share any of its resources, but it requests content or service from a server. Clients, therefore, initiate communication sessions with servers, which await incoming requests. Examples of computer applications that use the client–server model are email, network printing, and the World Wide Web. Client and server role The server component provides a function or service to one or many clients, which initiate requests for such services. Servers are classified by the services they provide. For example, a web server serves web pages and a file server serves computer files. A shared resource may be any of the server computer's software and electronic components, from programs and data to processors and storage devices. The sharing of resources of a server constitutes a service. Whether a computer is a client, a server, or both, is determined by the nature of the application that requires the service functions. For example, a single computer can run a web server and file server software at the same time to serve different data to clients making different kinds of requests. The client software can also communicate with server software within the same computer. Communication between servers, such as to synchronize data, is sometimes called inter-server or server-to-server communication. Client and server communication Generally, a service is an abstraction of computer resources and a client does not have to be concerned with how the server performs while fulfilling the request and delivering the response. The client only has to understand the response based on the relevant application protocol, i.e. the content and the formatting of the data for the requested service. Clients and servers exchange messages in a request–response messaging pattern. The client sends a request, and the server returns a response. This exchange of messages is an example of inter-process communication. To communicate, the computers must have a common language, and they must follow rules so that both the client and the server know what to expect. The language and rules of communication are defined in a communications protocol. All protocols operate in the application layer. The application layer protocol defines the basic patterns of the dialogue. To formalize the data exchange even further, the server may implement an application programming interface (API). The API is an abstraction layer for accessing a service. By restricting communication to a specific content format, it facilitates parsing. By abstracting access, it facilitates cross-platform data exchange. A server may receive requests from many distinct clients in a short period. A computer can only perform a limited number of tasks at any moment, and relies on a scheduling system to prioritize incoming requests from clients to accommodate them. To prevent abuse and maximize availability, the server software may limit the availability to clients. Denial of service attacks are designed to exploit a server's obligation to process requests by overloading it with excessive request rates. 
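As a minimal sketch of the request-response exchange described above: the port number, the threading arrangement, and the one-line text "protocol" are all invented for illustration; real services use standardized application-layer protocols such as HTTP or SMTP.

```python
# Minimal sketch of the request-response pattern over TCP using only the
# Python standard library. Constants and the tiny "protocol" are invented.
import socket
import threading

HOST, PORT = "127.0.0.1", 5050
srv = socket.create_server((HOST, PORT))  # server socket, already listening

def serve_once() -> None:
    """Server side: await one incoming request and return a response."""
    conn, _addr = srv.accept()
    with conn:
        request = conn.recv(1024).decode()
        conn.sendall(f"echo: {request}".encode())
    srv.close()

threading.Thread(target=serve_once, daemon=True).start()

# Client side: initiate the session, send a request, read the response.
with socket.create_connection((HOST, PORT)) as cli:
    cli.sendall(b"hello")
    print(cli.recv(1024).decode())  # -> "echo: hello"
```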
Encryption should be applied if sensitive information is to be communicated between the client and the server. Example When a bank customer accesses online banking services with a web browser (the client), the client initiates a request to the bank's web server. The customer's login credentials may be stored in a database, and the webserver accesses the database server as a client. An application server interprets the returned data by applying the bank's business logic and provides the output to the webserver. Finally, the webserver returns the result to the client web browser for display. In each step of this sequence of client–server message exchanges, a computer processes a request and returns data. This is the request-response messaging pattern. When all the requests are met, the sequence is complete and the web browser presents the data to the customer. This example illustrates a design pattern applicable to the client–server model: separation of concerns. Server-side Server-side refers to programs and operations that run on the server. This is in contrast to client-side programs and operations which run on the client. (See below) General concepts "Server-side software" refers to a computer application, such as a web server, that runs on remote server hardware, reachable from a user's local computer, smartphone, or other device. Operations may be performed server-side because they require access to information or functionality that is not available on the client, or because performing such operations on the client side would be slow, unreliable, or insecure. Client and server programs may be commonly available ones such as free or commercial web servers and web browsers, communicating with each other using standardized protocols. Or, programmers may write their own server, client, and communications protocol which can only be used with one another. Server-side operations include both those that are carried out in response to client requests, and non-client-oriented operations such as maintenance tasks. Computer security In a computer security context, server-side vulnerabilities or attacks refer to those that occur on a server computer system, rather than on the client side, or in between the two. For example, an attacker might exploit an SQL injection vulnerability in a web application in order to maliciously change or gain unauthorized access to data in the server's database. Alternatively, an attacker might break into a server system using vulnerabilities in the underlying operating system and then be able to access database and other files in the same manner as authorized administrators of the server. Examples In the case of distributed computing projects such as SETI@home and the Great Internet Mersenne Prime Search, while the bulk of the operations occur on the client side, the servers are responsible for coordinating the clients, sending them data to analyze, receiving and storing results, providing reporting functionality to project administrators, etc. In the case of an Internet-dependent user application like Google Earth, while querying and display of map data takes place on the client side, the server is responsible for permanent storage of map data, resolving user queries into map data to be returned to the client, etc. 
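The SQL injection risk mentioned above is conventionally mitigated on the server side with parameterized queries. A minimal sketch using Python's built-in sqlite3 module follows; the table and column names are invented for illustration.

```python
# Sketch of guarding a server-side database lookup against SQL injection.
# The point is that the query binds the value as a parameter instead of
# building SQL by string concatenation. Table/column names are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, balance REAL)")
conn.execute("INSERT INTO users VALUES ('alice', 120.0)")

def get_balance(username: str):
    # Vulnerable pattern (do NOT do this):
    #   conn.execute(f"SELECT balance FROM users WHERE name = '{username}'")
    # Safe pattern: the driver treats input like "alice' OR '1'='1"
    # as literal data, not as SQL.
    row = conn.execute(
        "SELECT balance FROM users WHERE name = ?", (username,)
    ).fetchone()
    return row[0] if row else None

print(get_balance("alice"))             # 120.0
print(get_balance("alice' OR '1'='1"))  # None - the injection attempt finds nothing
```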
In the context of the World Wide Web, commonly encountered server-side computer languages include: C# or Visual Basic in ASP.NET environments Java Perl PHP Python Ruby Node.js Swift However, web applications and services can be implemented in almost any language, as long as they can return data to standards-based web browsers (possibly via intermediary programs) in formats which they can use. Client side Client-side refers to operations that are performed by the client in a computer network. General concepts Typically, a client is a computer application, such as a web browser, that runs on a user's local computer, smartphone, or other device, and connects to a server as necessary. Operations may be performed client-side because they require access to information or functionality that is available on the client but not on the server, because the user needs to observe the operations or provide input, or because the server lacks the processing power to perform the operations in a timely manner for all of the clients it serves. Additionally, if operations can be performed by the client, without sending data over the network, they may take less time, use less bandwidth, and incur a lesser security risk. When the server serves data in a commonly used manner, for example according to standard protocols such as HTTP or FTP, users may have their choice of a number of client programs (e.g. most modern web browsers can request and receive data using both HTTP and FTP). In the case of more specialized applications, programmers may write their own server, client, and communications protocol which can only be used with one another. Programs that run on a user's local computer without ever sending or receiving data over a network are not considered clients, and so the operations of such programs would not be termed client-side operations. Computer security In a computer security context, client-side vulnerabilities or attacks refer to those that occur on the client / user's computer system, rather than on the server side, or in between the two. As an example, if a server contained an encrypted file or message which could only be decrypted using a key housed on the user's computer system, a client-side attack would normally be an attacker's only opportunity to gain access to the decrypted contents. For instance, the attacker might cause malware to be installed on the client system, allowing the attacker to view the user's screen, record the user's keystrokes, and steal copies of the user's encryption keys, etc. Alternatively, an attacker might employ cross-site scripting vulnerabilities to execute malicious code on the client's system without needing to install any permanently resident malware. Examples Distributed computing projects such as SETI@home and the Great Internet Mersenne Prime Search, as well as Internet-dependent applications like Google Earth, rely primarily on client-side operations. They initiate a connection with the server (either in response to a user query, as with Google Earth, or in an automated fashion, as with SETI@home), and request some data. The server selects a data set (a server-side operation) and sends it back to the client. The client then analyzes the data (a client-side operation), and, when the analysis is complete, displays it to the user (as with Google Earth) and/or transmits the results of calculations back to the server (as with SETI@home). 
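A minimal sketch of the fetch-analyze-report pattern just described follows. The server interaction is stubbed out with local functions so the example is self-contained; in a real client these would be network requests to the project's servers, and all names here are invented.

```python
# Sketch of the fetch-analyze-report pattern (as in SETI@home-style clients).
# Server calls are stubbed with local functions; names are illustrative only.

def fetch_work_unit() -> list[int]:
    # stand-in for "request some data from the server"
    return [3, 1, 4, 1, 5, 9, 2, 6]

def report_result(result: int) -> None:
    # stand-in for "transmit the results of calculations back to the server"
    print(f"reporting result: {result}")

def analyze(data: list[int]) -> int:
    # the actual client-side operation: performed locally, using no bandwidth
    return max(data)

work = fetch_work_unit()   # server selects a data set (server-side operation)
result = analyze(work)     # client analyzes it (client-side operation)
report_result(result)      # client sends the result back
```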
In the context of the World Wide Web, commonly encountered computer languages which are evaluated or run on the client side include: Cascading Style Sheets (CSS) HTML JavaScript Early history An early form of client–server architecture is remote job entry, dating at least to OS/360 (announced 1964), where the request was to run a job, and the response was the output. While formulating the client–server model in the 1960s and 1970s, computer scientists building ARPANET (at the Stanford Research Institute) used the terms server-host (or serving host) and user-host (or using-host), and these appear in the early documents RFC 5 and RFC 4. This usage was continued at Xerox PARC in the mid-1970s. One context in which researchers used these terms was in the design of a computer network programming language called Decode-Encode Language (DEL). The purpose of this language was to accept commands from one computer (the user-host), which would return status reports to the user as it encoded the commands in network packets. Another DEL-capable computer, the server-host, received the packets, decoded them, and returned formatted data to the user-host. A DEL program on the user-host received the results to present to the user. This is a client–server transaction. Development of DEL was just beginning in 1969, the year that the United States Department of Defense established ARPANET (predecessor of Internet). Client-host and server-host Client-host and server-host have subtly different meanings than client and server. A host is any computer connected to a network. Whereas the words server and client may refer either to a computer or to a computer program, server-host and client-host always refer to computers. The host is a versatile, multifunction computer; clients and servers are just programs that run on a host. In the client–server model, a server is more likely to be devoted to the task of serving. An early use of the word client occurs in "Separating Data from Function in a Distributed File System", a 1978 paper by Xerox PARC computer scientists Howard Sturgis, James Mitchell, and Jay Israel. The authors are careful to define the term for readers, and explain that they use it to distinguish between the user and the user's network node (the client). By 1992, the word server had entered into general parlance. Centralized computing The client-server model does not dictate that server-hosts must have more resources than client-hosts. Rather, it enables any general-purpose computer to extend its capabilities by using the shared resources of other hosts. Centralized computing, however, specifically allocates a large number of resources to a small number of computers. The more computation is offloaded from client-hosts to the central computers, the simpler the client-hosts can be. It relies heavily on network resources (servers and infrastructure) for computation and storage. A diskless node loads even its operating system from the network, and a computer terminal has no operating system at all; it is only an input/output interface to the server. In contrast, a rich client, such as a personal computer, has many resources and does not rely on a server for essential functions. As microcomputers decreased in price and increased in power from the 1980s to the late 1990s, many organizations transitioned computation from centralized servers, such as mainframes and minicomputers, to rich clients. 
This afforded greater, more individualized dominion over computer resources, but complicated information technology management. During the 2000s, web applications matured enough to rival application software developed for a specific microarchitecture. This maturation, more affordable mass storage, and the advent of service-oriented architecture were among the factors that gave rise to the cloud computing trend of the 2010s. Comparison with peer-to-peer architecture In addition to the client-server model, distributed computing applications often use the peer-to-peer (P2P) application architecture. In the client-server model, the server is often designed to operate as a centralized system that serves many clients. The computing power, memory and storage requirements of a server must be scaled appropriately to the expected workload. Load-balancing and failover systems are often employed to scale the server beyond a single physical machine. Load balancing is defined as the methodical and efficient distribution of network or application traffic across multiple servers in a server farm. Each load balancer sits between client devices and backend servers, receiving and then distributing incoming requests to any available server capable of fulfilling them. In a peer-to-peer network, two or more computers (peers) pool their resources and communicate in a decentralized system. Peers are coequal, or equipotent nodes in a non-hierarchical network. Unlike clients in a client-server or client-queue-client network, peers communicate with each other directly. In peer-to-peer networking, an algorithm in the peer-to-peer communications protocol balances load, and even peers with modest resources can help to share the load. If a node becomes unavailable, its shared resources remain available as long as other peers offer it. Ideally, a peer does not need to achieve high availability because other, redundant peers make up for any resource downtime; as the availability and load capacity of peers change, the protocol reroutes requests. Both client-server and master-slave are regarded as sub-categories of distributed peer-to-peer systems.
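The round-robin idea behind the load balancing described above can be sketched in a few lines. This is an illustrative model with made-up backend names, not the dispatch policy of any particular product; a real load balancer would also track backend health and remove failed servers from the rotation.

```python
# Illustrative round-robin load balancer: each incoming request is handed to
# the next backend server in turn. Backend names are hypothetical.
from itertools import cycle

class RoundRobinBalancer:
    def __init__(self, backends):
        self._rotation = cycle(list(backends))

    def dispatch(self, request):
        # Pick the next backend in the rotation and forward the request to it.
        backend = next(self._rotation)
        return f"{request} -> {backend}"

balancer = RoundRobinBalancer(["app-server-1", "app-server-2", "app-server-3"])
for i in range(5):
    print(balancer.dispatch(f"request-{i}"))
```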
Technology
Networks
null
6517
https://en.wikipedia.org/wiki/Clutch
Clutch
A clutch is a mechanical device that allows an output shaft to be disconnected from a rotating input shaft. The clutch's input shaft is typically attached to a motor, while the clutch's output shaft is connected to the mechanism that does the work. In a motor vehicle, the clutch acts as a mechanical linkage between the engine and transmission. By disengaging the clutch, the engine speed (RPM) is no longer determined by the speed of the driven wheels. Another example of clutch usage is in electric drills. The clutch's input shaft is driven by a motor and the output shaft is connected to the drill bit (via several intermediate components). The clutch allows the drill bit to either spin at the same speed as the motor (clutch engaged), spin at a lower speed than the motor (clutch slipping) or remain stationary while the motor is spinning (clutch disengaged). Types Dry clutch A dry clutch uses dry friction to transfer power from the input shaft to the output shaft, for example a friction disk presses against a car engine's flywheel by a spring mechanism. The wheels of the vehicle only rotate when the flywheel is in contact with the friction disk. To stop the transfer of power, the friction disk is moved away from the flywheel by means of a lever mechanism. The majority of automotive clutches on manual transmissions are dry clutches. Slippage of a friction clutch (where the clutch is partially engaged but the shafts are rotating at different speeds) is sometimes required, such as when a motor vehicle accelerates from a standstill; however the slippage should be minimised to avoid increased wear rates. In a pull-type clutch, pressing the pedal pulls the release bearing to disengage the clutch. In a push-type clutch, pressing the pedal pushes the release bearing to disengage the clutch. A multi-plate clutch consists of several friction plates arranged concentrically. In some cases, it is used instead of a larger diameter clutch. Drag racing cars use multi-plate clutches to control the rate of power transfer to the wheels as the vehicle accelerates from a standing start. Some clutch disks include springs designed to change the natural frequency of the clutch disc, in order to reduce NVH within the vehicle. Also, some clutches for manual transmission cars use a clutch delay valve to avoid abrupt engagements of the clutch. Wet clutch In a wet clutch, the friction material sits in an oil bath (or has flow-through oil) which cools and lubricates the clutch. This can provide smoother engagement and a longer lifespan of the clutch, however wet clutches can have a lower efficiency due to some energy being transferred to the oil. Since the surfaces of a wet clutch can be slippery (as with a motorcycle clutch bathed in engine oil), stacking multiple clutch discs can compensate for the lower coefficient of friction and so eliminate slippage under power when fully engaged. Wet clutches often use a composite paper material. Centrifugal clutch A centrifugal clutch automatically engages as the speed of the input shaft increases and disengages as the input shaft speed decreases. Applications include small motorcycles, motor scooters, chainsaws, and some older automobiles. Cone clutch A cone clutch is similar to dry friction plate clutch, except the friction material is applied to the outside of a conical shaped object. This conical shape allows wedging action to occur during engagement. A common application for cone clutches is the synchronizer ring in a manual transmission. 
Dog clutch A dog clutch is a non-slip design of clutch which is used in non-synchronous transmissions. Single-revolution clutch The single-revolution clutch was developed in the 19th century to power machinery such as shears or presses where a single pull of the operating lever or (later) press of a button would trip the mechanism, engaging the clutch between the power source and the machine's crankshaft for exactly one revolution before disengaging the clutch. When the clutch is disengaged, the driven member is stationary. Early designs were typically dog clutches with a cam on the driven member used to disengage the dogs at the appropriate point. Greatly simplified single-revolution clutches were developed in the 20th century, requiring much smaller operating forces and in some variations, allowing for a fixed fraction of a revolution per operation. Fast action friction clutches replaced dog clutches in some applications, eliminating the problem of impact loading on the dogs every time the clutch engaged. In addition to their use in heavy manufacturing equipment, single-revolution clutches were applied to numerous small machines. In tabulating machines, for example, pressing the operate key would trip a single revolution clutch to process the most recently entered number. In typesetting machines, pressing any key selected a particular character and also engaged a single rotation clutch to cycle the mechanism to typeset that character. Similarly, in teleprinters, the receipt of each character tripped a single-revolution clutch to operate one cycle of the print mechanism. In 1928, Frederick G. Creed developed a single-turn wrap spring clutch that was particularly well suited to the repetitive start-stop action required in teleprinters. In 1942, two employees of Pitney Bowes Postage Meter Company developed an improved single turn spring clutch. In these clutches, a coil spring is wrapped around the driven shaft and held in an expanded configuration by the trip lever. When tripped, the spring rapidly contracts around the power shaft engaging the clutch. At the end of one revolution, if the trip lever has been reset, it catches the end of the spring (or a pawl attached to it), and the angular momentum of the driven member releases the tension on the spring. These clutches have long operating lives—many have performed tens and perhaps hundreds of millions of cycles without the need of maintenance other than occasional lubrication. Cascaded-pawl single-revolution clutches superseded wrap-spring single-revolution clutches in page printers, such as teleprinters, including the Teletype Model 28 and its successors, using the same design principles. IBM Selectric typewriters also used them. These are typically disc-shaped assemblies mounted on the driven shaft. Inside the hollow disc-shaped drive drum are two or three freely floating pawls arranged so that when the clutch is tripped, the pawls spring outward much like the shoes in a drum brake. When engaged, the load torque on each pawl transfers to the others to keep them engaged. These clutches do not slip once locked up, and they engage very quickly, on the order of milliseconds. A trip projection extends out from the assembly. If the trip lever engaged this projection, the clutch was disengaged. When the trip lever releases this projection, internal springs and friction engage the clutch. The clutch then rotates one or more turns, stopping when the trip lever again engages the trip projection. 
Other designs Kickback clutch-brakes: Found in some types of synchronous-motor-driven electric clocks built before the 1940s, to prevent the clock from running backwards. The clutch consisted of a wrap-spring clutch-brake that was coupled to the rotor by one or two stages of reduction gearing. The clutch-brake locked up when rotated backwards, but also had some spring action. The inertia of the rotor going backwards engaged the clutch and wound the spring. As it unwound, it restarted the motor in the correct direction. Belt clutch: used on agricultural equipment, lawnmowers, tillers, and snow blowers. Engine power is transmitted via a set of belts that are slack when the engine is idling, but an idler pulley can tighten the belts to increase friction between the belts and the pulleys. BMA clutch: Invented by Waldo J Kelleigh in 1949, used for transmitting torque between two shafts; it consists of a fixed driving member secured to one of the shafts and a movable driving member having a contacting surface with a plurality of indentations. Electromagnetic clutch: typically engaged by an electromagnet that is an integral part of the clutch assembly. Another type, the magnetic particle clutch, contains magnetically influenced particles in a chamber between driving and driven members—application of direct current makes the particles clump together and adhere to the operating surfaces. Engagement and slippage are notably smooth. Wrap-spring clutch: has a helical spring, typically wound with square-cross-section wire. These were developed in the late 19th and early 20th centuries. In simple form the spring is fastened at one end to the driven member; its other end is unattached. The spring fits closely around a cylindrical driving member. If the driving member rotates in the direction that would unwind the spring, the spring expands minutely and slips, although with some drag. Because of this, spring clutches must typically be lubricated with light oil. Rotating the driving member the other way makes the spring wrap itself tightly around the driving surface and the clutch locks up very quickly. The torque required to make a spring clutch slip grows exponentially with the number of turns in the spring, obeying the capstan equation. Usage in automobiles Manual transmissions Most cars and trucks with a manual transmission use a dry clutch, which is operated by the driver using the left-most pedal. The motion of the pedal is transferred to the clutch using mechanical linkage, hydraulics (master and slave cylinders) or a cable. The clutch is only disengaged while the driver is pressing on the clutch pedal; therefore, the default state is for the transmission to be connected to the engine. A "neutral" gear position is provided, so that the clutch pedal can be released with the vehicle remaining stationary. The clutch is required for standing starts and in vehicles whose transmissions lack synchronising means to assist in matching the speeds of the engine and transmission during gear changes, to avoid gear "crashing", which can cause serious damage to gear teeth. The clutch is usually mounted directly to the face of the engine's flywheel, as this already provides a convenient large-diameter steel disk that can act as one driving plate of the clutch. Some racing clutches use small multi-plate disk packs that are not part of the flywheel. Both clutch and flywheel are enclosed in a conical bellhousing for the gearbox.
The friction material used for the clutch disk varies, with a common material being an organic compound resin with a copper wire facing or a ceramic material. Automatic transmissions In an automatic transmission, the role of the clutch is performed by a torque converter. However, the transmission itself often includes internal clutches, such as a lock-up clutch to prevent slippage of the torque converter, in order to reduce the energy loss through the transmission and therefore improve fuel economy. Fans and compressors Older belt-driven engine cooling fans often use a heat-activated clutch, in the form of a bimetallic strip. When the temperature is low, the spring winds and closes the valve, which lets the fan spin at about 20% to 30% of the crankshaft speed. As the temperature of the spring rises, it unwinds and opens the valve, allowing fluid past the valve, making the fan spin at about 60% to 90% of crankshaft speed. A vehicle's air-conditioning compressor often uses magnetic clutches to engage the compressor as required. Usage in motorcycles Motorcycles typically employ a wet clutch with the clutch riding in the same oil as the transmission. These clutches are usually made up of a stack of alternating friction plates and steel plates. The friction plates have lugs on their outer diameters that lock them into a basket that is turned by the crankshaft. The steel plates have lugs on their inner diameters that lock them to the transmission input shaft. A set of coil springs or a diaphragm spring plate force the plates together when the clutch is engaged. On motorcycles the clutch is operated by a hand lever on the left handlebar. No pressure on the lever means that the clutch plates are engaged (driving), while pulling the lever back towards the rider disengages the clutch plates through cable or hydraulic actuation, allowing the rider to shift gears or coast. Racing motorcycles often use slipper clutches to eliminate the effects of engine braking, which, being applied only to the rear wheel, can cause instability.
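The capstan equation mentioned above for wrap-spring clutches states that, for an ideally flexible spring wrapped through a total angle phi with coefficient of friction mu, the holding torque exceeds the small applied torque by a factor of e^(mu * phi); each extra turn therefore multiplies the holding capacity by a constant factor. The sketch below uses an assumed, illustrative friction coefficient rather than data for any actual clutch.

```python
# Capstan equation sketch for a wrap-spring clutch: the ratio of the torque the
# wrapped spring can hold to the torque applied at its free end grows as
# exp(mu * phi), where phi is the total wrap angle in radians.
import math

mu = 0.25                              # assumed coefficient of friction (illustrative)
for turns in (2, 4, 6, 8):
    phi = 2 * math.pi * turns          # total wrap angle for this many turns
    ratio = math.exp(mu * phi)         # capstan equation: T_hold / T_applied
    print(f"{turns} turns -> torque ratio ~ {ratio:,.0f}")
```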
Technology
Components_2
null
6535
https://en.wikipedia.org/wiki/Celery
Celery
Celery (Apium graveolens Dulce Group or Apium graveolens var. dulce) is a cultivated plant belonging to the species Apium graveolens in the family Apiaceae that has been used as a vegetable since ancient times. Celery has a long fibrous stalk tapering into leaves. Celery seed powder is used as a spice. Celeriac and leaf celery are different groups of cultivars of Apium graveolens. Description Celery leaves are pinnate to bipinnate with rhombic leaflets long and broad. The flowers are creamy-white, in diameter, and are produced in dense compound umbels. The seeds are broad ovoid to globose, long and wide. Modern cultivars have been selected for either solid petioles (leaf stalks) or a large hypocotyl. A celery stalk readily separates into "strings" which are bundles of angular collenchyma cells exterior to the vascular bundles. Chemistry The main chemicals responsible for the aroma and taste of celery are butylphthalide and sedanolide. Etymology First attested and printed in English as "sellery" by John Evelyn in 1664, the modern English word "celery" derives from the French céleri, in turn from Italian seleri, the plural of selero, which comes from Late Latin selinon, the latinisation of the Greek word for "celery". The earliest-attested form of the word is the Mycenaean Greek se-ri-no, written in Linear B syllabic script. Taxonomy The species Apium graveolens was described by Carl Linnaeus in Volume One of his Species Plantarum in 1753. Cultivated celery has been called Apium graveolens var. dulce or Apium graveolens Dulce Group. Cultivation The plants are raised from seed, sown either in a hot bed or in the open garden according to the season of the year, and, after one or two thinnings and transplantings, they are, on attaining a height of , planted out in deep trenches for convenience of blanching, which is effected by earthing up to exclude light from the stems. Self-blanching varieties of celery, which do not need to be earthed up, now dominate both the commercial and amateur markets. Celery was first grown as a winter and early spring vegetable. It was considered a cleansing tonic to counter the deficiencies of a winter diet based on salted meats without fresh vegetables. By the 19th century, the season for celery in England had been extended to last from the beginning of September to late in April. In North America, commercial production of celery is dominated by the cultivar called 'Pascal' celery. Gardeners can grow a range of cultivars, many of which differ from the wild species, mainly in having stouter leaf stems. They are ranged under two classes, white and red. The stalks grow in tight, straight, parallel bunches, and are typically marketed fresh that way. They are sold without roots and only a small amount of green leaf remaining. The stalks can be eaten raw, or as an ingredient in salads, or as a flavouring in soups, stews, and pot roasts. Harvesting and storage Harvesting occurs when the average size of celery in a field is marketable; due to extremely uniform crop growth, fields are harvested only once. The petioles and leaves are removed and harvested; celery is packed by size and quality (determined by colour, shape, straightness and thickness of petiole, stalk and midrib length and absence of disease, cracks, splits, insect damage and rot). During commercial harvesting, celery is packaged into cartons which contain between 36 and 48 stalks and weigh up to . Under optimal conditions, celery can be stored for up to seven weeks from . 
Inner stalks may continue growing if kept at temperatures above . Shelf life can be extended by packaging celery in anti-fogging, micro-perforated shrink wrap. Freshly cut petioles of celery are prone to decay, which can be prevented or reduced through the use of sharp blades during processing, gentle handling, and proper sanitation. Celery stalk may be preserved through pickling by first removing the leaves, then boiling the stalks in water before finally adding vinegar, salt, and vegetable oil. Sulfites In the past, restaurants used to store celery in a container of water with powdered vegetable preservative, but it was found that the sulfites in the preservative caused allergic reactions in some people. In 1986, the U.S. Food and Drug Administration banned the use of sulfites on fruits and vegetables intended to be eaten raw. Allergic reactions Celery is among a small group of foods that may provoke allergic reactions; for people with celery allergy, exposure can cause potentially fatal anaphylactic shock. Cases of allergic reaction to ingestion of celery root have also been reported in pollen-sensitive individuals resulting in gastrointestinal disorders and other symptoms, although in most cases, celery sensitivity is not considered clinically significant. In the European Union and the United Kingdom, foods that contain or may contain celery, even in trace amounts, must be clearly marked. The Apium graveolens plant has an OPALS allergy scale rating of 4 out of 10, indicating moderate potential to cause allergic reactions, exacerbated by over-use of the same plant throughout a garden. Celery has caused skin rashes and cross-reactions with carrots and ragweed. Uses Nutrition Raw celery is 95% water, 3% carbohydrates, 0.7% protein, and contains negligible fat. A reference amount provides 16 calories of food energy and is a rich source of vitamin K, providing 73% of the Daily Value, with no other micronutrients in significant content. Culinary Celery is eaten around the world as a vegetable. In North America and Europe the crisp petiole (leaf stalk) is used. In Europe the hypocotyl is also used as a root vegetable. The leaves are strongly flavoured and are used less often, either as a flavouring in soups and stews or as a dried herb. Celery, onions, and bell peppers are the "holy trinity" of Louisiana Creole and Cajun cuisine. Celery, onions, and carrots make up the French mirepoix, often used as a base for sauces and soups. Celery is a staple in many soups. It is used in the Iranian stew khoresh karafs. Leaves Celery leaves are frequently used in cooking to add a mild spicy flavour to foods, similar to, but milder than black pepper. Celery leaves are suitable dried and sprinkled on baked, fried or roasted fish or meats, or as part of a blend of fresh seasonings suitable for use in soups and stews. They may also be eaten raw, mixed into a salad or as a garnish. Seeds In temperate countries, celery is also grown for its seeds. Actually very small fruit, these "seeds" yield a valuable essential oil that is used in the perfume industry. The oil contains the chemical compound apiole. Celery seeds can be used as flavouring or spice, either as whole seeds or ground. Celery salt Celery seeds can be ground and mixed with salt to produce celery salt. Celery salt can be made from an extract of the roots or by using dried leaves. Celery salt is used as a seasoning, in cocktails (commonly to enhance the flavour of Bloody Mary cocktails), on the Chicago-style hot dog, and in Old Bay Seasoning. 
Similarly, combinations of celery powder and salt are used to flavour and preserve cured pork and other processed meats as an alternative to industrial curing salt. The naturally occurring nitrates in celery work synergistically with the added salt to cure food. Celery juice In 2019, a trend of drinking celery juice was reported in the United States, based on "detoxification" claims posted on a blog. The claims have no scientific basis, but the trend caused a sizable spike in celery prices. In culture Daniel Zohary and Maria Hopf note that celery leaves and inflorescences were part of the garlands found in the tomb of pharaoh Tutankhamun (died 1323 BCE), and celery mericarps dated to the seventh century BCE were recovered in the Heraion of Samos. However, they note that, as A. graveolens grows wild in these areas, it is hard to decide whether these remains represent wild or cultivated forms. Celery is thought to have been cultivated only by classical antiquity. M. Fragiska mentions an archaeological find of celery dating to the 9th century BCE, at Kastanas; however, the literary evidence for ancient Greece is far more abundant. In Homer's Iliad, the horses of the Myrmidons graze on wild celery that grows in the marshes of Troy, and in the Odyssey, there is mention of the meadows of violet and wild celery surrounding Calypso's Cave. In the Capitulary of Charlemagne, compiled c. 800, apium appears, as does olisatum, or alexanders, among medicinal herbs and vegetables the Frankish emperor desired to see grown. At some later point in medieval Europe, celery displaced alexanders. The name "celery" retraces the plant's route of successive adoption in European cooking, as the English "celery" (1664) is derived from the French céleri coming from the Lombard term, seleri, from the Latin selinon, borrowed from Greek. Celery's late arrival in the English kitchen is an end-product of the long tradition of seed selection needed to reduce the sap's bitterness and increase its sugars. By 1699, John Evelyn could recommend it in his Acetaria. A Discourse of Sallets: "Sellery, apium Italicum, (and of the Petroseline Family) was formerly a stranger with us (nor very long since in Italy) is a hot and more generous sort of Macedonian Persley or Smallage... and for its high and grateful Taste is ever plac'd in the middle of the Grand Sallet, at our Great Men's tables, and Praetors feasts, as the Grace of the whole Board". Celery makes a minor appearance in colonial American gardens; its culinary limitations are reflected in the observation by the author of A Treatise on Gardening, by a Citizen of Virginia that it is "one of the species of parsley". Its first extended treatment in print was in Bernard M'Mahon's American Gardener's Calendar (1806). After the mid-19th century, continued selections for refined crisp texture and taste brought celery to American tables, where it was served in celery vases to be salted and eaten raw. Celery was so popular in the United States during the 19th and early 20th centuries that the New York Public Library's historical menu archive shows that it was the third-most-popular dish in New York City menus during that time, behind only coffee and tea. In those days, celery cost more than caviar, as it was difficult to cultivate. There were also many varieties of celery back then that are no longer around because they are difficult to grow and do not ship well. 
A chthonian symbol among the ancient Greeks, celery was said to have sprouted from the blood of Kadmilos, father of the Cabeiri, chthonian divinities celebrated in Samothrace, Lemnos, and Thebes. The spicy odor and dark leaf colour encouraged this association with the cult of death. In classical Greece, celery leaves were used as garlands for the dead, and the wreaths of the winners at the Isthmian Games were first made of celery before being replaced by crowns made of pine. According to Pliny the Elder, in Achaea, the garland worn by the winners of the sacred Nemean Games was also made of celery. The Ancient Greek colony of Selinous, on Sicily, was named after wild parsley that grew abundantly there; Selinountian coins depicted a parsley leaf as the symbol of the city.
Biology and health sciences
Apiales
null
6543
https://en.wikipedia.org/wiki/Carnivore
Carnivore
A carnivore , or meat-eater (Latin, caro, genitive carnis, meaning meat or "flesh" and vorare meaning "to devour"), is an animal or plant whose nutrition and energy requirements are met by consumption of animal tissues (mainly muscle, fat and other soft tissues) as food, whether through predation or scavenging. Nomenclature Mammal order The technical term for mammals in the order Carnivora is carnivoran, and they are so-named because most member species in the group have a carnivorous diet, but the similarity of the name of the order and the name of the diet causes confusion. Many but not all carnivorans are meat eaters; a few, such as the large and small cats (Felidae) are obligate carnivores (see below). Other classes of carnivore are highly variable. The ursids (bears), for example: while the Arctic polar bear eats meat almost exclusively (more than 90% of its diet is meat), almost all other bear species are omnivorous, and one species, the giant panda, is nearly exclusively herbivorous. Dietary carnivory is not a distinguishing trait of the order. Many mammals with highly carnivorous diets are not members of the order Carnivora. Cetaceans, for example, all eat other animals, but are paradoxically members of the almost exclusively plant-eating hooved mammals. Carnivorous diet Animals that depend solely on animal flesh for their nutrient requirements in nature are called hypercarnivores or obligate carnivores, whilst those that also consume non-animal food are called mesocarnivores, or facultative carnivores, or omnivores (there are no clear distinctions). A carnivore at the top of the food chain (adults not preyed upon by other animals) is termed an apex predator, regardless of whether it is an obligate or facultative carnivore. In captivity or domestic settings, obligate carnivores like cats and crocodiles can, in principle, get all their required nutrients from processed food made from plant and synthetic sources. Outside the animal kingdom, there are several genera containing carnivorous plants (predominantly insectivores) and several phyla containing carnivorous fungi (preying mostly on microscopic invertebrates, such as nematodes, amoebae, and springtails). Subcategories of carnivory Carnivores are sometimes characterized by their type of prey. For example, animals that eat mainly insects and similar terrestrial arthropods are called insectivores, while those that eat mainly soft-bodied invertebrates are called vermivores. Those that eat mainly fish are called piscivores. Carnivores may alternatively be classified according to the percentage of meat in their diet. The diet of a hypercarnivore consists of more than 70% meat, that of a mesocarnivore 30–70%, and that of a hypocarnivore less than 30%, with the balance consisting of non-animal foods, such as fruits, other plant material, or fungi. Omnivores also consume both animal and non-animal food, and apart from their more general definition, there is no clearly defined ratio of plant vs. animal material that distinguishes a facultative carnivore from an omnivore. Obligate carnivores Obligate or "true" carnivores are those whose diet requires nutrients found only in animal flesh in the wild. While obligate carnivores might be able to ingest small amounts of plant matter, they lack the necessary physiology required to fully digest it. Some obligate carnivorous mammals will ingest vegetation as an emetic, a food that upsets their stomachs, to self-induce vomiting. Obligate carnivores are diverse. 
The amphibian axolotl consumes mainly worms and larvae in its environment, but if necessary will consume algae. All wild felids, including feral domestic cats, require a diet of primarily animal flesh and organs. Specifically, cats have high protein requirements and their metabolisms appear unable to synthesize essential nutrients such as retinol, arginine, taurine, and arachidonic acid; thus, in nature, they must consume flesh to supply these nutrients. Characteristics of carnivores Characteristics commonly associated with carnivores include strength, speed, and keen senses for hunting, as well as teeth and claws for capturing and tearing prey. However, some carnivores do not hunt and are scavengers, lacking the physical characteristics to bring down prey; in addition, most hunting carnivores will scavenge when the opportunity arises. Carnivores have comparatively short digestive systems, as they are not required to break down the tough cellulose found in plants. Many hunting animals have evolved eyes facing forward, enabling depth perception. This is almost universal among mammalian predators, while most reptile and amphibian predators have eyes facing sideways. Prehistory of carnivory Predation (the eating of one living organism by another for nutrition) predates the rise of commonly recognized carnivores by hundreds of millions (perhaps billions) of years. It began with single-celled organisms that phagocytosed and digested other cells, and later evolved into multicellular organisms with specialized cells that were dedicated to breaking down other organisms. Incomplete digestion of the prey organisms, some of which survived inside the predators in a form of endosymbiosis, might have led to symbiogenesis that gave rise to eukaryotes and eukaryotic autotrophs such as green and red algae. Proterozoic origin The earliest predators were microorganisms, which engulfed and "swallowed" other smaller cells (i.e. phagocytosis) and digested them internally. Because the earliest fossil record is poor, these first predators could date back anywhere between 1 and over 2.7 bya (billion years ago). The rise of eukaryotic cells at around 2.7 bya, the rise of multicellular organisms at about 2 bya, and the rise of motile predators (around 600 Mya – 2 bya, probably around 1 bya) have all been attributed to early predatory behavior, and many very early remains show evidence of boreholes or other markings attributed to small predator species. The sudden disappearance of the Precambrian Ediacaran biota at the end-Ediacaran extinction, which were mostly bottom-dwelling filter feeders and grazers, has been hypothesized to have been partly caused by increased predation by newer animals with hardened skeletons and mouthparts. Paleozoic The degradation of seafloor microbial mats due to the Cambrian substrate revolution led to increased active predation among animals, likely triggering various evolutionary arms races that contributed to the rapid diversification during the Cambrian explosion. Radiodont arthropods, which produced the first apex predators such as Anomalocaris, quickly became the dominant carnivores of the Cambrian sea. After their decline due to the Cambrian-Ordovician extinction event, the niches of large carnivores were taken over by nautiloid cephalopods such as Cameroceras and later eurypterids such as Jaekelopterus during the Ordovician and Silurian periods. The first vertebrate carnivores appeared after the evolution of jawed fish, especially armored placoderms such as the massive Dunkleosteus. 
The dominance of placoderms in the Devonian ocean forced other fish to venture into other niches, and one clade of bony fish, the lobe-finned fish, became the dominant carnivores of freshwater wetlands formed by early land plants. Some of these fish became better adapted for breathing air and eventually gave rise to amphibian tetrapods. These early tetrapods were large semi-aquatic piscivores and riparian ambush predators that hunted terrestrial arthropods (mainly arachnids and myriapods), and one group in particular, the temnospondyls, became terrestrial apex predators that hunted other tetrapods. The dominance of temnospondyls around the wetland habitats throughout the Carboniferous forced other amphibians to evolve into amniotes that had adaptations that allowed them to live farther away from water bodies. These amniotes began to evolve both carnivory, which was a natural transition from insectivory requiring minimal adaptation; and herbivory, which took advantage of the abundance of coal forest foliage but in contrast required a complex set of adaptations necessary for digesting the cellulose- and lignin-rich plant materials. After the Carboniferous rainforest collapse, both synapsid and sauropsid amniotes quickly gained dominance as the top terrestrial animals during the subsequent Permian period. Some scientists assert that sphenacodontoid synapsids such as Dimetrodon "were the first terrestrial vertebrate to develop the curved, serrated teeth that enable a predator to eat prey much larger than itself". Mesozoic In the Mesozoic, some theropod dinosaurs such as Tyrannosaurus rex are thought to have probably been obligate carnivores. Though the theropods were the larger carnivores, several carnivorous mammal groups were already present. Most notable are the gobiconodontids, the triconodontid Jugulator, the deltatheroidans and Cimolestes. Many of these, such as Repenomamus, Jugulator and Cimolestes, were among the largest mammals in their faunal assemblages, capable of attacking dinosaurs. Cenozoic In the early-to-mid-Cenozoic, the dominant predator forms were mammals: hyaenodonts, oxyaenids, entelodonts, ptolemaiidans, arctocyonids and mesonychians, representing a great diversity of eutherian carnivores in the northern continents and Africa. In South America, sparassodonts were dominant, while Australia saw the presence of several marsupial predators, such as the dasyuromorphs and thylacoleonids. From the Miocene to the present, the dominant carnivorous mammals have been carnivoramorphs. Most carnivorous mammals, from dogs to deltatheridiums, share several dental adaptations, such as carnassial teeth, long canines and even similar tooth replacement patterns. Most aberrant are thylacoleonids, with a diprotodontan dentition completely unlike that of any other mammal; and eutriconodonts like gobiconodontids and Jugulator, with a three-cusp anatomy which nevertheless functioned similarly to carnassials.
Biology and health sciences
Ethology
null
6556
https://en.wikipedia.org/wiki/Coprime%20integers
Coprime integers
In number theory, two integers and are coprime, relatively prime or mutually prime if the only positive integer that is a divisor of both of them is 1. Consequently, any prime number that divides does not divide , and vice versa. This is equivalent to their greatest common divisor (GCD) being 1. One says also is prime to or is coprime with . The numbers 8 and 9 are coprime, despite the fact that neither—considered individually—is a prime number, since 1 is their only common divisor. On the other hand, 6 and 9 are not coprime, because they are both divisible by 3. The numerator and denominator of a reduced fraction are coprime, by definition. Notation and testing When the integers and are coprime, the standard way of expressing this fact in mathematical notation is to indicate that their greatest common divisor is one, by the formula or . In their 1989 textbook Concrete Mathematics, Ronald Graham, Donald Knuth, and Oren Patashnik proposed an alternative notation to indicate that and are relatively prime and that the term "prime" be used instead of coprime (as in is prime to ). A fast way to determine whether two numbers are coprime is given by the Euclidean algorithm and its faster variants such as binary GCD algorithm or Lehmer's GCD algorithm. The number of integers coprime with a positive integer , between 1 and , is given by Euler's totient function, also known as Euler's phi function, . A set of integers can also be called coprime if its elements share no common positive factor except 1. A stronger condition on a set of integers is pairwise coprime, which means that and are coprime for every pair of different integers in the set. The set is coprime, but it is not pairwise coprime since 2 and 4 are not relatively prime. Properties The numbers 1 and −1 are the only integers coprime with every integer, and they are the only integers that are coprime with 0. A number of conditions are equivalent to and being coprime: No prime number divides both and . There exist integers such that (see Bézout's identity). The integer has a multiplicative inverse modulo , meaning that there exists an integer such that . In ring-theoretic language, is a unit in the ring of integers modulo . Every pair of congruence relations for an unknown integer , of the form and , has a solution (Chinese remainder theorem); in fact the solutions are described by a single congruence relation modulo . The least common multiple of and is equal to their product , i.e. . As a consequence of the third point, if and are coprime and , then . That is, we may "divide by " when working modulo . Furthermore, if are both coprime with , then so is their product (i.e., modulo it is a product of invertible elements, and therefore invertible); this also follows from the first point by Euclid's lemma, which states that if a prime number divides a product , then divides at least one of the factors . As a consequence of the first point, if and are coprime, then so are any powers and . If and are coprime and divides the product , then divides . This can be viewed as a generalization of Euclid's lemma. The two integers and are coprime if and only if the point with coordinates in a Cartesian coordinate system would be "visible" via an unobstructed line of sight from the origin , in the sense that there is no point with integer coordinates anywhere on the line segment between the origin and . (See figure 1.) In a sense that can be made precise, the probability that two randomly chosen integers are coprime is , which is about 61% (see , below). 
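As a quick illustration of the gcd-based test described above, Python's built-in math.gcd (a fast variant of the Euclidean algorithm) can be used directly:

```python
# Two integers are coprime exactly when their greatest common divisor is 1.
from math import gcd

def coprime(a: int, b: int) -> bool:
    return gcd(a, b) == 1

print(coprime(8, 9))   # True: 1 is their only common positive divisor
print(coprime(6, 9))   # False: both are divisible by 3
print(coprime(1, 0))   # True: 1 is coprime with every integer, including 0
```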
Two natural numbers and are coprime if and only if the numbers and are coprime. As a generalization of this, following easily from the Euclidean algorithm in base : Coprimality in sets A set of integers can also be called coprime or setwise coprime if the greatest common divisor of all the elements of the set is 1. For example, the integers 6, 10, 15 are coprime because 1 is the only positive integer that divides all of them. If every pair in a set of integers is coprime, then the set is said to be pairwise coprime (or pairwise relatively prime, mutually coprime or mutually relatively prime). Pairwise coprimality is a stronger condition than setwise coprimality; every pairwise coprime finite set is also setwise coprime, but the reverse is not true. For example, the integers 4, 5, 6 are (setwise) coprime (because the only positive integer dividing all of them is 1), but they are not pairwise coprime (because ). The concept of pairwise coprimality is important as a hypothesis in many results in number theory, such as the Chinese remainder theorem. It is possible for an infinite set of integers to be pairwise coprime. Notable examples include the set of all prime numbers, the set of elements in Sylvester's sequence, and the set of all Fermat numbers. Coprimality in ring ideals Two ideals and in a commutative ring are called coprime (or comaximal) if This generalizes Bézout's identity: with this definition, two principal ideals () and () in the ring of integers are coprime if and only if and are coprime. If the ideals and of are coprime, then furthermore, if is a third ideal such that contains , then contains . The Chinese remainder theorem can be generalized to any commutative ring, using coprime ideals. Probability of coprimality Given two randomly chosen integers and , it is reasonable to ask how likely it is that and are coprime. In this determination, it is convenient to use the characterization that and are coprime if and only if no prime number divides both of them (see Fundamental theorem of arithmetic). Informally, the probability that any number is divisible by a prime (or in fact any integer) is for example, every 7th integer is divisible by 7. Hence the probability that two numbers are both divisible by is and the probability that at least one of them is not is Any finite collection of divisibility events associated to distinct primes is mutually independent. For example, in the case of two events, a number is divisible by primes and if and only if it is divisible by ; the latter event has probability If one makes the heuristic assumption that such reasoning can be extended to infinitely many divisibility events, one is led to guess that the probability that two numbers are coprime is given by a product over all primes, Here refers to the Riemann zeta function, the identity relating the product over primes to is an example of an Euler product, and the evaluation of as is the Basel problem, solved by Leonhard Euler in 1735. There is no way to choose a positive integer at random so that each positive integer occurs with equal probability, but statements about "randomly chosen integers" such as the ones above can be formalized by using the notion of natural density. For each positive integer , let be the probability that two randomly chosen numbers in are coprime. Although will never equal exactly, with work one can show that in the limit as the probability approaches . 
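The distinction between setwise and pairwise coprimality, and the roughly 61% density of coprime pairs discussed above, can both be checked with a short script. The count below runs over a finite range rather than a genuinely random choice of integers, so it only approximates the 6/pi^2 limit.

```python
# Setwise vs pairwise coprimality, plus an empirical estimate of the density of
# coprime pairs, which should be close to 6 / pi^2 ~ 0.6079.
from functools import reduce
from itertools import combinations
from math import gcd, pi

def setwise_coprime(nums):
    # The gcd of all the elements is 1.
    return reduce(gcd, nums) == 1

def pairwise_coprime(nums):
    # Every pair of distinct elements has gcd 1.
    return all(gcd(a, b) == 1 for a, b in combinations(nums, 2))

print(setwise_coprime([6, 10, 15]))   # True
print(pairwise_coprime([6, 10, 15]))  # False: e.g. gcd(6, 10) = 2

N = 500
coprime_pairs = sum(1 for a in range(1, N + 1)
                      for b in range(1, N + 1) if gcd(a, b) == 1)
print(coprime_pairs / N**2, 6 / pi**2)
```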
More generally, the probability of randomly chosen integers being setwise coprime is Generating all coprime pairs All pairs of positive coprime numbers (with ) can be arranged in two disjoint complete ternary trees, one tree starting from (for even–odd and odd–even pairs), and the other tree starting from (for odd–odd pairs). The children of each vertex are generated as follows: Branch 1: Branch 2: Branch 3: This scheme is exhaustive and non-redundant with no invalid members. This can be proved by remarking that, if is a coprime pair with then if then is a child of along branch 3; if then is a child of along branch 2; if then is a child of along branch 1. In all cases is a "smaller" coprime pair with This process of "computing the father" can stop only if either or In these cases, coprimality, implies that the pair is either or Another (much simpler) way to generate a tree of positive coprime pairs (with ) is by means of two generators and , starting with the root . The resulting binary tree, the Calkin–Wilf tree, is exhaustive and non-redundant, which can be seen as follows. Given a coprime pair one recursively applies or depending on which of them yields a positive coprime pair with . Since only one does, the tree is non-redundant. Since by this procedure one is bound to arrive at the root, the tree is exhaustive. Applications In machine design, an even, uniform gear wear is achieved by choosing the tooth counts of the two gears meshing together to be relatively prime. When a 1:1 gear ratio is desired, a gear relatively prime to the two equal-size gears may be inserted between them. In pre-computer cryptography, some Vernam cipher machines combined several loops of key tape of different lengths. Many rotor machines combine rotors of different numbers of teeth. Such combinations work best when the entire set of lengths are pairwise coprime. Generalizations This concept can be extended to other algebraic structures than for example, polynomials whose greatest common divisor is 1 are called coprime polynomials.
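The branch rules of the ternary tree described above did not survive in the text here. In the standard statement of this construction, a parent pair (m, n) with m > n has the three children (2m - n, m), (2m + n, m) and (m + 2n, n), and every coprime pair with m > n >= 1 appears exactly once in one of the two trees rooted at (2, 1) and (3, 1). The sketch below generates pairs that way up to a bound, using those standard rules.

```python
# Sketch of the ternary-tree generation of coprime pairs (m, n) with m > n >= 1:
# every such pair is reached exactly once from the roots (2, 1) and (3, 1)
# by repeatedly applying the three branch maps below.
from math import gcd

def children(m, n):
    return [(2 * m - n, m),   # branch 1
            (2 * m + n, m),   # branch 2
            (m + 2 * n, n)]   # branch 3

def coprime_pairs(max_m):
    """Walk both trees, keeping pairs with m <= max_m.

    Each branch strictly increases m, so subtrees rooted at larger
    pairs can safely be pruned.
    """
    frontier = [(2, 1), (3, 1)]
    found = []
    while frontier:
        m, n = frontier.pop()
        if m <= max_m:
            found.append((m, n))
            frontier.extend(children(m, n))
    return sorted(found)

pairs = coprime_pairs(8)
print(pairs)
# Sanity check: every generated pair really is coprime, and none is repeated.
assert all(gcd(m, n) == 1 for m, n in pairs)
assert len(pairs) == len(set(pairs))
```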
Mathematics
Prime numbers
null
6590
https://en.wikipedia.org/wiki/Cruise%20missile
Cruise missile
A cruise missile is an unmanned self-propelled guided missile that sustains flight through aerodynamic lift for most of its flight path. Cruise missiles are designed to deliver a large payload over long distances with high precision. Modern cruise missiles are capable of traveling at high subsonic, supersonic, or hypersonic speeds, are self-navigating, and are able to fly on a non-ballistic, extremely low-altitude trajectory. History The idea of an "aerial torpedo" was shown in the British 1909 film The Airship Destroyer in which flying torpedoes controlled wirelessly are used to bring down airships bombing London. In 1916, the American aviator Lawrence Sperry built and patented an "aerial torpedo", the Hewitt-Sperry Automatic Airplane, a small biplane carrying a TNT charge, a Sperry autopilot and barometric altitude control. Inspired by the experiments, the United States Army developed a similar flying bomb called the Kettering Bug. Germany had also flown trials with remote-controlled aerial gliders (Torpedogleiter) built by Siemens-Schuckert beginning in 1916. In the Interwar Period, Britain's Royal Aircraft Establishment developed the Larynx (Long Range Gun with Lynx Engine), which underwent a few flight tests in the 1920s. In the Soviet Union, Sergei Korolev headed the GIRD-06 cruise missile project from 1932 to 1939, which used a rocket-powered boost-glide bomb design. The 06/III (RP-216) and 06/IV (RP-212) contained gyroscopic guidance systems. The vehicle was designed to boost to altitude and glide a distance of , but test flights in 1934 and 1936 only reached an altitude of . In 1944, during World War II, Germany deployed the first operational cruise missiles. The V-1, often called a flying bomb, contained a gyroscope guidance system and was propelled by a simple pulsejet engine, the sound of which gave it the nickname of "buzz bomb" or "doodlebug". Accuracy was sufficient only for use against very large targets (the general area of a city), while the range of was significantly lower than that of a bomber carrying the same payload. The main advantages were speed (although not sufficient to outperform contemporary propeller-driven interceptors) and expendability. The production cost of a V-1 was only a small fraction of that of a V-2 supersonic ballistic missile with a similar-sized warhead. Unlike the V-2, the initial deployments of the V-1 required stationary launch ramps which were susceptible to bombardment. Nazi Germany, in 1943, also developed the Mistel composite aircraft program, which can be seen as a rudimentary air-launched cruise missile, where a piloted fighter-type aircraft was mounted atop an unpiloted bomber-sized aircraft that was packed with explosives to be released while approaching the target. Bomber-launched variants of the V-1 saw limited operational service near the end of the war, with the pioneering V-1's design reverse-engineered by the Americans as the Republic-Ford JB-2 cruise missile. Immediately after World War II, the United States Air Force had 21 different guided missile projects, including proposed cruise missiles. By 1948, all but four of these projects had been canceled: the Air Materiel Command Banshee, the SM-62 Snark, the SM-64 Navaho, and the MGM-1 Matador. The Banshee design was similar to Operation Aphrodite; like Aphrodite, it failed, and was canceled in April 1949. Concurrently, the US Navy's Operation Bumblebee, was conducted at Topsail Island, North Carolina, from c. 1 June 1946, to 28 July 1948. 
Bumblebee produced proof-of-concept technologies that influenced the US military's other missile projects. During the Cold War, both the United States and the Soviet Union experimented further with the concept, of deploying early cruise missiles from land, submarines, and aircraft. The main outcome of the United States Navy submarine missile project was the SSM-N-8 Regulus missile, based upon the V-1 but powered by an Allison J33 jet engine. The Regulus entered service but was phased out with the advent of submarine launched ballistic missiles that did not require the submarine to surface in order to launch the missile and guide it to its target. The United States Air Force's first operational surface-to-surface missile was the winged, mobile, nuclear-capable MGM-1 Matador, also similar in concept to the V-1. Deployment overseas began in 1954, first to West Germany and later to the Republic of China and South Korea. On 7 November 1956, the U.S. Air Force deployed Matador units in West Germany, whose missiles were capable of striking targets in the Warsaw Pact, from their fixed day-to-day sites to unannounced dispersed launch locations. This alert was in response to the crisis posed by the Soviet attack on Hungary which suppressed the Hungarian Revolution of 1956. Between 1957 and 1961 the United States followed an ambitious and well-funded program to develop a nuclear-powered cruise missile, Supersonic Low Altitude Missile (SLAM). It was designed to fly below the enemy's radar at speeds above Mach 3 and carry hydrogen bombs that it would drop along its path over enemy territory. Although the concept was proven sound and the engine finished a successful test run in 1961, no airworthy device was ever completed. The project was finally abandoned in favor of ICBM development. While ballistic missiles were the preferred weapons for land targets, heavy nuclear and conventional weapon tipped cruise missiles were seen by the USSR as a primary weapon to destroy United States naval carrier battle groups. Large submarines (for example, Echo and Oscar classes) were developed to carry these weapons and shadow United States battle groups at sea, and large bombers (for example, Backfire, Bear, and Blackjack models) were equipped with the weapons in their air-launched cruise missile (ALCM) configuration. Categories Cruise missiles can be categorized by payload/warhead size, speed, range, and launch platform. Often variants of the same missile are produced for different launch platforms (for instance, air- and submarine-launched versions). Guidance systems can vary across missiles. Some missiles can be fitted with any of a variety of navigation systems (Inertial navigation, TERCOM, or satellite navigation). Larger cruise missiles can carry either a conventional or a nuclear warhead, while smaller ones carry only conventional warheads. Hypersonic A hypersonic cruise missile travels at least five times the speed of sound (Mach 5). 3M22 Zircon (>1000–1500 km) – hypersonic anti-ship cruise missile ASN4G (Air-Sol Nucléaire de 4e Génération) – scramjet-powered hypersonic cruise missile being developed by France BrahMos-II (≈800–1500 km) / – hypersonic cruise missile under development in India and Russia HSTDV – hypersonic scramjet demonstrator. A carrier vehicle for hypersonic long-range cruise missiles is being developed by Defence Research and Development Organisation (DRDO). 
Hyfly-2 – hypersonic air-launched cruise missile first displayed at Sea Air Space 2021, developed by Boeing Hypersonic Air-breathing Weapon Concept (HAWC, pronounced Hawk) – scramjet-powered hypersonic air-launched cruise missile without a warhead that uses its own kinetic energy upon impact to destroy the target, developed by DARPA Hypersonic Air Launched Offensive Anti-Surface (HALO) – air-launched anti-ship missile under Offensive Anti-Surface Warfare Increment 2 (OASuW Inc 2) program for the US Navy (Navy) Hypersonic Attack Cruise Missile (HACM) – planned for use by the United States Air Force SCIFiRE / – Southern Cross Integrated Flight Research Experiment (SCIFiRE) is a joint program between the US Department of Defense and the Australian Department of Defence for a Mach 5 scramjet-powered missile. In September 2021, the US Department of Defense awarded Preliminary Design Review contracts to Boeing, Lockheed Martin and Raytheon Missiles & Defense. Supersonic These missiles travel faster than the speed of sound, usually using ramjet engines. The range is typically 100–500 km, but can be greater. Guidance systems vary. Examples: ASALM US ALCM prototype, test-flown to hypersonic Mach 5.5 3M-54 Kalibr (4,500 km, Mach 3) (the "Sizzler" variant is capable of supersonic speed at the terminal stage only) 3M-51 Alfa (250 km, Mach 2.5) Air-Sol Moyenne Portée (300–500 km+, Mach 3)  – supersonic stand-off nuclear missile ASM-3 (400 km, Mach 3+) BrahMos (290–800 km, Mach 3) / Blyskavka (100–370 km)  – Artem Luch Pivdenmash C-101 (50 km, Mach 2) C-301 (100+ km, Mach) C-803 (230 km, Mach 1.4)  – supersonic terminal stage only C-805 CX-1 (280 km, Mach 3) CJ-100 / DF-100 (2000–3000 km, Mach 5) FC/ASW (under development) – transnational cruise missile programme / / Hsiung Feng III (400 km, Mach 3.5) Hyunmoo-3 (1500 km, Mach 1.2) KD-88 (200 km, Mach 0.85) Kh-20 (380–600 km, Mach 2) Kh-31 (25–110 km, Mach 3.5) Kh-32 (600–1,000 km, Mach 4.6) Kh-80 (3,000–5,000 km, Mach 3) / P-270 Moskit (120–250 km, Mach 2–3) / P-500 Bazalt (550 km, Mach 3+) / P-700 Granit (625 km, Mach 2.5+) / P-800 Oniks / Kh-61 (600–800 km, Mach 2.6) / P-1000 Vulkan (800 km, Mach 3+) / YJ-12 (250–400 km, Mach 4) YJ-18 (220–540 km, Mach 3) YJ-91 (15–120 km, Mach 3.5) Yun Feng (1200–2,000 km, Mach 3) SSM-N-9 Regulus II (1,852 km, Mach 2) Intercontinental-range supersonic Burya (8,500 km) MKR (8,000 km) RSS-40 Buran (8,500 km) SLAM (cancelled in 1964) SM-64 Navaho (canceled in 1958) Long-range subsonic The United States, Russia, North Korea, India, Iran, South Korea, Israel, France, China and Pakistan have developed several long-range subsonic cruise missiles. These missiles have a range of over and fly at about . They typically have a launch weight of about and can carry either a conventional or a nuclear warhead. Earlier versions of these missiles used inertial navigation; later versions use much more accurate TERCOM and DSMAC systems. Most recent versions can use satellite navigation. 
Examples: 3M-54 Kalibr (up to 4,500 km) AGM-86 ALCM (from 1100 to >2400 km) AGM-129 ACM (from 3450 to 3700 km) AGM-181 LRSO (>2500 km) BGM-109 Tomahawk (up to 1,700 km) BGM-109G Ground Launched Cruise Missile (2,500 km) Kh-55 (3,000 km) and Kh-65 Kh-101 (4500–5500 km) Iskander-K (not less than 3 500 km) Hwasal-2 (> 2000 km) RK-55 (3,000 km) Nirbhay (up to 1500 km) MdCN (up to 1,400 km) Paveh (1,650 km) Hoveyzeh (1,350 km) Abu Mahdi (over 1,000 km) Quds 1 Houthi Hsiung Feng IIE (600–1,200 km) Hyunmoo III (Hyunmoo IIIA – 500 km, Hyunmoo IIIB – 1000 km, Hyunmoo IIIC – 1500 km) Type 12 SSM (1,500 km under development) MGM-13 Mace DF-10/CJ-10 (CJ-10K – 1500 km, CJ-20 – 2000 km) Popeye Turbo SLCM Intercontinental-range subsonic 9M730 Burevestnik (unlimited range) SM-62 Snark (10,200 km) Medium-range subsonic These missiles are about the same size and weight and fly at similar speeds to the above category. Guidance systems vary. Examples: AGM-158 JASSM (370–1900 km) AGM-158C LRASM (370 km) Babur (290–900 km) Harbah (250–450 km) Hatf-VIII / Ra'ad Mark-2 ALCM (400 km) Hsiung Feng IIE (600–2000 km) Hyunmoo-3 (within 1500 km) Iskander-K KD-63 Taurus KEPD 350 (500+ km) / / Kh-50 (Kh-SD) and Kh-101 Kh-65 variants MGM-1 Matador (700 km) Ra'ad ALCM (350 km) Raad (360 km) SOM (SOM B Block I) – 500 km, 1500 km and 2500 km versions (350 km range under serial production, 500 km+ range under development) SSM-N-8 Regulus (926 km) P-5 Pyatyorka (450–750 km) / / Storm Shadow / SCALP-EG (550 km, Mach 0.65) / Type 12 SSM (within 1000 km under development) Ya-Ali (700 km) Zarb (320 km) Short-range subsonic These are subsonic missiles that weigh around and have a range of up to . Examples: Apache (100–140 km) AVMT-300 (300 km) MICLA-BR (300 km) Hyunmoo-3 (over 300 km) shorter range SSM-700K Haeseong (180+ km) JFS-M (499 km) Kh-35 (130–300 km) , KN-19 Ks3/4 Kh-59 (115–550 km) P-15 (40–80 km) , KN-1 Nasr-1 Zafar (25 km) Noor Qader Naval Strike Missile (185–555 km) RBS-15 Korshun – local derivative of Kh-55 and RK-55 Neptune V-1 flying bomb (250 km) Hsiung Feng II Wan Chien VCM-01 (100–300 km) Aist (100–300 km) Marte (100+ km) Sea Killer export variant Otomat (180 km) / Otomat Mk2 E / Teseo Mk2/E (360 km) C-801 (40 km) C-802 (120–230 km) C-803 C-805 C-602 CM-602G Delilah missile (250 km) Gabriel IV (200 km) Popeye turbo ALCM (78 km) Sea Breaker (300 km) RGM-84 Harpoon (124–310 km) AGM-84E Standoff Land Attack Missile (110 km) AGM-84H/K SLAM-ER (270 km) Silkworm (100–500 km) SOM Atmaca Çakır Deployment The most common mission for cruise missiles is to attack relatively high-value targets such as ships, command bunkers, bridges and dams. Modern guidance systems permit accurate attacks. , the BGM-109 Tomahawk missile model has become a significant part of the United States naval arsenal. It gives ships and submarines a somewhat accurate, long-range, conventional land attack weapon. Each costs about US$1.99 million. Both the Tomahawk and the AGM-86 were used extensively during Operation Desert Storm. On 7 April 2017, during the Syrian Civil War, U.S. warships fired more than 50 cruise missiles into a Syrian airbase in retaliation for a Syrian chemical weapons attack against a rebel stronghold. The United States Air Force (USAF) deploys an air-launched cruise missile, the AGM-86 ALCM. The Boeing B-52 Stratofortress is the exclusive delivery vehicle for the AGM-86 and AGM-129 ACM. Both missile types are configurable for either conventional or nuclear warheads. 
The USAF adopted the AGM-86 for its bomber fleet, while the AGM-109 was adapted to launch from trucks and ships and was adopted by both the USAF and the Navy. The truck-launched versions, and also the Pershing II and SS-20 intermediate-range ballistic missiles, were later destroyed under the bilateral INF (Intermediate-Range Nuclear Forces) Treaty with the USSR. The British Royal Navy (RN) also operates cruise missiles, specifically the U.S.-made Tomahawk, used by the RN's nuclear submarine fleet. UK conventional-warhead versions were first fired in combat by the RN in 1999, during the Kosovo War (the United States had first fired cruise missiles in combat in 1991). The Royal Air Force uses the Storm Shadow cruise missile on its Typhoon and previously its Tornado GR4 aircraft. It is also used by France, where it is known as SCALP EG, and carried by the Armée de l'Air's Mirage 2000 and Rafale aircraft. India and Russia have jointly developed the supersonic cruise missile BrahMos. There are three versions of the BrahMos: ship/land-launched, air-launched, and submarine-launched. The ship/land-launched version was operational as of late 2007. The BrahMos also has the capability to attack targets on land. Russia continues to operate other cruise missiles as well: the SS-N-12 Sandbox, SS-N-19 Shipwreck, SS-N-22 Sunburn and SS-N-25 Switchblade. Germany and Spain operate the Taurus missile, while Pakistan has made the Babur missile. Both the People's Republic of China and the Republic of China (Taiwan) have designed several cruise missile variants, such as the well-known C-802, some of which are capable of carrying biological, chemical, nuclear, and conventional warheads. Nuclear warhead versions China China has the CJ-10 land-attack cruise missile, which is capable of carrying a nuclear warhead. Additionally, China appears to have tested a hypersonic cruise missile in August 2021, a claim it denies. France The French Force de Frappe nuclear forces include both land- and sea-based bombers armed with Air-Sol Moyenne Portée (ASMP) high-speed medium-range nuclear cruise missiles. Two models are in use: the ASMP and the newer ASMP-Amélioré (ASMP-A), which was developed in 1999. An estimated 40 to 50 were produced. India India in 2017 successfully flight-tested its indigenous Nirbhay ('Fearless') land-attack cruise missile, which can deliver nuclear warheads to a strike range of 1,000 km. India currently operates seven variants of the BrahMos cruise missile with operational ranges of 300–1,000 km, and is developing the hypersonic BrahMos-II, which is intended to be the fastest cruise missile. Israel The Israel Defense Forces reportedly deploy the medium-range air-launched Popeye Turbo ALCM and the medium-to-long-range Popeye Turbo SLCM with nuclear warheads on Dolphin-class submarines. Pakistan Pakistan currently has four cruise missile systems: the air-launched Ra'ad-I and its enhanced version Ra'ad-II; the ground- and submarine-launched Babur; the ship-launched Harbah; and the surface-launched Zarb. Both Ra'ad and Babur can carry nuclear warheads of between 10 and 25 kt and deliver them to targets at a range of up to and respectively. Babur has been in service with the Pakistan Army since 2010 and with the Pakistan Navy since 2018. Russia Russia has Kh-55SM cruise missiles, with a range similar to that of the United States' AGM-129 at about 3,000 km, but able to carry a more powerful 200 kt warhead. 
They are equipped with a TERCOM system which allows them to cruise at an altitude lower than 110 meters at subsonic speeds while achieving a CEP of about 15 meters with an inertial navigation system. They are air-launched from Tupolev Tu-95s, Tupolev Tu-160s, or Tupolev Tu-22Ms, which can carry 16, 12 and 4 of the missiles respectively. A stealth version of the missile, the Kh-101, is in development. It has similar qualities to the Kh-55, except that its range has been extended to 5,000 km, it is equipped with a 1,000 kg conventional warhead, and it has stealth features which reduce its probability of intercept. After the collapse of the Soviet Union, the most recent cruise missile developed was the Kalibr, which entered production in the early 1990s and was officially inducted into the Russian arsenal in 1994. However, it only made its combat debut on 7 October 2015, as part of the Russian military campaign in Syria, and has been used 14 more times in combat operations there since. In the late 1950s and early 1960s, the Soviet Union was attempting to develop cruise missiles, working on nearly ten different types in this short time frame. However, due to limited resources, most of the initial types developed by the Soviet Union were sea-launched or submarine-launched cruise missiles (SLCMs). The SS-N-1 cruise missile was developed in different configurations to be fired from a submarine or a ship. As time progressed, the Soviet Union also began to work on air-launched cruise missiles (ALCMs). These ALCMs were typically delivered by bombers designated "Blinder" or "Backfire", and in this configuration the missiles were called the AS-1 and AS-2, with new variants following as development continued. The main purpose of Soviet cruise missiles was to provide defensive and offensive capability against enemy ships; in other words, most Soviet cruise missiles were anti-ship missiles. By the 1980s the Soviet Union had developed an arsenal of cruise missiles nearing 600 platforms, consisting of land, sea, and air delivery systems. United States The United States has deployed nine nuclear cruise missiles at one time or another. MGM-1 Matador ground-launched missile, out of service MGM-13 Mace ground-launched missile, out of service SSM-N-8 Regulus submarine-launched missile, out of service SM-62 Snark ground-launched missile, out of service AGM-28 Hound Dog air-launched missile, out of service BGM-109G Ground Launched Cruise Missile, out of service AGM-129 ACM air-launched missile, out of service AGM-86 ALCM air-launched cruise missile, 350 to 550 missiles and W80 warheads still in service BGM-109 Tomahawk cruise missile in nuclear submarine-, surface ship-, and ground-launched models, nuclear models out of service but warheads kept in reserve. Efficiency in modern warfare Currently, cruise missiles are among the most expensive single-use weapons, costing up to several million dollars apiece. One consequence of this is that their users face difficult choices in target allocation, to avoid expending the missiles on targets of low value. For instance, during the 2001 strikes on Afghanistan the United States attacked targets of very low monetary value with cruise missiles, which led many to question the efficiency of the weapon. 
However, proponents of the cruise missile counter that the weapon cannot be blamed for poor target selection, and that the same argument applies to other types of UAVs: they are cheaper than human pilots when total training and infrastructure costs are taken into account, to say nothing of the risk of losing personnel. As demonstrated in Libya in 2011 and in prior conflicts, cruise missiles are much more difficult to detect and intercept than other aerial assets, since their smaller size gives them a reduced radar cross-section and smaller infrared and visual signatures, suiting them to attacks against static air defense systems.
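The TERCOM guidance mentioned earlier works, in broad terms, by comparing an elevation profile measured along the flight path (for example by radar altimeter) against a stored digital terrain map and shifting the estimated position to the best-matching location. The following is a minimal, illustrative sketch of that profile-matching idea only; the function name, the synthetic data and the simple sum-of-squared-differences score are assumptions made for the example and do not describe any actual guidance software.

```python
import numpy as np

def terrain_match(measured_profile, terrain_row):
    """Toy terrain-contour matching: slide a measured elevation profile along
    one row of a stored terrain map and return the offset whose stored
    elevations best match it (smallest sum of squared differences)."""
    m = len(measured_profile)
    best_offset, best_score = None, np.inf
    for offset in range(len(terrain_row) - m + 1):
        window = terrain_row[offset:offset + m]
        score = np.sum((window - measured_profile) ** 2)
        if score < best_score:
            best_offset, best_score = offset, score
    return best_offset

# Illustrative data: a synthetic terrain row and a noisy profile "measured" over it.
terrain = np.cumsum(np.random.default_rng(0).normal(0, 5, 400))
true_offset = 150
measured = terrain[true_offset:true_offset + 40] + np.random.default_rng(1).normal(0, 1, 40)
print(terrain_match(measured, terrain))  # expected to recover an offset close to 150
```

Real systems combine many such fixes with inertial navigation and far more robust matching, but the sketch conveys why the accuracy of the stored terrain data bounds the achievable CEP.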
Technology
Missiles
null
6596
https://en.wikipedia.org/wiki/Computer%20vision
Computer vision
Computer vision tasks include methods for acquiring, processing, analyzing, and understanding digital images, and extraction of high-dimensional data from the real world in order to produce numerical or symbolic information, e.g. in the form of decisions. "Understanding" in this context signifies the transformation of visual images (the input to the retina) into descriptions of the world that make sense to thought processes and can elicit appropriate action. This image understanding can be seen as the disentangling of symbolic information from image data using models constructed with the aid of geometry, physics, statistics, and learning theory. The scientific discipline of computer vision is concerned with the theory behind artificial systems that extract information from images. Image data can take many forms, such as video sequences, views from multiple cameras, multi-dimensional data from a 3D scanner, 3D point clouds from LiDaR sensors, or medical scanning devices. The technological discipline of computer vision seeks to apply its theories and models to the construction of computer vision systems. Subdisciplines of computer vision include scene reconstruction, object detection, event detection, activity recognition, video tracking, object recognition, 3D pose estimation, learning, indexing, motion estimation, visual servoing, 3D scene modeling, and image restoration. Definition Computer vision is an interdisciplinary field that deals with how computers can be made to gain high-level understanding from digital images or videos. From the perspective of engineering, it seeks to automate tasks that the human visual system can do. "Computer vision is concerned with the automatic extraction, analysis, and understanding of useful information from a single image or a sequence of images. It involves the development of a theoretical and algorithmic basis to achieve automatic visual understanding." As a scientific discipline, computer vision is concerned with the theory behind artificial systems that extract information from images. The image data can take many forms, such as video sequences, views from multiple cameras, or multi-dimensional data from a medical scanner. As a technological discipline, computer vision seeks to apply its theories and models for the construction of computer vision systems. Machine vision refers to a systems engineering discipline, especially in the context of factory automation. In more recent times, the terms computer vision and machine vision have converged to a greater degree. History In the late 1960s, computer vision began at universities that were pioneering artificial intelligence. It was meant to mimic the human visual system as a stepping stone to endowing robots with intelligent behavior. In 1966, it was believed that this could be achieved through an undergraduate summer project, by attaching a camera to a computer and having it "describe what it saw". What distinguished computer vision from the prevalent field of digital image processing at that time was a desire to extract three-dimensional structure from images with the goal of achieving full scene understanding. Studies in the 1970s formed the early foundations for many of the computer vision algorithms that exist today, including extraction of edges from images, labeling of lines, non-polyhedral and polyhedral modeling, representation of objects as interconnections of smaller structures, optical flow, and motion estimation. 
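Edge extraction, one of the early operations mentioned above, can be illustrated with a pair of Sobel-style gradient filters. The short NumPy sketch below is illustrative only: the image is synthetic, the function name is a placeholder, and the plain nested-loop filtering is written for clarity rather than speed.

```python
import numpy as np

def filter2d(image, kernel):
    """Slide the kernel over the image and sum element-wise products.
    (Strictly a cross-correlation; for the Sobel kernels the distinction
    only affects the sign of the response, not its magnitude.)"""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Sobel kernels approximate the horizontal and vertical intensity gradients.
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
sobel_y = sobel_x.T

image = np.zeros((8, 8))
image[:, 4:] = 1.0                 # synthetic image: dark left half, bright right half
gx = filter2d(image, sobel_x)
gy = filter2d(image, sobel_y)
edges = np.hypot(gx, gy)           # gradient magnitude peaks along the boundary
print(edges.round(1))
```

Practical systems use optimized library implementations, but the principle is the same: an edge shows up wherever the local intensity gradient is large.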
The next decade saw studies based on more rigorous mathematical analysis and quantitative aspects of computer vision. These include the concept of scale-space, the inference of shape from various cues such as shading, texture and focus, and contour models known as snakes. Researchers also realized that many of these mathematical concepts could be treated within the same optimization framework as regularization and Markov random fields. By the 1990s, some of the previous research topics became more active than others. Research in projective 3-D reconstructions led to better understanding of camera calibration. With the advent of optimization methods for camera calibration, it was realized that a lot of the ideas were already explored in bundle adjustment theory from the field of photogrammetry. This led to methods for sparse 3-D reconstructions of scenes from multiple images. Progress was made on the dense stereo correspondence problem and further multi-view stereo techniques. At the same time, variations of graph cut were used to solve image segmentation. This decade also marked the first time statistical learning techniques were used in practice to recognize faces in images (see Eigenface). Toward the end of the 1990s, a significant change came about with the increased interaction between the fields of computer graphics and computer vision. This included image-based rendering, image morphing, view interpolation, panoramic image stitching and early light-field rendering. Recent work has seen the resurgence of feature-based methods used in conjunction with machine learning techniques and complex optimization frameworks. The advancement of Deep Learning techniques has brought further life to the field of computer vision. The accuracy of deep learning algorithms on several benchmark computer vision data sets for tasks ranging from classification, segmentation and optical flow has surpassed prior methods. Related fields Solid-state physics Solid-state physics is another field that is closely related to computer vision. Most computer vision systems rely on image sensors, which detect electromagnetic radiation, which is typically in the form of either visible, infrared or ultraviolet light. The sensors are designed using quantum physics. The process by which light interacts with surfaces is explained using physics. Physics explains the behavior of optics which are a core part of most imaging systems. Sophisticated image sensors even require quantum mechanics to provide a complete understanding of the image formation process. Also, various measurement problems in physics can be addressed using computer vision, for example, motion in fluids. Neurobiology Neurobiology has greatly influenced the development of computer vision algorithms. Over the last century, there has been an extensive study of eyes, neurons, and brain structures devoted to the processing of visual stimuli in both humans and various animals. This has led to a coarse yet convoluted description of how natural vision systems operate in order to solve certain vision-related tasks. These results have led to a sub-field within computer vision where artificial systems are designed to mimic the processing and behavior of biological systems at different levels of complexity. Also, some of the learning-based methods developed within computer vision (e.g. neural net and deep learning based image and feature analysis and classification) have their background in neurobiology. 
The Neocognitron, a neural network developed in the 1970s by Kunihiko Fukushima, is an early example of computer vision taking direct inspiration from neurobiology, specifically the primary visual cortex. Some strands of computer vision research are closely related to the study of biological vision—indeed, just as many strands of AI research are closely tied with research into human intelligence and the use of stored knowledge to interpret, integrate, and utilize visual information. The field of biological vision studies and models the physiological processes behind visual perception in humans and other animals. Computer vision, on the other hand, develops and describes the algorithms implemented in software and hardware behind artificial vision systems. An interdisciplinary exchange between biological and computer vision has proven fruitful for both fields. Signal processing Yet another field related to computer vision is signal processing. Many methods for processing one-variable signals, typically temporal signals, can be extended in a natural way to the processing of two-variable signals or multi-variable signals in computer vision. However, because of the specific nature of images, there are many methods developed within computer vision that have no counterpart in the processing of one-variable signals. Together with the multi-dimensionality of the signal, this defines a subfield in signal processing as a part of computer vision. Robotic navigation Robot navigation sometimes deals with autonomous path planning or deliberation for robotic systems to navigate through an environment. A detailed understanding of these environments is required to navigate through them. Information about the environment could be provided by a computer vision system, acting as a vision sensor and providing high-level information about the environment and the robot Visual computing Other fields Besides the above-mentioned views on computer vision, many of the related research topics can also be studied from a purely mathematical point of view. For example, many methods in computer vision are based on statistics, optimization or geometry. Finally, a significant part of the field is devoted to the implementation aspect of computer vision; how existing methods can be realized in various combinations of software and hardware, or how these methods can be modified in order to gain processing speed without losing too much performance. Computer vision is also used in fashion eCommerce, inventory management, patent search, furniture, and the beauty industry. Distinctions The fields most closely related to computer vision are image processing, image analysis and machine vision. There is a significant overlap in the range of techniques and applications that these cover. This implies that the basic techniques that are used and developed in these fields are similar, something which can be interpreted as there is only one field with different names. On the other hand, it appears to be necessary for research groups, scientific journals, conferences, and companies to present or market themselves as belonging specifically to one of these fields and, hence, various characterizations which distinguish each of the fields from the others have been presented. 
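The signal-processing connection noted above, that methods for one-variable signals extend naturally to two variables, can be made concrete with a small sketch: a one-dimensional moving-average filter becomes a two-dimensional box filter when the same averaging is applied along both image axes. The NumPy code below uses synthetic data and illustrative function names and is a sketch of the idea, not a recommended implementation.

```python
import numpy as np

def moving_average_1d(signal, k=3):
    """Classic one-variable smoothing filter."""
    kernel = np.ones(k) / k
    return np.convolve(signal, kernel, mode="valid")

def box_filter_2d(image, k=3):
    """The same idea in two variables: average along rows, then along columns.
    Separability makes the 2-D filter just two passes of the 1-D one."""
    rows = np.apply_along_axis(moving_average_1d, 1, image, k)
    return np.apply_along_axis(moving_average_1d, 0, rows, k)

signal = np.array([0., 0., 1., 1., 1., 0., 0.])
image = np.outer(signal, signal)          # a small synthetic image
print(moving_average_1d(signal))          # smoothed 1-D signal
print(box_filter_2d(image).round(2))      # smoothed 2-D image
```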
In image processing, the input is an image and the output is an image as well, whereas in computer vision, an image or a video is taken as an input and the output could be an enhanced image, an understanding of the content of an image or even behavior of a computer system based on such understanding. Computer graphics produces image data from 3D models, and computer vision often produces 3D models from image data. There is also a trend towards a combination of the two disciplines, e.g., as explored in augmented reality. The following characterizations appear relevant but should not be taken as universally accepted: Image processing and image analysis tend to focus on 2D images, how to transform one image to another, e.g., by pixel-wise operations such as contrast enhancement, local operations such as edge extraction or noise removal, or geometrical transformations such as rotating the image. This characterization implies that image processing/analysis neither requires assumptions nor produces interpretations about the image content. Computer vision includes 3D analysis from 2D images. This analyzes the 3D scene projected onto one or several images, e.g., how to reconstruct structure or other information about the 3D scene from one or several images. Computer vision often relies on more or less complex assumptions about the scene depicted in an image. Machine vision is the process of applying a range of technologies and methods to provide imaging-based automatic inspection, process control, and robot guidance in industrial applications. Machine vision tends to focus on applications, mainly in manufacturing, e.g., vision-based robots and systems for vision-based inspection, measurement, or picking (such as bin picking). This implies that image sensor technologies and control theory often are integrated with the processing of image data to control a robot and that real-time processing is emphasized by means of efficient implementations in hardware and software. It also implies that external conditions such as lighting can be and are often more controlled in machine vision than they are in general computer vision, which can enable the use of different algorithms. There is also a field called imaging which primarily focuses on the process of producing images, but sometimes also deals with the processing and analysis of images. For example, medical imaging includes substantial work on the analysis of image data in medical applications. Progress in convolutional neural networks (CNNs) has improved the accurate detection of disease in medical images, particularly in cardiology, pathology, dermatology, and radiology. Finally, pattern recognition is a field that uses various methods to extract information from signals in general, mainly based on statistical approaches and artificial neural networks. A significant part of this field is devoted to applying these methods to image data. Photogrammetry also overlaps with computer vision, e.g., stereophotogrammetry vs. computer stereo vision. Applications Applications range from tasks such as industrial machine vision systems which, say, inspect bottles speeding by on a production line, to research into artificial intelligence and computers or robots that can comprehend the world around them. The computer vision and machine vision fields have significant overlap. Computer vision covers the core technology of automated image analysis which is used in many fields. 
Machine vision usually refers to a process of combining automated image analysis with other methods and technologies to provide automated inspection and robot guidance in industrial applications. In many computer-vision applications, computers are pre-programmed to solve a particular task, but methods based on learning are now becoming increasingly common. Examples of applications of computer vision include systems for: Automatic inspection, e.g., in manufacturing applications; Assisting humans in identification tasks, e.g., a species identification system; Controlling processes, e.g., an industrial robot; Detecting events, e.g., for visual surveillance or people counting, e.g., in the restaurant industry; Interaction, e.g., as the input to a device for computer-human interaction; monitoring agricultural crops, e.g. an open-source vision transformers model has been developed to help farmers automatically detect strawberry diseases with 98.4% accuracy. Modeling objects or environments, e.g., medical image analysis or topographical modeling; Navigation, e.g., by an autonomous vehicle or mobile robot; Organizing information, e.g., for indexing databases of images and image sequences. Tracking surfaces or planes in 3D coordinates for allowing Augmented Reality experiences. Medicine One of the most prominent application fields is medical computer vision, or medical image processing, characterized by the extraction of information from image data to diagnose a patient. An example of this is the detection of tumours, arteriosclerosis or other malign changes, and a variety of dental pathologies; measurements of organ dimensions, blood flow, etc. are another example. It also supports medical research by providing new information: e.g., about the structure of the brain or the quality of medical treatments. Applications of computer vision in the medical area also include enhancement of images interpreted by humans—ultrasonic images or X-ray images, for example—to reduce the influence of noise. Machine vision A second application area in computer vision is in industry, sometimes called machine vision, where information is extracted for the purpose of supporting a production process. One example is quality control where details or final products are being automatically inspected in order to find defects. One of the most prevalent fields for such inspection is the Wafer industry in which every single Wafer is being measured and inspected for inaccuracies or defects to prevent a computer chip from coming to market in an unusable manner. Another example is a measurement of the position and orientation of details to be picked up by a robot arm. Machine vision is also heavily used in the agricultural processes to remove undesirable foodstuff from bulk material, a process called optical sorting. Military Military applications are probably one of the largest areas of computer vision. The obvious examples are the detection of enemy soldiers or vehicles and missile guidance. More advanced systems for missile guidance send the missile to an area rather than a specific target, and target selection is made when the missile reaches the area based on locally acquired image data. Modern military concepts, such as "battlefield awareness", imply that various sensors, including image sensors, provide a rich set of information about a combat scene that can be used to support strategic decisions. 
In this case, automatic processing of the data is used to reduce complexity and to fuse information from multiple sensors to increase reliability. Autonomous vehicles One of the newer application areas is autonomous vehicles, which include submersibles, land-based vehicles (small robots with wheels, cars, or trucks), aerial vehicles, and unmanned aerial vehicles (UAVs). The level of autonomy ranges from fully autonomous (unmanned) vehicles to vehicles where computer-vision-based systems support a driver or a pilot in various situations. Fully autonomous vehicles typically use computer vision for navigation, e.g., for knowing where they are or for mapping their environment (SLAM), and for detecting obstacles. It can also be used for detecting certain task-specific events, e.g., a UAV looking for forest fires. Examples of supporting systems are obstacle warning systems in cars, cameras and LiDAR sensors in vehicles, and systems for autonomous landing of aircraft. Several car manufacturers have demonstrated systems for autonomous driving of cars. There are ample examples of military autonomous vehicles, ranging from advanced missiles to UAVs for reconnaissance missions or missile guidance. Space exploration is already being carried out with autonomous vehicles using computer vision, e.g., NASA's Curiosity and CNSA's Yutu-2 rover. Tactile feedback Materials such as rubber and silicon are being used to create sensors that allow for applications such as detecting microundulations and calibrating robotic hands. Rubber can be used to create a mold that can be placed over a finger; inside this mold are multiple strain gauges. The finger mold and sensors can then be placed on top of a small sheet of rubber containing an array of rubber pins. A user can then wear the finger mold and trace a surface. A computer can then read the data from the strain gauges and measure whether one or more of the pins are being pushed upward. If a pin is being pushed upward, the computer can recognize this as an imperfection in the surface. This sort of technology is useful for obtaining accurate data on imperfections over a very large surface. Another variation of this finger-mold sensor is a sensor that contains a camera suspended in silicon. The silicon forms a dome around the outside of the camera, and embedded in the silicon are equally spaced point markers. These cameras can then be placed on devices such as robotic hands in order to allow the computer to receive highly accurate tactile data. Other application areas include: Support of visual effects creation for cinema and broadcast, e.g., camera tracking (match moving). Surveillance. Driver drowsiness detection. Tracking and counting organisms in the biological sciences. Typical tasks Each of the application areas described above employs a range of computer vision tasks: more or less well-defined measurement problems or processing problems, which can be solved using a variety of methods. Some examples of typical computer vision tasks are presented below. Computer vision tasks include methods for acquiring, processing, analyzing and understanding digital images, and extraction of high-dimensional data from the real world in order to produce numerical or symbolic information, e.g., in the form of decisions. Understanding in this context means the transformation of visual images (the input of the retina) into descriptions of the world that can interface with other thought processes and elicit appropriate action. 
This image understanding can be seen as the disentangling of symbolic information from image data using models constructed with the aid of geometry, physics, statistics, and learning theory. Recognition The classical problem in computer vision, image processing, and machine vision is that of determining whether or not the image data contains some specific object, feature, or activity. Different varieties of recognition problem are described in the literature. Object recognition (also called object classification) – one or several pre-specified or learned objects or object classes can be recognized, usually together with their 2D positions in the image or 3D poses in the scene. Blippar, Google Goggles, and LikeThat provide stand-alone programs that illustrate this functionality. Identification – an individual instance of an object is recognized. Examples include identification of a specific person's face or fingerprint, identification of handwritten digits, or the identification of a specific vehicle. Detection – the image data are scanned for specific objects along with their locations. Examples include the detection of an obstacle in the car's field of view and possible abnormal cells or tissues in medical images or the detection of a vehicle in an automatic road toll system. Detection based on relatively simple and fast computations is sometimes used for finding smaller regions of interesting image data which can be further analyzed by more computationally demanding techniques to produce a correct interpretation. Currently, the best algorithms for such tasks are based on convolutional neural networks. An illustration of their capabilities is given by the ImageNet Large Scale Visual Recognition Challenge; this is a benchmark in object classification and detection, with millions of images and 1000 object classes used in the competition. Performance of convolutional neural networks on the ImageNet tests is now close to that of humans. The best algorithms still struggle with objects that are small or thin, such as a small ant on the stem of a flower or a person holding a quill in their hand. They also have trouble with images that have been distorted with filters (an increasingly common phenomenon with modern digital cameras). By contrast, those kinds of images rarely trouble humans. Humans, however, tend to have trouble with other issues. For example, they are not good at classifying objects into fine-grained classes, such as the particular breed of dog or species of bird, whereas convolutional neural networks handle this with ease. Several specialized tasks based on recognition exist, such as: Content-based image retrieval – finding all images in a larger set of images which have a specific content. The content can be specified in different ways, for example in terms of similarity relative to a target image (give me all images similar to image X) by utilizing reverse image search techniques, or in terms of high-level search criteria given as text input (give me all images which contain many houses, are taken during winter and have no cars in them). Pose estimation – estimating the position or orientation of a specific object relative to the camera. An example application for this technique would be assisting a robot arm in retrieving objects from a conveyor belt in an assembly line situation or picking parts from a bin. 
Optical character recognition (OCR) – identifying characters in images of printed or handwritten text, usually with a view to encoding the text in a format more amenable to editing or indexing (e.g. ASCII). A related task is the reading of 2D codes such as Data Matrix and QR codes. Facial recognition – a technology that enables the matching of faces in digital images or video frames to a face database, now widely used for mobile phone face unlock, smart door locking, etc. Emotion recognition – a subset of facial recognition, emotion recognition refers to the process of classifying human emotions. Psychologists caution, however, that internal emotions cannot be reliably detected from faces. Shape Recognition Technology (SRT) – used in people counter systems to differentiate human beings (head and shoulder patterns) from objects. Human activity recognition – recognizing the activity taking place in a series of video frames, such as whether a person is picking up an object or walking. Motion analysis Several tasks relate to motion estimation, where an image sequence is processed to produce an estimate of the velocity either at each point in the image or in the 3D scene, or even of the camera that produces the images. Examples of such tasks are: Egomotion – determining the 3D rigid motion (rotation and translation) of the camera from an image sequence produced by the camera. Tracking – following the movements of a (usually) smaller set of interest points or objects (e.g., vehicles, objects, humans or other organisms) in the image sequence. This has vast industry applications, as most high-running machinery can be monitored in this way. Optical flow – determining, for each point in the image, how that point is moving relative to the image plane, i.e., its apparent motion. This motion is a result both of how the corresponding 3D point is moving in the scene and of how the camera is moving relative to the scene. Scene reconstruction Given one or (typically) more images of a scene, or a video, scene reconstruction aims at computing a 3D model of the scene. In the simplest case, the model can be a set of 3D points. More sophisticated methods produce a complete 3D surface model. The advent of 3D imaging not requiring motion or scanning, and related processing algorithms, is enabling rapid advances in this field. Grid-based 3D sensing can be used to acquire 3D images from multiple angles. Algorithms are now available to stitch multiple 3D images together into point clouds and 3D models. Image restoration Image restoration comes into play when the original image is degraded or damaged due to external factors such as incorrect lens positioning, transmission interference, low lighting or motion blur, which is referred to as noise. When images are degraded or damaged, the information to be extracted from them is also damaged. Therefore, the image needs to be recovered or restored as it was intended to be. The aim of image restoration is the removal of noise (sensor noise, motion blur, etc.) from images. The simplest possible approach to noise removal is to apply various types of filters, such as low-pass filters or median filters. More sophisticated methods assume a model of how the local image structures look in order to distinguish them from noise. By first analyzing the image data in terms of the local image structures, such as lines or edges, and then controlling the filtering based on local information from the analysis step, a better level of noise removal is usually obtained compared to the simpler approaches. 
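One of the simple noise-removal filters mentioned above is the median filter, which replaces each pixel with the median of its neighborhood and is particularly effective against impulse ("salt-and-pepper") noise. The sketch below is illustrative only: the image is synthetic, the border pixels are simply skipped, and the function name is a placeholder.

```python
import numpy as np

def median_filter(image, k=3):
    """Replace each pixel with the median of its k x k neighborhood (borders skipped)."""
    r = k // 2
    out = image.copy()
    for i in range(r, image.shape[0] - r):
        for j in range(r, image.shape[1] - r):
            out[i, j] = np.median(image[i - r:i + r + 1, j - r:j + r + 1])
    return out

rng = np.random.default_rng(0)
clean = np.full((12, 12), 0.5)                     # synthetic uniform image
noisy = clean.copy()
corrupted = rng.random(clean.shape) < 0.05         # corrupt ~5% of pixels
noisy[corrupted] = rng.choice([0.0, 1.0], size=corrupted.sum())
restored = median_filter(noisy)
print(float(np.abs(noisy - clean).mean()), float(np.abs(restored - clean).mean()))
```

The mean absolute error after filtering should be noticeably lower, which is exactly the behavior that makes median filtering a common first step before more model-based restoration.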
An example in this field is inpainting. System methods The organization of a computer vision system is highly application-dependent. Some systems are stand-alone applications that solve a specific measurement or detection problem, while others constitute a sub-system of a larger design which, for example, also contains sub-systems for control of mechanical actuators, planning, information databases, man-machine interfaces, etc. The specific implementation of a computer vision system also depends on whether its functionality is pre-specified or if some part of it can be learned or modified during operation. Many functions are unique to the application. There are, however, typical functions that are found in many computer vision systems. Image acquisition – A digital image is produced by one or several image sensors, which, besides various types of light-sensitive cameras, include range sensors, tomography devices, radar, ultra-sonic cameras, etc. Depending on the type of sensor, the resulting image data is an ordinary 2D image, a 3D volume, or an image sequence. The pixel values typically correspond to light intensity in one or several spectral bands (gray images or colour images) but can also be related to various physical measures, such as depth, absorption or reflectance of sonic or electromagnetic waves, or magnetic resonance imaging. Pre-processing – Before a computer vision method can be applied to image data in order to extract some specific piece of information, it is usually necessary to process the data in order to ensure that it satisfies certain assumptions implied by the method. Examples are: Re-sampling to ensure that the image coordinate system is correct. Noise reduction to ensure that sensor noise does not introduce false information. Contrast enhancement to ensure that relevant information can be detected. Scale space representation to enhance image structures at locally appropriate scales. Feature extraction – Image features at various levels of complexity are extracted from the image data. Typical examples of such features are: Lines, edges and ridges. Localized interest points such as corners, blobs or points. More complex features may be related to texture, shape, or motion. Detection/segmentation – At some point in the processing, a decision is made about which image points or regions of the image are relevant for further processing. Examples are: Selection of a specific set of interest points. Segmentation of one or multiple image regions that contain a specific object of interest. Segmentation of image into nested scene architecture comprising foreground, object groups, single objects or salient object parts (also referred to as spatial-taxon scene hierarchy), while the visual salience is often implemented as spatial and temporal attention. Segmentation or co-segmentation of one or multiple videos into a series of per-frame foreground masks while maintaining its temporal semantic continuity. High-level processing – At this step, the input is typically a small set of data, for example, a set of points or an image region, which is assumed to contain a specific object. The remaining processing deals with, for example: Verification that the data satisfies model-based and application-specific assumptions. Estimation of application-specific parameters, such as object pose or object size. Image recognition – classifying a detected object into different categories. Image registration – comparing and combining two different views of the same object. 
Decision making – Making the final decision required for the application, for example: Pass/fail on automatic inspection applications. Match/no-match in recognition applications. Flag for further human review in medical, military, security and recognition applications. Image-understanding systems Image-understanding systems (IUS) include three levels of abstraction as follows: the low level includes image primitives such as edges, texture elements, or regions; the intermediate level includes boundaries, surfaces and volumes; and the high level includes objects, scenes, or events. Many of these requirements are entirely topics for further research. The representational requirements in the design of IUS for these levels are: representation of prototypical concepts, concept organization, spatial knowledge, temporal knowledge, scaling, and description by comparison and differentiation. While inference refers to the process of deriving new, not explicitly represented facts from currently known facts, control refers to the process that selects which of the many inference, search, and matching techniques should be applied at a particular stage of processing. Inference and control requirements for IUS are: search and hypothesis activation, matching and hypothesis testing, generation and use of expectations, change and focus of attention, certainty and strength of belief, and inference and goal satisfaction. Hardware There are many kinds of computer vision systems; however, all of them contain these basic elements: a power source, at least one image acquisition device (camera, CCD, etc.), a processor, and control and communication cables or some kind of wireless interconnection mechanism. In addition, a practical vision system contains software, as well as a display in order to monitor the system. Vision systems for indoor spaces, like most industrial ones, contain an illumination system and may be placed in a controlled environment. Furthermore, a completed system includes many accessories, such as camera supports, cables, and connectors. Most computer vision systems use visible-light cameras passively viewing a scene at frame rates of at most 60 frames per second (usually far slower). A few computer vision systems use image-acquisition hardware with active illumination or something other than visible light or both, such as structured-light 3D scanners, thermographic cameras, hyperspectral imagers, radar imaging, lidar scanners, magnetic resonance imagers, side-scan sonar, synthetic aperture sonar, etc. Such hardware captures "images" that are then often processed using the same computer vision algorithms used to process visible-light images. While traditional broadcast and consumer video systems operate at a rate of 30 frames per second, advances in digital signal processing and consumer graphics hardware have made high-speed image acquisition, processing, and display possible for real-time systems on the order of hundreds to thousands of frames per second. For applications in robotics, fast, real-time video systems are critically important and often can simplify the processing needed for certain algorithms. When combined with a high-speed projector, fast image acquisition allows 3D measurement and feature tracking to be realized. Egocentric vision systems are composed of a wearable camera that automatically takes pictures from a first-person perspective. As of 2016, vision processing units are emerging as a new class of processors to complement CPUs and graphics processing units (GPUs) in this role.
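The typical functions described under System methods above (image acquisition, pre-processing, feature extraction, detection/segmentation, high-level processing and decision making) can be summarized as a simple processing pipeline. The skeleton below is a schematic sketch of that flow rather than any particular system; every function body is a deliberately trivial placeholder, and the names and thresholds are assumptions made for the example.

```python
import numpy as np

def acquire_image():
    """Stand-in for a camera or other sensor: here, a synthetic grayscale image."""
    rng = np.random.default_rng(7)
    img = rng.normal(0.2, 0.05, (32, 32))
    img[10:20, 12:22] += 0.6            # a bright square acting as the "object"
    return img

def preprocess(img):
    """Normalization / noise handling would go here."""
    return np.clip((img - img.min()) / (img.max() - img.min()), 0, 1)

def extract_features(img):
    """A crude blob feature: pixels well above the mean intensity."""
    return img > img.mean() + 2 * img.std()

def detect(mask):
    """Detection/segmentation step: is there a sufficiently large bright region?"""
    return mask.sum() > 20

def decide(found):
    """Final application-level decision, e.g. pass/fail in automatic inspection."""
    return "object present" if found else "no object"

print(decide(detect(extract_features(preprocess(acquire_image())))))
```

In a real system each placeholder is replaced by far more sophisticated components (calibrated sensors, learned feature extractors, trained detectors), but the overall staged structure remains recognizable.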
Technology
Artificial intelligence concepts
null
6598
https://en.wikipedia.org/wiki/Camel
Camel
A camel (from Latin camelus and Ancient Greek κάμηλος (kamēlos), from Ancient Semitic: gāmāl) is an even-toed ungulate in the genus Camelus that bears distinctive fatty deposits known as "humps" on its back. Camels have long been domesticated and, as livestock, they provide food (camel milk and meat) and textiles (fiber and felt from camel hair). Camels are working animals especially suited to their desert habitat and are a vital means of transport for passengers and cargo. There are three surviving species of camel. The one-humped dromedary makes up 94% of the world's camel population, and the two-humped Bactrian camel makes up 6%. The wild Bactrian camel is a distinct species that is not ancestral to the domestic Bactrian camel, and is now critically endangered, with fewer than 1,000 individuals. The word camel is also used informally in a wider sense, where the more correct term is "camelid", to include all seven species of the family Camelidae: the true camels (the above three species), along with the "New World" camelids: the llama, the alpaca, the guanaco, and the vicuña, which belong to the separate tribe Lamini. Camelids originated in North America during the Eocene, with the ancestor of modern camels, Paracamelus, migrating across the Bering land bridge into Asia during the late Miocene, around 6 million years ago. Taxonomy Extant species Three species are extant: the dromedary (Camelus dromedarius), the domestic Bactrian camel (C. bactrianus) and the wild Bactrian camel (C. ferus). Biology The average life expectancy of a camel is 40 to 50 years. A full-grown adult dromedary camel stands at the shoulder and at the hump. Bactrian camels can be a foot taller. Camels can run at up to in short bursts and sustain speeds of up to . Bactrian camels weigh and dromedaries . The widening toes on a camel's hoof provide supplemental grip for varying soil sediments. The male dromedary camel has an organ called a dulla in his throat, a large, inflatable sac that he extrudes from his mouth when in rut to assert dominance and attract females. It resembles a long, swollen, pink tongue hanging out of the side of the camel's mouth. Camels mate by having both male and female sitting on the ground, with the male mounting from behind. The male usually ejaculates three or four times within a single mating session. Camelids are the only ungulates to mate in a sitting position. Ecological and behavioral adaptations Camels do not directly store water in their humps; they are reservoirs of fatty tissue. When this tissue is metabolized, it yields a greater mass of water than that of the fat processed. This fat metabolization, while releasing energy, causes water to evaporate from the lungs during respiration (as oxygen is required for the metabolic process): overall, there is a net decrease in water. Camels have a series of physiological adaptations that allow them to withstand long periods of time without any external source of water. The dromedary camel can drink as seldom as once every 10 days even under very hot conditions, and can lose up to 30% of its body mass due to dehydration. Unlike other mammals, camels' red blood cells are oval rather than circular in shape. This facilitates the flow of red blood cells during dehydration and makes them better at withstanding high osmotic variation without rupturing when drinking large amounts of water. Camels are able to withstand changes in body temperature and water consumption that would kill most other mammals. Their temperature ranges from at dawn and steadily increases to by sunset, before they cool off at night again. 
In general, to compare between camels and the other livestock, camels lose only 1.3 liters of fluid intake every day while the other livestock lose 20 to 40 liters per day. Maintaining the brain temperature within certain limits is critical for animals; to assist this, camels have a rete mirabile, a complex of arteries and veins lying very close to each other which utilizes countercurrent blood flow to cool blood flowing to the brain. Camels rarely sweat, even when ambient temperatures reach . Any sweat that does occur evaporates at the skin level rather than at the surface of their coat; the heat of vaporization therefore comes from body heat rather than ambient heat. Camels can withstand losing 25% of their body weight in water, whereas most other mammals can withstand only about 12–14% dehydration before cardiac failure results from circulatory disturbance. When the camel exhales, water vapor becomes trapped in their nostrils and is reabsorbed into the body as a means to conserve water. Camels eating green herbage can ingest sufficient moisture in milder conditions to maintain their bodies' hydrated state without the need for drinking. The camel's thick coat insulates it from the intense heat radiated from desert sand; a shorn camel must sweat 50% more to avoid overheating. During the summer the coat becomes lighter in color, reflecting light as well as helping avoid sunburn. The camel's long legs help by keeping its body farther from the ground, which can heat up to . Dromedaries have a pad of thick tissue over the sternum called the pedestal. When the animal lies down in a sternal recumbent position, the pedestal raises the body from the hot surface and allows cooling air to pass under the body. Camels' mouths have a thick leathery lining, allowing them to chew thorny desert plants. Long eyelashes and ear hairs, together with nostrils that can close, form a barrier against sand. If sand gets lodged in their eyes, they can dislodge it using their translucent third eyelid (also known as the nictitating membrane). The camels' gait and widened feet help them move without sinking into the sand. The kidneys and intestines of a camel are very efficient at reabsorbing water. Camels' kidneys have a 1:4 cortex to medulla ratio. Thus, the medullary part of a camel's kidney occupies twice as much area as a cow's kidney. Secondly, renal corpuscles have a smaller diameter, which reduces surface area for filtration. These two major anatomical characteristics enable camels to conserve water and limit the volume of urine in extreme desert conditions. Camel urine comes out as a thick syrup, and camel faeces are so dry that they do not require drying when used to fuel fires. The camel immune system differs from those of other mammals. Normally, the Y-shaped antibody molecules consist of two heavy (or long) chains along the length of the Y, and two light (or short) chains at each tip of the Y. Camels, in addition to these, also have antibodies made of only two heavy chains, a trait that makes them smaller and more durable. These "heavy-chain-only" antibodies, discovered in 1993, are thought to have developed 50 million years ago, after camelids split from ruminants and pigs. The parasite Trypanosoma evansi causes the disease surra in camels. Genetics The karyotypes of different camelid species have been studied earlier by many groups, but no agreement on chromosome nomenclature of camelids has been reached. 
A 2007 study flow sorted camel chromosomes, building on the fact that camels have 37 pairs of chromosomes (2n=74), and found that the karyotype consisted of one metacentric, three submetacentric, and 32 acrocentric autosomes. The Y is a small metacentric chromosome, while the X is a large metacentric chromosome. The hybrid camel, a hybrid between Bactrian and dromedary camels, has one hump, though it has an indentation deep that divides the front from the back. The hybrid is at the shoulder and tall at the hump. It weighs an average of and can carry around , which is more than either the dromedary or Bactrian can. According to molecular data, the wild Bactrian camel (C. ferus) separated from the domestic Bactrian camel (C. bactrianus) about 1 million years ago. New World and Old World camelids diverged about 11 million years ago. In spite of this, these species can hybridize and produce viable offspring. The cama is a camel-llama hybrid bred by scientists to see how closely related the parent species are. Scientists collected semen from a camel via an artificial vagina and inseminated a llama after stimulating ovulation with gonadotrophin injections. The cama is halfway in size between a camel and a llama and lacks a hump. It has ears intermediate between those of camels and llamas, longer legs than the llama, and partially cloven hooves. Like the mule, camas are sterile, despite both parents having the same number of chromosomes. Evolution The earliest known camel, called Protylopus, lived in North America 40 to 50 million years ago (during the Eocene). It was about the size of a rabbit and lived in the open woodlands of what is now South Dakota. By 35 million years ago, the Poebrotherium was the size of a goat and had many more traits similar to camels and llamas. The hoofed Stenomylus, which walked on the tips of its toes, also existed around this time, and the long-necked Aepycamelus evolved in the Miocene. The split between the tribes Camelini, which contains modern camels and Lamini, modern llamas, alpacas, vicuñas, and guanacos, is estimated to have occurred over 16 million years ago. The ancestor of modern camels, Paracamelus, migrated into Eurasia from North America via Beringia during the late Miocene, between 7.5 and 6.5 million years ago. During the Pleistocene, around 3 to 1 million years ago, the North American Camelidae spread to South America as part of the Great American Interchange via the newly formed Isthmus of Panama, where they gave rise to guanacos and related animals. Populations of Paracamelus continued to exist in the North American Arctic into the Early Pleistocene. This creature is estimated to have stood around tall. The Bactrian camel diverged from the dromedary about 1 million years ago, according to the fossil record. The last camel native to North America was Camelops hesternus, which vanished along with horses, short-faced bears, mammoths and mastodons, ground sloths, sabertooth cats, and many other megafauna as part of the Quaternary extinction event, coinciding with the migration of humans from Asia at the end of the Pleistocene, around 13–11,000 years ago. An extinct giant camel species, Camelus knoblochi roamed Asia during the Late Pleistocene, before becoming extinct around 20,000 years ago. Domestication Like horses, camels originated in North America and eventually spread across Beringia to Asia. They survived in the Old World, and eventually humans domesticated them and spread them globally. 
Along with many other megafauna in North America, the original wild camels were wiped out during the spread of the first indigenous peoples of the Americas from Asia into North America, 10 to 12,000 years ago; although fossils have never been associated with definitive evidence of hunting. Most camels surviving today are domesticated. Although feral populations exist in Australia, India and Kazakhstan, wild camels survive only in the wild Bactrian camel population of the Gobi Desert. History When humans first domesticated camels is disputed. Dromedaries may have first been domesticated by humans in Somalia or South Arabia sometime during the 3rd millennium BC, the Bactrian in central Asia around 2,500 BC, as at Shar-i Sokhta (also known as the Burnt City), Iran. A study from 2016, which genotyped and used world-wide sequencing of modern and ancient mitochondrial DNA (mtDNA), suggested that they were initially domesticated in the southeast Arabian Peninsula, with the Bactrian type later being domesticated around Central Asia. Martin Heide's 2010 work on the domestication of the camel tentatively concludes that humans had domesticated the Bactrian camel by at least the middle of the third millennium somewhere east of the Zagros Mountains, with the practice then moving into Mesopotamia. Heide suggests that mentions of camels "in the patriarchal narratives may refer, at least in some places, to the Bactrian camel", while noting that the camel is not mentioned in relationship to Canaan. Heide and Joris Peters reasserted that conclusion in their 2021 study on the subject. In 2009–2013, excavations in the Timna Valley by Lidar Sapir-Hen and Erez Ben-Yosef discovered what may be the earliest domestic camel bones yet found in Israel or even outside the Arabian Peninsula, dating to around 930 BC. This garnered considerable media coverage, as it is strong evidence that the stories of Abraham, Jacob, Esau, and Joseph were written after this time. The existence of camels in Mesopotamia—but not in the eastern Mediterranean lands—is not a new idea. The historian Richard Bulliet did not think that the occasional mention of camels in the Bible meant that the domestic camels were common in the Holy Land at that time. The archaeologist William F. Albright, writing even earlier, saw camels in the Bible as an anachronism. The official report by Sapir-Hen and Ben-Joseph says: The introduction of the dromedary camel (Camelus dromedarius) as a pack animal to the southern Levant ... substantially facilitated trade across the vast deserts of Arabia, promoting both economic and social change (e.g., Kohler 1984; Borowski 1998: 112–116; Jasmin 2005). This ... has generated extensive discussion regarding the date of the earliest domestic camel in the southern Levant (and beyond) (e.g., Albright 1949: 207; Epstein 1971: 558–584; Bulliet 1975; Zarins 1989; Köhler-Rollefson 1993; Uerpmann and Uerpmann 2002; Jasmin 2005; 2006; Heide 2010; Rosen and Saidel 2010; Grigson 2012). Most scholars today agree that the dromedary was exploited as a pack animal sometime in the early Iron Age (not before the 12th century [BC]) and concludes: Current data from copper smelting sites of the Aravah Valley enable us to pinpoint the introduction of domestic camels to the southern Levant more precisely based on stratigraphic contexts associated with an extensive suite of radiocarbon dates. The data indicate that this event occurred not earlier than the last third of the 10th century [BC] and most probably during this time. 
The coincidence of this event with a major reorganization of the copper industry of the region—attributed to the results of the campaign of Pharaoh Shoshenq I—raises the possibility that the two were connected, and that camels were introduced as part of the efforts to improve efficiency by facilitating trade. Textiles Desert tribes and Mongolian nomads use camel hair for tents, yurts, clothing, bedding and accessories. Camels have outer guard hairs and soft inner down, and the fibers may also be sorted by color and age of the animal. The guard hairs can be felted for use as waterproof coats for the herdsmen, while the softer hair is used for premium goods. The fiber can be spun for use in weaving or made into yarns for hand knitting or crochet. Pure camel hair is recorded as being used for western garments from the 17th century onwards, and from the 19th century a mixture of wool and camel hair was used. Military uses By at least 1200 BC the first camel saddles had appeared, and Bactrian camels could be ridden. The first saddle was positioned to the back of the camel, and control of the Bactrian camel was exercised by means of a stick. However, between 500 and 100 BC, Bactrian camels came into military use. New saddles, which were inflexible and bent, were put over the humps and divided the rider's weight over the animal. In the seventh century BC the military Arabian saddle evolved, which again improved the saddle design slightly. Military forces have used camel cavalries in wars throughout Africa, the Middle East, and their use continues into the modern-day within the Border Security Force (BSF) of India. The first documented use of camel cavalries occurred in the Battle of Qarqar in 853 BC. Armies have also used camels as freight animals instead of horses and mules. The East Roman Empire used auxiliary forces known as dromedarii, whom the Romans recruited in desert provinces. The camels were used mostly in combat because of their ability to scare off horses at close range (horses are afraid of the camels' scent), a quality famously employed by the Achaemenid Persians when fighting Lydia in the Battle of Thymbra (547 BC). 19th and 20th centuries The United States Army established the U.S. Camel Corps, stationed in California, in the 19th century. One may still see stables at the Benicia Arsenal in Benicia, California, where they nowadays serve as the Benicia Historical Museum. Though the experimental use of camels was seen as a success (John B. Floyd, Secretary of War in 1858, recommended that funds be allocated towards obtaining a thousand more camels), the outbreak of the American Civil War in 1861 saw the end of the Camel Corps: Texas became part of the Confederacy, and most of the camels were left to wander away into the desert. France created a méhariste camel corps in 1912 as part of the Armée d'Afrique in the Sahara in order to exercise greater control over the camel-riding Tuareg and Arab insurgents, as previous efforts to defeat them on foot had failed. The Free French Camel Corps fought during World War II, and camel-mounted units remained in service until the end of French rule over Algeria in 1962. In 1916, the British created the Imperial Camel Corps. It was originally used to fight the Senussi, but was later used in the Sinai and Palestine Campaign in World War I. The Imperial Camel Corps comprised infantrymen mounted on camels for movement across desert, though they dismounted at battle sites and fought on foot. 
After July 1918, the Corps began to be run down, receiving no new reinforcements, and was formally disbanded in 1919. In World War I, the British Army also created the Egyptian Camel Transport Corps, which consisted of a group of Egyptian camel drivers and their camels. The Corps supported British war operations in Sinai, Palestine, and Syria by transporting supplies to the troops. The Somaliland Camel Corps was created by colonial authorities in British Somaliland in 1912; it was disbanded in 1944. Bactrian camels were used by Romanian forces during World War II in the Caucasian region. In the same period, Soviet units operating around Astrakhan in 1942 adopted local camels as draft animals due to a shortage of trucks and horses, and kept them even after moving out of the area. Despite severe losses, some of these camels ended up as far west as Berlin itself. The Bikaner Camel Corps of British India fought alongside the British Indian Army in World Wars I and II. The Tropas Nómadas (Nomad Troops) were an auxiliary regiment of Sahrawi tribesmen serving in the colonial army in Spanish Sahara (today Western Sahara). Operational from the 1930s until the end of the Spanish presence in the territory in 1975, the Tropas Nómadas were equipped with small arms and led by Spanish officers. The unit guarded outposts and sometimes conducted patrols on camelback. 21st century competition The annual King Abdulaziz Camel Festival is held in Saudi Arabia. In addition to camel racing and camel milk tasting, the festival holds a camel "beauty pageant" with prize money of $57m (£40m). In 2018, 12 camels were disqualified from the beauty contest after their owners were found to have injected them with botox. In a similar incident in 2021, over 40 camels were disqualified. Food uses Camel meat and milk are found in many cuisines, typically Middle Eastern, North African and some Australian cuisines. Camels provide food in the form of meat and milk. Dairy Camel milk is a staple food of desert nomad tribes and is sometimes considered a meal in itself; a nomad can live on camel milk alone for almost a month. Camel milk can readily be made into yogurt, but can only be made into butter if it is soured first, churned, and a clarifying agent is then added. Until recently, camel milk could not be made into camel cheese because rennet was unable to coagulate the milk proteins to allow the collection of curds. Seeking less wasteful uses of the milk, the FAO commissioned Professor J.P. Ramet of the École Nationale Supérieure d'Agronomie et des Industries Alimentaires, who was able to produce curdling by the addition of calcium phosphate and vegetable rennet in the 1990s. The cheese produced from this process has low levels of cholesterol and is easy to digest, even for the lactose intolerant. Camel milk can also be made into ice cream. Meat Approximately 3.3 million camels and camelids are slaughtered each year for meat worldwide. A camel carcass can provide a substantial amount of meat. The male dromedary carcass can weigh , while the carcass of a male Bactrian can weigh up to . The carcass of a female dromedary weighs less than that of the male, ranging between . The brisket, ribs and loin are among the preferred parts, and the hump is considered a delicacy. The hump contains "white and sickly fat", which can be used to make the khli (preserved meat) of mutton, beef, or camel. 
On the other hand, camel milk and meat are rich in protein, vitamins, glycogen, and other nutrients making them essential in the diet of many people. From chemical composition to meat quality, the dromedary camel is the preferred breed for meat production. It does well even in arid areas due to its unusual physiological behaviors and characteristics, which include tolerance to extreme temperatures, radiation from the sun, water paucity, rugged landscape and low vegetation. Camel meat is reported to taste like coarse beef, but older camels can prove to be very tough, although camel meat becomes tenderer the more it is cooked. Camel is one of the animals that can be ritually slaughtered and divided into three portions (one for the home, one for extended family/social networks, and one for those who cannot afford to slaughter an animal themselves) for the qurban of Eid al-Adha. The Abu Dhabi Officers' Club serves a camel burger mixed with beef or lamb fat in order to improve the texture and taste. In Karachi, Pakistan, some restaurants prepare nihari from camel meat. Specialist camel butchers provide expert cuts, with the hump considered the most popular. Camel meat has been eaten for centuries. It has been recorded by ancient Greek writers as an available dish at banquets in ancient Persia, usually roasted whole. The Roman emperor Heliogabalus enjoyed camel's heel. Camel meat is mainly eaten in certain regions, including Eritrea, Somalia, Djibouti, Saudi Arabia, Egypt, Syria, Libya, Sudan, Ethiopia, Kazakhstan, and other arid regions where alternative forms of protein may be limited or where camel meat has had a long cultural history. Camel blood is also consumable, as is the case among pastoralists in northern Kenya, where camel blood is drunk with milk and acts as a key source of iron, vitamin D, salts and minerals. A 2005 report issued jointly by the Saudi Ministry of Health and the United States Centers for Disease Control and Prevention details four cases of human bubonic plague resulting from the ingestion of raw camel liver. Camel meat is also occasionally found in Australian cuisine: for example, a camel lasagna is available in Alice Springs. Australia has exported camel meat, primarily to the Middle East but also to Europe and the US, for many years. The meat is very popular among East African Australians, such as Somalis, and other Australians have also been buying it. The feral nature of the animals means they produce a different type of meat to farmed camels in other parts of the world, and it is sought after because it is disease-free, and a unique genetic group. Demand is outstripping supply, and governments are being urged not to cull the camels, but redirect the cost of the cull into developing the market. Australia has seven camel dairies, which produce milk, cheese and skincare products in addition to meat. Religion Islam Muslims consider camel meat halal (, 'allowed'). However, according to some Islamic schools of thought, a state of impurity is brought on by the consumption of it. Consequently, these schools hold that Muslims must perform wudhu (ablution) before the next time they pray after eating camel meat. Also, some Islamic schools of thought consider it haram (, 'forbidden') for a Muslim to perform Salat in places where camels lie, as it is said to be a dwelling place of the Shaytan (, 'Devil'). According to Abu Yusuf (d.798), the urine of camels may be used for medical treatment if necessary, but according to Abū Ḥanīfah, the drinking of camel urine is discouraged. 
Islamic texts contain several stories featuring camels. In the story of the people of Thamud, the prophet Salih miraculously brings forth a naqat (, 'milch-camel') out of a rock. After Muhammad migrated from Mecca to Medina (the Hijrah), he allowed his she-camel to roam there; the location where the camel stopped to rest determined the location where he would build his house in Medina. Judaism According to Jewish tradition, camel meat and milk are not kosher. Camels possess only one of the two kosher criteria; although they chew their cud, they do not have cloven hooves: "But these you shall not eat among those that bring up the cud and those that have a cloven hoof: the camel, because it brings up its cud, but does not have a [completely] cloven hoof; it is unclean for you." The Palestinian Muslim Makhamara clan in Yatta, who claim descent from Jews, reportedly avoid eating camel meat, a practice cited as evidence of their Jewish origins. Cultural depictions What may be the oldest carvings of camels were discovered in 2018 in Saudi Arabia. They were analysed by researchers from several scientific disciplines and, in 2021, were estimated to be 7,000 to 8,000 years old. The dating of rock art is made difficult by the lack of organic material in the carvings that may be tested, so the researchers attempting to date them tested animal bones found associated with the carvings, assessed erosion patterns, and analysed tool marks in order to determine a correct date for the creation of the sculptures. This Neolithic dating would make the carvings significantly older than Stonehenge (5,000 years old) and the Egyptian pyramids at Giza (4,500 years old) and it predates estimates for the domestication of camels. Distribution and numbers There are approximately 14 million camels alive , with 90% being dromedaries. Dromedaries alive today are domesticated animals (mostly living in the Horn of Africa, the Sahel, Maghreb, Middle East and South Asia). The Horn region alone has the largest concentration of camels in the world, where the dromedaries constitute an important part of local nomadic life. They provide nomadic people in Somalia and Ethiopia with milk, food, and transportation. Over one million dromedary camels are estimated to be feral in Australia, descended from those introduced as a method of transport in the 19th and early 20th centuries. This population is growing about 8% per year; it was estimated at 700,000 in 2008. Representatives of the Australian government have culled more than 100,000 of the animals in part because the camels use too much of the limited resources needed by sheep farmers. A small population of introduced camels, dromedaries and Bactrians, wandered through Southwestern United States after having been imported in the 19th century as part of the U.S. Camel Corps experiment. When the project ended, they were used as draft animals in mines and escaped or were released. Twenty-five U.S. camels were bought and exported to Canada during the Cariboo Gold Rush. The Bactrian camel is, , reduced to an estimated 1.4 million animals, most of which are domesticated. The Wild Bactrian camel is the only truly wild (as opposed to feral) camel in the world. It is a distinct species that is not ancestral to the domestic Bactrian camel. The wild camels are critically endangered and number approximately 950, inhabiting the Gobi and Taklamakan Deserts in China and Mongolia.
Biology and health sciences
Artiodactyla
null
6621
https://en.wikipedia.org/wiki/Cnidaria
Cnidaria
Cnidaria ( ) is a phylum under kingdom Animalia containing over 11,000 species of aquatic invertebrates found both in fresh water and marine environments (predominantly the latter), including jellyfish, hydroids, sea anemones, corals and some of the smallest marine parasites. Their distinguishing features are an uncentralized nervous system distributed throughout a gelatinous body and the presence of cnidocytes or cnidoblasts, specialized cells with ejectable flagella used mainly for envenomation and capturing prey. Their bodies consist of mesoglea, a non-living, jelly-like substance, sandwiched between two layers of epithelium that are mostly one cell thick. Cnidarians are also some of the few animals that can reproduce both sexually and asexually. Cnidarians mostly have two basic body forms: swimming medusae and sessile polyps, both of which are radially symmetrical with mouths surrounded by tentacles that bear cnidocytes, which are specialized stinging cells used to capture prey. Both forms have a single orifice and body cavity that are used for digestion and respiration. Many cnidarian species produce colonies that are single organisms composed of medusa-like or polyp-like zooids, or both (hence they are trimorphic). Cnidarians' activities are coordinated by a decentralized nerve net and simple receptors. Cnidarians also have rhopalia, which are involved in gravity sensing and sometimes chemoreception. Several free-swimming species of Cubozoa and Scyphozoa possess balance-sensing statocysts, and some have simple eyes. Not all cnidarians reproduce sexually, but many species have complex life cycles of asexual polyp stages and sexual medusae stages. Some, however, omit either the polyp or the medusa stage, and the parasitic classes evolved to have neither form. Cnidarians were formerly grouped with ctenophores, also known as comb jellies, in the phylum Coelenterata, but increasing awareness of their differences caused them to be placed in separate phyla. Cnidarians are classified into four main groups: the almost wholly sessile Anthozoa (sea anemones, corals, sea pens); swimming Scyphozoa (jellyfish); Cubozoa (box jellies); and Hydrozoa (a diverse group that includes all the freshwater cnidarians as well as many marine forms, and which has both sessile members, such as Hydra, and colonial swimmers (such as the Portuguese man o' war)). Staurozoa have recently been recognised as a class in their own right rather than a sub-group of Scyphozoa, and the highly derived parasitic Myxozoa and Polypodiozoa were firmly recognized as cnidarians only in 2007. Most cnidarians prey on organisms ranging in size from plankton to animals several times larger than themselves, but many obtain much of their nutrition from symbiotic dinoflagellates, and a few are parasites. Many are preyed on by other animals including starfish, sea slugs, fish, turtles, and even other cnidarians. Many scleractinian corals—which form the structural foundation for coral reefs—possess polyps that are filled with symbiotic photo-synthetic zooxanthellae. While reef-forming corals are almost entirely restricted to warm and shallow marine waters, other cnidarians can be found at great depths, in polar regions, and in freshwater. Cnidarians are a very ancient phylum, with fossils having been found in rocks formed about during the Ediacaran period, preceding the Cambrian Explosion. Other fossils show that corals may have been present shortly before and diversified a few million years later. 
Molecular clock analysis of mitochondrial genes suggests an even older age for the crown group of cnidarians, estimated around , almost 200 million years before the Cambrian period, as well as before any fossils. Recent phylogenetic analyses support monophyly of cnidarians, as well as the position of cnidarians as the sister group of bilaterians. Etymology The term cnidaria derives from the Ancient Greek word knídē (κνίδη “nettle”), signifying the coiled thread reminiscent of cnidocytes. The word was first coined in 1766 by the Swedish naturalist Peter Simon Pallas. Distinguishing features Cnidarians form a phylum of animals that are more complex than sponges, about as complex as ctenophores (comb jellies), and less complex than bilaterians, which include almost all other animals. Both cnidarians and ctenophores are more complex than sponges as they have: cells bound by inter-cell connections and carpet-like basement membranes; muscles; nervous systems; and some have sensory organs. Cnidarians are distinguished from all other animals by having cnidocytes that fire harpoon-like structures that are mainly used to capture prey. In some species, cnidocytes can also be used as anchors. Cnidarians are also distinguished by the fact that they have only one opening in their body for ingestion and excretion i.e. they do not have a separate mouth and anus. Like sponges and ctenophores, cnidarians have two main layers of cells that sandwich a middle layer of jelly-like material, which is called the mesoglea in cnidarians; more complex animals have three main cell layers and no intermediate jelly-like layer. Hence, cnidarians and ctenophores have traditionally been labelled diploblastic, along with sponges. However, both cnidarians and ctenophores have a type of muscle that, in more complex animals, arises from the middle cell layer. As a result, some recent text books classify ctenophores as triploblastic, and it has been suggested that cnidarians evolved from triploblastic ancestors. Description Basic body forms Most adult cnidarians appear as either free-swimming medusae or sessile polyps, and many hydrozoans species are known to alternate between the two forms. Both are radially symmetrical, like a wheel and a tube respectively. Since these animals have no heads, their ends are described as "oral" (nearest the mouth) and "aboral" (furthest from the mouth). Most have fringes of tentacles equipped with cnidocytes around their edges, and medusae generally have an inner ring of tentacles around the mouth. Some hydroids may consist of colonies of zooids that serve different purposes, such as defence, reproduction and catching prey. The mesoglea of polyps is usually thin and often soft, but that of medusae is usually thick and springy, so that it returns to its original shape after muscles around the edge have contracted to squeeze water out, enabling medusae to swim by a sort of jet propulsion. Skeletons In medusae, the only supporting structure is the mesoglea. Hydra and most sea anemones close their mouths when they are not feeding, and the water in the digestive cavity then acts as a hydrostatic skeleton, rather like a water-filled balloon. Other polyps such as Tubularia use columns of water-filled cells for support. Sea pens stiffen the mesoglea with calcium carbonate spicules and tough fibrous proteins, rather like sponges. In some colonial polyps, a chitinous epidermis gives support and some protection to the connecting sections and to the lower parts of individual polyps. 
A few polyps collect materials such as sand grains and shell fragments, which they attach to their outsides. Some colonial sea anemones stiffen the mesoglea with sediment particles. A mineralized exoskeleton made of calcium carbonate is found in subphylum Anthozoa in the order Scleractinia (stony corals; class Hexacorallia) and the class Octocorallia, and in subphylum Medusozoa in three hydrozoan families in order Anthoathecata; Milleporidae, Stylasteridae and Hydractiniidae (the latter with a mix of calcified and uncalcified species). Main cell layers Cnidaria are diploblastic animals; in other words, they have two main cell layers, while more complex animals are triploblasts having three main layers. The two main cell layers of cnidarians form epithelia that are mostly one cell thick, and are attached to a fibrous basement membrane, which they secrete. They also secrete the jelly-like mesoglea that separates the layers. The layer that faces outwards, known as the ectoderm ("outside skin"), generally contains the following types of cells: Epitheliomuscular cells whose bodies form part of the epithelium but whose bases extend to form muscle fibers in parallel rows. The fibers of the outward-facing cell layer generally run at right angles to the fibers of the inward-facing one. In Anthozoa (anemones, corals, etc.) and Scyphozoa (jellyfish), the mesoglea also contains some muscle cells. Cnidocytes, the harpoon-like "nettle cells" that give the phylum Cnidaria its name. These appear between or sometimes on top of the muscle cells. Nerve cells. Sensory cells appear between or sometimes on top of the muscle cells, and communicate via synapses (gaps across which chemical signals flow) with motor nerve cells, which lie mostly between the bases of the muscle cells. Some form a simple nerve net. Interstitial cells, which are unspecialized and can replace lost or damaged cells by transforming into the appropriate types. These are found between the bases of muscle cells. In addition to epitheliomuscular, nerve and interstitial cells, the inward-facing gastroderm ("stomach skin") contains gland cells that secrete digestive enzymes. In some species it also contains low concentrations of cnidocytes, which are used to subdue prey that is still struggling. The mesoglea contains small numbers of amoeba-like cells, and muscle cells in some species. However, the number of middle-layer cells and types are much lower than in sponges. Polymorphism Polymorphism refers to the occurrence of structurally and functionally more than two different types of individuals within the same organism. It is a characteristic feature of cnidarians, particularly the polyp and medusa forms, or of zooids within colonial organisms like those in Hydrozoa. In Hydrozoans, colonial individuals arising from individual zooids will take on separate tasks. For example, in Obelia there are feeding individuals, the gastrozooids; the individuals capable of asexual reproduction only, the gonozooids, blastostyles and free-living or sexually reproducing individuals, the medusae. Cnidocytes These "nettle cells" function as harpoons, since their payloads remain connected to the bodies of the cells by threads. Three types of cnidocytes are known: Nematocysts inject venom into prey, and usually have barbs to keep them embedded in the victims. Most species have nematocysts. Spirocysts do not penetrate the victim or inject venom, but entangle it by means of small sticky hairs on the thread. 
Ptychocysts are not used for prey capture — instead the threads of discharged ptychocysts are used for building protective tubes in which their owners live. Ptychocysts are found only in the order Ceriantharia, tube anemones. The main components of a cnidocyte are: A cilium (fine hair) which projects above the surface and acts as a trigger. Spirocysts do not have cilia. A tough capsule, the cnida, which houses the thread, its payload and a mixture of chemicals that may include venom or adhesives or both. ("cnida" is derived from the Greek word κνίδη, which means "nettle") A tube-like extension of the wall of the cnida that points into the cnida, like the finger of a rubber glove pushed inwards. When a cnidocyte fires, the finger pops out. If the cell is a venomous nematocyte, the "finger"'s tip reveals a set of barbs that anchor it in the prey. The thread, which is an extension of the "finger" and coils round it until the cnidocyte fires. The thread is usually hollow and delivers chemicals from the cnida to the target. An operculum (lid) over the end of the cnida. The lid may be a single hinged flap or three flaps arranged like slices of pie. The cell body, which produces all the other parts. It is difficult to study the firing mechanisms of cnidocytes as these structures are small but very complex. At least four hypotheses have been proposed: Rapid contraction of fibers round the cnida may increase its internal pressure. The thread may be like a coiled spring that extends rapidly when released. In the case of Chironex (the "sea wasp"), chemical changes in the cnida's contents may cause them to expand rapidly by polymerization. Chemical changes in the liquid in the cnida make it a much more concentrated solution, so that osmotic pressure forces water in very rapidly to dilute it. This mechanism has been observed in nematocysts of the class Hydrozoa, sometimes producing pressures as high as 140 atmospheres, similar to that of scuba air tanks, and fully extending the thread in as little as 2 milliseconds (0.002 second). Cnidocytes can only fire once, and about 25% of a hydra's nematocysts are lost from its tentacles when capturing a brine shrimp. Used cnidocytes have to be replaced, which takes about 48 hours. To minimise wasteful firing, two types of stimulus are generally required to trigger cnidocytes: nearby sensory cells detect chemicals in the water, and their cilia respond to contact. This combination prevents them from firing at distant or non-living objects. Groups of cnidocytes are usually connected by nerves and, if one fires, the rest of the group requires a weaker minimum stimulus than the cells that fire first. Locomotion Medusae swim by a form of jet propulsion: muscles, especially inside the rim of the bell, squeeze water out of the cavity inside the bell, and the springiness of the mesoglea powers the recovery stroke. Since the tissue layers are very thin, they provide too little power to swim against currents and just enough to control movement within currents. Hydras and some sea anemones can move slowly over rocks and sea or stream beds by various means: creeping like snails, crawling like inchworms, or by somersaulting. A few can swim clumsily by waggling their bases. Nervous system and senses Cnidarians are generally thought to have no brains or even central nervous systems. However, they do have integrative areas of neural tissue that could be considered some form of centralization. 
Most of their bodies are innervated by decentralized nerve nets that control their swimming musculature and connect with sensory structures, though each clade has slightly different structures. These sensory structures, usually called rhopalia, can generate signals in response to various types of stimuli such as light, pressure, chemical changes, and more. Medusae usually have several of them around the margin of the bell, and these work together to control the motor nerve net, which directly innervates the swimming muscles. Most cnidarians also have a parallel system. In scyphozoans, this takes the form of a diffuse nerve net, which has modulatory effects on the nervous system. As well as forming the "signal cables" between sensory neurons and motoneurons, intermediate neurons in the nerve net can also form ganglia that act as local coordination centers. Communication between nerve cells can occur by chemical synapses or gap junctions in hydrozoans, though gap junctions are not present in all groups. Cnidarians have many of the same neurotransmitters as bilaterians, including chemicals such as glutamate, GABA, and glycine. Serotonin, dopamine, noradrenaline, octopamine, histamine, and acetylcholine, on the other hand, are absent. This structure ensures that the musculature is excited rapidly and simultaneously, can be directly stimulated from any point on the body, and is better able to recover after injury. Medusae and complex swimming colonies such as siphonophores and chondrophores sense tilt and acceleration by means of statocysts, chambers lined with hairs which detect the movements of internal mineral grains called statoliths. If the body tilts in the wrong direction, the animal rights itself by increasing the strength of the swimming movements on the side that is too low. Most species have ocelli ("simple eyes"), which can detect sources of light. However, the agile box jellyfish are unique among medusae because they possess four kinds of true eyes that have retinas, corneas and lenses. Although the eyes probably do not form images, Cubozoa can clearly distinguish the direction from which light is coming as well as negotiate around solid-colored objects. Feeding and excretion Cnidarians feed in several ways: predation, absorbing dissolved organic chemicals, filtering food particles out of the water, obtaining nutrients from symbiotic algae within their cells, and parasitism. Most obtain the majority of their food from predation but some, including the corals Heteroxenia and Leptogorgia, depend almost completely on their endosymbionts and on absorbing dissolved nutrients. Cnidaria give their symbiotic algae carbon dioxide, some nutrients, and protection against predators. Predatory species use their cnidocytes to poison or entangle prey, and those with venomous nematocysts may start digestion by injecting digestive enzymes. The "smell" of fluids from wounded prey makes the tentacles fold inwards and wipe the prey off into the mouth. In medusae, the tentacles around the edge of the bell are often short and most of the prey capture is done by "oral arms", which are extensions of the edge of the mouth and are often frilled and sometimes branched to increase their surface area. These "oral arms" help cnidarians move prey towards the mouth once it has been poisoned and entangled. Medusae often trap prey or suspended food particles by swimming upwards, spreading their tentacles and oral arms and then sinking. 
In species for which suspended food particles are important, the tentacles and oral arms often have rows of cilia whose beating creates currents that flow towards the mouth, and some produce nets of mucus to trap particles. Their digestion is both intra and extracellular. Once the food is in the digestive cavity, gland cells in the gastroderm release enzymes that reduce the prey to slurry, usually within a few hours. This circulates through the digestive cavity and, in colonial cnidarians, through the connecting tunnels, so that gastroderm cells can absorb the nutrients. Absorption may take a few hours, and digestion within the cells may take a few days. The circulation of nutrients is driven by water currents produced by cilia in the gastroderm or by muscular movements or both, so that nutrients reach all parts of the digestive cavity. Nutrients reach the outer cell layer by diffusion or, for animals or zooids such as medusae which have thick mesogleas, are transported by mobile cells in the mesoglea. Indigestible remains of prey are expelled through the mouth. The main waste product of cells' internal processes is ammonia, which is removed by the external and internal water currents. Respiration There are no respiratory organs, and both cell layers absorb oxygen from and expel carbon dioxide into the surrounding water. When the water in the digestive cavity becomes stale it must be replaced, and nutrients that have not been absorbed will be expelled with it. Some Anthozoa have ciliated grooves on their tentacles, allowing them to pump water out of and into the digestive cavity without opening the mouth. This improves respiration after feeding and allows these animals, which use the cavity as a hydrostatic skeleton, to control the water pressure in the cavity without expelling undigested food. Cnidaria that carry photosynthetic symbionts may have the opposite problem, an excess of oxygen, which may prove toxic. The animals produce large quantities of antioxidants to neutralize the excess oxygen. Regeneration All cnidarians can regenerate, allowing them to recover from injury and to reproduce asexually. Medusae have limited ability to regenerate, but polyps can do so from small pieces or even collections of separated cells. This enables corals to recover even after apparently being destroyed by predators. Reproduction Sexual Cnidarian sexual reproduction often involves a complex life cycle with both polyp and medusa stages. For example, in Scyphozoa (jellyfish) and Cubozoa (box jellies), a larva swims until it finds a good site, and then becomes a polyp. This grows normally but then absorbs its tentacles and splits horizontally into a series of disks that become juvenile medusae, a process called strobilation. The juveniles swim off and slowly grow to maturity, while the polyp re-grows and may continue strobilating periodically. The adult medusae have gonads in the gastroderm, and these release ova and sperm into the water in the breeding season. This phenomenon of succession of differently organized generations (one asexually reproducing, sessile polyp, followed by a free-swimming medusa or a sessile polyp that reproduces sexually) is sometimes called "alternation of asexual and sexual phases" or "metagenesis", but should not be confused with the alternation of generations as found in plants. Shortened forms of this life cycle are common, for example some oceanic scyphozoans omit the polyp stage completely, and cubozoan polyps produce only one medusa. 
Hydrozoa have a variety of life cycles. Some have no polyp stages and some (e.g. hydra) have no medusae. In some species, the medusae remain attached to the polyp and are responsible for sexual reproduction; in extreme cases these reproductive zooids may not look much like medusae. Meanwhile, life cycle reversal, in which polyps are formed directly from medusae without the involvement of sexual reproduction process, was observed in both Hydrozoa (Turritopsis dohrnii and Laodicea undulata) and Scyphozoa (Aurelia sp.1). Anthozoa have no medusa stage at all and the polyps are responsible for sexual reproduction. Spawning is generally driven by environmental factors such as changes in the water temperature, and their release is triggered by lighting conditions such as sunrise, sunset or the phase of the moon. Many species of Cnidaria may spawn simultaneously in the same location, so that there are too many ova and sperm for predators to eat more than a tiny percentage — one famous example is the Great Barrier Reef, where at least 110 corals and a few non-cnidarian invertebrates produce enough gametes to turn the water cloudy. These mass spawnings may produce hybrids, some of which can settle and form polyps, but it is not known how long these can survive. In some species the ova release chemicals that attract sperm of the same species. The fertilized eggs develop into larvae by dividing until there are enough cells to form a hollow sphere (blastula) and then a depression forms at one end (gastrulation) and eventually becomes the digestive cavity. However, in cnidarians the depression forms at the end further from the yolk (at the animal pole), while in bilaterians it forms at the other end (vegetal pole). The larvae, called planulae, swim or crawl by means of cilia. They are cigar-shaped but slightly broader at the "front" end, which is the aboral, vegetal-pole end and eventually attaches to a substrate if the species has a polyp stage. Anthozoan larvae either have large yolks or are capable of feeding on plankton, and some already have endosymbiotic algae that help to feed them. Since the parents are immobile, these feeding capabilities extend the larvae's range and avoid overcrowding of sites. Scyphozoan and hydrozoan larvae have little yolk and most lack endosymbiotic algae, and therefore have to settle quickly and metamorphose into polyps. Instead, these species rely on their medusae to extend their ranges. Asexual All known cnidarians can reproduce asexually by various means, in addition to regenerating after being fragmented. Hydrozoan polyps only bud, while the medusae of some hydrozoans can divide down the middle. Scyphozoan polyps can both bud and split down the middle. In addition to both of these methods, Anthozoa can split horizontally just above the base. Asexual reproduction makes the daughter cnidarian a clone of the adult. The ability of cnidarians to asexually reproduce ensures a greater number of mature medusa that can mature to reproduce sexually. DNA repair Two classical DNA repair pathways, nucleotide excision repair and base excision repair, are present in hydra, and these repair pathways facilitate unhindered reproduction. The identification of these pathways in hydra is based, in part, on the presence in the hydra genome of genes homologous to genes in other genetically well studied species that have been demonstrated to play key roles in these DNA repair pathways. 
Classification Cnidarians were for a long time grouped with ctenophores in the phylum Coelenterata, but increasing awareness of their differences caused them to be placed in separate phyla. Modern cnidarians are generally classified into four main classes: sessile Anthozoa (sea anemones, corals, sea pens); swimming Scyphozoa (jellyfish) and Cubozoa (box jellies); and Hydrozoa, a diverse group that includes all the freshwater cnidarians as well as many marine forms, and has both sessile members such as Hydra and colonial swimmers such as the Portuguese Man o' War. Staurozoa have recently been recognised as a class in their own right rather than a sub-group of Scyphozoa, and the parasitic Myxozoa and Polypodiozoa are now recognized as highly derived cnidarians rather than more closely related to the bilaterians. Stauromedusae, small sessile cnidarians with stalks and no medusa stage, have traditionally been classified as members of the Scyphozoa, but recent research suggests they should be regarded as a separate class, Staurozoa. The Myxozoa, microscopic parasites, were first classified as protozoans. Research then found that Polypodium hydriforme, a non-myxozoan parasite within the egg cells of sturgeon, is closely related to the Myxozoa and suggested that both Polypodium and the Myxozoa were intermediate between cnidarians and bilaterian animals. More recent research demonstrates that the previous identification of bilaterian genes reflected contamination of the myxozoan samples by material from their host organism, and they are now firmly identified as heavily derived cnidarians, and more closely related to Hydrozoa and Scyphozoa than to Anthozoa. Some researchers classify the extinct conulariids as cnidarians, while others propose that they form a completely separate phylum. Current classification according to the World Register of Marine Species: class Anthozoa Ehrenberg, 1834 subclass Ceriantharia Perrier, 1893 — Tube-dwelling anemones subclass Hexacorallia Haeckel, 1896 — stony corals subclass Octocorallia Haeckel, 1866 — soft corals and sea fans class Cubozoa Werner, 1973 — box jellies class Hydrozoa Owen, 1843 — hydrozoans (fire corals, hydroids, hydroid jellyfishes, siphonophores...) class Myxozoa Grassé, 1970 — obligate parasites class Polypodiozoa Raikova, 1994 — (uncertain status) class Scyphozoa Goette, 1887 — "true" jellyfishes class Staurozoa Marques & Collins, 2004 — stalked jellyfishes Ecology Many cnidarians are limited to shallow waters because they depend on endosymbiotic algae for much of their nutrients. The life cycles of most have polyp stages, which are limited to locations that offer stable substrates. Nevertheless, major cnidarian groups contain species that have escaped these limitations. Hydrozoans have a worldwide range: some, such as Hydra, live in freshwater; Obelia appears in the coastal waters of all the oceans; and Liriope can form large shoals near the surface in mid-ocean. Among anthozoans, a few scleractinian corals, sea pens and sea fans live in deep, cold waters, and some sea anemones inhabit polar seabeds while others live near hydrothermal vents over below sea-level. Reef-building corals are limited to tropical seas between 30°N and 30°S with a maximum depth of , temperatures between , high salinity, and low carbon dioxide levels. Stauromedusae, although usually classified as jellyfish, are stalked, sessile animals that live in cool to Arctic waters. 
Cnidarians range in size from a mere handful of cells for the parasitic myxozoans, through Hydra's length of , to the lion's mane jellyfish, which may exceed in diameter and in length. Prey of cnidarians ranges from plankton to animals several times larger than themselves. Some cnidarians are parasites, mainly on jellyfish, but a few are major pests of fish. Others obtain most of their nourishment from endosymbiotic algae or dissolved nutrients. Predators of cnidarians include: sea slugs, flatworms and comb jellies, which can incorporate nematocysts into their own bodies for self-defense (nematocysts used by cnidarian predators are referred to as kleptocnidae); starfish, notably the crown of thorns starfish, which can devastate corals; butterfly fish and parrot fish, which eat corals; and marine turtles, which eat jellyfish. Some sea anemones and jellyfish have a symbiotic relationship with some fish; for example clownfish live among the tentacles of sea anemones, and each partner protects the other against predators. Coral reefs form some of the world's most productive ecosystems. Common coral reef cnidarians include both anthozoans (hard corals, octocorals, anemones) and hydrozoans (fire corals, lace corals). The endosymbiotic algae of many cnidarian species are very effective primary producers, in other words converters of inorganic chemicals into organic ones that other organisms can use, and their coral hosts use these organic chemicals very efficiently. In addition, reefs provide complex and varied habitats that support a wide range of other organisms. Fringing reefs just below low-tide level also have a mutually beneficial relationship with mangrove forests at high-tide level and seagrass meadows in between: the reefs protect the mangroves and seagrass from strong currents and waves that would damage them or erode the sediments in which they are rooted, while the mangroves and seagrass protect the coral from large influxes of silt, fresh water and pollutants. This additional level of variety in the environment is beneficial to many types of coral reef animals, which for example may feed in the sea grass and use the reefs for protection or breeding. Evolutionary history Fossil record The earliest widely accepted animal fossils are rather modern-looking cnidarians, possibly from around , although fossils from the Doushantuo Formation can only be dated approximately. The identification of some of these as embryos of animals has been contested, but other fossils from these rocks strongly resemble tubes and other mineralized structures made by corals. Their presence implies that the cnidarian and bilaterian lineages had already diverged. Although the Ediacaran fossil Charnia used to be classified as a jellyfish or sea pen, more recent study of growth patterns in Charnia and modern cnidarians has cast doubt on this hypothesis, leaving the Canadian polyp Haootia and the British Auroralumina as the only recognized cnidarian body fossils from the Ediacaran. Auroralumina is the earliest known animal predator. Few fossils of cnidarians without mineralized skeletons are known from more recent rocks, except in Lagerstätten that preserved soft-bodied animals. A few mineralized fossils that resemble corals have been found in rocks from the Cambrian period, and corals diversified in the Early Ordovician. These corals, which were wiped out in the Permian–Triassic extinction event about , did not dominate reef construction, since sponges and algae also played a major part. 
During the Mesozoic era, rudist bivalves were the main reef-builders, but they were wiped out in the Cretaceous–Paleogene extinction event , and since then the main reef-builders have been scleractinian corals. Phylogeny It is difficult to reconstruct the early stages in the evolutionary "family tree" of animals using only morphology (their shapes and structures), because the large differences between Porifera (sponges), Cnidaria plus Ctenophora (comb jellies), Placozoa and Bilateria (all the more complex animals) make comparisons difficult. Hence reconstructions now rely largely or entirely on molecular phylogenetics, which groups organisms according to similarities and differences in their biochemistry, usually in their DNA or RNA. It is now generally thought that the Calcarea (sponges with calcium carbonate spicules) are more closely related to Cnidaria, Ctenophora (comb jellies) and Bilateria (all the more complex animals) than they are to the other groups of sponges. In 1866, it was proposed that Cnidaria and Ctenophora were more closely related to each other than to Bilateria and formed a group called Coelenterata ("hollow guts"), because Cnidaria and Ctenophora both rely on the flow of water in and out of a single cavity for feeding, excretion and respiration. In 1881, it was proposed that Ctenophora and Bilateria were more closely related to each other, since they shared features that Cnidaria lack, for example muscles in the middle layer (mesoglea in Ctenophora, mesoderm in Bilateria). However, more recent analyses indicate that these similarities are rather vague, and the current view, based on molecular phylogenetics, is that Cnidaria and Bilateria are more closely related to each other than either is to Ctenophora. This grouping of Cnidaria and Bilateria has been labelled "Planulozoa" because it suggests that the earliest Bilateria were similar to the planula larvae of Cnidaria. However, in 2005, Katja Seipel and Volker Schmid suggested that cnidarians and ctenophores are simplified descendants of triploblastic animals, since ctenophores and the medusa stage of some cnidarians have striated muscle, which in bilaterians arises from the mesoderm. They did not commit themselves on whether bilaterians evolved from early cnidarians or from the hypothesized triploblastic ancestors of cnidarians. Resolving the evolutionary relationships within Cnidaria has been equally challenging, with almost every possible combination of clades being proposed. Over time, though, a semi-consensus has started to emerge. The enigmatic Polypodium hydriforme and subphylum Myxozoa have been firmly placed within the Cnidaria and have been shown to be closely related to the Medusozoa. In addition, these two groups are likely to be each other's closest relatives, which, if true, would form the clade "Endocnidozoa". The relationships within the Medusozoa are currently probably the most contentious part of the tree. Traditionally, the class Scyphozoa also included Staurozoa and Cubozoa, but significant morphological differences eventually led to the split of the three. The group containing them has since been named "Acraspeda". The relationships between these three classes and Hydrozoa have been, and still are, debated. A grouping of Scyphozoa and Cubozoa, with Staurozoa as its sister, has found support in nearly all studies, but the position of the remaining class, Hydrozoa, is not understood. 
Several studies have found that Acraspeda is paraphyletic, with Hydrozoa being more closely related to Scyphozoa than to the other classes. At the same time, other studies have recovered Acraspeda as being monophyletic. The subphylum Anthozoa is argued to have either two or three classes, but the relationships between them are not disputed; the tube-dwelling anemones of the class Ceriantharia have consistently been shown to be more closely related to the Hexacorallia than to the Octocorallia. In molecular phylogenetic analyses from 2005 onwards, important groups of developmental genes show the same variety in cnidarians as in chordates. In fact cnidarians, and especially anthozoans (sea anemones and corals), retain some genes that are present in bacteria, protists, plants and fungi but not in bilaterians. The mitochondrial genome in the medusozoan cnidarians, unlike that in other animals, is linear with fragmented genes. The reason for this difference is unknown. Interaction with humans Jellyfish stings killed about 1,500 people in the 20th century, and cubozoans are particularly dangerous. On the other hand, some large jellyfish are considered a delicacy in East and Southeast Asia. Coral reefs have long been economically important as providers of fishing grounds, protectors of shore buildings against currents and tides, and more recently as centers of tourism. However, they are vulnerable to over-fishing, mining for construction materials, pollution, and damage caused by tourism. Beaches protected from tides and storms by coral reefs are often the best places for housing in tropical countries. Reefs are an important food source for low-technology fishing, both on the reefs themselves and in the adjacent seas. However, despite their great productivity, reefs are vulnerable to over-fishing, because much of the organic carbon they produce is exhaled as carbon dioxide by organisms at the middle levels of the food chain and never reaches the larger species that are of interest to fishermen. Tourism centered on reefs provides much of the income of some tropical islands, attracting photographers, divers and sports fishermen. However, human activities damage reefs in several ways: mining for construction materials; pollution, including large influxes of fresh water from storm drains; commercial fishing, including the use of dynamite to stun fish and the capture of young fish for aquariums; and tourist damage caused by boat anchors and the cumulative effect of walking on the reefs. Coral, mainly from the Pacific Ocean, has long been used in jewellery, and demand rose sharply in the 1980s. Some large jellyfish species of the order Rhizostomeae are commonly consumed in Japan, Korea and Southeast Asia. In parts of the range, the fishing industry is restricted to daylight hours and calm conditions in two short seasons, from March to May and August to November. The commercial value of jellyfish food products depends on the skill with which they are prepared, and "Jellyfish Masters" guard their trade secrets carefully. Jellyfish is very low in cholesterol and sugars, but cheap preparation can introduce undesirable amounts of heavy metals. The "sea wasp" Chironex fleckeri has been described as the world's most venomous jellyfish and is held responsible for 67 deaths, although it is difficult to identify the animal as it is almost transparent. Most stings by C. fleckeri cause only mild symptoms. 
Seven other box jellies can cause a set of symptoms called Irukandji syndrome, which takes about 30 minutes to develop, and from a few hours to two weeks to disappear. Hospital treatment is usually required, and there have been a few deaths. A number of the parasitic myxozoans are commercially important pathogens in salmonid aquaculture. A scyphozoan species – Pelagia noctiluca – and a hydrozoan – Muggiaea atlantica – have caused repeated mass mortalities in salmon farms around Ireland over the years: a loss valued at £1 million struck in November 2007; 20,000 fish died off Clare Island in 2013; and four fish farms collectively lost tens of thousands of salmon in September 2017.
Biology and health sciences
Cnidarians
null
6631
https://en.wikipedia.org/wiki/Bus%20%28computing%29
Bus (computing)
In computer architecture, a bus (historically also called a data highway or databus) is a communication system that transfers data between components inside a computer or between computers. It encompasses both hardware (e.g., wires, optical fiber) and software, including communication protocols. At its core, a bus is a shared physical pathway, typically composed of wires, traces on a circuit board, or busbars, that allows multiple devices to communicate. To prevent conflicts and ensure orderly data exchange, buses rely on a communication protocol to manage which device can transmit data at a given time. Buses are categorized based on their role, such as system buses (also known as internal buses, internal data buses, or memory buses) connecting the CPU and memory. Expansion buses, also called peripheral buses, extend the system to connect additional devices, including peripherals. Examples of widely used buses include PCI Express (PCIe) for high-speed internal connections and Universal Serial Bus (USB) for connecting external devices. Modern buses utilize both parallel and serial communication, employing advanced encoding methods to maximize speed and efficiency. Features such as direct memory access (DMA) further enhance performance by allowing data transfers directly between devices and memory without requiring CPU intervention. Address bus An address bus is a bus that is used to specify a physical address. When a processor or DMA-enabled device needs to read or write to a memory location, it specifies that memory location on the address bus (the value to be read or written is sent on the data bus). The width of the address bus determines the amount of memory a system can address. For example, a system with a 32-bit address bus can address 2^32 (4,294,967,296) memory locations. If each memory location holds one byte, the addressable memory space is 4 GB. Address multiplexing Early processors used a wire for each bit of the address width. For example, a 16-bit address bus had 16 physical wires making up the bus. As buses became wider and longer, this approach became expensive in terms of the number of chip pins and board traces. Beginning with the Mostek 4096 DRAM, address multiplexing implemented with multiplexers became common. In a multiplexed address scheme, the address is sent in two equal parts on alternate bus cycles. This halves the number of address bus signals required to connect to the memory. For example, a 32-bit address bus can be implemented by using 16 lines and sending the first half of the memory address, immediately followed by the second half of the memory address. Typically two additional pins in the control bus, the row-address strobe (RAS) and the column-address strobe (CAS), are used to tell the DRAM whether the address bus is currently sending the first half of the memory address or the second half. Implementation Accessing an individual byte frequently requires reading or writing the full bus width (a word) at once. In these instances the least significant bits of the address bus may not even be implemented; it is instead the responsibility of the controlling device to isolate the individual byte required from the complete word transmitted. This is the case, for instance, with the VESA Local Bus, which lacks the two least significant bits, limiting this bus to aligned 32-bit transfers. Historically, there were also some examples of computers that were only able to address words: word machines. 
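The relationship between address-bus width and addressable memory, and the row/column split used in address multiplexing, can be illustrated with a short sketch. Python is used here purely for illustration; the function names and the 16-bit half-width are assumptions made for the example, not part of any real memory-controller interface.

```python
# Illustrative sketch: how address-bus width bounds the addressable space,
# and how a multiplexed DRAM address is split into the two halves sent on
# alternate bus cycles (latched with RAS and CAS).

def addressable_bytes(width_bits: int, bytes_per_location: int = 1) -> int:
    """Number of addressable bytes for a given address-bus width."""
    return (2 ** width_bits) * bytes_per_location

def multiplex_address(address: int, half_width: int = 16):
    """Split a full address into the row/column halves of a multiplexed bus."""
    mask = (1 << half_width) - 1
    row = (address >> half_width) & mask   # first half, latched on RAS
    col = address & mask                   # second half, latched on CAS
    return row, col

if __name__ == "__main__":
    print(addressable_bytes(32))           # 4294967296 bytes = 4 GB
    row, col = multiplex_address(0xDEADBEEF)
    print(hex(row), hex(col))              # 0xdead 0xbeef
```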
Memory bus The memory bus is the bus that connects the main memory to the memory controller in computer systems. Originally, general-purpose buses like VMEbus and the S-100 bus were used, but to reduce latency, modern memory buses are designed to connect directly to DRAM chips, and thus are defined by chip standards bodies such as JEDEC. Examples are the various generations of SDRAM, and serial point-to-point buses like SLDRAM and RDRAM. Implementation details Buses can be parallel buses, which carry data words in parallel on multiple wires, or serial buses, which carry data in bit-serial form. The addition of extra power and control connections, differential drivers, and data connections in each direction usually means that most serial buses have more conductors than the minimum of one used in 1-Wire and UNI/O. As data rates increase, the problems of timing skew, power consumption, electromagnetic interference and crosstalk across parallel buses become more and more difficult to circumvent. One partial solution to this problem has been to double pump the bus. Often, a serial bus can be operated at higher overall data rates than a parallel bus, despite having fewer electrical connections, because a serial bus inherently has no timing skew or crosstalk. USB, FireWire, and Serial ATA are examples of this. Multidrop connections do not work well for fast serial buses, so most modern serial buses use daisy-chain or hub designs. The transition from parallel to serial buses was enabled by Moore's law, which allowed SerDes circuitry to be incorporated into the integrated circuits used in computers. Network connections such as Ethernet are not generally regarded as buses, although the difference is largely conceptual rather than practical. An attribute generally used to characterize a bus is that power is provided by the bus for the connected hardware. This emphasizes the busbar origins of bus architecture as supplying switched or distributed power. This excludes, as buses, schemes such as serial RS-232, parallel Centronics, IEEE 1284 interfaces and Ethernet, since these devices also needed separate power supplies. Universal Serial Bus devices may use the bus-supplied power, but often use a separate power source. This distinction is exemplified by a telephone system with a connected modem, where the RJ11 connection and associated modulated signalling scheme is not considered a bus, and is analogous to an Ethernet connection. A phone line connection scheme is not considered to be a bus with respect to signals, but the Central Office uses buses with cross-bar switches for connections between phones. However, this distinction, that power is provided by the bus, is not the case in many avionic systems, where data connections such as ARINC 429, ARINC 629, MIL-STD-1553B (STANAG 3838), and EFABus (STANAG 3910) are commonly referred to as “data buses” or, sometimes, "databuses". Such avionic data buses are usually characterized by having several equipments or Line Replaceable Items/Units (LRI/LRUs) connected to a common, shared medium. They may, as with ARINC 429, be simplex, i.e. have a single source LRI/LRU, or, as with ARINC 629, MIL-STD-1553B, and STANAG 3910, be duplex, allowing all the connected LRI/LRUs to act, at different times (half duplex), as transmitters and receivers of data. The frequency or speed of a bus is measured in hertz (such as MHz) and determines how many clock cycles there are per second; there can be one or more data transfers per clock cycle. 
If there is a single transfer per clock cycle it is known as Single Data Rate (SDR), and if there are two transfers per clock cycle it is known as Double Data Rate (DDR), although the use of signalling other than SDR is uncommon outside of RAM. An example of this is PCIe, which uses SDR. Within each data transfer there can be multiple bits of data. This is described as the width of a bus, which is the number of bits the bus carries in each transfer; it can be synonymous with the number of physical electrical conductors the bus has if each conductor transfers one bit at a time. The data rate in bits per second can be obtained by multiplying the width by the number of transfers per clock cycle and by the frequency. Alternatively, a bus such as PCIe can use modulation or encoding such as PAM4, which groups 2 bits into symbols that are then transferred instead of the bits themselves, and allows for an increase in data transfer speed without increasing the frequency of the bus. The effective or real data transfer speed/rate may be lower due to the use of encoding that also allows for error correction, such as 128/130b (b for bit) encoding. The data transfer speed is also known as the bandwidth; a short worked sketch of this calculation is given below. Bus multiplexing The simplest system bus has completely separate input data lines, output data lines, and address lines. To reduce cost, most microcomputers have a bidirectional data bus, re-using the same wires for input and output at different times. Some processors use a dedicated wire for each bit of the address bus, data bus, and the control bus. For example, the 64-pin STEbus is composed of 8 physical wires dedicated to the 8-bit data bus, 20 physical wires dedicated to the 20-bit address bus, 21 physical wires dedicated to the control bus, and 15 physical wires dedicated to various power buses. Bus multiplexing requires fewer wires, which reduced costs in many early microprocessors and DRAM chips. One common multiplexing scheme, address multiplexing, has already been mentioned. Another multiplexing scheme re-uses the address bus pins as the data bus pins, an approach used by conventional PCI and the 8086. The various "serial buses" can be seen as the ultimate limit of multiplexing, sending each of the address bits and each of the data bits, one at a time, through a single pin (or a single differential pair). History Over time, several groups of people worked on various computer bus standards, including the IEEE Bus Architecture Standards Committee (BASC), the IEEE "Superbus" study group, the open microprocessor initiative (OMI), the open microsystems initiative (OMI), the "Gang of Nine" that developed EISA, etc. 
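As the worked sketch referred to above, the following illustrates the data-rate relationship described earlier in this section: width, multiplied by transfers per clock cycle and by clock frequency, optionally scaled by an encoding efficiency. Python is used purely for illustration, and the sample figures (a 64-bit DDR bus at 200 MHz, a single serial lane at 8 GT/s with 128/130b encoding) are assumptions for the example rather than values taken from any particular bus specification.

```python
# Back-of-the-envelope bandwidth sketch: raw rate = width (bits per transfer)
# x transfers per clock cycle x clock frequency, optionally multiplied by an
# encoding efficiency such as 128b/130b. The figures below are illustrative.

def bus_bandwidth(width_bits, frequency_hz, transfers_per_cycle=1,
                  encoding_efficiency=1.0):
    """Return (raw, effective) data rates in bits per second."""
    raw = width_bits * frequency_hz * transfers_per_cycle
    return raw, raw * encoding_efficiency

if __name__ == "__main__":
    # Hypothetical 64-bit-wide DDR memory bus clocked at 200 MHz.
    raw, _ = bus_bandwidth(64, 200e6, transfers_per_cycle=2)
    print(raw / 8e9, "GB/s")                      # 3.2 GB/s raw

    # Hypothetical single serial lane at 8 GT/s with 128/130b encoding.
    _, eff = bus_bandwidth(1, 8e9, 1, 128 / 130)
    print(round(eff / 8e9, 3), "GB/s")            # ~0.985 GB/s effective
```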
The interrupts had to be prioritized, because the CPU can only execute code for one peripheral at a time, and some devices are more time-critical than others. High-end systems introduced the idea of channel controllers, which were essentially small computers dedicated to handling the input and output of a given bus. IBM introduced these on the IBM 709 in 1958, and they became a common feature of their platforms. Other high-performance vendors like Control Data Corporation implemented similar designs. Generally, the channel controllers would do their best to run all of the bus operations internally, moving data when the CPU was known to be busy elsewhere if possible, and only using interrupts when necessary. This greatly reduced CPU load, and provided better overall system performance. To provide modularity, memory and I/O buses can be combined into a unified system bus. In this case, a single mechanical and electrical system can be used to connect together many of the system components, or in some cases, all of them. Later computer programs began to share memory common to several CPUs. Access to this memory bus had to be prioritized, as well. The simple way to prioritize interrupts or bus access was with a daisy chain. In this case, signals naturally flow through the bus in physical or logical order, eliminating the need for complex scheduling. Minis and micros Digital Equipment Corporation (DEC) further reduced cost for mass-produced minicomputers, and mapped peripherals into the memory bus, so that the input and output devices appeared to be memory locations. This was implemented in the Unibus of the PDP-11 around 1969. Early microcomputer bus systems were essentially a passive backplane connected directly or through buffer amplifiers to the pins of the CPU. Memory and other devices would be added to the bus using the same address and data pins as the CPU itself used, connected in parallel. Communication was controlled by the CPU, which read and wrote data from the devices as if they were blocks of memory, using the same instructions, all timed by a central clock controlling the speed of the CPU. Still, devices interrupted the CPU by signaling on separate CPU pins. For instance, a disk drive controller would signal the CPU that new data was ready to be read, at which point the CPU would move the data by reading the "memory location" that corresponded to the disk drive. Almost all early microcomputers were built in this fashion, starting with the S-100 bus in the Altair 8800 computer system. In some instances, most notably in the IBM PC, although a similar physical architecture can be employed, instructions to access peripherals (in and out) and memory (mov and others) were not made uniform, and still generate distinct CPU signals that could be used to implement a separate I/O bus. These simple bus systems had a serious drawback when used for general-purpose computers. All the equipment on the bus had to talk at the same speed, as it shared a single clock. Increasing the speed of the CPU becomes harder, because the speed of all the devices must increase as well. When it is not practical or economical to have all devices as fast as the CPU, the CPU must either enter a wait state, or work at a slower clock frequency temporarily, to talk to other devices in the computer. While acceptable in embedded systems, this problem was not tolerated for long in general-purpose, user-expandable computers.
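The memory-mapped I/O arrangement described above, in which peripherals appear as memory locations (as on the Unibus and most early microcomputers), can be sketched in C as follows. The UART-style register layout and the base address mentioned in the comment are assumptions for illustration, and the "device" is simulated with a local structure so the program actually runs.

```c
/* Minimal sketch of memory-mapped I/O: a peripheral's registers look like
   ordinary memory, so plain loads and stores move data to and from it.
   The register layout and the UART itself are hypothetical. */
#include <stdint.h>
#include <stdio.h>

struct uart_regs {
    volatile uint8_t data;      /* write a byte here to transmit it        */
    volatile uint8_t status;    /* bit 0 set when ready for the next byte  */
};

int main(void)
{
    struct uart_regs fake_uart = { .data = 0, .status = 0x01 };  /* simulated device */
    struct uart_regs *uart = &fake_uart;
    /* On real hardware this would instead be something like:
       struct uart_regs *uart = (struct uart_regs *)0xE0001000;  (address assumed) */

    if (uart->status & 0x01) {  /* an ordinary load reads the status register  */
        uart->data = 'A';       /* an ordinary store writes the data register  */
    }
    printf("data register now holds 0x%02x\n", uart->data);
    return 0;
}
```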
Such bus systems are also difficult to configure when constructed from common off-the-shelf equipment. Typically each added expansion card requires many jumpers in order to set memory addresses, I/O addresses, interrupt priorities, and interrupt numbers. Second generation "Second generation" bus systems like NuBus addressed some of these problems. They typically separated the computer into two "worlds", the CPU and memory on one side, and the various devices on the other. A bus controller accepted data from the CPU side to be moved to the peripherals side, thus shifting the communications protocol burden from the CPU itself. This allowed the CPU and memory side to evolve separately from the device bus, or just "bus". Devices on the bus could talk to each other with no CPU intervention. This led to much better "real world" performance, but also required the cards to be much more complex. These buses also often addressed speed issues by being "bigger" in terms of the size of the data path, moving from 8-bit parallel buses in the first generation, to 16 or 32-bit in the second, as well as adding software setup (now standardised as Plug-n-play) to supplant or replace the jumpers. However, these newer systems shared one quality with their earlier cousins, in that everyone on the bus had to talk at the same speed. While the CPU was now isolated and could increase speed, CPUs and memory continued to increase in speed much faster than the buses they talked to. The result was that the bus speeds were now much slower than what a modern system needed, and the machines were left starved for data. A particularly common example of this problem was that video cards quickly outran even the newer bus systems like PCI, and computers began to include AGP just to drive the video card. By 2004 AGP was outgrown again by high-end video cards and other peripherals and has been replaced by the new PCI Express bus. An increasing number of external devices started employing their own bus systems as well. When disk drives were first introduced, they would be added to the machine with a card plugged into the bus, which is why computers have so many slots on the bus. But through the 1980s and 1990s, new systems like SCSI and IDE were introduced to serve this need, leaving most slots in modern systems empty. Today there are likely to be about five different buses in the typical machine, supporting various devices. Third generation "Third generation" buses have been emerging into the market since about 2001, including HyperTransport and InfiniBand. They also tend to be very flexible in terms of their physical connections, allowing them to be used both as internal buses, as well as connecting different machines together. This can lead to complex problems when trying to service different requests, so much of the work on these systems concerns software design, as opposed to the hardware itself. In general, these third generation buses tend to look more like a network than the original concept of a bus, with a higher protocol overhead needed than early systems, while also allowing multiple devices to use the bus at once. Buses such as Wishbone have been developed by the open source hardware movement in an attempt to further remove legal and patent constraints from computer design. The Compute Express Link (CXL) is an open standard interconnect for high-speed CPU-to-device and CPU-to-memory, designed to accelerate next-generation data center performance. 
Examples of internal computer buses
Parallel
Asus Media Bus, proprietary, used on some Asus Socket 7 motherboards
Computer Automated Measurement and Control (CAMAC), for instrumentation systems
Extended ISA or EISA
Industry Standard Architecture or ISA
Low Pin Count or LPC
MBus
MicroChannel or MCA
Multibus, for industrial systems
NuBus or IEEE 1196
OPTi local bus, used on early Intel 80486 motherboards
Peripheral Component Interconnect or Conventional PCI
Parallel ATA (also known as Advanced Technology Attachment, ATA, PATA, IDE, EIDE, ATAPI, etc.), hard disk drive, optical disk drive and tape drive peripheral attachment bus
S-100 bus or IEEE 696, used in the Altair 8800 and similar microcomputers
SBus or IEEE 1496
SS-50 Bus
Runway bus, a proprietary front side CPU bus developed by Hewlett-Packard for use by its PA-RISC microprocessor family
GSC/HSC, a proprietary peripheral bus developed by Hewlett-Packard for use by its PA-RISC microprocessor family
Precision Bus, a proprietary bus developed by Hewlett-Packard for use by its HP3000 computer family
STEbus
STD Bus (for STD-80 [8-bit] and STD32 [16-/32-bit])
Unibus, a proprietary bus developed by Digital Equipment Corporation for their PDP-11 and early VAX computers
Q-Bus, a proprietary bus developed by Digital Equipment Corporation for their PDP and later VAX computers
VESA Local Bus or VLB or VL-bus
VMEbus, the VERSAmodule Eurocard bus
PC/104
PC/104-Plus
PCI-104
PCI/104-Express
PCI/104
Zorro II and Zorro III, used in Amiga computer systems
Serial
1-Wire
HyperTransport
I²C
I3C (bus)
SLIMbus
PCI Express or PCIe
Serial ATA (SATA), hard disk drive, solid-state drive, optical disc drive and tape drive peripheral attachment bus
Serial Peripheral Interface (SPI) bus
UNI/O
SMBus
Advanced eXtensible Interface
M-PHY
Examples of external computer buses
Parallel
HIPPI, High Performance Parallel Interface
IEEE-488 (also known as GPIB, General-Purpose Interface Bus, and HPIB, Hewlett-Packard Instrumentation Bus)
PC Card, previously known as PCMCIA, much used in laptop computers and other portables, but fading with the introduction of USB and built-in network and modem connections
Serial
Many field buses are serial data buses (not to be confused with the parallel "data bus" section of a system bus or expansion card), several of which use the RS-485 electrical characteristics and then specify their own protocol and connector:
CAN bus ("Controller Area Network")
Modbus
ARINC 429
MIL-STD-1553
IEEE 1355
Other serial buses include:
Camera Link
eSATA
ExpressCard
IEEE 1394 interface (FireWire)
RS-232
Thunderbolt
USB
Examples of internal/external computer buses
Futurebus
InfiniBand
PCI Express External Cabling
QuickRing
Scalable Coherent Interface (SCI)
Small Computer System Interface (SCSI), hard disk drive and tape drive peripheral attachment bus
Serial Attached SCSI (SAS) and other serial SCSI buses
Thunderbolt
Yapbus, a proprietary bus developed for the Pixar Image Computer
Technology
Computer hardware
null
6641
https://en.wikipedia.org/wiki/Cane%20toad
Cane toad
The cane toad (Rhinella marina), also known as the giant neotropical toad or marine toad, is a large, terrestrial true toad native to South and mainland Central America, but which has been introduced to various islands throughout Oceania and the Caribbean, as well as Northern Australia. It is a member of the genus Rhinella, which includes many true toad species found throughout Central and South America, but it was formerly assigned to the genus Bufo. A fossil toad (specimen UCMP 41159) from the La Venta fauna of the late Miocene in Colombia is morphologically indistinguishable from modern cane toads from northern South America. It was discovered in a floodplain deposit, which suggests the R. marina habitat preferences have long been for open areas. The cane toad is a prolific breeder; females lay single-clump spawns with thousands of eggs. Its reproductive success is partly because of opportunistic feeding: it has a diet, unusual among anurans, of both dead and living matter. Adults average in length; the largest recorded specimen had a snout-vent length of . The cane toad has poison glands, and the tadpoles are highly toxic to most animals if ingested. Its toxic skin can kill many animals, both wild and domesticated, and cane toads are particularly dangerous to dogs. Because of its voracious appetite, the cane toad has been introduced to many regions of the Pacific and the Caribbean islands as a method of agricultural pest control. The common name of the species is derived from its use against the cane beetle (Dermolepida albohirtum), which damages sugar cane. The cane toad is now considered a pest and an invasive species in many of its introduced regions. The 1988 film Cane Toads: An Unnatural History documented the trials and tribulations of the introduction of cane toads in Australia. Taxonomy Historically, the cane toad was used to eradicate pests from sugarcane, giving rise to its common name. The cane toad has many other common names, including "giant toad" and "marine toad"; the former refers to its size, and the latter to the binomial name, R. marina. It was one of many species described by Carl Linnaeus in his 18th-century work Systema Naturae (1758). Linnaeus based the specific epithet marina on an illustration by Dutch zoologist Albertus Seba, who mistakenly believed the cane toad to inhabit both terrestrial and marine environments. Other common names include "giant neotropical toad", "Dominican toad", "giant marine toad", and "South American cane toad". In Trinidadian English, they are commonly called crapaud, the French word for toad. The genus Rhinella is considered to constitute a distinct genus of its own, thus changing the scientific name of the cane toad. In this case, the specific name marinus (masculine) changes to marina (feminine) to conform with the rules of gender agreement as set out by the International Code of Zoological Nomenclature, changing the binomial name from Bufo marinus to Rhinella marina; the binomial Rhinella marinus was subsequently introduced as a synonym through misspelling by Pramuk, Robertson, Sites, and Noonan (2008). Though controversial (with many traditional herpetologists still using Bufo marinus) the binomial Rhinella marina is gaining in acceptance with such bodies as the IUCN, Encyclopaedia of Life, Amphibian Species of the World and increasing numbers of scientific publications adopting its usage. 
Since 2016, cane toad populations native to Mesoamerica and northwestern South America are sometimes considered to be a separate species, Rhinella horribilis. In Australia, the adults may be confused with large native frogs from the genera Limnodynastes, Cyclorana, and Mixophyes. These species can be distinguished from the cane toad by the absence of large parotoid glands behind their eyes and the lack of a ridge between the nostril and the eye. Cane toads have been confused with the giant burrowing frog (Heleioporus australiacus), because both are large and warty in appearance; however, the latter can be readily distinguished from the former by its vertical pupils and its silver-grey (as opposed to gold) irises. Juvenile cane toads may be confused with species of the genus Uperoleia, but their adult colleagues can be distinguished by the lack of bright colouring on the groin and thighs. In the United States, the cane toad closely resembles many bufonid species. In particular, it could be confused with the southern toad (Bufo terrestris), which can be distinguished by the presence of two bulbs in front of the parotoid glands. Taxonomy and evolution The cane toad genome has been sequenced and certain Australian academics believe this will help in understanding how the toad can quickly evolve to adapt to new environments, the workings of its infamous toxin, and hopefully provide new options for halting this species' march across Australia and other places it has spread as an invasive pest. Studies of the genome confirm its evolutionary origins in northern part of South America and its close genetic relation to Rhinella diptycha and other similar species of the genus. Recent studies suggest that R. marina diverged between 2.75 and 9.40 million years ago. A recent split in the species into further subspecies may have occurred approximately 2.7 million years ago following the isolation of population groups by the rising Venezuelan Andes. Description Considered the largest species in the Bufonidae, the cane toad is very large; the females are significantly longer than males, reaching a typical length of , with a maximum of . Larger toads tend to be found in areas of lower population density. They have a life expectancy of 10 to 15 years in the wild, and can live considerably longer in captivity, with one specimen reportedly surviving for 35 years. The skin of the cane toad is dry and warty. Distinct ridges above the eyes run down the snout. Individual cane toads can be grey, yellowish, red-brown, or olive-brown, with varying patterns. A large parotoid gland lies behind each eye. The ventral surface is cream-coloured and may have blotches in shades of black or brown. The pupils are horizontal and the irises golden. The toes have a fleshy webbing at their base, and the fingers are free of webbing. Typically, juvenile cane toads have smooth, dark skin, although some specimens have a red wash. Juveniles lack the adults' large parotoid glands, so they are usually less poisonous. The tadpoles are small and uniformly black, and are bottom-dwellers, tending to form schools. Tadpoles range from in length. Ecology, behaviour and life history The common name "marine toad" and the scientific name Rhinella marina suggest a link to marine life, but cane toads do not live in the sea. 
However, laboratory experiments suggest that tadpoles can tolerate salt concentrations equivalent to 15% of seawater (~5.4‰), and recent field observations found living tadpoles and toadlets at salinities of 27.5‰ on Coiba Island, Panama. The cane toad inhabits open grassland and woodland, and has displayed a "distinct preference" for areas modified by humans, such as gardens and drainage ditches. In their native habitats, the toads can be found in subtropical forests, although dense foliage tends to limit their dispersal. The cane toad begins life as an egg, which is laid as part of long strings of jelly in water. A female lays 8,000–25,000 eggs at once and the strings can stretch up to in length. The black eggs are covered by a membrane and their diameter is about . The rate at which an egg grows into a tadpole increases with temperature. Tadpoles typically hatch within 48 hours, but the period can vary from 14 hours to almost a week. This process usually involves thousands of tadpoles—which are small, black, and have short tails—forming into groups. Between 12 and 60 days are needed for the tadpoles to develop into juveniles, with four weeks being typical. Similarly to their adult counterparts, eggs and tadpoles are toxic to many animals. When they emerge, toadlets typically are about in length, and grow rapidly. While the rate of growth varies by region, time of year, and sex, an average initial growth rate of per day is seen, followed by an average rate of per day. Growth typically slows once the toads reach sexual maturity. This rapid growth is important for their survival; in the period between metamorphosis and subadulthood, the young toads lose the toxicity that protected them as eggs and tadpoles, but have yet to fully develop the parotoid glands that produce bufotoxin. Only an estimated 0.5% of cane toads reach adulthood, in part because they lack this key defense—but also due to tadpole cannibalism. Although cannibalism does occur in the native population in South America, the rapid evolution occurring in the unnaturally large population in Australia has produced tadpoles 30x more likely to be interested in cannibalising their siblings, and 2.6x more likely to actually do so. They have also evolved to shorten their tadpole phase in response to the presence of older tadpoles. These changes are likely genetic, although no genetic basis has been determined. As with rates of growth, the point at which the toads become sexually mature varies across different regions. In New Guinea, sexual maturity is reached by female toads with a snout–vent length between , while toads in Panama achieve maturity when they are between in length. In tropical regions, such as their native habitats, breeding occurs throughout the year, but in subtropical areas, breeding occurs only during warmer periods that coincide with the onset of the wet season. The cane toad is estimated to have a critical thermal maximum of and a minimum of around . The ranges can change due to adaptation to the local environment. Cane toads from some populations can adjust their thermal tolerance within a few hours of encountering low temperatures. The toad is able to rapidly acclimate to the cold using physiological plasticity, though there is also evidence that more northerly populations of cane toads in the United States are better cold-adapted than more southerly populations. These adaptations have allowed the cane toad to establish invasive populations across the world. 
The toad's ability to rapidly acclimate to thermal changes suggests that current models may underestimate the potential range of habitats that the toad can populate. The cane toad has a high tolerance to water loss; some can withstand a 52.6% loss of body water, allowing them to survive outside tropical environments. Diet Most frogs identify prey by movement, and vision appears to be the primary method by which the cane toad detects prey; however, it can also locate food using its sense of smell. They eat a wide range of material; in addition to the normal prey of small rodents, other small mammals, reptiles, other amphibians, birds, and even bats and a range of invertebrates (such as ants, beetles, earwigs, dragonflies, grasshoppers, true bugs, crustaceans, and gastropods), they also eat plants, dog food, cat food, feces, and household refuse. Defences The skin of the adult cane toad is toxic, as well as the enlarged parotoid glands behind the eyes, and other glands across its back. When the toad is threatened, its glands secrete a milky-white fluid known as bufotoxin. Components of bufotoxin are toxic to many animals; even human deaths have been recorded due to the consumption of cane toads. Dogs are especially prone to be poisoned by licking or biting toads. Pets showing excessive drooling, extremely red gums, head-shaking, crying, loss of coordination, and/or convulsions require immediate veterinary attention. Bufotenin, one of the chemicals excreted by the cane toad, is classified as a schedule 9 drug under Australian law, alongside heroin and LSD. The effects of bufotenin are thought to be similar to those of mild poisoning; the stimulation, which includes mild hallucinations, lasts less than an hour. As the cane toad excretes bufotenin in small amounts, and other toxins in relatively large quantities, toad licking could result in serious illness or death. In addition to releasing toxin, the cane toad is capable of inflating its lungs, puffing up, and lifting its body off the ground to appear taller and larger to a potential predator. Since 2011, experimenters in the Kimberley region of Western Australia have used poisonous sausages containing toad meat in an attempt to protect native animals from cane toads' deadly impact. The Western Australian Department of Environment and Conservation, along with the University of Sydney, developed these sausage-shaped baits as a tool in order to train native animals not to eat the toads. By blending bits of toad with a nausea-inducing chemical, the baits train the animals to stay away from the amphibians. Young cane toads that aren't lethal upon ingestion have also been used to teach native predators avoidance, namely yellow-spotted monitors. 200,000 metamorphs, tadpoles, and eggs in total were released in areas ahead of inevitable invasion fronts. Following invasion by wild cane toads, yellow-spotted monitors in control areas bereft of the "teacher toads" were virtually wiped out, but experimental areas still contained substantial populations of yellow-spotted monitors. Predators Many species prey on the cane toad and its tadpoles in its native habitat, including the broad-snouted caiman (Caiman latirostris), the banded cat-eyed snake (Leptodeira annulata), eels (family Anguillidae), various species of killifish, and Paraponera clavata (bullet ants). 
Predators outside the cane toad's native range include the rock flagtail (Kuhlia rupestris), some species of catfish (order Siluriformes), some species of ibis (subfamily Threskiornithinae), the whistling kite (Haliastur sphenurus), the rakali (Hydromys chrysogaster), the black rat (Rattus rattus) and the water monitor (Varanus salvator). The tawny frogmouth (Podargus strigoides) and the Papuan frogmouth (Podargus papuensis) have been reported as feeding on cane toads; some Australian crows (Corvus spp.) have also learned strategies allowing them to feed on cane toads, such as using their beak to flip toads onto their backs. Kookaburras also prey on the amphibians. Opossums of the genus Didelphis likely can eat cane toads with impunity. Meat ants are unaffected by the cane toads' toxins, and so are able to kill them. The cane toad's normal response to attack is to stand still and let its toxin kill or repel the attacker, which allows the ants to attack and eat the toad. Saw-shelled turtles have also been seen successfully and safely eating cane toads. In Australia, rakali (Australian water rats) learnt within two years how to eat cane toads safely. They select the largest toads, turn them over, remove the poisonous gallbladder, and eat the heart and other organs with "surgical precision". They remove the toxic skin and eat the thigh muscle. Other animals such as crows and kites turn cane toads inside out and eat the non-poisonous organs, thus also avoiding the skin. Distribution The cane toad is native to the Americas, and its range stretches from the Rio Grande Valley in South Texas to the central Amazon and southeastern Peru, and some of the continental islands near Venezuela (such as Trinidad and Tobago). This area encompasses both tropical and semiarid environments. The density of the cane toad is significantly lower within its native distribution than in places where it has been introduced. In South America, the density was recorded to be 20 adults per of shoreline, 1 to 2% of the density in Australia. As an introduced species The cane toad has been introduced to many regions of the world—particularly the Pacific—for the biological control of agricultural pests. These introductions have generally been well documented, and the cane toad may be one of the most studied of any introduced species. Before the early 1840s, the cane toad had been introduced into Martinique and Barbados, from French Guiana and Guyana. An introduction to Jamaica was made in 1844 in an attempt to reduce the rat population. Despite its failure to control the rodents, the cane toad was introduced to Puerto Rico in the early 20th century in the hope that it would counter a beetle infestation ravaging the sugarcane plantations. The Puerto Rican scheme was successful and halted the economic damage caused by the beetles, prompting scientists in the 1930s to promote it as an ideal solution to agricultural pests. As a result, many countries in the Pacific region emulated the lead of Puerto Rico and introduced the toad in the 1930s. Introduced populations are in Australia, Florida, Papua New Guinea, the Philippines, the Ogasawara, Ishigaki Island and the Daitō Islands of Japan, Caotun in Nantou, Taiwan, most Caribbean islands, Fiji and many other Pacific islands, including Hawaii. Since then, the cane toad has become a pest in many host countries, and poses a serious threat to native animals.
Australia Following the apparent success of the cane toad in eating the beetles threatening the sugarcane plantations of Puerto Rico, and the fruitful introductions into Hawaiʻi and the Philippines, a strong push was made for the cane toad to be released in Australia to negate the pests ravaging the Queensland cane fields. As a result, 102 toads were collected from Hawaiʻi and brought to Australia. Queensland's sugar scientists released the toad into cane fields in August 1935. After this initial release, the Commonwealth Department of Health decided to ban future introductions until a study was conducted into the feeding habits of the toad. The study was completed in 1936 and the ban lifted, when large-scale releases were undertaken; by March 1937, 62,000 toadlets had been released into the wild. The toads became firmly established in Queensland, increasing exponentially in number and extending their range into the Northern Territory and New South Wales. In 2010, one was found on the far western coast in Broome, Western Australia. However, the toad was generally unsuccessful in reducing the targeted grey-backed cane beetles (Dermolepida albohirtum), in part because the cane fields provided insufficient shelter for the predators during the day, and in part because the beetles live at the tops of sugar cane—and cane toads are not good climbers. Since its original introduction, the cane toad has had a particularly marked effect on Australian biodiversity. The population of a number of native predatory reptiles has declined, such as the varanid lizards Varanus mertensi, V. mitchelli, and V. panoptes, the land snakes Pseudechis australis and Acanthophis antarcticus, and the crocodile species Crocodylus johnstoni; in contrast, the population of the agamid lizard Amphibolurus gilberti—known to be a prey item of V. panoptes—has increased. Meat ants, however, are able to kill cane toads. The cane toad has also been linked to decreases in northern quolls in the southern region of Kakadu National Park and even their local extinction. Caribbean The cane toad was introduced to various Caribbean islands to counter a number of pests infesting local crops. While it was able to establish itself on some islands, such as Barbados, Jamaica, Hispaniola and Puerto Rico, other introductions, such as in Cuba before 1900 and in 1946, and on the islands of Dominica and Grand Cayman, were unsuccessful. The earliest recorded introductions were to Barbados and Martinique. The Barbados introductions were focused on the biological control of pests damaging the sugarcane crops, and while the toads became abundant, they have done even less to control the pests than in Australia. The toad was introduced to Martinique from French Guiana before 1944 and became established. Today, they reduce the mosquito and mole cricket populations. A third introduction to the region occurred in 1884, when toads appeared in Jamaica, reportedly imported from Barbados to help control the rodent population. While they had no significant effect on the rats, they nevertheless became well established. Other introductions include the release on Antigua—possibly before 1916, although this initial population may have died out by 1934 and been reintroduced at a later date—and Montserrat, which had an introduction before 1879 that led to the establishment of a solid population, which was apparently sufficient to survive the Soufrière Hills volcano eruption in 1995. 
In 1920, the cane toad was introduced into Puerto Rico to control the populations of white grub (Phyllophaga spp.), a sugarcane pest. Before this, the pests were manually collected by humans, so the introduction of the toad eliminated labor costs. A second group of toads was imported in 1923, and by 1932, the cane toad was well established. The population of white grubs dramatically decreased, and this was attributed to the cane toad at the annual meeting of the International Sugar Cane Technologists in Puerto Rico. However, there may have been other factors. The six-year period after 1931—when the cane toad was most prolific, and the white grub had a dramatic decline—had the highest-ever rainfall for Puerto Rico. Nevertheless, the cane toad was assumed to have controlled the white grub; this view was reinforced by a Nature article titled "Toads save sugar crop", and this led to large-scale introductions throughout many parts of the Pacific. The cane toad has been spotted in Carriacou and Dominica, the latter appearance occurring in spite of the failure of the earlier introductions. On September 8, 2013, the cane toad was also discovered on the island of New Providence in the Bahamas. The Philippines The cane toad was first introduced deliberately into the Philippines in 1930 as a biological control agent of pests in sugarcane plantations, after the success of the experimental introductions into Puerto Rico. It subsequently became the most ubiquitous amphibian in the islands. It still retains the common name of bakî or kamprag in the Visayan languages, a corruption of 'American frog', referring to its origins. It is also commonly known as "bullfrog" in Philippine English. Fiji The cane toad was introduced into Fiji to combat insects that infested sugarcane plantations. The introduction of the cane toad to the region was first suggested in 1933, following the successes in Puerto Rico and Hawaiʻi. After considering the possible side effects, the national government of Fiji decided to release the toad in 1953, and 67 specimens were subsequently imported from Hawaiʻi. Once the toads were established, a 1963 study concluded, as the toad's diet included both harmful and beneficial invertebrates, it was considered "economically neutral". Today, the cane toad can be found on all major islands in Fiji, although they tend to be smaller than their counterparts in other regions. New Guinea The cane toad was introduced into New Guinea to control the hawk moth larvae eating sweet potato crops. The first release occurred in 1937 using toads imported from Hawaiʻi, with a second release the same year using specimens from the Australian mainland. Evidence suggests a third release in 1938, consisting of toads being used for human pregnancy tests—many species of toad were found to be effective for this task, and were employed for about 20 years after the discovery was announced in 1948. Initial reports argued the toads were effective in reducing the levels of cutworms and sweet potato yields were thought to be improving. As a result, these first releases were followed by further distributions across much of the region, although their effectiveness on other crops, such as cabbages, has been questioned; when the toads were released at Wau, the cabbages provided insufficient shelter and the toads rapidly left the immediate area for the superior shelter offered by the forest. A similar situation had previously arisen in the Australian cane fields, but this experience was either unknown or ignored in New Guinea. 
The cane toad has since become abundant in rural and urban areas. United States The cane toad naturally exists in South Texas, but attempts (both deliberate and accidental) have been made to introduce the species to other parts of the country. These include introductions to Florida and to Hawaiʻi, as well as largely unsuccessful introductions to Louisiana. Initial releases into Florida failed. Attempted introductions before 1936 and 1944, intended to control sugarcane pests, were unsuccessful as the toads failed to proliferate. Later attempts failed in the same way. However, the toad gained a foothold in the state after an accidental release by an importer at Miami International Airport in 1957, and deliberate releases by animal dealers in 1963 and 1964 established the toad in other parts of Florida. Today, the cane toad is well established in the state, from the Keys to north of Tampa, and it is gradually extending further northward. In Florida, the toad is regarded as a threat to native species and pets; so much so that the Florida Fish and Wildlife Conservation Commission recommends that residents kill them. Around 150 cane toads were introduced to Oʻahu in Hawaiʻi in 1932, and the population swelled to 105,517 after 17 months. The toads were sent to the other islands, and more than 100,000 toads were distributed by July 1934; eventually over 600,000 were transported. Uses Other than its use as a biological control for pests, the cane toad has been employed in a number of commercial and noncommercial applications. Traditionally, within the toad's natural range in South America, the Embera-Wounaan would "milk" the toads for their toxin, which was then employed as an arrow poison. The toxins may have been used as an entheogen by the Olmec people. The toad has been hunted as a food source in parts of Peru, and eaten after the careful removal of the skin and parotoid glands. When properly prepared, the meat of the toad is considered healthy and a source of omega-3 fatty acids. More recently, the toad's toxins have been used in a number of new ways: bufotenin has been used in Japan as an aphrodisiac and a hair restorer, and in cardiac surgery in China to lower the heart rates of patients. New research has suggested that the cane toad's poison may have some applications in treating prostate cancer. Other modern applications of the cane toad include pregnancy testing, as pets, laboratory research, and the production of leather goods. Pregnancy testing was conducted in the mid-20th century by injecting urine from a woman into a male toad's lymph sacs, and if spermatozoa appeared in the toad's urine, the patient was deemed to be pregnant. The tests using toads were faster than those employing mammals; the toads were easier to raise, and, although the initial 1948 discovery employed Bufo arenarum for the tests, it soon became clear that a variety of anuran species were suitable, including the cane toad. As a result, toads were employed in this task for around 20 years. As a laboratory animal, the cane toad has numerous advantages: it is plentiful, and easy and inexpensive to maintain and handle. The use of the cane toad in experiments started in the 1950s, and by the end of the 1960s, large numbers were being collected and exported to high schools and universities. Since then, a number of Australian states have introduced or tightened importation regulations. There are several commercial uses for dead cane toads. Cane toad skin is made into leather and novelty items.
Stuffed cane toads, posed and accessorised, are merchandised at souvenir shops for tourists. Attempts have been made to produce fertiliser from toad carcasses.
Biology and health sciences
Frogs and toads
Animals
6670
https://en.wikipedia.org/wiki/Cement
Cement
A cement is a binder, a chemical substance used for construction that sets, hardens, and adheres to other materials to bind them together. Cement is seldom used on its own, but rather to bind sand and gravel (aggregate) together. Cement mixed with fine aggregate produces mortar for masonry, or with sand and gravel, produces concrete. Concrete is the most widely used material in existence and is behind only water as the planet's most-consumed resource. Cements used in construction are usually inorganic, often lime- or calcium silicate-based, and are either hydraulic or less commonly non-hydraulic, depending on the ability of the cement to set in the presence of water (see hydraulic and non-hydraulic lime plaster). Hydraulic cements (e.g., Portland cement) set and become adhesive through a chemical reaction between the dry ingredients and water. The chemical reaction results in mineral hydrates that are not very water-soluble. This allows setting in wet conditions or under water and further protects the hardened material from chemical attack. The chemical process for hydraulic cement was found by ancient Romans who used volcanic ash (pozzolana) with added lime (calcium oxide). Non-hydraulic cement (less common) does not set in wet conditions or under water. Rather, it sets as it dries and reacts with carbon dioxide in the air. It is resistant to attack by chemicals after setting. The word "cement" can be traced back to the Ancient Roman term opus caementicium, used to describe masonry resembling modern concrete that was made from crushed rock with burnt lime as binder. The volcanic ash and pulverized brick supplements that were added to the burnt lime, to obtain a hydraulic binder, were later referred to as , , cäment, and cement. In modern times, organic polymers are sometimes used as cements in concrete. World production of cement is about 4.4 billion tonnes per year (2021 estimate), of which about half is made in China, followed by India and Vietnam. The cement production process is responsible for nearly 8% (2018) of global carbon dioxide emissions, which includes heating raw materials in a cement kiln by fuel combustion and release of the carbon dioxide stored in the calcium carbonate (calcination process). Its hydrated products, such as concrete, gradually reabsorb atmospheric carbon dioxide (carbonation process), compensating for approximately 30% of the initial emissions. Chemistry Cement materials can be classified into two distinct categories: hydraulic cements and non-hydraulic cements according to their respective setting and hardening mechanisms. Hydraulic cement setting and hardening involves hydration reactions and therefore requires water, while non-hydraulic cements only react with a gas and can directly set under air. Hydraulic cement By far the most common type of cement is hydraulic cement, which hardens by hydration of the clinker minerals when water is added. Hydraulic cements (such as Portland cement) are made of a mixture of silicates and oxides, the four main mineral phases of the clinker, abbreviated in the cement chemist notation, being: C3S: alite (3CaO·SiO2); C2S: belite (2CaO·SiO2); C3A: tricalcium aluminate (3CaO·Al2O3) (historically, and still occasionally, called celite); C4AF: brownmillerite (4CaO·Al2O3·Fe2O3). The silicates are responsible for the cement's mechanical properties; the tricalcium aluminate and brownmillerite are essential for the formation of the liquid phase during the sintering (firing) process of clinker at high temperature in the kiln.
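The process emissions mentioned above come from the calcination reaction detailed in the next section (CaCO3 -> CaO + CO2). As a rough check on the scale involved, the C sketch below estimates the carbon dioxide released per kilogram of limestone decomposed, using standard molar masses; these are textbook values, not figures taken from this article, and the result covers only the calcination itself, not the fuel burned to heat the kiln.

```c
/* Minimal sketch: CO2 released by the calcination reaction CaCO3 -> CaO + CO2,
   computed from standard molar masses (g/mol). Illustrative only. */
#include <stdio.h>

int main(void)
{
    const double m_caco3 = 100.09;   /* CaCO3 */
    const double m_cao   = 56.08;    /* CaO   */
    const double m_co2   = 44.01;    /* CO2   */

    /* One mole of CaCO3 yields one mole each of CaO and CO2. */
    double co2_per_caco3 = m_co2 / m_caco3;   /* ~0.44 kg CO2 per kg limestone */
    double co2_per_cao   = m_co2 / m_cao;     /* ~0.78 kg CO2 per kg quicklime */

    printf("CO2 per kg CaCO3 calcined: %.2f kg\n", co2_per_caco3);
    printf("CO2 per kg CaO produced:   %.2f kg\n", co2_per_cao);
    return 0;
}
```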
The chemistry of these reactions is not completely clear and is still the object of research. First, the limestone (calcium carbonate) is burned to remove its carbon, producing lime (calcium oxide) in what is known as a calcination reaction. This single chemical reaction is a major emitter of global carbon dioxide emissions. CaCO3 -> CaO + CO2 The lime reacts with silicon dioxide to produce dicalcium silicate and tricalcium silicate. 2CaO + SiO2 -> 2CaO.SiO2 3CaO + SiO2 -> 3CaO.SiO2 The lime also reacts with aluminium oxide to form tricalcium aluminate. 3CaO + Al2O3 -> 3CaO.Al2O3 In the last step, calcium oxide, aluminium oxide, and ferric oxide react together to form brownmillerite. 4CaO + Al2O3 + Fe2O3 -> 4CaO.Al2O3.Fe2O3 Non-hydraulic cement A less common form of cement is non-hydraulic cement, such as slaked lime (calcium oxide mixed with water), which hardens by carbonation in contact with carbon dioxide, which is present in the air (~ 412 vol. ppm ≃ 0.04 vol. %). First calcium oxide (lime) is produced from calcium carbonate (limestone or chalk) by calcination at temperatures above 825 °C (1,517 °F) for about 10 hours at atmospheric pressure: CaCO3 -> CaO + CO2 The calcium oxide is then spent (slaked) by mixing it with water to make slaked lime (calcium hydroxide): CaO + H2O -> Ca(OH)2 Once the excess water is completely evaporated (this process is technically called setting), the carbonation starts: Ca(OH)2 + CO2 -> CaCO3 + H2O This reaction is slow, because the partial pressure of carbon dioxide in the air is low (~ 0.4 millibar). The carbonation reaction requires that the dry cement be exposed to air, so the slaked lime is a non-hydraulic cement and cannot be used under water. This process is called the lime cycle. History Perhaps the earliest known occurrence of cement is from twelve million years ago. A deposit of cement was formed after an occurrence of oil shale located adjacent to a bed of limestone burned by natural causes. These ancient deposits were investigated in the 1960s and 1970s. Alternatives to cement used in antiquity Cement, chemically speaking, is a product that includes lime as the primary binding ingredient, but is far from the first material used for cementation. The Babylonians and Assyrians used bitumen (asphalt or pitch) to bind together burnt brick or alabaster slabs. In Ancient Egypt, stone blocks were cemented together with a mortar made of sand and roughly burnt gypsum (CaSO4 · 2H2O), which is plaster of Paris, which often contained calcium carbonate (CaCO3), Ancient Greece and Rome Lime (calcium oxide) was used on Crete and by the Ancient Greeks. There is evidence that the Minoans of Crete used crushed potsherds as an artificial pozzolan for hydraulic cement. Nobody knows who first discovered that a combination of hydrated non-hydraulic lime and a pozzolan produces a hydraulic mixture (see also: Pozzolanic reaction), but such concrete was used by the Greeks, specifically the Ancient Macedonians, and three centuries later on a large scale by Roman engineers. The Greeks used volcanic tuff from the island of Thera as their pozzolan and the Romans used crushed volcanic ash (activated aluminium silicates) with lime. This mixture could set under water, increasing its resistance to corrosion like rust. The material was called pozzolana from the town of Pozzuoli, west of Naples where volcanic ash was extracted. 
In the absence of pozzolanic ash, the Romans used powdered brick or pottery as a substitute and they may have used crushed tiles for this purpose before discovering natural sources near Rome. The huge dome of the Pantheon in Rome and the massive Baths of Caracalla are examples of ancient structures made from these concretes, many of which still stand. The vast system of Roman aqueducts also made extensive use of hydraulic cement. Roman concrete was rarely used on the outside of buildings. The normal technique was to use brick facing material as the formwork for an infill of mortar mixed with an aggregate of broken pieces of stone, brick, potsherds, recycled chunks of concrete, or other building rubble. Mesoamerica Lightweight concrete was designed and used for the construction of structural elements by the pre-Columbian builders who lived in a very advanced civilisation in El Tajin near Mexico City, in Mexico. A detailed study of the composition of the aggregate and binder show that the aggregate was pumice and the binder was a pozzolanic cement made with volcanic ash and lime. Middle Ages Any preservation of this knowledge in literature from the Middle Ages is unknown, but medieval masons and some military engineers actively used hydraulic cement in structures such as canals, fortresses, harbors, and shipbuilding facilities. A mixture of lime mortar and aggregate with brick or stone facing material was used in the Eastern Roman Empire as well as in the West into the Gothic period. The German Rhineland continued to use hydraulic mortar throughout the Middle Ages, having local pozzolana deposits called trass. 16th century Tabby is a building material made from oyster shell lime, sand, and whole oyster shells to form a concrete. The Spanish introduced it to the Americas in the sixteenth century. 18th century The technical knowledge for making hydraulic cement was formalized by French and British engineers in the 18th century. John Smeaton made an important contribution to the development of cements while planning the construction of the third Eddystone Lighthouse (1755–59) in the English Channel now known as Smeaton's Tower. He needed a hydraulic mortar that would set and develop some strength in the twelve-hour period between successive high tides. He performed experiments with combinations of different limestones and additives including trass and pozzolanas and did exhaustive market research on the available hydraulic limes, visiting their production sites, and noted that the "hydraulicity" of the lime was directly related to the clay content of the limestone used to make it. Smeaton was a civil engineer by profession, and took the idea no further. In the South Atlantic seaboard of the United States, tabby relying on the oyster-shell middens of earlier Native American populations was used in house construction from the 1730s to the 1860s. In Britain particularly, good quality building stone became ever more expensive during a period of rapid growth, and it became a common practice to construct prestige buildings from the new industrial bricks, and to finish them with a stucco to imitate stone. Hydraulic limes were favored for this, but the need for a fast set time encouraged the development of new cements. Most famous was Parker's "Roman cement". This was developed by James Parker in the 1780s, and finally patented in 1796. 
It was, in fact, nothing like the material used by the Romans, but was a "natural cement" made by burning septaria – nodules that are found in certain clay deposits, and that contain both clay minerals and calcium carbonate. The burnt nodules were ground to a fine powder. This product, made into a mortar with sand, set in 5–15 minutes. The success of "Roman cement" led other manufacturers to develop rival products by burning artificial hydraulic lime cements made of clay and chalk. Roman cement quickly became popular but was largely replaced by Portland cement in the 1850s. 19th century Apparently unaware of Smeaton's work, the Frenchman Louis Vicat identified the same principle in the first decade of the nineteenth century. Vicat went on to devise a method of combining chalk and clay into an intimate mixture, and, burning this, produced an "artificial cement" in 1817 considered the "principal forerunner" of Portland cement and "...Edgar Dobbs of Southwark patented a cement of this kind in 1811." In Russia, Egor Cheliev created a new binder by mixing lime and clay. His results were published in 1822 in his book A Treatise on the Art to Prepare a Good Mortar published in St. Petersburg. A few years later in 1825, he published another book, which described various methods of making cement and concrete, and the benefits of cement in the construction of buildings and embankments. Portland cement, the most common type of cement in general use around the world as a basic ingredient of concrete, mortar, stucco, and non-speciality grout, was developed in England in the mid 19th century, and usually originates from limestone. James Frost produced what he called "British cement" in a similar manner around the same time, but did not obtain a patent until 1822. In 1824, Joseph Aspdin patented a similar material, which he called Portland cement, because the render made from it was similar in color to the prestigious Portland stone quarried on the Isle of Portland, Dorset, England. However, Aspdin's cement was nothing like modern Portland cement but was a first step in its development, called a proto-Portland cement. Joseph Aspdin's son William Aspdin had left his father's company and in his cement manufacturing apparently accidentally produced calcium silicates in the 1840s, a middle step in the development of Portland cement. William Aspdin's innovation was counterintuitive for manufacturers of "artificial cements", because they required more lime in the mix (a problem for his father), a much higher kiln temperature (and therefore more fuel), and the resulting clinker was very hard and rapidly wore down the millstones, which were the only available grinding technology of the time. Manufacturing costs were therefore considerably higher, but the product set reasonably slowly and developed strength quickly, thus opening up a market for use in concrete. The use of concrete in construction grew rapidly from 1850 onward, and was soon the dominant use for cements. Thus Portland cement began its predominant role. Isaac Charles Johnson further refined the production of meso-Portland cement (middle stage of development) and claimed he was the real father of Portland cement. Setting time and "early strength" are important characteristics of cements. Hydraulic limes, "natural" cements, and "artificial" cements all rely on their belite (2 CaO · SiO2, abbreviated as C2S) content for strength development. Belite develops strength slowly.
Because they were burned at temperatures below , they contained no alite (3 CaO · SiO2, abbreviated as C3S), which is responsible for early strength in modern cements. The first cement to consistently contain alite was made by William Aspdin in the early 1840s: This was what we call today "modern" Portland cement. Because of the air of mystery with which William Aspdin surrounded his product, others (e.g., Vicat and Johnson) have claimed precedence in this invention, but recent analysis of both his concrete and raw cement have shown that William Aspdin's product made at Northfleet, Kent was a true alite-based cement. However, Aspdin's methods were "rule-of-thumb": Vicat is responsible for establishing the chemical basis of these cements, and Johnson established the importance of sintering the mix in the kiln. In the US the first large-scale use of cement was Rosendale cement, a natural cement mined from a massive deposit of dolomite discovered in the early 19th century near Rosendale, New York. Rosendale cement was extremely popular for the foundation of buildings (e.g., Statue of Liberty, Capitol Building, Brooklyn Bridge) and lining water pipes. Sorel cement, or magnesia-based cement, was patented in 1867 by the Frenchman Stanislas Sorel. It was stronger than Portland cement but its poor water resistance (leaching) and corrosive properties (pitting corrosion due to the presence of leachable chloride anions and the low pH (8.5–9.5) of its pore water) limited its use as reinforced concrete for building construction. The next development in the manufacture of Portland cement was the introduction of the rotary kiln. It produced a clinker mixture that was both stronger, because more alite (C3S) is formed at the higher temperature it achieved (1450 °C), and more homogeneous. Because raw material is constantly fed into a rotary kiln, it allowed a continuous manufacturing process to replace lower capacity batch production processes. 20th century Calcium aluminate cements were patented in 1908 in France by Jules Bied for better resistance to sulfates. Also in 1908, Thomas Edison experimented with pre-cast concrete in houses in Union, N.J. In the US, after World War One, the long curing time of at least a month for Rosendale cement made it unpopular for constructing highways and bridges, and many states and construction firms turned to Portland cement. Because of the switch to Portland cement, by the end of the 1920s only one of the 15 Rosendale cement companies had survived. But in the early 1930s, builders discovered that, while Portland cement set faster, it was not as durable, especially for highways—to the point that some states stopped building highways and roads with cement. Bertrain H. Wait, an engineer whose company had helped construct the New York City's Catskill Aqueduct, was impressed with the durability of Rosendale cement, and came up with a blend of both Rosendale and Portland cements that had the good attributes of both. It was highly durable and had a much faster setting time. Wait convinced the New York Commissioner of Highways to construct an experimental section of highway near New Paltz, New York, using one sack of Rosendale to six sacks of Portland cement. It was a success, and for decades the Rosendale-Portland cement blend was used in concrete highway and concrete bridge construction. Cementitious materials have been used as a nuclear waste immobilizing matrix for more than a half-century. 
Technologies of waste cementation have been developed and deployed at industrial scale in many countries. Cementitious wasteforms require a careful selection and design process adapted to each specific type of waste to satisfy the strict waste acceptance criteria for long-term storage and disposal. Types Modern development of hydraulic cement began with the start of the Industrial Revolution (around 1800), driven by three main needs: Hydraulic cement render (stucco) for finishing brick buildings in wet climates Hydraulic mortars for masonry construction of harbor works, etc., in contact with sea water Development of strong concretes Modern cements are often Portland cement or Portland cement blends, but other cement blends are used in some industrial settings. Portland cement Portland cement, a form of hydraulic cement, is by far the most common type of cement in general use around the world. This cement is made by heating limestone (calcium carbonate) with other materials (such as clay) to in a kiln, in a process known as calcination that liberates a molecule of carbon dioxide from the calcium carbonate to form calcium oxide, or quicklime, which then chemically combines with the other materials in the mix to form calcium silicates and other cementitious compounds. The resulting hard substance, called 'clinker', is then ground with a small amount of gypsum () into a powder to make ordinary Portland cement, the most commonly used type of cement (often referred to as OPC). Portland cement is a basic ingredient of concrete, mortar, and most non-specialty grout. The most common use for Portland cement is to make concrete. Portland cement may be grey or white. Portland cement blend Portland cement blends are often available as inter-ground mixtures from cement producers, but similar formulations are often also mixed from the ground components at the concrete mixing plant. Portland blast-furnace slag cement, or blast furnace cement (ASTM C595 and EN 197-1 nomenclature respectively), contains up to 95% ground granulated blast furnace slag, with the rest Portland clinker and a little gypsum. All compositions produce high ultimate strength, but as slag content is increased, early strength is reduced, while sulfate resistance increases and heat evolution diminishes. Used as an economic alternative to Portland sulfate-resisting and low-heat cements. Portland-fly ash cement contains up to 40% fly ash under ASTM standards (ASTM C595), or 35% under EN standards (EN 197–1). The fly ash is pozzolanic, so that ultimate strength is maintained. Because fly ash addition allows a lower concrete water content, early strength can also be maintained. Where good quality cheap fly ash is available, this can be an economic alternative to ordinary Portland cement. Portland pozzolan cement includes fly ash cement, since fly ash is a pozzolan, but also includes cements made from other natural or artificial pozzolans. In countries where volcanic ashes are available (e.g., Italy, Chile, Mexico, the Philippines), these cements are often the most common form in use. The maximum replacement ratios are generally defined as for Portland-fly ash cement. Portland silica fume cement. Addition of silica fume can yield exceptionally high strengths, and cements containing 5–20% silica fume are occasionally produced, with 10% being the maximum allowed addition under EN 197–1. However, silica fume is more usually added to Portland cement at the concrete mixer. 
Masonry cements are used for preparing bricklaying mortars and stuccos, and must not be used in concrete. They are usually complex proprietary formulations containing Portland clinker and a number of other ingredients that may include limestone, hydrated lime, air entrainers, retarders, waterproofers, and coloring agents. They are formulated to yield workable mortars that allow rapid and consistent masonry work. Subtle variations of masonry cement in North America are plastic cements and stucco cements. These are designed to produce a controlled bond with masonry blocks. Expansive cements contain, in addition to Portland clinker, expansive clinkers (usually sulfoaluminate clinkers), and are designed to offset the effects of drying shrinkage normally encountered in hydraulic cements. This cement can make concrete for floor slabs (up to 60 m square) without contraction joints. White blended cements may be made using white clinker (containing little or no iron) and white supplementary materials such as high-purity metakaolin. Colored cements serve decorative purposes. Some standards allow the addition of pigments to produce colored Portland cement. Other standards (e.g., ASTM) do not allow pigments in Portland cement, and colored cements are sold as blended hydraulic cements. Very finely ground cements are cement mixed with sand or with slag or other pozzolan type minerals that are extremely finely ground together. Such cements can have the same physical characteristics as normal cement but with 50% less cement, particularly because there is more surface area for the chemical reaction. Even with intensive grinding they can use up to 50% less energy (and thus less carbon emissions) to fabricate than ordinary Portland cements. Other Pozzolan-lime cements are mixtures of ground pozzolan and lime. These are the cements the Romans used, and are present in surviving Roman structures like the Pantheon in Rome. They develop strength slowly, but their ultimate strength can be very high. The hydration products that produce strength are essentially the same as those in Portland cement. Slag-lime cements—ground granulated blast-furnace slag—are not hydraulic on their own, but are "activated" by addition of alkalis, most economically using lime. They are similar to pozzolan lime cements in their properties. Only granulated slag (i.e., water-quenched, glassy slag) is effective as a cement component. Supersulfated cements contain about 80% ground granulated blast furnace slag, 15% gypsum or anhydrite and a little Portland clinker or lime as an activator. They produce strength by formation of ettringite, with strength growth similar to a slow Portland cement. They exhibit good resistance to aggressive agents, including sulfate. Calcium aluminate cements are hydraulic cements made primarily from limestone and bauxite. The active ingredients are monocalcium aluminate CaAl2O4 (CaO · Al2O3 or CA in cement chemist notation, CCN) and mayenite Ca12Al14O33 (12 CaO · 7 Al2O3, or C12A7 in CCN). Strength forms by hydration to calcium aluminate hydrates. They are well-adapted for use in refractory (high-temperature resistant) concretes, e.g., for furnace linings. Calcium sulfoaluminate cements are made from clinkers that include ye'elimite (Ca4(AlO2)6SO4 or C4A3 in Cement chemist's notation) as a primary phase. They are used in expansive cements, in ultra-high early strength cements, and in "low-energy" cements. 
Hydration produces ettringite, and specialized physical properties (such as expansion or rapid reaction) are obtained by adjustment of the availability of calcium and sulfate ions. Their use as a low-energy alternative to Portland cement has been pioneered in China, where several million tonnes per year are produced. Energy requirements are lower because of the lower kiln temperatures required for reaction, and the lower amount of limestone (which must be endothermically decarbonated) in the mix. In addition, the lower limestone content and lower fuel consumption lead to CO2 emissions around half those associated with Portland clinker. However, SO2 emissions are usually significantly higher. "Natural" cements, corresponding to certain cements of the pre-Portland era, are produced by burning argillaceous limestones at moderate temperatures. The level of clay components in the limestone (around 30–35%) is such that large amounts of belite (the low-early-strength, high-late-strength mineral in Portland cement) are formed without the formation of excessive amounts of free lime. As with any natural material, such cements have highly variable properties. Geopolymer cements are made from mixtures of water-soluble alkali metal silicates and aluminosilicate mineral powders such as fly ash and metakaolin. Polymer cements are made from organic chemicals that polymerise. Producers often use thermoset materials. While they are often significantly more expensive, they can give a waterproof material that has useful tensile strength. Sorel cement is a hard, durable cement made by combining magnesium oxide and a magnesium chloride solution. Fiber mesh cement, or fiber-reinforced concrete, is cement that is made up of fibrous materials like synthetic fibers, glass fibers, natural fibers, and steel fibers. This type of mesh is distributed evenly throughout the wet concrete. The purpose of fiber mesh is to reduce water loss from the concrete as well as to enhance its structural integrity. When used in plasters, fiber mesh increases cohesiveness, tensile strength, and impact resistance, and reduces shrinkage; ultimately, the main purpose of these combined properties is to reduce cracking. Electric cement is proposed to be made by recycling cement from demolition wastes in an electric arc furnace as part of a steelmaking process. The recycled cement is intended to be used to replace part or all of the lime used in steelmaking, resulting in a slag-like material that is similar in mineralogy to Portland cement, eliminating most of the associated carbon emissions. Setting, hardening and curing Cement starts to set when mixed with water, which causes a series of hydration chemical reactions. The constituents slowly hydrate and the mineral hydrates solidify and harden. The interlocking of the hydrates gives cement its strength. Contrary to popular belief, hydraulic cement does not set by drying out — proper curing requires maintaining the appropriate moisture content necessary for the hydration reactions during the setting and the hardening processes. If hydraulic cements dry out during the curing phase, the resulting product can be insufficiently hydrated and significantly weakened. A curing temperature of at least 5 °C and no more than 30 °C is recommended. Young concrete must be protected against water evaporation due to direct insolation, elevated temperature, low relative humidity and wind. The interfacial transition zone (ITZ) is a region of the cement paste around the aggregate particles in concrete. 
In the zone, a gradual transition in the microstructural features occurs. This zone can be up to 35 micrometers wide, and other studies have shown that the width can be up to 50 micrometers. The average content of unreacted clinker phase decreases and porosity increases towards the aggregate surface. Similarly, the content of ettringite increases in the ITZ. Safety issues Bags of cement routinely have health and safety warnings printed on them, because cement is not only highly alkaline, but the setting process is also exothermic. As a result, wet cement is strongly caustic (pH = 13.5) and can easily cause severe skin burns if not promptly washed off with water. Similarly, dry cement powder in contact with mucous membranes can cause severe eye or respiratory irritation. Some trace elements, such as chromium, from impurities naturally present in the raw materials used to produce cement, may cause allergic dermatitis. Reducing agents such as ferrous sulfate (FeSO4) are often added to cement to convert the carcinogenic hexavalent chromate (CrO42−) into trivalent chromium (Cr3+), a less toxic chemical species. Cement users also need to wear appropriate gloves and protective clothing. Cement industry in the world In 2010, the world production of hydraulic cement was . The top three producers were China with 1,800, India with 220, and the United States with 63.5 million tonnes; together, these three countries, which are also the world's three most populous, accounted for over half of the world total. World cement production capacity in 2010 showed a similar picture, with the top three countries (China, India, and the US) accounting for just under half of the world's total capacity. Over 2011 and 2012, global consumption continued to climb, rising to 3585 Mt in 2011 and 3736 Mt in 2012, while annual growth rates eased to 8.3% and 4.2%, respectively. China, representing an increasing share of world cement consumption, remains the main engine of global growth. By 2012, Chinese demand was recorded at 2160 Mt, representing 58% of world consumption. Annual growth rates, which reached 16% in 2010, appear to have softened, slowing to 5–6% over 2011 and 2012, as China's economy targets a more sustainable growth rate. Outside of China, worldwide consumption climbed by 4.4% to 1462 Mt in 2010, 5% to 1535 Mt in 2011, and finally 2.7% to 1576 Mt in 2012. Iran is now the third-largest cement producer in the world and increased its output by over 10% from 2008 to 2011. Because of climbing energy costs in Pakistan and other major cement-producing countries, Iran is in a unique position as a trading partner, utilizing its own surplus petroleum to power clinker plants. Now a top producer in the Middle East, Iran is further increasing its dominant position in local markets and abroad. The performance in North America and Europe over the 2010–12 period contrasted strikingly with that of China, as the global financial crisis evolved into a sovereign debt crisis and recession for many economies in this region. Cement consumption levels for this region fell by 1.9% in 2010 to 445 Mt, recovered by 4.9% in 2011, then dipped again by 1.1% in 2012. The performance in the rest of the world, which includes many emerging economies in Asia, Africa and Latin America and which represented some 1020 Mt of cement demand in 2010, was positive and more than offset the declines in North America and Europe. Annual consumption growth was recorded at 7.4% in 2010, moderating to 5.1% and 4.3% in 2011 and 2012, respectively. 
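The growth rates quoted above follow directly from the tonnage figures. As a minimal sanity-check sketch (Python, using only the consumption numbers given in this section):

def growth(previous, current):
    """Year-over-year growth, as a percentage."""
    return (current / previous - 1) * 100

world = {2011: 3585, 2012: 3736}                 # global consumption, Mt
ex_china = {2010: 1462, 2011: 1535, 2012: 1576}  # consumption outside China, Mt

print(round(growth(world[2011], world[2012]), 1))        # 4.2  (global growth, 2012)
print(round(growth(ex_china[2010], ex_china[2011]), 1))  # 5.0  (ex-China growth, 2011)
print(round(growth(ex_china[2011], ex_china[2012]), 1))  # 2.7  (ex-China growth, 2012)
print(round(2160 / world[2012] * 100))                   # 58   (China's share of 2012 demand, %)

The computed values reproduce the percentages stated in the text.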
As at year-end 2012, the global cement industry consisted of 5673 cement production facilities, including both integrated and grinding, of which 3900 were located in China and 1773 in the rest of the world. Total cement capacity worldwide was recorded at 5245 Mt in 2012, with 2950 Mt located in China and 2295 Mt in the rest of the world. China "For the past 18 years, China consistently has produced more cement than any other country in the world. [...] (However,) China's cement export peaked in 1994 with 11 million tonnes shipped out and has been in steady decline ever since. Only 5.18 million tonnes were exported out of China in 2002. Offered at $34 a ton, Chinese cement is pricing itself out of the market as Thailand is asking as little as $20 for the same quality." In 2006, it was estimated that China manufactured 1.235 billion tonnes of cement, which was 44% of the world total cement production. "Demand for cement in China is expected to advance 5.4% annually and exceed 1 billion tonnes in 2008, driven by slowing but healthy growth in construction expenditures. Cement consumed in China will amount to 44% of global demand, and China will remain the world's largest national consumer of cement by a large margin." In 2010, 3.3 billion tonnes of cement was consumed globally. Of this, China accounted for 1.8 billion tonnes. Environmental impacts Cement manufacture causes environmental impacts at all stages of the process. These include emissions of airborne pollution in the form of dust, gases, noise and vibration when operating machinery and during blasting in quarries, and damage to countryside from quarrying. Equipment to reduce dust emissions during quarrying and manufacture of cement is widely used, and equipment to trap and separate exhaust gases are coming into increased use. Environmental protection also includes the re-integration of quarries into the countryside after they have been closed down by returning them to nature or re-cultivating them. emissions Carbon concentration in cement spans from ≈5% in cement structures to ≈8% in the case of roads in cement. Cement manufacturing releases in the atmosphere both directly when calcium carbonate is heated, producing lime and carbon dioxide, and also indirectly through the use of energy if its production involves the emission of . The cement industry produces about 10% of global human-made emissions, of which 60% is from the chemical process, and 40% from burning fuel. A Chatham House study from 2018 estimates that the 4 billion tonnes of cement produced annually account for 8% of worldwide emissions. Nearly 900 kg of are emitted for every 1000 kg of Portland cement produced. In the European Union, the specific energy consumption for the production of cement clinker has been reduced by approximately 30% since the 1970s. This reduction in primary energy requirements is equivalent to approximately 11 million tonnes of coal per year with corresponding benefits in reduction of emissions. This accounts for approximately 5% of anthropogenic . The majority of carbon dioxide emissions in the manufacture of Portland cement (approximately 60%) are produced from the chemical decomposition of limestone to lime, an ingredient in Portland cement clinker. These emissions may be reduced by lowering the clinker content of cement. They can also be reduced by alternative fabrication methods such as the intergrinding cement with sand or with slag or other pozzolan type minerals to a very fine powder. 
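The roughly 60% "process" share of these emissions can be checked against simple stoichiometry: decarbonating limestone (CaCO3 → CaO + CO2) releases 44 g of CO2 for every 56 g of CaO produced. The sketch below (Python) illustrates the arithmetic; the assumed clinker CaO content of about 65% by mass is a typical textbook value, not a figure taken from this article.

# Process CO2 from decarbonating limestone: CaCO3 -> CaO + CO2
M_CAO = 56.08        # molar mass of CaO, g/mol
M_CO2 = 44.01        # molar mass of CO2, g/mol
cao_fraction = 0.65  # assumed CaO mass fraction of Portland clinker (typical value)

process_co2 = cao_fraction * M_CO2 / M_CAO  # tonnes of CO2 per tonne of clinker
print(round(process_co2, 2))                # ~0.51 t CO2 per tonne of clinker

Around 0.5 tonnes of CO2 per tonne of clinker from calcination alone is broadly consistent with roughly 60% of the ~900 kg total quoted above.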
To reduce the transport of heavier raw materials and to minimize the associated costs, it is more economical to build cement plants closer to the limestone quarries than to the consumer centers. Carbon capture and storage is about to be trialed, but its financial viability is uncertain. CO2 absorption Hydrated products of Portland cement, such as concrete and mortars, slowly reabsorb atmospheric CO2 gas, which has been released during calcination in a kiln. This natural process, the reverse of calcination, is called carbonation. As it depends on CO2 diffusion into the bulk of the concrete, its rate depends on many parameters, such as environmental conditions and the surface area exposed to the atmosphere. Carbonation is particularly significant at the later stages of the concrete's life, after demolition and crushing of the debris. It has been estimated that, over the whole life cycle of cement products, nearly 30% of the atmospheric CO2 generated by cement production can be reabsorbed. The carbonation process is considered a mechanism of concrete degradation: it reduces the pH of the concrete, which promotes corrosion of the reinforcement steel. However, because CaCO3, the product of Ca(OH)2 carbonation, occupies a greater volume, the porosity of the concrete is reduced, which increases the strength and hardness of the concrete. There are proposals to reduce the carbon footprint of hydraulic cement by adopting non-hydraulic cement, i.e. lime mortar, for certain applications. It reabsorbs some of the CO2 during hardening, and has a lower energy requirement in production than Portland cement. A few other attempts to increase the absorption of carbon dioxide include cements based on magnesium (Sorel cement). Heavy metal emissions in the air In some circumstances, mainly depending on the origin and the composition of the raw materials used, the high-temperature calcination process of limestone and clay minerals can release into the atmosphere gases and dust rich in volatile heavy metals, of which thallium, cadmium and mercury are the most toxic. Heavy metals (Tl, Cd, Hg, ...) and also selenium are often found as trace elements in common metal sulfides (pyrite (FeS2), zinc blende (ZnS), galena (PbS), ...) present as secondary minerals in most of the raw materials. Environmental regulations exist in many countries to limit these emissions. As of 2011 in the United States, cement kilns are "legally allowed to pump more toxins into the air than are hazardous-waste incinerators." Heavy metals present in the clinker The presence of heavy metals in the clinker arises both from the natural raw materials and from the use of recycled by-products or alternative fuels. The high pH prevailing in the cement porewater (12.5 < pH < 13.5) limits the mobility of many heavy metals by decreasing their solubility and increasing their sorption onto the cement mineral phases. Nickel, zinc and lead are commonly found in cement in non-negligible concentrations. Chromium may also directly arise as a natural impurity from the raw materials or as secondary contamination from the abrasion of hard chromium steel alloys used in the ball mills when the clinker is ground. As chromate (CrO42−) is toxic and may cause severe skin allergies at trace concentrations, it is sometimes reduced to trivalent Cr(III) by addition of ferrous sulfate (FeSO4). Use of alternative fuels and by-product materials A cement plant consumes 3 to 6 GJ of fuel per tonne of clinker produced, depending on the raw materials and the process used. 
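For scale, the 3 to 6 GJ of fuel per tonne of clinker can be expressed as a coal equivalent. A minimal sketch (Python), assuming a typical net calorific value for coal of about 27 MJ/kg; that calorific value is an assumed figure for illustration, not one quoted in this article.

COAL_NCV = 27.0  # assumed net calorific value of coal, MJ/kg (typical, not from the text)

for fuel_gj in (3.0, 6.0):                 # fuel demand per tonne of clinker, GJ
    coal_kg = fuel_gj * 1000 / COAL_NCV    # convert GJ to MJ, then divide by MJ per kg
    print(fuel_gj, "GJ/t clinker ->", round(coal_kg), "kg of coal per tonne of clinker")

On that assumption, each tonne of clinker requires on the order of 110–220 kg of coal, which is the quantity that waste-derived fuels can partly displace.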
Most cement kilns today use coal and petroleum coke as primary fuels, and to a lesser extent natural gas and fuel oil. Selected waste and by-products with recoverable calorific value can be used as fuels in a cement kiln (referred to as co-processing), replacing a portion of conventional fossil fuels, like coal, if they meet strict specifications. Selected waste and by-products containing useful minerals such as calcium, silica, alumina, and iron can be used as raw materials in the kiln, replacing raw materials such as clay, shale, and limestone. Because some materials have both useful mineral content and recoverable calorific value, the distinction between alternative fuels and raw materials is not always clear. For example, sewage sludge has a low but significant calorific value, and burns to give ash containing minerals useful in the clinker matrix. Scrap automobile and truck tires are useful in cement manufacturing as they have high calorific value and the iron embedded in tires is useful as a feed stock. Clinker is manufactured by heating raw materials inside the main burner of a kiln to a temperature of 1,450 °C. The flame reaches temperatures of 1,800 °C. The material remains at 1,200 °C for 12–15 seconds at 1,800 °C or sometimes for 5–8 seconds (also referred to as residence time). These characteristics of a clinker kiln offer numerous benefits and they ensure a complete destruction of organic compounds, a total neutralization of acid gases, sulphur oxides and hydrogen chloride. Furthermore, heavy metal traces are embedded in the clinker structure and no by-products, such as ash or residues, are produced. The EU cement industry already uses more than 40% fuels derived from waste and biomass in supplying the thermal energy to the grey clinker making process. Although the choice for this so-called alternative fuels (AF) is typically cost driven, other factors are becoming more important. Use of alternative fuels provides benefits for both society and the company: -emissions are lower than with fossil fuels, waste can be co-processed in an efficient and sustainable manner and the demand for certain virgin materials can be reduced. Yet there are large differences in the share of alternative fuels used between the European Union (EU) member states. The societal benefits could be improved if more member states increase their alternative fuels share. The Ecofys study assessed the barriers and opportunities for further uptake of alternative fuels in 14 EU member states. The Ecofys study found that local factors constrain the market potential to a much larger extent than the technical and economic feasibility of the cement industry itself. Reduced-footprint cement Growing environmental concerns and the increasing cost of fossil fuels have resulted, in many countries, in a sharp reduction of the resources needed to produce cement, as well as effluents (dust and exhaust gases). Reduced-footprint cement is a cementitious material that meets or exceeds the functional performance capabilities of Portland cement. Various techniques are under development. One is geopolymer cement, which incorporates recycled materials, thereby reducing consumption of raw materials, water, and energy. Another approach is to reduce or eliminate the production and release of damaging pollutants and greenhouse gasses, particularly . Recycling old cement in electric arc furnaces is another approach. 
Also, a team at the University of Edinburgh has developed the 'DUPE' process, based on the microbial activity of Sporosarcina pasteurii, a bacterium that precipitates calcium carbonate and which, when mixed with sand and urine, can produce mortar blocks with a compressive strength 70% of that of concrete.
Technology
Building materials
null
6678
https://en.wikipedia.org/wiki/Cat
Cat
The cat (Felis catus), also referred to as the domestic cat, is a small domesticated carnivorous mammal. It is the only domesticated species of the family Felidae. Advances in archaeology and genetics have shown that the domestication of the cat occurred in the Near East around 7500 BC. It is commonly kept as a pet and farm cat, but also ranges freely as a feral cat avoiding human contact. It is valued by humans for companionship and its ability to kill vermin. Its retractable claws are adapted to killing small prey species such as mice and rats. It has a strong, flexible body, quick reflexes, and sharp teeth, and its night vision and sense of smell are well developed. It is a social species, but a solitary hunter and a crepuscular predator. Cat communication includes vocalizations—including meowing, purring, trilling, hissing, growling, and grunting—as well as body language. It can hear sounds too faint or too high in frequency for human ears, such as those made by small mammals. It secretes and perceives pheromones. Female domestic cats can have kittens from spring to late autumn in temperate zones and throughout the year in equatorial regions, with litter sizes often ranging from two to five kittens. Domestic cats are bred and shown at events as registered pedigreed cats, a hobby known as cat fancy. Animal population control of cats may be achieved by spaying and neutering, but their proliferation and the abandonment of pets has resulted in large numbers of feral cats worldwide, contributing to the extinction of bird, mammal, and reptile species. the domestic cat was the second most popular pet in the United States, with 95.6 million cats owned and around 42 million households owning at least one cat. In the United Kingdom, 26% of adults have a cat, with an estimated population of 10.9 million pet cats there were an estimated 220 million owned and 480 million stray cats in the world. Etymology and naming The origin of the English word cat, Old English , is thought to be the Late Latin word , which was first used at the beginning of the 6th century. The Late Latin word may be derived from an unidentified African language. The Nubian word 'wildcat' and Nobiin are possible sources or cognates. The forms might also have derived from an ancient Germanic word that was absorbed into Latin and then into Greek, Syriac, and Arabic. The word may be derived from Germanic and Northern European languages, and ultimately be borrowed from Uralic, Northern Sámi , 'female stoat', and Hungarian , 'lady, female stoat'; from Proto-Uralic , 'female (of a furred animal)'. The English puss, extended as pussy and pussycat, is attested from the 16th century and may have been introduced from Dutch or from Low German , related to Swedish , or Norwegian , . Similar forms exist in Lithuanian and Irish or . The etymology of this word is unknown, but it may have arisen from a sound used to attract a cat. A male cat is called a tom or tomcat (or a gib, if neutered). A female is called a queen (or sometimes a molly, if spayed). A juvenile cat is referred to as a kitten. In Early Modern English, the word kitten was interchangeable with the now-obsolete word catling. A group of cats can be referred to as a clowder, a glaring, or a colony. Taxonomy The scientific name Felis catus was proposed by Carl Linnaeus in 1758 for a domestic cat. Felis catus domesticus was proposed by Johann Christian Polycarp Erxleben in 1777. 
Felis daemon proposed by Konstantin Satunin in 1904 was a black cat from the Transcaucasus, later identified as a domestic cat. In 2003, the International Commission on Zoological Nomenclature ruled that the domestic cat is a distinct species, namely Felis catus. In 2007, the modern domesticated subspecies F. silvestris catus sampled worldwide was considered to have probably descended from the African wildcat (F. lybica), following results of phylogenetic research. In 2017, the IUCN Cat Classification Taskforce followed the recommendation of the ICZN in regarding the domestic cat as a distinct species, Felis catus. Evolution The domestic cat is a member of the Felidae, a family that had a common ancestor about . The evolutionary radiation of the Felidae began in Asia during the Miocene around . Analysis of mitochondrial DNA of all Felidae species indicates a radiation at . The genus Felis genetically diverged from other Felidae around . Results of phylogenetic research shows that the wild members of this genus evolved through sympatric or parapatric speciation, whereas the domestic cat evolved through artificial selection. The domestic cat and its closest wild ancestor are diploid and both possess 38 chromosomes and roughly 20,000 genes. Domestication It was long thought that the domestication of the cat began in ancient Egypt, where cats were venerated from around 3100 BC. However, the earliest known indication for the taming of an African wildcat was excavated close by a human Neolithic grave in Shillourokambos, southern Cyprus, dating to about 7500–7200 BC. Since there is no evidence of native mammalian fauna on Cyprus, the inhabitants of this Neolithic village most likely brought the cat and other wild mammals to the island from the Middle Eastern mainland. Scientists therefore assume that African wildcats were attracted to early human settlements in the Fertile Crescent by rodents, in particular the house mouse (Mus musculus), and were tamed by Neolithic farmers. This mutual relationship between early farmers and tamed cats lasted thousands of years. As agricultural practices spread, so did tame and domesticated cats. Wildcats of Egypt contributed to the maternal gene pool of the domestic cat at a later time. The earliest known evidence for the occurrence of the domestic cat in Greece dates to around 1200 BC. Greek, Phoenician, Carthaginian and Etruscan traders introduced domestic cats to southern Europe. By the 5th century BC, they were familiar animals around settlements in Magna Graecia and Etruria. During the Roman Empire, they were introduced to Corsica and Sardinia before the beginning of the 1st century AD. By the end of the Western Roman Empire in the 5th century, the Egyptian domestic cat lineage had arrived in a Baltic Sea port in northern Germany. The leopard cat (Prionailurus bengalensis) was tamed independently in China around 5500 BC. This line of partially domesticated cats leaves no trace in the domestic cat populations of today. During domestication, cats have undergone only minor changes in anatomy and behavior, and they are still capable of surviving in the wild. Several natural behaviors and characteristics of wildcats may have pre-adapted them for domestication as pets. These traits include their small size, social nature, obvious body language, love of play, and high intelligence. Since they practice rigorous grooming habits and have an instinctual drive to bury and hide their urine and feces, they are generally much less messy than other domesticated animals. 
Captive Leopardus cats may also display affectionate behavior toward humans but were not domesticated. House cats often mate with feral cats. Hybridization between domestic and other Felinae species is also possible, producing hybrids such as the Kellas cat in Scotland. Development of cat breeds started in the mid 19th century. An analysis of the domestic cat genome revealed that the ancestral wildcat genome was significantly altered in the process of domestication, as specific mutations were selected to develop cat breeds. Most breeds are founded on random-bred domestic cats. Genetic diversity of these breeds varies between regions, and is lowest in purebred populations, which show more than 20 deleterious genetic disorders. Characteristics Size The domestic cat has a smaller skull and shorter bones than the European wildcat. It averages about in head-to-body length and in height, with about long tails. Males are larger than females. Adult domestic cats typically weigh . Skeleton Cats have seven cervical vertebrae (as do most mammals); 13 thoracic vertebrae (humans have 12); seven lumbar vertebrae (humans have five); three sacral vertebrae (as do most mammals, but humans have five); and a variable number of caudal vertebrae in the tail (humans have only three to five vestigial caudal vertebrae, fused into an internal coccyx). The extra lumbar and thoracic vertebrae account for the cat's spinal mobility and flexibility. Attached to the spine are 13 ribs, the shoulder, and the pelvis. Unlike human arms, cat forelimbs are attached to the shoulder by free-floating clavicle bones which allow them to pass their body through any space into which they can fit their head. Skull The cat skull is unusual among mammals in having very large eye sockets and a powerful specialized jaw. Within the jaw, cats have teeth adapted for killing prey and tearing meat. When it overpowers its prey, a cat delivers a lethal neck bite with its two long canine teeth, inserting them between two of the prey's vertebrae and severing its spinal cord, causing irreversible paralysis and death. Compared to other felines, domestic cats have narrowly spaced canine teeth relative to the size of their jaw, which is an adaptation to their preferred prey of small rodents, which have small vertebrae. The premolar and first molar together compose the carnassial pair on each side of the mouth, which efficiently shears meat into small pieces, like a pair of scissors. These are vital in feeding, since cats' small molars cannot chew food effectively, and cats are largely incapable of mastication. Cats tend to have better teeth than most humans, with decay generally less likely because of a thicker protective layer of enamel, a less damaging saliva, less retention of food particles between teeth, and a diet mostly devoid of sugar. Nonetheless, they are subject to occasional tooth loss and infection. Claws Cats have protractible and retractable claws. In their normal, relaxed position, the claws are sheathed with the skin and fur around the paw's toe pads. This keeps the claws sharp by preventing wear from contact with the ground and allows for the silent stalking of prey. The claws on the forefeet are typically sharper than those on the hindfeet. Cats can voluntarily extend their claws on one or more paws. They may extend their claws in hunting or self-defense, climbing, kneading, or for extra traction on soft surfaces. Cats shed the outside layer of their claw sheaths when scratching rough surfaces. 
Most cats have five claws on their front paws and four on their rear paws. The dewclaw is proximal to the other claws. More proximally is a protrusion which appears to be a sixth "finger". This special feature of the front paws on the inside of the wrists has no function in normal walking but is thought to be an antiskidding device used while jumping. Some cat breeds are prone to having extra digits ("polydactyly"). Ambulation The cat is digitigrade. It walks on the toes, with the bones of the feet making up the lower part of the visible leg. Unlike most mammals, it uses a "pacing" gait and moves both legs on one side of the body before the legs on the other side. It registers directly by placing each hind paw close to the track of the corresponding fore paw, minimizing noise and visible tracks. This also provides sure footing for hind paws when navigating rough terrain. As it speeds up from walking to trotting, its gait changes to a "diagonal" gait: The diagonally opposite hind and fore legs move simultaneously. Balance Cats are generally fond of sitting in high places or perching. A higher place may serve as a concealed site from which to hunt; domestic cats strike prey by pouncing from a perch such as a tree branch. Another possible explanation is that height gives the cat a better observation point, allowing it to survey its territory. A cat falling from heights of up to can right itself and land on its paws. During a fall from a high place, a cat reflexively twists its body and rights itself to land on its feet using its acute sense of balance and flexibility. This reflex is known as the cat righting reflex. A cat always rights itself in the same way during a fall, if it has enough time to do so, which is the case in falls of or more. How cats are able to right themselves when falling has been investigated as the "falling cat problem". Coats The cat family (Felidae) can pass down many colors and patterns to their offspring. The domestic cat genes MC1R and ASIP allow color variety in their coats. The feline ASIP gene consists of three coding exons. Three novel microsatellite markers linked to ASIP were isolated from a domestic cat BAC clone containing this gene to perform linkage analysis on 89 domestic cats segregated for melanism. The domestic cat family demonstrated a cosegregation between the ASIP allele and coat black coloration. Senses Vision Cats have excellent night vision and can see at one sixth the light level required for human vision. This is partly the result of cat eyes having a tapetum lucidum, which reflects any light that passes through the retina back into the eye, thereby increasing the eye's sensitivity to dim light. Large pupils are an adaptation to dim light. The domestic cat has slit pupils, which allow it to focus bright light without chromatic aberration. At low light, a cat's pupils expand to cover most of the exposed surface of its eyes. The domestic cat has rather poor color vision and only two types of cone cells, optimized for sensitivity to blue and yellowish green; its ability to distinguish between red and green is limited. A response to middle wavelengths from a system other than the rod cells might be due to a third type of cone. This appears to be an adaptation to low light levels rather than representing true trichromatic vision. Cats also have a nictitating membrane, allowing them to blink without hindering their vision. Hearing The domestic cat's hearing is most acute in the range of 500 Hz to 32 kHz. 
It can detect an extremely broad range of frequencies ranging from 55 Hz to 79 kHz, whereas humans can only detect frequencies between 20 Hz and 20 kHz. It can hear a range of 10.5 octaves, while humans and dogs can hear ranges of about 9 octaves. Its hearing sensitivity is enhanced by its large movable outer ears, the pinnae, which amplify sounds and help detect the location of a noise. It can detect ultrasound, which enables it to detect ultrasonic calls made by rodent prey. Recent research has shown that cats have socio-spatial cognitive abilities to create mental maps of owners' locations based on hearing owners' voices. Smell Cats have an acute sense of smell, due in part to their well-developed olfactory bulb and a large surface of olfactory mucosa, about in area, which is about twice that of humans. Cats and many other animals have a Jacobson's organ in their mouths that is used in the behavioral process of flehmening. It allows them to sense certain aromas in a way that humans cannot. Cats are sensitive to pheromones such as 3-mercapto-3-methylbutan-1-ol, which they use to communicate through urine spraying and marking with scent glands. Many cats also respond strongly to plants that contain nepetalactone, especially catnip, as they can detect that substance at less than one part per billion. About 70–80% of cats are affected by nepetalactone. This response is also produced by other plants, such as silver vine (Actinidia polygama) and the herb valerian; it may be caused by the smell of these plants mimicking a pheromone and stimulating cats' social or sexual behaviors. Taste Cats have relatively few taste buds compared to humans (470 or so, compared to more than 9,000 on the human tongue). Domestic and wild cats share a taste receptor gene mutation that keeps their sweet taste buds from binding to sugary molecules, leaving them with no ability to taste sweetness. But they do have taste bud receptors specialized for acids, amino acids such as the constituents of protein, and bitter tastes. Their taste buds possess the receptors needed to detect umami. However, these receptors contain molecular changes that make cats taste umami differently from humans. In humans, they detect the amino acids glutamic acid and aspartic acid; but in cats, they instead detect inosine monophosphate and histidine. These molecules are particularly enriched in tuna. This, it has been argued, is why cats find tuna so palatable: as put by researchers into cat taste, "the specific combination of the high IMP and free histidine contents of tuna, which produces a strong umami taste synergy that is highly preferred by cats." One of the researchers in this research has stated, "I think umami is as important for cats as sweet is for humans." Cats also have a distinct temperature preference for their food, preferring food at a temperature around which is similar to that of a fresh kill; some cats reject cold food (which would signal to the cat that the "prey" item is long dead and therefore possibly toxic or decomposing). Whiskers To aid with navigation and sensation, cats have dozens of movable whiskers (vibrissae) over their body, especially their faces. These provide information on the width of gaps and on the location of objects in the dark, both by touching objects directly and by sensing air currents; they also trigger protective blink reflexes to protect the eyes from damage. Behavior Outdoor cats are active both day and night, although they tend to be slightly more active at night. 
Domestic cats spend the majority of their time in the vicinity of their homes, but they can range many hundreds of meters from this central point. They establish territories that vary considerably in size, in one study ranging . The timing of cats' activity is quite flexible and varied; but being low-light predators, they are generally crepuscular, which means they tend to be more active near dawn and dusk. However, house cats' behavior is also influenced by human activity, and they may adapt to their owners' sleeping patterns to some extent. Cats conserve energy by sleeping more than most animals, especially as they grow older. The daily duration of sleep varies, usually between 12 and 16 hours, with 13 to 14 being the average. Some cats can sleep as much as 20 hours. The term "cat nap" for a short rest refers to the cat's tendency to fall asleep (lightly) for a brief period. While asleep, cats experience short periods of rapid eye movement sleep often accompanied by muscle twitches, which suggests they are dreaming. A common misconception is that a cat's behavioral and personality traits correspond to its coat color. These traits instead depend on a complex interplay between genetic and environmental factors. Sociability The social behavior of the domestic cat ranges from widely dispersed individuals to feral cat colonies that gather around a food source, based on groups of co-operating females. Within such groups, one cat is usually dominant over the others. Each cat in a colony holds a distinct territory, with sexually active males having the largest territories, which are about 10 times larger than those of female cats and may overlap with several females' territories. These territories are marked by urine spraying, rubbing objects at head height with secretions from facial glands, and by defecation. Between these territories are neutral areas where cats watch and greet one another without territorial conflicts. Outside these neutral areas, territory holders usually chase away stranger cats, at first by staring, hissing, and growling, and, if that does not work, by short and violent, noisy attacks. Although cats do not have a social survival strategy or herd behavior, they always hunt alone. Life in proximity to humans and other domestic animals has led to a symbiotic social adaptation in cats, and cats may express great affection toward humans or other animals. Ethologically, a cat's human keeper functions as a mother surrogate. Adult cats live their lives in a type of extended kittenhood, a form of behavioral neoteny. Their high-pitched sounds may mimic the cries of a hungry human infant, making them particularly difficult for humans to ignore. Some pet cats are poorly socialized. In particular, older cats show aggressiveness toward newly arrived kittens, which includes biting and scratching; this type of behavior is known as feline asocial aggression. Redirected aggression is a common form of aggression which can occur in multiple cat households. In redirected aggression, there is usually something that agitates the cat: this could be a sight, sound, or another source of stimuli which causes a heightened level of anxiety or arousal. If the cat cannot attack the stimuli, it may direct anger elsewhere by attacking or directing aggression to the nearest cat, pet, human or other being. Domestic cats' scent rubbing behavior toward humans or other cats is thought to be a feline means of social bonding. 
Communication Domestic cats use many vocalizations for communication, including purring, trilling, hissing, growling/snarling, grunting, and several different forms of meowing. Their body language, including position of ears and tail, relaxation of the whole body, and kneading of the paws, are all indicators of mood. The tail and ears are particularly important social signal mechanisms; a raised tail indicates a friendly greeting, and flattened ears indicate hostility. Tail-raising also indicates the cat's position in the group's social hierarchy, with dominant individuals raising their tails less often than subordinate ones. Feral cats are generally silent. Nose-to-nose touching is also a common greeting and may be followed by social grooming, which is solicited by one of the cats raising and tilting its head. Purring may have developed as an evolutionary advantage as a signaling mechanism of reassurance between mother cats and nursing kittens, who are thought to use it as a care-soliciting signal. Post-nursing cats also often purr as a sign of contentment: when being petted, becoming relaxed, or eating. Although purring is popularly interpreted as indicative of pleasure, it has been recorded in a wide variety of circumstances, most of which involve physical contact between the cat and another, presumably trusted individual. Some cats have been observed to purr continuously when chronically ill or in apparent pain. The exact mechanism by which cats purr has long been elusive, but it has been proposed that purring is generated via a series of sudden build-ups and releases of pressure as the glottis is opened and closed, which causes the vocal folds to separate forcefully. The laryngeal muscles in control of the glottis are thought to be driven by a neural oscillator which generates a cycle of contraction and release every 30–40 milliseconds (giving a frequency of 33 to 25 Hz). Domestic cats observed in rescue facilities have 276 morphologically distinct facial expressions based on 26 facial movements; each facial expression corresponds to different social functions that are probably influenced by domestication. Facial expressions have helped researchers detect pain in cats. The feline grimace scale's five criteria—ear position, orbital tightening, muzzle tension, whisker change, and head position—indicated the presence of acute pain in cats. Grooming Cats are known for spending considerable amounts of time licking their coats to keep them clean. The cat's tongue has backward-facing spines about 0.5 millimeter long, called lingual papillae, which contain keratin making them rigid. The papillae act like a hairbrush, and some cats, particularly long-haired cats, occasionally regurgitate sausage-shaped long hairballs of fur that have collected in their stomachs from grooming. Hairballs can be prevented with remedies that ease elimination of the hair through the gut, as well as regular grooming of the coat with a comb or stiff brush. Fighting Among domestic cats, males are more likely to fight than females. Among feral cats, the most common reason for cat fighting is competition between two males to mate with a female. In such cases, most fights are won by the heavier male. Another common reason for fighting in domestic cats is the difficulty of establishing territories within a small home. Female cats also fight over territory or to defend their kittens. Neutering will decrease or eliminate this behavior in many cases, suggesting that the behavior is linked to sex hormones. 
When cats become aggressive, they try to make themselves appear larger and more threatening by raising their fur, arching their backs, turning sideways, and hissing or spitting. Often, the ears are pointed down and back to avoid damage to the inner ear and potentially listen for any changes behind them while focused forward. Cats may also vocalize loudly and bare their teeth in an effort to further intimidate their opponents. Fights usually consist of grappling and delivering slaps to the face and body with the forepaws, as well as bites. Cats also throw themselves to the ground in a defensive posture to rake their opponent's belly with their hind legs. Serious damage is rare, as the fights are usually short in duration, with the loser running away with little more than a few scratches to the face and ears. Fights for mating rights are typically more severe, and injuries may include deep puncture wounds and lacerations. Normally, serious injuries from fighting are limited to infections from scratches and bites, although these can occasionally kill cats if untreated. In addition, bites are probably the main route of transmission of the feline immunodeficiency virus. Sexually active males are usually involved in many fights during their lives and often have decidedly battered faces with obvious scars and cuts to their ears and nose. Cats are willing to threaten animals larger than them to defend their territory, such as dogs and foxes. Hunting and feeding The shape and structure of cats' cheeks is insufficient to allow them to take in liquids using suction. Therefore, when drinking, they lap with the tongue to draw liquid upward into their mouths. Lapping at a rate of four times a second, the cat touches the smooth tip of its tongue to the surface of the water, and quickly retracts it like a corkscrew, drawing water upward. Feral cats and free-fed house cats consume several small meals in a day. The frequency and size of meals varies between individuals. They select food based on its temperature, smell, and texture; they dislike chilled foods and respond most strongly to moist foods rich in amino acids, which are similar to meat. Cats reject novel flavors (a response termed neophobia) and learn quickly to avoid foods that have tasted unpleasant in the past. It is also a common misconception that cats like milk/cream, as they tend to avoid sweet food and milk. Most adult cats are lactose intolerant; the sugar in milk is not easily digested and may cause soft stools or diarrhea. Some also develop odd eating habits and like to eat or chew on things such as wool, plastic, cables, paper, string, aluminum foil, or even coal. This condition, pica, can threaten their health, depending on the amount and toxicity of the items eaten. Cats hunt small prey, primarily birds and rodents, and are often used as a form of pest control. Other common small creatures, such as lizards and snakes, may also become prey. Cats use two hunting strategies, either stalking prey actively, or waiting in ambush until an animal comes close enough to be captured. The strategy used depends on the prey species in the area, with cats waiting in ambush outside burrows, but tending to actively stalk birds. Domestic cats are a major predator of wildlife in the United States, killing an estimated 1.3 to 4.0 billion birds and 6.3 to 22.3 billion mammals annually. Certain species appear more susceptible than others; in one English village, for example, 30% of house sparrow mortality was linked to the domestic cat. 
In the recovery of ringed robins (Erithacus rubecula) and dunnocks (Prunella modularis) in Britain, 31% of deaths were a result of cat predation. In parts of North America, the presence of larger carnivores such as coyotes, which prey on cats and other small predators, reduces the effect of predation by cats and other small predators such as opossums and raccoons on bird numbers and variety. Another poorly understood element of cat hunting behavior is the presentation of prey to human guardians. One explanation is that cats adopt humans into their social group and share excess kill with others in the group according to the dominance hierarchy, in which humans are reacted to as if they are at or near the top. Another explanation is that they attempt to teach their guardians to hunt or to help their human as if feeding "an elderly cat, or an inept kitten". This hypothesis is inconsistent with the fact that male cats also bring home prey, despite males having negligible involvement in raising kittens. Play Domestic cats, especially young kittens, are known for their love of play. This behavior mimics hunting and is important in helping kittens learn to stalk, capture, and kill prey. Cats also engage in play fighting, both with each other and with humans. This behavior may be a way for cats to practice the skills needed for real combat, and it might also reduce the fear that they associate with launching attacks on other animals. Cats also tend to play with toys more when they are hungry. Owing to the close similarity between play and hunting, cats prefer to play with objects that resemble prey, such as small furry toys that move rapidly, but rapidly lose interest. They become habituated to a toy they have played with before. String is often used as a toy, but if it is eaten, it can become caught at the base of the cat's tongue and then move into the intestines, a medical emergency which can cause serious illness, even death. Reproduction The cat secretes and perceives pheromones. Female cats, called queens, are polyestrous with several estrus cycles during a year, lasting usually 21 days. They are usually ready to mate between early February and August in northern temperate zones and throughout the year in equatorial regions. Several males, called tomcats, are attracted to a female in heat. They fight over her, and the victor wins the right to mate. At first, the female rejects the male, but eventually, the female allows the male to mate. The female utters a loud yowl as the male pulls out of her because a male cat's penis has a band of about 120–150 backward-pointing penile spines, which are about long; upon withdrawal of the penis, the spines may provide the female with increased sexual stimulation, which acts to induce ovulation. After mating, the female cleans her vulva thoroughly. If a male attempts to mate with her at this point, the female attacks him. After about 20 to 30 minutes, once the female is finished grooming, the cycle will repeat. Because ovulation is not always triggered by a single mating, females may not be impregnated by the first male with which they mate. Furthermore, cats are superfecund; that is, a female may mate with more than one male when she is in heat, with the result that different kittens in a litter may have different fathers. The morula forms 124 hours after conception. At 148 hours, early blastocysts form. At 10–12 days, implantation occurs. The gestation of queens lasts between 64 and 67 days, with an average of 65 days. 
In a study of 2,300 free-ranging queens conducted between May 1998 and October 2000, queens had one to six kittens per litter, with an average of three kittens. They produced a mean of 1.4 litters per year, but a maximum of three litters in a year. Of 169 kittens, 127 died before they were six months old due to trauma, caused in most cases by dog attacks and road accidents. The first litter is usually smaller than subsequent litters. Kittens are weaned between six and seven weeks of age. Queens normally reach sexual maturity at 5–10 months, and males at 5–7 months. This varies depending on breed. Kittens reach puberty at the age of 9–10 months. Cats can go to new homes at about 12 weeks of age, when they are ready to leave their mother. They can be surgically sterilized (spayed or castrated) as early as seven weeks to limit unwanted reproduction. This surgery also prevents undesirable sex-related behavior, such as aggression, territory marking (spraying urine) in males, and yowling (calling) in females. Traditionally, this surgery was performed at around six to nine months of age, but it is increasingly being performed before puberty, at about three to six months. In the United States, about 80% of household cats are neutered. Lifespan and health The average lifespan of pet cats has risen in recent decades. In the early 1980s, it was about seven years, rising to 9.4 years in 1995 and an average of about 13 years as of 2014 and 2023. Neutering increases life expectancy; one study found castrated male cats live twice as long as intact males, while spayed female cats live 62% longer than intact females. Having a cat neutered confers some health benefits, such as a greater life expectancy and a decreased incidence of reproductive neoplasia. However, neutering decreases metabolism and increases food intake, both of which can cause obesity in neutered cats. Pre-pubertal neutering (neutering at 4 months or earlier) was recommended by only 28% of American veterinarians in one study. Concerns about early neutering include metabolic effects, delayed physeal closure, and urinary tract disease. Disease About 250 heritable genetic disorders have been identified in cats; many are similar to human inborn errors of metabolism. The high level of similarity among the metabolism of mammals allows many of these feline diseases to be diagnosed using genetic tests that were originally developed for use in humans, as well as the use of cats as animal models in the study of human diseases. Diseases affecting domestic cats include acute infections, parasitic infestations, injuries, and chronic diseases such as kidney disease, thyroid disease, and arthritis. Vaccinations are available for many infectious diseases, as are treatments to eliminate parasites such as worms, ticks, and fleas. Ecology Habitats The domestic cat is a cosmopolitan species and occurs across much of the world. It is adaptable and now present on all continents except Antarctica, and on 118 of the 131 main groups of islands, even on the remote Kerguelen Islands. Due to its ability to thrive in almost any terrestrial habitat, it is among the world's most invasive species. It lives on small islands with no human inhabitants. Feral cats can live in forests, grasslands, tundra, coastal areas, agricultural land, scrublands, urban areas, and wetlands. The domestic cat is regarded as an unwanted invasive species for two reasons. As it is little altered from the wildcat, it can readily interbreed with the wildcat. 
This hybridization poses a danger to the genetic distinctiveness of some wildcat populations, particularly in Scotland and Hungary, possibly also the Iberian Peninsula, and where protected natural areas are close to human-dominated landscapes, such as Kruger National Park in South Africa. However, its introduction to places where no native felines are present also contributes to the decline of native species. Ferality Feral cats are domestic cats that were born in or have reverted to a wild state. They are unfamiliar with and wary of humans and roam freely in urban and rural areas. The numbers of feral cats are not known, but estimates of the United States feral population range from 25 to 60 million. Feral cats may live alone, but most are found in large colonies, which occupy a specific territory and are usually associated with a source of food. Famous feral cat colonies are found in Rome around the Colosseum and Forum Romanum, with cats at some of these sites being fed and given medical attention by volunteers. Public attitudes toward feral cats vary widely, from seeing them as free-ranging pets to regarding them as vermin. Impact on wildlife On islands, birds can contribute as much as 60% of a cat's diet. In nearly all cases, the cat cannot be identified as the sole cause for reducing the numbers of island birds, and in some instances, eradication of cats has caused a "mesopredator release" effect; where the suppression of top carnivores creates an abundance of smaller predators that cause a severe decline in their shared prey. Domestic cats are a contributing factor to the decline of several species, a factor that has ultimately led, in some cases, to extinction. The South Island piopio, Chatham rail, and the New Zealand merganser are a few from a long list, with the most extreme case being the flightless Lyall's wren, which was driven to extinction only a few years after its discovery. One feral cat in New Zealand killed 102 New Zealand lesser short-tailed bats in seven days. In the United States, feral and free-ranging domestic cats kill an estimated 6.3–22.3 billion mammals annually. In Australia, one study found feral cats to kill 466 million reptiles per year. More than 258 reptile species were identified as being predated by cats. Cats have contributed to the extinction of the Navassa curly-tailed lizard and Chioninia coctei. Interaction with humans Cats are common pets throughout the world, and their worldwide population as of 2007 exceeded 500 million. the domestic cat was the second most popular pet in the United States, with 95.6 million cats owned and around 42 million households owning at least one cat. In the United Kingdom, 26% of adults have a cat, with an estimated population of 10.9 million pet cats there were an estimated 220 million owned and 480 million stray cats in the world. Cats have been used for millennia to control rodents, notably around grain stores and aboard ships, and both uses extend to the present day. Cats are also used in the international fur trade and leather industries for making coats, hats, blankets, stuffed toys, shoes, gloves, and musical instruments. About 24 cats are needed to make a cat-fur coat. This use has been outlawed in the United States since 2000 and in the European Union (as well as the United Kingdom) since 2007. Cat pelts have been used for superstitious purposes as part of the practice of witchcraft, and they are still made into blankets in Switzerland as traditional medicines thought to cure rheumatism. 
A few attempts to build a cat census have been made over the years, both through associations or national and international organizations (such as that of the Canadian Federation of Humane Societies) and over the Internet. General estimates for the global population of domestic cats range widely, from roughly 200 million to 600 million. Walter Chandoha made his career photographing cats after his 1949 images of Loco, a stray cat, were published. He is reported to have photographed 90,000 cats during his career and maintained an archive of 225,000 images that he drew from for publications during his lifetime. Pet humanization is a form of anthropomorphism in which cats are kept for companionship and treated more like human family members than traditional pets. This trend of pet culture involves providing cats with a higher level of care, attention and often even luxury, similar to the way humans are treated. Shows A cat show is a judged event in which the owners of cats compete to win titles in various cat-registering organizations by entering their cats to be judged against a breed standard. It is often required that a cat must be healthy and vaccinated to participate in a cat show. Both pedigreed and non-purebred companion ("moggy") cats are admissible, although the rules differ depending on the organization. Competing cats are compared to the applicable breed standard, and assessed for temperament. Infection Cats can be infected or infested with viruses, bacteria, fungi, protozoans, arthropods or worms that can transmit diseases to humans. In some cases, the cat exhibits no symptoms of the disease, yet the same disease can then become evident in a human. The likelihood that a person will become diseased depends on the age and immune status of the person. Humans who have cats living in their home or in close association are more likely to become infected. Others might also acquire infections from cat feces and parasites exiting the cat's body. Some of the infections of most concern include salmonella, cat-scratch disease, and toxoplasmosis. History and mythology In ancient Egypt, cats were revered, and the goddess Bastet was often depicted in cat form, sometimes taking on the war-like aspect of a lioness. The Greek historian Herodotus reported that killing a cat was forbidden, and when a household cat died, the entire family mourned and shaved their eyebrows. Families took their dead cats to the sacred city of Bubastis, where they were embalmed and buried in sacred repositories. Herodotus expressed astonishment at the domestic cats in Egypt, because he had only ever seen wildcats. Ancient Greeks and Romans kept weasels as pets, which were seen as the ideal rodent-killers. The earliest unmistakable evidence of the Greeks having domestic cats comes from two coins from Magna Graecia dating to the mid-fifth century BC showing Iokastos and Phalanthos, the legendary founders of Rhegion and Taras respectively, playing with their pet cats. The usual ancient Greek word for 'cat' was ailouros, meaning 'thing with the waving tail'. Cats are rarely mentioned in ancient Greek literature. Aristotle remarked in his History of Animals that "female cats are naturally lecherous". The Greeks later syncretized their own goddess Artemis with the Egyptian goddess Bastet, adopting Bastet's associations with cats and ascribing them to Artemis. In Ovid's Metamorphoses, when the deities flee to Egypt and take animal forms, the goddess Diana turns into a cat. 
Cats eventually displaced weasels as the pest control of choice because they were more pleasant to have around the house and were more enthusiastic hunters of mice. During the Middle Ages, many of Artemis's associations with cats were grafted onto the Virgin Mary. Cats are often shown in icons of the Annunciation and of the Holy Family and, according to Italian folklore, on the same night that Mary gave birth to Jesus, a cat in Bethlehem gave birth to a kitten. Domestic cats were spread throughout much of the rest of the world during the Age of Discovery, as ships' cats were carried on sailing ships to control shipboard rodents and as good-luck charms. Several ancient religions held that cats are exalted souls, companions or guides for humans, all-knowing but mute so they cannot influence decisions made by humans. In Japan, the cat is a symbol of good fortune. In Norse mythology, Freyja, the goddess of love, beauty, and fertility, is depicted as riding a chariot drawn by cats. In Jewish legend, the first cat was living in the house of the first man Adam as a pet that got rid of mice. The cat was once a partner of the first dog, before the latter broke an oath they had made, which resulted in enmity between the descendants of these two animals. It is also written that neither cats nor foxes are represented in the water, while every other animal has an incarnation species in the water. Although no species are sacred in Islam, cats are revered by Muslims. Some Western writers have stated Muhammad had a favorite cat, Muezza. He is reported to have loved cats so much, "he would do without his cloak rather than disturb one that was sleeping on it". The story has no origin in early Muslim writers, and seems to confuse a story of a later Sufi saint, Ahmed ar-Rifa'i, centuries after Muhammad. One of the companions of Muhammad was known as Abu Hurayrah ("father of the kitten"), in reference to his documented affection for cats. Superstitions and rituals Many cultures have negative superstitions about cats. An example would be the belief that encountering a black cat ("crossing one's path") leads to bad luck, or that cats are witches' familiar spirits used to augment a witch's powers and skills. The killing of cats in medieval Ypres, Belgium, is commemorated in the innocuous present-day Kattenstoet (cat parade). In mid-16th-century France, cats would allegedly be burnt alive as a form of entertainment, particularly during midsummer festivals. According to Norman Davies, the assembled people "shrieked with laughter as the animals, howling with pain, were singed, roasted, and finally carbonized". The remaining ashes were sometimes taken back home by the people for good luck. According to a myth in many cultures, cats have multiple lives. In many countries, they are believed to have nine lives, but in Italy, Germany, Greece, Brazil and some Spanish-speaking regions, they are said to have seven lives, while in Arabic traditions, the number of lives is six. An early mention of the myth can be found in John Heywood's The Proverbs of John Heywood (1546). The myth is attributed to the natural suppleness and swiftness cats exhibit to escape life-threatening situations. Also lending credence to this myth is the fact that falling cats often land on their feet, using an instinctive righting reflex to twist their bodies around. Nonetheless, cats can still be injured or killed by a high fall.
Biology and health sciences
Biology
null
6682
https://en.wikipedia.org/wiki/Clade
Clade
In biological phylogenetics, a clade (), also known as a monophyletic group or natural group, is a grouping of organisms that are monophyletic – that is, composed of a common ancestor and all its lineal descendants – on a phylogenetic tree. In the taxonomical literature, sometimes the Latin form cladus (plural cladi) is used rather than the English form. Clades are the fundamental unit of cladistics, a modern approach to taxonomy adopted by most biological fields. The common ancestor may be an individual, a population, or a species (extinct or extant). Clades are nested, one in another, as each branch in turn splits into smaller branches. These splits reflect evolutionary history as populations diverged and evolved independently. Clades are termed monophyletic (Greek: "one clan") groups. Over the last few decades, the cladistic approach has revolutionized biological classification and revealed surprising evolutionary relationships among organisms. Increasingly, taxonomists try to avoid naming taxa that are not clades; that is, taxa that are not monophyletic. Some of the relationships between organisms that the molecular biology arm of cladistics has revealed include that fungi are closer relatives to animals than they are to plants, archaea are now considered different from bacteria, and multicellular organisms may have evolved from archaea. The term "clade" is also used with a similar meaning in other fields besides biology, such as historical linguistics; see Cladistics § In disciplines other than biology. Naming and etymology The term "clade" was coined in 1957 by the biologist Julian Huxley to refer to the result of cladogenesis, the evolutionary splitting of a parent species into two distinct species, a concept Huxley borrowed from Bernhard Rensch. Many commonly named groups – rodents and insects, for example – are clades because, in each case, the group consists of a common ancestor with all its descendant branches. Rodents, for example, are a branch of mammals that split off after the end of the period when the clade Dinosauria stopped being the dominant terrestrial vertebrates 66 million years ago. The original population and all its descendants are a clade. The rodent clade corresponds to the order Rodentia, and insects to the class Insecta. These clades include smaller clades, such as chipmunk or ant, each of which consists of even smaller clades. The clade "rodent" is in turn included in the mammal, vertebrate and animal clades. History of nomenclature and taxonomy The idea of a clade did not exist in pre-Darwinian Linnaean taxonomy, which was based by necessity only on internal or external morphological similarities between organisms. Many of the better known animal groups in Linnaeus's original Systema Naturae (mostly vertebrate groups) do represent clades. The phenomenon of convergent evolution is responsible for many cases of misleading similarities in the morphology of groups that evolved from different lineages. With the increasing realization in the first half of the 19th century that species had changed and split through the ages, classification increasingly came to be seen as branches on the evolutionary tree of life. The publication of Darwin's theory of evolution in 1859 gave this view increasing weight. In 1876 Thomas Henry Huxley, an early advocate of evolutionary theory, proposed a revised taxonomy based on a concept strongly resembling clades, although the term clade itself would not be coined until 1957 by his grandson, Julian Huxley. 
German biologist Emil Hans Willi Hennig (1913–1976) is considered to be the founder of cladistics. He proposed a classification system that represented repeated branchings of the family tree, as opposed to the previous systems, which put organisms on a "ladder", with supposedly more "advanced" organisms at the top. Taxonomists have increasingly worked to make the taxonomic system reflect evolution. When it comes to naming, this principle is not always compatible with the traditional rank-based nomenclature (in which only taxa associated with a rank can be named) because not enough ranks exist to name a long series of nested clades. For these and other reasons, phylogenetic nomenclature has been developed; it is still controversial. As an example, see the full current classification of Anas platyrhynchos (the mallard duck) with 40 clades from Eukaryota down by following this Wikispecies link and clicking on "Expand". The name of a clade is conventionally a plural, where the singular refers to each member individually. A unique exception is the reptile clade Dracohors, which was made by haplology from Latin "draco" and "cohors", i.e. "the dragon cohort"; its form with a suffix added should be e.g. "dracohortian". Definition A clade is by definition monophyletic, meaning that it contains one ancestor (which can be an organism, a population, or a species) and all its descendants. The ancestor can be known or unknown; any and all members of a clade can be extant or extinct. Clades and phylogenetic trees The science that tries to reconstruct phylogenetic trees and thus discover clades is called phylogenetics or cladistics, the latter term coined by Ernst Mayr (1965), derived from "clade". The results of phylogenetic/cladistic analyses are tree-shaped diagrams called cladograms; they, and all their branches, are phylogenetic hypotheses. Three methods of defining clades are featured in phylogenetic nomenclature: node-, stem-, and apomorphy-based (see Phylogenetic nomenclature § Phylogenetic definitions of clade names for detailed definitions). Terminology The relationship between clades can be described in several ways: A clade located within a clade is said to be nested within that clade. In the diagram, the hominoid clade, i.e. the apes and humans, is nested within the primate clade. Two clades are sisters if they have an immediate common ancestor. In the diagram, lemurs and lorises are sister clades, while humans and tarsiers are not. A clade A is basal to a clade B if A branches off the lineage leading to B before the first branch leading only to members of B. In the adjacent diagram, the strepsirrhine/prosimian clade is basal to the hominoid/ape clade. In this example, however, both the haplorrhines and the prosimians could equally be considered the most basal groupings. It is better to say that the prosimians are the sister group to the rest of the primates. This way one also avoids unintended and misconceived connotations about evolutionary advancement, complexity, diversity and ancestor status, e.g. due to the impact of sampling diversity and extinction. Basal clades should not be confused with stem groupings, as the latter are associated with paraphyletic or unresolved groupings. Age The age of a clade can be described based on two different reference points, crown age and stem age. The crown age of a clade refers to the age of the most recent common ancestor of all of the species in the clade. The stem age of a clade refers to the time that the ancestral lineage of the clade diverged from its sister clade. 
A clade's stem age is either the same as or older than its crown age. Ages of clades cannot be directly observed. They are inferred, either from the stratigraphy of fossils or from molecular clock estimates. Viruses Viruses, and particularly RNA viruses, form clades. These are useful in tracking the spread of viral infections. HIV, for example, has clades called subtypes, which vary in geographical prevalence. HIV subtype (clade) B, for example, is predominant in Europe, the Americas and Japan, whereas subtype A is more common in East Africa.
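The notion that a clade is simply an ancestor together with all of its lineal descendants maps naturally onto a rooted-tree data structure. The following minimal Python sketch is not part of the article; the toy topology and names are illustrative assumptions loosely echoing the primate examples above. It collects the clade rooted at a chosen ancestor and tests whether a set of tips is monophyletic:

# Toy rooted tree: each internal node maps to its child branches.
# Topology and names are illustrative only.
TREE = {
    "primates": ["strepsirrhines", "haplorrhines"],
    "strepsirrhines": ["lemurs", "lorises"],
    "haplorrhines": ["tarsiers", "simians"],
    "simians": ["hominoids", "monkeys"],
}
ALL_NODES = set(TREE) | {child for children in TREE.values() for child in children}

def clade(ancestor):
    """A clade is the ancestor plus all of its lineal descendants."""
    members = {ancestor}
    for child in TREE.get(ancestor, []):
        members |= clade(child)
    return members

def tips(node):
    """Leaf taxa (tree tips) contained in the clade rooted at a node."""
    return {member for member in clade(node) if member not in TREE}

def is_monophyletic(taxa):
    """True if the taxa are exactly the tips of some clade on the tree."""
    return any(tips(node) == set(taxa) for node in ALL_NODES)

print(sorted(clade("haplorrhines")))                          # a clade nested within primates
print(is_monophyletic({"tarsiers", "hominoids", "monkeys"}))  # True: the haplorrhine clade
print(is_monophyletic({"lemurs", "tarsiers"}))                # False: not a clade

Under this representation, sister clades are simply the children of the same node, and if the tree were time-calibrated, the crown age of a clade would be the date attached to its root node.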
Biology and health sciences
Phylogenetics and taxonomy
Biology
6696
https://en.wikipedia.org/wiki/Chain%20mail
Chain mail
Chain mail (also known as chain-mail, mail or maille) is a type of armour consisting of small metal rings linked together in a pattern to form a mesh. It was in common military use between the 3rd century BC and the 16th century AD in Europe, while it continued to be used in Asia, Africa, and the Middle East as late as the 17th century. A coat of this armour is often called a hauberk or sometimes a byrnie. History The earliest examples of surviving mail were found in the Carpathian Basin at a burial in Horný Jatov, Slovakia, dated to the 3rd century BC, and in a chieftain's burial located in Ciumești, Romania. Its invention is commonly credited to the Celts, but there are examples of Etruscan pattern mail dating from at least the 4th century BC. Mail may have been inspired by the much earlier scale armour. Mail spread to North Africa, West Africa, the Middle East, Central Asia, India, Tibet, South East Asia, and Japan. Herodotus wrote that the ancient Persians wore scale armour, but mail is also distinctly mentioned in the Avesta, the holy scripture of the Zoroastrian religion, which was written down in the 6th century AD. Mail continues to be used in the 21st century as a component of stab-resistant body armour, cut-resistant gloves for butchers and woodworkers, shark-resistant wet-suits for defense against shark bites, and a number of other applications. Etymology The origins of the word mail are not fully known. One theory is that it originally derives from the Latin word macula, meaning 'spot' or 'opacity' (as in the macula of the retina). Another theory relates the word to an Old French verb meaning 'to hammer' (related to the modern English word malleable). In modern French, maille refers to a loop or stitch. The Arabic words burnus ('burnoose, a hooded cloak', also a chasuble worn by Coptic priests) and barnaza ('to bronze') suggest an Arabic influence for the Carolingian armour known as byrnie (see below). The first attestations of the word mail are in Old French and Anglo-Norman: maille, maile, or male or other variants, which became mailye, maille, maile, male, or meile in Middle English. Civilizations that used mail invented specific terms for each garment made from it. The standard terms for European mail armour derive from French: leggings are called chausses, a hood is a mail coif, and mittens, mitons. A mail collar hanging from a helmet is a camail or aventail. A shirt made from mail is a hauberk if knee-length and a haubergeon if mid-thigh length. A layer (or multiple layers) of mail sandwiched between layers of fabric is called a jazerant. A waist-length coat in medieval Europe was called a byrnie, although the exact construction of a byrnie is unclear, including whether it was constructed of mail or other armour types. Noting that the byrnie was the "most highly valued piece of armour" to the Carolingian soldier, Bennet, Bradbury, DeVries, Dickie, and Jestice indicate that: There is some dispute among historians as to what exactly constituted the Carolingian byrnie. Relying... only on artistic and some literary sources because of the lack of archaeological examples, some believe that it was a heavy leather jacket with metal scales sewn onto it with strong thread. It was also quite long, reaching below the hips and covering most of the arms. Other historians claim instead that the Carolingian byrnie was nothing more than a coat of mail, but longer and perhaps heavier than traditional early medieval mail. Without more certain evidence, this dispute will continue. 
In Europe The use of mail as battlefield armour was common during the Iron Age and the Middle Ages, becoming less common over the course of the 16th and 17th centuries when plate armour and more advanced firearms were developed. It is believed that the Roman Republic first came into contact with mail fighting the Gauls in Cisalpine Gaul, now Northern Italy. The Roman army adopted the technology for their troops in the form of the lorica hamata, which was used as a primary form of armour through the Imperial period. After the fall of the Western Empire, much of the infrastructure needed to create plate armour diminished. Eventually the word "mail" came to be synonymous with armour. It was typically an extremely prized commodity, as it was expensive and time-consuming to produce and could mean the difference between life and death in a battle. Historically, mail makers were often men, but women also undertook the work: Alice la Haubergere was an armourer who worked in Cheapside in the early 1300s, and in York in 1446 Agnes Hecche inherited her father's mail-making tools to continue the work after his death. Mail from dead combatants was frequently looted and was used by the new owner or sold for a lucrative price. As time went on and infrastructure improved, it came to be used by more soldiers. The oldest intact mail hauberk still in existence is thought to have been worn by Leopold III, Duke of Austria, who died in 1386 during the Battle of Sempach. By the 14th century, articulated plate armour was commonly used to supplement mail. Eventually mail was supplanted by plate for the most part, as plate provided greater protection against windlass crossbows, bludgeoning weapons, and lance charges while maintaining most of the mobility of mail. However, mail was still widely used by many soldiers, along with brigandines and padded jacks. These three types of armour made up the bulk of the equipment used by soldiers, with mail being the most expensive. It was sometimes more expensive than plate armour. Mail typically persisted longer in less technologically advanced areas such as Eastern Europe but was in use throughout Europe into the 16th century. During the late 19th and early 20th centuries, mail was used as a material for bulletproof vests, most notably by the Wilkinson Sword Company. Results were unsatisfactory; Wilkinson mail worn by the Khedive of Egypt's regiment of "Iron Men" was manufactured from split rings, which proved to be too brittle, and the rings would fragment when struck by bullets and aggravate the injury. The riveted mail armour worn by the opposing Sudanese Mahdists did not have the same problem but also proved to be relatively useless against the firearms of British forces at the Battle of Omdurman. During World War I, Wilkinson Sword transitioned from mail to a lamellar design which was the precursor to the flak jacket. Mail was also used for face protection in World War I. Oculist Captain Cruise of the British Infantry designed a mail fringe to be attached to helmets to protect the upper face. This proved unpopular with soldiers, in spite of being proven to defend against a shrapnel round fired at a distance of . Another invention, a "splatter mask" or "splinter mask", consisted of rigid upper face protection and a mail veil to protect the lower face, and was used by early tank crews as a measure against flying steel fragments (spalling) inside the vehicle. 
In Asia Mail armour was introduced to the Middle East and Asia through the Romans and was adopted by the Sassanid Persians starting in the 3rd century AD, where it was supplemental to the scale and lamellar armour already used. Mail was commonly also used as horse armour for cataphracts and heavy cavalry as well as armour for the soldiers themselves. Asian mail could be just as heavy as the European variety and sometimes had prayer symbols stamped on the rings as a sign of their craftsmanship as well as for divine protection. Mail armour is mentioned in the Quran as being a gift revealed by Allah to David: 21:80 It was We Who taught him the making of coats of mail for your benefit, to guard you from each other's violence: will ye then be grateful? (Yusuf Ali's translation) From the Abbasid Caliphate, mail was quickly adopted in Central Asia by Timur (Tamerlane) and the Sogdians and by India's Delhi Sultanate. Mail armour was introduced by the Turks in late 12th century and commonly used by Turk and the Mughal and Suri armies where it eventually became the armour of choice in India. Indian mail was constructed with alternating rows of solid links and round riveted links and it was often integrated with plate protection (mail and plate armour). China Mail was introduced to China when its allies in Central Asia paid tribute to the Tang Emperor in 718 by giving him a coat of "link armour" assumed to be mail. Earliest assumed reference to mail can be found in early 3rd century record by Cao Zhi, being called "chained ring armor". China first encountered the armour in 384 when its allies in the nation of Kuchi arrived wearing "armour similar to chains". Once in China, mail was imported but was not produced widely. Due to its flexibility, comfort, and rarity, it was typically the armour of high-ranking guards and those who could afford the exotic import (to show off their social status) rather than the armour of the rank and file, who used more common brigandine, scale, and lamellar types. However, it was one of the few military products that China imported from foreigners. Mail spread to Korea slightly later where it was imported as the armour of imperial guards and generals. Japan In Japan, mail is called kusari which means chain. When the word kusari is used in conjunction with an armoured item it usually means that mail makes up the majority of the armour composition. An example of this would be kusari gusoku which means chain armour. Kusari jackets, hoods, gloves, vests, shin guards, shoulder guards, thigh guards, and other armoured clothing were produced, even kusari tabi socks. Kusari was used in samurai armour at least from the time of the Mongol invasion (1270s) but particularly from the Nambokucho Period (1336–1392). The Japanese used many different weave methods including a square 4-in-1 pattern (so gusari), a hexagonal 6-in-1 pattern (hana gusari) and a European 4-in-1 (nanban gusari). The rings of Japanese mail were much smaller than their European counterparts; they would be used in patches to link together plates and to drape over vulnerable areas such as the armpits. Riveted kusari was known and used in Japan. On page 58 of the book Japanese Arms & Armor: Introduction by H. Russell Robinson, there is a picture of Japanese riveted kusari, and this quote from the translated reference of Sakakibara Kozan's 1800 book, The Manufacture of Armour and Helmets in Sixteenth-Century Japan, shows that the Japanese not only knew of and used riveted kusari but that they manufactured it as well. 
... karakuri-namban (riveted namban), with stout links each closed by a rivet. Its invention is credited to Fukushima Dembei Kunitaka, pupil of Hojo Awa no Kami Ujifusa, but it is also said to be derived directly from foreign models. It is heavy because the links are tinned (biakuro-nagashi) and these are also sharp-edged because they are punched out of iron plate. Butted or split (twisted) links made up the majority of kusari links used by the Japanese. Links were either butted together, meaning that the ends touched each other and were not riveted, or the kusari was constructed with links where the wire was turned or twisted two or more times; these split links are similar to the modern split ring commonly used on keychains. The rings were lacquered black to prevent rusting, and were always stitched onto a backing of cloth or leather. The kusari was sometimes concealed entirely between layers of cloth. Kusari gusoku or chain armour was commonly used during the Edo period (1603 to 1868) as a stand-alone defense. According to George Cameron Stone, "Entire suits of mail kusari gusoku were worn on occasions, sometimes under the ordinary clothing." In his book Arms and Armor of the Samurai: The History of Weaponry in Ancient Japan, Ian Bottomley shows a picture of a kusari armour and mentions kusari katabira (chain jackets) with detachable arms being worn by samurai police officials during the Edo period. The end of the samurai era in the 1860s, along with the 1876 ban on wearing swords in public, marked the end of any practical use for mail and other armour in Japan. Japan turned to a conscription army and uniforms replaced armour. Effectiveness Mail's resistance to weapons is determined by four factors: linkage type (riveted, butted, or welded), material used (iron versus bronze or steel), weave density (a tighter weave needs a thinner weapon to surpass), and ring thickness (generally ranging from 1.0 to 1.6 mm diameter, or 18 to 14 gauge, wire in most examples). Mail, if a warrior could afford it, provided a significant advantage when combined with competent fighting techniques. When the mail was not riveted, a thrust from most sharp weapons could penetrate it. However, when mail was riveted, only a strong, well-placed thrust from certain spears, or thin or dedicated mail-piercing swords like the estoc, could penetrate, and a pollaxe or halberd blow could break through the armour. Strong projectile weapons such as stronger self bows, recurve bows, and crossbows could also penetrate riveted mail. Some evidence indicates that during armoured combat, the intention was to actually get around the armour rather than through it; according to a study of skeletons found at the Battle of Visby, Gotland, a majority of the skeletons showed wounds on the less well protected legs. Although mail was a formidable protection, due to technological advances as time progressed, mail worn under plate armour (and stand-alone mail as well) could be penetrated by the conventional weaponry of another knight. The flexibility of mail meant that a blow would often injure the wearer, potentially causing serious bruising or fractures, and it was a poor defence against head trauma. Mail-clad warriors typically wore separate rigid helms over their mail coifs for head protection. Likewise, blunt weapons such as maces and warhammers could harm the wearer by their impact without penetrating the armour; usually a soft armour, such as a gambeson, was worn under the hauberk. 
Medieval surgeons were very well capable of setting and caring for bone fractures resulting from blunt weapons. With the poor understanding of hygiene, however, cuts that could get infected were much more of a problem. Thus mail armour proved to be sufficient protection in most situations. Manufacture Several patterns of linking the rings together have been known since ancient times, with the most common being the 4-to-1 pattern (where each ring is linked with four others). In Europe, the 4-to-1 pattern was completely dominant. Mail was also common in East Asia, primarily Japan, with several more patterns being utilised and an entire nomenclature developing around them. Historically, in Europe, from the pre-Roman period on, the rings composing a piece of mail would be riveted closed to reduce the chance of the rings splitting open when subjected to a thrusting attack or a hit by an arrow. Up until the 14th century European mail was made of alternating rows of round riveted rings and solid rings. Sometime during the 14th century European mail makers started to transition from round rivets to wedge-shaped rivets, but continued using alternating rows of solid rings. Eventually European mail makers stopped using solid rings and almost all European mail was made from wedge riveted rings only with no solid rings. Both were commonly made of wrought iron, but some later pieces were made of heat-treated steel. Wire for the riveted rings was formed by either of two methods. One was to hammer out wrought iron into plates and cut or slit the plates. These thin pieces were then pulled through a draw plate repeatedly until the desired diameter was achieved. Waterwheel-powered drawing mills are pictured in several period manuscripts. Another method was to simply forge down an iron billet into a rod and then proceed to draw it out into wire. The solid links would have been made by punching from a sheet. Guild marks were often stamped on the rings to show their origin and craftsmanship. Forge welding was also used to create solid links, but there are few possible examples known; the only well-documented example from Europe is that of the camail (mail neck-defence) of the 7th-century Coppergate Helmet found in York. Outside of Europe this practice was more common such as "theta" links from India. Very few examples of historic butted mail have been found, and it is generally accepted that butted mail was never in wide use historically except in Japan, where mail (kusari) was commonly made from butted links. Butted link mail was also used by the Moros of the Philippines in their mail and plate armours. Modern uses Practical uses Mail is used as protective clothing for butchers against meat-packing equipment. Workers may wear up to of mail under their white coats. Butchers also commonly wear a single mail glove to protect themselves from self-inflicted injury while cutting meat, as do many oyster shuckers. Scuba divers sometimes use mail to protect them from sharkbite, as do animal control officers for protection against the animals they handle. In 1980, marine biologist Jeremiah Sullivan patented his design for Neptunic full coverage chain mail shark resistant suits which he had developed for close encounters with sharks. Shark expert and underwater filmmaker Valerie Taylor was among the first to develop and test shark suits in 1979 while diving with sharks. Mail is widely used in industrial settings as shrapnel guards and splash guards in metal working operations. 
Electrical applications for mail include RF leakage testing and being worn as a Faraday cage suit by tesla coil enthusiasts and high voltage electrical workers. Stab-proof vests Conventional textile-based ballistic vests are designed to stop soft-nosed bullets but offer little defense from knife attacks. Knife-resistant armour is designed to defend against knife attacks; some of these use layers of metal plates, mail and metallic wires. Historical re-enactment Many historical reenactment groups, especially those whose focus is Antiquity or the Middle Ages, commonly use mail both as practical armour and for costuming. Mail is especially popular amongst those groups which use steel weapons. A modern hauberk made from 1.5 mm diameter wire with 10 mm inner diameter rings weighs roughly and contains 15,000–45,000 rings. One of the drawbacks of mail is the uneven weight distribution; the stress falls mainly on shoulders. Weight can be better distributed by wearing a belt over the mail, which provides another point of support. Mail worn today for re-enactment and recreational use can be made in a variety of styles and materials. Most recreational mail today is made of butted links which are galvanised or stainless steel. This is historically inaccurate but is much less expensive to procure and especially to maintain than historically accurate reproductions. Mail can also be made of titanium, aluminium, bronze, or copper. Riveted mail offers significantly better protection ability as well as historical accuracy than mail constructed with butted links. Japanese mail (kusari) is one of the few historically correct examples of mail being constructed with such butted links. Decorative uses Mail remained in use as a decorative and possibly high-status symbol with military overtones long after its practical usefulness had passed. It was frequently used for the epaulettes of military uniforms. It is still used in this form by some regiments of the British Army. Mail has applications in sculpture and jewellery, especially when made out of precious metals or colourful anodized metals. Mail artwork includes headdresses, decorative wall hangings, ornaments, chess sets, macramé, and jewelry. For these non-traditional applications, hundreds of patterns (commonly referred to as "weaves") have been invented. Large-linked mail is occasionally used as BDSM clothing material, with the large links intended for fetishistic purposes. In popular culture Video games Chainmail armor can be found in multiple games, such as Elden Ring and Minecraft. It is typically depicted as less expensive than plate mail, with the tradeoff being an inferior defense. Chainmail may also be purely cosmetic and hold no gameplay advantage. Film In some films, knitted string spray-painted with a metallic paint is used instead of actual mail in order to cut down on cost (an example being Monty Python and the Holy Grail, which was filmed on a very small budget). Films more dedicated to costume accuracy often use ABS plastic rings, for the lower cost and weight. Such ABS mail coats were made for The Lord of the Rings film trilogy, in addition to many metal coats. The metal coats are used rarely because of their weight, except in close-up filming where the appearance of ABS rings is distinguishable. A large scale example of the ABS mail used in the Lord of the Rings can be seen in the entrance to the Royal Armouries museum in Leeds in the form of a large curtain bearing the logo of the museum. 
It was acquired from the makers of the film's armour, Weta Workshop, when the museum hosted an exhibition of Weta armour from their films. For the film Mad Max Beyond Thunderdome, Tina Turner is said to have worn actual mail, and she complained about how heavy it was. Game of Thrones makes use of mail, notably during the "Red Wedding" scene. Gallery
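The ring dimensions quoted above for a modern re-enactment hauberk (1.5 mm wire, 10 mm inner ring diameter, 15,000–45,000 rings) lend themselves to a rough back-of-the-envelope weight estimate. The short Python sketch below is not from the article; it simply treats each ring as a circular loop of round steel wire, with the steel density (about 7.85 g/cm³) as an assumed value, to show how ring geometry and ring count translate into garment weight:

import math

WIRE_D_MM = 1.5               # wire diameter quoted in the re-enactment paragraph
INNER_D_MM = 10.0             # ring inner diameter quoted there
DENSITY_G_PER_MM3 = 7.85e-3   # assumed density of mild steel

def ring_mass_g(wire_d=WIRE_D_MM, inner_d=INNER_D_MM, density=DENSITY_G_PER_MM3):
    """Approximate one ring as a torus: wire length times cross-section times density."""
    mean_d = inner_d + wire_d                 # centreline diameter of the ring
    wire_length = math.pi * mean_d            # circumference of the centreline
    cross_section = math.pi * (wire_d / 2) ** 2
    return wire_length * cross_section * density

for ring_count in (15_000, 30_000, 45_000):
    print(f"{ring_count:>6} rings -> approx. {ring_count * ring_mass_g() / 1000:.1f} kg")

With these assumptions a single ring comes out at roughly half a gram, so the quoted ring counts correspond to garments from the high single digits to the low tens of kilograms, which is in the range usually cited for reproduction hauberks.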
Technology
Armour
null
6710
https://en.wikipedia.org/wiki/Coyote
Coyote
The coyote (Canis latrans), also known as the American jackal, prairie wolf, or brush wolf, is a species of canine native to North America. It is smaller than its close relative, the gray wolf, and slightly smaller than the closely related eastern wolf and red wolf. It fills much of the same ecological niche as the golden jackal does in Eurasia; however, the coyote is generally larger. The coyote is listed as least concern by the International Union for Conservation of Nature, due to its wide distribution and abundance throughout North America. The species is versatile, able to adapt to and expand into environments modified by humans; urban coyotes are common in many cities. The coyote was sighted in eastern Panama (across the Panama Canal from their home range) for the first time in 2013. The coyote has 19 recognized subspecies. The average male weighs and the average female . Their fur color is predominantly light gray and red or fulvous interspersed with black and white, though it varies somewhat with geography. It is highly flexible in social organization, living either in a family unit or in loosely knit packs of unrelated individuals. Primarily carnivorous, its diet consists mainly of deer, rabbits, hares, rodents, birds, reptiles, amphibians, fish, and invertebrates, though it may also eat fruits and vegetables on occasion. Its characteristic vocalization is a howl made by solitary individuals. Humans are the coyote's greatest threat, followed by cougars and gray wolves. Despite predation by gray wolves, coyotes sometimes mate with them, and with eastern, or red wolves, producing "coywolf" hybrids. In the northeastern regions of North America, the eastern coyote (a larger subspecies, though still smaller than wolves) is the result of various historical and recent matings with various types of wolves. Genetic studies show that most North American wolves contain some level of coyote DNA. The coyote is a prominent character in Native American folklore, mainly in Aridoamerica, usually depicted as a trickster that alternately assumes the form of an actual coyote or a man. As with other trickster figures, the coyote uses deception and humor to rebel against social conventions. The animal was especially respected in Mesoamerican cosmology as a symbol of military might. After the European colonization of the Americas, it was seen in Anglo-American culture as a cowardly and untrustworthy animal. Unlike wolves, which have seen their public image improve, attitudes towards the coyote remain largely negative. Description Coyote males average in weight, while females average , though size varies geographically. Northern subspecies, which average , tend to grow larger than the southern subspecies of Mexico, which average . Total length ranges on average from ; comprising a tail length of , with females being shorter in both body length and height. The largest coyote on record was a male killed near Afton, Wyoming, on November19, 1937, which measured from nose to tail, and weighed . Scent glands are located at the upper side of the base of the tail and are a bluish-black color. The color and texture of the coyote's fur vary somewhat geographically. The hair's predominant color is light gray and red or fulvous, interspersed around the body with black and white. Coyotes living at high elevations tend to have more black and gray shades than their desert-dwelling counterparts, which are more fulvous or whitish-gray. The coyote's fur consists of short, soft underfur and long, coarse guard hairs. 
The fur of northern subspecies is longer and denser than in southern forms, with the fur of some Mexican and Central American forms being almost hispid (bristly). Generally, adult coyotes (including coywolf hybrids) have a sable coat color, dark neonatal coat color, bushy tail with an active supracaudal gland, and a white facial mask. Albinism is extremely rare in coyotes. Out of a total of 750,000 coyotes killed by federal and cooperative hunters between March 1938 and June 1945, only two were albinos. The coyote is typically smaller than the gray wolf, but has longer ears and a relatively larger braincase, as well as a thinner frame, face, and muzzle. The scent glands are smaller than the gray wolf's, but are the same color. Its fur color variation is much less varied than that of a wolf. The coyote also carries its tail downwards when running or walking, rather than horizontally as the wolf does. Coyote tracks can be distinguished from those of dogs by their more elongated, less rounded shape. Unlike dogs, the upper canines of coyotes extend past the mental foramina. Taxonomy and evolution History At the time of the European colonization of the Americas, coyotes were largely confined to open plains and arid regions of the western half of the continent. In early post-Columbian historical records, determining whether the writer is describing coyotes or wolves is often difficult. One record from 1750 in Kaskaskia, Illinois, written by a local priest, noted that the "wolves" encountered there were smaller and less daring than European wolves. Another account from the early 1800s in Edwards County mentioned wolves howling at night, though these were likely coyotes. This species was encountered several times during the Lewis and Clark Expedition (1804–1806), though it was already well known to European traders on the upper Missouri. Meriwether Lewis, writing on 5 May 1805, in northeastern Montana, described the coyote in these terms: The coyote was first scientifically described by naturalist Thomas Say in September 1819, on the site of Lewis and Clark's Council Bluffs, up the Missouri River from the mouth of the Platte during a government-sponsored expedition with Major Stephen Long. He had the first edition of the Lewis and Clark journals in hand, which contained Biddle's edited version of Lewis's observations dated 5 May 1805. His account was published in 1823. Say was the first person to document the difference between a "prairie wolf" (coyote) and on the next page of his journal a wolf which he named Canis nubilus (Great Plains wolf). Say described the coyote as: Naming and etymology The first published usage of the word "coyote" (which is a Spanish borrowing of its Nahuatl name coyōtl ) comes from the historian Francisco Javier Clavijero's Historia de México in 1780. The first time it was used in English occurred in William Bullock's Six months' residence and travels in Mexico (1824), where it is variously transcribed as cayjotte and cocyotie. The word's spelling was standardized as "coyote" by the 1880s. The English pronunciation is heard both as a two-syllable word (with the final "e" silent) and as three-syllables (with the final "e" pronounced), with a tendency for the three-syllable pronunciation in eastern states and near the Mexican border, and outside the United States, with two syllables in western and central states. Alternative English names for the coyote include "prairie wolf", "brush wolf", "cased wolf", "little wolf" and "American jackal". 
Its binomial name Canis latrans translates to "barking dog", a reference to the many vocalizations it produces. Evolution Fossil record Xiaoming Wang and Richard H. Tedford, two of the foremost authorities on carnivore evolution, proposed that the genus Canis was the descendant of the coyote-like Eucyon davisi and that its remains first appeared in the Miocene 6 million years ago (Mya) in the southwestern US and Mexico. By the Pliocene (5 Mya), the larger Canis lepophagus appeared in the same region, and by the early Pleistocene (1 Mya) C. latrans (the coyote) was in existence. They proposed that the progression from Eucyon davisi to C. lepophagus to the coyote was linear evolution. C. latrans and C. aureus are closely related to C. edwardii, a species that appeared earliest, spanning the mid-Blancan (late Pliocene) to the close of the Irvingtonian (late Pleistocene), and coyote remains indistinguishable from C. latrans were contemporaneous with C. edwardii in North America. Johnston describes C. lepophagus as having a more slender skull and skeleton than the modern coyote. Ronald Nowak found that the early populations had small, delicate, narrowly proportioned skulls that resemble small coyotes and appear to be ancestral to C. latrans. C. lepophagus was similar in weight to modern coyotes, but had shorter limb bones that indicate a less cursorial lifestyle. The coyote represents a more primitive form of Canis than the gray wolf, as shown by its relatively small size and its comparatively narrow skull and jaws, which lack the grasping power necessary to hold the large prey in which wolves specialize. This is further corroborated by the coyote's sagittal crest, which is low or totally flattened, indicating a weaker bite than that of wolves. The coyote, unlike the wolf, is not a specialized carnivore, as shown by the larger chewing surfaces on the molars, reflecting the species' relative dependence on vegetable matter. In these respects, the coyote resembles the fox-like progenitors of the genus more so than the wolf. The oldest fossils that fall within the range of the modern coyote date to 0.74–0.85 Ma (million years) in Hamilton Cave, West Virginia; 0.73 Ma in Irvington, California; 0.35–0.48 Ma in Porcupine Cave, Colorado, and in Cumberland Cave, Pennsylvania. Modern coyotes arose 1,000 years after the Quaternary extinction event. Compared to their modern Holocene counterparts, Pleistocene coyotes (C. l. orcutti) were larger and more robust, likely in response to larger competitors and prey. Pleistocene coyotes were likely more specialized carnivores than their descendants, as their teeth were more adapted to shearing meat, showing fewer grinding surfaces suited for processing vegetation. Their reduction in size occurred within 1,000 years of the Quaternary extinction event, when their large prey died out. Furthermore, Pleistocene coyotes were unable to exploit the big-game hunting niche left vacant after the extinction of the dire wolf (Aenocyon dirus), as it was rapidly filled by gray wolves, which likely actively killed off the large coyotes, with natural selection favoring the modern gracile morph. DNA evidence In 1993, a study proposed that the wolves of North America display skull traits more similar to those of the coyote than to those of wolves from Eurasia. In 2010, a study found that the coyote was a basal member of the clade that included the Tibetan wolf, the domestic dog, the Mongolian wolf and the Eurasian wolf, with the Tibetan wolf diverging early from wolves and domestic dogs. 
In 2016, a whole-genome DNA study proposed, based on the assumptions made, that all of the North American wolves and coyotes diverged from a common ancestor about 51,000 years ago. However, the proposed timing of the wolf / coyote divergence conflicts with the discovery of a coyote-like specimen in strata dated to 1 Mya. The study also indicated that all North American wolves have a significant amount of coyote ancestry and all coyotes some degree of wolf ancestry, and that the red wolf and eastern wolf are highly admixed with different proportions of gray wolf and coyote ancestry. Genetic studies relating to wolves or dogs have inferred phylogenetic relationships based on the only reference genome available, that of the Boxer dog. In 2017, the first reference genome of the wolf Canis lupus lupus was mapped to aid future research. In 2018, a study looked at the genomic structure and admixture of North American wolves, wolf-like canids, and coyotes using specimens from across their entire range that mapped the largest dataset of nuclear genome sequences against the wolf reference genome. The study supports the findings of previous studies that North American gray wolves and wolf-like canids were the result of complex gray wolf and coyote mixing. A polar wolf from Greenland and a coyote from Mexico represented the purest specimens. The coyotes from Alaska, California, Alabama, and Quebec show almost no wolf ancestry. Coyotes from Missouri, Illinois, and Florida exhibit 5–10% wolf ancestry. There was 40% wolf to 60% coyote ancestry in red wolves, 60% wolf to 40% coyote in Eastern timber wolves, and 75% wolf to 25% coyote in the Great Lakes wolves. There was 10% coyote ancestry in Mexican wolves and the Atlantic Coast wolves, 5% in Pacific Coast and Yellowstone wolves, and less than 3% in Canadian archipelago wolves. If a third canid had been involved in the admixture of the North American wolf-like canids, then its genetic signature would have been found in coyotes and wolves, which it has not. In 2018, whole genome sequencing was used to compare members of the genus Canis. The study indicates that the common ancestor of the coyote and gray wolf has genetically admixed with a ghost population of an extinct, unidentified canid. The "ghost" canid was genetically close to the dhole, and had evolved after the divergence of the African wild dog from the other canid species. The basal position of the coyote compared to the wolf is proposed to be due to the coyote retaining more of the mitochondrial genome from the unknown extinct canid. Subspecies , 19 subspecies are recognized. Geographic variation in coyotes is not great, though taken as a whole, the eastern subspecies ( and ) are large, dark-colored animals, with a gradual paling in color and reduction in size westward and northward (, , , and ), a brightening of 'ochraceous' tones – deep orange or brown – towards the Pacific coast (, ), a reduction in size in Aridoamerica (, ) and a general trend towards dark reddish colors and short muzzles in Mexican and Central American populations. Hybridization Coyotes occasionally mate with domestic dogs, sometimes producing crosses colloquially known as "coydogs". Such matings are rare in the wild, as the mating cycles of dogs and coyotes do not coincide, and coyotes are usually antagonistic towards dogs. Hybridization usually only occurs when coyotes are expanding into areas where conspecifics are few, and dogs are the only alternatives. 
Even then, pup survival rates are lower than normal, as dogs do not form pair bonds with coyotes, thus making the rearing of pups more difficult. In captivity, F1 hybrids (first generation) tend to be more mischievous and less manageable as pups than dogs, and are less trustworthy on maturity than wolf-dog hybrids. Hybrids vary in appearance, but generally retain the coyote's usual characteristics. F1 hybrids tend to be intermediate in form between dogs and coyotes, while F2 hybrids (second generation) are more varied. Both F1 and F2 hybrids resemble their coyote parents in terms of shyness and intrasexual aggression. Hybrids are fertile and can be successfully bred through four generations. Melanistic coyotes owe their black pelts to a mutation that first arose in domestic dogs. A population of non-albino white coyotes in Newfoundland owe their coloration to a melanocortin 1 receptor mutation inherited from Golden Retrievers. Coyotes have hybridized with wolves to varying degrees, particularly in eastern North America. The so-called "eastern coyote" of northeastern North America probably originated in the aftermath of the extermination of gray and eastern wolves in the northeast, thus allowing coyotes to colonize former wolf ranges and mix with the remnant wolf populations. This hybrid is smaller than either the gray or eastern wolf, and holds smaller territories, but is in turn larger and holds more extensive home ranges than the typical western coyote. , the eastern coyote's genetic makeup is fairly uniform, with minimal influence from eastern wolves or western coyotes. Adult eastern coyotes are larger than western coyotes, with female eastern coyotes weighing 21% more than male western coyotes. Physical differences become more apparent by the age of 35 days, with eastern coyote pups having longer legs than their western counterparts. Differences in dental development also occurs, with tooth eruption being later, and in a different order in the eastern coyote. Aside from its size, the eastern coyote is physically similar to the western coyote. The four color phases range from dark brown to blond or reddish blond, though the most common phase is gray-brown, with reddish legs, ears, and flanks. No significant differences exist between eastern and western coyotes in aggression and fighting, though eastern coyotes tend to fight less, and are more playful. Unlike western coyote pups, in which fighting precedes play behavior, fighting among eastern coyote pups occurs after the onset of play. Eastern coyotes tend to reach sexual maturity at two years of age, much later than in western coyotes. Eastern and red wolves are also products of varying degrees of wolf-coyote hybridization. The eastern wolf probably was a result of a wolf-coyote admixture, combined with extensive backcrossing with parent gray wolf populations. The red wolf may have originated during a time of declining wolf populations in the Southeastern Woodlands, forcing a wolf-coyote hybridization, as well as backcrossing with local parent coyote populations to the extent that about 75–80% of the modern red wolf's genome is of coyote derivation. Behavior Social and reproductive behaviors Like the Eurasian golden jackal, the coyote is gregarious, but not as dependent on conspecifics as more social canid species like wolves are. This is likely because the coyote is not a specialized hunter of large prey as the latter species is. The basic social unit of a coyote pack is a family containing a reproductive female. 
However, unrelated coyotes may join forces for companionship, or to bring down prey too large to attack on their own. Such "nonfamily" packs are only temporary, and may consist of bachelor males, nonreproductive females and subadult young. Families are formed in midwinter, when females enter estrus. Pair bonding can occur 2–3 months before actual copulation takes place. The copulatory tie can last 5–45 minutes. A female entering estrus attracts males by scent marking and howling with increasing frequency. A single female in heat can attract up to seven reproductive males, which can follow her for as long as a month. Although some squabbling may occur among the males, once the female has selected a mate and copulates, the rejected males do not intervene, and move on once they detect other estrous females. Unlike the wolf, which has been known to practice both monogamous and bigamous matings, the coyote is strictly monogamous, even in areas with high coyote densities and abundant food. Females that fail to mate sometimes assist their sisters or mothers in raising their pups, or join their siblings until the next time they can mate. The newly mated pair then establishes a territory and either constructs their own den or cleans out abandoned badger, marmot, or skunk earths. During the pregnancy, the male frequently hunts alone and brings back food for the female. The female may line the den with dried grass or with fur pulled from her belly. The gestation period is 63 days, with an average litter size of six, though the number fluctuates depending on coyote population density and the abundance of food. Coyote pups are born in dens, hollow trees, or under ledges, and weigh at birth. They are altricial, and are completely dependent on milk for their first 10 days. The incisors erupt at about 12 days, the canines at 16, and the second premolars at 21. Their eyes open after 10 days, by which point the pups become increasingly more mobile, walking by 20 days, and running at the age of six weeks. The parents begin supplementing the pup's diet with regurgitated solid food after 12–15 days. By the age of four to six weeks, when their milk teeth are fully functional, the pups are given small food items such as mice, rabbits, or pieces of ungulate carcasses, with lactation steadily decreasing after two months. Unlike wolf pups, coyote pups begin seriously fighting (as opposed to play fighting) prior to engaging in play behavior. A common play behavior includes the coyote "hip-slam". By three weeks of age, coyote pups bite each other with less inhibition than wolf pups. By the age of four to five weeks, pups have established dominance hierarchies, and are by then more likely to play rather than fight. The male plays an active role in feeding, grooming, and guarding the pups, but abandons them if the female goes missing before the pups are completely weaned. The den is abandoned by June to July, and the pups follow their parents in patrolling their territory and hunting. Pups may leave their families in August, though can remain for much longer. The pups attain adult dimensions at eight months and gain adult weight a month later. Territorial and sheltering behaviors Individual feeding territories vary in size from , with the general concentration of coyotes in a given area depending on food abundance, adequate denning sites, and competition with conspecifics and other predators. 
The coyote generally does not defend its territory outside of the denning season, and is much less aggressive towards intruders than the wolf is, typically chasing and sparring with them, but rarely killing them. Conflicts between coyotes can arise during times of food shortage. Coyotes mark their territories by raised-leg urination and ground-scratching. Like wolves, coyotes use a den, usually the deserted holes of other species, when gestating and rearing young, though they may occasionally give birth under sagebrushes in the open. Coyote dens can be located in canyons, washouts, coulees, banks, rock bluffs, or level ground. Some dens have been found under abandoned homestead shacks, grain bins, drainage pipes, railroad tracks, hollow logs, thickets, and thistles. The den is continuously dug and cleaned out by the female until the pups are born. Should the den be disturbed or infested with fleas, the pups are moved into another den. A coyote den can have several entrances and passages branching out from the main chamber. A single den can be used year after year. Hunting and feeding behaviors While the popular consensus is that olfaction is very important for hunting, two studies that experimentally investigated the role of olfactory, auditory, and visual cues found that visual cues are the most important ones for hunting in red foxes and coyotes. When hunting large prey, the coyote often works in pairs or small groups. Success in killing large ungulates depends on factors such as snow depth and crust density. Younger animals usually avoid participating in such hunts, with the breeding pair typically doing most of the work. The coyote pursues large prey, typically hamstringing the animal, and subsequently then harassing it until the prey falls. Like other canids, the coyote caches excess food. Coyotes catch mouse-sized rodents by pouncing, whereas ground squirrels are chased. Although coyotes can live in large groups, small prey is typically caught singly. Coyotes have been observed to kill porcupines in pairs, using their paws to flip the rodents on their backs, then attacking the soft underbelly. Only old and experienced coyotes can successfully prey on porcupines, with many predation attempts by young coyotes resulting in them being injured by their prey's quills. Coyotes sometimes urinate on their food, possibly to claim ownership over it. Recent evidence demonstrates that at least some coyotes have become more nocturnal in hunting, presumably to avoid humans. Coyotes may occasionally form mutualistic hunting relationships with American badgers, assisting each other in digging up rodent prey. The relationship between the two species may occasionally border on apparent "friendship", as some coyotes have been observed laying their heads on their badger companions or licking their faces without protest. The amicable interactions between coyotes and badgers were known to pre-Columbian civilizations, as shown on a jar found in Mexico dated to 1250–1300 CE depicting the relationship between the two. Food scraps, pet food, and animal feces may attract a coyote to a trash can. Communication Body language Being both a gregarious and solitary animal, the variability of the coyote's visual and vocal repertoire is intermediate between that of the solitary foxes and the highly social wolf. The aggressive behavior of the coyote bears more similarities to that of foxes than it does that of wolves and dogs. An aggressive coyote arches its back and lowers its tail. 
Unlike dogs, which solicit playful behavior by performing a "play-bow" followed by a "play-leap", play in coyotes consists of a bow, followed by side-to-side head flexions and a series of "spins" and "dives". Although coyotes will sometimes bite their playmates' scruff as dogs do, they typically approach low, and make upward-directed bites. Pups fight each other regardless of sex, while among adults, aggression is typically reserved for members of the same sex. Combatants approach each other waving their tails and snarling with their jaws open, though fights are typically silent. Males tend to fight in a vertical stance, while females fight on all four paws. Fights among females tend to be more serious than ones among males, as females seize their opponents' forelegs, throat, and shoulders. Vocalizations The coyote has been described as "the most vocal of all [wild] North American mammals". Its loudness and range of vocalizations was the cause for its binomial name Canis latrans, meaning "barking dog". At least 11 different vocalizations are known in adult coyotes. These sounds are divided into three categories: agonistic and alarm, greeting, and contact. Vocalizations of the first category include woofs, growls, huffs, barks, bark howls, yelps, and high-frequency whines. Woofs are used as low-intensity threats or alarms and are usually heard near den sites, prompting the pups to immediately retreat into their burrows. Growls are used as threats at short distances but have also been heard among pups playing and copulating males. Huffs are high-intensity threat vocalizations produced by rapid expiration of air. Barks can be classed as both long-distance threat vocalizations and alarm calls. Bark howls may serve similar functions. Yelps are emitted as a sign of submission, while high-frequency whines are produced by dominant animals acknowledging the submission of subordinates. Greeting vocalizations include low-frequency whines, 'wow-oo-wows', and group yip howls. Low-frequency whines are emitted by submissive animals and are usually accompanied by tail wagging and muzzle nibbling. The sound known as 'wow-oo-wow' has been described as a "greeting song". The group yip howl is emitted when two or more pack members reunite and may be the final act of a complex greeting ceremony. Contact calls include lone howls and group howls, as well as the previously mentioned group yip howls. The lone howl is the most iconic sound of the coyote and may serve the purpose of announcing the presence of a lone individual separated from its pack. Group howls are used as both substitute group yip howls and as responses to either lone howls, group howls, or group yip howls. Ecology Habitat Prior to the near extermination of wolves and cougars, the coyote was most numerous in grasslands inhabited by bison, pronghorn, elk, and other deer, doing particularly well in short-grass areas with prairie dogs, though it was just as much at home in semiarid areas with sagebrush and jackrabbits or in deserts inhabited by cactus, kangaroo rats, and rattlesnakes. As long as it was not in direct competition with the wolf, the coyote ranged from the Sonoran Desert to the alpine regions of adjoining mountains or the plains and mountainous areas of Alberta. With the extermination of the wolf, the coyote's range expanded to encompass broken forests from the tropics of Guatemala and the northern slope of Alaska. 
Coyotes walk around per day, often along trails such as logging roads and paths; they may use iced-over rivers as travel routes in winter. They are often crepuscular, being more active around evening and the beginning of the night than during the day. However, in urban areas coyotes are known to be more nocturnal, likely to avoid encounters with humans. Like many canids, coyotes are competent swimmers, reported to be able to travel at least across water. Diet The coyote is ecologically the North American equivalent of the Eurasian golden jackal. Likewise, the coyote is highly versatile in its choice of food, but is primarily carnivorous, with 90% of its diet consisting of meat. Prey species include bison (largely as carrion), white-tailed deer, mule deer, moose, elk, bighorn sheep, pronghorn, rabbits, hares, rodents, birds (especially galliformes, roadrunners, young water birds and pigeons and doves), amphibians (except toads), lizards, snakes, turtles and tortoises, fish, crustaceans, and insects. Coyotes may be picky over the prey they target, as animals such as shrews, moles, and brown rats do not occur in their diet in proportion to their numbers. Terrestrial animals or burrowing small mammals such as ground squirrels and associated species (marmots, prairie dogs, chipmunks) as well as voles, pocket gophers, kangaroo rats and other ground-favoring rodents may be quite common foods, especially for lone coyotes. Examples of specific, primary mammal prey include eastern cottontail rabbits, thirteen-lined ground squirrels, and white-footed mice. More unusual prey include fishers, young black bear cubs, harp seals and rattlesnakes. Coyotes kill rattlesnakes mostly for food, but also to protect their pups at their dens, by teasing the snakes until they stretch out and then biting their heads and snapping and shaking the snakes. Birds taken by coyotes may range in size from thrashers, larks and sparrows to adult wild turkeys and, rarely, brooding adult swans and pelicans. If working in packs or pairs, coyotes may have access to larger prey than lone individuals normally take, such as various prey weighing more than . In some cases, packs of coyotes have dispatched much larger prey such as adult Odocoileus deer, cow elk, pronghorns and wild sheep, although the young fawn, calves and lambs of these animals are considerably more often taken even by packs, as well as domestic sheep and domestic cattle. In some cases, coyotes can bring down prey weighing up to or more. When it comes to adult ungulates such as wild deer, they often exploit them when vulnerable such as those that are infirm, stuck in snow or ice, otherwise winter-weakened or heavily pregnant, whereas less wary domestic ungulates may be more easily exploited. Although coyotes prefer fresh meat, they will scavenge when the opportunity presents itself. Excluding the insects, fruit, and grass eaten, the coyote requires an estimated of food daily, or annually. The coyote readily cannibalizes the carcasses of conspecifics, with coyote fat having been successfully used by coyote hunters as a lure or poisoned bait. The coyote's winter diet consists mainly of large ungulate carcasses, with very little plant matter. Rodent prey increases in importance during the spring, summer, and fall. The coyote feeds on a variety of different produce, including strawberries, blackberries, blueberries, sarsaparillas, peaches, pears, apples, prickly pears, chapotes, persimmons, peanuts, watermelons, cantaloupes, and carrots. 
During the winter and early spring, the coyote eats large quantities of grass, such as green wheat blades. It sometimes eats unusual items such as cotton cake, soybean meal, domestic animal droppings, beans, and cultivated grain such as maize, wheat, and sorghum. In coastal California, coyotes now consume a higher percentage of marine-based food than their ancestors, which is thought to be due to the extirpation of the grizzly bear from this region. In Death Valley, coyotes may consume great quantities of hawkmoth caterpillars or beetles in the spring flowering months. Enemies and competitors In areas where the ranges of coyotes and gray wolves overlap, interference competition and predation by wolves has been hypothesized to limit local coyote densities. Coyote ranges expanded during the 19th and 20th centuries following the extirpation of wolves, while coyotes were driven to extinction on Isle Royale after wolves colonized the island in the 1940s. One study conducted in Yellowstone National Park, where both species coexist, concluded that the coyote population in the Lamar River Valley declined by 39% following the reintroduction of wolves in the 1990s, while coyote populations in wolf inhabited areas of the Grand Teton National Park are 33% lower than in areas where they are absent. Wolves have been observed to not tolerate coyotes in their vicinity, though coyotes have been known to trail wolves to feed on their kills. Coyotes may compete with cougars in some areas. In the eastern Sierra Nevada, coyotes compete with cougars over mule deer. Cougars normally outcompete and dominate coyotes, and may kill them occasionally, thus reducing coyote predation pressure on smaller carnivores such as foxes and bobcats. Coyotes that are killed are sometimes not eaten, perhaps indicating that these comprise competitive interspecies interactions, however there are multiple confirmed cases of cougars also eating coyotes. In northeastern Mexico, cougar predation on coyotes continues apace but coyotes were absent from the prey spectrum of sympatric jaguars, apparently due to differing habitat usages. Other than by gray wolves and cougars, predation on adult coyotes is relatively rare but multiple other predators can be occasional threats. In some cases, adult coyotes have been preyed upon by both American black and grizzly bears, American alligators, large Canada lynx and golden eagles. At kill sites and carrion, coyotes, especially if working alone, tend to be dominated by wolves, cougars, bears, wolverines and, usually but not always, eagles (i.e., bald and golden). When such larger, more powerful or more aggressive predators such as these come to a shared feeding site, a coyote may either try to fight, wait until the other predator is done or occasionally share a kill, but if a major danger such as wolves or an adult cougar is present, the coyote will tend to flee. Coyotes rarely kill healthy adult red foxes, and have been observed to feed or den alongside them, though they often kill foxes caught in traps. Coyotes may kill fox kits, but this is not a major source of mortality. In southern California, coyotes frequently kill gray foxes, and these smaller canids tend to avoid areas with high coyote densities. In some areas, coyotes share their ranges with bobcats. These two similarly-sized species rarely physically confront one another, though bobcat populations tend to diminish in areas with high coyote densities. 
However, several studies have demonstrated interference competition between coyotes and bobcats, and in all cases coyotes dominated the interaction. Multiple researchers reported instances of coyotes killing bobcats, whereas bobcats killing coyotes is more rare. Coyotes attack bobcats using a bite-and-shake method similar to what is used on medium-sized prey. Coyotes, both single individuals and groups, have been known to occasionally kill bobcats. In most cases, the bobcats were relatively small specimens, such as adult females and juveniles. Coyote attacks, by an unknown number of coyotes, on adult male bobcats have occurred. In California, coyote and bobcat populations are not negatively correlated across different habitat types, but predation by coyotes is an important source of mortality in bobcats. Biologist Stanley Paul Young noted that in his entire trapping career, he had never successfully saved a captured bobcat from being killed by coyotes, and wrote of two incidents wherein coyotes chased bobcats up trees. Coyotes have been documented to directly kill Canada lynx on occasion, and compete with them for prey, especially snowshoe hares. In some areas, including central Alberta, lynx are more abundant where coyotes are few, thus interactions with coyotes appears to influence lynx populations more than the availability of snowshoe hares. Range Due to the coyote's wide range and abundance throughout North America, it is listed as Least Concern by the International Union for Conservation of Nature (IUCN). The coyote's pre-Columbian range was limited to the Southwest and Plains regions of North America, and northern and central Mexico. By the 19th century, the species expanded north and east, expanding further after 1900, coinciding with land conversion and the extirpation of wolves. By this time, its range encompassed the entire North American continent, including all of the contiguous United States and Mexico, southward into Central America, and northward into most of Canada and Alaska. This expansion is ongoing, and the species now occupies the majority of areas between 8°N (Panama) and 70°N (northern Alaska). Although it was once widely believed that coyotes are recent immigrants to southern Mexico and Central America, aided in their expansion by deforestation, Pleistocene and Early Holocene records, as well as records from the pre-Columbian period and early European colonization show that the animal was present in the area long before modern times. Range expansion occurred south of Costa Rica during the late 1970s and northern Panama in the early 1980s, following the expansion of cattle-grazing lands into tropical rain forests. The coyote is predicted to appear in northern Belize in the near future, as the habitat there is favorable to the species. Concerns have been raised of a possible expansion into South America through the Panamanian Isthmus, should the Darién Gap ever be closed by the Pan-American Highway. This fear was partially confirmed in January 2013, when the species was recorded in eastern Panama's Chepo District, beyond the Panama Canal. A 2017 genetic study proposes that coyotes were originally not found in the area of the eastern United States. From the 1890s, dense forests were transformed into agricultural land and wolf control implemented on a large scale, leaving a niche for coyotes to disperse into. There were two major dispersals from two populations of genetically distinct coyotes. 
The first major dispersal to the northeast came in the early 20th century from those coyotes living in the northern Great Plains. These came to New England via the northern Great Lakes region and southern Canada, and to Pennsylvania via the southern Great Lakes region, meeting together in the 1940s in New York and Pennsylvania. These coyotes have hybridized with the remnant gray wolf and eastern wolf populations, which has added to coyote genetic diversity and may have assisted adaptation to the new niche. The second major dispersal to the southeast came in the mid-20th century from Texas and reached the Carolinas in the 1980s. These coyotes have hybridized with the remnant red wolf populations before the 1970s when the red wolf was extirpated in the wild, which has also added to coyote genetic diversity and may have assisted adaptation to this new niche as well. Both of these two major coyote dispersals have experienced rapid population growth and are forecast to meet along the mid-Atlantic coast. The study concludes that for coyotes the long range dispersal, gene flow from local populations, and rapid population growth may be inter-related. Diseases and parasites Among large North American carnivores, the coyote probably carries the largest number of diseases and parasites, likely due to its wide range and varied diet. Viral diseases known to infect coyotes include rabies, canine distemper, infectious canine hepatitis, four strains of equine encephalitis, and oral papillomatosis. By the late 1970s, serious rabies outbreaks in coyotes had ceased to be a problem for over 60 years, though sporadic cases every 1–5 years did occur. Distemper causes the deaths of many pups in the wild, though some specimens can survive infection. Tularemia, a bacterial disease, infects coyotes from tick bites and through their rodent and lagomorph prey, and can be deadly for pups. Coyotes can be infected by both demodectic and sarcoptic mange, the latter being the most common. Mite infestations are rare and incidental in coyotes, while tick infestations are more common, with seasonal peaks depending on locality (May–August in the Northwest, March–November in Arkansas). Coyotes are only rarely infested with lice, while fleas infest coyotes from puphood, though they may be more a source of irritation than serious illness. Pulex simulans is the most common species to infest coyotes, while Ctenocephalides canis tends to occur only in places where coyotes and dogs (its primary host) inhabit the same area. Although coyotes are rarely host to flukes, they can nevertheless have serious effects on coyotes, particularly Nanophyetus salmincola, which can infect them with salmon poisoning disease, a disease with a 90% mortality rate. Trematode Metorchis conjunctus can also infect coyotes. Tapeworms have been recorded to infest 60–95% of all coyotes examined. The most common species to infest coyotes are Taenia pisiformis and Taenia crassiceps, which uses cottontail rabbits and rodents as intermediate hosts. The largest species known in coyotes is T. hydatigena, which enters coyotes through infected ungulates, and can grow to lengths of . Although once largely limited to wolves, Echinococcus granulosus has expanded to coyotes since the latter began colonizing former wolf ranges. The most frequent ascaroid roundworm in coyotes is Toxascaris leonina, which dwells in the coyote's small intestine and has no ill effects, except for causing the host to eat more frequently. 
Hookworms of the genus Ancylostoma infest coyotes throughout their range, being particularly prevalent in humid areas. In areas of high moisture, such as coastal Texas, coyotes can carry up to 250 hookworms each. The blood-drinking A. caninum is particularly dangerous, as it damages the coyote through blood loss and lung congestion. A 10-day-old pup can die from being host to as few as 25 A. caninum worms. Relationships with humans In folklore and mythology Coyote features as a trickster figure and skin-walker in the folktales of some Native Americans, notably several nations in the Southwestern and Plains regions, where he alternately assumes the form of an actual coyote or that of a man. As with other trickster figures, Coyote acts as a picaresque hero who rebels against social convention through deception and humor. Folklorists such as Harris believe coyotes came to be seen as tricksters due to the animal's intelligence and adaptability. After the European colonization of the Americas, Anglo-American depictions of Coyote are of a cowardly and untrustworthy animal. Unlike the gray wolf, which has undergone a radical improvement of its public image, Anglo-American cultural attitudes towards the coyote remain largely negative. In the Maidu creation story, Coyote introduces work, suffering, and death to the world. Zuni lore has Coyote bringing winter into the world by stealing light from the kachinas. The Chinook, Maidu, Pawnee, Tohono O'odham, and Ute portray the coyote as the companion of The Creator. A Tohono O'odham flood story has Coyote helping Montezuma survive a global deluge that destroys humanity. After The Creator creates humanity, Coyote and Montezuma teach people how to live. The Crow creation story portrays Old Man Coyote as The Creator. In The Dineh creation story, Coyote was present in the First World with First Man and First Woman, though a different version has it being created in the Fourth World. The Navajo Coyote brings death into the world, explaining that without death, too many people would exist, thus no room to plant corn. Prior to the Spanish conquest of the Aztec Empire, Coyote played a significant role in Mesoamerican cosmology. The coyote symbolized military might in Classic era Teotihuacan, with warriors dressing up in coyote costumes to call upon its predatory power. The species continued to be linked to Central Mexican warrior cults in the centuries leading up to the post-Classic Aztec rule. In Aztec mythology, Huehuecóyotl (meaning "old coyote"), the god of dance, music and carnality, is depicted in several codices as a man with a coyote's head. He is sometimes depicted as a womanizer, responsible for bringing war into the world by seducing Xochiquetzal, the goddess of love. Epigrapher David H. Kelley argued that the god Quetzalcoatl owed its origins to pre-Aztec Uto-Aztecan mythological depictions of the coyote, which is portrayed as mankind's "Elder Brother", a creator, seducer, trickster, and culture hero linked to the morning star. Attacks on humans Coyote attacks on humans are uncommon and rarely cause serious injuries, due to the relatively small size of the coyote, but have been increasingly frequent, especially in California. By the middle of the 19th century, the coyote was already marked as an enemy by humans. (Sharp & Hall, 1978 Pg. 41-54) There have been only two confirmed fatal attacks: one on three-year-old Kelly Keen in Glendale, California and another on nineteen-year-old singer-songwriter Taylor Mitchell in Nova Scotia, Canada. 
In the 30 years leading up to March 2006, at least 160 attacks occurred in the United States, mostly in the Los Angeles County area. Data from United States Department of Agriculture (USDA) Wildlife Services, the California Department of Fish and Game, and other sources show that while 41 attacks occurred during the period of 1988–1997, 48 attacks were verified from 1998 through 2003. The majority of these incidents occurred in Southern California near the suburban-wildland interface. In the absence of the harassment of coyotes practiced by rural people, urban coyotes are losing their fear of humans, which is further worsened by people intentionally or unintentionally feeding coyotes. In such situations, some coyotes have begun to act aggressively toward humans, chasing joggers and bicyclists, confronting people walking their dogs, and stalking small children. Albeit rarely, coyotes in these areas have targeted small children, mostly under the age of 10, though some adults have been bitten. Although media reports of such attacks generally identify the animals in question as simply "coyotes", research into the genetics of the eastern coyote indicates those involved in attacks in northeast North America, including Pennsylvania, New York, New England, and eastern Canada, may have actually been coywolves, hybrids of Canis latrans and C. lupus, not fully coyotes. Livestock and pet predation In recent decades, coyotes have been the most abundant livestock predators in western North America, causing the majority of sheep, goat, and cattle losses. For example, according to the National Agricultural Statistics Service, coyotes were responsible for 60.5% of the 224,000 sheep deaths attributed to predation in 2004. The total number of sheep deaths in 2004 comprised 2.22% of the total sheep and lamb population in the United States, which, according to the National Agricultural Statistics Service USDA report, totaled 4.66 million and 7.80 million heads respectively as of July 1, 2005. Because coyote populations are typically many times greater and more widely distributed than those of wolves, coyotes cause more overall predation losses. United States government agents routinely shoot, poison, trap, and kill about 90,000 coyotes each year to protect livestock. An Idaho census taken in 2005 showed that individual coyotes were 5% as likely to attack livestock as individual wolves. In Utah, more than 11,000 coyotes were killed for bounties totaling over $500,000 in the fiscal year ending June 30, 2017. Livestock guardian dogs are commonly used to aggressively repel predators and have worked well in both fenced pasture and range operations. A 1986 survey of sheep producers in the USA found that 82% reported the use of dogs represented an economic asset. Re-wilding cattle, which involves increasing the natural protective tendencies of cattle, is a method for controlling coyotes discussed by Temple Grandin of Colorado State University. This method is gaining popularity among producers who allow their herds to calve on the range and whose cattle graze open pastures throughout the year. Coyotes typically bite the throat just behind the jaw and below the ear when attacking adult sheep or goats, with death commonly resulting from suffocation. Blood loss is usually a secondary cause of death. Calves and heavily fleeced sheep are killed by attacking the flanks or hindquarters, causing shock and blood loss.
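The loss and bounty figures quoted earlier in this section imply a couple of simple derived numbers. The short Python sketch below works them out; it is only an illustrative back-of-the-envelope calculation using the values cited above (NASS 2004–2005 and the Utah 2017 bounty program), not an additional source.

```python
# Illustrative arithmetic on figures quoted above; not from the cited reports.
sheep_deaths_to_predation_2004 = 224_000   # all predators combined (NASS, 2004)
coyote_share = 0.605                       # 60.5% of those deaths attributed to coyotes

coyote_caused_sheep_deaths = coyote_share * sheep_deaths_to_predation_2004
print(f"Sheep deaths attributed to coyotes in 2004: ~{coyote_caused_sheep_deaths:,.0f}")
# -> ~135,520

utah_bounty_total = 500_000    # dollars, fiscal year ending June 30, 2017 (lower bound)
utah_coyotes_killed = 11_000   # lower bound
print(f"Approximate bounty paid per coyote: ~${utah_bounty_total / utah_coyotes_killed:.0f}")
# -> roughly $45 per animal
```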
When attacking smaller prey, such as young lambs, the kill is made by biting the skull and spinal regions, causing massive tissue and bone damage. Small or young prey may be completely carried off, leaving only blood as evidence of a kill. Coyotes usually leave the hide and most of the skeleton of larger animals relatively intact, unless food is scarce, in which case they may leave only the largest bones. Scattered bits of wool, skin, and other parts are characteristic where coyotes feed extensively on larger carcasses. Tracks are an important factor in distinguishing coyote from dog predation. Coyote tracks tend to be more oval-shaped and compact than those of domestic dogs, and their claw marks are less prominent and the tracks tend to follow a straight line more closely than those of dogs. With the exception of sighthounds, most dogs of similar weight to coyotes have a slightly shorter stride. Coyote kills can be distinguished from wolf kills by less damage to the underlying tissues in the former. Also, coyote scat tends to be smaller than wolf scat. Coyotes are often attracted to dog food and animals that are small enough to appear as prey. Items such as garbage, pet food, and sometimes feeding stations for birds and squirrels attract coyotes into backyards. About three to five pets attacked by coyotes are brought into the Animal Urgent Care hospital of South Orange County (California) each week, the majority of which are dogs, since cats typically do not survive the attacks. Scat analysis collected near Claremont, California, revealed that coyotes relied heavily on pets as a food source in winter and spring. At one location in Southern California, coyotes began relying on a colony of feral cats as a food source. Over time, the coyotes killed most of the cats and then continued to eat the cat food placed daily at the colony site by people who were maintaining the cat colony. Coyotes usually attack smaller-sized dogs, but they have been known to attack even large, powerful breeds such as the Rottweiler in exceptional cases. Dogs larger than coyotes, such as greyhounds, are generally able to drive them off and have been known to kill coyotes. Smaller breeds are more likely to suffer injury or death. Hunting Coyote hunting is one of the most common forms of predator hunting. Few regulations govern the taking of coyotes, so a wide variety of methods can be used to hunt the animal; the most common are trapping, calling, and hound hunting. Since coyotes are colorblind, seeing only in shades of gray and subtle blues, open camouflages and plain patterns can be used. As the average male coyote weighs 8 to 20 kg (18 to 44 lb) and the average female 7 to 18 kg (15 to 40 lb), a cartridge commonly considered adequate across that weight range is the .223 Remington, chosen so that the projectile expands in the target after entry but before exit, delivering the most energy. Being light and agile animals, coyotes often leave only a faint impression on terrain. The coyote's footprint is oblong, approximately 6.35 cm (2.5 in) long and 5.08 cm (2 in) wide. There are four claws in both their front and hind paws. The coyote's center pad is roughly shaped like a rounded triangle. Like the domestic dog's, the coyote's front paw is slightly larger than the hind paw. Overall, the coyote's paw is most similar to that of the domestic dog.
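The track characteristics above amount to a small checklist for separating coyote sign from that of a similar-sized dog. The sketch below encodes those qualitative cues in Python purely as an illustration; the scoring scheme and the size tolerance are assumptions made here for demonstration, not a validated field-identification key.

```python
# Illustrative sketch only: encodes the qualitative track cues described above.
def coyote_track_score(length_cm: float, width_cm: float,
                       oval_and_compact: bool, faint_claw_marks: bool,
                       straight_travel_line: bool) -> int:
    """Count how many of the coyote-consistent track cues are present (0-4)."""
    score = 0
    # Typical coyote print cited above: roughly 6.35 cm long by 5.08 cm wide.
    if abs(length_cm - 6.35) <= 1.0 and abs(width_cm - 5.08) <= 1.0:
        score += 1
    if oval_and_compact:         # dog tracks tend to be rounder and less compact
        score += 1
    if faint_claw_marks:         # coyote claw marks are less prominent than dogs'
        score += 1
    if straight_travel_line:     # coyote trails follow a straighter line than dogs'
        score += 1
    return score

# Example: an oval 6.5 cm x 5.0 cm print with faint claw marks on a straight trail
print(coyote_track_score(6.5, 5.0, True, True, True))  # -> 4 (strongly coyote-consistent)
```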
The hunting of coyotes often results in gray wolves being shot in places where the two species still coexist, as a result of mistaken identity. Fur uses Prior to the mid-19th century, coyote fur was considered worthless. This changed with the diminution of beavers, and by 1860, the hunting of coyotes for their fur became a great source of income (75 cents to $1.50 per skin) for wolfers in the Great Plains. Coyote pelts were of significant economic importance during the early 1950s, ranging in price from $5 to $25 per pelt, depending on locality. The coyote's fur is not durable enough to make rugs, but can be used for coats and jackets, scarves, or muffs. The majority of pelts are used for making trimmings, such as coat collars and sleeves for women's clothing. Coyote fur is sometimes dyed black as imitation silver fox. Coyotes were occasionally eaten by trappers and mountain men during the western expansion. Coyotes sometimes featured in the feasts of the Plains Indians, and coyote pups were eaten by the indigenous people of San Gabriel, California. The taste of coyote meat has been likened to that of the wolf and is more tender than pork when boiled. Coyote fat, when taken in the fall, has been used on occasion to grease leather or eaten as a spread. Tameability Coyotes were likely semidomesticated by various pre-Columbian cultures. Some 19th-century writers wrote of coyotes being kept in native villages in the Great Plains. The coyote is easily tamed as a pup, but can become destructive as an adult. Both full-blooded and hybrid coyotes can be playful and confiding with their owners, but are suspicious and shy of strangers, though coyotes tractable enough to be used for practical purposes such as retrieving and pointing have been recorded. A tame coyote named "Butch", caught in the summer of 1945, had a short-lived career in cinema, appearing in Smoky (1946) and Ramrod (1947) before being shot while raiding a henhouse. In popular culture Wile E. Coyote features prominently in the Looney Tunes and Merrie Melodies series of animated short films, in which he makes numerous ill-fated attempts to capture an elusive roadrunner. Dag and his coyote pack are the main antagonists in Nickelodeon's 2006 animated film Barnyard. The NHL team in Arizona (1996–2024) was named the Arizona Coyotes to pay tribute to the large population of coyotes in the region. The famous oo-wee-oo-wee-oo wah-wah-wah scream in The Good, The Bad and The Ugly (1966) was inspired by the howl of the coyote. Copper, a coyote, was one of three mascots for the 2002 Winter Olympics. An animated coyote voiced by Johnny Cash plays a pivotal role as a spirit guide to Homer Simpson in the Simpsons episode "El Viaje Misterioso de Nuestro Jomer". The 2013 documentary film Bad Coyote profiles the expansion of coyotes into Atlantic Canada, centred in part on the 2009 death of singer-songwriter Taylor Mitchell in a coyote attack. Athletic teams at the University of South Dakota are called the Coyotes. The Daily Coyote is a 2008 autobiographical book about a woman who raises a coyote pup.
Canidae
Canidae (; from Latin, canis, "dog") is a biological family of dog-like carnivorans, colloquially referred to as dogs, and constitutes a clade. A member of this family is also called a canid (). The family includes three subfamilies: the Caninae, and the extinct Borophaginae and Hesperocyoninae. The Caninae are known as canines, and include domestic dogs, wolves, coyotes, foxes, jackals and other species. Canids are found on all continents except Antarctica, having arrived independently or accompanied by human beings over extended periods of time. Canids vary in size from the gray wolf to the fennec fox. The body forms of canids are similar, typically having long muzzles, upright ears, teeth adapted for cracking bones and slicing flesh, long legs, and bushy tails. They are mostly social animals, living together in family units or small groups and behaving co-operatively. Typically, only the dominant pair in a group breeds and a litter of young are reared annually in an underground den. Canids communicate by scent signals and vocalizations. One canid, the domestic dog, originated from a symbiotic relationship with Upper Paleolithic humans and is one of the most widely kept domestic animals. Taxonomy In the history of the carnivores, the family Canidae is represented by the two extinct subfamilies designated as Hesperocyoninae and Borophaginae, and the extant subfamily Caninae. This subfamily includes all living canids and their most recent fossil relatives. All living canids as a group form a dental monophyletic relationship with the extinct borophagines, with both groups having a bicuspid (two points) on the lower carnassial talonid, which gives this tooth an additional ability in mastication. This, together with the development of a distinct entoconid cusp and the broadening of the talonid of the first lower molar, and the corresponding enlargement of the talon of the upper first molar and reduction of its parastyle distinguish these late Cenozoic canids and are the essential differences that identify their clade. The cat-like Feliformia and dog-like Caniformia emerged within the Carnivoramorpha around 45–42 Mya (million years ago). The Canidae first appeared in North America during the Late Eocene (37.8-33.9 Mya). They did not reach Eurasia until the Late Miocene or to South America until the Late Pliocene. Phylogenetic relationships This cladogram shows the phylogenetic position of canids within Caniformia, based on fossil finds: Evolution The Canidae are a diverse group of some 37 species ranging in size from the maned wolf with its long limbs to the short-legged bush dog. Modern canids inhabit forests, tundra, savannas, and deserts throughout tropical and temperate parts of the world. The evolutionary relationships between the species have been studied in the past using morphological approaches, but more recently, molecular studies have enabled the investigation of phylogenetics relationships. In some species, genetic divergence has been suppressed by the high level of gene flow between different populations and where the species have hybridized, large hybrid zones exist. Eocene epoch Carnivorans evolved after the extinction of the non-avian dinosaurs 66 million years ago. Around 50 million years ago, or earlier, in the Paleocene, the Carnivora split into two main divisions: caniform (dog-like) and feliform (cat-like). By 40 Mya, the first identifiable member of the dog family had arisen. Named Prohesperocyon wilsoni, its fossils have been found in southwest Texas. 
The chief features which identify it as a canid include the loss of the upper third molar (part of a trend toward a more shearing bite), and the structure of the middle ear which has an enlarged bulla (the hollow bony structure protecting the delicate parts of the ear). Prohesperocyon probably had slightly longer limbs than its predecessors, and also had parallel and closely touching toes which differ markedly from the splayed arrangements of the digits in bears. Canidae soon divided into three subfamilies, each of which diverged during the Eocene: Hesperocyoninae (about 39.74–15 Mya), Borophaginae (about 34–32 Mya), and Caninae (about 34–30 Mya; the only surviving subfamily). Members of each subfamily showed an increase in body mass with time and some exhibited specialized hypercarnivorous diets that made them prone to extinction. Oligocene epoch By the Oligocene, all three subfamilies (Hesperocyoninae, Borophaginae, and Caninae) had appeared in the fossil record of North America. The earliest and most primitive branch of the Canidae was Hesperocyoninae, which included the coyote-sized Mesocyon of the Oligocene (38–24 Mya). These early canids probably evolved for the fast pursuit of prey in a grassland habitat; they resembled modern viverrids in appearance. Hesperocyonines eventually became extinct in the middle Miocene. One of the early Hesperocyonines, the genus Hesperocyon, gave rise to Archaeocyon and Leptocyon. These branches led to the borophagine and canine radiations. Miocene epoch Around 8 Mya, the Beringian land bridge allowed members of the genus Eucyon a means to enter Asia from North America, and they continued on to colonize Europe. Pliocene epoch The Canis, Urocyon, and Vulpes genera developed from canids from North America, where the canine radiation began. The success of these canids was related to the development of lower carnassials that were capable of both mastication and shearing. Around 5 million years ago, some of the Old World Eucyon evolved into the first members of Canis. In the Pliocene, around 4–5 Mya, Canis lepophagus appeared in North America. This species was small and sometimes coyote-like; others were wolf-like. C. latrans (the coyote) is theorized to descend from C. lepophagus. The formation of the Isthmus of Panama, about 3 Mya, joined South America to North America, allowing canids to invade South America, where they diversified. However, the last common ancestor of the South American canids lived in North America some 4 Mya, and more than one incursion across the new land bridge is likely given that more than one lineage is present in South America. Two North American lineages found in South America are the gray fox (Urocyon cinereoargenteus) and the now-extinct dire wolf (Aenocyon dirus). Besides these, there are species endemic to South America: the maned wolf (Chrysocyon brachyurus), the short-eared dog (Atelocynus microtis), the bush dog (Speothos venaticus), the crab-eating fox (Cerdocyon thous), and the South American foxes (Lycalopex spp.). The monophyly of this group has been established by molecular means. Pleistocene epoch During the Pleistocene, the North American wolf line appeared, with Canis edwardii, clearly identifiable as a wolf, and Canis rufus appeared, possibly a direct descendant of C. edwardii. Around 0.8 Mya, Canis armbrusteri emerged in North America. A large wolf, it was found all over North and Central America and was eventually supplanted by the dire wolf, which then spread into South America during the Late Pleistocene.
By 0.3 Mya, a number of subspecies of the gray wolf (C. lupus) had developed and had spread throughout Europe and northern Asia. The gray wolf colonized North America during the late Rancholabrean era across the Bering land bridge, with at least three separate invasions, with each one consisting of one or more different Eurasian gray wolf clades. MtDNA studies have shown that there are at least four extant C. lupus lineages. The dire wolf shared its habitat with the gray wolf, but became extinct in a large-scale extinction event that occurred around 11,500 years ago. It may have been more of a scavenger than a hunter; its molars appear to be adapted for crushing bones and it may have gone extinct as a result of the extinction of the large herbivorous animals on whose carcasses it relied. In 2015, a study of mitochondrial genome sequences and whole-genome nuclear sequences of African and Eurasian canids indicated that extant wolf-like canids have colonized Africa from Eurasia at least five times throughout the Pliocene and Pleistocene, which is consistent with fossil evidence suggesting that much of African canid fauna diversity resulted from the immigration of Eurasian ancestors, likely coincident with Plio-Pleistocene climatic oscillations between arid and humid conditions. When comparing the African and Eurasian golden jackals, the study concluded that the African specimens represented a distinct monophyletic lineage that should be recognized as a separate species, Canis anthus (African golden wolf). According to a phylogeny derived from nuclear sequences, the Eurasian golden jackal (Canis aureus) diverged from the wolf/coyote lineage 1.9 Mya, but the African golden wolf separated 1.3 Mya. Mitochondrial genome sequences indicated the Ethiopian wolf diverged from the wolf/coyote lineage slightly prior to that. Wild canids are found on every continent except Antarctica, and inhabit a wide range of different habitats, including deserts, mountains, forests, and grasslands. They vary in size from the fennec fox, which may be as little as in length and weigh , to the gray wolf, which may be up to long, and can weigh up to . Only a few species are arboreal—the gray fox, the closely related island fox and the raccoon dog habitually climb trees. All canids have a similar basic form, as exemplified by the gray wolf, although the relative length of muzzle, limbs, ears, and tail vary considerably between species. With the exceptions of the bush dog, the raccoon dog and some domestic dog breeds, canids have relatively long legs and lithe bodies, adapted for chasing prey. The tails are bushy and the length and quality of the pelage vary with the season. The muzzle portion of the skull is much more elongated than that of the cat family. The zygomatic arches are wide, there is a transverse lambdoidal ridge at the rear of the cranium and in some species, a sagittal crest running from front to back. The bony orbits around the eye never form a complete ring and the auditory bullae are smooth and rounded. Females have three to seven pairs of mammae. All canids are digitigrade, meaning they walk on their toes. The tip of the nose is always naked, as are the cushioned pads on the soles of the feet. These latter consist of a single pad behind the tip of each toe and a more-or-less three-lobed central pad under the roots of the digits. Hairs grow between the pads and in the Arctic fox the sole of the foot is densely covered with hair at some times of the year. 
With the exception of the four-toed African wild dog (Lycaon pictus), five toes are on the forefeet, but the pollex (thumb) is reduced and does not reach the ground. On the hind feet are four toes, but in some domestic dogs, a fifth vestigial toe, known as a dewclaw, is sometimes present, but has no anatomical connection to the rest of the foot. In some species, slightly curved nails are non-retractile and more-or-less blunt while other species have sharper, partially-retractile claws. The canine penis contains a baculum and a structure called the bulbus glandis that expands during copulation, forming a copulatory tie that lasts for up to an hour. Young canids are born blind, with their eyes opening a few weeks after birth. All living canids (Caninae) have a ligament analogous to the nuchal ligament of ungulates used to maintain the posture of the head and neck with little active muscle exertion; this ligament allows them to conserve energy while running long distances following scent trails with their nose to the ground. However, based on skeletal details of the neck, at least some of the Borophaginae (such as Aelurodon) are believed to have lacked this ligament. Dentition Dentition relates to the arrangement of teeth in the mouth, with the dental notation for the upper-jaw teeth using the upper-case letters I to denote incisors, C for canines, P for premolars, and M for molars, and the lower-case letters i, c, p and m to denote the mandible teeth. Teeth are numbered using one side of the mouth and from the front of the mouth to the back. In carnivores, the upper premolar P4 and the lower molar m1 form the carnassials that are used together in a scissor-like action to shear the muscle and tendon of prey. Canids use their premolars for cutting and crushing except for the upper fourth premolar P4 (the upper carnassial) that is only used for cutting. They use their molars for grinding except for the lower first molar m1 (the lower carnassial) that has evolved for both cutting and grinding depending on the canid's dietary adaptation. On the lower carnassial, the trigonid is used for slicing and the talonid is used for grinding. The ratio between the trigonid and the talonid indicates a carnivore's dietary habits, with a larger trigonid indicating a hypercarnivore and a larger talonid indicating a more omnivorous diet. Because of its low variability, the length of the lower carnassial is used to provide an estimate of a carnivore's body size. A study of the estimated bite force at the canine teeth of a large sample of living and fossil mammalian predators, when adjusted for their body mass, found that for placental mammals the bite force at the canines was greatest in the extinct dire wolf (163), followed among the modern canids by the four hypercarnivores that often prey on animals larger than themselves: the African wild dog (142), the gray wolf (136), the dhole (112), and the dingo (108). The bite force at the carnassials showed a similar trend to the canines. A predator's largest prey size is strongly influenced by its biomechanical limits. Most canids have 42 teeth, with a dental formula of: . The bush dog has only one upper molar with two below, the dhole has two above and two below. and the bat-eared fox has three or four upper molars and four lower ones. The molar teeth are strong in most species, allowing the animals to crack open bone to reach the marrow. The deciduous, or baby teeth, formula in canids is , molars being completely absent. 
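The dental formulas in the passage above did not survive formatting. As an illustration of how dental-formula notation maps to a total tooth count, the short sketch below uses the adult formula commonly given for most canids (I 3/3, C 1/1, P 4/4, M 2/3 per side); that formula is supplied here as a hedged assumption rather than text recovered from this article, and it reproduces the 42 teeth stated above.

```python
# Sketch of how a dental formula converts to a total tooth count.
# The formula used (I 3/3, C 1/1, P 4/4, M 2/3) is the one commonly cited
# for most adult canids; it is an assumption here, since the formula itself
# did not render in the text above.
upper = {"incisors": 3, "canines": 1, "premolars": 4, "molars": 2}  # teeth per side, upper jaw
lower = {"incisors": 3, "canines": 1, "premolars": 4, "molars": 3}  # teeth per side, lower jaw

total_teeth = 2 * (sum(upper.values()) + sum(lower.values()))
print(total_teeth)  # -> 42, matching the tooth count given above for most canids
```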
Life history Social behavior Almost all canids are social animals and live together in groups. In general, they are territorial or have a home range and sleep in the open, using their dens only for breeding and sometimes in bad weather. In most foxes, and in many of the true dogs, a male and female pair work together to hunt and to raise their young. Gray wolves and some of the other larger canids live in larger groups called packs. African wild dogs have packs which may consist of 20 to 40 animals and packs of fewer than about seven individuals may be incapable of successful reproduction. Hunting in packs has the advantage that larger prey items can be tackled. Some species form packs or live in small family groups depending on the circumstances, including the type of available food. In most species, some individuals live on their own. Within a canid pack, there is a system of dominance so that the strongest, most experienced animals lead the pack. In most cases, the dominant male and female are the only pack members to breed. Communication Canids communicate with each other by scent signals, by visual clues and gestures, and by vocalizations such as growls, barks, and howls. In most cases, groups have a home territory from which they drive out other conspecifics. Canids use urine scent marks to mark their food caches or warn trespassing individuals. Social behavior is also mediated by secretions from glands on the upper surface of the tail near its root and from the anal glands, preputial glands, and supracaudal glands. Reproduction Canids as a group exhibit several reproductive traits that are uncommon among mammals as a whole. They are typically monogamous, provide paternal care to their offspring, have reproductive cycles with lengthy proestral and dioestral phases and have a copulatory tie during mating. They also retain adult offspring in the social group, suppressing the ability of these to breed while making use of the alloparental care they can provide to help raise the next generation. Most canid species are spontaneous ovulators, though maned wolves are induced ovulators. During the proestral period, increased levels of estradiol make the female attractive to the male. There is a rise in progesterone during the estral phase when female is receptive. Following this, the level of estradiol fluctuates and there is a lengthy dioestrous phase during which the female is pregnant. Pseudo-pregnancy often occurs in canids that have ovulated but failed to conceive. A period of anestrus follows pregnancy or pseudo-pregnancy, there being only one oestral period during each breeding season. Small and medium-sized canids mostly have a gestation of 50 to 60 days, while larger species average 60 to 65 days. The time of year in which the breeding season occurs is related to the length of day, as has been shown for several species that have been moved across the equator and experiences a six-month shift of phase. Domestic dogs and certain small canids in captivity may come into oestrus more often, perhaps because the photoperiod stimulus breaks down under conditions of artificial lighting. Canids have an oestrus period of 1 to 20 days, lasting one week in most species. The size of a litter varies, with from one to 16 or more pups being born. The young are born small, blind and helpless and require a long period of parental care. They are kept in a den, most often dug into the ground, for warmth and protection. 
When the young begin eating solid food, both parents, and often other pack members, bring food back for them from the hunt. This is most often vomited up from the adult's stomach. Where such pack involvement in the feeding of the litter occurs, the breeding success rate is higher than is the case where females split from the group and rear their pups in isolation. Young canids may take a year to mature and learn the skills they need to survive. In some species, such as the African wild dog, male offspring usually remain in the natal pack, while females disperse as a group and join another small group of the opposite sex to form a new pack. Canids and humans One canid, the domestic dog, entered into a partnership with humans a long time ago. The dog was the first domesticated species. The archaeological record shows the first undisputed dog remains buried beside humans 14,700 years ago, with disputed remains occurring 36,000 years ago. These dates imply that the earliest dogs arose in the time of human hunter-gatherers and not agriculturists. The fact that wolves are pack animals with cooperative social structures may have been the reason that the relationship developed. Humans benefited from the canid's loyalty, cooperation, teamwork, alertness and tracking abilities, while the wolf may have benefited from the use of weapons to tackle larger prey and the sharing of food. Humans and dogs may have evolved together. Among canids, only the gray wolf has widely been known to prey on humans. Nonetheless, at least two records of coyotes killing humans have been published, and at least two other reports of golden jackals killing children. Human beings have trapped and hunted some canid species for their fur and some, especially the gray wolf, the coyote and the red fox, for sport. Canids such as the dhole are now endangered in the wild because of persecution, habitat loss, a depletion of ungulate prey species and transmission of diseases from domestic dogs.
Chlorophyta
Chlorophyta is a division of green algae informally called chlorophytes. Description Chlorophytes are eukaryotic organisms composed of cells with a variety of coverings or walls, and usually a single green chloroplast in each cell. They are structurally diverse: most groups of chlorophytes are unicellular, such as the earliest-diverging prasinophytes, but in two major classes (Chlorophyceae and Ulvophyceae) there is an evolutionary trend toward various types of complex colonies and even multicellularity. Chloroplasts Chlorophyte cells contain green chloroplasts surrounded by a double-membrane envelope. These contain chlorophylls a and b, and the carotenoids carotene, lutein, zeaxanthin, antheraxanthin, violaxanthin, and neoxanthin, which are also present in the leaves of land plants. Some special carotenoids are present in certain groups, or are synthesized under specific environmental factors, such as siphonaxanthin, prasinoxanthin, echinenone, canthaxanthin, loroxanthin, and astaxanthin. They accumulate carotenoids under nitrogen deficiency, high irradiance of sunlight, or high salinity. In addition, they store starch inside the chloroplast as carbohydrate reserves. The thylakoids can appear single or in stacks. In contrast to other divisions of algae such as Ochrophyta, chlorophytes lack a chloroplast endoplasmic reticulum. Flagellar apparatus Chlorophytes often form flagellate cells that generally have two or four flagella of equal length, although in prasinophytes heteromorphic (i.e. differently shaped) flagella are common because different stages of flagellar maturation are displayed in the same cell. Flagella have been independently lost in some groups, such as the Chlorococcales. Flagellate chlorophyte cells have symmetrical cross-shaped ('cruciate') root systems, in which ciliary rootlets with a variable, typically larger number of microtubules alternate with rootlets composed of just two microtubules; this forms an arrangement known as the "X-2-X-2" arrangement, unique to chlorophytes. They are also distinguished from streptophytes by the place where their flagella are inserted: directly at the cell apex, whereas streptophyte flagella are inserted at the sides of the cell apex (sub-apically). Below the flagellar apparatus of prasinophytes are rhizoplasts, contractile muscle-like structures that sometimes connect with the chloroplast or the cell membrane. In core chlorophytes, this structure connects directly with the surface of the nucleus. The surface of flagella lacks microtubular hairs, but some genera present scales or fibrillar hairs. The earliest-branching groups have flagella often covered in at least one layer of scales, if not naked. Metabolism Chlorophytes and streptophytes differ in the enzymes and organelles involved in photorespiration. Chlorophyte algae use a dehydrogenase inside the mitochondria to process glycolate during photorespiration. In contrast, streptophytes (including land plants) use peroxisomes that contain glycolate oxidase, which converts glycolate to glyoxylate, and the hydrogen peroxide created as a byproduct is reduced by catalases located in the same organelles. Reproduction and life cycle Asexual reproduction is widely observed in chlorophytes. Among core chlorophytes, unicellular groups can reproduce asexually through autospores, wall-less zoospores, fragmentation, plain cell division, and exceptionally budding.
Multicellular thalli can reproduce asexually through motile zoospores, non-motile aplanospores, autospores, filament fragmentation, differentiated resting cells, and even unmated gametes. Colonial groups can reproduce asexually through the formation of autocolonies, where each cell divides to form a colony with the same number and arrangement of cells as the parent colony. Many chlorophytes exclusively conduct asexual reproduction, but some display sexual reproduction, which may be isogamous (i.e., gametes of both sexes are identical), anisogamous (gametes are different) or oogamous (gametes are sperm and egg cells), with an evolutionary tendency towards oogamy. Their gametes are usually specialized cells differentiated from vegetative cells, although in unicellular Volvocales the vegetative cells can function simultaneously as gametes. Most chlorophytes have a diplontic life cycle (also known as zygotic), where the gametes fuse into a zygote which germinates, grows and eventually undergoes meiosis to produce haploid spores (gametes), similarly to ochrophytes and animals. Some exceptions display a haplodiplontic life cycle, where there is an alternation of generations, similarly to land plants. These generations can be isomorphic (i.e., of similar shape and size) or heteromorphic. The formation of reproductive cells usually does not occur in specialized cells, but some Ulvophyceae have specialized reproductive structures: gametangia, to produce gametes, and sporangia, to produce spores. The earliest-diverging unicellular chlorophytes (prasinophytes) produce walled resistant stages called cysts or 'phycoma' stages before reproduction; in some groups the cysts are as large as 230 μm in diameter. To develop them, the flagellate cells form an inner wall by discharging mucilage vesicles to the outside, increase the level of lipids in the cytoplasm to enhance buoyancy, and finally develop an outer wall. Inside the cysts, the nucleus and cytoplasm undergo division into numerous flagellate cells that are released by rupturing the wall. In some species these daughter cells have been confirmed to be gametes; otherwise, sexual reproduction is unknown in prasinophytes. Ecology Free-living Chlorophytes are an important portion of the phytoplankton in both freshwater and marine habitats, fixating more than a billion tons of carbon every year. They also live as multicellular macroalgae, or seaweeds, settled along rocky ocean shores. Most species of Chlorophyta are aquatic, prevalent in both marine and freshwater environments. About 90% of all known species live in freshwater. Some species have adapted to a wide range of terrestrial environments. For example, Chlamydomonas nivalis lives on summer alpine snowfields, and Trentepohlia species, live attached to rocks or woody parts of trees. Several species have adapted to specialised and extreme environments, such as deserts, arctic environments, hypersaline habitats, marine deep waters, deep-sea hydrothermal vents and habitats that experience extreme changes in temperature, light and salinity. Some groups, such as the Trentepohliales, are exclusively found on land. Symbionts Several species of Chlorophyta live in symbiosis with a diverse range of eukaryotes, including fungi (to form lichens), ciliates, forams, cnidarians and molluscs. Some species of Chlorophyta are heterotrophic, either free-living or parasitic. Others are mixotrophic bacterivores through phagocytosis. 
Two common species of the heterotrophic green alga Prototheca are pathogenic and can cause the disease protothecosis in humans and animals. With the exception of the three classes Ulvophyceae, Trebouxiophyceae and Chlorophyceae in the UTC clade, which show various degrees of multicellularity, all the Chlorophyta lineages are unicellular. Some members of the group form symbiotic relationships with protozoa, sponges, and cnidarians. Others form symbiotic relationships with fungi to form lichens, but the majority of species are free-living. All members of the clade have motile flagellated swimming cells. Monostroma kuroshiense, an edible green alga cultivated worldwide and most expensive among green algae, belongs to this group. Systematics Taxonomic history The first mention of Chlorophyta belongs to German botanist Heinrich Gottlieb Ludwig Reichenbach in his 1828 work Conspectus regni vegetabilis. Under this name, he grouped all algae, mosses ('musci') and ferns ('filices'), as well as some seed plants (Zamia and Cycas). This usage did not gain popularity. In 1914, Bohemian botanist Adolf Pascher modified the name to encompass exclusively green algae, that is, algae which contain chlorophylls a and b and store starch in their chloroplasts. Pascher established a scheme where Chlorophyta was composed of two groups: Chlorophyceae, which included algae now known as Chlorophyta, and Conjugatae, which are now known as Zygnematales and belong to the Streptophyta clade from which land plants evolved. During the 20th century, many different classification schemes for the Chlorophyta arose. The Smith system, published in 1938 by American botanist Gilbert Morgan Smith, distinguished two classes: Chlorophyceae, which contained all green algae (unicellular and multicellular) that did not grow through an apical cell; and Charophyceae, which contained only multicellular green algae that grew via an apical cell and had special sterile envelopes to protect the sex organs. With the advent of electron microscopy studies, botanists published various classification proposals based on finer cellular structures and phenomena, such as mitosis, cytokinesis, cytoskeleton, flagella and cell wall polysaccharides. British botanist proposed in 1971 a scheme which distinguishes Chlorophyta from other green algal divisions Charophyta, Prasinophyta and Euglenophyta. He included four classes of chlorophytes: Zygnemaphyceae, Oedogoniophyceae, Chlorophyceae and Bryopsidophyceae. Other proposals retained the Chlorophyta as containing all green algae, and varied from one another in the number of classes. For example, the 1984 proposal by Mattox & Stewart included five classes, while the 1985 proposal by Bold & Wynne included only two, and the 1995 proposal by Christiaan van den Hoek and coauthors included up to eleven classes. The modern usage of the name 'Chlorophyta' was established in 2004, when phycologists Lewis & McCourt firmly separated the chlorophytes from the streptophytes on the basis of molecular phylogenetics. All green algae that were more closely related to land plants than to chlorophytes were grouped as a paraphyletic division Charophyta. Within the green algae, the earliest-branching lineages were grouped under the informal name of "prasinophytes", and they were all believed to belong to the Chlorophyta clade. However, in 2020 a study recovered a new clade and division known as Prasinodermophyta, which contains two prasinophyte lineages previously considered chlorophytes. 
Below is a cladogram representing the current state of green algal classification: Classification Currently eleven chlorophyte classes are accepted, here presented in alphabetical order with some of their characteristics and biodiversity: Chlorodendrophyceae (60 species, 15 extinct): unicellular flagellates (monadoids) surrounded by an outer cell covering or theca of organic extracellular scales composed of proteins and ketosugars. Some of these scales make up hair-like structures. Capable of asexual reproduction through cell division inside the theca. No sexual reproduction has been described. Each cell contains a single chloroplast and exhibits two flagella. Present in marine and freshwater habitats. Chlorophyceae (3,974 species): either unicellular monadoids (flagellated) or coccoids (without flagella) living solitary or in varied colonial forms (including coenobial), or multicellular filamentous (branch-like) thalli that may be ramified, or foliose (leaf-like) thalli. Cells are surrounded by a crystalline covering composed of glycoproteins abundant in glycine and hydroxyproline, as well as pectins, arabinogalactan proteins, and extensin. They exhibit a haplontic life cycle with isogamy, anisogamy or oogamy. They are capable of asexual reproduction through flagellated zoospores, aplanospores, or autospores. Each cell contains a single chloroplast, a variable number of pyrenoids (including lack thereof), and from one to hundreds of flagella without mastigonemes. Present in marine, freshwater and terrestrial habitats. Chloropicophyceae (8 species): unicellular solitary coccoids. Cells are surrounded by a multi-layered cell wall. No sexual or asexual reproduction has been described. Each cell contains a single chloroplast with astaxanthin and loroxanthin, and lacks pyrenoids or flagella. They are exclusively marine. Chuariophyceae (3 extinct species): exclusively fossil group containing carbonaceous megafossils found in Ediacaran rocks, such as Tawuia. Mamiellophyceae (25 species): unicellular solitary monadoids. Cells are naked or covered by one or two layers of flat scales, mainly with spiderweb-like or reticulate ornamentation. Each cell contains one or rarely two chloroplasts, almost always with prasinoxanthin; two equal or unequal flagella, or just one flagellum, or lacking any flagella. If flagella are present, they can be either smooth or covered in scales in the same manner as the cells. Present in marine and freshwater habitats. Nephroselmidophyceae (29 species): unicellular monadoids. Cells are covered by scales. They are capable of sexual reproduction through hologamy (fusion of entire cells), and of asexual reproduction through binary fission. Each cell contains a single cloroplast, a pyrenoid, and two flagella covered by scales. Present in marine and freshwater habitats. Pedinophyceae (24 species): unicellular asymmetrical monadoids that undergo a coccoid palmelloid phase covered by mucilage. Cells lack extracellular scales, but in rare cases are covered on the posterior side by a theca. Each cell contains a single chloroplast, a pyrenoid, and a single flagellum usually covered in mastigonemes. Present in marine, freshwater and terrestrial habitats. Picocystophyceae (1 species): unicellular coccoids, ovoid and trilobed in shape. Cells are surrounded by a multi-layered cell wall of poly-arabinose, mannose, galactose and glucose. No sexual reproduction has been described. 
They are capable of asexual reproduction through autosporulation, resulting in two or rarely four daughter cells. Each cell contains a single bilobed chloroplast with diatoxanthin and monadoxanthin, without any pyrenoid or flagella. Present in saline lakes. Pyramimonadophyceae (166 species, 59 extinct): unicellular monadoids or coccoids. Cells are covered by two or more layers of organic scales. No sexual reproduction has been described, but some cells with only one flagellum have been interpreted as potential gametes. Asexual reproduction has only been observed in the coccoid forms, via zoospores. Each cell contains a single chloroplast, a pyrenoid, and between 4 and 16 flagella. The flagella are covered in at least two layers of organic scales: a bottom layer of pentagonal scales organized in 24 rows, and a top layer of limuloid scales distributed in 11 rows. They are exclusively marine. Trebouxiophyceae (926 species, 1 extinct): unicellular monadoids occasionally without flagella, or colonial, or ramified filamentous thalli, or living as the photobionts of lichen. Cells are covered by a cell wall of cellulose, algaenans, and β-galactofuranane. No sexual reproduction has been described with the exception of some observations of gamete fusion and presence of meiotic genes. They are capable of asexual reproduction through autospores or zoospores. Each cell contains a single chloroplast, a pyrenoid, and one or two pairs of smooth flagella. They are present in marine, freshwater and terrestrial habitats. Ulvophyceae (2,695 species, 990 extinct): macroscopic thalli, either filamentous (which may be ramified) or foliose (composed of monostromatic or distromatic layers) or even compact tubular forms, generally multinucleate. Cells surrounded by a cell wall that may be calcified, composed of cellulose, β-manane, β-xilane, sulphated or piruvilated polysaccharides or sulphated ramnogalacturonanes, arabinogalactan proteins, and extensin. They exhibit a haplodiplontic life cycle where the alternating generations can be isomorphic or heteromorphic. They reproduce asexually via zoospores that may be covered in scales. Each cell contains a single chloroplast, and one or two pairs of flagella without mastigonemes but covered in scales. They are present in marine, freshwater and terrestrial habitats. Evolution In February 2020, the fossilized remains of a green alga, named Proterocladus antiquus were discovered in the northern province of Liaoning, China. At around a billion years old, it is believed to be one of the oldest examples of a multicellular chlorophyte. It is currently classified as a member of order Siphonocladales, class Ulvophyceae. In 2023, a study calculated the molecular age of green algae as calibrated by this fossil. The study estimated the origin of Chlorophyta within the Mesoproterozoic era, at around 2.04–1.23 billion years ago. Usage Model organisms Among chlorophytes, a small group known as the volvocine green algae is being researched to understand the origins of cell differentiation and multicellularity. In particular, the unicellular flagellate Chlamydomonas reinhardtii and the colonial organism Volvox carteri are object of interest due to sharing homologous genes that in Volvox are directly involved in the development of two different cell types with full division of labor between swimming and reproduction, whereas in Chlamydomonas only one cell type exists that can function as a gamete. 
Other volvocine species with characters intermediate between these two, namely Gonium pectorale, Pandorina morum, Eudorina elegans and Pleodorina starrii, are studied to further understand the transition towards the cellular division of labor. Industrial uses Chlorophyte microalgae are a valuable source of biofuel and of various chemicals and products manufactured in industrial quantities, such as carotenoids, vitamins and unsaturated fatty acids. The genus Botryococcus is an efficient producer of hydrocarbons, which are converted into biodiesel. Various genera (Chlorella, Scenedesmus, Haematococcus, Dunaliella and Tetraselmis) are used as cellular factories of biomass, lipids and various vitamins for human or animal consumption, and even for use as pharmaceuticals. Some of their pigments are employed in cosmetics.
Biology and health sciences
Green algae
null
6776
https://en.wikipedia.org/wiki/Capybara
Capybara
The capybara or greater capybara (Hydrochoerus hydrochaeris) is the largest living rodent, native to South America. It is a member of the genus Hydrochoerus. The only other extant member is the lesser capybara (Hydrochoerus isthmius). Its close relatives include guinea pigs and rock cavies, and it is more distantly related to the agouti, the chinchilla, and the nutria. The capybara inhabits savannas and dense forests, and lives near bodies of water. It is a highly social species and can be found in groups as large as 100 individuals, but usually live in groups of 10–20 individuals. The capybara is hunted for its meat and hide and also for grease from its thick fatty skin. Etymology Its common name is derived from Tupi , a complex agglutination of (leaf) + (slender) + (eat) + (a suffix for agent nouns), meaning "one who eats slender leaves", or "grass-eater". The genus name, hydrochoerus, comes from Greek ( "water") and ( "pig, hog") and the species name, hydrochaeris, comes from Greek ( "water") and ( "feel happy, enjoy"). Classification and phylogeny The capybara and the lesser capybara both belong to the subfamily Hydrochoerinae along with the rock cavies. The living capybaras and their extinct relatives were previously classified in their own family Hydrochoeridae. Since 2002, molecular phylogenetic studies have recognized a close relationship between Hydrochoerus and Kerodon, the rock cavies, supporting placement of both genera in a subfamily of Caviidae. Paleontological classifications previously used Hydrochoeridae for all capybaras, while using Hydrochoerinae for the living genus and its closest fossil relatives, such as Neochoerus, but more recently have adopted the classification of Hydrochoerinae within Caviidae. The taxonomy of fossil hydrochoerines is also in a state of flux. In recent years, the diversity of fossil hydrochoerines has been substantially reduced. This is largely due to the recognition that capybara molar teeth show strong variation in shape over the life of an individual. In one instance, material once referred to four genera and seven species on the basis of differences in molar shape is now thought to represent differently aged individuals of a single species, Cardiatherium paranense. Among fossil species, the name "capybara" can refer to the many species of Hydrochoerinae that are more closely related to the modern Hydrochoerus than to the "cardiomyine" rodents like Cardiomys. The fossil genera Cardiatherium, Phugatherium, Hydrochoeropsis, and Neochoerus are all capybaras under that concept. Description The capybara has a heavy, barrel-shaped body and short head, with reddish-brown fur on the upper part of its body that turns yellowish-brown underneath. Its sweat glands can be found in the surface of the hairy portions of its skin, an unusual trait among rodents. The animal lacks down hair, and its guard hair differs little from over hair. Adult capybaras grow to in length, stand tall at the withers, and typically weigh , with an average in the Venezuelan llanos of . Females are slightly heavier than males. The top recorded weights are for a wild female from Brazil and for a wild male from Uruguay. Also, an 81 kg individual was reported in São Paulo in 2001 or 2002. The dental formula is . Capybaras have slightly webbed feet and vestigial tails. Their hind legs are slightly longer than their forelegs; they have three toes on their rear feet and four toes on their front feet. 
Their muzzles are blunt, with nostrils, and the eyes and ears are near the top of their heads. Its karyotype has 2n = 66 and FN = 102, meaning it has 66 chromosomes with a total of 102 arms. Ecology Capybaras are semiaquatic mammals found throughout all countries of South America except Chile. They live in densely forested areas near bodies of water, such as lakes, rivers, swamps, ponds, and marshes, as well as flooded savannah and along rivers in the tropical rainforest. They are superb swimmers and can hold their breath underwater for up to five minutes at a time. Capybara have flourished in cattle ranches. They roam in home ranges averaging in high-density populations. Many escapees from captivity can also be found in similar watery habitats around the world. Sightings are fairly common in Florida, although a breeding population has not yet been confirmed. In 2011, one specimen was spotted on the Central Coast of California. These escaped populations occur in areas where prehistoric capybaras inhabited; late Pleistocene capybaras inhabited Florida and Hydrochoerus hesperotiganites in California and Hydrochoerus gaylordi in Grenada, and feral capybaras in North America may actually fill the ecological niche of the Pleistocene species. Diet and predation Capybaras are herbivores, grazing mainly on grasses and aquatic plants, as well as fruit and tree bark. They are very selective feeders and feed on the leaves of one species and disregard other species surrounding it. They eat a greater variety of plants during the dry season, as fewer plants are available. While they eat grass during the wet season, they have to switch to more abundant reeds during the dry season. Plants that capybaras eat during the summer lose their nutritional value in the winter, so they are not consumed at that time. The capybara's jaw hinge is not perpendicular, so they chew food by grinding back-and-forth rather than side-to-side. Capybaras are autocoprophagous, meaning they eat their own feces as a source of bacterial gut flora, to help digest the cellulose in the grass that forms their normal diet, and to extract the maximum protein and vitamins from their food. They also regurgitate food to masticate again, similar to cud-chewing by cattle. Like other rodents, a capybara's front teeth grow continually to compensate for the constant wear from eating grasses; their cheek teeth also grow continuously. Like its relative the guinea pig, the capybara does not have the capacity to synthesize vitamin C, and capybaras not supplemented with vitamin C in captivity have been reported to develop gum disease as a sign of scurvy. The maximum lifespan of the capybara is 8 to 10 years, but in the wild capybaras usually do not live longer than four years because of predation from South American big cats such as jaguars and cougars and from non-mammalian predators such as harpy eagles, caimans, green anacondas and piranhas . Social organization Capybaras are known to be gregarious. While they sometimes live solitarily, they are more commonly found in groups of around 10–20 individuals, with two to four adult males, four to seven adult females, and the remainder juveniles. Capybara groups can consist of as many as 50 or 100 individuals during the dry season when the animals gather around available water sources. Males establish social bonds, dominance, or general group consensus. They can make dog-like barks when threatened or when females are herding young. 
Capybaras have two types of scent glands: a morrillo, located on the snout, and anal glands. Both sexes have these glands, but males have much larger morrillos and use their anal glands more frequently. The anal glands of males are also lined with detachable hairs. A crystalline form of scent secretion is coated on these hairs and is released when in contact with objects such as plants. These hairs have a longer-lasting scent mark and are tasted by other capybaras. Capybaras scent-mark by rubbing their morrillos on objects, or by walking over scrub and marking it with their anal glands. Capybaras can spread their scent farther by urinating; however, females usually mark without urinating and scent-mark less frequently than males overall. Females mark more often during the wet season when they are in estrus. In addition to objects, males also scent-mark females. Reproduction When in estrus, the female's scent changes subtly and nearby males begin pursuit. In addition, a female alerts males she is in estrus by whistling through her nose. During mating, the female has the advantage and mating choice. Capybaras mate only in water, and if a female does not want to mate with a certain male, she either submerges or leaves the water. Dominant males are highly protective of the females, but they usually cannot prevent some of the subordinates from copulating. The larger the group, the harder it is for the male to watch all the females. Dominant males secure significantly more matings than each subordinate, but subordinate males, as a class, are responsible for more matings than each dominant male. The lifespan of the capybara's sperm is longer than that of other rodents. Capybara gestation lasts 130–150 days and produces a litter of four young on average, though litters may range from one to eight. Birth is on land and the female rejoins the group within a few hours of delivering the newborn capybaras, which join the group as soon as they are mobile. Within a week, the young can eat grass, but continue to suckle—from any female in the group—until weaned around 16 weeks. The young form a group within the main group. Alloparenting has been observed in this species. Breeding peaks between April and May in Venezuela and between October and November in Mato Grosso, Brazil. Activities Though quite agile on land, capybaras are equally at home in the water. They are excellent swimmers, and can remain completely submerged for up to five minutes, an ability they use to evade predators. Capybaras can sleep in water, keeping only their noses out. As temperatures increase during the day, they wallow in water and then graze during the late afternoon and early evening. They also spend time wallowing in mud. They rest around midnight and then continue to graze before dawn. Communication Capybaras communicate using barks, chirps, whistles, huffs, and purrs. Conservation and human interaction Capybaras are not considered a threatened species; their population is stable throughout most of their South American range, though in some areas hunting has reduced their numbers. Capybaras are hunted for their meat and pelts in some areas, and otherwise killed by humans who see their grazing as competition for livestock. In some areas, they are farmed, which has the effect of ensuring the wetland habitats are protected. Their survival is aided by their ability to breed rapidly. Capybaras have adapted well to urbanization in South America.
They can be found in many areas in zoos and parks, and may live for 12 years in captivity, more than double their wild lifespan. Capybaras are docile and usually allow humans to pet and hand-feed them, but physical contact is normally discouraged, as their ticks can be vectors of Rocky Mountain spotted fever. The European Association of Zoos and Aquaria asked Drusillas Park in Alfriston, Sussex, England, to keep the studbook for capybaras, to monitor captive populations in Europe. The studbook includes information about all births, deaths and movements of capybaras, as well as how they are related. Capybaras are farmed for meat and skins in South America. The meat is considered unsuitable to eat in some areas, while in other areas it is considered an important source of protein. In parts of South America, especially in Venezuela, capybara meat is popular during Lent and Holy Week because the Catholic Church previously issued a special dispensation allowing it to be eaten at a time when other meats are generally forbidden. After several attempts, a papal bull permitting the consumption of capybara during Lent was obtained in 1784. There is a widespread perception in Venezuela that the consumption of capybaras is exclusive to rural people. In August 2021, Argentine and international media reported that capybaras had been disturbing residents of Nordelta, an affluent gated community north of Buenos Aires built atop the local capybaras' preexisting wetland habitat. This inspired social media users to jokingly adopt the capybara as a symbol of class struggle and communism. Brazilian Lyme-like borreliosis likely involves capybaras as reservoirs and Amblyomma and Rhipicephalus ticks as vectors.
Biology and health sciences
Rodents
null
9383513
https://en.wikipedia.org/wiki/Pauli%20equation
Pauli equation
In quantum mechanics, the Pauli equation or Schrödinger–Pauli equation is the formulation of the Schrödinger equation for spin-1/2 particles, which takes into account the interaction of the particle's spin with an external electromagnetic field. It is the non-relativistic limit of the Dirac equation and can be used where particles are moving at speeds much less than the speed of light, so that relativistic effects can be neglected. It was formulated by Wolfgang Pauli in 1927. In its linearized form it is known as Lévy-Leblond equation. Equation For a particle of mass and electric charge , in an electromagnetic field described by the magnetic vector potential and the electric scalar potential , the Pauli equation reads: Here are the Pauli operators collected into a vector for convenience, and is the momentum operator in position representation. The state of the system, (written in Dirac notation), can be considered as a two-component spinor wavefunction, or a column vector (after choice of basis): . The Hamiltonian operator is a 2 × 2 matrix because of the Pauli operators. Substitution into the Schrödinger equation gives the Pauli equation. This Hamiltonian is similar to the classical Hamiltonian for a charged particle interacting with an electromagnetic field. See Lorentz force for details of this classical case. The kinetic energy term for a free particle in the absence of an electromagnetic field is just where is the kinetic momentum, while in the presence of an electromagnetic field it involves the minimal coupling , where now is the kinetic momentum and is the canonical momentum. The Pauli operators can be removed from the kinetic energy term using the Pauli vector identity: Note that unlike a vector, the differential operator has non-zero cross product with itself. This can be seen by considering the cross product applied to a scalar function : where is the magnetic field. For the full Pauli equation, one then obtains for which only a few analytic results are known, e.g., in the context of Landau quantization with homogenous magnetic fields or for an idealized, Coulomb-like, inhomogeneous magnetic field. Weak magnetic fields For the case of where the magnetic field is constant and homogenous, one may expand using the symmetric gauge , where is the position operator and A is now an operator. We obtain where is the particle angular momentum operator and we neglected terms in the magnetic field squared . Therefore, we obtain where is the spin of the particle. The factor 2 in front of the spin is known as the Dirac g-factor. The term in , is of the form which is the usual interaction between a magnetic moment and a magnetic field, like in the Zeeman effect. For an electron of charge in an isotropic constant magnetic field, one can further reduce the equation using the total angular momentum and Wigner-Eckart theorem. Thus we find where is the Bohr magneton and is the magnetic quantum number related to . The term is known as the Landé g-factor, and is given here by where is the orbital quantum number related to and is the total orbital quantum number related to . From Dirac equation The Pauli equation can be inferred from the non-relativistic limit of the Dirac equation, which is the relativistic quantum equation of motion for spin-1/2 particles. Derivation The Dirac equation can be written as: where and are two-component spinor, forming a bispinor. 
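A standard way of writing this two-component form explicitly (a sketch in the notation of the preceding sections, with the two spinor components labelled \psi_+ and \psi_- purely for illustration) is:

    i\hbar\,\frac{\partial}{\partial t}\begin{pmatrix}\psi_+\\ \psi_-\end{pmatrix}
      = c\,\boldsymbol{\sigma}\cdot(\hat{\mathbf{p}} - q\mathbf{A})\begin{pmatrix}\psi_-\\ \psi_+\end{pmatrix}
      + q\phi\begin{pmatrix}\psi_+\\ \psi_-\end{pmatrix}
      + mc^2\begin{pmatrix}\psi_+\\ -\psi_-\end{pmatrix}

Each of \psi_+ and \psi_- is itself a two-component spinor, so the full wavefunction is a four-component bispinor.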
Using the following ansatz: with two new spinors, the equation becomes In the non-relativistic limit, and the kinetic and electrostatic energies are small with respect to the rest energy , leading to the Lévy-Leblond equation. Thus Inserted into the upper component of the Dirac equation, we find the Pauli equation (general form): From a Foldy–Wouthuysen transformation The rigorous derivation of the Pauli equation follows from the Dirac equation in an external field by performing a Foldy–Wouthuysen transformation and keeping terms up to order . Similarly, higher-order corrections to the Pauli equation can be determined, giving rise to spin-orbit and Darwin interaction terms, when expanding up to order instead. Pauli coupling The Pauli equation is derived by requiring minimal coupling, which provides a g-factor g = 2. Most elementary particles have anomalous g-factors, different from 2. In the domain of relativistic quantum field theory, one defines a non-minimal coupling, sometimes called Pauli coupling, in order to add an anomalous factor where is the four-momentum operator, is the electromagnetic four-potential, is proportional to the anomalous magnetic dipole moment, is the electromagnetic tensor, and are the Lorentzian spin matrices and the commutator of the gamma matrices . In the context of non-relativistic quantum mechanics, instead of working with the Schrödinger equation, Pauli coupling is equivalent to using the Pauli equation (or postulating Zeeman energy) for an arbitrary g-factor.
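For reference, in the notation used throughout this article (mass m, charge q, vector potential A, scalar potential φ, Pauli vector σ, magnetic field B, orbital angular momentum L and spin S), the results described in words above take their standard textbook forms; this is only a compact summary, introducing no conventions beyond the symbols just listed. The Pauli equation reads

    \left[\frac{1}{2m}\big(\boldsymbol{\sigma}\cdot(\hat{\mathbf{p}} - q\mathbf{A})\big)^2 + q\phi\right]|\psi\rangle
      = i\hbar\,\frac{\partial}{\partial t}|\psi\rangle ,

the Pauli vector identity reduces the kinetic term to

    \big(\boldsymbol{\sigma}\cdot(\hat{\mathbf{p}} - q\mathbf{A})\big)^2
      = (\hat{\mathbf{p}} - q\mathbf{A})^2 - q\hbar\,\boldsymbol{\sigma}\cdot\mathbf{B} ,

and in a weak, homogeneous magnetic field (neglecting the term quadratic in B) the equation becomes

    i\hbar\,\frac{\partial}{\partial t}|\psi\rangle
      = \left[\frac{1}{2m}\Big(\hat{\mathbf{p}}^2 - q(\mathbf{L} + 2\mathbf{S})\cdot\mathbf{B}\Big) + q\phi\right]|\psi\rangle ,

where the factor of 2 multiplying the spin is the Dirac g-factor mentioned above.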
Physical sciences
Quantum mechanics
Physics
9387505
https://en.wikipedia.org/wiki/Unenlagiinae
Unenlagiinae
Unenlagiinae is a subfamily of long-snouted paravian theropods. They are traditionally considered to be members of Dromaeosauridae, though some authors place them in their own family, Unenlagiidae, sometimes alongside the subfamily Halszkaraptorinae. Definitive members are known from South America, though some researchers include taxa from other continents within this subfamily based on phylogenetic analyses. Description Most unenlagiines have been discovered in Argentina. The largest was Austroraptor, which measured up to 5–6 m (16.4–19.7 ft) in length, making it also one of the largest dromaeosaurids. The subfamily is distinguished from other dromaeosaurids by a tail stiffened by lengthy chevrons and superior processes, a reduced second pedal ungual, a posteriorly oriented pubis, and a very elongated snout. Unenlagiines also had elongated, slender hindlimbs with a subarctometatarsalian metatarsus, characterized by metatarsal III being pinched at its upper end. Their anatomy, distinct from that of Laurasian dromaeosaurids, was likely a consequence of the breakup of Pangaea into Gondwana and Laurasia, where the geological isolation of unenlagiines from their relatives resulted in allopatric speciation. Classification During the description of Halszkaraptor in 2017, Cau et al. published a phylogenetic analysis of the Dromaeosauridae in which members of the Unenlagiinae are classified as follows: In 2019, during the description of Hesperornithoides, many paravian groups were examined for the inclusion of this new genus, including the Unenlagiinae. The analysis resulted in the inclusion of Rahonavis, Pyroraptor and Dakotaraptor in the Unenlagiinae. Paleobiology A study performed by Gianechini and colleagues in 2020 indicates that the unenlagiine dromaeosaurids of Gondwana had different hunting specializations from the eudromaeosaurs of Laurasia. The shorter second phalanx in the second digit of the foot of eudromaeosaurs allowed that digit to generate greater force, which, combined with a shorter and wider metatarsus and more markedly hinge-like articular surfaces of the metatarsals and phalanges, probably allowed eudromaeosaurs to exert a greater gripping strength than unenlagiines, making them more efficient at subduing and killing large prey. In comparison, unenlagiine dromaeosaurids possess a longer and more slender subarctometatarsus and less well-marked hinge joints, traits that probably gave them greater cursorial capacity and allowed greater speed than eudromaeosaurs. Additionally, the longer second phalanx of the second digit allowed unenlagiines to move that digit quickly, suited to hunting smaller, faster prey. These differences in locomotor and predatory specializations may have been a key factor shaping the evolutionary paths of the two groups of dromaeosaurs in the northern and southern hemispheres, respectively.
Biology and health sciences
Theropods
Animals
9387918
https://en.wikipedia.org/wiki/Human%20nose
Human nose
The human nose is the first organ of the respiratory system. It is also the principal organ in the olfactory system. The shape of the nose is determined by the nasal bones and the nasal cartilages, including the nasal septum, which separates the nostrils and divides the nasal cavity into two. The nose has an important function in breathing. The nasal mucosa lining the nasal cavity and the paranasal sinuses carries out the necessary conditioning of inhaled air by warming and moistening it. Nasal conchae, shell-like bones in the walls of the cavities, play a major part in this process. Filtering of the air by nasal hair in the nostrils prevents large particles from entering the lungs. Sneezing is a reflex to expel unwanted particles from the nose that irritate the mucosal lining. Sneezing can transmit infections, because aerosols are created in which the droplets can harbour pathogens. Another major function of the nose is olfaction, the sense of smell. The area of olfactory epithelium, in the upper nasal cavity, contains specialised olfactory cells responsible for this function. The nose is also involved in the function of speech. Nasal vowels and nasal consonants are produced in the process of nasalisation. The hollow cavities of the paranasal sinuses act as sound chambers that modify and amplify speech and other vocal sounds. There are several plastic surgery procedures that can be done on the nose, known as rhinoplasties available to correct various structural defects or to change the shape of the nose. Defects may be congenital, or result from nasal disorders or from trauma. These procedures are a type of reconstructive surgery. Elective procedures to change a nose shape are a type of cosmetic surgery. Structure Several bones and cartilages make up the bony-cartilaginous framework of the nose, and the internal structure. The nose is also made up of types of soft tissue such as skin, epithelia, mucous membrane, muscles, nerves, and blood vessels. In the skin there are sebaceous glands, and in the mucous membrane there are nasal glands. The bones and cartilages provide strong protection for the internal structures of the nose. There are several muscles that are involved in movements of the nose. The arrangement of the cartilages allows flexibility through muscle control to enable airflow to be modified. Bones The bony structure of the nose is provided by the maxilla, frontal bone, and a number of smaller bones. The topmost bony part of the nose is formed by the nasal part of the frontal bone, which lies between the brow ridges, and ends in a serrated nasal notch. A left and a right nasal bone join with the nasal part of the frontal bone at either side; and these at the side with the small lacrimal bones and the frontal process of each maxilla. The internal roof of the nasal cavity is composed of the horizontal, perforated cribriform plate of the ethmoid bone through which pass sensory fibres of the olfactory nerve. Below and behind the cribriform plate, sloping down at an angle, is the face of the sphenoid bone. The wall separating the two cavities of the nose, the nasal septum, is made up of bone inside and cartilage closer to the tip of the nose. The bony part is formed by the perpendicular plate of the ethmoid bone at the top, and the vomer bone below. The floor of the nose is made up of the incisive bone and the horizontal plates of the palatine bones, and this makes up the hard palate of the roof of the mouth. 
The two horizontal plates join at the midline and form the posterior nasal spine that gives attachment to the musculus uvulae in the uvula. The two maxilla bones join at the base of the nose at the lower nasal midline between the nostrils, and at the top of the philtrum to form the anterior nasal spine. This thin projection of bone holds the cartilaginous center of the nose. It is also an important cephalometric landmark. Cartilages The nasal cartilages are the septal, lateral, major alar, and minor alar cartilages. The major and minor cartilages are also known as the greater and lesser alar cartilages. There is a narrow strip of cartilage called the vomeronasal cartilage that lies between the vomer and the septal cartilage. The septal nasal cartilage, extends from the nasal bones in the midline, to the bony part of the septum in the midline, posteriorly. It then passes along the floor of the nasal cavity. The septum is quadrangular–the upper half is attached to the two lateral nasal cartilages, which are fused to the dorsal septum in the midline. The septum is laterally attached, with loose ligaments, to the bony margin of the anterior nasal aperture, while the inferior ends of the lateral cartilages are free (unattached). The three or four minor alar cartilages are adjacent to the lateral cartilages, held in the connective tissue membrane, that connects the lateral cartilages to the frontal process of the maxilla. The nasal bones in the upper part of the nose are joined by the midline internasal suture. They join with the septal cartilage at a junction known as the rhinion. The rhinion is the midline junction where the nasal bone meets the septal cartilage. From the rhinion to the apex, or tip, the framework is of cartilage. The major alar cartilages are thin, U-shaped plates of cartilage on each side of the nose that form the lateral and medial walls of the vestibule, known as the medial and lateral crura. The medial crura are attached to the septal cartilage, forming fleshy parts at the front of the nostrils on each side of the septum, called the medial crural footpods. The medial crura meet at the midline below the end of the septum to form the columella and lobule. The lobule contains the tip of the nose and its base contains the nostrils. At the peaks of the folds of the medial crura, they form the alar domes the tip-defining points of the nose, separated by a notch. They then fold outwards, above and to the side of the nostrils forming the lateral crura. The major alar cartilages are freely moveable and can respond to muscles to either open or constrict the nostrils. There is a reinforcing structure known as the nasal scroll that resists internal collapse from airflow pressure generated by normal breathing. This structure is formed by the junction between the lateral and major cartilages. Their edges interlock by one scrolling upwards and one scrolling inwards. Muscles The muscles of the nose are a subgroup of the facial muscles. They are involved in respiration and facial expression. The muscles of the nose include the procerus, nasalis, depressor septi nasi, levator labii superioris alaeque nasi, and the orbicularis oris of the mouth. As are all of the facial muscles, the muscles of the nose are innervated by the facial nerve and its branches. Although each muscle is independent, the muscles of the nose form a continuous layer with connections between all the components of the muscles and ligaments, in the nasal part of a superficial muscular aponeurotic system (SMAS). 
The SMAS is continuous from the nasofrontal process to the nasal tip. It divides at level of the nasal valve into superficial and deep layers, each layer having medial and lateral components. The procerus muscle produces wrinkling over the bridge of the nose, and is active in concentration and frowning. It is a prime target for Botox procedures in the forehead to remove the lines between the eyes. The nasalis muscle consists of two main parts: a transverse part called the compressor naris, and an alar part termed the dilator naris. The compressor naris muscle compresses the nostrils and may completely close them. The alar part, the dilator naris mainly consists of the dilator naris posterior, and a much smaller dilator naris anterior, and this muscle flares the nostrils. The dilator naris helps to form the upper ridge of the philtrum. The anterior, and the posterior dilator naris, (the alar part of the nasalis muscle), give support to the nasal valves. The depressor septi nasi may sometimes be absent or rudimentary. The depressor septi pulls the columella, the septum, and the tip of the nose downwards. At the start of inspiration, this muscle tenses the nasal septum and with the dilator naris widens the nostrils. The levator labii superioris alaeque nasi divides into a medial and a lateral slip. The medial slip blends into the perichondrium of the major alar cartilage and its overlying skin. The lateral slip blends at the side of the upper lip with the levator labii superioris, and with the orbicularis oris. The lateral slip raises the upper lip and deepens and increases the curve above the nasolabial furrow. The medial slip pulls the lateral crus upwards and modifies the curve of the furrow around the alae, and dilates the nostrils. Soft tissue The skin of the nose varies in thickness along its length. From the glabella to the bridge (the nasofrontal angle), the skin is thick, fairly flexible, and mobile. It tapers to the bridge where it is thinnest and least flexible as it is closest to the underlying bone. From the bridge until the tip of the nose the skin is thin. The tip is covered in skin that is as thick as the top section, and has many large sebaceous glands. The thickness of the skin varies but is still separated from the underlying bones and cartilage by four layers – a superficial fatty layer; a fibromuscular layer continued from the SMAS; a deep fatty layer, and the periosteum. Other areas of soft tissue are found where there is no support from cartilage; these include an area around the sides of the septum – the paraseptal area – an area around the lateral cartilages, an area at the top of the nostril, and an area in the alae. External nose The nasal root is the top of the nose that attaches the nose to the forehead. The nasal root is above the bridge and below the glabella, forming an indentation known as the nasion at the frontonasal suture where the frontal bone meets the nasal bones. The nasal dorsum also known as the nasal ridge is the border between the root and the tip of the nose, which in profile can be variously shaped. The ala of the nose (ala nasi, "wing of the nose"; plural alae) is the lower lateral surface of the external nose, shaped by the alar cartilage and covered in dense connective tissue. The alae flare out to form a rounded eminence around the nostril. Sexual dimorphism is evident in the larger nose of the male. This is due to the increased testosterone that thickens the brow ridge and the bridge of the nose making it wider. 
Differences in the symmetry of the nose have been noted in studies. Asymmetry is predominantly seen in wider left-sided nasal and other facial features. Nasal cavity The nasal cavity is the large internal space of the nose, and is in two parts – the nasal vestibule and the nasal cavity proper. The nasal vestibule is the frontmost part of the nasal cavity, enclosed by cartilages. The vestibule is lined with skin, hair follicles, and a large number of sebaceous glands. A mucous ridge known as the limen nasi separates the vestibule from the rest of the nasal cavity and marks the change from the skin of the vestibule to the respiratory epithelium of the rest of the nasal cavity. This area is also known as a mucocutaneous junction and has a dense microvasculature. The nasal cavity is divided into two cavities by the nasal septum, and each is accessed by an external nostril. The division into two cavities enables the functioning of the nasal cycle that slows down the conditioning process of the inhaled air. At the back of the nasal cavity there are two openings, called choanae (also posterior nostrils), that give entrance to the nasopharynx, and rest of the respiratory tract. On the outer wall of each cavity are three shell-like bones called conchae, arranged as superior, middle and inferior nasal conchae. Below each concha is a corresponding superior, middle, and inferior nasal meatus, or passage. Sometimes when the superior concha is narrow, a fourth supreme nasal concha is present situated above and sharing the space with the superior concha. The term concha refers to the actual bone; when covered by soft tissue and mucosa, and functioning, a concha is termed a turbinate. Excessive moisture as tears collected in the lacrimal sac travel down the nasolacrimal ducts where they drain into the inferior meatus in the nasal cavity. Most of the nasal cavity and paranasal sinuses is lined with respiratory epithelium as nasal mucosa. In the roof of each cavity is an area of specialised olfactory epithelium. This region is about , covering the superior concha, the cribriform plate, and the nasal septum. The nasal cavity has a nasal valve area that includes an external nasal valve, and an internal nasal valve. The external nasal valve is bounded medially by the columella, laterally by the lower lateral nasal cartilage, and posteriorly by the nasal sill. The internal nasal valve is bounded laterally by the caudal border of the upper lateral cartilage, medially by the dorsal nasal septum, and inferiorly by the anterior border of the inferior turbinate. The internal nasal valve is the narrowest region of the nasal cavity and is the primary site of nasal resistance. The valves regulate the airflow and resistance. Air breathed in is forced to pass through the narrow internal nasal valve, and then expands as it moves into the nasal cavity. The sudden change in the speed and pressure of the airflow creates turbulence that allows optimum contact with the respiratory epithelium for the necessary warming, moisturising, and filtering. The turbulence also allows movement of the air to pass over the olfactory epithelium and transfer odour information. The angle of the valve between the septum and the sidewall needs to be sufficient for unobstructed airflow, and this is normally between 10 and 15 degrees. The borders of each nasal cavity are a roof, floor, medial wall (the septum), and lateral wall. 
The middle part of the roof of the nasal cavity is composed of the horizontal, perforated cribriform plate of the ethmoid bone, through which pass sensory fibres of the olfactory nerve into the cranial cavity. Paranasal sinuses The mucosa that lines the nasal cavity extends into its chambers, the paranasal sinuses. The nasal cavity and the paranasal sinuses are referred to as the sinonasal tract or sinonasal region, and its anatomy is recognised as being unique and complex. Four paired paranasal sinuses – the frontal sinus, the sphenoid sinus, the ethmoid sinus and the maxillary sinus drain into regions of the nasal cavity. The sinuses are air-filled extensions of the nasal cavity into the cranial bones. The frontal sinuses are located in the frontal bone; the sphenoidal sinuses in the sphenoid bone; the maxillary sinuses in the maxilla; and the ethmoidal sinuses in the ethmoid bone. A narrow opening called a sinus ostium from each of the paranasal sinuses allows drainage into the nasal cavity. The maxillary sinus is the largest of the sinuses and drains into the middle meatus. Most of the ostia open into the middle meatus and the anterior ethmoid, that together are termed the ostiomeatal complex. Adults have a high concentration of cilia in the ostia. The cilia in the sinuses beat towards the openings into the nasal cavity. The increased numbers of cilia and the narrowness of the sinus openings allow for an increased time for moisturising, and warming. Nose shape The shape of the nose varies widely due to differences in the nasal bone shapes and formation of the bridge of the nose. Anthropometric studies have importantly contributed to craniofacial surgery, and the nasal index is a recognised anthropometric index used in nasal surgery. Paul Topinard developed the nasal index as a method of classifying ethnic groups. The index is based on the ratio of the breadth of the nose to its height. The nasal dimensions are also used to classify nasal morphology into five types: Hyperleptorrhine is a very long, narrow nose with a nasal index of 40 to 55. Leptorrhine describes a long, narrow nose with an index of 55–70. Mesorrhine is a medium nose with an index of 70–85. Platyrrhine is a short, broad nose with an index of 85–99·9. The fifth type is the hyperplatyrrhine having an index of more than 100. Variations in nose size between ethnicities may be attributed to differing evolutionary adaptations to local temperatures and humidity. Other factors such as sexual selection may also account for ethnic differences in nose shape. Some deformities of the nose are named, such as the pug nose and the saddle nose. The pug nose is characterised by excess tissue from the apex that is out of proportion to the rest of the nose. A low and underdeveloped nasal bridge may also be evident. A saddle nose deformity involving the collapse of the bridge of the nose is mostly associated with trauma to the nose but can be caused by other conditions including leprosy. Werner syndrome, a condition associated with premature aging, causes a "bird-like" appearance due to pinching of the nose. Down syndrome commonly presents a small nose with a flattened nasal bridge. This can be due to the absence of one or both nasal bones, shortened nasal bones, or nasal bones that have not fused in the midline. Blood supply and drainage Supply The blood supply to the nose is provided by branches of the ophthalmic, maxillary, and facial arteries – branches of the carotid arteries. 
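As a worked illustration of the nasal index described under Nose shape above: the index is simply 100 × (nasal breadth ÷ nasal height), and the five morphology classes follow from its value. The sketch below is hypothetical helper code rather than part of any anatomical software; the function names are invented here, and because the published ranges share their endpoints (for example, 55 closes one class and opens the next), the boundary handling chosen is only one reasonable reading.

    # Topinard's nasal index: breadth of the nose relative to its height, times 100.
    def nasal_index(breadth_mm: float, height_mm: float) -> float:
        return 100.0 * breadth_mm / height_mm

    # Map an index value onto the five morphology classes listed under "Nose shape".
    def classify_nose(index: float) -> str:
        if index < 40:
            return "narrower than the hyperleptorrhine range"
        if index < 55:
            return "hyperleptorrhine"  # very long, narrow nose (index 40-55)
        if index < 70:
            return "leptorrhine"       # long, narrow nose (55-70)
        if index < 85:
            return "mesorrhine"        # medium nose (70-85)
        if index < 100:
            return "platyrrhine"       # short, broad nose (85-99.9)
        return "hyperplatyrrhine"      # index of more than 100

    # Example: a nose 38 mm wide and 50 mm high has an index of 76, i.e. mesorrhine.
    print(classify_nose(nasal_index(38.0, 50.0)))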
Branches of these arteries anastomose to form plexuses in and under the nasal mucosa. In the septal region Kiesselbach's plexus is a common site of nosebleeds. Branches of the ophthalmic artery – the anterior and posterior ethmoidal arteries supply the roof, upper bony septum, and ethmoidal and frontal sinuses. The anterior ethmoidal artery also helps to supply the lower septal cartilage. Another branch is the dorsal nasal artery a terminal branch that supplies the skin of the alae and dorsum. Branches of the maxillary artery include the greater palatine artery; the sphenopalatine artery and its branches – the posterior lateral nasal arteries and posterior septal nasal branches; the pharyngeal branch; and the infraorbital artery and its branches – the superior anterior and posterior alveolar arteries. The sphenopalatine artery and the ethmoid arteries supply the outer walls of the nasal cavity. There is additional supply from a branch of the facial artery – the superior labial artery. The sphenopalantine artery is the artery primarily responsible for supplying the nasal mucosa. The skin of the alae is supplied by the septal and lateral nasal branches of the facial artery. The skin of the outer parts of the alae and the dorsum of the nose are supplied by the dorsal nasal artery a branch of the ophthalmic artery, and the infraorbital branch of the maxillary arteries. Drainage Veins of the nose include the angular vein that drains the side of the nose, receiving lateral nasal veins from the alae. The angular vein joins with the superior labial vein. Some small veins from the dorsum of the nose drain to the nasal arch of the frontal vein at the root of the nose. In the posterior region of the cavity, specifically in the posterior part of the inferior meatus is a venous plexus known as Woodruff's plexus. This plexus is made up of large thin-walled veins with little soft tissue such as muscle or fiber. The mucosa of the plexus is thin with very few structures. Lymphatic drainage From different areas of the nose superficial lymphatic vessels run with the veins, and deep lymphatic vessels travel with the arteries. Lymph drains from the anterior half of the nasal cavity, including both the medial and lateral walls, to join that of the external nasal skin to drain into the submandibular lymph nodes. The rest of the nasal cavity and paranasal sinuses all drain to the upper deep cervical lymph nodes, either directly or through the retropharyngeal lymph nodes. The back of the nasal floor probably drains to the parotid lymph nodes. Nerve supply The nerve supply to the nose and paranasal sinuses comes from two branches of the trigeminal nerve (CN V): the ophthalmic nerve (CN V1), the maxillary nerve (CN V2), and branches from these. In the nasal cavity, the nasal mucosa is divided in terms of nerve supply into a back lower part (posteroinferior), and a frontal upper part (anterosuperior). The posterior part is supplied by a branch of the maxillary nerve – the nasopalatine nerve, which reaches the septum. Lateral nasal branches of the greater palatine nerve supply the lateral wall. The frontal upper part is supplied from a branch of the ophthalmic nerve – the nasociliary nerve, and its branches – the anterior and posterior ethmoidal nerves. Most of the external nose – the dorsum, and the apex are supplied by the infratrochlear nerve, (a branch of the nasociliary nerve). The external branch of the anterior ethmoidal nerve also supplies areas of skin between the root and the alae. 
The alae of the nose are supplied by nasal branches of CN V2, the infraorbital nerve, and internal nasal branches of infraorbital nerve that supply the septum and the vestibule. The maxillary sinus is supplied by superior alveolar nerves from the maxillary and infraorbital nerves. The frontal sinus is supplied by branches of the supraorbital nerve. The ethmoid sinuses are supplied by anterior and posterior ethmoid branches of the nasociliary nerve. The sphenoid sinus is supplied by the posterior ethmoidal nerves. Movement The muscles of the nose are supplied by branches of the facial nerve. The nasalis muscle is supplied by the buccal branches. It may also be supplied by one of the zygomatic branches. The procerus is supplied by temporal branches of the facial nerve and lower zygomatic branches; a supply from the buccal branch has also been described. The depressor septi is innervated by the buccal branch, and sometimes by the zygomatic branch, of the facial nerve. The levator labii superioris alaeque nasi is innervated by zygomatic and superior buccal branches of the facial nerve. Smell The sense of smell is transmitted by the olfactory nerves. Olfactory nerves are bundles of very small unmyelinated axons that are derived from olfactory receptor neurons in the olfactory mucosa. The axons are in varying stages of maturity, reflecting the constant turnover of neurons in the olfactory epithelium. A plexiform network is formed in the lamina propria, by the bundles of axons that are surrounded by olfactory ensheathing cells. In as many as twenty branches, the bundled axons cross the cribriform plate and enter the overlying olfactory bulb ending as glomeruli. Each branch is enclosed by an outer dura mater that becomes continuous with the nasal periosteum. Autonomic supply The nasal mucosa in the nasal cavity is also supplied by the autonomic nervous system. Postganglionic nerve fibers from the deep petrosal nerve join with preganglionic nerve fibers from the greater petrosal nerve to form the nerve of the pterygoid canal. Sympathetic postganglionic fibers are distributed to the blood vessels of the nose. Postganglionic parasympathetic fibres derived from the pterygopalatine ganglion provide the secretomotor supply to the nasal mucous glands, and are distributed via branches of the maxillary nerves. Development Development of the nose In the early development of the embryo, neural crest cells migrate to form the mesenchymal tissue as ectomesenchyme of the pharyngeal arches. By the end of the fourth week, the first pair of pharyngeal arches form five facial prominences or processes – an unpaired frontonasal process, paired mandibular processes and paired maxillary processes. The nose is largely formed by the fusion of these five facial prominences. The frontonasal process gives rise to the bridge of the nose. The medial nasal processes provide the crest and the tip of the nose, and the lateral nasal processes form the alae or sides of the nose. The frontonasal process is a proliferation of mesenchyme in front of the brain vesicles, and makes up the upper border of the stomadeum. During the fifth week, the maxillary processes increase in size and at the same time the ectoderm of the frontonasal process becomes thickened at its sides and also increases in size, forming the nasal placodes. The nasal placodes are also known as the olfactory placodes. This development is induced by the ventral part of the forebrain. 
In the sixth week, the ectoderm in each nasal placode invaginates to form an indented oval-shaped pit, which forms a surrounding raised ridge of tissue. Each nasal pit forms a division between the ridges, into a lateral nasal process on the outer edge, and a medial nasal process on the inner edge. In the sixth week, the nasal pits deepen as they penetrate into the underlying mesenchyme. At this time, the medial nasal processes migrate towards each other and fuse forming the primordium of the bridge of the nose and the septum. The migration is helped by the increased growth of the maxillary prominences medially, which compresses the medial nasal processes towards the midline. Their merging takes place at the surface, and also at a deeper level. The merge forms the intermaxillary segment, and this is continuous with the rostral part of the nasal septum. The tips of the maxillary processes also grow and fuse with the intermaxillary process. The intermaxillary process gives rise to the philtrum of the upper lip. At the end of the sixth week, the nasal pits have deepened further and they fuse to make a large ectodermal nasal sac. This sac will be above and to the back of the intermaxillary process. Leading into the seventh week, the nasal sac floor and posterior wall grow to form a thickened plate-like ectodermal structure called the nasal fin. The nasal fin separates the sac from the oral cavity. Within the fin, vacuoles develop that fuse with the nasal sac. This enlarges the nasal sac and at the same time thins the fin to a membrane – the oronasal membrane that separates the nasal pits from the oral cavity. During the seventh week the oronasal membrane ruptures and disintegrates to form an opening – the single primitive choana. The intermaxillary segment extends posteriorly to form the primary palate, which makes up the floor of the nasal cavity. During the eighth and ninth weeks, a pair of thin extensions form from the medial walls of the maxillary process. These extensions are called the palatine shelves that form the secondary palate. The secondary palate will endochondrally ossify to form the hard palate – the end-stage floor of the nasal cavity. During this time, ectoderm and mesoderm of the frontonasal process produce the midline septum. The septum grows down from the roof of the nasal cavity and fuses with the developing palates along the midline. The septum divides the nasal cavity into two nasal passages opening into the pharynx through the definitive choanae. At ten weeks, the cells differentiate into muscle, cartilage, and bone. Problems at this stage of development can cause birth defects such as choanal atresia (absent or closed passage), facial clefts and nasal dysplasia (faulty or incomplete development) or extremely rarely polyrrhinia the formation of a duplicate nose. Normal development is critical because the newborn infant breathes through the nose for the first six weeks, and any nasal blockage will need emergency treatment to clear. Development of the paranasal sinuses The four pairs of paranasal sinuses – the maxillary, ethmoid, sphenoid, and frontal, develop from the nasal cavity as invaginations extending into their named bones. Two pairs of sinuses form during prenatal development and two pairs form after birth. The maxillary sinuses are the first to appear during the fetal third month. They slowly expand within the maxillary bones and continue to expand throughout childhood. The maxillary sinuses form as invaginations from the nasal sac. 
The ethmoid sinuses appear in the fetal fifth month as invaginations of the middle meatus. The ethmoid sinuses do not grow into the ethmoid bone and do not completely develop until puberty. The sphenoid sinuses are extensions of the ethmoid sinuses into the sphenoid bones. They begin to develop around two years of age, and continue to enlarge during childhood. The frontal sinuses only develop in the fifth or sixth year of childhood, and continue expanding throughout adolescence. Each frontal sinus is made up of two independent spaces that develop from two different sources; one from the expansion of ethmoid sinuses into frontal bone, and the other develops from invagination. They never coalesce so drain independently. Function Respiration The nose is the first organ of the upper respiratory tract in the respiratory system. Its main respiratory function is the supply and conditioning, by warming, moisturising and filtering of particulates of inhaled air. Nasal hair in the nostrils traps large particles preventing their entry into the lungs. The three positioned nasal conchae in each cavity provide four grooves as air passages, along which the air is circulated and moved to the nasopharynx. The internal structures and cavities, including the conchae and paranasal sinuses form an integrated system for the conditioning of the air breathed in through the nose. This functioning also includes the major role of the nasal mucosa, and the resulting conditioning of the air before it reaches the lungs is important in maintaining the internal environment and proper functioning of the lungs. The turbulence created by the conchae and meatuses optimises the warming, moistening, and filtering of the mucosa. A major protective role is thereby provided by these structures of the upper respiratory tract, in the passage of air to the more delicate structures of the lower respiratory tract. Sneezing is an important protective reflex action initiated by irritation of the nasal mucosa to expel unwanted particles through the mouth and nose. Photic sneezing is a reflex brought on by different stimuli such as bright lights. The nose is also able to provide sense information as to the temperature of the air being breathed. Variations in shape of the nose have been hypothesised to possibly be adaptive to regional differences in temperature and humidity, though they may also have been driven by other factors such as sexual selection. Sense of smell The nose also plays the major part in the olfactory system. It contains an area of specialised cells, olfactory receptor neurons responsible for the sense of smell (olfaction). Olfactory mucosa in the upper nasal cavity, contains a type of nasal gland called olfactory glands or Bowman's glands, which help in olfaction. The nasal conchae also help in olfaction function, by directing air-flow to the olfactory region. Speech Speech is produced with pressure from the lungs. This can be modified using airflow through the nose in a process called nasalisation. This involves the lowering of the soft palate to produce nasal vowels and consonants by allowing air to escape from both the nose and the mouth. Nasal airflow is also used to produce a variety of click consonants called nasal clicks. The large, hollow cavities of the paranasal sinuses act as resonating chambers that modify, and amplify speech and other vocal vibrations passing through them. Clinical significance One of the most common medical conditions involving the nose is a nosebleed (epistaxis). 
Most nosebleeds occur in Kiesselbach's plexus, a vascular plexus in the lower front part of the septum involving the convergence of four arteries. A smaller proportion of nosebleeds that tend to be nontraumatic occur in Woodruff's plexus. Woodruff's plexus is a venous plexus of large thin-walled veins lying in the posterior part of the inferior meatus. Another common condition is nasal congestion, usually a symptom of infection, particularly sinusitis, or other inflammation of the nasal lining called rhinitis, including allergic rhinitis and nonallergic rhinitis. Chronic nasal obstruction resulting in breathing through the mouth can greatly impair or prevent the nostrils from flaring. One of the causes of snoring is nasal obstruction, and anti-snoring devices such as a nasal strip help to flare the nostrils and keep the airway open. Nasal flaring, is usually seen in children when breathing is difficult. Most conditions of nasal congestion also cause a loss of the sense of smell (anosmia). This may also occur in other conditions, for example following trauma, in Kallmann syndrome or Parkinson's disease. A blocked sinus ostium, an opening from a paranasal sinus, will cause fluid to accumulate in the sinus. In children, the nose is a common site of foreign bodies. The nose is one of the exposed areas that is susceptible to frostbite. Because of the special nature of the blood supply to the human nose and surrounding area, it is possible for retrograde infections from the nasal area to spread to the brain. For this reason, the area from the corners of the mouth to the bridge of the nose, including the nose and maxilla, is known as the danger triangle of the face. Infections or other conditions that may result in destruction of, or damage to a part of the nose include rhinophyma, skin cancers particularly basal-cell carcinoma, paranasal sinus and nasal cavity cancer, granulomatosis with polyangiitis, syphilis, leprosy, recreational use of cocaine, chromium and other toxins. The nose may be stimulated to grow in acromegaly, a condition caused by an excess of growth hormone. A common anatomic variant is an air-filled cavity within a concha known as a concha bullosa. In rare cases a polyp can form inside a bullosa. Usually a concha bullosa is small and without symptoms but when large can cause obstruction to sinus drainage. Some drugs can be nasally administered, including drug delivery to the brain, and these include nasal sprays and topical treatments. The septal cartilage can be destroyed through the repeated inhalation of recreational drugs such as cocaine. This, in turn, can lead to more widespread collapse of the nasal skeleton. Sneezing can transmit infections carried in the expelled droplets. This route is called either airborne transmission or aerosol transmission. Surgical procedures Badly positioned alar cartilages lack proper support, and can affect the function of the external nasal valve. This can cause breathing problems particularly during deep inhalation. The surgical procedure to correct breathing problems due to disorders in the nasal structures is called a rhinoplasty, and this is also the procedure used for a cosmetic surgery when it is commonly called a "nose job". For surgical procedures of rhinoplasty, the nose is mapped out into a number of subunits and segments. This uses nine aesthetic nasal subunits and six aesthetic nasal segments. A septoplasty is the specific surgery to correct a nasal septum deviation. A broken nose can result from trauma. 
Minor fractures may heal on their own. Surgery known as reduction may be carried out on more severe breaks that cause dislocation. Several nasal procedures of the nose and paranasal sinuses can be carried out using minimally-invasive nasal endoscopy. These procedures aim to restore sinus ventilation, mucociliary clearance, and maintain the health of the sinus mucosa. Some non-nasal surgeries can also be carried out through the use of an endoscope that is entered through the nose. These endoscopic endonasal surgeries are used to remove tumours from the front of the base of the skull. Swollen conchae can cause obstruction and nasal congestion, and may be treated surgically by a turbinectomy. Society and culture Some people choose to have cosmetic surgery (called a rhinoplasty) to change the appearance of their nose. Nose piercings, such as in the nostril, septum, or bridge, are also common. In certain Asian countries such as China, Japan, South Korea, Malaysia, Thailand and Bangladesh, rhinoplasties are commonly carried out to create a more developed nose bridge or a "high nose". Similarly, "DIY nose lifts" in the form of re-usable cosmetic items have become popular and are sold in many Asian countries such as China, Japan, South Korea, Taiwan, Sri Lanka and Thailand. A high-bridged nose has been a common beauty ideal in many Asian cultures dating back to the beauty ideals of ancient China and India. In New Zealand, nose pressing ("hongi") is a traditional greeting originating among the Māori people. However it is now generally confined to certain traditional celebrations. The Hanazuka monument enshrines the mutilated noses of at least 38,000 Koreans killed during the Japanese invasions of Korea from 1592 to 1598. Nose picking is a common, mildly taboo habit. Medical risks include the spread of infections, nosebleeds and, rarely, perforation of the nasal septum. When it becomes compulsive it is termed rhinotillexomania. The wiping of the nose with the hand, commonly referred to as the "allergic salute", is also mildly taboo and can result in the spreading of infections as well. Habitual as well as fast or rough nose wiping may also result in a crease (known as a transverse nasal crease or groove) running across the nose, and can lead to permanent physical deformity observable in childhood and adulthood. Nose fetishism (or nasophilia) is the sexual partialism for the nose. Neanderthals Clive Finlayson of the Gibraltar Museum said the large Neanderthal noses were an adaptation to the cold, Todd C. Rae of the American Museum of Natural History said primate and arctic animal studies have shown sinus size reduction in areas of extreme cold rather than enlargement in accordance with Allen's rule. Therefore, Todd C. Rae concludes that the design of the large and wide Neanderthal nose was evolved for the hotter climate of the Middle East and Africa and remained unchanged when they entered Europe. Miquel Hernández of the Department of Animal Biology at the University of Barcelona said the "high and narrow nose of Eskimos and Neanderthals" is an "adaptation to a cold and dry environment", since it contributes to warming and moisturizing the air and the "recovery of heat and moisture from expired air".
Biology and health sciences
Human anatomy
Health
9388131
https://en.wikipedia.org/wiki/Type%20Ib%20and%20Ic%20supernovae
Type Ib and Ic supernovae
Type Ib and Type Ic supernovae are categories of supernovae that are caused by the stellar core collapse of massive stars. These stars have shed or been stripped of their outer envelope of hydrogen, and, when compared to the spectrum of Type Ia supernovae, they lack the absorption line of silicon. Compared to Type Ib, Type Ic supernovae are hypothesized to have lost more of their initial envelope, including most of their helium. The two types are usually referred to as stripped core-collapse supernovae. Spectra When a supernova is observed, it can be categorized in the Minkowski–Zwicky supernova classification scheme based upon the absorption lines that appear in its spectrum. A supernova is first categorized as either a Type I or Type II, then subcategorized based on more specific traits. Supernovae belonging to the general category Type I lack hydrogen lines in their spectra; in contrast to Type II supernovae which do display lines of hydrogen. The Type I category is subdivided into Type Ia, Type Ib and Type Ic. Type Ib/Ic supernovae are distinguished from Type Ia by the lack of an absorption line of singly ionized silicon at a wavelength of 635.5 nanometres. As Type Ib and Ic supernovae age, they also display lines from elements such as oxygen, calcium and magnesium. In contrast, Type Ia spectra become dominated by lines of iron. Type Ic supernovae are distinguished from Type Ib in that the former also lack lines of helium at 587.6 nm. Formation Prior to becoming a supernova, an evolved massive star is organized like an onion, with layers of different elements undergoing fusion. The outermost layer consists of hydrogen, followed by helium, carbon, oxygen, and so forth. Thus when the outer envelope of hydrogen is shed, this exposes the next layer that consists primarily of helium (mixed with other elements). This can occur when a very hot, massive star reaches a point in its evolution when significant mass loss is occurring from its stellar wind. Highly massive stars (with 25 or more times the mass of the Sun) can lose up to 10−5 solar masses () each year—the equivalent of every 100,000 years. Type Ib and Ic supernovae are hypothesized to have been produced by core collapse of massive stars that have lost their outer layer of hydrogen and helium, either via winds or mass transfer to a companion. The progenitors of Types Ib and Ic have lost most of their outer envelopes due to strong stellar winds or else from interaction with a close companion of about . Rapid mass loss can occur in the case of a Wolf–Rayet star, and these massive objects show a spectrum that is lacking in hydrogen. Type Ib progenitors have ejected most of the hydrogen in their outer atmospheres, while Type Ic progenitors have lost both the hydrogen and helium shells; in other words, Type Ic have lost more of their envelope (i.e., much of the helium layer) than the progenitors of Type Ib. In other respects, however, the underlying mechanism behind Type Ib and Ic supernovae is similar to that of a Type II supernova, thus placing Types Ib and Ic between Type Ia and Type II. Because of their similarity, Type Ib and Ic supernovae are sometimes collectively called Type Ibc supernovae. There is some evidence that a small fraction of the Type Ic supernovae may be the progenitors of gamma ray bursts (GRBs); in particular, type Ic supernovae that have broad spectral lines corresponding to high-velocity outflows are thought to be strongly associated with GRBs. 
However, it is also hypothesized that any hydrogen-stripped Type Ib or Ic supernova could be a GRB, dependent upon the geometry of the explosion. In any case, astronomers believe that most Type Ib, and probably Type Ic as well, result from core collapse in stripped, massive stars, rather than from the thermonuclear runaway of white dwarfs. As they are formed from rare, very massive stars, the rate of Type Ib and Ic supernova occurrence is much lower than the corresponding rate for Type II supernovae. They normally occur in regions of new star formation, and are extremely rare in elliptical galaxies. Because they share a similar operating mechanism, Type Ibc and the various Type II supernovae are collectively called core-collapse supernovae. In particular, Type Ibc may be referred to as stripped core-collapse supernovae. Light curves The light curves (a plot of luminosity versus time) of Type Ib supernovae vary in form, but in some cases can be nearly identical to those of Type Ia supernovae. However, Type Ib light curves may peak at lower luminosity and may be redder. In the infrared portion of the spectrum, the light curve of a Type Ib supernova is similar to a Type II-L light curve. Type Ib supernovae usually have slower decline rates for the spectral curves than Ic. Type Ia supernovae light curves are useful for measuring distances on a cosmological scale. That is, they serve as standard candles. However, due to the similarity of the spectra of Type Ib and Ic supernovae, the latter can form a source of contamination of supernova surveys and must be carefully removed from the observed samples before making distance estimates.
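The spectral decision points given in this article (hydrogen lines for Type II, the Si II 635.5 nm line for Type Ia, and the He I 587.6 nm line separating Type Ib from Type Ic) can be summarised as a short classification routine. The following is a minimal Python sketch; the function name and boolean inputs are illustrative only, and real classification relies on detailed spectral analysis rather than simple presence/absence flags.

```python
def classify_supernova(has_hydrogen: bool, has_silicon_635: bool, has_helium_587: bool) -> str:
    """Rough Minkowski-Zwicky classification from optical absorption lines.

    has_hydrogen    -- hydrogen lines present in the spectrum
    has_silicon_635 -- singly ionized silicon (Si II) line at 635.5 nm present
    has_helium_587  -- neutral helium (He I) line at 587.6 nm present
    """
    if has_hydrogen:
        return "Type II"   # hydrogen-rich core collapse
    if has_silicon_635:
        return "Type Ia"   # thermonuclear explosion of a white dwarf
    if has_helium_587:
        return "Type Ib"   # stripped of hydrogen, helium retained
    return "Type Ic"       # stripped of both hydrogen and helium


# Example: no hydrogen, no Si II, no He I -> a stripped-envelope Type Ic event
print(classify_supernova(False, False, False))
```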
Physical sciences
Stellar astronomy
Astronomy
2190689
https://en.wikipedia.org/wiki/Fraser%20fir
Fraser fir
The Fraser fir (Abies fraseri), sometimes spelled Frasier fir, is an endangered species of fir native to the Appalachian Mountains of the southeastern United States. They are endemic to only seven montane regions in the Appalachian Mountains. Taxonomy Abies fraseri is closely related to Abies balsamea (balsam fir), of which it has occasionally been treated as a subspecies (as A. balsamea subsp. fraseri (Pursh) E.Murray) or a variety (as A. balsamea var. fraseri (Pursh) Spach). Some botanists regard the variety of balsam fir named Abies balsamea var. phanerolepis as a natural hybrid with Fraser fir, denominated Abies × phanerolepis (Fernald) Liu. Names The species Abies fraseri is named after the Scottish botanist John Fraser (1750–1811), who made numerous botanical collections in the region. It is sometimes spelled "Frasier," "Frazer" or "Frazier." In the past, it was also sometimes known as "she-balsam" because resin could be "milked" from its bark blisters, in contrast to the "he balsam" (or Picea rubens, the red spruce) which could not be milked. It has also occasionally been called balsam fir, inviting confusion with A. balsamea. Description Abies fraseri is a small evergreen coniferous tree typically growing between tall and rarely to , with a trunk diameter of , rarely . The crown is conical, with straight branches either horizontal or angled upward at 40° from the trunk; it is dense when the tree is young and more open in maturity. The bark is thin, smooth, grayish brown, and has numerous resinous blisters on juvenile trees, becoming fissured and scaly in maturity. The leaves are needle-like; arranged spirally on the twigs but twisted at their bases to form two rows on each twig; they are long and broad; flat; flexible; rounded or slightly notched at their apices (tips); dark to glaucous green adaxially (above); often having a small patch of stomata near their apices; and having two silvery white stomatal bands abaxially (on their undersides). Their strong fragrance resembles that of turpentine. The cones are erect; cylindrical; long, rarely , and broad, rarely broad; dark purple, turning pale brown when mature; often resinous; and with long reflexed green, yellow, or pale purple bract scales. The cones disintegrate when mature at 4–6 months old to release the winged seeds. Ecology Reproduction and growth Fraser fir is monoecious, meaning that both male and female cones (strobili) occur on the same tree. Cone buds usually open from mid-May to early June. Female cones are borne mostly in the top few feet of the crown and on the distal ends of branches. Male cones are borne below female cones, but mostly in the upper half of the crown. Seed production may begin when trees are 15 years old. Seeds germinate well on mineral soil, moss, peat, decaying stumps and logs, and even on detritus or litter that is sufficiently moist. Distribution and habitat Abies fraseri is restricted to the southeastern Appalachian Mountains in southwestern Virginia, western North Carolina and eastern Tennessee, where it occurs at high elevations, from to the mountain summits. It lives in acidic moist but well-drained sandy loam and is usually mixed with Picea rubens (red spruce). Other trees it grows with include Tsuga caroliniana (Carolina hemlock), Betula alleghaniensis (yellow birch), Betula papyrifera (paper birch), and Acer saccharum (sugar maple). The climate is cool and moist, with short, cool summers and cold winters with heavy snowfall. 
It lives in sites that experience frequent cloud coverage, which, when paired with cooler temperatures, improves plant water status and high soil moisture. Pests Abies fraseri can be severely damaged by a non-native insect, the balsam woolly adelgid (Adelges piceae) from Europe. The insect's introduction and spread led to a rapid decline in Fraser fir across its range, with over 80 percent of mature trees having been killed. The rapid regeneration of seedlings with lack of canopy has led to good regrowth of healthy young trees where the mature forests once stood. However, when these young trees get old enough for the bark to develop fissures, they may be attacked and killed by the adelgids as well. For this reason, the future of the species was still uncertain, though the Mount Rogers (Virginia) population has largely evaded adelgid mortality. The decline of the Fraser fir in the southern Appalachians has contributed to loss of moss habitat which supports the endangered spruce-fir moss spider (Microhexura montivaga), an obligate of the Southern Appalachian spruce–fir forest ecoregion. By the late 1990s, the adelgid population had decreased. While two-thirds of adult trees had been killed by the 1980s, a study of the Great Smoky Mountains National Park showed that as of 2020, the number of adult trees had increased over the previous 30 years, with three times as many on Kuwohi, Tennessee's highest peak. Threats The Fraser fir is an endangered species. Threats include climate change and the aforementioned balsam woolly adelgid. Cultivation and uses Although not important as a source of timber, the combination of dense natural pyramidal form, strong limbs, soft long-retained needles, dark blue-green color, pleasant scent and excellent shipping characteristics, has led to Fraser fir being widely used as a Christmas tree. Fraser fir has been used more times as the White House Christmas tree than any other tree. The Christmas decoration trade is a multimillion-dollar business in the southern Appalachians. North Carolina produces the majority of Fraser fir Christmas trees. It requires from seven to ten years in the field to produce a tree. In 2005, the North Carolina General Assembly passed legislation making the Fraser fir the official Christmas tree of North Carolina. The Fraser fir is cultivated from seedlings in several northern states and in Quebec, especially for the Christmas tree trade. It is also grown in Bedgebury National Pinetum and other collections in the United Kingdom.
Biology and health sciences
Pinaceae
Plants
2190852
https://en.wikipedia.org/wiki/Ancylostoma
Ancylostoma
Ancylostoma is a genus of nematodes that includes some species of hookworms. Species include: Ancylostoma braziliense, which commonly infects cats and is popularly known in Brazil as bicho-geográfico; Ancylostoma caninum, which commonly infects dogs; Ancylostoma ceylanicum; Ancylostoma duodenale; Ancylostoma pluridentatum, which commonly infects sylvatic cats; and Ancylostoma tubaeforme, which infects cats along with other hosts.
Biology and health sciences
Ecdysozoa
Animals
2192496
https://en.wikipedia.org/wiki/Fungiculture
Fungiculture
Fungiculture is the cultivation of fungi such as mushrooms. Cultivating fungi can yield foods (which include mostly mushrooms), medicine, construction materials and other products. A mushroom farm is involved in the business of growing fungi. The word is also commonly used to refer to the practice of cultivation of fungi by animals such as leafcutter ants, termites, ambrosia beetles, and marsh periwinkles. Overview As fungi, mushrooms require different conditions than plants for optimal growth. Plants develop through photosynthesis, a process that converts atmospheric carbon dioxide into carbohydrates, especially cellulose. While sunlight provides an energy source for plants, mushrooms derive all of their energy and growth materials from their growth medium, through biochemical decomposition processes. This does not mean that light is an irrelevant requirement, since some fungi use light as a signal for fruiting. However, all the materials for growth must already be present in the growth medium. Mushrooms grow well at relative humidity levels of around 95–100%, and substrate moisture levels of 50 to 75%. Instead of seeds, mushrooms reproduce through spores. Spores can be contaminated with airborne microorganisms, which will interfere with mushroom growth and prevent a healthy crop. Mycelium, or actively growing mushroom culture, is placed on a substrate—usually sterilized grains such as rye or millet—and induced to grow into those grains. This is called inoculation. Inoculated grains (or plugs) are referred to as spawn. Spores are another inoculation option, but are less developed than established mycelium. Since they are also contaminated easily, they are only manipulated in laboratory conditions with a laminar flow cabinet. Techniques All mushroom growing techniques require the correct combination of humidity, temperature, substrate (growth medium) and inoculum (spawn or starter culture). Wild harvests, outdoor log inoculation and indoor trays all provide these elements. Outdoor logs Mushrooms can be grown on logs placed outdoors in stacks or piles, as has been done for hundreds of years. Sterilization is not performed as part of this method. Since production may be unpredictable and seasonal, less than 5% of commercially sold mushrooms are produced this way. Here, tree logs are inoculated with spawn, then allowed to grow as they would in wild conditions. Fruiting, or pinning, is triggered by seasonal changes, or by briefly soaking the logs in cool water. Shiitake and oyster mushrooms have traditionally been produced using the outdoor log technique, although controlled techniques such as indoor tray growing or artificial logs made of compressed substrate have been substituted. Shiitake mushrooms that are grown under a forested canopy are considered non-timber forest products. In the Northeastern United States, shiitake mushrooms can be cultivated on a variety of hardwood logs including oak, American beech, sugar maple and hophornbeam. Softwood should not be used to cultivate shiitake mushrooms because the resin of softwoods will oftentimes inhibit the growth of the shiitake mushroom making it impractical as a growing substrate. To produce shiitake mushrooms, 1 metre (3-foot) hardwood logs with a diameter ranging between are inoculated with the mycelium of the shiitake fungus. Inoculation is completed by drilling holes in hardwood logs, filling the holes with cultured shiitake mycelium or inoculum, and then sealing the filled holes with hot wax. 
After inoculation, the logs are placed under the closed canopy of a coniferous stand and are left to incubate for 12 to 15 months. Once incubation is complete, the logs are soaked in water for 24 hours. Seven to ten days after soaking, shiitake mushrooms will begin to fruit and can be harvested once fully ripe. Indoor trays Indoor mushroom cultivation for the purpose of producing a commercial crop was first developed in caves in France. The caves provided a stable environment (temperature, humidity) all year round. The technology for a controlled growth medium and fungal spawn was brought to the UK in the late 1800s in caves created by quarrying near areas such as Bath, Somerset. Growing indoors allows the ability to control light, temperature and humidity while excluding contaminants and pests. This enables consistent production, regulated by spawning cycles. By the mid-twentieth century this was typically accomplished in windowless, purpose-built buildings, for large-scale commercial production. Indoor tray growing is the most common commercial technique, followed by containerized growing. The tray technique provides the advantages of scalability and easier harvesting. There are a series of stages in the farming of the most widely used commercial mushroom species Agaricus bisporus. These are composting, fertilizing, spawning, casing, pinning, and cropping." Six phases of mushroom cultivation Complete sterilization is not required or performed during composting. In most cases, a pasteurization step is included to allow some beneficial microorganisms to remain in the growth substrate. Specific time spans and temperatures required during stages 3–6 will vary respective to species and variety. Substrate composition and the geometry of growth substrate will also affect the ideal times and temperatures. Pinning is the trickiest part for a mushroom grower, since a combination of carbon dioxide (CO2) concentration, temperature, light, and humidity triggers mushrooms towards fruiting. Up until the point when rhizomorphs or mushroom "pins" appear, the mycelium is an amorphous mass spread throughout the growth substrate, unrecognizable as a mushroom. Carbon dioxide concentration becomes elevated during the vegetative growth phase, when mycelium is sealed in a gas-resistant plastic barrier or bag which traps gases produced by the growing mycelium. To induce pinning, this barrier is opened or ruptured. CO2 concentration then decreases from about 0.08% to 0.04%, the ambient atmospheric level. Indoor oyster mushroom farming Oyster mushroom farming is rapidly expanding around many parts of the world. Oyster mushrooms are grown in substrate that comprises sterilized wheat, paddy straw and even used coffee grounds, and they do not require much space compared to other crops. The per unit production and profit extracted is comparatively higher than other crops. Oyster mushrooms can also be grown indoors from kits, most commonly in the form of a box containing growing medium with spores. Substrates Mushroom production converts the raw natural ingredients into mushroom tissue, most notably the carbohydrate chitin. An ideal substrate will contain enough nitrogen and carbohydrate for rapid mushroom growth. 
Common bulk substrates include several of the following ingredients: Wood chips or sawdust Mulched straw (usually wheat, but also rice and other straws) Strawbedded horse or poultry manure Corncobs Waste or recycled paper Coffee pulp or grounds Nut and seed hulls Cottonseed hulls Cocoa bean hulls Cottonseed meal Soybean meal Brewer's grain Ammonium nitrate Urea Mushrooms metabolize complex carbohydrates in their substrate into glucose, which is then transported through the mycelium as needed for growth and energy. While it is used as a main energy source, its concentration in the growth medium should not exceed 2%. For ideal fruiting, closer to 1% is ideal. Pests and diseases Parasitic insects, bacteria and other fungi all pose risks to indoor production. Sciarid or phorid flies may lay eggs in the growth medium, which hatch into maggots and damage developing mushrooms during all growth stages. Bacterial blotch caused by Pseudomonas bacteria or patches of Trichoderma green mold also pose a risk during the fruiting stage. Pesticides and sanitizing agents are available to use against these infestations. Biological controls for sciarid and phorid flies have also been proposed. Trichoderma green mold can affect mushroom production, for example in the mid-1990s in Pennsylvania leading to significant crop losses. The contaminating fungus originated from poor hygiene by workers and poorly prepared growth substrates. Mites in the genus Histiostoma have been found in mushroom farms. Histiostoma gracilipes feeds on mushrooms directly, while H. heinemanni is suspected to spread diseases. Commercially cultivated fungi Agaricus bisporus, also known as champignon and the button mushroom. This species also includes the portobello and crimini mushrooms. Auricularia cornea and Auricularia heimuer (Tree ear fungus), two closely related species of jelly fungi that are commonly used in Chinese cuisine. Clitocybe nuda, or blewit, is cultivated in Europe. Flammulina velutipes, the "winter mushroom", also known as enokitake in Japan Fusarium venenatum – the source for mycoprotein which is used in Quorn, a meat analogue. Hypsizygus tessulatus (also Hypsizygus marmoreus), called shimeji in Japanese, it is a common variety of mushroom available in most markets in Japan. Known as "Beech mushroom" in Europe. Lentinus edodes, also known as shiitake, oak mushroom. Lentinus edodes is largely produced in Japan, China and South Korea. Lentinus edodes accounts for 10% of world production of cultivated mushrooms. Common in Japan, China, Australia and North America. Phallus indusiatus – (bamboo mushroom), traditionally collected from the wild, it has been cultivated in China since the late 1970s. Pleurotus species are the second most important mushrooms in production in the world, accounting for 25% of total world production. Pleurotus mushrooms are cultivated worldwide; China is the major producer. Several species can be grown on carbonaceous matter such as straw or newspaper. In the wild they are usually found growing on wood. Pleurotus citrinopileatus (golden oyster mushroom) Pleurotus cornucopiae (branched oyster mushroom) Pleurotus eryngii (king trumpet mushroom) Pleurotus ostreatus (oyster mushroom) Rhizopus oligosporus – the fungal starter culture used in the production of tempeh. In tempeh the mycelia of R. oligosporus are consumed. Sparassis crispa – recent developments have led to this being cultivated in California. It is cultivated on large scale in Korea and Japan. 
Tremella fuciformis (snow fungus), another type of jelly fungus that is commonly used in Chinese cuisine. Tuber species (the truffles): truffles belong to the ascomycete grouping of fungi. The truffle fruitbodies develop underground in mycorrhizal association with certain trees, e.g. oak, poplar, beech, and hazel. Being difficult to find, trained pigs or dogs are often used to sniff them out for easy harvesting. Tuber aestivum (summer or St. Jean truffle) Tuber magnatum (Piedmont white truffle) Tuber melanosporum (Périgord truffle) T. melanosporum x T. magnatum (Khanaqa truffle) Terfezia sp. (desert truffle) Ustilago maydis (corn smut), a fungal pathogen of the maize plant, also called the Mexican truffle, although not a true truffle. Volvariella volvacea (the "paddy straw mushroom"). Volvariella mushrooms account for 16% of total production of cultivated mushrooms in the world. Production regions in North America Pennsylvania is the top-producing mushroom state in the United States, and celebrates September as "Mushroom Month". The borough of Kennett Square is a historical and present leader in mushroom production. Pennsylvania currently leads production of Agaricus-type mushrooms, followed by California, Florida and Michigan. Other mushroom-producing states: East: Connecticut, Delaware, Florida, Maryland, New York, Pennsylvania, Tennessee, Maine, and Vermont; Central: Illinois, Oklahoma, Texas, and Wisconsin; West: California, Colorado, Montana, Oregon, Utah and Washington. The lower Fraser Valley of British Columbia, which includes Vancouver, has a significant number of producers – about 60 as of 1998. Production in Europe Oyster mushroom cultivation has expanded rapidly in Europe in recent years. Many entrepreneurs regard it as a profitable business that can be started with a small investment. Italy, with 785,000 tonnes, and the Netherlands, with 307,000 tonnes, are among the top ten mushroom-producing countries in the world. The world's biggest producer of mushroom spawn is also situated in France. According to the study Production and Marketing of Mushrooms: Global and National Scenario, Poland, the Netherlands, Belgium and Lithuania are the major mushroom-exporting countries in Europe, while the UK, Germany, France and Russia are the major importers. Education and training Oyster mushroom cultivation is a sustainable business in which different natural resources can be used as a substrate. The number of people becoming interested in this field is rapidly increasing. The possibility of creating a viable business in urban environments by using coffee grounds is appealing for many entrepreneurs. Since mushroom cultivation is rarely taught in schools, most urban farmers have learned it by doing, and mastering it takes time and costs missed revenue. For this reason there are numerous companies in Europe specialising in mushroom cultivation that offer training for entrepreneurs and organise events to build community and share knowledge. They also show the potential positive impact of this business on the environment. Courses on mushroom cultivation can be attended in many countries around Europe. Training is available in growing mushrooms on coffee grounds, as well as more advanced training in larger-scale farming, spawn production, laboratory work and growing facilities. Events are organised at different intervals. The Mushroom Learning Network gathers once a year in Europe. 
The International Society for Mushroom Science gathers once every five years somewhere in the world.
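As a rough summary of the numeric growing conditions quoted earlier in this article (relative humidity of about 95–100%, substrate moisture of 50–75%, and the drop in CO2 concentration from roughly 0.08% back toward the ambient 0.04% that helps trigger pinning), the following Python sketch checks whether a fruiting chamber is ready for pinning. The function and threshold names are illustrative assumptions, not part of any cultivation standard, and the sketch ignores temperature and light, which the article also lists as pinning triggers.

```python
def ready_to_pin(relative_humidity: float, substrate_moisture: float, co2_percent: float) -> bool:
    """Illustrative check of the fruiting ("pinning") conditions described above.

    relative_humidity  -- air humidity in percent (article cites roughly 95-100%)
    substrate_moisture -- substrate moisture in percent (article cites 50-75%)
    co2_percent        -- CO2 concentration in percent; pinning is induced when the
                          elevated ~0.08% of the sealed vegetative phase falls back
                          toward the ambient ~0.04%
    """
    humidity_ok = 95.0 <= relative_humidity <= 100.0
    moisture_ok = 50.0 <= substrate_moisture <= 75.0
    co2_dropped = co2_percent <= 0.05   # close to ambient air
    return humidity_ok and moisture_ok and co2_dropped


# Example: conditions typical of a freshly opened fruiting chamber
print(ready_to_pin(relative_humidity=97.0, substrate_moisture=65.0, co2_percent=0.04))
```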
Technology
Agriculture_2
null
2195020
https://en.wikipedia.org/wiki/Resultant
Resultant
In mathematics, the resultant of two polynomials is a polynomial expression of their coefficients that is equal to zero if and only if the polynomials have a common root (possibly in a field extension), or, equivalently, a common factor (over their field of coefficients). In some older texts, the resultant is also called the eliminant. The resultant is widely used in number theory, either directly or through the discriminant, which is essentially the resultant of a polynomial and its derivative. The resultant of two polynomials with rational or polynomial coefficients may be computed efficiently on a computer. It is a basic tool of computer algebra, and is a built-in function of most computer algebra systems. It is used, among others, for cylindrical algebraic decomposition, integration of rational functions and drawing of curves defined by a bivariate polynomial equation. The resultant of n homogeneous polynomials in n variables (also called multivariate resultant, or Macaulay's resultant for distinguishing it from the usual resultant) is a generalization, introduced by Macaulay, of the usual resultant. It is, with Gröbner bases, one of the main tools of elimination theory. Notation The resultant of two univariate polynomials and is commonly denoted or In many applications of the resultant, the polynomials depend on several indeterminates and may be considered as univariate polynomials in one of their indeterminates, with polynomials in the other indeterminates as coefficients. In this case, the indeterminate that is selected for defining and computing the resultant is indicated as a subscript: or The degrees of the polynomials are used in the definition of the resultant. However, a polynomial of degree may also be considered as a polynomial of higher degree where the leading coefficients are zero. If such a higher degree is used for the resultant, it is usually indicated as a subscript or a superscript, such as or Definition The resultant of two univariate polynomials over a field or over a commutative ring is commonly defined as the determinant of their Sylvester matrix. More precisely, let and be nonzero polynomials of degrees and respectively. Let us denote by the vector space (or free module if the coefficients belong to a commutative ring) of dimension whose elements are the polynomials of degree strictly less than . The map such that is a linear map between two spaces of the same dimension. Over the basis of the powers of (listed in descending order), this map is represented by a square matrix of dimension , which is called the Sylvester matrix of and (for many authors and in the article Sylvester matrix, the Sylvester matrix is defined as the transpose of this matrix; this convention is not used here, as it breaks the usual convention for writing the matrix of a linear map). The resultant of and is thus the determinant which has columns of and columns of (the fact that the first column of 's and the first column of 's have the same length, that is , is here only for simplifying the display of the determinant). For instance, taking and we get If the coefficients of the polynomials belong to an integral domain, then where and are respectively the roots, counted with their multiplicities, of and in any algebraically closed field containing the integral domain. This is a straightforward consequence of the characterizing properties of the resultant that appear below. 
In the common case of integer coefficients, the algebraically closed field is generally chosen as the field of complex numbers. Properties In this section and its subsections, and are two polynomials in of respective degrees and , and their resultant is denoted Characterizing properties The following properties hold for the resultant of two polynomials with coefficients in a commutative ring . If is a field or more generally an integral domain, the resultant is the unique function of the coefficients of two polynomials that satisfies these properties. If is a subring of another ring , then That is and have the same resultant when considered as polynomials over or . If (that is if is a nonzero constant) then Similarly, if , then Zeros The resultant of two polynomials with coefficients in an integral domain is zero if and only if they have a common divisor of positive degree. The resultant of two polynomials with coefficients in an integral domain is zero if and only if they have a common root in an algebraically closed field containing the coefficients. There exists a polynomial of degree less than and a polynomial of degree less than such that This is a generalization of Bézout's identity to polynomials over an arbitrary commutative ring. In other words, the resultant of two polynomials belongs to the ideal generated by these polynomials. Invariance by ring homomorphisms Let and be two polynomials of respective degrees and with coefficients in a commutative ring , and a ring homomorphism of into another commutative ring . Applying to the coefficients of a polynomial extends to a homomorphism of polynomial rings , which is also denoted With this notation, we have: If preserves the degrees of and (that is if and ), then If and then If and and the leading coefficient of is then If and and the leading coefficient of is then These properties are easily deduced from the definition of the resultant as a determinant. They are mainly used in two situations. For computing a resultant of polynomials with integer coefficients, it is generally faster to compute it modulo several primes and to retrieve the desired resultant with Chinese remainder theorem. When is a polynomial ring in other indeterminates, and is the ring obtained by specializing to numerical values some or all indeterminates of , these properties may be restated as if the degrees are preserved by the specialization, the resultant of the specialization of two polynomials is the specialization of the resultant. This property is fundamental, for example, for cylindrical algebraic decomposition. Invariance under change of variable If and are the reciprocal polynomials of and , respectively, then This means that the property of the resultant being zero is invariant under linear and projective changes of the variable. Invariance under change of polynomials If and are nonzero constants (that is they are independent of the indeterminate ), and and are as above, then If and are as above, and is another polynomial such that the degree of is , then It is only when and have the same degree that cannot be deduced from the degrees of the given polynomials. If either is monic, or , then If , then These properties imply that in the Euclidean algorithm for polynomials, and all its variants (pseudo-remainder sequences), the resultant of two successive remainders (or pseudo-remainders) differs from the resultant of the initial polynomials by a factor which is easy to compute. 
Conversely, this allows one to deduce the resultant of the initial polynomials from the value of the last remainder or pseudo-remainder. This is the starting idea of the subresultant-pseudo-remainder-sequence algorithm, which uses the above formulae for getting subresultant polynomials as pseudo-remainders, and the resultant as the last nonzero pseudo-remainder (provided that the resultant is not zero). This algorithm works for polynomials over the integers or, more generally, over an integral domain, without any division other than exact divisions (that is, without involving fractions). It involves arithmetic operations, while the computation of the determinant of the Sylvester matrix with standard algorithms requires arithmetic operations. Generic properties In this section, we consider two polynomials and whose coefficients are distinct indeterminates. Let be the polynomial ring over the integers defined by these indeterminates. The resultant is often called the generic resultant for the degrees and . It has the following properties. is an absolutely irreducible polynomial. If is the ideal of generated by and , then is the principal ideal generated by . Homogeneity The generic resultant for the degrees and is homogeneous in various ways. More precisely: It is homogeneous of degree in It is homogeneous of degree in It is homogeneous of degree in all the variables and If and are given the weight (that is, the weight of each coefficient is its degree as elementary symmetric polynomial), then it is quasi-homogeneous of total weight . If and are homogeneous multivariate polynomials of respective degrees and , then their resultant in degrees and with respect to an indeterminate , denoted in , is homogeneous of degree in the other indeterminates. Elimination property Let be the ideal generated by two polynomials and in a polynomial ring where is itself a polynomial ring over a field. If at least one of and is monic in , then: The ideals and define the same algebraic set. That is, a tuple of elements of an algebraically closed field is a common zero of the elements of if and only it is a zero of The ideal has the same radical as the principal ideal That is, each element of has a power that is a multiple of All irreducible factors of divide every element of The first assertion is a basic property of the resultant. The other assertions are immediate corollaries of the second one, which can be proved as follows. As at least one of and is monic, a tuple is a zero of if and only if there exists such that is a common zero of and . Such a common zero is also a zero of all elements of Conversely, if is a common zero of the elements of it is a zero of the resultant, and there exists such that is a common zero of and . So and have exactly the same zeros. Computation Theoretically, the resultant could be computed by using the formula expressing it as a product of roots differences. However, as the roots may generally not be computed exactly, such an algorithm would be inefficient and numerically unstable. As the resultant is a symmetric function of the roots of each polynomial, it could also be computed by using the fundamental theorem of symmetric polynomials, but this would be highly inefficient. As the resultant is the determinant of the Sylvester matrix (and of the Bézout matrix), it may be computed by using any algorithm for computing determinants. This needs arithmetic operations. As algorithms are known with a better complexity (see below), this method is not used in practice. 
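As an illustration of the definition, the following Python sketch builds the Sylvester matrix of two univariate polynomials from their coefficient lists and evaluates its determinant by exact Gaussian elimination over the rationals. All helper names are illustrative; the code mirrors the definition rather than being an efficient method.

```python
from fractions import Fraction

def sylvester_matrix(a, b):
    """Sylvester matrix of two polynomials given by coefficient lists
    in descending order, e.g. x**2 - 1 -> [1, 0, -1]."""
    d, e = len(a) - 1, len(b) - 1        # degrees of a and b
    n = d + e
    m = [[0] * n for _ in range(n)]
    for i in range(e):                   # e shifted copies of a's coefficients
        m[i][i:i + d + 1] = a
    for j in range(d):                   # d shifted copies of b's coefficients
        m[e + j][j:j + e + 1] = b
    return m

def determinant(m):
    """Exact determinant by Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in row] for row in m]
    n, det = len(m), Fraction(1)
    for col in range(n):
        pivot = next((r for r in range(col, n) if m[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != col:
            m[col], m[pivot] = m[pivot], m[col]
            det = -det
        det *= m[col][col]
        for r in range(col + 1, n):
            factor = m[r][col] / m[col][col]
            for c in range(col, n):
                m[r][c] -= factor * m[col][c]
    return det

# res(x**2 - 1, x**2 - 2*x + 1): the common root x = 1 makes the resultant 0
print(determinant(sylvester_matrix([1, 0, -1], [1, -2, 1])))   # 0
# res(x**2 + 1, x - 1) = 2: the value of the first polynomial at the root of the second
print(determinant(sylvester_matrix([1, 0, 1], [1, -1])))       # 2
```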
It follows from that the computation of a resultant is strongly related to the Euclidean algorithm for polynomials. This shows that the computation of the resultant of two polynomials of degrees and may be done in arithmetic operations in the field of coefficients. However, when the coefficients are integers, rational numbers or polynomials, these arithmetic operations imply a number of GCD computations of coefficients which is of the same order and make the algorithm inefficient. The subresultant pseudo-remainder sequences were introduced to solve this problem and avoid any fraction and any GCD computation of coefficients. A more efficient algorithm is obtained by using the good behavior of the resultant under a ring homomorphism on the coefficients: to compute a resultant of two polynomials with integer coefficients, one computes their resultants modulo sufficiently many prime numbers and then reconstructs the result with the Chinese remainder theorem. The use of fast multiplication of integers and polynomials allows algorithms for resultants and greatest common divisors that have a better time complexity, which is of the order of the complexity of the multiplication, multiplied by the logarithm of the size of the input ( where is an upper bound of the number of digits of the input polynomials). Application to polynomial systems Resultants were introduced for solving systems of polynomial equations and provide the oldest proof that there exist algorithms for solving such systems. These are primarily intended for systems of two equations in two unknowns, but also allow solving general systems. Case of two equations in two unknowns Consider the system of two polynomial equations where and are polynomials of respective total degrees and . Then is a polynomial in , which is generically of degree (by properties of ). A value of is a root of if and only if either there exist in an algebraically closed field containing the coefficients, such that , or and (in this case, one says that and have a common root at infinity for ). Therefore, solutions to the system are obtained by computing the roots of , and for each root computing the common root(s) of and Bézout's theorem results from the value of , the product of the degrees of and . In fact, after a linear change of variables, one may suppose that, for each root of the resultant, there is exactly one value of such that is a common zero of and . This shows that the number of common zeros is at most the degree of the resultant, that is at most the product of the degrees of and . With some technicalities, this proof may be extended to show that, counting multiplicities and zeros at infinity, the number of zeros is exactly the product of the degrees. General case At first glance, it seems that resultants may be applied to a general polynomial system of equations by computing the resultants of every pair with respect to for eliminating one unknown, and repeating the process until getting univariate polynomials. Unfortunately, this introduces many spurious solutions, which are difficult to remove. A method, introduced at the end of the 19th century, works as follows: introduce new indeterminates and compute This is a polynomial in whose coefficients are polynomials in which have the property that is a common zero of these polynomial coefficients, if and only if the univariate polynomials have a common zero, possibly at infinity. This process may be iterated until finding univariate polynomials. 
To get a correct algorithm, two complements have to be added to the method. Firstly, at each step, a linear change of variables may be needed in order that the degrees of the polynomials in the last variable are the same as their total degrees. Secondly, if, at any step, the resultant is zero, this means that the polynomials have a common factor and that the solutions split into two components: one where the common factor is zero, and the other which is obtained by factoring out this common factor before continuing. This algorithm is very complicated and has a huge time complexity. Therefore, its interest is mainly historical. Other applications Number theory The discriminant of a polynomial, which is a fundamental tool in number theory, is , where is the leading coefficient of and its degree. If and are algebraic numbers such that , then is a root of the resultant and is a root of , where is the degree of . Combined with the fact that is a root of , this shows that the set of algebraic numbers is a field. Let be an algebraic field extension generated by an element which has as minimal polynomial. Every element of may be written as where is a polynomial. Then is a root of , and this resultant is a power of the minimal polynomial of . Algebraic geometry Given two plane algebraic curves defined as the zeros of the polynomials and , the resultant allows the computation of their intersection. More precisely, the roots of are the x-coordinates of the intersection points and of the common vertical asymptotes, and the roots of are the y-coordinates of the intersection points and of the common horizontal asymptotes. A rational plane curve may be defined by a parametric equation where , and are polynomials. An implicit equation of the curve is given by a resultant with respect to the parameter. The degree of this curve is the highest degree of , and , which is equal to the total degree of the resultant. Symbolic integration In symbolic integration, for computing the antiderivative of a rational fraction, one uses partial fraction decomposition for decomposing the integral into a "rational part", which is a sum of rational fractions whose antiderivatives are rational fractions, and a "logarithmic part", which is a sum of rational fractions of the form where is a square-free polynomial and is a polynomial of lower degree than . The antiderivative of such a function necessarily involves logarithms, and generally algebraic numbers (the roots of ). In fact, the antiderivative is , where the sum runs over all complex roots of . The number of algebraic numbers involved in this expression is generally equal to the degree of , but it occurs frequently that an expression with fewer algebraic numbers may be computed. The Lazard–Rioboo–Trager method produces an expression in which the number of algebraic numbers is minimal, without any computation with algebraic numbers. Let be the square-free factorization of the resultant which appears on the right. Trager proved that the antiderivative is , where the internal sums run over the roots of the (if there are no such roots, the sum is empty and thus zero), and is a polynomial of degree in . The Lazard–Rioboo contribution is the proof that is the subresultant of degree of and . It is thus obtained for free if the resultant is computed by the subresultant pseudo-remainder sequence. Computer algebra All preceding applications, and many others, show that the resultant is a fundamental tool in computer algebra. In fact, most computer algebra systems include an efficient implementation of the computation of resultants. 
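The implicitization of a rational plane curve described above can be illustrated with a standard textbook example (not taken from the article): eliminating the parameter t from x = t**2, y = t**3 with a resultant recovers the implicit equation of the curve.

```python
from sympy import symbols, resultant, factor

x, y, t = symbols('x y t')

implicit = resultant(x - t**2, y - t**3, t)   # eliminate the parameter t
print(factor(implicit))                       # x**3 - y**2 (the cuspidal cubic, up to sign)
```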
Homogeneous resultant The resultant is also defined for two homogeneous polynomials in two indeterminates. Given two homogeneous polynomials and of respective total degrees and , their homogeneous resultant is the determinant of the matrix over the monomial basis of the linear map where runs over the bivariate homogeneous polynomials of degree , and runs over the homogeneous polynomials of degree . In other words, the homogeneous resultant of and is the resultant of and when they are considered as polynomials of degree and (their degree in may be lower than their total degree). (The capitalization of "Res" is used here for distinguishing the two resultants, although there is no standard rule for the capitalization of the abbreviation.) The homogeneous resultant has essentially the same properties as the usual resultant, with two main differences: instead of polynomial roots, one considers zeros in the projective line, and the degree of a polynomial may not change under a ring homomorphism. That is: The resultant of two homogeneous polynomials over an integral domain is zero if and only if they have a non-zero common zero over an algebraically closed field containing the coefficients. If and are two bivariate homogeneous polynomials with coefficients in a commutative ring , and is a ring homomorphism of into another commutative ring , then, extending to polynomials over , one has . The property of a homogeneous resultant being zero is invariant under any projective change of variables. Any property of the usual resultant may similarly be extended to the homogeneous resultant, and the resulting property is either very similar to, or simpler than, the corresponding property of the usual resultant. Macaulay's resultant Macaulay's resultant, named after Francis Sowerby Macaulay, also called the multivariate resultant, or the multipolynomial resultant, is a generalization of the homogeneous resultant to homogeneous polynomials in indeterminates. Macaulay's resultant is a polynomial in the coefficients of these homogeneous polynomials that vanishes if and only if the polynomials have a common non-zero solution in an algebraically closed field containing the coefficients, or, equivalently, if the hypersurfaces defined by the polynomials have a common zero in the projective space of the corresponding dimension. The multivariate resultant is, with Gröbner bases, one of the main tools of effective elimination theory (elimination theory on computers). Like the homogeneous resultant, Macaulay's resultant may be defined with determinants, and thus behaves well under ring homomorphisms. However, it cannot be defined by a single determinant. It is therefore easier to define it first on generic polynomials. Resultant of generic homogeneous polynomials A homogeneous polynomial of degree in variables may have up to coefficients; it is said to be generic if these coefficients are distinct indeterminates. Let be generic homogeneous polynomials in indeterminates, of respective degrees . Together, they involve indeterminate coefficients. Let be the polynomial ring over the integers, in all these indeterminate coefficients. The polynomials thus belong to , and their resultant (still to be defined) belongs to . The Macaulay degree is the integer D = d1 + ... + dn − n + 1 (where d1, ..., dn are the degrees of the polynomials and n is the number of indeterminates), which is fundamental in Macaulay's theory. 
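The dehomogenization relation stated earlier in this section can be checked symbolically on the smallest interesting case (this check is my own illustration); it assumes the leading coefficients a and d below are nonzero, so that the degrees in X equal the total degrees of the forms.

```python
from sympy import symbols, resultant, expand

X, a, b, c, d, e = symbols('X a b c d e')

P = a*X**2 + b*X + c    # dehomogenization of the binary quadratic form a*X0**2 + b*X0*X1 + c*X1**2
Q = d*X + e             # dehomogenization of the linear form d*X0 + e*X1
print(expand(resultant(P, Q, X)))   # a*e**2 - b*d*e + c*d**2
```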
For defining the resultant, one considers the Macaulay matrix, which is the matrix over the monomial basis of the -linear map in which each runs over the homogeneous polynomials of degree and the codomain is the -module of the homogeneous polynomials of degree . If , the Macaulay matrix is the Sylvester matrix, and is a square matrix, but this is no longer true for . Thus, instead of considering the determinant, one considers all the maximal minors, that is, the determinants of the square submatrices that have as many rows as the Macaulay matrix. Macaulay proved that the -ideal generated by these maximal minors is a principal ideal, which is generated by the greatest common divisor of these minors. As one is working with polynomials with integer coefficients, this greatest common divisor is defined up to its sign. The generic Macaulay resultant is the greatest common divisor which becomes , when, for each , zero is substituted for all coefficients of except the coefficient of , for which one is substituted. Properties of the generic Macaulay resultant The generic Macaulay resultant is an irreducible polynomial. It is homogeneous of degree in the coefficients of , where is the Bézout bound. The product with the resultant of every monomial of degree in belongs to the ideal of generated by . Resultant of polynomials over a field From now on, we consider that the homogeneous polynomials of degrees have their coefficients in a field , that is, that they belong to . Their resultant is defined as the element of obtained by replacing in the generic resultant the indeterminate coefficients by the actual coefficients of the . The main property of the resultant is that it is zero if and only if have a nonzero common zero in an algebraically closed extension of . The "only if" part of this theorem results from the last property of the preceding paragraph, and is an effective version of the projective Nullstellensatz: if the resultant is nonzero, then , where is the Macaulay degree and is the maximal homogeneous ideal. This implies that have no other common zero than the unique common zero, , of . Computability As the computation of a resultant may be reduced to computing determinants and polynomial greatest common divisors, there are algorithms for computing resultants in a finite number of steps. However, the generic resultant is a polynomial of very high degree (exponential in ) depending on a huge number of indeterminates. It follows that, except for very small and very small degrees of input polynomials, the generic resultant is, in practice, impossible to compute, even with modern computers. Moreover, the number of monomials of the generic resultant is so high that, even if it could be computed, the result could not be stored on available memory devices, even for rather small values of and of the degrees of the input polynomials. Therefore, computing the resultant makes sense only for polynomials whose coefficients belong to a field or are polynomials in few indeterminates over a field. In the case of input polynomials with coefficients in a field, the exact value of the resultant is rarely important; only its equality (or not) to zero matters. As the resultant is zero if and only if the rank of the Macaulay matrix is lower than the number of its rows, this equality to zero may be tested by applying Gaussian elimination to the Macaulay matrix. This provides a computational complexity where is the maximum degree of input polynomials. 
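The rank test described above can be sketched directly: build the Macaulay matrix at the Macaulay degree D and check whether its rank falls below the number of degree-D monomials. The following Python sketch is my own illustration with a hand-picked example, not an optimized implementation.

```python
from itertools import combinations_with_replacement
from sympy import symbols, Poly, Matrix, Mul

def exponent_tuples(nvars, deg):
    # all exponent vectors of total degree `deg` in `nvars` variables
    result = []
    for combo in combinations_with_replacement(range(nvars), deg):
        exps = [0] * nvars
        for i in combo:
            exps[i] += 1
        result.append(tuple(exps))
    return result

def macaulay_resultant_is_zero(polys, vars_):
    n = len(vars_)
    degs = [Poly(p, *vars_).total_degree() for p in polys]
    D = sum(degs) - n + 1                     # the Macaulay degree
    monoms = exponent_tuples(n, D)            # monomial basis of the degree-D forms
    products = []
    for p, d in zip(polys, degs):
        for m in exponent_tuples(n, D - d):   # multipliers of degree D - d
            multiplier = Mul(*[v**k for v, k in zip(vars_, m)])
            coeffs = dict(Poly(multiplier * p, *vars_).terms())
            products.append([coeffs.get(mon, 0) for mon in monoms])
    M = Matrix(products)                      # one row per product, one column per degree-D monomial
    return M.rank() < len(monoms)             # True  <=>  the resultant vanishes

x, y, z = symbols('x y z')
# three conics with the common projective zero (1 : 1 : 1), so the resultant must vanish
print(macaulay_resultant_is_zero([x*y - z**2, x**2 - y*z, y**2 - x*z], (x, y, z)))   # True
```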
Another case where the computation of the resultant may provide useful information is when the coefficients of the input polynomials are polynomials in a small number of indeterminates, often called parameters. In this case, the resultant, if not zero, defines a hypersurface in the parameter space. A point belongs to this hypersurface if and only if there are values of which, together with the coordinates of the point, are a zero of the input polynomials. In other words, the resultant is the result of the "elimination" of from the input polynomials. U-resultant Macaulay's resultant provides a method, called "U-resultant" by Macaulay, for solving systems of polynomial equations. Given homogeneous polynomials of degrees in indeterminates over a field , their U-resultant is the resultant of the polynomials , where is the generic linear form whose coefficients are new indeterminates. The notation or for these generic coefficients is traditional, and is the origin of the term U-resultant. The U-resultant is a homogeneous polynomial in . It is zero if and only if the common zeros of form a projective algebraic set of positive dimension (that is, there are infinitely many projective zeros over an algebraically closed extension of ). If the U-resultant is not zero, its degree is the Bézout bound . The U-resultant factorizes over an algebraically closed extension of into a product of linear forms. If is such a linear factor, then are the homogeneous coordinates of a common zero of . Moreover, every common zero may be obtained from one of these linear factors, and the multiplicity as a factor is equal to the intersection multiplicity of the at this zero. In other words, the U-resultant provides a completely explicit version of Bézout's theorem. Extension to more polynomials and computation The U-resultant as defined by Macaulay requires the number of homogeneous polynomials in the system of equations to be , where is the number of indeterminates. In 1981, Daniel Lazard extended the notion to the case where the number of polynomials may differ from , and the resulting computation can be performed via a specialized Gaussian elimination procedure followed by symbolic determinant computation. Let be homogeneous polynomials in of degrees over a field . Without loss of generality, one may suppose that . Setting for , the Macaulay bound is . Let be new indeterminates and define . In this case, the Macaulay matrix is defined to be the matrix, over the basis of the monomials in , of the linear map where, for each , runs over the linear space consisting of zero and the homogeneous polynomials of degree . Reducing the Macaulay matrix by a variant of Gaussian elimination, one obtains a square matrix of linear forms in . The determinant of this matrix is the U-resultant. As with the original U-resultant, it is zero if and only if have infinitely many common projective zeros (that is, if the projective algebraic set defined by has infinitely many points over an algebraic closure of ). Again as with the original U-resultant, when this U-resultant is not zero, it factorizes into linear factors over any algebraically closed extension of . The coefficients of these linear factors are the homogeneous coordinates of the common zeros of , and the multiplicity of a common zero equals the multiplicity of the corresponding linear factor. 
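A toy illustration of the U-resultant (my own example with two indeterminates, so only one polynomial plus the generic linear form is needed): the resultant factors into one linear form per projective zero.

```python
from sympy import symbols, resultant, factor

x, u0, u1 = symbols('x u0 u1')

# P(x0, x1) = x0**2 - 3*x0*x1 + 2*x1**2 has the projective zeros (1 : 1) and (2 : 1).
# Dehomogenize at x1 = 1 and take the resultant with the generic linear form u0*x0 + u1*x1.
P = x**2 - 3*x + 2
L = u0*x + u1
print(factor(resultant(P, L, x)))   # (u0 + u1)*(2*u0 + u1): one linear factor per projective zero
```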
The number of rows of the Macaulay matrix is less than , where is the usual mathematical constant and is the arithmetic mean of the degrees of the . It follows that all solutions of a system of polynomial equations with a finite number of projective zeros can be determined in time . Although this bound is large, it is nearly optimal in the following sense: if all input degrees are equal, then the time complexity of the procedure is polynomial in the expected number of solutions (Bézout's theorem). This computation may be practically viable when , and are not large.
Mathematics
Other algebra topics
null
2195217
https://en.wikipedia.org/wiki/Gopher
Gopher
Pocket gophers, commonly referred to simply as gophers, are burrowing rodents of the family Geomyidae. The roughly 41 species are all endemic to North and Central America. They are commonly known for their extensive tunneling activities and their ability to destroy farms and gardens. The name "pocket gopher" on its own may refer to any of a number of genera within the family Geomyidae. These are the "true" gophers, but several ground squirrels in the distantly related family Sciuridae are often called "gophers", as well. The origin of the word "gopher" is uncertain; the French gaufre, meaning waffle, has been suggested, on account of the gopher tunnels resembling the honeycomb-like pattern of holes in a waffle; another suggestion is that the word is of Muskogean origin. Description Pocket gophers weigh around , and are about in body length, with a tail long. A few species reach weights approaching . Within any particular gopher species, the males are larger than the females, and can be nearly double their weight. Average lifespans are one to three years. The maximum lifespan for the pocket gopher is about five years. Some gophers, such as those in the genus Geomys, have lifespans that have been documented as up to seven years in the wild. Most gophers have brown fur that often closely matches the color of the soil in which they live. Their most characteristic features are their large cheek pouches, from which the word "pocket" in their name derives. These pouches are fur-lined, can be turned inside out, and extend from the side of the mouth well back onto the shoulders. Gophers have small eyes and a short, hairy tail, which they use to feel around tunnels when they walk backwards. Pocket gophers have often been found to carry external parasites including, most commonly, lice, but also ticks, fleas, and mites. Common predators of the gopher include weasels, snakes, and hawks. Behavior All pocket gophers create a network of tunnel systems that provide protection and a means of collecting food. They are larder hoarders, and their cheek pouches are used for transporting food back to their burrows. Gophers can collect large hoards. Unlike ground squirrels, gophers do not live in large communities and seldom find themselves above ground. Tunnel entrances can be identified by small piles of loose soil covering the opening. Burrows are in many areas where the soil is softer and easily tunneled. Gophers often visit vegetable gardens, lawns, or farms, as they like moist soil (see Soil biomantle). This has led to their frequent treatment as pests. Gophers eat plant roots, shrubs, and other vegetables such as carrots, lettuce, radishes, and any other vegetables with juice. Some species are considered agricultural pests. The resulting destruction of plant life then leaves the area a stretch of denuded soil. At the same time, the soil disturbance created by turning it over can lead to the early establishment of ecological succession in communities of r-selected and other ruderal plant species. The stashing and subsequent decomposition of plant material in the gophers' larder can produce deep fertilization of the soil. Pocket gophers are solitary outside of the breeding season, aggressively maintaining territories that vary in size depending on the resources available. Males and females may share some burrows and nesting chambers if their territories border each other, but in general, each pocket gopher inhabits its own individual tunnel system. 
Although they attempt to flee when threatened, they may attack other animals, including cats and humans, and can inflict serious bites with their long, sharp teeth. Depending on the species and local conditions, pocket gophers may have a specific annual breeding season, or may breed repeatedly through the year. Each litter typically consists of two to five young, although this may be much higher in some species. The young are born blind and helpless and are weaned when around 40 days old. Control Geomys and Thomomys species are classed as "prohibited new organisms" under New Zealand's Hazardous Substances and New Organisms Act 1996, preventing them from being imported into the country. Classification Much debate exists among taxonomists about which races of pocket gophers should be recognized as full species, and the following list cannot be regarded as definitive. Family Geomyidae Genus Cratogeomys; some authors treat this genus as a subgenus of Pappogeomys. Yellow-faced pocket gopher (Cratogeomys castanops) Oriental Basin pocket gopher (C. fulvescens) Smoky pocket gopher (C. fumosus) Goldman's pocket gopher (C. goldmani) Merriam's pocket gopher (C. merriami) Perote pocket gopher (C. perotensis) Volcan de Toluca pocket gopher (C. planiceps) Genus Geomys – eastern pocket gophers; principally live in the southwestern United States, east of the Sierra Nevada mountains Desert pocket gopher (Geomys arenarius) Attwater's pocket gopher (G. attwateri) Baird's pocket gopher (G. breviceps) Plains pocket gopher (G. bursarius) Hall's pocket gopher (G. jugossicularis) Knox Jones's pocket gopher (G. knoxjonesi) Sand Hills pocket gopher (G. lutescens) Texas pocket gopher (G. personatus) Southeastern pocket gopher (G. pinetis) Strecker's pocket gopher (G. streckeri) Central Texas pocket gopher (G. texensis) Tropical pocket gopher (G. tropicalis) Genus Heterogeomys – giant pocket gophers or taltuzas; live in Mexico, Central America, and Colombia; some authors treat this genus as a subgenus of Orthogeomys. Chiriqui pocket gopher (Heterogeomys cavator) Cherrie's pocket gopher (H. cherriei) Darien pocket gopher (H. dariensis) Variable pocket gopher (H. heterodus) Hispid pocket gopher (H. hispidus) Big pocket gopher (H. lanius) Underwood's pocket gopher (H. underwoodi) Genus Orthogeomys; live in Guatemala, Honduras, and Mexico; Giant pocket gopher (O. grandis) Genus Pappogeomys; live in Mexico Buller's pocket gopher (P. bulleri) Genus Thomomys – western pocket gophers; widely distributed in North America, extending into the northwestern US, Canada, and the southeastern US. Black-and-Brown pocket gopher (T. atrovarius) Botta's pocket gopher (T. bottae) Camas pocket gopher (T. bulbivorus) Wyoming pocket gopher (T. clusius) Idaho pocket gopher (T. idahoensis) Mazama pocket gopher (T. mazama) Mountain pocket gopher (T. monticola) Nayar pocket gopher (T. nayarensis) Sierra Madre Occidental pocket gopher (T. sheldoni) Northern pocket gopher (T. talpoides) Townsend's pocket gopher (T. townsendii) Southern pocket gopher (T. umbrinus) Genus Zygogeomys Michoacan pocket gopher (Zygogeomys trichopus) Some sources also list a genus Hypogeomys, with one species, but this genus name is normally used for the Malagasy giant rat, which belongs to the family Nesomyidae. In popular culture Minnesota is nicknamed the "Gopher State", and the University of Minnesota's athletics teams are collectively known as the Golden Gophers, led by mascot Goldy Gopher. 
The Golden Gopher, however, refers to the thirteen-lined ground squirrel, which is not a member of the family Geomyidae. Gainer the Gopher is the mascot of the Saskatchewan Roughriders in the Canadian Football League. Gopher is a recurring character in Disney's Winnie the Pooh franchise. A gopher puppet is featured prominently in the film Caddyshack and its sequel. The mascot of the Go programming language is the Go Gopher. Gordon the Gopher is an English puppet gopher that appeared on Children's BBC between 1985 and 1987. Mac and Tosh, from the Looney Tunes franchise, are a pair of extremely well-mannered gophers.
Biology and health sciences
Rodents
Animals
8721011
https://en.wikipedia.org/wiki/Low-ionization%20nuclear%20emission-line%20region
Low-ionization nuclear emission-line region
A low-ionization nuclear emission-line region (LINER) is a type of galactic nucleus that is defined by its spectral line emission. The spectra typically include line emission from weakly ionized or neutral atoms, such as O, O+, N+, and S+. Conversely, the spectral line emission from strongly ionized atoms, such as O++, Ne++, and He+, is relatively weak. The class of galactic nuclei was first identified by Timothy Heckman in the third of a series of papers on the spectra of galactic nuclei that were published in 1980. Demographics of LINER galaxies Galaxies that contain LINERs are often referred to as LINER galaxies. LINER galaxies are very common; approximately one-third of all nearby galaxies (galaxies within approximately 20-40 Mpc) may be classified as LINER galaxies. Approximately 75% of LINER galaxies are either elliptical galaxies, lenticular galaxies, or S0/a-Sab galaxies (spiral galaxies with large bulges and tightly wound spiral arms). LINERs are found less frequently in Sb-Scd galaxies (spiral galaxies with small bulges and loosely wound spiral arms), and they are very rare in nearby irregular galaxies. LINERs also may be commonly found in luminous infrared galaxies (LIRGs), a class of galaxies defined by their infrared luminosities that are frequently formed when two galaxies collide with each other. Approximately one-quarter of LIRGs may contain LINERs. Scientific debates: energy sources and ionization mechanisms LINERs have been at the center of two major debates. First, astronomers have debated the source of energy that excites the ionized gas in the centers of these galaxies. Some astronomers have proposed that active galactic nuclei (AGN) with supermassive black holes are responsible for the LINER spectral emission. Other astronomers have asserted that the emission is powered by star formation regions. The other major issue is related to how the ions are excited. Some astronomers have suggested that shock waves propagating through the gas may ionize the gas, while others have suggested that photoionization (ionization by ultraviolet light) may be responsible. These debates are complicated by the fact that LINERs are found in a wide variety of objects with different brightnesses and morphologies. Moreover, the debate over the energy sources for LINERs is entangled with a similar debate over whether the light from star formation regions or the light from AGN produce the high infrared luminosities seen in LIRGs. Although both the energy sources and the excitation mechanisms for LINER emission are still being studied, many LINERs are frequently referred to as AGN. Star formation in LINERs A number of surveys have been performed to explore the connection between star formation and LINER activity. If a connection can be found between star formation activity and LINER activity, then this strengthens the possibility that LINERs are powered by the hot gas found in star formation regions. However, if star formation cannot be found in LINERs, then this definitively excludes star formation as powering LINER emission. Star formation in LIRGs with LINERs Recent observations with the Spitzer Space Telescope show a clear connection between LINER emission in luminous infrared galaxies (LIRGs) and star formation activity. The mid-infrared spectra of LIRGs with LINERs have been shown to look similar to the mid-infrared spectra of starburst galaxies, which suggest that infrared-bright LINERs are powered by star formation activity. 
However, some mid-infrared spectral line emission from AGN has also been detected in these galaxies, indicating that star formation may not be the only energy source in these galaxies. Star formation in normal galaxies with LINERs Normal nearby galaxies with LINERs, however, appear to be different. A few near-infrared spectroscopic surveys have identified some LINERs in normal galaxies that may be powered by star formation. However, most LINERs in nearby galaxies have low levels of star formation activity. Moreover, the stellar populations of many LINERs appear to be very old, and the mid-infrared spectra, as observed by the Spitzer Space Telescope, do not appear similar to the spectra expected from star formation. These results demonstrate that most LINERs in nearby normal galaxies may not be powered by star formation, although a few exceptions clearly exist. Notable LINER galaxies Messier 94 NGC 5005 NGC 5195 Sombrero Galaxy
Physical sciences
Active galactic nucleus
Astronomy
5532116
https://en.wikipedia.org/wiki/Phytelephas
Phytelephas
Phytelephas is a genus containing six known species of dioecious palms (family Arecaceae), occurring from southern Panama along the Andes to Ecuador, Bolivia, Colombia, northwestern Brazil, and Peru. They are commonly known as ivory palms, ivory-nut palms or tagua palms; the scientific name Phytelephas means "plant ivory" or, more literally, "plant elephant". This and the first two of the common names refer to the very hard white endosperm of their seeds (tagua nuts or jarina seeds), which resembles elephant ivory. Description They are medium-sized to tall palms, reaching up to 20 m tall, with pinnate leaves. The "nut" is covered with pericarp, which gets removed by animals. The kernel is covered with a brown, flaky skin and shaped like a small avocado, roughly 4–8 cm in diameter. The male plants produce catkins of male flowers up to three feet (0.9 m) in length, each flower bearing up to one thousand stamens, the greatest number of any monocot. Uses Given restrictions on the trade in elephant ivory as well as animal welfare concerns, ivory palm endosperm is often used as a substitute for elephant ivory today, and traded under the names vegetable ivory, palm ivory, marfil-vegetal, corozo, tagua, or jarina. When dried out, it can be carved just like elephant ivory; it is often used for beads, buttons, figurines and jewelry, and can be dyed. More recently, palm ivory has been used in the production of bagpipes. Vegetable ivory stimulates local economies in South America, provides an alternative to cutting down rainforests for farming, and prevents elephants from being killed for the ivory in their tusks. In Ecuador, the Ecuadorean ivory palm (P. aequatorialis) is the species whose kernels are widely harvested. The large-fruited ivory palm (P. macrocarpa) is the ivory palm native to Brazil, and most internationally traded palm ivory is derived from this species. The Colombian ivory palm (P. schottii) and P. tenuicaulis, both formerly included in P. macrocarpa, are the usual source of the product in Colombia. The other two species are quite rare and have a restricted range; they are not used for tagua production on a significant scale. The kernels are picked up from the ground after the ripe fruit has detached from the tree and forest animals have taken care of the pericarp, or harvested when ripe and the pericarp manually removed. As the nut shrinks when it hardens, a small hollow cavity can form in the center. It is often not possible to know whether the inside of the nut will have a small cavity in the center until it is cut into. Therefore, when carving, it is common either to incorporate the hole or cavity into carvings or not to carve deep enough to reach a potential cavity. In their native range, these palms are also used as a source of food and construction wood. List of species The following species are recognized:
Biology and health sciences
Arecales (inc. Palms)
Plants
5536832
https://en.wikipedia.org/wiki/Diplocaulus
Diplocaulus
Diplocaulus (meaning "double stalk") is an extinct genus of lepospondyl amphibians which lived from the Late Carboniferous to the Late Permian of North America and Africa. Diplocaulus is by far the largest and best-known of the lepospondyls, characterized by a distinctive boomerang-shaped skull. Remains attributed to Diplocaulus have been found from the Late Permian of Morocco and represent the youngest-known occurrence of a lepospondyl. Description Diplocaulus had a stocky, salamander-like body, but was relatively large, reaching up to in length. Although a complete tail is unknown for the genus, a nearly complete articulated skeleton described in 1917 preserved a row of tail vertebrae near the head. This was construed as circumstantial evidence for a long, thin tail capable of reaching the head if the animal was curled up. Most studies since this discovery have argued that anguiliform (eel-like) tail movement was the main force of locomotion utilized by Diplocaulus and its relatives. Horns The most distinctive features of this genus and its closest relatives were a pair of long protrusions or horns at the rear of the skull, giving the head a boomerang-like shape. Most of the outer/front edge of each horn was formed by the elongated, blade-like squamosal bone. The rear edge of the skull and horns, on the other hand, was formed by the postparietal bones, also known as dermosupraoccipitals in older publications. However, the primary component of each horn (including the tips) is a long bone with a historically controversial identification. Many early sources considered the bone to be a tabular, which in other early tetrapods is a small bone lying at the rear edge of the skull. However, Olson (1951) doubted this, arguing that the bone's contact with the parietals excluded the possibility of it being a tabular. He argued that the bone was the supratemporal bone, which had enlarged and shifted towards the rear tip of the skull. Beerbower (1963) countered Olson's reasoning by pointing out that Urocordylus, a newt-like relative of Diplocaulus, retained both a supratemporal bone and a tabular bone. In Urocordylus, the tabular lies closer to the back of the skull and even contacts the parietals, invalidating Olson's main point. Based on this observation, it is more likely that the primary bone of the horns in Diplocaulus is a tabular. Many studies (even a later publication by Olson) now refer to Diplocaulus horns as tabular horns based on Beerbower's argument. Species D. salamandroides D. salamandroides was the first species of Diplocaulus to be discovered. Remains from this species were discovered near Danville, Illinois by William Gurley and J.C. Winslow, a pair of local geologists. The fossils were later described by renowned paleontologist Edward Drinker Cope in 1877. This species is only known from a small number of vertebrae sent to Cope by Gurley and Winslow. These vertebrae were noted for their similarities to those of salamanders (hence the specific name salamandroides), although Cope was reluctant to refer them to any known group. A large jaw bone with labyrinthodont teeth was associated with some of these vertebrae, but it was much larger than expected for the vertebrae and likely belonged to Eryops or some other larger amphibian. D. salamandroides could be distinguished from D. magnicornis by its small size (from a fifth to a sixth the size of the latter) and less pronounced accessory articular processes (at the time identified as zygosphene-zygantrum articulations). 
The rocks in which these fossils were discovered had been informally referred to as the "Clepsydrops shales", named after a local genus of early synapsid by Cope in 1865. The shales were initially believed to be from either the Permian or Triassic periods in age based on the purported presence of reptile and lungfish fossils. By 1878, Cope had decided that the site was Permian. In 1908, E.C. Case noted that the shales also contain remains from fish which were from the late Carboniferous and early Permian periods. He argued that, while the Clepsydrops shales of Illinois and the similar red beds of Texas were evidently formed after the major Carboniferous coal deposits, there was not sufficient evidence to exclude them from the Carboniferous period itself. Nowadays the Clepsydrops shales are typically assigned to the McLeansboro or Mattoon Formations. D. salamandroides fossils have also been found in Pennsylvania. These formations are now believed to be Missourian (late Carboniferous) in age. D. magnicornis This species, described by Cope in 1882, is by far the most common and well-described member of the genus. D. magnicornis was the first species known from more than vertebrae, and it allowed Cope and other paleontologists to realize the nature of Diplocaulus as a bizarre long-horned "batrachian" (amphibian). Much of modern knowledge on the genus is based on this species, as it outnumbers any other Diplocaulus remains by hundreds of specimens. D. magnicornis had a wide temporal distribution throughout the red beds of Texas and Oklahoma. D. brevirostris D. brevirostris was similar to D. magnicornis, although it was significantly more rare. It is represented by a small number of specimens found in an early strata of the Texas red beds, specifically the Arroyo Formation of the Clear Fork Group. This species can be differentiated from D. magnicornis by the much shorter and blunter snout compared to the length of the skull as a whole. In addition, the horns are more elongated, the parietals have a convex upper surface, and the rear edge of the skull is more strongly and smoothly curved. While juvenile members of D. magnicornis also have a smoothly curved rear edge of the skull, all known D. brevirostris specimens are clearly adults as shown by their robust skull ornamentation, long horns, and large size. Therefore, this trait is a legitimate distinguishing feature of adult specimens of this species. The only specimen known from more than a skull is the type specimen, AM 4470, which preserves some vertebrae similar to those of "D. primigenius". E.C. Olson, the original describer of the species, suggested that it occupied different habitats than D. magnicornis such as mountain streams, accounting for its comparative rarity. However, other studies have suggested that D. magnicornis would have lived in similar environments, invalidating Olson's hypothesis. D. recurvatus This species, from the Vale Formation of the Texas red beds, was very similar to D. magnicornis, and partially coexisted alongside that species in younger strata. Olson hypothesized that D. recurvatus may have been descended from an early stock of D. magnicornis. D. recurvatus differs from D. magnicornis in one specific trait: the tips of the tabular horns are "crooked". The tips are bent relative to the rest of the horns, and abruptly taper. Comparison to a growth series of D. magnicornis indicates that D. recurvatus specimens had developmental pathways which significantly differed from D. magnicornis. 
For example, skull length and width seem to be inversely correlated in D. recurvatus and directly correlated in D. magnicornis. In addition, the restriction in the horns of D. recurvatus develops in an area which would otherwise expand in adult D. magnicornis. D. minimus Diplocaulus minimus is a species known from the Ikakern Formation of Morocco. It had an unusually asymmetrical skull, with the left prong being long and tapering as in other species but the right prong being much shorter and more rounded. This feature was present in multiple skulls referred to this species, so it is very unlikely to be a result of crushing or distortion. Some studies have suggested that this species is more closely related to Diploceraspis than to Diplocaulus magnicornus. This may suggest that either Diplocaulus is not a true monophyletic genus, that Diploceraspis is a junior synonym of the genus, or that "Diplocaulus" minimus represents a distinct genus. Dubious species D. limbatus was the third species of Diplocaulus to be named, and remained the second most well known member of the genus until the 1950s. It was described by E.D. Cope in 1895 based on several incomplete specimens found in the Texas red beds. The type specimen was a poorly preserved skull and partial skeleton designated AM 4471. Cope found that the skull of this specimen had shorter, thinner horns than those of D. magnicornis, as well as a seemingly unique feature: a large notch separating the quadratojugal from the rest of the tabular horn. E.C. Case later provided additional distinctions present in a skull referred to D. limbatus, including smoother edges to the skull, larger eyes, and more pointed horns. However, additional D. limbatus specimens prepared by Douthitt have shown that many of Case's identifications were erroneous, and that only the notch identified by Cope could be used to distinguish it from D. magnicornis. In 1951, E.C. Olson concluded that AM 4471 was too poorly preserved to differentiate from D. magnicornis, and therefore he designated D. limbatus as a synonym of that species. However, he also analyzed the referred D. limbatus skull described by Case, AM 4470, and found that it was unique enough to qualify as the type specimen of a new Diplocaulus species: D. brevirostris. D. copei and D. pusillus were both named by German paleontologist Ferdinand Broili in 1904. D. copei was known from three Texan specimens, all of which were heavily crushed and incomplete. Broili argued that this species was unique due to its small size and horns which bend inwards. However, E.C. Case could find no way to distinguish between its specimens and those of D. magnicornis and "D. limbatus", and he rejected the species as indeterminate, a decision followed by later sources. D. pusillus, known from a pair of minuscule skulls found in Texas and stored at the Palaeontological Museum of Munich, is a more controversial species. The skulls are distinctive compared to adult Diplocaulus specimens from other species, and some early sources have doubted their referral to the genus. These sources voiced a possibility that the skulls came from some other amphibian from the area, such as Trimerorhachis. In 1918, S.W. Williston used the D. pusillus specimens as the basis for Platyops parvus, a new genus of diplocaulid. In 1946, E.C. Case revised Williston's name to Permoplatyops parvus, as the genus name "Platyops" was already in use. 
He brought up the possibility that the skulls were from an extremely young Diplocaulus, and in response Olson (1951) designated D. pusillus (and therefore Permoplatyops parvus) as a synonym of one of the other red bed Diplocaulus species, such as D. magnicornis. D. primigenius was described in 1921 by M.G. Mehl based on a single specimen preserving a skull, shoulder elements, and a string of vertebrae. The skull was seemingly identical to that of D. magnicornis, but the vertebrae were peculiar. They were quite enlarged, particularly the neural spines, which were tall, rough structures with a depression at their highest extent. E.C. Olson (1951) noted that the vertebrae were comparable to those of the holotype of D. brevirostris (AM 4470), but also that the skull was much more akin to D. magnicornis instead. While Olson did decide to synonymize D. primigenius with D. magnicornis, he also noted that the specimen remained an interesting conundrum with implications for the disconnect between vertebral and skull development in Diplocaulus. D. parvus, named by E.C. Olson in 1972, was designated as a new species with no connection whatsoever to "Permoplatyops parvus", which at that point was treated as a synonym of D. magnicornis. D. parvus is known from a single specimen from the Chickasha Formation of Oklahoma. It was generally very similar to D. recurvatus, differing primarily in its smaller size and isolated geographical location. Germain (2010) did not consider these traits sufficient to justify retaining D. parvus as separate from D. recurvatus. The D. parvus specimen is potentially the youngest Diplocaulus fossil recovered from North America, at about 270 million years old. Paleobiology Function of the tabular horns Various hypotheses have been put forth as to the purpose of these horns. One of the earliest suggestions, provided by S.W. Williston in 1909, was that they protected external gills, but in 1911 E.C. Case pointed out that there was slim evidence for this idea. Another hypothesis was provided in a dissertation, published by University of Kansas professor Herman Douthitt in 1917, which focused entirely on the anatomy of Diplocaulus. Douthitt argued that the most undisputed function was that the horns acted as a counterweight to offset the heavily built forward part of the head, which would have been difficult to lift otherwise. However, he also noted that this was probably not their primary function, and that they may have been maladaptive developments "as the result of some internal metabolic derangement". In 1951, E.C. Olson suggested that the horns could have supported skin flaps capable of assisting the animal in skate- or stingray-like locomotion. However, he admitted that his suggestion was entirely conjectural, considering the lack of soft tissue evidence. He also briefly proposed other possible functions, such as the use of the broad head as a burrowing tool to escape predators or survive droughts. J.R. Beerbower revived the hypothesis that the horns were involved in respiration during his 1963 description of Diploceraspis, which was a close relative of Diplocaulus. His argument relied on the possibility that the horns supported operculum-like vertical pouches protecting external or internal gills. One possibility is that the shape was defensive, since even a large predator would have a hard time trying to swallow a creature with such a wide head. 
Lift A new hypothesis for the function of the horns was presented by South African paleontologist Arthur Cruickshank & fluid dynamicist B.W. Skews in a 1980 paper. They proposed that the tabular horns acted as a hydrofoil, allowing the animal to more easily control how water flows over its head. In the process of their investigation, Cruickshank & Skews developed a full-scale model of the head and a portion of the body of a Diplocaulus, constructed from balsa wood and modelling clay. The model was placed in a wind tunnel and subjected to several tests to determine drag, lift, and other forces experienced by the head in different situations. The results showed that the horns generated significant lift, which would have allowed the animal to rise in the water column of a river or stream quite quickly and easily. Lift was present when the head was parallel to the flow of water (modeled by air), with lift increasing at a higher attack angle (angle above the horizontal) and only dropping once the head reached a high stall angle of 22 degrees. Lift and pitching moment were minimized at 1.5 degrees below the horizontal, which may have been the natural resting angle of the head. When the "mouth" of the model was opened, lift was barely affected, the pitching moment decreased, and drag only slightly increased. This indicates that Diplocaulus would not have been seriously disadvantaged if it chose to attack prey items while rising through the water. Cruickshank & Skews also glued numerous small spheres to the model in order to test how an irregular texture would affect the mechanics of the head. The highly irregular spheres drastically reduced lift and increased drag, but when they were rubbed off (leaving only the slightly irregular glue layer), the only major reduction in aerodynamic quality (compared to the smooth model) was that the stall angle decreased to 16 degrees. The study also examined the hydrodynamics of Diploceraspis, which lacked the flange on the underside of the horns that was present in Diplocaulus. When the flange was removed from the smooth model, the resulting lift forces started being generated at a lower angle, 6 degrees below the horizontal rather than 1.5. This may indicate that Diploceraspis was better adapted for slower streams, where immediate lift was prioritized over the more gradual lift created by the Diplocaulus model, which would have been able to take advantage of a swifter current. Paleoecology Three juvenile Diplocaulus in a burrow of eight (plus one juvenile Eryops) were found to have been partially eaten by the sail-backed synapsid Dimetrodon, which likely unearthed the amphibians during a drought. One of the three was killed by a bite to the head that took part of its skull and portions of the brain, a fatal injury that the animal could not defend against. Gallery
Biology and health sciences
Prehistoric amphibians
Animals
5537238
https://en.wikipedia.org/wiki/Triadobatrachus
Triadobatrachus
Triadobatrachus is an extinct genus of salientian frog-like amphibians, including only one known species, Triadobatrachus massinoti. It is the oldest member of the frog lineage known, and an excellent example of a transitional fossil. It lived during the Early Triassic about 250 million years ago, in what is now Madagascar. Triadobatrachus was long, and still retained many primitive characteristics, such as possessing at least 26 vertebrae, where modern frogs have only four to nine. At least 10 of these vertebrae formed a short tail, which the animal may have retained as an adult. It probably swam by kicking its hind legs, although it could not jump, as most modern frogs can. Its skull resembled that of modern frogs, consisting of a latticework of thin bones separated by large openings. This creature, or a relative, eventually evolved into modern frogs, the earliest example of which is Prosalirus, millions of years later in the Early Jurassic. It was first discovered in the 1930s, when Adrien Massinot, near the village of Betsiaka in northern Madagascar, found an almost complete skeleton in the Induan Middle Sakamena Formation of the Sakamena Group. The animal must have fossilized soon after its death, because all bones lay in their natural anatomical position. Only the anterior part of the skull and the ends of the limbs were missing. This fossil was initially described under the name Protobatrachus massinoti by the French paleontologist Jean Piveteau in 1936. Much more detailed descriptions have been published more recently. Although it was found in marine deposits, the general structure of Triadobatrachus shows that it probably lived for part of the time on land and breathed air. Its proximity to the mainland is further borne out by the remains of terrestrial plants found with it, and by the fact that most extant amphibians do not tolerate saltwater, an intolerance that was probably already present in the earliest lissamphibians. Triadobatrachus is similar in age to the salientian Czatkobatrachus, which is known from the Early Triassic (Olenekian) of Poland. Gallery
Biology and health sciences
Prehistoric amphibians
Animals
996950
https://en.wikipedia.org/wiki/Starburst%20region
Starburst region
A starburst region is a region of space that is undergoing a large amount of star formation. A starburst is an astrophysical process that involves star formation occurring at a rate that is large compared to the rate that is typically observed. This starburst activity will consume the available interstellar gas supply over a timespan that is much shorter than the lifetime of the galaxy. For example, the nebula NGC 6334 has a star formation rate estimated to be 3600 solar masses per million years, compared to the star formation rate of the entire Milky Way of about seven million solar masses per million years. Due to the high amount of star formation, a starburst is usually accompanied by much higher gas pressure and a larger ratio of hydrogen cyanide to carbon monoxide emission lines than are usually observed. Starbursts can occur in entire galaxies or just regions of space. For example, the Tarantula Nebula is a nebula in the Large Magellanic Cloud which has one of the highest star formation rates in the Local Group. By contrast, a starburst galaxy is an entire galaxy that is experiencing a very high star formation rate. One notable example is Messier 82, in which the gas pressure is 100 times greater than in the local neighborhood, and which is forming stars at about the same rate as the entire Milky Way in a region only about across. At this rate M82 will consume its 200 million solar masses of atomic and molecular hydrogen in 100 million years (its free-fall time). Starburst regions can occur in different shapes; for example, in Messier 94 the inner ring is a starburst region. Messier 82 has a starburst core of about 600 parsecs in diameter. Starbursts are common during galaxy mergers such as the Antennae Galaxies. In the case of mergers, the starburst can either be local or galaxy-wide depending on the galaxies and how they are merging.
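As a back-of-the-envelope illustration of the gas-depletion reasoning above (my own arithmetic, with an assumed star formation rate chosen only so the numbers are easy to follow):

```python
# depletion time = available gas mass / star formation rate
gas_mass_msun = 200e6      # M82's quoted reservoir of atomic and molecular hydrogen, in solar masses
sfr_msun_per_yr = 2.0      # assumed star formation rate (solar masses per year), for illustration only
depletion_time_myr = gas_mass_msun / sfr_msun_per_yr / 1e6
print(depletion_time_myr, "million years")   # 100.0 million years
```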
Physical sciences
Stellar astronomy
Astronomy
997476
https://en.wikipedia.org/wiki/Night%20sky
Night sky
The night sky is the nighttime appearance of celestial objects like stars, planets, and the Moon, which are visible in a clear sky between sunset and sunrise, when the Sun is below the horizon. Natural light sources in a night sky include moonlight, starlight, and airglow, depending on location and timing. Aurorae light up the skies above the polar circles. Occasionally, a large coronal mass ejection from the Sun or simply high levels of solar wind may extend the phenomenon toward the Equator. The night sky and studies of it have a historical place in both ancient and modern cultures. In the past, for instance, farmers have used the status of the night sky as a calendar to determine when to plant crops. Many cultures have drawn constellations between stars in the sky, using them in association with legends and mythology about their deities. The history of astrology has generally been based on the belief that relationships between heavenly bodies influence or explain events on Earth. The scientific study of objects in the night sky takes place in the context of observational astronomy. Visibility of celestial objects in the night sky is affected by light pollution. The presence of the Moon in the night sky has historically hindered astronomical observation by increasing the amount of sky brightness. With the advent of artificial light sources, however, light pollution has been a growing problem for viewing the night sky. Optical filters and modifications to light fixtures can help to alleviate this problem, but for optimal views, both professional and amateur astronomers seek locations far from urban skyglow. Brightness The fact that the sky is not completely dark at night, even in the absence of moonlight and city lights, can be easily observed, since if the sky were absolutely dark, one would not be able to see the silhouette of an object against the sky. The intensity of the sky brightness varies greatly over the day, and the primary cause differs as well. During daytime, when the Sun is above the horizon, direct scattering of sunlight (Rayleigh scattering) is the overwhelmingly dominant source of light. In twilight, the transition periods between day and night, the situation is more complicated, and a further differentiation is required. Twilight is divided into three segments according to how far the Sun is below the horizon, in steps of 6°. After sunset, civil twilight sets in; it ends when the Sun drops more than 6° below the horizon. This is followed by nautical twilight, when the Sun's altitude is between −6° and −12°, after which comes astronomical twilight, defined as the period from −12° to −18°. When the Sun drops more than 18° below the horizon, the sky generally attains its minimum brightness. Several sources contribute to the intrinsic brightness of the sky, namely airglow, indirect scattering of sunlight, scattering of starlight, and artificial light pollution. 
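The 6° twilight bands described above can be summarized as a small classification rule; the following helper is only an illustration and is not part of the article.

```python
def sky_phase(sun_altitude_deg: float) -> str:
    # Classify the sky by the Sun's altitude in degrees (negative means below the horizon),
    # following the 6-degree twilight bands described above.
    if sun_altitude_deg > 0:
        return "day"
    if sun_altitude_deg > -6:
        return "civil twilight"
    if sun_altitude_deg > -12:
        return "nautical twilight"
    if sun_altitude_deg > -18:
        return "astronomical twilight"
    return "night"   # the sky is near its minimum brightness

print(sky_phase(-4), sky_phase(-15), sky_phase(-25))   # civil twilight nautical twilight night
```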
Visible stars range in color from blue (hot) to red (cold), but with such small points of faint light, most look white because they stimulate the rod cells without triggering the cone cells. If it is particularly dark and a particularly faint celestial object is of interest, averted vision may be helpful. The stars of the night sky cannot be counted unaided because they are so numerous and there is no way to track which have been counted and which have not. Further complicating the count, fainter stars may appear and disappear depending on exactly where the observer is looking. The result is an impression of an extraordinarily vast star field. Because stargazing is best done from a dark place away from city lights, dark adaptation is important to achieve and maintain. It takes several minutes for eyes to adjust to the darkness necessary for seeing the most stars, and surroundings on the ground are hard to discern. A red flashlight can be used to illuminate star charts and telescope parts without undoing the dark adaptation. Constellations Star charts are produced to aid stargazers in identifying constellations and other celestial objects. Constellations are prominent because their stars tend to be brighter than other nearby stars in the sky. Different cultures have created different groupings of constellations based on differing interpretations of the more-or-less random patterns of dots in the sky. Constellations were identified without regard to distance to each star, but instead as if they were all dots on a dome. Orion is among the most prominent and recognizable constellations. The Big Dipper (which has a wide variety of other names) is helpful for navigation in the northern hemisphere because it points to Polaris, the north star. The pole stars are special because they are approximately in line with the Earth's axis of rotation so they appear to stay in one place while the other stars rotate around them through the course of a night (or a year). Planets Planets, named for the Greek word for 'wanderer', process through the starfield a little each day, executing loops with time scales dependent on the length of the planet's year or orbital period around the Sun. Planets, to the naked eye, appear as points of light in the sky with variable brightness. Planets shine due to sunlight reflecting or scattering from the planets' surface or atmosphere. Thus, the relative Sun-planet-Earth positions determine the planet's brightness. With a telescope or good binoculars, the planets appear as discs demonstrating finite size, and it is possible to observe orbiting moons which cast shadows onto the host planet's surface. Venus is the most prominent planet, often called the "morning star" or "evening star" because it is brighter than the stars and often the only "star" visible near sunrise or sunset, depending on its location in its orbit. Because of its brightness, Venus can sometimes be seen after sunrise. Mercury, Mars, Jupiter and Saturn are also visible to the naked eye in the night sky. The Moon The Moon appears as a grey disc in the sky with cratering visible to the naked eye. It spans, depending on its exact location, 29–33 arcminutes – which is about the size of a thumbnail at arm's length, and is readily identified. Over 29.53 days on average, the moon goes through a full cycle of lunar phases. People can generally identify phases within a few days by looking at the Moon. Unlike stars and most planets, the light reflected from the Moon is bright enough to be seen during the day. 
Some of the most spectacular views of the Moon come during the full moon phase, near sunset or sunrise. The Moon on the horizon benefits from the Moon illusion, which makes it appear larger. The Sun's light reflected from the Moon also travels through more of the atmosphere near the horizon, which colors the Moon orange and/or red. Comets Comets come to the night sky only rarely. Comets are illuminated by the Sun, and their tails extend away from the Sun. A comet with a visible tail is quite unusual – a great comet appears about once a decade. They tend to be visible only shortly before sunrise or after sunset because those are the times they are close enough to the Sun to show a tail. Clouds Clouds obscure the view of other objects in the sky, though varying thicknesses of cloud cover have differing effects. A very thin cirrus cloud in front of the Moon might produce a rainbow-colored ring around it. Stars and planets are too small or dim to take on this effect and are instead only dimmed (often to the point of invisibility). Thicker cloud cover obscures celestial objects entirely, making the sky black or reflecting city lights back down. Clouds are often close enough to afford some depth perception, though they are hard to see without moonlight or light pollution. Other objects On clear dark nights in unpolluted areas, when the Moon appears thin or below the horizon, the Milky Way, a band of what looks like white dust, can be seen. The Magellanic Clouds of the southern sky are easily mistaken to be Earth-based clouds (hence the name) but are in fact collections of stars found outside the Milky Way known as dwarf galaxies. Zodiacal light is a glow that appears near the points where the Sun rises and sets, and is caused by sunlight interacting with interplanetary dust. Gegenschein is a faint brightening of the night sky centered at the antisolar point, caused by the backscatter of sunlight by interplanetary dust. Shortly after sunset and before sunrise, artificial satellites often look like stars – similar in brightness and size – but move relatively quickly. Those that fly in low Earth orbit cross the sky in a couple of minutes. Some satellites, including space debris, appear to blink or have a periodic fluctuation in brightness because they are rotating. Satellite flares can appear brighter than Venus, with notable examples including the International Space Station (ISS) and Iridium satellites. Meteors streak across the sky infrequently. During a meteor shower, they may average one a minute at irregular intervals, but otherwise their appearance is a random surprise. The occasional meteor will make a bright, fleeting streak across the sky, and they can be very bright in comparison to the night sky. Aircraft are also visible at night, distinguishable at a distance from other objects because their navigation lights blink. Sky map Future and past The night sky changes over time. Within the Solar System, Earth and the other planets continually move along their orbits around the Sun, and the Moon orbits Earth while its orbit slowly expands, so that it will gradually appear smaller. Over the years, the night sky also changes as stars show proper motion, change in brightness because they are variable stars or because their distance increases, or undergo other celestial events such as supernovae. Over a timescale of tens of billions of years, the night sky in the Local Group will change significantly when the Andromeda Galaxy and the Milky Way merge into a single elliptical galaxy.
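The claim that low-Earth-orbit satellites cross the sky in a couple of minutes can be checked with basic orbital mechanics. The sketch below assumes a circular orbit at 400 km, roughly the altitude of the ISS; the constants and the 120° sky arc are illustrative assumptions, and the apparent rate is estimated only for a pass near the zenith (near the horizon the slant range is larger, so the motion appears slower).

```python
import math

MU_EARTH = 3.986e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6     # mean Earth radius, m
ALTITUDE = 400e3      # assumed circular-orbit altitude (roughly the ISS), m

a = R_EARTH + ALTITUDE                          # orbital radius
period_s = 2 * math.pi * math.sqrt(a**3 / MU_EARTH)
speed_ms = math.sqrt(MU_EARTH / a)              # circular orbital speed

# For a pass directly overhead, the apparent angular rate is roughly the
# orbital speed divided by the slant range (here, the altitude).
angular_rate_deg_s = math.degrees(speed_ms / ALTITUDE)

print(f"Orbital period: {period_s / 60:.0f} min")              # about 92 minutes
print(f"Apparent rate near zenith: {angular_rate_deg_s:.1f} deg/s")
print(f"Time to cross ~120 deg of sky: {120 / angular_rate_deg_s / 60:.1f} min")
```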
Physical sciences
Astronomy basics
Astronomy
998070
https://en.wikipedia.org/wiki/Node%20%28physics%29
Node (physics)
A node is a point along a standing wave where the wave has minimum amplitude. For instance, in a vibrating guitar string, the ends of the string are nodes. By changing the position of the end node through frets, the guitarist changes the effective length of the vibrating string and thereby the note played. The opposite of a node is an antinode, a point where the amplitude of the standing wave is at a maximum. These occur midway between the nodes. Explanation Standing waves result when two sinusoidal wave trains of the same frequency are moving in opposite directions in the same space and interfere with each other. They occur when waves are reflected at a boundary, such as sound waves reflected from a wall or electromagnetic waves reflected from the end of a transmission line, and particularly when waves are confined in a resonator at resonance, bouncing back and forth between two boundaries, such as in an organ pipe or guitar string. In a standing wave the nodes are a series of locations at equally spaced intervals where the wave amplitude (motion) is zero. At these points the two waves add with opposite phase and cancel each other out. They occur at intervals of half a wavelength (λ/2). Midway between each pair of nodes are locations where the amplitude is maximum. These are called the antinodes. At these points the two waves add with the same phase and reinforce each other. In cases where the two opposite wave trains are not of the same amplitude, they do not cancel perfectly, so the amplitude of the standing wave at the nodes is not zero but merely a minimum. This occurs when the reflection at the boundary is imperfect. This is indicated by a finite standing wave ratio (SWR), the ratio of the amplitude of the wave at the antinode to the amplitude at the node. In the resonance of a two-dimensional surface or membrane, such as a drumhead or vibrating metal plate, the nodes become nodal lines, lines on the surface where the surface is motionless, dividing the surface into separate regions vibrating with opposite phase. These can be made visible by sprinkling sand on the surface, and the resulting intricate patterns of lines are called Chladni figures. In transmission lines a voltage node is a current antinode, and a voltage antinode is a current node. Nodes are the points of zero displacement, not the points where two constituent waves intersect. Boundary conditions Where the nodes occur in relation to the boundary reflecting the waves depends on the end conditions or boundary conditions. Although there are many types of end conditions, the ends of resonators are usually one of two types that cause total reflection: Fixed boundary: Examples of this type of boundary are the attachment point of a guitar string, the closed end of a pipe such as an organ pipe or a woodwind pipe, the periphery of a drumhead, a transmission line with the end short-circuited, or the mirrors at the ends of a laser cavity. In this type, the amplitude of the wave is forced to zero at the boundary, so there is a node at the boundary, and the other nodes occur at multiples of half a wavelength from it. Free boundary: Examples of this type are an open-ended organ or woodwind pipe, the ends of the vibrating resonator bars in a xylophone, glockenspiel or tuning fork, the ends of an antenna, or a transmission line with an open end. In this type the derivative (slope) of the wave's amplitude (in sound waves the pressure, in electromagnetic waves, the current) is forced to zero at the boundary.
So there is an amplitude maximum (antinode) at the boundary, the first node occurs a quarter wavelength from the end, and the other nodes are at half wavelength intervals from there: Examples Sound A sound wave consists of alternating cycles of compression and expansion of the wave medium. During compression, the molecules of the medium are forced together, resulting in the increased pressure and density. During expansion the molecules are forced apart, resulting in the decreased pressure and density. The number of nodes in a specified length is directly proportional to the frequency of the wave. Occasionally on a guitar, violin, or other stringed instrument, nodes are used to create harmonics. When the finger is placed on top of the string at a certain point, but does not push the string all the way down to the fretboard, a third node is created (in addition to the bridge and nut) and a harmonic is sounded. During normal play when the frets are used, the harmonics are always present, although they are quieter. With the artificial node method, the overtone is louder and the fundamental tone is quieter. If the finger is placed at the midpoint of the string, the first overtone is heard, which is an octave above the fundamental note which would be played, had the harmonic not been sounded. When two additional nodes divide the string into thirds, this creates an octave and a perfect fifth (twelfth). When three additional nodes divide the string into quarters, this creates a double octave. When four additional nodes divide the string into fifths, this creates a double-octave and a major third (17th). The octave, major third and perfect fifth are the three notes present in a major chord. The characteristic sound that allows the listener to identify a particular instrument is largely due to the relative magnitude of the harmonics created by the instrument. Waves in two or three dimensions In two dimensional standing waves, nodes are curves (often straight lines or circles when displayed on simple geometries.) For example, sand collects along the nodes of a vibrating Chladni plate to indicate regions where the plate is not moving. In chemistry, quantum mechanical waves, or "orbitals", are used to describe the wave-like properties of electrons. Many of these quantum waves have nodes and antinodes as well. The number and position of these nodes and antinodes give rise to many of the properties of an atom or covalent bond. Atomic orbitals are classified according to the number of radial and angular nodes. A radial node for the hydrogen atom is a sphere that occurs where the wavefunction for an atomic orbital is equal to zero, while the angular node is a flat plane. Molecular orbitals are classified according to bonding character. Molecular orbitals with an antinode between nuclei are very stable, and are known as "bonding orbitals" which strengthen the bond. In contrast, molecular orbitals with a node between nuclei will not be stable due to electrostatic repulsion and are known as "anti-bonding orbitals" which weaken the bond. Another such quantum mechanical concept is the particle in a box where the number of nodes of the wavefunction can help determine the quantum energy state—zero nodes corresponds to the ground state, one node corresponds to the 1st excited state, etc. 
In general, if one arranges the eigenstates in order of increasing energy, E1 ≤ E2 ≤ E3 ≤ …, the eigenfunctions likewise fall in order of increasing number of nodes; the nth eigenfunction has n−1 nodes, between each of which the following eigenfunctions have at least one node.
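For a string fixed at both ends, the fixed-boundary rule above determines where the nodes fall: the nth mode has nodes every half wavelength, i.e. every L/n along the string, and its frequency is n times the fundamental. The sketch below uses an assumed string length and fundamental frequency purely for illustration.

```python
import numpy as np

# Illustrative values (assumptions, not from the text).
LENGTH = 0.65           # string length in metres
FUNDAMENTAL_HZ = 110.0  # fundamental frequency of the open string

def mode(n, length=LENGTH, f1=FUNDAMENTAL_HZ):
    """Return (frequency, node positions, antinode positions) for the nth mode
    of a string fixed at both ends: f_n = n * f1, nodes every lambda/2 = L/n."""
    freq = n * f1
    nodes = np.linspace(0.0, length, n + 1)       # includes both fixed ends
    antinodes = (nodes[:-1] + nodes[1:]) / 2.0    # midway between the nodes
    return freq, nodes, antinodes

for n in (1, 2, 3):
    freq, nodes, antinodes = mode(n)
    print(f"mode {n}: {freq:.0f} Hz, nodes at {np.round(nodes, 3)} m")

# Touching the string lightly at its midpoint (a node of mode 2) suppresses the
# fundamental and leaves the octave sounding, as described in the harmonics passage above.
```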
Physical sciences
Waves
Physics
998116
https://en.wikipedia.org/wiki/Node%20%28networking%29
Node (networking)
In telecommunications networks, a node (from Latin nodus, ‘knot’) is either a redistribution point or a communication endpoint. A physical network node is an electronic device that is attached to a network, and is capable of creating, receiving, or transmitting information over a communication channel. In data communication, a physical network node may either be data communication equipment (such as a modem, hub, bridge or switch) or data terminal equipment (such as a digital telephone handset, a printer or a host computer). A passive distribution point such as a distribution frame or patch panel is not a node. Computer networks In data communication, a physical network node may either be data communication equipment (DCE) such as a modem, hub, bridge or switch; or data terminal equipment (DTE) such as a digital telephone handset, a printer or a host computer. If a network is a local area network (LAN) or wide area network (WAN), every LAN or WAN node that participates on the data link layer must have a network address, typically one for each network interface controller it possesses. Examples are computers, a DSL modem with an Ethernet interface, and a wireless access point. Equipment, such as an Ethernet hub or modem with serial interface, that operates only below the data link layer does not require a network address. If the network in question is the Internet or an intranet, many physical network nodes are host computers, also known as Internet nodes, identified by an IP address, and all hosts are physical network nodes. However, some data-link-layer devices such as switches, bridges and wireless access points do not have an IP host address (except sometimes for administrative purposes), and are not considered to be Internet nodes or hosts, but are considered physical network nodes and LAN nodes. Telecommunications In the fixed telephone network, a node may be a public or private telephone exchange, a remote concentrator or a computer providing some intelligent network service. In cellular communication, switching points and databases such as the base station controller, home location register, gateway GPRS support node (GGSN) and serving GPRS support node (SGSN) are examples of nodes. Cellular network base stations are not considered to be nodes in this context. In cable television systems (CATV), this term has assumed a broader context and is generally associated with a fiber optic node. This can be defined as those homes or businesses within a specific geographic area that are served from a common fiber optic receiver. A fiber optic node is generally described in terms of the number of "homes passed" that are served by that specific fiber node. Distributed systems In a distributed system network, the nodes are clients, servers or peers. A peer may sometimes serve as client, sometimes as server. In a peer-to-peer or overlay network, nodes that actively route data for the other networked devices as well as themselves are called supernodes. Distributed systems may sometimes use virtual nodes so that the system is not oblivious to the heterogeneity of the nodes. This issue is addressed with special algorithms, like consistent hashing, as is the case in Amazon's Dynamo. Within a vast computer network, the individual computers on the periphery of the network, those that do not also connect other networks, and those that often connect transiently to one or more clouds are called end nodes.
Typically, within the cloud computing construct, the individual user or customer computer that connects into one well-managed cloud is called an end node. Since these computers are a part of the network yet unmanaged by the cloud's host, they present significant risks to the entire cloud. This is called the end node problem. There are several means to remedy this problem but all require instilling trust in the end node computer.
Technology
Networks
null
998156
https://en.wikipedia.org/wiki/Haze
Haze
Haze is traditionally an atmospheric phenomenon in which dust, smoke, and other dry particulates suspended in air obscure visibility and the clarity of the sky. The World Meteorological Organization manual of codes includes a classification of particulates causing horizontal obscuration into categories of fog, ice fog, steam fog, mist, haze, smoke, volcanic ash, dust, sand, and snow. Sources for particles that cause haze include farming (stubble burning, ploughing in dry weather), traffic, industry, windy weather, volcanic activity and wildfires. Seen from afar (e.g. an approaching airplane) and depending on the direction of view with respect to the Sun, haze may appear brownish or bluish, while mist tends to be bluish grey instead. Whereas haze often is considered a phenomenon occurring in dry air, mist formation is a phenomenon in saturated, humid air. However, haze particles may act as condensation nuclei that leads to the subsequent vapor condensation and formation of mist droplets; such forms of haze are known as "wet haze". In meteorological literature, the word haze is generally used to denote visibility-reducing aerosols of the wet type suspended in the atmosphere. Such aerosols commonly arise from complex chemical reactions that occur as sulfur dioxide gases emitted during combustion are converted into small droplets of sulfuric acid when exposed. The reactions are enhanced in the presence of sunlight, high relative humidity, and an absence of air flow (wind). A small component of wet-haze aerosols appear to be derived from compounds released by trees when burning, such as terpenes. For all these reasons, wet haze tends to be primarily a warm-season phenomenon. Large areas of haze covering many thousands of kilometers may be produced under extensive favorable conditions each summer. Air pollution Haze often occurs when suspended dust and smoke particles accumulate in relatively dry air. When weather conditions block the dispersal of smoke and other pollutants they concentrate and form a usually low-hanging shroud that impairs visibility and may become a respiratory health threat if excessively inhaled. Industrial pollution can result in dense haze, which is known as smog. Since 1991, haze has been a particularly acute problem in Southeast Asia. The main source of the haze has been smoke from fires occurring in Sumatra and Borneo which dispersed over a wide area. In response to the 1997 Southeast Asian haze, the ASEAN countries agreed on a Regional Haze Action Plan (1997) as an attempt to reduce haze. In 2002, all ASEAN countries signed the Agreement on Transboundary Haze Pollution, but the pollution is still a problem there today. Under the agreement, the ASEAN secretariat hosts a co-ordination and support unit. During the 2013 Southeast Asian haze, Singapore experienced a record high pollution level, with the 3-hour Pollutant Standards Index reaching a record high of 401. In the United States, the Interagency Monitoring of Protected Visual Environments (IMPROVE) program was developed as a collaborative effort between the US EPA and the National Park Service in order to establish the chemical composition of haze in National Parks and establish air pollution control measures in order to restore the visibility of the air to pre-industrial levels. Additionally, the Clean Air Act requires that any current visibility problems be addressed and remedied, and future visibility problems be prevented, in 156 Class I Federal areas located throughout the United States. 
A full list of these areas is available on EPA's website. In addition to the severe health issues caused by haze from air pollution, dust storm particles, and bush fire smoke, the reduction in irradiance is the most dominant impact of these sources of haze and a growing issue for photovoltaic production as the solar industry grows. Smog also lowers agricultural yield, and it has been proposed that pollution controls could increase agricultural production in China. These effects are negative for both sides of agrivoltaics (the combination of photovoltaic electricity production and food from agriculture). International disputes Transboundary haze Haze is no longer confined to being a domestic problem. It has become one of the causes of international disputes among neighboring countries. Haze can migrate to adjacent countries along the path of the wind and thereby pollute other countries as well, even if the haze does not originate there. One of the most recent problems occurred in Southeast Asia, largely affecting the nations of Indonesia, Malaysia and Singapore. In 2013, due to forest fires in Indonesia, Kuala Lumpur and surrounding areas became shrouded in a pall of noxious fumes dispersed from Indonesia, bringing a smell of ash and coal for more than a week, in the country's worst environmental crisis since 1997. The main sources of the haze were Indonesia's Sumatra Island, Indonesian areas of Borneo, and Riau, where farmers, plantation owners and miners had set hundreds of fires in the forests to clear land during dry weather. Winds blew most of the particulates and fumes across the narrow Strait of Malacca to Malaysia, although parts of Indonesia in the path were also affected. The 2015 Southeast Asian haze was another major air-quality crisis, although there were episodes, such as the 2006 and 2019 haze, that were less severe than the three major Southeast Asian haze events of 1997, 2013 and 2015. Obscuration Haze causes issues in the area of terrestrial photography and imaging, where the penetration of large amounts of dense atmosphere may be necessary to image distant subjects. This results in the visual effect of a loss of contrast in the subject, due to the effect of light scattering and reflection through the haze particles. For these reasons, sunrise and sunset colors, and possibly the Sun itself, appear subdued on hazy days, and stars may be obscured by haze at night. In some cases, attenuation by haze is so great that, toward sunset, the Sun disappears altogether before even reaching the horizon. Haze can be regarded as an aerial form of the Tyndall effect; therefore, unlike other atmospheric effects such as cloud, mist and fog, haze is spectrally selective: shorter (blue) wavelengths are scattered more, and longer (red/infrared) wavelengths are scattered less. For this reason, many super-telephoto lenses incorporate yellow filters or coatings to enhance image contrast. Infrared (IR) imaging may also be used to penetrate haze over long distances, with a combination of IR-pass optical filters and IR-sensitive detectors at the intended destination.
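The spectral selectivity described above can be illustrated with an idealized small-particle (Rayleigh-regime) model, in which scattered intensity varies as 1/wavelength to the fourth power. This is only indicative for haze, whose particles are often large enough that the real wavelength dependence is weaker; the wavelengths chosen below are assumptions used purely for illustration.

```python
# Idealized small-particle (Rayleigh-regime) scattering, intensity ~ 1 / lambda^4.
# Real haze particles are often larger, so the true wavelength dependence is weaker;
# this sketch only illustrates the "blue scatters more than red" trend noted above.
wavelengths_nm = {"blue": 450, "green": 550, "red": 650, "near-IR": 850}

reference = wavelengths_nm["red"]
for name, lam in wavelengths_nm.items():
    relative = (reference / lam) ** 4   # scattering relative to red light
    print(f"{name:8s} ({lam} nm): {relative:.2f} x the scattering of red")
```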
Physical sciences
Atmospheric optics
Earth science
999752
https://en.wikipedia.org/wiki/Lithocarpus
Lithocarpus
Lithocarpus is a genus in the beech family, Fagaceae. Trees in this genus are commonly known as the stone oaks and differ from Quercus primarily because they produce insect-pollinated flowers on erect spikes and the female flowers have short styles with punctate stigmas. Currently, around 340 species have been described, mostly restricted to Southeast Asia. Fossils show that Lithocarpus formerly had a wider distribution, being found in North America and Europe during the Eocene to Miocene epochs. The species extend from the foothills of the Hengduan Mountains, where they form dominant stands of trees, through Indochina and the Malayan Archipelago, crossing Wallace's Line and reaching Papua. In general, these trees are most dominant in the uplands (more than above sea level) and have many ecological similarities to the Dipterocarpaceae, the dominant lowland tree group. These trees are intolerant of seasonal droughts, not being found on the Lesser Sunda Islands, despite their ability to cross numerous water barriers to reach Papua. The North American tanoak or tanbark oak (Notholithocarpus densiflorus) was previously included in this genus, but recent evidence indicates the similarities in flower and fruit morphology are due to convergent evolution. Both genetic and morphological evidence demonstrate that the tanoak is a distant relative of the Asian stone oaks, and therefore tanoak has been moved into a new genus, Notholithocarpus. Lithocarpus species are evergreen trees with leathery, alternate leaves, the margins of which are almost always entire, rarely toothed. The seed is a nut similar to an oak acorn with a cupule enclosing the basal part of the fruit. Cupules of stone oaks show a wide variety in the type and arrangement of lamellae and scales on the outside of the cupule, with some of them completely enclosing the nut, even becoming irregularly dehiscent in a few species. The seeds are often protected by a hard woody shell (hence the genus name, from the Greek lithos, "stone", and karpos, "seed"). In some sections of the genus, the seed is embedded in the basal material of the fruit, which becomes highly lignified and hard, lending greater mechanical protection to the seed, creating a novel type of fruit. The kernel is edible in some species (e.g. Lithocarpus edulis), but inedible and very bitter in others. Several of the species are very attractive ornamental trees, used in parks and large gardens in warm temperate and subtropical areas. Classification In 1948, Aimee Camus produced a comprehensive treatment of the two major genera in the family, given the specimens available to her at the Natural History Museum in Paris. Because of the many collections available from the French colonies in subtropical and tropical Indochina, she worked extensively with stone oaks from the region. Most importantly, she provided the only existing infrageneric structure within the genus, but unfortunately many of the species from the Malesian region, south of the Isthmus of Kra, are not incorporated into this system. Her classification system included 13 subgenera, including the subgenus Pasania, which is by far the largest division within the genus. About 100 Asian species were treated separately in Pasania, at the genus level, and occasionally the old name persists on some herbarium sheets that have not been annotated. Several of the other subgenera possess fewer than ten species and have distinctive morphologies.
Few of the Malesian species are treated in Camus' system, and Soepadmo, who wrote the Flora Malesiana treatment, made no attempt to update or integrate these species into it, so much work clearly remains to be done. Camus' system was highly detailed, as three levels of organization are recognized below the subgenus, but the classification is not systematic at the lowest level. List of subgenera (No. of species in Camus' treatment): Castanicarpus (1); Corylopasania (2); Cryptostylis (1); Cyclobalanus (58); Cyrtobalanus (1); Eulithocarpus (11); Gymnobalanus (10); Liebmannia (3); Oerstedia (1); Pachybalanus (14); Pasania (209); Pseudosynaedrys (9); Synaedrys (15); indeterminate (12). Early researchers often suggested that the stone oaks were primitive within the family. An exhaustive study of the inflorescence and fruits of 73 species from eight of Camus' subgenera found that important developmental and evolutionary characters distinguish the major groups in the genus and indicate differences among the genera of the family. Species
Biology and health sciences
Fagales
Plants
590920
https://en.wikipedia.org/wiki/Hemodialysis
Hemodialysis
Hemodialysis, also spelled haemodialysis, or simply dialysis, is a process of filtering the blood of a person whose kidneys are not working normally. This type of dialysis achieves the extracorporeal removal of waste products such as creatinine and urea, as well as free water, from the blood when the kidneys are in a state of kidney failure. Hemodialysis is one of three renal replacement therapies (the other two being kidney transplant and peritoneal dialysis). An alternative method for extracorporeal separation of blood components such as plasma or cells is apheresis. Hemodialysis can be an outpatient or inpatient therapy. Routine hemodialysis is conducted in a dialysis outpatient facility, either a purpose-built room in a hospital or a dedicated, stand-alone clinic. Less frequently hemodialysis is done at home. Dialysis treatments in a clinic are initiated and managed by specialized staff made up of nurses and technicians; dialysis treatments at home can be self-initiated and managed or done jointly with the assistance of a trained helper who is usually a family member. Medical uses Hemodialysis is the renal replacement therapy of choice for patients who need dialysis acutely, and for many patients as maintenance therapy. It provides excellent, rapid clearance of solutes. A nephrologist (a medical kidney specialist) decides when hemodialysis is needed and the various parameters for a dialysis treatment. These include frequency (how many treatments per week), length of each treatment, and the blood and dialysis solution flow rates, as well as the size of the dialyzer. The composition of the dialysis solution is also sometimes adjusted in terms of its sodium, potassium, and bicarbonate levels. In general, the larger the body size of an individual, the more dialysis they will need. In North America and the UK, 3–4 hour treatments (sometimes up to 5 hours for larger patients) given 3 times a week are typical. Twice-a-week sessions are limited to patients who have a substantial residual kidney function. Four sessions per week are often prescribed for larger patients, as well as patients who have trouble with fluid overload. Finally, there is growing interest in short daily home hemodialysis, which consists of 1.5–4 hour sessions given 5–7 times per week. There is also interest in nocturnal dialysis, which involves dialyzing a patient, usually at home, for 8–10 hours per night, 3–6 nights per week. Nocturnal in-center dialysis, 3–4 times per week, is also offered at a handful of dialysis units in the United States. Adverse effects Disadvantages The disadvantages of hemodialysis include restricted independence, as people undergoing the procedure cannot travel freely because of the need for supplies; the need for supplies such as high-quality water and electricity; reliance on technology such as dialysis machines; a complicated procedure that requires caregivers to have more knowledge; and the time needed to set up and clean dialysis machines, along with the expense of the machines and associated staff. Complications Fluid shifts Hemodialysis often involves fluid removal (through ultrafiltration), because most patients with renal failure pass little or no urine. Side effects caused by removing too much fluid and/or removing fluid too rapidly include low blood pressure, fatigue, chest pains, leg cramps, nausea and headaches. These symptoms can occur during the treatment and can persist post-treatment; they are sometimes collectively referred to as the dialysis hangover or dialysis washout.
The severity of these symptoms is usually proportionate to the amount and speed of fluid removal. However, the impact of a given amount or rate of fluid removal can vary greatly from person to person and day to day. These side effects can be avoided and/or their severity lessened by limiting fluid intake between treatments or increasing the dose of dialysis, e.g. dialyzing more often or longer per treatment than the standard three times a week, 3–4 hours per treatment schedule. Access-related Since hemodialysis requires access to the circulatory system, patients undergoing hemodialysis may expose their circulatory system to microbes, which can lead to bacteremia, an infection affecting the heart valves (endocarditis) or an infection affecting the bones (osteomyelitis). The risk of infection varies depending on the type of access used (see below). Bleeding may also occur, again the risk varies depending on the type of access used. Infections can be minimized by strictly adhering to infection control best practices. Venous needle dislodgement Venous needle dislodgement (VND) is a potentially fatal complication of hemodialysis in which the patient experiences rapid blood loss because the needle becomes dislodged from the venous access point. Anticoagulation-related Unfractionated heparin (UFH) is the most commonly used anticoagulant in hemodialysis, as it is generally well tolerated and can be quickly reversed with protamine sulfate. Low-molecular-weight heparin (LMWH) is, however, becoming increasingly popular and is now the norm in western Europe. Compared to UFH, LMWH has the advantage of an easier mode of administration and reduced bleeding, but the effect cannot be easily reversed. Heparin can infrequently cause a low platelet count due to a reaction called heparin-induced thrombocytopenia (HIT). The risk of HIT is lower with LMWH compared to UFH. In such patients, alternative anticoagulants may be used. Even though HIT causes a low platelet count, it can paradoxically predispose to thrombosis. When comparing UFH to LMWH for the risk of adverse effects, the evidence is uncertain as to which anticoagulation approach has the fewest side effects and what is the ideal treatment strategy for preventing blood clots during hemodialysis. In patients at high risk of bleeding, dialysis can be done without anticoagulation. First-use syndrome First-use syndrome is a rare but severe anaphylactic reaction to the artificial kidney. Its symptoms include sneezing, wheezing, shortness of breath, back pain, chest pain, or sudden death. It can be caused by residual sterilant in the artificial kidney or the material of the membrane itself. In recent years, the incidence of first-use syndrome has decreased, due to an increased use of gamma irradiation, steam sterilization, or electron-beam radiation instead of chemical sterilants, and the development of new semipermeable membranes of higher biocompatibility. New methods of processing previously acceptable components of dialysis must always be considered. For example, in 2008, a series of first-use type reactions, including deaths, occurred due to heparin contaminated during the manufacturing process with oversulfated chondroitin sulfate. Cardiovascular Long-term complications of hemodialysis include hemodialysis-associated amyloidosis, neuropathy and various forms of heart disease. Increasing the frequency and length of treatments has been shown to improve fluid overload and the enlargement of the heart that is commonly seen in such patients.
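Because these symptoms scale with the amount and speed of fluid removal, clinicians often think in terms of an ultrafiltration rate normalized to body weight. The sketch below is illustrative only: the patient numbers are assumptions, and the roughly 13 mL/kg/h figure in the comment is a commonly cited observational threshold, not a value taken from this article.

```python
def ultrafiltration_rate(fluid_to_remove_l, session_hours, weight_kg):
    """Ultrafiltration rate in mL per kg of body weight per hour."""
    return (fluid_to_remove_l * 1000.0) / (session_hours * weight_kg)

# Example (assumed numbers): removing 3 L over a 4-hour session for a 75 kg patient.
rate = ultrafiltration_rate(3.0, 4.0, 75.0)
print(f"UF rate: {rate:.1f} mL/kg/h")   # 10.0 mL/kg/h

# Rates above roughly 13 mL/kg/h (a commonly cited observational threshold, not
# from this article) are associated with more intradialytic symptoms; longer or
# more frequent sessions lower the rate for the same total fluid removal.
print(f"Same fluid over 5 h: {ultrafiltration_rate(3.0, 5.0, 75.0):.1f} mL/kg/h")
```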
Vitamin deficiency Folate deficiency can occur in some patients having hemodialysis. Electrolyte imbalances Although a dialysate fluid, which is a solution containing diluted electrolytes, is employed for the filtration of blood, haemodialysis can cause an electrolyte imbalance. These imbalances can derive from abnormal concentrations of potassium (hypokalemia, hyperkalemia), and sodium (hyponatremia, hypernatremia). These electrolyte imbalances are associated with increased cardiovascular mortality. Mechanism and technique The principle of hemodialysis is the same as other methods of dialysis; it involves diffusion of solutes across a semipermeable membrane. Hemodialysis utilizes counter current flow, where the dialysate is flowing in the opposite direction to blood flow in the extracorporeal circuit. Counter-current flow maintains the concentration gradient across the membrane at a maximum and increases the efficiency of the dialysis. Fluid removal (ultrafiltration) is achieved by altering the hydrostatic pressure of the dialysate compartment, causing free water and some dissolved solutes to move across the membrane along a created pressure gradient. The dialysis solution that is used may be a sterilized solution of mineral ions and is called dialysate. Urea and other waste products including potassium, and phosphate diffuse into the dialysis solution. However, concentrations of sodium and chloride are similar to those of normal plasma to prevent loss. Sodium bicarbonate is added in a higher concentration than plasma to correct blood acidity. A small amount of glucose is also commonly used. The concentration of electrolytes in the dialysate is adjusted depending on the patient's status before the dialysis. If a high concentration of sodium is added to the dialysate, the patient can become thirsty and end up accumulating body fluids, which can lead to heart damage. On the contrary, low concentrations of sodium in the dialysate solution have been associated with a low blood pressure and intradialytic weight gain, which are markers of improved outcomes. However, the benefits of using a low concentration of sodium have not been demonstrated yet, since these patients can also develop cramps, intradialytic hypotension and low sodium in serum, which are symptoms associated with a high mortality risk. Note that this is a different process to the related technique of hemofiltration. Access Three primary methods are used to gain access to the blood for hemodialysis: an intravenous catheter, an arteriovenous fistula (AV) and a synthetic graft. The type of access is influenced by factors such as the expected time course of a patient's renal failure and the condition of their vasculature. Patients may have multiple access procedures, usually because an AV fistula or graft is maturing and a catheter is still being used. The placement of a catheter is usually done under light sedation, while fistulas and grafts require an operation. Types There are three types of hemodialysis: conventional hemodialysis, daily hemodialysis, and nocturnal hemodialysis. Below is an adaptation and summary from a brochure of The Ottawa Hospital. Conventional hemodialysis Conventional hemodialysis is usually done three times per week, for about three to four hours for each treatment (Sometimes five hours for larger patients), during which the patient's blood is drawn out through a tube at a rate of 200–400 mL/min. 
The tube is connected to a 15, 16, or 17 gauge needle inserted in the dialysis fistula or graft, or connected to one port of a dialysis catheter. The blood is then pumped through the dialyzer, and then the processed blood is pumped back into the patient's bloodstream through another tube (connected to a second needle or port). During the procedure, the patient's blood pressure is closely monitored, and if it becomes low, or the patient develops any other signs of low blood volume such as nausea, the dialysis attendant can administer extra fluid through the machine. During the treatment, the patient's entire blood volume (about 5 L) circulates through the machine every 15 minutes. During this process, the dialysis patient is exposed to a week's worth of water for the average person. Daily hemodialysis Daily hemodialysis is typically used by those patients who do their own dialysis at home. It is less stressful (more gentle) but does require more frequent access. This is simple with catheters, but more problematic with fistulas or grafts. The "buttonhole technique" can be used for fistulas, but not grafts, requiring frequent access. Daily hemodialysis is usually done for 2 hours six days a week. Nocturnal hemodialysis The procedure of nocturnal hemodialysis is similar to conventional hemodialysis except it is performed three to six nights a week and between six and ten hours per session while the patient sleeps. Equipment The hemodialysis machine pumps the patient's blood and the dialysate through the dialyzer. The newest dialysis machines on the market are highly computerized and continuously monitor an array of safety-critical parameters, including blood (QB) and dialysate QD) flow rates; dialysis solution conductivity, temperature, and pH; and analysis of the dialysate for evidence of blood leakage or presence of air. Any reading that is out of normal range triggers an audible alarm to alert the patient-care technician who is monitoring the patient. Manufacturers of dialysis machines include companies such as Nipro, Fresenius, Gambro, Baxter, B. Braun, NxStage and Bellco. QB to QD flow rates have to reach 1:2 ratio where QB is set around 250 ml/min and QD is set around 500 ml/min to ensure good dialysis efficiency. Water system An extensive water purification system is critical for hemodialysis. Since dialysis patients are exposed to vast quantities of water, which is mixed with dialysate concentrate to form the dialysate, even trace mineral contaminants or bacterial endotoxins can filter into the patient's blood. Because the damaged kidneys cannot perform their intended function of removing impurities, molecules introduced into the bloodstream from improperly purified water can build up to hazardous levels, causing numerous symptoms or death. Aluminum, chlorine and or chloramines, fluoride, copper, and zinc, as well as bacterial fragments and endotoxins, have all caused problems in this regard. For this reason, water used in hemodialysis is carefully purified before use. A common water purification system includes a multi stage system. The water is first softened. Next the water is run through a tank containing activated charcoal to adsorb organic contaminants, and chlorine and chloramines. The water may then be temperature-adjusted if needed. Primary purification is then done by forcing water through a membrane with very tiny pores, a so-called reverse osmosis membrane. This lets the water pass, but holds back even very small solutes such as electrolytes. 
Final removal of leftover electrolytes is done in some water systems by passing the water through an electrodeionization (EDI) device, which removes any leftover anions or cations and replaces them with hydroxide and hydrogen ions, respectively, leaving ultrapure water. Even this degree of water purification may be insufficient. The trend lately is to pass this final purified water (after mixing with dialysate concentrate) through an ultrafiltration membrane or absolute filter. This provides another layer of protection by removing impurities, especially those of bacterial origin, that may have accumulated in the water after its passage through the original water purification system. Dialysate The dialysate (also called dialysis fluid) is made by mixing the purified water with a concentrate consisting of sodium, potassium, calcium, magnesium and dextrose in an acid solution, together with a chemical buffer. This forms the dialysate solution, which contains the basic electrolytes found in human blood. The dialysate solution contains charged ions that conduct electricity. During dialysis, the conductivity of the dialysis solution is continuously monitored to ensure that the water and dialysate concentrate are being mixed in the proper proportions. Both excessively concentrated dialysis solution and excessively dilute solution can cause severe clinical problems. Chemical buffers such as bicarbonate or lactate can alternatively be added to regulate the pH of the dialysate. Both buffers can stabilize the pH of the solution at a physiological level with no negative impacts on the patient. There is some evidence of a reduction in the incidence of heart and blood problems and high blood pressure events when using bicarbonate as the pH buffer compared to lactate. However, the mortality rates after using both buffers do not show a significant difference. Dialyzer The dialyzer is the piece of equipment that filters the blood. Almost all dialyzers in use today are of the hollow-fiber variety. A cylindrical bundle of hollow fibers, whose walls are composed of semi-permeable membrane, is anchored at each end into potting compound (a sort of glue). This assembly is then put into a clear plastic cylindrical shell with four openings. One opening or blood port at each end of the cylinder communicates with each end of the bundle of hollow fibers. This forms the "blood compartment" of the dialyzer. Two other ports are cut into the side of the cylinder. These communicate with the space around the hollow fibers, the "dialysate compartment." Blood is pumped via the blood ports through this bundle of very thin capillary-like tubes, and the dialysate is pumped through the space surrounding the fibers. Pressure gradients are applied when necessary to move fluid from the blood to the dialysate compartment. Membrane and flux Dialyzer membranes come with different pore sizes. Those with smaller pore size are called "low-flux" and those with larger pore sizes are called "high-flux." Some larger molecules, such as beta-2-microglobulin, are not removed at all with low-flux dialyzers; lately, the trend has been to use high-flux dialyzers. However, such dialyzers require newer dialysis machines and high-quality dialysis solution to control the rate of fluid removal properly and to prevent backflow of dialysis solution impurities into the patient through the membrane. Dialyzer membranes used to be made primarily of cellulose (derived from cotton linter).
The surface of such membranes was not very biocompatible, because exposed hydroxyl groups would activate complement in the blood passing by the membrane. Therefore, the basic, "unsubstituted" cellulose membrane was modified. One change was to cover these hydroxyl groups with acetate groups (cellulose acetate); another was to mix in some compounds that would inhibit complement activation at the membrane surface (modified cellulose). The original "unsubstituted cellulose" membranes are no longer in wide use, whereas cellulose acetate and modified cellulose dialyzers are still used. Cellulosic membranes can be made in either low-flux or high-flux configuration, depending on their pore size. Another group of membranes is made from synthetic materials, using polymers such as polyarylethersulfone, polyamide, polyvinylpyrrolidone, polycarbonate, and polyacrylonitrile. These synthetic membranes activate complement to a lesser degree than unsubstituted cellulose membranes. However, they are in general more hydrophobic which leads to increased adsorption of proteins to the membrane surface which in turn can lead to complement system activation. Synthetic membranes can be made in either low- or high-flux configuration, but most are high-flux. Nanotechnology is being used in some of the most recent high-flux membranes to create a uniform pore size. The goal of high-flux membranes is to pass relatively large molecules such as beta-2-microglobulin (MW 11,600 daltons), but not to pass albumin (MW ~66,400 daltons). Every membrane has pores in a range of sizes. As pore size increases, some high-flux dialyzers begin to let albumin pass out of the blood into the dialysate. This is thought to be undesirable, although one school of thought holds that removing some albumin may be beneficial in terms of removing protein-bound uremic toxins. Membrane flux and outcome Whether using a high-flux dialyzer improves patient outcomes is somewhat controversial, but several important studies have suggested that it has clinical benefits. The NIH-funded HEMO trial compared survival and hospitalizations in patients randomized to dialysis with either low-flux or high-flux membranes. Although the primary outcome (all-cause mortality) did not reach statistical significance in the group randomized to use high-flux membranes, several secondary outcomes were better in the high-flux group. A recent Cochrane analysis concluded that benefit of membrane choice on outcomes has not yet been demonstrated. A collaborative randomized trial from Europe, the MPO (Membrane Permeabilities Outcomes) study, comparing mortality in patients just starting dialysis using either high-flux or low-flux membranes, found a nonsignificant trend to improved survival in those using high-flux membranes, and a survival benefit in patients with lower serum albumin levels or in diabetics. Membrane flux and beta-2-microglobulin amyloidosis High-flux dialysis membranes and/or intermittent internal on-line hemodiafiltration (iHDF) may also be beneficial in reducing complications of beta-2-microglobulin accumulation. Because beta-2-microglobulin is a large molecule, with a molecular weight of about 11,600 daltons, it does not pass at all through low-flux dialysis membranes. Beta-2-M is removed with high-flux dialysis, but is removed even more efficiently with IHDF. 
After several years (usually at least 5–7), patients on hemodialysis begin to develop complications from beta-2-M accumulation, including carpal tunnel syndrome, bone cysts, and deposits of this amyloid in joints and other tissues. Beta-2-M amyloidosis can cause very serious complications, including spondyloarthropathy, and often is associated with shoulder joint problems. Observational studies from Europe and Japan have suggested that using high-flux membranes in dialysis mode, or iHDF, reduces beta-2-M complications in comparison to regular dialysis using a low-flux membrane (KDOQI Clinical Practice Guidelines for Hemodialysis Adequacy, 2006 Updates, CPR 5). Dialyzers and efficiency Dialyzers come in many different sizes. A larger dialyzer with a larger membrane area (A) will usually remove more solutes than a smaller dialyzer, especially at high blood flow rates. This also depends on the membrane permeability coefficient K0 for the solute in question. So dialyzer efficiency is usually expressed as the K0A – the product of permeability coefficient and area. Most dialyzers have membrane surface areas of 0.8 to 2.2 square meters, and values of K0A ranging from about 500 to 1500 mL/min. K0A, expressed in mL/min, can be thought of as the maximum clearance of a dialyzer at very high blood and dialysate flow rates. Reuse of dialyzers The dialyzer may either be discarded after each treatment or be reused. Reuse requires an extensive procedure of high-level disinfection. Reused dialyzers are not shared between patients. There was an initial controversy about whether reusing dialyzers worsened patient outcomes. The consensus today is that reuse of dialyzers, if done carefully and properly, produces similar outcomes to single use of dialyzers. Dialyzer reuse is a practice that has been around since the invention of the product. This practice includes the cleaning of a used dialyzer so that it can be reused multiple times for the same patient. Dialysis clinics reuse dialyzers to become more economical and reduce the high costs of "single-use" dialysis, which can be extremely expensive and wasteful. Single-use dialyzers are used just once and then thrown out, creating a large amount of biomedical waste and forgoing any cost savings. If done right, dialyzer reuse can be very safe for dialysis patients. There are two ways of reusing dialyzers, manual and automated. Manual reuse involves the cleaning of a dialyzer by hand. The dialyzer is semi-disassembled then flushed repeatedly before being rinsed with water. It is then stored with a liquid disinfectant (PAA) for 18+ hours until its next use. Although many clinics outside the USA use this method, some clinics are switching toward a more automated/streamlined process as the dialysis practice advances. The newer method of automated reuse is achieved by means of a medical device introduced in the early 1980s. These devices are beneficial to dialysis clinics that practice reuse – especially for large dialysis clinical entities – because they allow for several back-to-back cycles per day. The dialyzer is first pre-cleaned by a technician, then automatically cleaned by machine through a step-cycle process until it is eventually filled with liquid disinfectant for storage. Although automated reuse is more effective than manual reuse, newer technology has sparked even more advancement in the process of reuse.
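The statement that K0A behaves as a maximum clearance at very high flows can be made concrete with a standard single-solute model for a counter-current dialyzer (often attributed to Michaels). The sketch below is not the article's own calculation; the K0A, QB and QD values are assumptions chosen only for illustration.

```python
import math

def dialyzer_clearance(k0a, qb, qd):
    """Estimated solute clearance (mL/min) of a counter-current dialyzer,
    using a commonly cited single-solute model (often attributed to Michaels).
    k0a, qb, qd are in mL/min; qb != qd is assumed here for simplicity."""
    z = qb / qd
    e = math.exp((k0a / qb) * (1.0 - z))
    return qb * (e - 1.0) / (e - z)

# Illustrative numbers (assumptions): K0A = 1000 mL/min, QB = 250, QD = 500 mL/min.
print(f"Clearance: {dialyzer_clearance(1000.0, 250.0, 500.0):.0f} mL/min")
# As blood and dialysate flows rise, the clearance approaches, but never
# exceeds, the K0A value, matching the interpretation given above.
print(f"At very high flows: {dialyzer_clearance(1000.0, 2000.0, 4000.0):.0f} mL/min")
```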
When reused over 15 times with current methodology, the dialyzer can lose B2m, middle molecule clearance and fiber pore structure integrity, which has the potential to reduce the effectiveness of the patient's dialysis session. Currently, as of 2010, newer, more advanced reprocessing technology has proven the ability to eliminate the manual pre-cleaning process altogether and has also proven the potential to regenerate (fully restore) all functions of a dialyzer to levels that are approximately equivalent to single-use for more than 40 cycles. As medical reimbursement rates begin to fall even more, many dialysis clinics are continuing to operate effectively with reuse programs especially since the process is easier and more streamlined than before. Epidemiology Hemodialysis was one of the most common procedures performed in U.S. hospitals in 2011, occurring in 909,000 stays (a rate of 29 stays per 10,000 population). This was an increase of 68 percent from 1997, when there were 473,000 stays. It was the fifth most common procedure for patients aged 45–64 years. History Many have played a role in developing dialysis as a practical treatment for renal failure, starting with Thomas Graham of Glasgow, who first presented the principles of solute transport across a semipermeable membrane in 1854. The artificial kidney was first developed by Abel, Rountree, and Turner in 1913, the first hemodialysis in a human being was by Haas (February 28, 1924) and the artificial kidney was developed into a clinically useful apparatus by Kolff in 1943 to 1945. This research showed that life could be prolonged in patients dying of kidney failure. Willem Kolff was the first to construct a working dialyzer in 1943. The first successfully treated patient was a 67-year-old woman in uremic coma who regained consciousness after 11 hours of hemodialysis with Kolff's dialyzer in 1945. At the time of its creation, Kolff's goal was to provide life support during recovery from acute renal failure. After World War II ended, Kolff donated the five dialyzers he had made to hospitals around the world, including Mount Sinai Hospital, New York. Kolff gave a set of blueprints for his hemodialysis machine to George Thorn at the Peter Bent Brigham Hospital in Boston. This led to the manufacture of the next generation of Kolff's dialyzer, a stainless steel Kolff-Brigham dialysis machine. According to McKellar (1999), a significant contribution to renal therapies was made by Canadian surgeon Gordon Murray with the assistance of two doctors, an undergraduate chemistry student, and research staff. Murray's work was conducted simultaneously and independently from that of Kolff. Murray's work led to the first successful artificial kidney built in North America in 1945–46, which was successfully used to treat a 26-year-old woman out of a uraemic coma in Toronto. The less-crude, more compact, second-generation "Murray-Roschlau" dialyser was invented in 1952–53, whose designs were stolen by German immigrant Erwin Halstrup, and passed off as his own (the "Halstrup–Baumann artificial kidney"). By the 1950s, Willem Kolff's invention of the dialyzer was used for acute renal failure, but it was not seen as a viable treatment for patients with stage 5 chronic kidney disease (CKD). At the time, doctors believed it was impossible for patients to have dialysis indefinitely for two reasons. First, they thought no man-made device could replace the function of kidneys over the long term. 
In addition, a patient undergoing dialysis developed damaged veins and arteries, so that after several treatments, it became difficult to find a vessel to access the patient's blood. The original Kolff kidney was not very useful clinically, because it did not allow for removal of excess fluid. Swedish professor Nils Alwall encased a modified version of this kidney inside a stainless steel canister, to which a negative pressure could be applied, in this way effecting the first truly practical application of hemodialysis, which was done in 1946 at the University of Lund. Alwall also was arguably the inventor of the arteriovenous shunt for dialysis. He reported this first in 1948 where he used such an arteriovenous shunt in rabbits. Subsequently, he used such shunts, made of glass, as well as his canister-enclosed dialyzer, to treat 1500 patients in renal failure between 1946 and 1960, as reported to the First International Congress of Nephrology held in Evian in September 1960. Alwall was appointed to a newly created Chair of Nephrology at the University of Lund in 1957. Subsequently, he collaborated with Swedish businessman Holger Crafoord to found one of the key companies that would manufacture dialysis equipment in the past 50 years, Gambro. The early history of dialysis has been reviewed by Stanley Shaldon. Belding H. Scribner, working with the biomechanical engineer Wayne Quinton, modified the glass shunts used by Alwall by making them from Teflon. Another key improvement was to connect them to a short piece of silicone elastomer tubing. This formed the basis of the so-called Scribner shunt, perhaps more properly called the Quinton-Scribner shunt. After treatment, the circulatory access would be kept open by connecting the two tubes outside the body using a small U-shaped Teflon tube, which would shunt the blood from the tube in the artery back to the tube in the vein. In 1962, Scribner started the world's first outpatient dialysis facility, the Seattle Artificial Kidney Center, later renamed the Northwest Kidney Centers. Immediately the problem arose of who should be given dialysis, since demand far exceeded the capacity of the six dialysis machines at the center. Scribner decided that he would not make the decision about who would receive dialysis and who would not. Instead, the choices would be made by an anonymous committee, which could be viewed as one of the first bioethics committees. For a detailed history of successful and unsuccessful attempts at dialysis, including pioneers such as Abel and Roundtree, Haas, and Necheles, see this review by Kjellstrand.
Biology and health sciences
Treatments
Health
591253
https://en.wikipedia.org/wiki/Kirchhoff%27s%20circuit%20laws
Kirchhoff's circuit laws
Kirchhoff's circuit laws are two equalities that deal with the current and potential difference (commonly known as voltage) in the lumped element model of electrical circuits. They were first described in 1845 by German physicist Gustav Kirchhoff. This generalized the work of Georg Ohm and preceded the work of James Clerk Maxwell. Widely used in electrical engineering, they are also called Kirchhoff's rules or simply Kirchhoff's laws. These laws can be applied in time and frequency domains and form the basis for network analysis. Both of Kirchhoff's laws can be understood as corollaries of Maxwell's equations in the low-frequency limit. They are accurate for DC circuits, and for AC circuits at frequencies where the wavelengths of electromagnetic radiation are very large compared to the circuits. Kirchhoff's current law This law, also called Kirchhoff's first law, or Kirchhoff's junction rule, states that, for any node (junction) in an electrical circuit, the sum of currents flowing into that node is equal to the sum of currents flowing out of that node; or equivalently: The algebraic sum of currents in a network of conductors meeting at a point is zero. Recalling that current is a signed (positive or negative) quantity reflecting direction towards or away from a node, this principle can be succinctly stated as I1 + I2 + ⋯ + In = 0, where n is the total number of branches with currents flowing towards or away from the node. Kirchhoff's circuit laws were originally obtained from experimental results. However, the current law can be viewed as an extension of the conservation of charge, since charge is the product of current and the time the current has been flowing. If the net charge in a region is constant, the current law will hold on the boundaries of the region. This means that the current law relies on the fact that the net charge in the wires and components is constant. Uses A matrix version of Kirchhoff's current law is the basis of most circuit simulation software, such as SPICE. The current law is used with Ohm's law to perform nodal analysis. The current law is applicable to any lumped network irrespective of the nature of the network; whether unilateral or bilateral, active or passive, linear or non-linear. Kirchhoff's voltage law This law, also called Kirchhoff's second law, or Kirchhoff's loop rule, states the following: The directed sum of the potential differences (voltages) around any closed loop is zero. Similarly to Kirchhoff's current law, the voltage law can be stated as V1 + V2 + ⋯ + Vn = 0, where n is the total number of voltages measured around the loop. Generalization In the low-frequency limit, the voltage drop around any loop is zero. This includes imaginary loops arranged arbitrarily in space – not limited to the loops delineated by the circuit elements and conductors. In the low-frequency limit, this is a corollary of Faraday's law of induction (which is one of Maxwell's equations). This has practical application in situations involving "static electricity". Limitations Kirchhoff's circuit laws are the result of the lumped-element model and both depend on the model being applicable to the circuit in question. When the model is not applicable, the laws do not apply. The current law is dependent on the assumption that the net charge in any wire, junction or lumped component is constant. Whenever the electric field between parts of the circuit is non-negligible, such as when two wires are capacitively coupled, this may not be the case.
This occurs in high-frequency AC circuits, where the lumped element model is no longer applicable. For example, in a transmission line, the charge density in the conductor may be constantly changing. On the other hand, the voltage law relies on the fact that the actions of time-varying magnetic fields are confined to individual components, such as inductors. In reality, the induced electric field produced by an inductor is not confined, but the leaked fields are often negligible. Modelling real circuits with lumped elements The lumped element approximation for a circuit is accurate at low frequencies. At higher frequencies, leaked fluxes and varying charge densities in conductors become significant. To an extent, it is possible to still model such circuits using parasitic components. If frequencies are too high, it may be more appropriate to simulate the fields directly using finite element modelling or other techniques. To model circuits so that both laws can still be used, it is important to understand the distinction between physical circuit elements and the ideal lumped elements. For example, a wire is not an ideal conductor. Unlike an ideal conductor, wires can inductively and capacitively couple to each other (and to themselves), and have a finite propagation delay. Real conductors can be modeled in terms of lumped elements by considering parasitic capacitances distributed between the conductors to model capacitive coupling, or parasitic (mutual) inductances to model inductive coupling. Wires also have some self-inductance. Example Assume an electric network consisting of two voltage sources and three resistors. Applying the first law at the node where the three branches meet gives one equation relating the three branch currents, and applying the second law to each of the two closed loops, substituting for the voltages across the resistors using Ohm's law, gives two more. Together these form a system of three linear equations in the three unknown branch currents, which can be solved directly; a numerical sketch of this procedure, with assumed component values, is given below. If a current in the solution comes out negative, the direction assumed for that current was incorrect, and it actually flows in the direction opposite to the assumed one.
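The worked example above reduces to a small linear system that can be solved numerically. The following Python sketch (using NumPy) illustrates the same procedure for a hypothetical two-source, three-resistor network; the component values, the loop orientations, and the names i1, i2, i3 are assumptions chosen for illustration and are not taken from the original example.

import numpy as np

# Assumed, illustrative component values (not from the original figure):
R1, R2, R3 = 100.0, 200.0, 300.0   # resistances in ohms
E1, E2 = 3.0, 4.0                  # source EMFs in volts

# Unknown branch currents: i1 (through R1), i2 (through R2), i3 (through R3).
# KCL at the node joining the three branches:  i1 - i2 - i3 = 0
# KVL around the left loop (E1, R1, R2):       R1*i1 + R2*i2 = E1
# KVL around the right loop (R2, R3, E2):      -R2*i2 + R3*i3 = E2
A = np.array([[1.0, -1.0, -1.0],
              [R1,    R2,  0.0],
              [0.0,  -R2,   R3]])
b = np.array([0.0, E1, E2])

i1, i2, i3 = np.linalg.solve(A, b)
print(f"i1={i1:.6f} A, i2={i2:.6f} A, i3={i3:.6f} A")
# A negative result would simply mean the assumed direction of that current is reversed.

Assembling one current equation per independent node and one voltage equation per independent loop into a matrix, as done here by hand, is essentially the procedure that SPICE-style simulators automate.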
Physical sciences
Electrical circuits
null
591394
https://en.wikipedia.org/wiki/Principle%20of%20explosion
Principle of explosion
In classical logic, intuitionistic logic, and similar logical systems, the principle of explosion is the law according to which any statement can be proven from a contradiction. That is, from a contradiction, any proposition (including its negation) can be inferred; this is known as deductive explosion. The proof of this principle was first given by 12th-century French philosopher William of Soissons. Due to the principle of explosion, the existence of a contradiction (inconsistency) in a formal axiomatic system is disastrous; since any statement can be proven, it trivializes the concepts of truth and falsity. Around the turn of the 20th century, the discovery of contradictions such as Russell's paradox at the foundations of mathematics thus threatened the entire structure of mathematics. Mathematicians such as Gottlob Frege, Ernst Zermelo, Abraham Fraenkel, and Thoralf Skolem put much effort into revising set theory to eliminate these contradictions, resulting in the modern Zermelo–Fraenkel set theory. As a demonstration of the principle, consider two contradictory statements—"All lemons are yellow" and "Not all lemons are yellow"—and suppose that both are true. If that is the case, anything can be proven, e.g., the assertion that "unicorns exist", by using the following argument: We know that "Not all lemons are yellow", as it has been assumed to be true. We know that "All lemons are yellow", as it has been assumed to be true. Therefore, the two-part statement "All lemons are yellow or unicorns exist" must also be true, since the first part of the statement ("All lemons are yellow") has already been assumed, and the use of "or" means that if even one part of the statement is true, the statement as a whole must be true as well. However, since we also know that "Not all lemons are yellow" (as this has been assumed), the first part is false, and hence the second part must be true to ensure that the two-part statement is true, i.e., unicorns exist (this inference is known as disjunctive syllogism). The procedure may be repeated to prove that unicorns do not exist (hence proving an additional contradiction where unicorns do and do not exist), as well as any other well-formed formula. Thus, there is an explosion of true statements. In a different solution to the problems posed by the principle of explosion, some mathematicians have devised alternative theories of logic called paraconsistent logics, which allow some contradictory statements to be proven without affecting the truth value of (all) other statements. Symbolic representation In symbolic logic, the principle of explosion can be expressed schematically as $P, \lnot P \vdash Q$: for any statements $P$ and $Q$, if $P$ and not-$P$ are both true, then $Q$ follows. Proof Below is the Lewis argument, a formal proof of the principle of explosion using symbolic logic. This proof was published by C. I. Lewis and is named after him, though versions of it were known to medieval logicians. This is just the symbolic version of the informal argument given in the introduction, with $P$ standing for "all lemons are yellow" and $Q$ standing for "unicorns exist". We start out by assuming that (1) all lemons are yellow and that (2) not all lemons are yellow. From the proposition that all lemons are yellow, we infer that (3) either all lemons are yellow or unicorns exist. But then from this and the fact that not all lemons are yellow, we infer that (4) unicorns exist by disjunctive syllogism. Semantic argument An alternate argument for the principle stems from model theory.
A sentence $Q$ is a semantic consequence of a set of sentences $\Gamma$ only if every model of $\Gamma$ is a model of $Q$. However, there is no model of the contradictory set $\{P, \lnot P\}$. A fortiori, there is no model of $\{P, \lnot P\}$ that is not a model of $Q$. Thus, vacuously, every model of $\{P, \lnot P\}$ is a model of $Q$. Thus $Q$ is a semantic consequence of $\{P, \lnot P\}$. Paraconsistent logic Paraconsistent logics have been developed that allow for subcontrary-forming operators. Model-theoretic paraconsistent logicians often deny the assumption that there can be no model of $\{P, \lnot P\}$ and devise semantical systems in which there are such models. Alternatively, they reject the idea that propositions can be classified as true or false. Proof-theoretic paraconsistent logics usually deny the validity of one of the steps necessary for deriving an explosion, typically including disjunctive syllogism, disjunction introduction, and reductio ad absurdum. Usage The metamathematical value of the principle of explosion is that for any logical system where this principle holds, any derived theory which proves ⊥ (or an equivalent form, such as $P \land \lnot P$) is worthless because all its statements would become theorems, making it impossible to distinguish truth from falsehood. That is to say, the principle of explosion is an argument for the law of non-contradiction in classical logic, because without it all truth statements become meaningless. The reduction in proof strength of logics without the principle of explosion is discussed in minimal logic.
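The Lewis argument is short enough to be checked mechanically. Below is a minimal sketch in Lean 4, with P playing the role of "all lemons are yellow" and Q of "unicorns exist"; it assumes only core-library lemmas (Or.inl for disjunction introduction and Or.resolve_left for disjunctive syllogism).

-- From P and ¬P, derive an arbitrary Q (the principle of explosion).
theorem explosion (P Q : Prop) (hP : P) (hnP : ¬P) : Q :=
  -- Step 3: disjunction introduction gives P ∨ Q from P.
  -- Step 4: disjunctive syllogism with ¬P then yields Q.
  Or.resolve_left (Or.inl hP : P ∨ Q) hnP

The one-liner absurd hP hnP proves the same theorem directly, but the version above mirrors the two steps of the Lewis argument.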
Mathematics
Mathematical logic
null
591839
https://en.wikipedia.org/wiki/Flammulina%20filiformis
Flammulina filiformis
Flammulina filiformis, commonly called enoki mushroom, is a species of edible agaric (gilled mushroom) in the family Physalacriaceae. It is widely cultivated in East Asia, and well known for its role in Japanese and Chinese cuisine. Until recently, the species was considered to be conspecific with the European Flammulina velutipes, but DNA sequencing has shown that the two are distinct. Etymology In Japanese, the mushroom is known as enoki-take or enoki-dake, both meaning "hackberry mushroom". This is because it is often found growing at the base of hackberry (enoki) trees. In Mandarin Chinese, the mushroom is called jīnzhēngū ("gold needle mushroom") or jīngū (金菇 "gold mushroom"). In Korean, it is called paengi beoseot (팽이버섯), which means "mushroom planted near catalpa". In Vietnamese it is known as nấm kim châm. In India it is called futu. Description Basidiocarps are agaricoid and grow in clusters. Individual fruit bodies are up to tall, the cap convex at first, becoming flat when expanded, up to across. The cap surface is smooth, viscid when damp, ochraceous yellow to yellow-brown. The lamellae (gills) are cream to yellowish white. The stipe (stem) is smooth, pale yellow at the apex, yellow-brown to dark brown towards the base, and lacking a ring. The spore print is white, the spores (under a microscope) smooth, inamyloid, ellipsoid to cylindrical, c. 5 to 7 by 3 to 3.5 μm. There is a significant difference in appearance between wild and cultivated basidiocarps. Cultivated enokitake are not exposed to light, resulting in white or pallid fruit bodies with long stipes and small caps. Taxonomy Flammulina filiformis was originally described from China in 2015 as a variety of F. velutipes, based on internal transcribed spacer sequences. Further molecular research using a combination of different sequences has shown that F. filiformis and F. velutipes are distinct and should be recognized as separate species. Distribution and habitat The fungus is found on dead wood of Betula platyphylla, Broussonetia papyrifera, Dipentodon sinicus, Neolitsea sp., Salix spp., and other broad-leaved trees. It grows naturally in China, Korea, and Japan. Nutritional profile Enoki mushrooms are 88% water, 8% carbohydrates, 3% protein, and contain negligible fat (table). In a 100-gram reference serving, enoki mushrooms provide of food energy and are an excellent source (20% or more of the Daily Value) of the B vitamins thiamine, niacin, and pantothenic acid, while supplying moderate amounts of riboflavin, folate, and phosphorus (table). Potential health benefits The nutritional value of F. filiformis has long been recognised, which makes it an object of interest in current research. F. filiformis is a rich source of carbohydrates, proteins and unsaturated fatty acids, as well as several noteworthy micronutrients and dietary fiber. While its nutritional value and culinary applications are well established, recent studies have begun exploring its potential medicinal properties in greater depth. Several bioactive molecules from various chemical classes have been isolated from F. filiformis extracts, showing promising potential for future applications as nutraceuticals or dietary supplements. Moreover, bioactive polysaccharides derived from F. filiformis have been shown to exhibit a broad spectrum of bioactivities, including anticancer, immunomodulatory, and anti-neurodegenerative effects. However, the precise mechanisms underlying these actions remain unclear and warrant further investigation in future research.
In conclusion, F. filiformis holds significant promise as both a functional food and a nutraceutical, and may serve as an interesting source of bioactive compounds for therapeutic and pharmaceutical purposes. Uses F. filiformis has been cultivated in China since 800 AD. Commercial production in China was estimated at 1.57 million tonnes per annum in 2010, with Japan producing an additional 140,000 tonnes per annum. The fungus can be cultivated on a range of simple, lignocellulosic substrates including sawdust, wheat straw, and paddy straw. Enokitake are typically grown in the dark, producing pallid fruitbodies having long and narrow stipes with undeveloped caps. Exposure to light results in more normal, short-stiped, colored fruitbodies. As food The mushroom is widely eaten in East Asia. Cultivated F. filiformis is sold both fresh and canned. The fungus has a crisp texture and can be refrigerated for approximately one week. It is a common ingredient for soups, especially in East Asian cuisine, but can be used for salads and other dishes. Improved storage F. filiformis extract can be added to whipped cream. This has been observed to slow the development of ice crystals, helping whipped cream keep its quality longer during frozen storage. Nutritionally improved meat products F. filiformis is of interest in current research for its potential to enhance food products and animal feed through the use of its stem waste. Studies indicate that the addition of F. filiformis stem waste powder to meat products can improve nutritional quality by increasing dietary fiber and ash content. This ingredient also enhances tenderness, inhibits lipid and protein oxidation, and extends shelf life, without negatively impacting the texture or flavor of the meat products. Feed additive for livestock Natural feed additives are becoming more important in livestock farming. Following this trend, F. filiformis has been investigated for properties that improve livestock health and production efficiency. Studies show that the use of enoki mushroom residue as a feed additive offers several benefits for livestock: it enhances antioxidant enzyme activity and improves digestibility, hormone levels, and immunity. Adding mushroom residue to the livestock diet can reduce feed cost and the feed conversion ratio and enhance meat quality, providing consumers with healthier and higher-quality meat products. Cultivation and harvest F. filiformis is commonly cultivated on a large, factory-like scale. With modern mechanized processes, over 300,000 tons of F. filiformis can be harvested per year in this way. Indoor cultivation F. filiformis thrives in a warm, moist environment during the incubation phase, with substrate temperatures ranging from 18 to 25°C (64 to 77°F). It needs significantly cooler conditions to trigger fruiting: pinning is triggered at temperatures between 7 and 10°C (45 to 50°F), and the optimal temperature range for fruiting is 10 to 16°C (50 to 61°F). As with most fungi, F. filiformis also demands elevated humidity levels: 95 to 100% during pinning and 85 to 95% during fruiting. The ideal size at which to harvest enoki mushrooms is generally about 2-4 inches in length. At that time, the cap of F. filiformis should still be tightly closed and the stem should be long and sturdy.
When growing enoki mushrooms at home, a sharp knife or scissors can be used to snip off the mushroom cluster at the base of the stem, where it meets the growing medium. It is important to remove both the mushrooms and any remaining mycelium (the white, thread-like structures) from the growing medium during harvest. This helps prevent decay, which could negatively impact future mushroom growth. Post-harvest handling F. filiformis have thin, delicate stems that need to be handled with care to prevent damage. The following steps are a general guide. First, gently brush off any dirt or substrate with a soft brush or a damp cloth. Second, avoid rinsing the mushrooms with water, as this can cause them to absorb moisture, compromising both their texture and flavor. Once cleaned, separate the clusters into individual stems for easier cooking and better presentation. Storage F. filiformis should be kept at temperatures between 7-10°C (44.6-50°F) for optimal freshness. For brief storage (fewer than 7 days), a temperature interval of 1-2°C (34-36°F) with 90-98% relative humidity is advised. Susceptibility to Listeria F. filiformis has the potential to be contaminated with Listeria monocytogenes, which is why disease control centers recommend cooking the mushroom before eating it. The Singapore Food Agency advises the following to ensure food safety when consuming F. filiformis: enoki mushrooms should never be eaten raw; instead, make sure to cook the mushrooms properly before eating them; if cooking directions are provided, follow them; enoki mushrooms should be stored at cold temperatures, even if the packaging has not yet been opened, to slow the growth of microbes; and uncooked enoki mushrooms should be stored separately to avoid cross-contamination.
Biology and health sciences
Edible fungi
Plants
592151
https://en.wikipedia.org/wiki/Even%20and%20odd%20functions
Even and odd functions
In mathematics, an even function is a real function such that $f(-x) = f(x)$ for every $x$ in its domain. Similarly, an odd function is a function such that $f(-x) = -f(x)$ for every $x$ in its domain. They are named for the parity of the powers of the power functions which satisfy each condition: the function $x \mapsto x^n$ is even if n is an even integer, and it is odd if n is an odd integer. Even functions are those real functions whose graph is self-symmetric with respect to the y-axis, and odd functions are those whose graph is self-symmetric with respect to the origin. If the domain of a real function is self-symmetric with respect to the origin, then the function can be uniquely decomposed as the sum of an even function and an odd function. Definition and examples Evenness and oddness are generally considered for real functions, that is, real-valued functions of a real variable. However, the concepts may be more generally defined for functions whose domain and codomain both have a notion of additive inverse. This includes abelian groups, all rings, all fields, and all vector spaces. Thus, for example, a real function could be odd or even (or neither), as could a complex-valued function of a vector variable, and so on. The given examples are real functions, to illustrate the symmetry of their graphs. Even functions A real function $f$ is even if, for every $x$ in its domain, $-x$ is also in its domain and $f(-x) = f(x)$, or equivalently $f(x) - f(-x) = 0$. Geometrically, the graph of an even function is symmetric with respect to the y-axis, meaning that its graph remains unchanged after reflection about the y-axis. Examples of even functions are the absolute value, cosine, the hyperbolic cosine, and the Gaussian function. Odd functions A real function $f$ is odd if, for every $x$ in its domain, $-x$ is also in its domain and $f(-x) = -f(x)$, or equivalently $f(x) + f(-x) = 0$. Geometrically, the graph of an odd function has rotational symmetry with respect to the origin, meaning that its graph remains unchanged after rotation of 180 degrees about the origin. If $0$ is in the domain of an odd function $f$, then $f(0) = 0$. Examples of odd functions are the sign function, the identity function, sine, the hyperbolic sine, and the error function. Basic properties Uniqueness If a function is both even and odd, it is equal to 0 everywhere it is defined. If a function is odd, the absolute value of that function is an even function. Addition and subtraction The sum of two even functions is even. The sum of two odd functions is odd. The difference between two odd functions is odd. The difference between two even functions is even. The sum of an even and an odd function is neither even nor odd, unless one of the functions is equal to zero over the given domain. Multiplication and division The product of two even functions is an even function. That implies that the product of any number of even functions is an even function as well. The product of two odd functions is an even function. The product of an even function and an odd function is an odd function. The quotient of two even functions is an even function. The quotient of two odd functions is an even function. The quotient of an even function and an odd function is an odd function. Composition The composition of two even functions is even. The composition of two odd functions is odd. The composition of an even function and an odd function is even. The composition of any function with an even function is even (but not vice versa).
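Since these definitions and closure rules are simple pointwise conditions, they are easy to check numerically. The following Python sketch illustrates such a check; the sample grid and tolerance are arbitrary choices made for illustration.

import numpy as np

def parity(f, xs=np.linspace(-3, 3, 601), tol=1e-9):
    # Classify f as 'even', 'odd', or 'neither' by comparing f(-x) with +/- f(x).
    fx, fmx = f(xs), f(-xs)
    if np.allclose(fmx, fx, atol=tol):
        return "even"
    if np.allclose(fmx, -fx, atol=tol):
        return "odd"
    return "neither"

print(parity(np.cos))                       # even
print(parity(np.sinh))                      # odd
print(parity(lambda x: x**3 * np.sin(x)))   # product of two odd functions -> even
print(parity(np.exp))                       # neither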
Even–odd decomposition If a real function has a domain that is self-symmetric with respect to the origin, it may be uniquely decomposed as the sum of an even and an odd function, which are called respectively the even part (or the even component) and the odd part (or the odd component) of the function, and are defined by $f_\text{even}(x) = \frac{f(x) + f(-x)}{2}$ and $f_\text{odd}(x) = \frac{f(x) - f(-x)}{2}$. It is straightforward to verify that $f_\text{even}$ is even, $f_\text{odd}$ is odd, and $f = f_\text{even} + f_\text{odd}$. This decomposition is unique since, if $f = g + h$, where $g$ is even and $h$ is odd, then $g = f_\text{even}$ and $h = f_\text{odd}$, since $2f_\text{even}(x) = f(x) + f(-x) = g(x) + h(x) + g(-x) + h(-x) = 2g(x)$ and $2f_\text{odd}(x) = f(x) - f(-x) = g(x) + h(x) - g(-x) - h(-x) = 2h(x)$. For example, the hyperbolic cosine and the hyperbolic sine may be regarded as the even and odd parts of the exponential function, as the first one is an even function, the second one is odd, and $e^x = \cosh x + \sinh x$. Fourier's sine and cosine transforms also perform even–odd decomposition by representing a function's odd part with sine waves (an odd function) and the function's even part with cosine waves (an even function). Further algebraic properties Any linear combination of even functions is even, and the even functions form a vector space over the reals. Similarly, any linear combination of odd functions is odd, and the odd functions also form a vector space over the reals. In fact, the vector space of all real functions is the direct sum of the subspaces of even and odd functions. This is a more abstract way of expressing the property in the preceding section. The space of functions can be considered a graded algebra over the real numbers by this property, as well as some of those above. The even functions form a commutative algebra over the reals. However, the odd functions do not form an algebra over the reals, as they are not closed under multiplication. Analytic properties A function's being odd or even does not imply differentiability, or even continuity. For example, the Dirichlet function is even, but is nowhere continuous. In the following, properties involving derivatives, Fourier series, and Taylor series are considered, and these concepts are thus supposed to be defined for the considered functions. Basic analytic properties The derivative of an even function is odd. The derivative of an odd function is even. The integral of an odd function from −A to +A is zero (where A can be finite or infinite, and the function has no vertical asymptotes between −A and A). For an odd function that is integrable over a symmetric interval, e.g. $[-A, A]$, the result of the integral over that interval is zero; that is, $\int_{-A}^{A} f(x)\,dx = 0$. The integral of an even function from −A to +A is twice the integral from 0 to +A (where A is finite, and the function has no vertical asymptotes between −A and A; this also holds true when A is infinite, but only if the integral converges); that is, $\int_{-A}^{A} f(x)\,dx = 2\int_{0}^{A} f(x)\,dx$. Series The Maclaurin series of an even function includes only even powers. The Maclaurin series of an odd function includes only odd powers. The Fourier series of a periodic even function includes only cosine terms. The Fourier series of a periodic odd function includes only sine terms. The Fourier transform of a purely real-valued even function is real and even. The Fourier transform of a purely real-valued odd function is imaginary and odd. Harmonics In signal processing, harmonic distortion occurs when a sine wave signal is sent through a memory-less nonlinear system, that is, a system whose output at time t only depends on the input at time t and does not depend on the input at any previous times. Such a system is described by a response function $f$.
The type of harmonics produced depends on the response function f: When the response function is even, the resulting signal will consist of only even harmonics of the input sine wave (0f, 2f, 4f, ...); the fundamental is also an odd harmonic, so it will not be present. A simple example is a full-wave rectifier. The 0f component represents the DC offset, due to the one-sided nature of even-symmetric transfer functions. When it is odd, the resulting signal will consist of only odd harmonics of the input sine wave (1f, 3f, 5f, ...); the output signal will be half-wave symmetric. A simple example is clipping in a symmetric push-pull amplifier. When it is asymmetric, the resulting signal may contain either even or odd harmonics. Simple examples are a half-wave rectifier, and clipping in an asymmetrical class-A amplifier. This does not hold true for more complex waveforms. A sawtooth wave contains both even and odd harmonics, for instance. After even-symmetric full-wave rectification, it becomes a triangle wave, which, other than the DC offset, contains only odd harmonics. Generalizations Multivariate functions Even symmetry: A function is called even symmetric if $f(x_1, x_2, \ldots, x_n) = f(-x_1, -x_2, \ldots, -x_n)$ for all points in its domain. Odd symmetry: A function is called odd symmetric if $f(x_1, x_2, \ldots, x_n) = -f(-x_1, -x_2, \ldots, -x_n)$ for all points in its domain. Complex-valued functions The definitions for even and odd symmetry for complex-valued functions of a real argument are similar to the real case. In signal processing, a similar symmetry is sometimes considered, which involves complex conjugation. Conjugate symmetry: A complex-valued function of a real argument is called conjugate symmetric if $f(-x) = \overline{f(x)}$ for all $x$. A complex-valued function is conjugate symmetric if and only if its real part is an even function and its imaginary part is an odd function. A typical example of a conjugate symmetric function is the cis function $x \mapsto \cos x + i\sin x = e^{ix}$. Conjugate antisymmetry: A complex-valued function of a real argument is called conjugate antisymmetric if $f(-x) = -\overline{f(x)}$ for all $x$. A complex-valued function is conjugate antisymmetric if and only if its real part is an odd function and its imaginary part is an even function. Finite length sequences The definitions of odd and even symmetry are extended to N-point sequences (i.e. functions defined for n = 0, 1, ..., N−1) as follows: Even symmetry: An N-point sequence is called conjugate symmetric if Such a sequence is often called a palindromic sequence; see also Palindromic polynomial. Odd symmetry: An N-point sequence is called conjugate antisymmetric if Such a sequence is sometimes called an anti-palindromic sequence; see also Antipalindromic polynomial.
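The even–odd decomposition described earlier in this section is easy to verify numerically. The Python sketch below splits a function into its even and odd parts and, for the exponential function, recovers the hyperbolic cosine and sine; the sample grid is an arbitrary illustrative choice.

import numpy as np

def even_part(f):
    # f_even(x) = (f(x) + f(-x)) / 2
    return lambda x: 0.5 * (f(x) + f(-x))

def odd_part(f):
    # f_odd(x) = (f(x) - f(-x)) / 2
    return lambda x: 0.5 * (f(x) - f(-x))

x = np.linspace(-2.0, 2.0, 401)
fe, fo = even_part(np.exp), odd_part(np.exp)

print(np.allclose(fe(x), np.cosh(x)))         # True: the even part of exp is cosh
print(np.allclose(fo(x), np.sinh(x)))         # True: the odd part of exp is sinh
print(np.allclose(fe(x) + fo(x), np.exp(x)))  # True: the two parts sum back to f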
Mathematics
Functions: General
null
593231
https://en.wikipedia.org/wiki/Trunk%20%28botany%29
Trunk (botany)
In botany, the trunk (or bole) is the stem and main wooden axis of a tree, which is an important feature in tree identification, and which often differs markedly from the bottom of the trunk to the top, depending on the species. The trunk is the most important part of the tree for timber production. Occurrence Trunks occur both in "true" woody plants and non-woody plants such as palms and other monocots, though the internal physiology is different in each case. In all plants, trunks thicken over time due to the formation of secondary growth, or, in monocots, pseudo-secondary growth. Trunks can be vulnerable to damage, including sunburn. Vocabulary Trunks which are cut down for making lumber are generally called logs; if they are cut to a specific length, they are called bolts. The term "log" is informally used in English to describe any felled trunk that is detached from its roots and no longer rooted in the ground. A stump is the part of a trunk that remains in the ground after the tree has been felled, or the earth-end of an uprooted tree that retains its unearthed roots. Structure of the trunk The trunk consists of five main parts: the outer bark, inner bark (phloem), cambium, sapwood (live xylem), and heartwood (dead xylem). From the outside of the tree working in, the first layer is the outer bark, the protective outermost layer of the trunk. Under this is the inner bark, which is called the phloem; the phloem is how the tree transports nutrients between the roots and the shoots. The next layer is the cambium, a very thin layer of undifferentiated cells that divide to replenish the phloem cells on the outside and the xylem cells on the inside. The cambium contains the growth meristem of the trunk. Directly inside of the cambium is the sapwood, or the live xylem cells. These cells transport water through the tree; the xylem also stores starch inside the tree. At the center of the tree is the heartwood. The heartwood is made up of dead xylem cells that have been filled with resins and minerals; these keep other organisms from infecting and growing in the center of the tree.
Biology and health sciences
Plant stem
null
14534679
https://en.wikipedia.org/wiki/Climate%20of%20the%20Arctic
Climate of the Arctic
The climate of the Arctic is characterized by long, cold winters and short, cool summers. There is a large amount of variability in climate across the Arctic, but all regions experience extremes of solar radiation in both summer and winter. Some parts of the Arctic are covered by ice (sea ice, glacial ice, or snow) year-round, and nearly all parts of the Arctic experience long periods with some form of ice on the surface. The Arctic consists of ocean that is largely surrounded by land. As such, the climate of much of the Arctic is moderated by the ocean water, which can never have a temperature below . In winter, this relatively warm water, even though covered by the polar ice pack, keeps the North Pole from being the coldest place in the Northern Hemisphere, and it is also part of the reason that Antarctica is so much colder than the Arctic. In summer, the presence of the nearby water keeps coastal areas from warming as much as they might otherwise. Overview of the Arctic There are different definitions of the Arctic. The most widely used definition, the area north of the Arctic Circle, where the sun does not set on the June Solstice, is used in astronomical and some geographical contexts. However the two most widely used definitions in the context of climate are the area north of the northern tree line, and the area in which the average summer temperature is less than , which are nearly coincident over most land areas (NSIDC ). This definition of the Arctic can be further divided into four different regions: The Arctic Basin includes the Arctic Ocean within the average minimum extent of sea ice . The Canadian Arctic Archipelago includes the large and small islands, except Greenland, on the Canadian side of the Arctic, and the waters between them. The entire island of Greenland, although its ice sheet and ice-free coastal regions have different climatic conditions. The Arctic waters that are not sea ice in late summer, including Hudson Bay, Baffin Bay, Ungava Bay, the Davis, Denmark, Hudson and Bering Straits, and the Labrador, Norwegian, (ice-free all year), Greenland, Baltic, Barents (southern part ice-free all year), Kara, Laptev, Chukchi, Okhotsk, sometimes Beaufort and Bering Seas. Moving inland from the coast over mainland North America and Eurasia, the moderating influence of the Arctic Ocean quickly diminishes, and the climate transitions from the Arctic to subarctic, generally, in less than , and often over a much shorter distance. History of Arctic climate observation Due to the lack of major population centres in the Arctic, weather and climate observations from the region tend to be widely spaced and of short duration compared to the midlatitudes and tropics. Though the Vikings explored parts of the Arctic over a millennium ago, and small numbers of people have been living along the Arctic coast for much longer, scientific knowledge about the region was slow to develop; the large islands of Severnaya Zemlya, just north of the Taymyr Peninsula on the Russian mainland, were not discovered until 1913, and not mapped until the early 1930s Early European exploration Much of the historical exploration in the Arctic was motivated by the search for the Northwest and Northeast Passages. Sixteenth- and seventeenth-century expeditions were largely driven by traders in search of these shortcuts between the Atlantic and the Pacific. 
These forays into the Arctic did not venture far from the North American and Eurasian coasts, and were unsuccessful at finding a navigable route through either passage. National and commercial expeditions continued to expand the detail on maps of the Arctic through the eighteenth century, but largely neglected other scientific observations. Expeditions from the 1760s to the middle of the 19th century were also led astray by attempts to sail north because of the belief by many at the time that the ocean surrounding the North Pole was ice-free. These early explorations did provide a sense of the sea ice conditions in the Arctic and occasionally some other climate-related information. By the early 19th century some expeditions were making a point of collecting more detailed meteorological, oceanographic, and geomagnetic observations, but they remained sporadic. Beginning in the 1850s regular meteorological observations became more common in many countries, and the British navy implemented a system of detailed observation. As a result, expeditions from the second half of the nineteenth century began to provide a picture of the Arctic climate. Early European observing efforts The first major effort by Europeans to study the meteorology of the Arctic was the First International Polar Year (IPY) in 1882 to 1883. Eleven nations provided support to establish twelve observing stations around the Arctic. The observations were not as widespread or long-lasting as would be needed to describe the climate in detail, but they provided the first cohesive look at the Arctic weather. In 1884 the wreckage of the Jeannette, a ship abandoned three years earlier off Russia's eastern Arctic coast, was found on the coast of Greenland. This caused Fridtjof Nansen to realize that the sea ice was moving from the Siberian side of the Arctic to the Atlantic side. He decided to use this motion by freezing a specially designed ship, the Fram, into the sea ice and allowing it to be carried across the ocean. Meteorological observations were collected from the ship during its crossing from September 1893 to August 1896. This expedition also provided valuable insight into the circulation of the ice surface of the Arctic Ocean. In the early 1930s the first significant meteorological studies were carried out on the interior of the Greenland ice sheet. These provided knowledge of perhaps the most extreme climate of the Arctic, and also the first suggestion that the ice sheet lies in a depression of the bedrock below (now known to be caused by the weight of the ice itself). Fifty years after the first IPY, in 1932 to 1933, a second IPY was organized. This one was larger than the first, with 94 meteorological stations, but World War II delayed or prevented the publication of much of the data collected during it. Another significant moment in Arctic observing before World War II occurred in 1937 when the USSR established the first of over 30 North-Pole drifting stations. This station, like the later ones, was established on a thick ice floe and drifted for almost a year, its crew observing the atmosphere and ocean along the way. Cold-War era observations Following World War II, the Arctic, lying between the USSR and North America, became a front line of the Cold War, inadvertently and significantly furthering our understanding of its climate.
Between 1947 and 1957, the United States and Canadian governments established a chain of stations along the Arctic coast known as the Distant Early Warning Line (DEWLINE) to provide warning of a Soviet nuclear attack. Many of these stations also collected meteorological data. The Soviet Union was also interested in the Arctic and established a significant presence there by continuing the North-Pole drifting stations. This program operated continuously, with 30 stations in the Arctic from 1950 to 1991. These stations collected data that are valuable to this day for understanding the climate of the Arctic Basin. This map shows the location of Arctic research facilities during the mid-1970s and the tracks of drifting stations between 1958 and 1975. Another benefit from the Cold War was the acquisition of observations from United States and Soviet naval voyages into the Arctic. In 1958 an American nuclear submarine, the Nautilus was the first ship to reach the North Pole. In the decades that followed submarines regularly roamed under the Arctic sea ice, collecting sonar observations of the ice thickness and extent as they went. These data became available after the Cold War, and have provided evidence of thinning of the Arctic sea ice. The Soviet navy also operated in the Arctic, including a sailing of the nuclear-powered ice breaker Arktika to the North Pole in 1977, the first time a surface ship reached the pole. Scientific expeditions to the Arctic also became more common during the Cold-War decades, sometimes benefiting logistically or financially from the military interest. In 1966 the first deep ice core in Greenland was drilled at Camp Century, providing a glimpse of climate through the last ice age. This record was lengthened in the early 1990s when two deeper cores were taken from near the center of the Greenland Ice Sheet. Beginning in 1979 the Arctic Ocean Buoy Program (the International Arctic Buoy Program since 1991) has been collecting meteorological and ice-drift data across the Arctic Ocean with a network of 20 to 30 buoys. Satellite era The end of the Soviet Union in 1991 led to a dramatic decrease in regular observations from the Arctic. The Russian government ended the system of drifting North Pole stations, and closed many of the surface stations in the Russian Arctic. Likewise the United States and Canadian governments cut back on spending for Arctic observing as the perceived need for the DEWLINE declined. As a result, the most complete collection of surface observations from the Arctic is for the period 1960 to 1990. The extensive array of satellite-based remote-sensing instruments now in orbit has helped to replace some of the observations that were lost after the Cold War, and has provided coverage that was impossible without them. Routine satellite observations of the Arctic began in the early 1970s, expanding and improving ever since. A result of these observations is a thorough record of sea-ice extent in the Arctic since 1979; the decreasing extent seen in this record (NASA , NSIDC), and its possible link to anthropogenic global warming, has helped increase interest in the Arctic in recent years. Today's satellite instruments provide routine views of not only cloud, snow, and sea-ice conditions in the Arctic, but also of other, perhaps less-expected, variables, including surface and atmospheric temperatures, atmospheric moisture content, winds, and ozone concentration. 
Civilian scientific research on the ground has certainly continued in the Arctic, and it is getting a boost from 2007 to 2009 as nations around the world increase spending on polar research as part of the third International Polar Year. During these two years thousands of scientists from over 60 nations will co-operate to carry out over 200 projects to learn about physical, biological, and social aspects of the Arctic and Antarctic (IPY). Modern researchers in the Arctic also benefit from computer models. These pieces of software are sometimes relatively simple, but often become highly complex as scientists try to include more and more elements of the environment to make the results more realistic. The models, though imperfect, often provide valuable insight into climate-related questions that cannot be tested in the real world. They are also used to try to predict future climate and the effect that changes to the atmosphere caused by humans may have on the Arctic and beyond. Another interesting use of models has been to use them, along with historical data, to produce a best estimate of the weather conditions over the entire globe during the last 50 years, filling in regions where no observations were made (ECMWF). These reanalysis datasets help compensate for the lack of observations over the Arctic. Solar radiation Almost all of the energy available to the Earth's surface and atmosphere comes from the sun in the form of solar radiation (light from the sun, including invisible ultraviolet and infrared light). Variations in the amount of solar radiation reaching different parts of the Earth are a principal driver of global and regional climate. Latitude is the most important factor determining the yearly average amount of solar radiation reaching the top of the atmosphere; the incident solar radiation decreases smoothly from the Equator to the poles. Therefore, temperature tends to decrease with increasing latitude. In addition the length of each day, which is determined by the season, has a significant impact on the climate. The 24-hour days found near the poles in summer result in a large daily-average solar flux reaching the top of the atmosphere in these regions. On the June solstice 36% more solar radiation reaches the top of the atmosphere over the course of the day at the North Pole than at the Equator. However, in the six months from the September equinox to March equinox the North Pole receives no sunlight. The climate of the Arctic also depends on the amount of sunlight reaching the surface, and being absorbed by the surface. Variations in cloud cover can cause significant variations in the amount of solar radiation reaching the surface at locations with the same latitude. Differences in surface albedo due for example to presence or absence of snow and ice strongly affect the fraction of the solar radiation reaching the surface that is reflected rather than absorbed. Winter During the winter months of November through February, the sun remains very low in the sky in the Arctic or does not rise at all. Where it does rise, the days are short, and the sun's low position in the sky means that, even at noon, not much energy is reaching the surface. Furthermore, most of the small amount of solar radiation that reaches the surface is reflected away by the bright snow cover. Cold snow reflects between 70% and 90% of the solar radiation that reaches it, and snow covers most of the Arctic land and ice surface in winter. 
These factors result in a negligible input of solar energy to the Arctic in winter; the only things keeping the Arctic from continuously cooling all winter are the transport of warmer air and ocean water into the Arctic from the south and the transfer of heat from the subsurface land and ocean (both of which gain heat in summer and release it in winter) to the surface and atmosphere. Spring Arctic days lengthen rapidly in March and April, and the sun rises higher in the sky, both bringing more solar radiation to the Arctic than in winter. During these early months of Northern Hemisphere spring most of the Arctic is still experiencing winter conditions, but with the addition of sunlight. The continued low temperatures, and the persisting white snow cover, mean that this additional energy reaching the Arctic from the sun is slow to have a significant impact because it is mostly reflected away without warming the surface. By May, temperatures are rising, as 24-hour daylight reaches many areas, but most of the Arctic is still snow-covered, so the Arctic surface reflects more than 70% of the sun's energy that reaches it over all areas but the Norwegian Sea and southern Bering Sea, where the ocean is ice free, and some of the land areas adjacent to these seas, where the moderating influence of the open water helps melt the snow early. In most of the Arctic the significant snow melt begins in late May or sometime in June. This begins a feedback, as melting snow reflects less solar radiation (50% to 60%) than dry snow, allowing more energy to be absorbed and the melting to take place faster. As the snow disappears on land, the underlying surfaces absorb even more energy, and begin to warm rapidly. Summer At the North Pole on the June solstice, around 21 June, the sun circles at 23.5° above the horizon. This marks noon in the Pole's year-long day; from then until the September equinox, the sun will slowly approach nearer and nearer the horizon, offering less and less solar radiation to the Pole. This period of setting sun also roughly corresponds to summer in the Arctic. As the Arctic continues receiving energy from the sun during this time, the land, which is mostly free of snow by now, can warm up on clear days when the wind is not coming from the cold ocean. Over the Arctic Ocean the snow cover on the sea ice disappears and ponds of melt water start to form on the sea ice, further reducing the amount of sunlight the ice reflects and helping more ice melt. Around the edges of the Arctic Ocean the ice will melt and break up, exposing the ocean water, which absorbs almost all of the solar radiation that reaches it, storing the energy in the water column. By July and August, most of the land is bare and absorbs more than 80% of the sun's energy that reaches the surface. Where sea ice remains, in the central Arctic Basin and the straits between the islands in the Canadian Archipelago, the many melt ponds and lack of snow cause about half of the sun's energy to be absorbed, but this mostly goes toward melting ice since the ice surface cannot warm above freezing. Frequent cloud cover, exceeding 80% frequency over much of the Arctic Ocean in July, reduces the amount of solar radiation that reaches the surface by reflecting much of it before it gets to the surface. Unusual clear periods can lead to increased sea-ice melt or higher temperatures (NSIDC ). Greenland: The interior of Greenland differs from the rest of the Arctic. 
Low spring and summer cloud frequency and the high elevation, which reduces the amount of solar radiation absorbed or scattered by the atmosphere, combine to give this region the most incoming solar radiation at the surface out of anywhere in the Arctic. However, the high elevation, and corresponding lower temperatures, help keep the bright snow from melting, limiting the warming effect of all this solar radiation. Autumn In September and October the days get rapidly shorter, and in northern areas the sun disappears from the sky entirely. As the amount of solar radiation available to the surface rapidly decreases, the temperatures follow suit. The sea ice begins to refreeze, and eventually gets a fresh snow cover, causing it to reflect even more of the dwindling amount of sunlight reaching it. Likewise, in the beginning of September both the northern and southern land areas receive their winter snow cover, which, combined with the reduced solar radiation at the surface, ensures an end to the warm days those areas may experience in summer. By November, winter is in full swing in most of the Arctic, and the small amount of solar radiation still reaching the region does not play a significant role in its climate. Temperature The Arctic is often perceived as a region stuck in a permanent deep freeze. While much of the region does experience very low temperatures, there is considerable variability with both location and season. Winter temperatures average below freezing over all of the Arctic except for small regions in the southern Norwegian and Bering Seas, which remain ice free throughout the winter. Average temperatures in summer are above freezing over all regions except the central Arctic Basin, where sea ice survives through the summer, and interior Greenland. The maps show the average temperature over the Arctic in January and July, generally the coldest and warmest months. These maps were made with data from the NCEP/NCAR Reanalysis, which incorporates available data into a computer model to create a consistent global data set. Neither the models nor the data are perfect, so these maps may differ from other estimates of surface temperatures; in particular, most Arctic climatologies show temperatures over the central Arctic Ocean in July averaging just below freezing, a few degrees lower than these maps show (USSR, 1985). An earlier climatology of temperatures in the Arctic, based entirely on available data, is shown in this map from the CIA Polar Regions Atlas. Record low temperatures in the Northern Hemisphere In 2020 the World Meteorological Organization recognized a temperature of , measured near the topographic summit of the Greenland Ice Sheet on 22 December 1991, as the lowest in the Northern Hemisphere. The record was measured at an automatic weather station and was uncovered after nearly 30 years. The interior of Russia's Far East, in the upper-right quadrant of the maps, is also among the coldest locations in the Northern Hemisphere. This is due to the region's continental climate, far from the moderating influence of the ocean, and to the valleys in the region that can trap cold, dense air and create strong temperature inversions, where the temperature increases, rather than decreases, with height.
The lowest officially recorded temperatures in the Northern Hemisphere is which occurred in Oymyakon on 6 February 1933, as well as in Verkhoyansk on 5 and 7 February 1892, respectively. However, this region is not part of the Arctic because its continental climate also allows it to have warm summers, with an average July temperature of . In the figure below showing station climatologies, the plot for Yakutsk is representative of this part of the Far East; Yakutsk has a slightly less extreme climate than Verkhoyansk. Arctic Basin The Arctic Basin is typically covered by sea ice year round, which strongly influences its summer temperatures. It also experiences the longest period without sunlight of any part of the Arctic, and the longest period of continuous sunlight, though the frequent cloudiness in summer reduces the importance of this solar radiation. Despite its location centered on the North Pole, and the long period of darkness this brings, this is not the coldest part of the Arctic. In winter, the heat transferred from the water through cracks in the ice and areas of open water helps to moderate the climate some, keeping average winter temperatures around . Minimum temperatures in this region in winter are around . In summer, the sea ice keeps the surface from warming above freezing. Sea ice is mostly fresh water since the salt is rejected by the ice as it forms, so the melting ice has a temperature of , and any extra energy from the sun goes to melting more ice, not to warming the surface. Air temperatures, at the standard measuring height of about 2 meters above the surface, can rise a few degrees above freezing between late May and September, though they tend to be within a degree of freezing, with very little variability during the height of the melt season. In the figure above showing station climatologies, the lower-left plot, for NP 7–8, is representative of conditions over the Arctic Basin. This plot shows data from the Soviet North Pole drifting stations, numbers 7 and 8. It shows the average temperature in the coldest months is in the −30s, and the temperature rises rapidly from April to May; July is the warmest month, and the narrowing of the maximum and minimum temperature lines shows the temperature does not vary far from freezing in the middle of summer; from August through December the temperature drops steadily. The small daily temperature range (the length of the vertical bars) results from the fact that the sun's elevation above the horizon does not change much or at all in this region during one day. Much of the winter variability in this region is due to clouds. Since there is no sunlight, the thermal radiation emitted by the atmosphere is one of this region's main sources of energy in winter. A cloudy sky can emit much more energy toward the surface than a clear sky, so when it is cloudy in winter, this region tends to be warm, and when it is clear, this region cools quickly. Canadian Archipelago In winter, the Canadian Archipelago experiences temperatures similar to those in the Arctic Basin, but in the summer months of June to August, the presence of so much land in this region allows it to warm more than the ice-covered Arctic Basin. In the station-climatology figure above, the plot for Resolute is typical of this region. The presence of the islands, most of which lose their snow cover in summer, allows the summer temperatures to rise well above freezing. 
The average high temperature in summer approaches , and the average low temperature in July is above freezing, though temperatures below freezing are observed every month of the year. The straits between these islands often remain covered by sea ice throughout the summer. This ice acts to keep the surface temperature at freezing, just as it does over the Arctic Basin, so a location on a strait would likely have a summer climate more like the Arctic Basin, but with higher maximum temperatures because of winds off of the nearby warm islands. Greenland Climatically, Greenland is divided into two very separate regions: the coastal region, much of which is ice free, and the inland ice sheet. The Greenland Ice Sheet covers about 80% of Greenland, extending to the coast in places, and has an average elevation of and a maximum elevation of . Much of the ice sheet remains below freezing all year, and it has the coldest climate of any part of the Arctic. Coastal areas can be affected by nearby open water, or by heat transfer through sea ice from the ocean, and many parts lose their snow cover in summer, allowing them to absorb more solar radiation and warm more than the interior. Coastal regions on the northern half of Greenland experience winter temperatures similar to or slightly warmer than the Canadian Archipelago, with average January temperatures of . These regions are slightly warmer than the Archipelago because of their closer proximity to areas of thin, first-year sea ice cover or to open ocean in the Baffin Bay and Greenland Sea. The coastal regions in the southern part of the island are influenced more by open ocean water and by frequent passage of cyclones, both of which help to keep the temperature there from being as low as in the north. As a result of these influences, the average temperature in these areas in January is considerably higher, between about . The interior ice sheet escapes much of the influence of heat transfer from the ocean or from cyclones, and its high elevation also acts to give it a colder climate since temperatures tend to decrease with elevation. The result is winter temperatures that are lower than anywhere else in the Arctic, with average January temperatures of , depending on location and on which data set is viewed. Minimum temperatures in winter over the higher parts of the ice sheet can drop below (CIA, 1978). In the station climatology figure above, the Centrale plot is representative of the high Greenland Ice Sheet. In summer, the coastal regions of Greenland experience temperatures similar to the islands in the Canadian Archipelago, averaging just a few degrees above freezing in July, with slightly higher temperatures in the south and west than in the north and east. The interior ice sheet remains snow-covered throughout the summer, though significant portions do experience some snow melt. This snow cover, combined with the ice sheet's elevation, help to keep temperatures here lower, with July averages between . Along the coast, temperatures are kept from varying too much by the moderating influence of the nearby water or melting sea ice. In the interior, temperatures are kept from rising much above freezing because of the snow-covered surface but can drop to even in July. Temperatures above 20 °C are rare but do sometimes occur in the far south and south-west coastal areas. Ice-free seas Most Arctic seas are covered by ice for part of the year (see the map in the sea-ice section below); 'ice-free' here refers to those which are not covered year-round. 
The only regions that remain ice-free throughout the year are the southern part of the Barents Sea and most of the Norwegian Sea. These have very small annual temperature variations; average winter temperatures are kept near or above the freezing point of sea water (about ) since the unfrozen ocean cannot have a temperature below that, and summer temperatures in the parts of these regions that are considered part of the Arctic average less than . During the 46-year period when weather records were kept on Shemya Island, in the southern Bering Sea, the average temperature of the coldest month (February) was and that of the warmest month (August) was ; temperatures never dropped below or rose above ; Western Regional Climate Center) The rest of the seas have ice cover for some part of the winter and spring, but lose that ice during the summer. These regions have summer temperatures between about . The winter ice cover allows temperatures to drop much lower in these regions than in the regions that are ice-free all year. Over most of the seas that are ice-covered seasonally, winter temperatures average between about . Those areas near the sea-ice edge will remain somewhat warmer due to the moderating influence of the nearby open water. In the station-climatology figure above, the plots for Point Barrow, Tiksi, Murmansk, and Isfjord are typical of land areas adjacent to seas that are ice-covered seasonally. The presence of the land allows temperatures to reach slightly more extreme values than the seas themselves. An essentially ice-free Arctic may be a reality in the month of September, anywhere from 2050 to 2100. Precipitation Precipitation in most of the Arctic falls only as rain and snow. Over most areas snow is the dominant, or only, form of precipitation in winter, while both rain and snow fall in summer (Serreze and Barry 2005). The main exception to this general description is the high part of the Greenland Ice Sheet, which receives all of its precipitation as snow, in all seasons. Accurate climatologies of precipitation amount are more difficult to compile for the Arctic than climatologies of other variables such as temperature and pressure. All variables are measured at relatively few stations in the Arctic, but precipitation observations are made more uncertain due to the difficulty in catching in a gauge all of the snow that falls. Typically some falling snow is kept from entering precipitation gauges by winds, causing an underreporting of precipitation amounts in regions that receive a large fraction of their precipitation as snowfall. Corrections are made to data to account for this uncaught precipitation, but they are not perfect and introduce some error into the climatologies (Serreze and Barry 2005). The observations that are available show that precipitation amounts vary by about a factor of 10 across the Arctic, with some parts of the Arctic Basin and Canadian Archipelago receiving less than of precipitation annually, and parts of southeast Greenland receiving over annually. Most regions receive less than annually. For comparison, annual precipitation averaged over the whole planet is about ; see Precipitation). Unless otherwise noted, all precipitation amounts given in this article are liquid-equivalent amounts, meaning that frozen precipitation is melted before it is measured. Arctic Basin The Arctic Basin is one of the driest parts of the Arctic. Most of the Basin receives less than of precipitation per year, qualifying it as a desert. 
Smaller regions of the Arctic Basin just north of Svalbard and the Taymyr Peninsula receive up to about per year. Monthly precipitation totals over most of the Arctic Basin average about from November through May, and rise to in July, August, and September. The dry winters result from the low frequency of cyclones in the region during that time, and the region's distance from warm open water that could provide a source of moisture (Serreze and Barry 2005). Despite the low precipitation totals in winter, precipitation frequency is higher in January, when 25% to 35% of observations reported precipitation, than in July, when 20% to 25% of observations reported precipitation (Serreze and Barry 2005). Much of the precipitation reported in winter is very light, possibly diamond dust. The number of days with measurable precipitation (more than 0.1 mm [0.004 in] in a day) is slightly greater in July than in January (USSR 1985). Of January observations reporting precipitation, 95% to 99% indicate it was frozen. In July, 40% to 60% of observations reporting precipitation indicate it was frozen (Serreze and Barry 2005). The parts of the Basin just north of Svalbard and the Taymyr Peninsula are exceptions to the general description just given. These regions receive many weakening cyclones from the North-Atlantic storm track, which is most active in winter. As a result, precipitation amounts over these parts of the basin are larger in winter than those given above. The warm air transported into these regions also means that liquid precipitation is more common than over the rest of the Arctic Basin in both winter and summer. Canadian Archipelago Annual precipitation totals in the Canadian Archipelago increase dramatically from north to south. The northern islands receive similar amounts, with a similar annual cycle, to the central Arctic Basin. Over Baffin Island and the smaller islands around it, annual totals increase from just over in the north to about in the south, where cyclones from the North Atlantic are more frequent. Greenland Annual precipitation amounts given below for Greenland are from Figure 6.5 in Serreze and Barry (2005). Due to the scarcity of long-term weather records in Greenland, especially in the interior, this precipitation climatology was developed by analyzing the annual layers in the snow to determine annual snow accumulation (in liquid equivalent) and was modified on the coast with a model to account for the effects of the terrain on precipitation amounts. The southern third of Greenland protrudes into the North-Atlantic storm track, a region frequently influenced by cyclones. These frequent cyclones lead to larger annual precipitation totals than over most of the Arctic. This is especially true near the coast, where the terrain rises from sea level to over , enhancing precipitation due to orographic lift. The result is annual precipitation totals of over the southern interior to over near the southern and southeastern coasts. Some locations near these coasts, where the terrain is particularly conducive to causing orographic lift, receive up to of precipitation per year. More precipitation falls in winter, when the storm track is most active, than in summer. The west coast of the central third of Greenland is also influenced by some cyclones and orographic lift, and precipitation totals over the ice sheet slope near this coast are up to per year. 
The east coast of the central third of the island receives between of precipitation per year, with increasing amounts from north to south. Precipitation over the north coast is similar to that over the central Arctic Basin. The interior of the central and northern Greenland Ice Sheet is the driest part of the Arctic. Annual totals here range from less than 100 to about 200 mm (4 to 8 in). This region is continuously below freezing, so all precipitation falls as snow, with more in summer than in winter (USSR 1985). Ice-free seas The Chukchi, Laptev, and Kara Seas and Baffin Bay receive somewhat more precipitation than the Arctic Basin, with annual totals between ; annual cycles in the Chukchi and Laptev Seas and Baffin Bay are similar to those in the Arctic Basin, with more precipitation falling in summer than in winter, while the Kara Sea has a smaller annual cycle due to enhanced winter precipitation caused by cyclones from the North Atlantic storm track. The Labrador, Norwegian, Greenland, and Barents Seas and Denmark and Davis Straits are strongly influenced by the cyclones in the North Atlantic storm track, which is most active in winter. As a result, these regions receive more precipitation in winter than in summer. Annual precipitation totals increase quickly from about in the northern to about in the southern part of the region. Precipitation is frequent in winter, with measurable totals falling on an average of 20 days each January in the Norwegian Sea (USSR 1985). The Bering Sea is influenced by the North Pacific storm track, and has annual precipitation totals between , also with a winter maximum. Sea ice Sea ice is frozen sea water that floats on the ocean's surface. It is the dominant surface type throughout the year in the Arctic Basin, and covers much of the ocean surface in the Arctic at some point during the year. The ice may be bare ice, or it may be covered by snow or ponds of melt water, depending on location and time of year. Sea ice is relatively thin, generally less than about , with thicker ridges (NSIDC). NOAA's North Pole Web Cams have been tracking the Arctic summer sea-ice transitions through spring thaw, summer melt ponds, and autumn freeze-up since the first webcam was deployed in 2002. Sea ice is important to the climate and the ocean in a variety of ways. It reduces the transfer of heat from the ocean to the atmosphere; it causes less solar energy to be absorbed at the surface, and provides a surface on which snow can accumulate, which further decreases the absorption of solar energy; since salt is rejected from the ice as it forms, the ice increases the salinity of the ocean's surface water where it forms and decreases the salinity where it melts, both of which can affect the ocean's circulation. The map shows the areas covered by sea ice when it is at its maximum extent (March) and its minimum extent (September). This map was made in the 1970s, and the extent of sea ice has decreased since then (see below), but this still gives a reasonable overview. At its maximum extent, in March, sea ice covers about 15 million km2 (5.8 million sq mi) of the Northern Hemisphere, nearly as much area as the largest country, Russia. Winds and ocean currents cause the sea ice to move. The typical pattern of ice motion is shown on the map at right. 
On average, these motions carry sea ice from the Russian side of the Arctic Ocean into the Atlantic Ocean through the area east of Greenland, while they cause the ice on the North American side to rotate clockwise, sometimes for many years. Wind Wind speeds over the Arctic Basin and the western Canadian Archipelago average between 4 and 6 metres per second (14 and 22 kilometres per hour, 9 and 13 miles per hour) in all seasons. Stronger winds do occur in storms, often causing whiteout conditions, but they rarely exceed 25 m/s (90 km/h, 56 mph) in these areas. During all seasons, the strongest average winds are found in the North-Atlantic seas, Baffin Bay, and Bering and Chukchi Seas, where cyclone activity is most common. On the Atlantic side, the winds are strongest in winter, averaging 7 to 12 m/s (25 to 43 km/h, 16 to 27 mph), and weakest in summer, averaging 5 to 7 m/s (18 to 25 km/h, 11 to 16 mph). On the Pacific side they average 6 to 9 m/s (22 to 32 km/h, 13 to 20 mph) year round. Maximum wind speeds in the Atlantic region can approach 50 m/s (180 km/h, 112 mph) in winter. Changes in climate Past climates As with the rest of the planet, the climate in the Arctic has changed throughout time. About 55 million years ago it is thought that parts of the Arctic supported subtropical ecosystems and that Arctic sea-surface temperatures rose to about during the Paleocene–Eocene Thermal Maximum. In the more recent past, the planet has experienced a series of ice ages and interglacial periods over about the last 2 million years, with the last ice age reaching its maximum extent about 18,000 years ago and ending by about 10,000 years ago. During these ice ages, large areas of northern North America and Eurasia were covered by ice sheets similar to the one found today on Greenland; Arctic climate conditions would have extended much further south, and conditions in the present-day Arctic region were likely colder. Temperature proxies suggest that over the last 8000 years the climate has been stable, with globally averaged temperature variations of less than about (see Paleoclimate). Global warming There are several reasons to expect that climate changes, from whatever cause, may be enhanced in the Arctic, relative to the mid-latitudes and tropics. First is the ice-albedo feedback, whereby an initial warming causes snow and ice to melt, exposing darker surfaces that absorb more sunlight, leading to more warming. Second, because colder air holds less water vapour than warmer air, in the Arctic a greater fraction of any increase in radiation absorbed by the surface goes directly into warming the atmosphere, whereas in the tropics a greater fraction goes into evaporation. Third, because the Arctic temperature structure inhibits vertical air motions, the depth of the atmospheric layer that has to warm in order to cause warming of near-surface air is much shallower in the Arctic than in the tropics. Fourth, a reduction in sea-ice extent will lead to more energy being transferred from the warm ocean to the atmosphere, enhancing the warming. Finally, changes in atmospheric and oceanic circulation patterns caused by a global temperature change may cause more heat to be transferred to the Arctic, enhancing Arctic warming. According to the Intergovernmental Panel on Climate Change (IPCC), "warming of the climate system is unequivocal", and the global-mean temperature has increased by over the last century. 
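To make the ice-albedo feedback described above concrete, the following toy calculation (an illustration, not part of the source material) treats the planet as a zero-dimensional energy-balance model in which albedo falls as temperature rises and ice retreats. The albedo values, the temperature ramp, and the effective emissivity standing in for the greenhouse effect are all assumed parameters, chosen only to show how the same sunlight can support both a colder, more reflective equilibrium and a warmer, darker one.

```python
# Toy zero-dimensional energy-balance model illustrating the ice-albedo feedback.
# All parameter values are illustrative assumptions, not observational figures.
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0              # solar constant, W m^-2; global-mean insolation is S0/4
EMISSIVITY = 0.61        # crude stand-in for the greenhouse effect

def albedo(temp_k, cold=0.6, warm=0.3, t_cold=260.0, t_warm=285.0):
    """Planetary albedo that ramps down as ice and snow melt (assumed values)."""
    if temp_k <= t_cold:
        return cold
    if temp_k >= t_warm:
        return warm
    return cold + (warm - cold) * (temp_k - t_cold) / (t_warm - t_cold)

def equilibrium_temperature(t_guess, iterations=200):
    """Fixed-point iteration on (1 - albedo(T)) * S0/4 = EMISSIVITY * SIGMA * T^4."""
    t = t_guess
    for _ in range(iterations):
        absorbed = (1.0 - albedo(t)) * S0 / 4.0
        t = (absorbed / (EMISSIVITY * SIGMA)) ** 0.25
    return t

# Starting cold (icy, reflective) vs. warm (dark) leads to different equilibria:
print(round(equilibrium_temperature(240.0), 1))  # ~250 K: ice persists, high albedo
print(round(equilibrium_temperature(300.0), 1))  # ~288 K: ice gone, low albedo
```

The point of the sketch is only that a push in either direction is amplified by the change in reflectivity, which is the first of the amplification mechanisms listed above.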
This report also states that "most of the observed increase in global average temperatures since the mid-20th century is very likely [greater than 90% chance] due to the observed increase in anthropogenic greenhouse gas concentrations." The IPCC also indicates that, over the last 100 years, the annually averaged temperature in the Arctic has increased by almost twice as much as the global mean temperature has. In 2009, NASA reported that 45 percent or more of the observed warming in the Arctic since 1976 was likely a result of changes in tiny airborne particles called aerosols. Climate models predict that the temperature increase in the Arctic over the next century will continue to be about twice the global average temperature increase. By the end of the 21st century, the annual average temperature in the Arctic is predicted to increase by , with more warming in winter than in summer. Decreases in sea-ice extent and thickness are expected to continue over the next century, with some models predicting the Arctic Ocean will be free of sea ice in late summer by the mid to late part of the century. A study published in the journal Science in September 2009 determined that temperatures in the Arctic are higher presently than they have been at any time in the previous 2,000 years. Samples from ice cores, tree rings and lake sediments from 23 sites were used by the team, led by Darrell Kaufman of Northern Arizona University, to provide snapshots of the changing climate. Geologists were able to track the summer Arctic temperatures as far back as the time of the Romans by studying natural signals in the landscape. The results highlighted that for around 1,900 years temperatures steadily dropped, driven by the precession of Earth's orbit, which caused the planet to be slightly farther away from the Sun during summer in the Northern Hemisphere. These orbital changes led to a cold period known as the Little Ice Age during the 17th, 18th and 19th centuries. However, during the last 100 years temperatures have been rising, despite the fact that the continued changes in Earth's orbit would have driven further cooling. The largest rises have occurred since 1950, with four of the five warmest decades in the last 2,000 years occurring between 1950 and 2000. The last decade was the warmest in the record.
Physical sciences
Climates
Earth science
14538619
https://en.wikipedia.org/wiki/Vitamin%20B12
Vitamin B12
Vitamin B12, also known as cobalamin, is a water-soluble vitamin involved in metabolism. It is one of eight B vitamins. It is required by animals, which use it as a cofactor in DNA synthesis, and in both fatty acid and amino acid metabolism. It is important in the normal functioning of the nervous system via its role in the synthesis of myelin, and in the circulatory system in the maturation of red blood cells in the bone marrow. Plants do not need cobalamin and carry out the reactions with enzymes that are not dependent on it. Vitamin B12 is the most chemically complex of all vitamins, and for humans the only vitamin that must be sourced from animal-derived foods or supplements. Only some archaea and bacteria can synthesize vitamin B12. Vitamin B12 deficiency is a widespread condition that is particularly prevalent in populations with low or no consumption of animal foods. Such diets can be due to a variety of reasons, such as low socioeconomic status or personal choice (e.g., veganism or vegetarianism). Foods containing vitamin B12 include meat, shellfish, liver, fish, poultry, eggs, and dairy products. Many breakfast cereals are fortified with the vitamin. Supplements and medications are available to treat and prevent vitamin B12 deficiency. They are usually taken by mouth, but for the treatment of deficiency may also be given as an intramuscular injection. Vitamin B12 deficiency has a greater effect on young children, pregnant women and elderly people, and is more common in less-developed countries due to malnutrition. The most common cause of vitamin B12 deficiency in developed countries is impaired absorption due to a loss of gastric intrinsic factor (IF), which must be bound to a food source of B12 for absorption to occur. A second major cause is an age-related decline in stomach acid production (achlorhydria), because acid exposure frees the protein-bound vitamin. For the same reason, people on long-term antacid therapy, using proton-pump inhibitors, H2 blockers or other antacids, are at increased risk. The diets of vegetarians and vegans may not provide sufficient B12 unless a dietary supplement is taken. A deficiency may be characterized by limb neuropathy or a blood disorder called pernicious anemia, a type of anemia in which red blood cells become abnormally large. This can result in fatigue, decreased ability to think, lightheadedness, shortness of breath, frequent infections, poor appetite, numbness in the hands and feet, depression, memory loss, confusion, difficulty walking, blurred vision, irreversible nerve damage, and many others. If left untreated in infants, deficiency may lead to neurological damage and anemia. Folate levels in the individual may affect the course of pathological changes and symptomatology of vitamin B12 deficiency. Vitamin B12 deficiency in pregnant women is strongly associated with an increased risk of spontaneous abortion, congenital malformations such as neural tube defects, and problems with brain development and growth in the unborn child. Vitamin B12 was discovered as a result of pernicious anemia, an autoimmune disorder in which the blood has a lower than normal number of red blood cells, due to a deficiency of vitamin B12. The ability to absorb the vitamin declines with age, especially in people over 60. Definition Vitamin B12 is a coordination complex of cobalt, which occupies the center of a corrin ligand and is further bound to a benzimidazole ligand and adenosyl group. 
Several related species are known, and these behave similarly; in particular, all function as vitamins. This collection of compounds is sometimes referred to as "cobalamins". These chemical compounds have a similar molecular structure, and each shows vitamin activity in a vitamin-deficient biological system; such compounds are referred to as vitamers. The vitamin activity is as a coenzyme, meaning that its presence is required for some enzyme-catalyzed reactions. The vitamers include adenosylcobalamin; cyanocobalamin, in which the adenosyl ligand of vitamin B12 is replaced by cyanide; hydroxocobalamin, in which the adenosyl ligand is replaced by hydroxide; and methylcobalamin, in which the adenosyl ligand is replaced by a methyl group. Cyanocobalamin is a manufactured form of B12. Bacterial fermentation creates AdoB12 and MeB12, which are converted to cyanocobalamin by the addition of potassium cyanide in the presence of sodium nitrite and heat. Once consumed, cyanocobalamin is converted to the biologically active AdoB12 and MeB12. The two bioactive forms of the vitamin are methylcobalamin in the cytosol and adenosylcobalamin in mitochondria. Cyanocobalamin is the most common form used in dietary supplements and food fortification because cyanide stabilizes the molecule against degradation. Methylcobalamin is also offered as a dietary supplement. There is no advantage to the use of adenosylcobalamin or methylcobalamin forms for the treatment of vitamin B12 deficiency. Hydroxocobalamin can be injected intramuscularly to treat vitamin B12 deficiency. It can also be injected intravenously for the purpose of treating cyanide poisoning, as the hydroxyl group is displaced by cyanide, creating a non-toxic cyanocobalamin that is excreted in urine. "Pseudovitamin B12" refers to compounds that are corrinoids with a structure similar to the vitamin but without vitamin activity. Pseudovitamin B12 is the majority corrinoid in spirulina, an algal health food sometimes erroneously claimed as having this vitamin activity. Deficiency Vitamin B12 deficiency can potentially cause severe and irreversible damage, especially to the brain and nervous system. Deficiency at levels only slightly lower than normal can cause a range of symptoms such as fatigue, feeling weak, lightheadedness, dizziness, breathlessness, headaches, mouth ulcers, upset stomach, decreased appetite, difficulty walking (staggering balance problems), muscle weakness, depression, poor memory, poor reflexes, confusion, pale skin, and abnormal sensations, among others, especially in people over age 60. Vitamin B12 deficiency can also cause symptoms of mania and psychosis. Among other problems, weakened immunity, reduced fertility and interruption of blood circulation in women may occur. The main type of vitamin B12 deficiency anemia is pernicious anemia, characterized by a triad of symptoms: Anemia with bone marrow promegaloblastosis (megaloblastic anemia), which is due to the inhibition of DNA synthesis (specifically purines and thymidine). Gastrointestinal symptoms: alteration in bowel motility, such as mild diarrhea or constipation, and loss of bladder or bowel control. These are thought to be due to defective DNA synthesis inhibiting replication in tissue sites with a high turnover of cells. This may also be due to the autoimmune attack on the parietal cells of the stomach in pernicious anemia. There is an association between gastric antral vascular ectasia (which can be referred to as watermelon stomach) and pernicious anemia. 
Neurological symptoms: sensory or motor deficiencies (absent reflexes, diminished vibration or soft touch sensation) and subacute combined degeneration of the spinal cord. Deficiency symptoms in children include developmental delay, regression, irritability, involuntary movements and hypotonia. Vitamin B12 deficiency is most commonly caused by malabsorption, but can also result from low intake, immune gastritis, low presence of binding proteins, or use of certain medications. Vegans—people who choose to not consume any animal-sourced foods—are at risk because plant-sourced foods do not contain the vitamin in sufficient amounts to prevent vitamin deficiency. Vegetarians—people who consume animal byproducts such as dairy products and eggs, but not the flesh of any animal—are also at risk. Vitamin B12 deficiency has been observed in between 40% and 80% of the vegetarian population who do not also take a vitamin B12 supplement or consume vitamin-fortified food. In Hong Kong and India, vitamin B12 deficiency has been found in roughly 80% of the vegan population. As with vegetarians, vegans can avoid this by consuming a dietary supplement or eating B12-fortified food such as cereal, plant-based milks, and nutritional yeast as a regular part of their diet. The elderly are at increased risk because they tend to produce less stomach acid as they age, a condition known as achlorhydria, thereby increasing their probability of B12 deficiency due to reduced absorption. Nitrous oxide overdose or overuse converts the active monovalent form of vitamin B12 to the inactive bivalent form. Pregnancy, lactation, and early childhood The U.S. Recommended Dietary Allowance (RDA) for pregnancy is , for lactation . Determination of these values was based on an RDA of for non-pregnant women, plus what will be transferred to the fetus during pregnancy and what will be delivered in breast milk. However, looking at the same scientific evidence, the European Food Safety Authority (EFSA) sets adequate intake (AI) at for pregnancy and for lactation. Low maternal vitamin B12, defined as serum concentration less than 148 pmol/L, increases the risk of miscarriage, preterm birth and newborn low birth weight. During pregnancy the placenta concentrates B12, so that newborn infants have a higher serum concentration than their mothers. As it is recently absorbed vitamin content that more effectively reaches the placenta, the vitamin consumed by the mother-to-be is more important than that contained in her liver tissue. Women who consume little animal-sourced food, or who are vegetarian or vegan, are at higher risk of becoming vitamin depleted during pregnancy than those who consume more animal products. This depletion can lead to anemia, and also an increased risk that their breastfed infants become vitamin deficient. Vitamin B12 is not one of the supplements recommended by the World Health Organization for healthy women who are pregnant; however, it is often suggested during pregnancy in a multivitamin along with folic acid, especially for pregnant women who follow a vegetarian or vegan diet. Low vitamin concentrations in human milk occur in families with low socioeconomic status or low consumption of animal products. Only a few countries, primarily in Africa, have mandatory food fortification programs for either wheat flour or maize flour; India has a voluntary fortification program. 
What the nursing mother consumes is more important than her liver tissue content, as it is recently absorbed vitamin that more effectively reaches breast milk. Breast milk B12 decreases over months of nursing in both well-nourished and vitamin-deficient mothers. Exclusive or near-exclusive breastfeeding beyond six months is a strong indicator of low serum vitamin status in nursing infants. This is especially true when the vitamin status is poor during the pregnancy and if the early-introduced foods fed to the still-breastfeeding infant are vegan. The risk of deficiency persists if the post-weaning diet is low in animal products. Signs of low vitamin levels in infants and young children can include anemia, poor physical growth, and neurodevelopmental delays. Children diagnosed with low serum B12 can be treated with intramuscular injections, then transitioned to an oral dietary supplement. Gastric bypass surgery Various methods of gastric bypass or gastric restriction surgery are used to treat morbid obesity. Roux-en-Y gastric bypass surgery (RYGB), but not sleeve gastrectomy or gastric banding, increases the risk of vitamin B12 deficiency and requires preventive post-operative treatment with either injected or high-dose oral supplementation. For post-operative oral supplementation, high doses may be needed to prevent vitamin deficiency. Diagnosis According to one review: "At present, no 'gold standard' test exists for the diagnosis of vitamin B12 deficiency and as a consequence the diagnosis requires consideration of both the clinical state of the patient and the results of investigations." The vitamin deficiency is typically suspected when a routine complete blood count shows anemia with an elevated mean corpuscular volume (MCV). In addition, on the peripheral blood smear, macrocytes and hypersegmented polymorphonuclear leukocytes may be seen. Diagnosis is supported by vitamin B12 blood levels below 150–180 pmol/L (200–250 pg/mL) in adults. However, serum values can be maintained while tissue B12 stores are becoming depleted. Therefore, serum B12 values above the cut-off point of deficiency do not necessarily confirm adequate B12 status. For this reason, elevated serum homocysteine over 15 micromol/L and methylmalonic acid (MMA) over 0.271 micromol/L are considered better indicators of B12 deficiency, rather than relying only on the concentration of B12 in blood. However, elevated MMA is not conclusive, as it is seen in people with B12 deficiency, but also in elderly people who have renal insufficiency, and elevated homocysteine is not conclusive, as it is also seen in people with folate deficiency. In addition, elevated methylmalonic acid levels may also be related to metabolic disorders such as methylmalonic acidemia. If nervous system damage is present and blood testing is inconclusive, a lumbar puncture may be carried out to measure cerebrospinal fluid B12 levels. Serum haptocorrin binds 80–90% of circulating B12, rendering it unavailable for cellular delivery by transcobalamin II. This is conjectured to be a circulating storage function. Several serious, even life-threatening diseases cause elevated serum haptocorrin, measured as abnormally high serum vitamin B12, while at the same time potentially manifesting as a symptomatic vitamin deficiency because of insufficient vitamin bound to transcobalamin II, which transfers the vitamin to cells. 
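As a rough illustration of how the laboratory cut-offs quoted above can be combined, the sketch below flags a serum B12 below the 150–180 pmol/L range and elevations of homocysteine (over 15 micromol/L) or methylmalonic acid (over 0.271 micromol/L). The function name and return format are hypothetical; this is a toy screening rule, not a diagnostic tool, and the confounders noted in the text (renal insufficiency, folate deficiency, methylmalonic acidemia) still require clinical interpretation.

```python
def b12_workup_flags(serum_b12_pmol_l, homocysteine_umol_l=None, mma_umol_l=None):
    """Collect flags based on the thresholds quoted in the Diagnosis section.

    Illustrative only: elevated MMA also occurs with renal insufficiency and
    methylmalonic acidemia, and elevated homocysteine also occurs with folate
    deficiency, so none of these flags is conclusive on its own.
    """
    flags = []
    if serum_b12_pmol_l < 150:
        flags.append("serum B12 below the deficiency cut-off")
    elif serum_b12_pmol_l < 180:
        flags.append("serum B12 borderline; tissue stores may still be depleted")
    if homocysteine_umol_l is not None and homocysteine_umol_l > 15:
        flags.append("elevated homocysteine")
    if mma_umol_l is not None and mma_umol_l > 0.271:
        flags.append("elevated methylmalonic acid")
    return flags

# Example: borderline serum level but both functional markers elevated
print(b12_workup_flags(170, homocysteine_umol_l=22, mma_umol_l=0.45))
```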
Medical uses Treatment of deficiency Severe vitamin B12 deficiency is initially corrected with daily intramuscular injections of the vitamin, followed by maintenance via monthly injections of the same amount or daily oral dosing of . The oral daily dose far exceeds the vitamin requirement because the normal transporter protein-mediated absorption is absent, leaving only very inefficient intestinal passive absorption. Injection side effects include skin rash, itching, chills, fever, hot flushes, nausea and dizziness. Oral maintenance treatment avoids this problem and significantly reduces the cost of treatment. Cyanide poisoning For cyanide poisoning, a large amount of hydroxocobalamin may be given intravenously and sometimes in combination with sodium thiosulfate. The mechanism of action is straightforward: the hydroxide ligand of hydroxocobalamin is displaced by the toxic cyanide ion, and the resulting non-toxic cyanocobalamin is excreted in urine. Dietary recommendations Some research shows that most people in the United States and the United Kingdom consume sufficient vitamin B12. However, other research suggests that the proportion of people with low or marginal levels of vitamin B12 is up to 40% in the Western world. Grain-based foods can be fortified by having the vitamin added to them. Vitamin B12 supplements are available as single or multivitamin tablets. Pharmaceutical preparations of vitamin B12 may be given by intramuscular injection. Since there are few non-animal sources of the vitamin, vegans are advised to consume a dietary supplement or fortified foods for B12 intake, or risk serious health consequences. Children in some regions of developing countries are at particular risk due to increased requirements during growth coupled with diets low in animal-sourced foods. The US National Academy of Medicine updated estimated average requirements (EARs) and recommended dietary allowances (RDAs) for vitamin B12 in 1998. The EAR for vitamin B12 for women and men ages 14 and up is 2.0μg/day; the RDA is . RDA is higher than EAR to identify amounts that will cover people with higher-than-average requirements. RDA for pregnancy equals 2.6μg/day. RDA for lactation equals . For infants up to 12 months, the adequate intake (AI) is 0.4–0.5μg/day. (AIs are established when there is insufficient information to determine EARs and RDAs.) For children ages 1–13 years, the RDA increases with age from 0.9 to 1.8μg/day. Because 10 to 30 percent of older people may be unable to effectively absorb vitamin B12 naturally occurring in foods, those older than 50 years should meet their RDA mainly by consuming foods fortified with vitamin B12 or a supplement containing vitamin B12. As for safety, tolerable upper intake levels (known as ULs) are set for vitamins and minerals when evidence is sufficient. In the case of vitamin B12 there is no UL, as there is no human data for adverse effects from high doses. Collectively the EARs, RDAs, AIs, and ULs are referred to as dietary reference intakes (DRIs). The European Food Safety Authority (EFSA) refers to the collective set of information as "dietary reference values", with population reference intake (PRI) instead of RDA, and average requirement instead of EAR. AI and UL are defined by EFSA the same as in the United States. For women and men over age 18, the adequate intake (AI) is set at 4.0μg/day. AI for pregnancy is 4.5 μg/day, and for lactation 5.0μg/day. For children aged 1–14 years, the AIs increase with age from 1.5 to 3.5μg/day. These AIs are higher than the U.S. 
RDAs. The EFSA also reviewed the safety question and reached the same conclusion as in the United States—that there was not sufficient evidence to set a UL for vitamin B12. The Japan National Institute of Health and Nutrition set the RDA for people ages 12 and older at 2.4μg/day. The World Health Organization also uses 2.4μg/day as the adult recommended nutrient intake for this vitamin. For U.S. food and dietary supplement labeling purposes, the amount in a serving is expressed as a "percent of daily value" (%DV). For vitamin B12 labeling purposes, 100% of the daily value was 6.0μg, but on 27 May 2016, it was revised downward to 2.4μg (see Reference Daily Intake). Compliance with the updated labeling regulations was required by 1 January 2020 for manufacturers with US$10 million or more in annual food sales, and by 1 January 2021 for manufacturers with lower volume food sales. Sources Bacteria and archaea Vitamin B12 is produced in nature by certain bacteria and archaea. It is synthesized by some bacteria in the gut microbiota in humans and other animals, but it has long been thought that humans cannot absorb this as it is made in the colon, downstream from the small intestine, where the absorption of most nutrients occurs. Ruminants, such as cows and sheep, are foregut fermenters, meaning that plant food undergoes microbial fermentation in the rumen before entering the true stomach (abomasum), and thus they absorb vitamin B12 produced by bacteria. Other mammalian species (examples: rabbits, pikas, beaver, guinea pigs) consume high-fiber plants which pass through the gastrointestinal tract and undergo bacterial fermentation in the cecum and large intestine. In this hindgut fermentation, the material from the cecum is expelled as "cecotropes" and re-ingested, a practice referred to as cecotrophy. Re-ingestion allows for absorption of nutrients made available by bacterial fermentation, and also of vitamins and other nutrients synthesized by the gut bacteria, including vitamin B12. Non-ruminant, non-hindgut herbivores may have an enlarged forestomach and/or small intestine to provide a place for bacterial fermentation and B-vitamin production, including B12. For gut bacteria to produce vitamin B12, the animal must consume sufficient amounts of cobalt. Soil that is deficient in cobalt may result in B12 deficiency, and B12 injections or cobalt supplementation may be required for livestock. Animal-derived foods Animals store vitamin B12 from their diets in their livers and muscles and some pass the vitamin into their eggs and milk. Meat, liver, eggs, and milk are therefore sources of the vitamin for other animals, including humans. For humans, the bioavailability from eggs is less than 9%, compared to 40% to 60% from fish, fowl, and meat. Insects are a source of B12 for animals (including other insects and humans). Animal-derived food sources with a high concentration of vitamin B12 include liver and other organ meats from lamb, veal, beef, and turkey; also shellfish and crab meat. Plants and algae There is some evidence that bacterial fermentation of plant foods and symbiotic relationships between algae and bacteria can provide vitamin B12. However, the Academy of Nutrition and Dietetics considers plant and algae sources "unreliable", stating that vegans should turn to fortified foods and supplements instead. Natural plant and algae sources of vitamin B12 include fermented plant foods such as tempeh and seaweed-derived foods such as nori and laverbread. 
Methylcobalamin has been identified in Chlorella vulgaris. Since only bacteria and some archaea possess the genes and enzymes necessary to synthesize vitamin B12, plant and algae sources all obtain the vitamin secondarily from symbiosis with various species of bacteria, or in the case of fermented plant foods, from bacterial fermentation. Pseudovitamin B12 is the majority corrinoid in spirulina, an algal health food sometimes erroneously claimed as having vitamin activity. Fortified foods Foods for which vitamin B12-fortified versions are available include breakfast cereals, plant-derived milk substitutes such as soy milk and oat milk, energy bars, and nutritional yeast. The fortification ingredient is cyanocobalamin. Microbial fermentation yields adenosylcobalamin, which is then converted to cyanocobalamin by the addition of potassium cyanide or thiocyanate in the presence of sodium nitrite and heat. As of 2019, nineteen countries require food fortification of wheat flour, maize flour, or rice with vitamin B12. Most of these are in southeast Africa or Central America. Vegan advocacy organizations, among others, recommend that every vegan consume B12 from either fortified foods or supplements. Supplements Vitamin B12 is included in multivitamin pills; in some countries grain-based foods such as bread and pasta are fortified with B12. In the US, non-prescription products can be purchased providing up to 5,000μg each, and it is a common ingredient in energy drinks and energy shots, usually at many times the recommended dietary allowance of B12. The vitamin can also be supplied on prescription and delivered via injection or other means. The vitamer methylcobalamin is available as a dietary supplement. The claimed advantage of such a product is that methylcobalamin, unlike cyanocobalamin, does not contain cyanide. The metabolic fate and biological distribution of methylcobalamin are expected to be similar to that of other sources of vitamin B12 in the diet. The amount of cyanide in cyanocobalamin is generally not a concern, even in the 1,000μg dose, since the amount of cyanide there (20μg in a 1,000μg cyanocobalamin tablet) is less than the daily consumption of cyanide from food, and therefore cyanocobalamin is not considered a health risk. Besides that, the efficacy of methylcobalamin administration in treating vitamin B12 deficiency remains uncertain. While directly providing active cobalamin forms to deficient patients is an attractive approach promoted by the manufacturers of methylcobalamin products, it is not known whether methylcobalamin can reach its intracellular targets in its original, unmodified form to function effectively as a ready coenzyme. It is also not known whether the current level of evidence is sufficient to recommend this relatively expensive strategy as an alternative to cyanocobalamin or hydroxocobalamin. There is currently insufficient evidence on the comparative effectiveness and safety of various B12 vitamers (methylcobalamin, cyanocobalamin, hydroxocobalamin, adenosylcobalamin). Intramuscular or intravenous injection Injection of hydroxocobalamin is often used if digestive absorption is impaired, but this course of action may not be necessary with high-dose oral supplements (such as 0.5–1.0mg or more), because with large quantities of the vitamin taken orally, even the 1% to 5% of free crystalline B12 that is absorbed along the entire intestine by passive diffusion may be sufficient to provide a necessary amount. 
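The roughly 20 μg of cyanide quoted above for a 1,000 μg cyanocobalamin tablet can be checked from the mass fraction of the cyano group in the molecule. A minimal sketch, assuming approximate molar masses of about 1355 g/mol for cyanocobalamin and 26 g/mol for the CN group (standard textbook values, not taken from this article):

```python
# Approximate molar masses in g/mol (assumed for illustration)
M_CYANOCOBALAMIN = 1355.4   # C63H88CoN14O14P
M_CYANIDE = 26.0            # CN group: ~12.0 (C) + ~14.0 (N)

def cyanide_ug_in_dose(cyanocobalamin_ug):
    """Mass of cyanide delivered by a cyanocobalamin dose, from the mass fraction."""
    return cyanocobalamin_ug * M_CYANIDE / M_CYANOCOBALAMIN

print(round(cyanide_ug_in_dose(1000)))  # ~19 ug, consistent with the ~20 ug figure above
```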
A person with cobalamin C disease (a rare autosomal recessive disease which results in combined methylmalonic aciduria and homocystinuria) can be treated with intravenous or intramuscular hydroxocobalamin. Nanotechnologies used in vitamin B12 supplementation Conventional administration does not ensure specific distribution and controlled release of vitamin B12. Moreover, therapeutic protocols involving injection require healthcare personnel and travel by patients to the hospital, thus increasing the cost of the treatment and impairing patients' quality of life. Targeted delivery of vitamin B12 is a major focus of modern formulation research. For example, conveying the vitamin to the bone marrow and nerve cells would help myelin recovery. Currently, several nanocarrier strategies are being developed to improve vitamin B12 delivery, simplify administration, reduce costs, and improve pharmacokinetics. Pseudovitamin-B12 Pseudovitamin-B12 refers to B12-like analogues that are biologically inactive in humans. Most cyanobacteria, including Spirulina, and some algae, such as Porphyra tenera (used to make a dried seaweed food called nori in Japan), have been found to contain mostly pseudovitamin-B12 instead of biologically active B12. These pseudo-vitamin compounds can be found in some types of shellfish, in edible insects, and at times as metabolic breakdown products of cyanocobalamin added to dietary supplements and fortified foods. Pseudovitamin-B12 can show up as biologically active vitamin B12 when a microbiological assay with Lactobacillus delbrueckii subsp. lactis is used, as the bacteria can utilize the pseudovitamin despite it being unavailable to humans. To get a reliable reading of B12 content, more advanced techniques are available. One such technique involves pre-separation by silica gel and then assessment with B12-dependent E. coli bacteria. A related concept is antivitamin B12, compounds (often synthetic B12 analogues) that not only have no vitamin action but also actively interfere with the activity of true vitamin B12. The design of these compounds mainly involves the replacement of the metal ion with rhodium, nickel, or zinc; or the attachment of an inactive ligand such as 4-ethylphenyl. These compounds have the potential to be used for analyzing B12 utilization pathways or even attacking B12-dependent pathogens. Drug interactions H2-receptor antagonists and proton-pump inhibitors Gastric acid is needed to release vitamin B12 from protein for absorption. Reduced secretion of gastric acid and pepsin, from the use of H2 blocker or proton-pump inhibitor (PPI) drugs, can reduce the absorption of protein-bound (dietary) vitamin B12, although not of supplemental vitamin B12. H2-receptor antagonist examples include cimetidine, famotidine, nizatidine, and ranitidine. PPI examples include omeprazole, lansoprazole, rabeprazole, pantoprazole, and esomeprazole. Clinically significant vitamin B12 deficiency and megaloblastic anemia are unlikely, unless these drug therapies are prolonged for two or more years, or if in addition, the person's dietary intake is below recommended levels. Symptomatic vitamin deficiency is more likely if the person is rendered achlorhydric (a complete absence of gastric acid secretion), which occurs more frequently with proton pump inhibitors than H2 blockers. Metformin Reduced serum levels of vitamin B12 occur in up to 30% of people taking the anti-diabetic drug metformin long-term. 
Deficiency does not develop if dietary intake of vitamin B12 is adequate or prophylactic B12 supplementation is given. If the deficiency is detected, metformin can be continued while the deficiency is corrected with B12 supplements. Other drugs Certain medications can decrease the absorption of orally consumed vitamin B12, including colchicine, extended-release potassium products, and antibiotics such as gentamicin, neomycin and tobramycin. The anti-seizure medications phenobarbital, pregabalin, primidone and topiramate are associated with lower-than-normal serum vitamin concentrations. However, serum levels were higher in people prescribed valproate. In addition, certain drugs may interfere with laboratory tests for the vitamin, such as amoxicillin, erythromycin, methotrexate and pyrimethamine. Chemistry Vitamin B12 is the most chemically complex of all the vitamins. The structure of B12 is based on a corrin ring, which is similar to the porphyrin ring found in heme. The central metal ion is cobalt. As isolated as an air-stable solid and available commercially, cobalt in vitamin B12 (cyanocobalamin and other vitamers) is present in its +3 oxidation state. Biochemically, the cobalt center can take part in both two-electron and one-electron reductive processes to access the "reduced" (B12r, +2 oxidation state) and "super-reduced" (B12s, +1 oxidation state) forms. The ability to shuttle between the +1, +2, and +3 oxidation states is responsible for the versatile chemistry of vitamin B12, allowing it to serve as a donor of deoxyadenosyl radical (radical alkyl source) and as a methyl cation equivalent (electrophilic alkyl source). Four of the six coordination sites are provided by the corrin ring and a fifth by a dimethylbenzimidazole group. The sixth coordination site, the reactive center, is variable, being a cyano group (–CN), a hydroxyl group (–OH), a methyl group (–CH3) or a 5′-deoxyadenosyl group. Historically, the covalent carbon–cobalt bond was one of the first examples of a carbon–metal bond to be discovered in biology. The hydrogenases and, by necessity, enzymes associated with cobalt utilization involve metal–carbon bonds. Animals can convert cyanocobalamin and hydroxocobalamin to the bioactive forms adenosylcobalamin and methylcobalamin by enzymatically replacing the cyano or hydroxyl groups. Methods for the analysis of vitamin B12 in food Several methods have been used to determine the vitamin B12 content in foods, including microbiological assays, chemiluminescence assays, polarographic, spectrophotometric, and high-performance liquid chromatography processes. The microbiological assay has been the most commonly used assay technique for foods, utilizing certain vitamin B12-requiring microorganisms, such as Lactobacillus delbrueckii subsp. lactis ATCC7830. However, it is no longer the reference method due to its high measurement uncertainty for vitamin B12. Furthermore, this assay requires overnight incubation and may give false results if any inactive vitamin B12 analogues are present in the foods. Currently, radioisotope dilution assay (RIDA) with labeled vitamin B12 and hog (pig) intrinsic factor has been used to determine vitamin B12 content in food. Previous reports have suggested that the RIDA method can detect higher concentrations of vitamin B12 in foods compared to the microbiological assay method. Biochemistry Coenzyme function Vitamin B12 functions as a coenzyme, meaning that its presence is required in some enzyme-catalyzed reactions. 
Listed here are the three classes of enzymes that sometimes require B12 to function (in animals). Isomerases: rearrangements in which a hydrogen atom is directly transferred between two adjacent atoms with concomitant exchange of the second substituent, X, which may be a carbon atom with substituents, an oxygen atom of an alcohol, or an amine; these use the AdoB12 (adenosylcobalamin) form of the vitamin. Methyltransferases: methyl (–CH3) group transfers between two molecules; these use the MeB12 (methylcobalamin) form of the vitamin. Dehalogenases: some species of anaerobic bacteria synthesize B12-dependent dehalogenases, which have potential commercial applications for degrading chlorinated pollutants. The microorganisms may either be capable of de novo corrinoid biosynthesis or are dependent on exogenous vitamin B12. In humans, two major coenzyme B12-dependent enzyme families, corresponding to the first two reaction types, are known. These are typified by the following two enzymes: Methylmalonyl-CoA mutase Methylmalonyl coenzyme A mutase (MUT) is an isomerase enzyme that uses the AdoB12 form and reaction type 1 to convert L-methylmalonyl-CoA to succinyl-CoA, an important step in the catabolic breakdown of some amino acids into succinyl-CoA, which then enters energy production via the citric acid cycle. This functionality is lost in vitamin B12 deficiency, and can be measured clinically as an increased serum methylmalonic acid (MMA) concentration. The MUT function is necessary for proper myelin synthesis. Based on animal research, it is thought that the increased methylmalonyl-CoA hydrolyzes to form methylmalonate (methylmalonic acid), a neurotoxic dicarboxylic acid, causing neurological deterioration. Methionine synthase Methionine synthase, coded by the MTR gene, is a methyltransferase enzyme which uses MeB12 and reaction type 2 to transfer a methyl group from 5-methyltetrahydrofolate to homocysteine, thereby generating tetrahydrofolate (THF) and methionine. This functionality is lost in vitamin B12 deficiency, resulting in an increased homocysteine level and the trapping of folate as 5-methyl-tetrahydrofolate, from which THF (the active form of folate) cannot be recovered. THF plays an important role in DNA synthesis, so reduced availability of THF results in ineffective production of cells with rapid turnover, in particular red blood cells, and also intestinal wall cells which are responsible for absorption. THF may be regenerated via MTR or may be obtained from fresh folate in the diet. Thus all of the DNA synthetic effects of B12 deficiency, including the megaloblastic anemia of pernicious anemia, resolve if sufficient dietary folate is present. Thus the best-known "function" of B12 (that which is involved with DNA synthesis, cell division, and anemia) is a facultative function that is mediated by B12-conservation of an active form of folate which is needed for efficient DNA production. Other cobalamin-requiring methyltransferase enzymes are also known in bacteria, such as Me-H4-MPT, coenzyme M methyltransferase. Physiology Absorption Vitamin B12 is absorbed by B12-specific transport proteins or via passive diffusion. Transport-mediated absorption and tissue delivery is a complex process involving three transport proteins: haptocorrin (HC), intrinsic factor (IF) and transcobalamin II (TC2), and respective membrane receptor proteins. HC is present in saliva. 
As vitamin-containing food is digested by hydrochloric acid and pepsin secreted into the stomach, HC binds the vitamin and protects it from acidic degradation. Upon leaving the stomach, the hydrochloric acid of the chyme is neutralized in the duodenum by bicarbonate, and pancreatic proteases release the vitamin from HC, making it available to be bound by IF, which is a protein secreted by gastric parietal cells in response to the presence of food in the stomach. IF delivers the vitamin to receptor proteins cubilin and amnionless, which together form the cubam receptor in the distal ileum. The receptor is specific to the IF-B12 complex, and so will not bind to any vitamin content that is not bound to IF. Investigations into the intestinal absorption of B12 confirm that the upper limit of absorption per single oral dose is about 1.5μg, with 50% efficiency. In contrast, the passive diffusion process of B12 absorption — normally a very small portion of total absorption of the vitamin from food consumption — may exceed the haptocorrin- and IF-mediated absorption when oral doses of B12 are very large, with roughly 1% efficiency. Thus, oral B12 supplementation at 500 to 1000μg per day allows pernicious anemia and certain other defects in B12 absorption to be treated with daily oral megadoses of B12 without any correction of the underlying absorption defects. After the IF/B12 complex binds to cubam, the complex dissociates and the free vitamin is transported into the portal circulation. The vitamin is then transferred to TC2, which serves as the circulating plasma transporter. Hereditary defects in the production of TC2 and its receptor may produce functional deficiencies in B12, infantile megaloblastic anemia, and abnormal B12-related biochemistry, even in some cases with normal blood B12 levels. For the vitamin to serve inside cells, the TC2-B12 complex must bind to a cell receptor protein and be endocytosed. TC2 is degraded within a lysosome, and free B12 is released into the cytoplasm, where it is transformed into the bioactive coenzyme by cellular enzymes. Malabsorption Antacid drugs that neutralize stomach acid and drugs that block acid production (such as proton-pump inhibitors) will inhibit the absorption of B12 by preventing its release from food in the stomach. Other causes of B12 malabsorption include intrinsic factor deficiency, pernicious anemia, bariatric surgery, pancreatic insufficiency, obstructive jaundice, tropical sprue and celiac disease, and radiation enteritis of the distal ileum. Age can be a factor. Elderly people are often achlorhydric due to reduced stomach parietal cell function, and thus have an increased risk of B12 deficiency. Storage and excretion How fast B12 levels change depends on the balance between how much B12 is obtained from the diet, how much is secreted and how much is absorbed. The total amount of vitamin B12 stored in the body is about 2–5mg in adults. Around 50% of this is stored in the liver. Approximately 0.1% of this is lost per day by secretions into the gut, as not all these secretions are reabsorbed. Bile is the main form of B12 excretion; most of the B12 secreted in the bile is recycled via enterohepatic circulation. Excess B12 beyond the blood's binding capacity is typically excreted in urine. Owing to the extremely efficient enterohepatic circulation of B12, the liver can store 3 to 5 years' worth of vitamin B12; therefore, nutritional deficiency of this vitamin is rare in adults in the absence of malabsorption disorders. 
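A back-of-the-envelope model of the two absorption routes described in the Absorption subsection (intrinsic-factor-mediated uptake of about 50% that saturates near 1.5 μg per dose, plus passive diffusion of roughly 1% of any oral dose) shows why oral megadoses can cover the requirement even if the carrier pathway fails entirely. The function below is a simplification built from the figures quoted above, with an assumed name and hard caps; it is not a pharmacokinetic model.

```python
def absorbed_b12_ug(oral_dose_ug):
    """Rough estimate of absorbed B12 from a single oral dose.

    Carrier-mediated (IF/cubam) uptake: ~50% efficient but capped near 1.5 ug.
    Passive diffusion: ~1% of the dose, regardless of size.
    """
    carrier_mediated = min(0.5 * oral_dose_ug, 1.5)
    passive = 0.01 * oral_dose_ug
    return carrier_mediated + passive

for dose in (1, 2.4, 500, 1000):
    print(dose, round(absorbed_b12_ug(dose), 2))
# 1 -> 0.51, 2.4 -> 1.22, 500 -> 6.5, 1000 -> 11.5; at megadoses even the ~1% passive
# route alone exceeds the adult RDA, which is why pernicious anemia can be managed orally.
```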
In the absence of intrinsic factor or distal ileum receptors, only months' to a year's supply of vitamin B12 is retained. Cellular reprogramming Vitamin B12, through its involvement in one-carbon metabolism, plays a key role in cellular reprogramming, tissue regeneration, and epigenetic regulation. Cellular reprogramming is the process by which somatic cells can be converted to a pluripotent state. Vitamin B12 levels affect the histone modification H3K36me3, which suppresses illegitimate transcription outside of gene promoters. Mice undergoing in vivo reprogramming were found to become depleted in B12 and to show signs of methionine starvation, while supplementing reprogramming mice and cells with B12 increased reprogramming efficiency, indicating a cell-intrinsic effect. Synthesis Biosynthesis Vitamin B12 is derived from a tetrapyrrolic structural framework created by the enzymes deaminase and cosynthetase, which transform aminolevulinic acid via porphobilinogen and hydroxymethylbilane to uroporphyrinogen III. The latter is the first macrocyclic intermediate common to heme, chlorophyll, siroheme and B12 itself. Later steps, especially the incorporation of the additional methyl groups of its structure, were investigated using 13C methyl-labelled S-adenosyl methionine. It was not until a genetically engineered strain of Pseudomonas denitrificans was used, in which eight of the genes involved in the biosynthesis of the vitamin had been overexpressed, that the complete sequence of methylation and other steps could be determined, thus fully establishing all the intermediates in the pathway. Species from the following genera and the following individual species are known to synthesize B12: Propionibacterium shermanii, Pseudomonas denitrificans, Streptomyces griseus, Acetobacterium, Aerobacter, Agrobacterium, Alcaligenes, Azotobacter, Bacillus, Clostridium, Corynebacterium, Flavobacterium, Lactobacillus, Micromonospora, Mycobacterium, Nocardia, Proteus, Rhizobium, Salmonella, Serratia, Streptococcus and Xanthomonas. Industrial Industrial production of B12 is achieved through fermentation of selected microorganisms. Streptomyces griseus, a bacterium once thought to be a fungus, was the commercial source of vitamin B12 for many years. The species Pseudomonas denitrificans and Propionibacterium freudenreichii subsp. shermanii are more commonly used today. These are grown under special conditions to enhance yield. Rhône-Poulenc improved yield by genetically engineering P. denitrificans. Propionibacterium species, the other commonly used bacteria, produce no exotoxins or endotoxins and are generally recognized as safe (have been granted GRAS status) by the Food and Drug Administration of the United States. The total world production of vitamin B12 in 2008 was 35,000 kg (77,175 lb). Laboratory The complete laboratory synthesis of B12 was achieved by Robert Burns Woodward and Albert Eschenmoser in 1972. The work required the effort of 91 postdoctoral fellows (mostly at Harvard) and 12 PhD students (at ETH Zurich) from 19 nations. The synthesis constitutes a formal total synthesis, since the research groups only prepared the known intermediate cobyric acid, whose chemical conversion to vitamin B12 was previously reported. This synthesis of vitamin B12 is of no practical consequence due to its length, taking 72 chemical steps and giving an overall chemical yield well under 0.01%. Although there have been sporadic synthetic efforts since 1972, the Eschenmoser–Woodward synthesis remains the only completed (formal) total synthesis. 
History Descriptions of deficiency effects Between 1849 and 1887, Thomas Addison described a case of pernicious anemia, William Osler and William Gardner first described a case of neuropathy, Hayem described large red cells in the peripheral blood in this condition, which he called "giant blood corpuscles" (now called macrocytes), Paul Ehrlich identified megaloblasts in the bone marrow, and Ludwig Lichtheim described a case of myelopathy. Identification of liver as an anti-anemia food During the 1920s, George Whipple discovered that ingesting large amounts of raw liver seemed to most rapidly cure the anemia of blood loss in dogs, and hypothesized that eating liver might treat pernicious anemia. Edwin Cohn prepared a liver extract that was 50 to 100 times more potent in treating pernicious anemia than the natural liver products. William Castle demonstrated that gastric juice contained an "intrinsic factor" which, when combined with meat ingestion, resulted in absorption of the vitamin in this condition. George Whipple shared the 1934 Nobel Prize in Physiology or Medicine with William P. Murphy and George Minot for discovery of an effective treatment for pernicious anemia using liver concentrate, later found to contain a large amount of vitamin B12. Identification of the active compound While working at the Bureau of Dairy Industry, U.S. Department of Agriculture, Mary Shaw Shorb was assigned work on the bacterial strain Lactobacillus lactis Dorner (LLD), which was used to make yogurt and other cultured dairy products. The culture medium for LLD required liver extract. Shorb knew that the same liver extract was used to treat pernicious anemia (her father-in-law had died from the disease), and concluded that LLD could be developed as an assay method to identify the active compound. While at the University of Maryland, she received a small grant from Merck, and in collaboration with Karl Folkers from that company, developed the LLD assay. This identified "LLD factor" as essential for the bacteria's growth. Shorb, Folkers and Alexander R. Todd, at the University of Cambridge, used the LLD assay to extract the anti-pernicious anemia factor from liver extracts, purify it, and name it vitamin B12. In 1955, Todd helped elucidate the structure of the vitamin. The complete chemical structure of the molecule was determined by Dorothy Hodgkin based on crystallographic data and published in 1955 and 1956, for which, and for other crystallographic analyses, she was awarded the Nobel Prize in Chemistry in 1964. Hodgkin went on to decipher the structure of insulin. George Whipple, George Minot and William Murphy were awarded the Nobel Prize in 1934 for their work on the vitamin. Three other Nobel laureates, Alexander R. Todd (1957), Dorothy Hodgkin (1964) and Robert Burns Woodward (1965), made important contributions to its study. Commercial production Industrial production of vitamin B12 is achieved through fermentation of selected microorganisms. As noted above, the completely synthetic laboratory synthesis of B12 was achieved by Robert Burns Woodward and Albert Eschenmoser in 1972, though this process has no commercial potential, requiring more than 70 steps and having a yield well below 0.01%. Society and culture In the 1970s, John A. Myers, a physician residing in Baltimore, developed a program of injecting vitamins and minerals intravenously for various medical conditions. The formula included of cyanocobalamin. This came to be known as the Myers' cocktail. 
After he died in 1984, other physicians and naturopaths took up prescribing "intravenous micronutrient therapy" with unsubstantiated health claims for treating fatigue, low energy, stress, anxiety, migraine, depression, and immune compromise, for promoting weight loss, and more. However, other than a report of case studies, no benefits have been confirmed in the scientific literature. Healthcare practitioners at clinics and spas prescribe versions of these intravenous combination products, but also intramuscular injections of just vitamin B12. A Mayo Clinic review concluded that there is no solid evidence that vitamin B12 injections provide an energy boost or aid weight loss. There is evidence that physicians often repeatedly prescribe and administer cyanocobalamin injections to elderly people inappropriately: in one large study, the majority of subjects either had normal serum concentrations or had not been tested before the injections.
Biology and health sciences
Vitamins
Health
1552607
https://en.wikipedia.org/wiki/Linkage%20%28mechanical%29
Linkage (mechanical)
A mechanical linkage is an assembly of systems connected so as to manage forces and movement. The movement of a body, or link, is studied using geometry so the link is considered to be rigid. The connections between links are modeled as providing ideal movement, pure rotation or sliding for example, and are called joints. A linkage modeled as a network of rigid links and ideal joints is called a kinematic chain. Linkages may be constructed from open chains, closed chains, or a combination of open and closed chains. Each link in a chain is connected by a joint to one or more other links. Thus, a kinematic chain can be modeled as a graph in which the links are paths and the joints are vertices, which is called a linkage graph. The movement of an ideal joint is generally associated with a subgroup of the group of Euclidean displacements. The number of parameters in the subgroup is called the degrees of freedom (DOF) of the joint. Mechanical linkages are usually designed to transform a given input force and movement into a desired output force and movement. The ratio of the output force to the input force is known as the mechanical advantage of the linkage, while the ratio of the input speed to the output speed is known as the speed ratio. The speed ratio and mechanical advantage are defined so they yield the same number in an ideal linkage: because an ideal linkage transmits power without loss, the input force times the input speed equals the output force times the output speed, so the two ratios coincide. A kinematic chain in which one link is fixed or stationary is called a mechanism, and a linkage designed to be stationary is called a structure. History Archimedes applied geometry to the study of the lever. Into the 1500s, the works of Archimedes and Hero of Alexandria were the primary sources of machine theory. It was Leonardo da Vinci who brought an inventive energy to machines and mechanisms. In the mid-1700s, the steam engine was of growing importance, and James Watt realized that efficiency could be increased by using different cylinders for expansion and condensation of the steam. This drove his search for a linkage that could transform rotation of a crank into a linear slide, and resulted in his discovery of what is called Watt's linkage. This led to the study of linkages that could generate straight lines, even if only approximately; and inspired the mathematician J. J. Sylvester, who lectured on the Peaucellier linkage, which generates an exact straight line from a rotating crank. The work of Sylvester inspired A. B. Kempe, who showed that linkages for addition and multiplication could be assembled into a system that traced a given algebraic curve. Kempe's design procedure has inspired research at the intersection of geometry and computer science. In the late 1800s, F. Reuleaux, A. B. W. Kennedy, and L. Burmester formalized the analysis and synthesis of linkage systems using descriptive geometry, and P. L. Chebyshev introduced analytical techniques for the study and invention of linkages. In the mid-1900s, F. Freudenstein and G. N. Sandor used the newly developed digital computer to solve the loop equations of a linkage and determine its dimensions for a desired function, initiating the computer-aided design of linkages. Within two decades these computer techniques were integral to the analysis of complex machine systems and the control of robot manipulators. R. E.
Kaufman combined the computer's ability to rapidly compute the roots of polynomial equations with a graphical user interface to unite Freudenstein's techniques with the geometrical methods of Reuleaux and Burmester and form KINSYN, an interactive computer graphics system for linkage design. The modern study of linkages includes the analysis and design of articulated systems that appear in robots, machine tools, and cable-driven and tensegrity systems. These techniques are also being applied to biological systems and even the study of proteins. Mobility The configuration of a system of rigid links connected by ideal joints is defined by a set of configuration parameters, such as the angles around a revolute joint and the slides along prismatic joints measured between adjacent links. The geometric constraints of the linkage allow calculation of all of the configuration parameters in terms of a minimum set, which are the input parameters. The number of input parameters is called the mobility, or degree of freedom, of the linkage system. A system of n rigid bodies moving in space has 6n degrees of freedom measured relative to a fixed frame. If this frame is included in the count of bodies, so that mobility is independent of the choice of the fixed frame, then we have M = 6(N − 1), where N = n + 1 is the number of moving bodies plus the fixed body. Joints that connect bodies in this system remove degrees of freedom and reduce mobility. Specifically, hinges and sliders each impose five constraints and therefore remove five degrees of freedom. It is convenient to define the number of constraints c that a joint imposes in terms of the joint's freedom f, where c = 6 − f. In the case of a hinge or slider, which are one degree of freedom joints, we have f = 1 and therefore c = 6 − 1 = 5. Thus, the mobility of a linkage system formed from n moving links and j joints each with fi, i = 1, ..., j, degrees of freedom can be computed as M = 6(N − 1 − j) + Σ fi, where N includes the fixed link. This is known as Kutzbach–Grübler's equation. There are two important special cases: (i) a simple open chain, and (ii) a simple closed chain. A simple open chain consists of n moving links connected end to end by j joints, with one end connected to a ground link. Thus, in this case N = j + 1 and the mobility of the chain is M = Σ fi. For a simple closed chain, n moving links are connected end-to-end by n+1 joints such that the two ends are connected to the ground link forming a loop. In this case, we have N = j and the mobility of the chain is M = Σ fi − 6. An example of a simple open chain is a serial robot manipulator. These robotic systems are constructed from a series of links connected by six one degree-of-freedom revolute or prismatic joints, so the system has six degrees of freedom. An example of a simple closed chain is the RSSR (revolute-spherical-spherical-revolute) spatial four-bar linkage. The sum of the freedom of these joints is eight, so the mobility of the linkage is two, where one of the degrees of freedom is the rotation of the coupler around the line joining the two S joints. Planar and spherical movement It is common practice to design the linkage system so that the movement of all of the bodies is constrained to lie on parallel planes, to form what is known as a planar linkage. It is also possible to construct the linkage system so that all of the bodies move on concentric spheres, forming a spherical linkage. In both cases, the degrees of freedom of each link are now three rather than six, and the constraints imposed by joints are now c = 3 − f.
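Before turning to the planar case, the spatial mobility count just described can be checked with a short calculation. The following Python sketch is illustrative only and is not part of the original article; the function name and the example joint lists are the editor's assumptions. It applies the Kutzbach–Grübler formula M = 6(N − 1 − j) + Σ fi to the serial-arm and RSSR examples above.

```python
# Minimal sketch of the Kutzbach–Grübler mobility count for spatial linkages.
# num_links counts every link, including the fixed (ground) link, and
# joint_freedoms lists the degrees of freedom f_i permitted by each joint.

def spatial_mobility(num_links, joint_freedoms):
    """Return M = 6*(N - 1 - j) + sum(f_i) for a spatial linkage."""
    j = len(joint_freedoms)
    return 6 * (num_links - 1 - j) + sum(joint_freedoms)

# Simple open chain: a serial arm with six one-DOF joints, so N = j + 1 = 7 and M = 6.
print(spatial_mobility(7, [1, 1, 1, 1, 1, 1]))   # -> 6

# Simple closed chain: the RSSR spatial four-bar, N = j = 4 and M = 8 - 6 = 2,
# one of the two freedoms being the spin of the coupler about the S-S axis.
print(spatial_mobility(4, [1, 3, 3, 1]))         # -> 2
```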
For planar and spherical linkages, the mobility formula becomes M = 3(N − 1 − j) + Σ fi, and we have the special cases: for a planar or spherical simple open chain, M = Σ fi; for a planar or spherical simple closed chain, M = Σ fi − 3. An example of a planar simple closed chain is the planar four-bar linkage, which is a four-bar loop with four one degree-of-freedom joints and therefore has mobility M = 1. Joints The most familiar joints for linkage systems are the revolute, or hinged, joint denoted by an R, and the prismatic, or sliding, joint denoted by a P. Most other joints used for spatial linkages are modeled as combinations of revolute and prismatic joints. For example, the cylindric joint consists of an RP or PR serial chain constructed so that the axes of the revolute and prismatic joints are parallel; the universal joint consists of an RR serial chain constructed such that the axes of the revolute joints intersect at a 90° angle; the spherical joint consists of an RRR serial chain for which each of the hinged joint axes intersect in the same point; the planar joint can be constructed as a planar RRR, RPR, or PPR serial chain, each of which has three degrees of freedom. Analysis and synthesis of linkages The primary mathematical tool for the analysis of a linkage is known as the kinematic equations of the system. This is a sequence of rigid body transformations along a serial chain within the linkage that locates a floating link relative to the ground frame. Each serial chain within the linkage that connects this floating link to ground provides a set of equations that must be satisfied by the configuration parameters of the system. The result is a set of non-linear equations that define the configuration parameters of the system for a set of values for the input parameters. Freudenstein introduced a method to use these equations for the design of a planar four-bar linkage to achieve a specified relation between the input parameters and the configuration of the linkage. Another approach to planar four-bar linkage design was introduced by L. Burmester, and is called Burmester theory. Planar one degree-of-freedom linkages The mobility formula provides a way to determine the number of links and joints in a planar linkage that yields a one degree-of-freedom linkage. If we require the mobility of a planar linkage to be M = 1 and fi = 1, the result is 3(N − 1 − j) + j = 1, or j = (3/2)N − 2. This formula shows that the linkage must have an even number of links, so we have N = 2, j = 1: this is a two-bar linkage known as the lever; N = 4, j = 4: this is the four-bar linkage; N = 6, j = 7: this is a six-bar linkage; it has two links that have three joints, called ternary links, and there are two topologies of this linkage depending on how these links are connected. In the Watt topology, the two ternary links are connected by a joint. In the Stephenson topology, the two ternary links are connected by binary links; N = 8, j = 10: the eight-bar linkage has 16 different topologies; N = 10, j = 13: the 10-bar linkage has 230 different topologies; N = 12, j = 16: the 12-bar has 6856 topologies. See Sunkari and Schmidt for the number of 14- and 16-bar topologies, as well as the number of linkages that have two, three and four degrees-of-freedom. The planar four-bar linkage is probably the simplest and most common linkage. It is a one degree-of-freedom system that transforms an input crank rotation or slider displacement into an output rotation or slide.
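The same bookkeeping applies to planar and spherical linkages with the factor 3 in place of 6. The sketch below is likewise illustrative only (the function name and checks are the editor's, not the article's); it verifies the planar mobility formula against the four-bar loop and the one degree-of-freedom link and joint counts listed above.

```python
# Minimal sketch of planar mobility, M = 3*(N - 1 - j) + sum(f_i), together with
# the one-DOF condition j = (3/2)*N - 2 when every joint is a hinge or slider.

def planar_mobility(num_links, joint_freedoms):
    """Return M = 3*(N - 1 - j) + sum(f_i) for a planar (or spherical) linkage."""
    j = len(joint_freedoms)
    return 3 * (num_links - 1 - j) + sum(joint_freedoms)

# Planar four-bar loop: N = 4 links, j = 4 one-DOF joints, so M = 1.
print(planar_mobility(4, [1, 1, 1, 1]))   # -> 1

# Watt or Stephenson six-bar: N = 6 links, j = 7 one-DOF joints, so M = 1.
print(planar_mobility(6, [1] * 7))        # -> 1

# Check the one degree-of-freedom families listed in the text (N must be even).
for n_links in (2, 4, 6, 8, 10, 12):
    n_joints = 3 * n_links // 2 - 2
    assert planar_mobility(n_links, [1] * n_joints) == 1
    print(n_links, "links require", n_joints, "one-DOF joints")
```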
Examples of four-bar linkages are: the crank-rocker, in which the input crank fully rotates and the output link rocks back and forth; the slider-crank, in which the input crank rotates and the output slide moves back and forth; drag-link mechanisms, in which the input crank fully rotates and drags the output crank in a fully rotational movement. Biological linkages Linkage systems are widely distributed in animals. The most thorough overview of the different types of linkages in animals has been provided by Mees Muller, who also designed a new classification system which is especially well suited for biological systems. A well-known example is the cruciate ligaments of the knee. An important difference between biological and engineering linkages is that revolving bars are rare in biology and that usually only a small part of the theoretically possible range of motion is available, due to additional functional constraints (especially the necessity to deliver blood). Biological linkages are frequently compliant. Often one or more bars are formed by ligaments, and often the linkages are three-dimensional. Coupled linkage systems are known, as well as five-, six-, and even seven-bar linkages. Four-bar linkages are by far the most common, though. Linkages can be found in joints, such as the knee of tetrapods, the hock of sheep, and the cranial mechanism of birds and reptiles. The latter is responsible for the upward motion of the upper bill in many birds. Linkage mechanisms are especially frequent and manifold in the head of bony fishes, such as wrasses, which have evolved many specialized feeding mechanisms. Especially advanced are the linkage mechanisms of jaw protrusion. For suction feeding, a system of linked four-bar linkages is responsible for the coordinated opening of the mouth and 3-D expansion of the buccal cavity. Other linkages are responsible for protrusion of the premaxilla. Linkages are also present as locking mechanisms, such as in the knee of the horse, which enables the animal to sleep standing, without active muscle contraction. In pivot feeding, used by certain bony fishes, a four-bar linkage at first locks the head in a ventrally bent position by the alignment of two bars. The release of the locking mechanism jets the head up and moves the mouth toward the prey within 5–10 ms. Examples Pantograph (four-bar, two DOF) Five-bar linkages often have meshing gears for two of the links, creating a one-DOF linkage. They can provide greater power transmission with more design flexibility than four-bar linkages. Jansen's linkage is an eight-bar leg mechanism that was invented by kinetic sculptor Theo Jansen. The Klann linkage is a six-bar linkage that forms a leg mechanism. Toggle mechanisms are four-bar linkages that are dimensioned so that they can fold and lock. The toggle positions are determined by the collinearity of two of the moving links. The linkage is dimensioned so that the linkage reaches a toggle position just before it folds. The high mechanical advantage allows the input crank to deform the linkage just enough to push it beyond the toggle position. This locks the input in place. Toggle mechanisms are used as clamps. Straight line mechanisms James Watt's parallel motion and Watt's linkage Peaucellier–Lipkin linkage, the first planar linkage to create a perfect straight line output from rotary input; eight-bar, one DOF. A Scott Russell linkage, which converts linear motion to (almost) linear motion in a line perpendicular to the input.
Chebyshev linkage, which provides nearly straight motion of a point with a four-bar linkage. Hoekens linkage, which provides nearly straight motion of a point with a four-bar linkage. Sarrus linkage, which provides motion of one surface in a direction normal to another. Hart's inversor, which provides a perfect straight line motion without sliding guides. Gallery
Technology
Mechanisms
null
1553128
https://en.wikipedia.org/wiki/Freshwater%20crocodile
Freshwater crocodile
The freshwater crocodile (Crocodylus johnstoni), also known commonly as the Australian freshwater crocodile, Johnstone's crocodile, and the freshie, is a species of crocodile native to the northern regions of Australia. Unlike its much larger Australian relative, the saltwater crocodile, the freshwater crocodile is not known as a man-eater, although it bites in self-defence, and brief, nonfatal attacks have occurred, apparently the result of mistaken identity. Taxonomy and etymology When Gerard Krefft named the species in 1873, he intended to commemorate the man who first sent him preserved specimens, Australian native police officer and amateur naturalist Robert Arthur Johnstone (1843–1905). However, Krefft made an error in writing the name, and for many years, the species has been known as C. johnsoni. Recent studies of Krefft's papers have determined the correct spelling of the name, and much of the literature has been updated to the correct usage, but both versions still exist. According to the rules of the International Code of Zoological Nomenclature, the epithet johnstoni (rather than the original johnsoni) is correct. Evolution The genus Crocodylus likely originated in Africa and radiated outwards towards Southeast Asia and the Americas, although an Australia/Asia origin has also been considered. Phylogenetic evidence supports Crocodylus diverging from its closest recent relative, the extinct Voay of Madagascar, around 25 million years ago, near the Oligocene/Miocene boundary. Phylogeny Below is a cladogram based on a 2018 tip dating study by Lee & Yates simultaneously using morphological, molecular (DNA sequencing), and stratigraphic (fossil age) data, as revised by the 2021 Hekkala et al. paleogenomics study using DNA extracted from the extinct Voay. Description The freshwater crocodile is a relatively small crocodilian. Typically, males can grow to a total length (including tail) of if a dominant male (although there are reported specimens of 4 metres in length (see below)), while females reach a maximum size of . Males commonly weigh around , with large specimens up to or more, against the female weight of . In areas such as Lake Argyle and Katherine Gorge, a handful of confirmed 4-metre (13-foot) individuals exist. This species is shy and has a slenderer snout and slightly smaller teeth than the dangerous saltwater crocodile. The body colour is light brown with darker bands on the body and tail. These bands tend to be broken up near the neck. Some individuals possess distinct bands or speckling on the snout. Body scales are relatively large, with wide, close-knit, armoured plates on the back. Rounded, pebbly scales cover the flanks and outsides of the legs. Distribution and habitat Freshwater crocodiles are found in Western Australia, Queensland, and the Northern Territory. Main habitats include freshwater wetlands, billabongs, rivers, and creeks. This species can live in areas where saltwater crocodiles cannot, and are known to inhabit areas above the escarpment in Kakadu National Park and in very arid and rocky conditions (such as Katherine Gorge, where they are common and are relatively safe from saltwater crocodiles during the dry season). However, they are still consistently found in low-level billabongs, living alongside the saltwater crocodiles near the tidal reaches of rivers. In May 2013, a freshwater crocodile was seen in a river near the desert town of Birdsville, hundreds of kilometres south of their normal range. 
A local ranger suggested that years of flooding may have washed the animal south, or it may have been dumped as a juvenile. A population of freshwater crocodiles has been repeatedly sighted for a number of decades in the Ross River that runs through Townsville. The predominant theory is that the heavy flooding common to the area may have washed a number of the animals into the Ross River catchment area. Biology and behaviour They compete poorly with saltwater crocodiles, but are saltwater tolerant. An individual being eaten by an olive python has been filmed; it was reported to have succumbed after a struggle of around five hours. Reproduction Eggs are laid in holes during the Australian dry season (usually in August) and hatch at the beginning of the wet season (November/December). The crocodiles do not defend their nests during incubation. From one to five days prior to hatching, the young begin to call from within the eggs. This induces and synchronizes hatching in siblings and stimulates adults to open the nest. Whether the adult that opens a given nest is the female which laid the eggs is unknown. As young emerge from the nest, the adult picks them up one by one in the tip of its mouth and transports them to the water. Adults may also assist young in breaking through the egg shell by chewing or manipulating the eggs in their mouths. Diet Feeding in the wild, freshwater crocodiles eat a variety of invertebrate and vertebrate prey. These prey may include crustaceans, insects, spiders, fish, frogs, turtles, snakes, birds, and various mammals. Insects appear to be the most common food, followed by fish. Small prey is usually obtained by a 'sit-and-wait' method, whereby the crocodile lies motionless in shallow water and waits for fish and insects to come within close range, before they are snapped up in a sideways action. However, larger prey such as wallabies and water birds may be stalked and ambushed in a manner similar to that of the saltwater crocodile. Digestive system The crocodiles have teeth that are adapted for capturing and holding prey, and food is swallowed without chewing. The digestive tract is short, as their food is relatively simple to swallow and digest. The stomach has two compartments: a muscular gizzard that grinds food, and a digestive chamber where enzymes act on the food. The crocodile's stomach is comparatively more acidic than that of any other vertebrate and contains ridges that lead to the mechanical breakdown of food. Digestion takes place at a faster pace at high temperatures. Circulation system The hearts of other reptiles have three chambers: two atria and one ventricle. The right atrium, which collects the returning deoxygenated blood, and the left atrium, which collects the oxygenated blood returning from the lungs through the pulmonary veins, both empty into a common ventricle. When just one ventricle is available to receive and mix oxygenated and deoxygenated blood and pump it to the body, the mixture of blood the body receives has relatively less oxygen. Crocodiles have a more complex vertebrate circulatory system, with a four-chambered heart, including two ventricles. Like birds and mammals, crocodiles have heart valves that direct blood flow in a single direction through the heart chambers. When under water, the crocodile's heart rate slows down to one to two beats a minute, and muscles receive less blood flow.
When it comes out of the water and takes a breath, its heart rate speeds up in seconds, and the muscles receive oxygen-rich blood. Unlike many marine mammals, crocodiles have only a small amount of myoglobin to store oxygen in their muscles. Conservation status Until recently, the freshwater crocodile was common in northern Australia, especially where saltwater crocodiles are absent (such as more arid inland areas and higher elevations). In recent years, the population has dropped dramatically due to the ingestion of the invasive cane toad. The toad is poisonous to freshwater crocodiles, although not to saltwater crocodiles, and the toad is rampant throughout the northern Australian bush. The crocodiles are also infected by Griphobilharzia amoena, a parasitic trematode, in regions such as Darwin. Relationship with humans Although the freshwater crocodile does not attack humans as potential prey, it can deliver a nasty bite. Brief and rapidly abandoned attacks have occurred, and were likely the result of mistaken identity (mistaking a part of the human as a typical prey item). Other attacks have occurred in self defense when the crocodile was touched or approached too closely. No human fatalities are known to have been caused by this species. A few incidents have been reported where people have been bitten whilst swimming with freshwater crocodiles, and others incurred during scientific study. An attack by a freshwater crocodile on a human was recorded at Barramundi Gorge (also known as Maguk) in Kakadu National Park and resulted in minor injuries; the victim managed to swim and walk away from the attack. He had apparently passed directly over the crocodile in the water. In general, though, swimming with this species is still considered safe, so long as they are not aggravated. There has, however, been a freshwater crocodile attack at Lake Argyle. Gallery
Biology and health sciences
Crocodilia
Animals
1553731
https://en.wikipedia.org/wiki/Slipper%20lobster
Slipper lobster
Slipper lobsters are a family (Scyllaridae) of about 90 species of achelate crustaceans, in the Decapoda clade Reptantia, found in all warm oceans and seas. They are not true lobsters, but are more closely related to spiny lobsters and furry lobsters. Slipper lobsters are instantly recognisable by their enlarged antennae, which project forward from the head as wide plates. All the species of slipper lobsters are edible, and some, such as the Moreton Bay bug and the Balmain bug (Ibacus peronii), are of commercial importance. Description Slipper lobsters have six segments in their heads and eight segments in the thorax, which are collectively covered in a thick carapace. The six segments of the abdomen each bear a pair of pleopods, while the thoracic appendages are either walking legs or maxillipeds. The head segments bear various mouthparts and two pairs of antennae. The first antennae, or antennules, are held on a long flexible stalk, and are used for sensing the environment. The second antennae are the slipper lobsters' most conspicuous feature, as they are expanded and flattened into large plates that extend horizontally forward from the animal's head. There is considerable variation in size among species of slipper lobsters. The Mediterranean species Scyllarus pygmaeus is the smallest, growing to a maximum total length of , and rarely more than . The largest species, Scyllarides haanii, may reach long. Ecology Slipper lobsters are typically bottom dwellers of the continental shelves, found at depths of up to . Slipper lobsters eat a variety of molluscs, including limpets, mussels and oysters, as well as crustaceans, polychaetes and echinoderms. They grow slowly and live to a considerable age. They lack the giant neurones which allow other decapod crustaceans to perform tailflips, and must rely on other means to escape predator attack, such as burial in a substrate and reliance on the heavily armoured exoskeleton. The most significant predators of slipper lobsters are bony fish, with the grey triggerfish being the most significant predator of Scyllarides latus in the Mediterranean Sea. Life cycle After hatching out of their eggs, young slipper lobsters pass through around ten instars as phyllosoma larvae — leaf-like, planktonic zoeae. These ten or so stages last the greater part of a year, after which the larva moults into a "nisto" stage that lasts a few weeks. Almost nothing is known about the transition from this stage to the adults, which continue to grow through a series of moults. Commercial importance Although they are fished for wherever they are found, slipper lobsters have not been the subject of such intense fishery as spiny lobsters or true lobsters. The methods used for catching slipper lobsters vary depending on the species' ecology. Those that prefer soft substrates, such as Thenus and Ibacus, are often caught by trawling, while those that prefer crevices, caves and reefs (including Scyllarides, Arctides and Parribacus species) are usually caught by scuba divers. The global catch of slipper lobsters was reported in 1991 to be . More recently, annual production has been around , the majority of which is production of Thenus orientalis in Asia. Common names A number of common names have been applied to the family Scyllaridae. The most common of these is "slipper lobster", followed by "shovel-nosed lobster" and "locust lobster". "Spanish lobster" is used for members of the genus Arctides, "mitten lobster" for Parribacus, and "fan lobster" for Evibacus and Ibacus.
In Australia, a number of species are called "bugs" (for example, the Balmain bug and Moreton Bay bug), especially those in the genus Ibacus. Other names used in Australia include "bay lobster", "blind lobster", "flapjack", "flat lobster", "flying saucer", "gulf lobster", "mudbug", "sandbug", "shovel-nose bug", "shovelnose lobster", "crayfish", "slipper bug" and "squagga". Rarer terms include "flathead lobster" (for Thenus orientalis) and "bulldozer lobster". In Greece they may be known as Kolochtypes which roughly translates as 'bum hitter'. Twenty-two genera are recognised, the majority of which were erected in 2002 by Lipke Holthuis for species formerly classified under Scyllarus: Genera Slipper lobsters belong to the following genera. Scyllarinae Latreille, 1825 Acantharctus Holthuis, 2002 Antarctus Holthuis, 2002 Antipodarctus Holthuis, 2002 Bathyarctus Holthuis, 2002 Biarctus Holthuis, 2002 Chelarctus Holthuis, 2002 Crenarctus Holthuis, 2002 Eduarctus Holthuis, 2002 Galearctus Holthuis, 2002 Gibbularctus Holthuis, 2002 Petrarctus Holthuis, 2002 Remiarctus Holthuis, 2002 Scammarctus Holthuis, 2002 Scyllarella Rathbun, 1935 (extinct) Scyllarus Fabricius, 1775 Arctidinae Holthuis, 1985 Arctides Holthuis, 1960 Scyllarides Gill, 1898 Ibacinae Holthuis, 1985 Evibacus S. I. Smith, 1869 Ibacus Leach, 1815 Palibacus Förster, 1984 (extinct) Parribacus Dana, 1852 Theninae Holthuis, 1985 Thenus Leach, 1815 Gallery Gallery of various slipper lobsters species: Fossil record The fossil record of slipper lobsters extends back 100–120 million years, which is considerably less than that of slipper lobsters' closest relatives, the spiny lobsters. One significant earlier fossil is Cancrinos claviger, which was described from Upper Jurassic sediments at least , and may represent either an ancestor of modern slipper lobsters, or the sister group to the family Scyllaridae sensu stricto.
Biology and health sciences
Crayfishes and lobsters
Animals
1554235
https://en.wikipedia.org/wiki/Archosauromorpha
Archosauromorpha
Archosauromorpha (Greek for "ruling lizard forms") is a clade of diapsid reptiles containing all reptiles more closely related to archosaurs (such as crocodilians and dinosaurs, including birds) rather than lepidosaurs (such as tuataras, lizards, and snakes). Archosauromorphs first appeared during the late Middle Permian or Late Permian, though they became much more common and diverse during the Triassic period. Although Archosauromorpha was first named in 1946, its membership did not become well-established until the 1980s. Currently Archosauromorpha encompasses four main groups of reptiles: the stocky, herbivorous allokotosaurs and rhynchosaurs, the hugely diverse Archosauriformes, and a polyphyletic grouping of various long-necked reptiles including Protorosaurus, tanystropheids, and Prolacerta. Other groups including pantestudines (turtles and their extinct relatives) and the semiaquatic choristoderes have also been placed in Archosauromorpha by some authors. Archosauromorpha is one of the most diverse groups of reptiles, but its members can be united by several shared skeletal characteristics. These include laminae on the vertebrae, a posterodorsal process of the premaxilla, a lack of notochordal canals, and the loss of the entepicondylar foramen of the humerus. History and definition The term Archosauromorpha was first used by Friedrich von Huene in 1946 to refer to reptiles more closely related to archosaurs than to lepidosaurs. However, there was little consensus on ancient reptile relationships prior to the late 20th century, so the term Archosauromorpha was seldom used until many years after its creation. The advent of cladistics helped to sort out at least some of the relationships within Reptilia, and it became clear that there was a split between the archosaur lineage and the lepidosaur lineage somewhere within the Permian, with certain reptiles clearly closer to archosaurs and others allied with lepidosaurs. Jacques Gauthier reused the term Archosauromorpha for the archosaur lineage at the 1982 annual meeting of the American Society of Zoologists, and later used it within his 1984 Ph.D. thesis. Archosauromorpha, as formulated by Gauthier, included four main groups of reptiles: Rhynchosauria, "Prolacertiformes", "Trilophosauria", and Archosauria (now equivalent to the group Archosauriformes). Cladistic analyses created during the 1980s by Gauthier, Michael J. Benton, and Susan E. Evans implemented Gauthier's classification scheme within large studies of reptile relations. Michel Laurin (1991) defined Archosauromorpha as a node-based clade containing the most recent common ancestor of Prolacerta, Trilophosaurus, Hyperodapedon and all of its descendants. David Dilkes (1998) formulated a more inclusive (and more common) definition of Archosauromorpha, defining it as a branch-based total group clade containing Protorosaurus and all other saurians that are more closely related to Protorosaurus than to Lepidosauria. Gauthier, as an author for Phylonyms (2020), redefined Archosauromorpha as a node-based clade containing Gallus, Alligator, Mesosuchus, Trilophosaurus, Prolacerta, and Protorosaurus. The new name Pan-Archosauria was established for the broader total group of Archosauromorpha, similar to the definition of Dilkes (1998). In 2016, Martin Ezcurra named a subgroup of Archosauromorpha, Crocopoda ("crocodile feet"). 
Crocopoda is defined as all archosauromorphs more closely related to allokotosaurs (specifically Azendohsaurus and Trilophosaurus), rhynchosaurs (specifically Rhynchosaurus), or archosauriforms (specifically Proterosuchus) rather than Protorosaurus or tanystropheids (specifically Tanystropheus). This group roughly corresponds to Laurin's definition of Archosauromorpha. Members Unambiguous members Since the seminal studies of the 1980s, Archosauromorpha has consistently been found to contain four specific reptile groups, although the definitions and validity of the groups themselves have been questioned. The least controversial group is Rhynchosauria ("beak reptiles"), a monophyletic clade of stocky herbivores. Many rhynchosaurs had highly modified skulls, with beak-like premaxillary bones and wide heads. Another group of archosauromorphs has traditionally been represented by Trilophosaurus, an unusual iguana-like herbivorous reptile quite different from the rhynchosaurs. Gauthier used the name "Trilophosauria" for this group, but a 2015 study offered an alternative name. This study found that Azendohsauridae, Triassic reptiles previously mistaken for "prosauropod" dinosaurs, were in fact close relatives of Trilophosaurus and the rest of Trilophosauridae. Trilophosaurids and azendohsaurids are now united under the group Allokotosauria ("strange reptiles"). These two groups did not survive the end of the Triassic period, but the most famous group of archosauromorphs not only survived, but have continued to diversify and dominate beyond the Triassic-Jurassic extinction. These were the Archosauriformes, a diverse assortment of animals including the famous dinosaurs and pterosaurs. Two subclades of Archosauriformes survive to the present day: the semiaquatic crocodilians and the last of the feathered dinosaurs: birds. Gauthier used the name Archosauria to refer to what is now called the Archosauriformes; in modern studies, the name Archosauria has a more restricted definition that only includes the ancestors of crocodilians (i.e. Pseudosuchia) and birds (i.e. Avemetatarsalia). The final unambiguous members of Archosauromorpha represent the most controversial group. These were the first archosauromorphs to appear, and can be characterized by their long necks, sprawling posture, and carnivorous habits. One name for the group, Protorosauria, is named after Protorosaurus, the oldest archosauromorph known from good remains. Another name, Prolacertiformes, is in reference to a different member, Prolacerta. Protorosauria/Prolacertiformes has had a complicated history, and many taxa have entered and left the group as paleontologists discover and re-evaluate reptiles of the Triassic. By far the most famous of these are tanystropheids such as Tanystropheus, known for having necks longer than their entire body. Other notable genera include Boreopricea, Pamelaria, and Macrocnemus, as well as strange gliding reptiles such as Sharovipteryx and Mecistotrachelos. A landmark 1998 study by David Dilkes completely deconstructed the concept of Prolacertiformes as a traditional monophyletic group (i.e. one whose members have a single common ancestor). He argued that Prolacerta was much closer to Archosauriformes than to other "prolacertiforms", invalidating the name. Likewise, Pamelaria is now considered an allokotosaur, Macrocnemus is a tanystropheid, and Protorosaurus may be too basal ("primitive") to form a clade with any of its supposed close relatives. 
As such, this final group of Archosauromorpha is generally considered paraphyletic or polyphyletic, and few modern studies use it. Disputed members Apart from these four groups, Archosauromorpha is sometimes considered to encompass several additional groups of reptiles. One of the most common additions is Choristodera, a group of semiaquatic reptiles with mysterious origins. Although choristodere fossils are only known from the Jurassic through the Miocene, it is theorized that they first appeared during the Permian alongside the earliest archosauromorphs. Choristoderes share numerous otherwise unique traits with archosauromorphs, but they share an equal or greater number of unique traits with lepidosauromorphs as well, so there is still some debate over their inclusion within either group. The chameleon- or tamandua-like drepanosaurs are also semi-regularly placed within Archosauromorpha, although some studies have considered them to be part of a much more basal lineage of reptiles. The aquatic thalattosaurs and gliding kuehneosaurids are also irregularly considered archosauromorphs. Genetic studies have found evidence that modern testudines (turtles and tortoises) are more closely related to crocodilians than to lizards. If this evidence is accurate, then turtles are part of basal Archosauromorpha. Likewise, extinct turtle relatives known as Pantestudines would also fall within Archosauromorpha. Some geneticists have proposed a name, Archelosauria, to refer to the clade formed by relatives of turtles and archosaurs. Since Pantestudines may encompass the entire aquatic reptile order Sauropterygia, this means that Archosauromorpha (as Archelosauria) may be a much wider group than commonly believed. However, anatomical data disagrees with this genetic evidence, instead placing Pantestudines within Lepidosauromorpha, but many modern studies have supported Archelosauria. Several recent studies place sauropterygians within Archosauromorpha, forming a large clade including Ichthyosauromorpha and Thalattosauria, as opposed to the pantestudine relationship. Anatomy Although the most diverse living archosauromorphs are birds, early members of the group were evidently reptilian, superficially similar to modern lizards. When archosauromorphs first appeared in the fossil record in the Permian, they were represented by long-necked, lightly built sprawling reptiles with moderately long, tapering snouts. This body plan, similar to that of modern monitor lizards, is also shared by Triassic archosauromorphs such as tanystropheids and Prolacerta. Other early groups such as trilophosaurids, azendohsaurids, and rhynchosaurs deviate from this body plan by evolving into stockier forms with semi-erect postures and higher metabolisms. The archosauriforms went to further extremes of diversity, encompassing giant sauropod dinosaurs, flying pterosaurs and birds, semiaquatic crocodilians, phytosaurs, and proterochampsians, and apex predators such as erythrosuchids, pseudosuchians, and theropod dinosaurs. Despite the staggering diversity of archosauromorphs, they can still be united as a clade thanks to several subtle skeletal features. Skull Most archosauromorphs more "advanced" than Protorosaurus possessed an adaptation of the premaxilla (tooth-bearing bone at the tip of the snout) known as a posterodorsal or postnarial process.
This was a rear-facing branch of bone that stretched up below and behind the external nares (nostril holes) to contact the nasal bones on the upper edge of the snout. A few advanced archosauriforms reacquired the plesiomorphic ("primitive") state present in other reptiles, that being a short or absent posterodorsal process of the premaxilla, with the rear edge of the nares formed primarily by the maxilla bones instead. As for the nares themselves, they were generally large and oval-shaped, positioned high and close to the midline of the skull. Many early archosauromorphs, including Protorosaurus, tanystropheids, Trilophosaurus, and derived rhynchosaurs, have a blade-like sagittal crest on the parietal bones at the rear part of the skull roof, between a pair of holes known as the supratemporal (or upper temporal) fenestrae. However, in other allokotosaurs, the basal rhynchosaur Mesosuchus, and more crownward archosauromorphs, the sagittal crest is weakly differentiated, although the inner edge of each supratemporal fenestra still possessed a depressed basin of bone known as a supratemporal fossa. Ezcurra (2016) argued that presence of supratemporal fossae and an absence or poor development of the sagittal crest could be used to characterize Crocopoda. He also noted that in almost all early archosauromorphs (and some choristoderes), the parietal bones have an additional lowered area which extends transversely (from left to right) behind the supratemporal fenestrae and sagittal crest (when applicable). The lower temporal fenestra is not fully enclosed in early archosauromorphs (and choristoderes) due to alterations to the structure of the quadratojugal bone at the rear lower corner of the skull. This bone is roughly L-shaped in these taxa, with a tall dorsal process (vertical branch), a short anterior process (forward branch), and a tiny or absent posterior process (rear branch). The bones surrounding the quadratojugal also reconfigure to offset the changes to the quadratojugal. For example, the lower branch of the squamosal bone is shortened to offset the tall dorsal process of quadratojugal which connects to it. On the other hand, the rear branch jugal bone lengthens to fill some of the space left by the shortening of the anterior process of the quadratojugal. In archosauriforms, the jugal even re-encloses the lower temporal fenestra. The stapes are long, thin, and solid, without a perforating hole (stapedial foramen) present in the more robust stapes of other reptiles. Vertebrae In conjunction with their long, S-shaped necks, early archosauromorphs had several adaptations of the cervical (neck) vertebrae, and usually the first few dorsal (back) vertebrae as well. The centrum (main body) of each vertebra is parallelogram-shaped, with a front surface typically positioned higher than the rear surface. The transverse processes (rib facets) of these vertebrae extend outwards to a greater extent than in other early reptiles. In many long-necked archosauromorphs, the rib facets are slanted, connecting to cervical ribs that are often long, thin, and dichocephalous (two-headed). Thin, plate-like ridges known as laminae develop to connect the vertebral components, sloping down from the elongated transverse processes to the centra. Laminae are practically unique to archosauromorphs, being present even in the earliest Permian genera such as Aenigmastropheus and Eorasaurus. 
However, they are also known to occur in the bizarre semiaquatic reptile Helveticosaurus, as well as the biarmosuchian synapsid Hipposaurus. In all adult archosauromorphs with the exception of Aenigmastropheus, the vertebrae lack notochordal canals, holes which perforate the centra. This also sets the archosauromorphs apart from most other Permian and Triassic reptiles. Forelimbs The humerus (upper arm bone) is solid in archosauromorphs, completely lacking a hole near the elbow known as the entepicondylar foramen. This hole, present in most other tetrapods, is also absent in choristoderes yet not fully enclosed in some proterosuchids. In many advanced archosauromorphs, the capitulum and trochlea (elbow joints) of the humerus are poorly developed. Early archosauromorphs retain well-developed elbow joints, but all archosauromorphs apart from Aenigmastropheus have a trochlea (ulna joint) which is shifted towards the outer surface of the humerus, rather than the midpoint of the elbow as in other reptiles. In conjunction with this shift, the olecranon process of the ulna is poorly developed in archosauromorphs apart from Aenigmastropheus and Protorosaurus. Hindlimbs The ankle bones of archosauromorphs tend to acquire complex structures and interactions with each other, and this is particularly the case with the large proximal tarsal bones: the astragalus and calcaneum. The calcaneum, for example, has a tube-like outer extension known as a calcaneal tuber in certain archosauromorphs. This tuber is particularly prominent in the ancient relatives of crocodylians, but it first appeared earlier at the last common ancestor of allokotosaurs, rhynchosaurs, and archosauriforms. The presence of a calcaneal tuber (sometimes known as a lateral tuber of the calcaneum) is a synapomorphy of the group Crocopoda, and is also responsible for its name. Relationships The cladogram shown below follows the most likely result found by an analysis of turtle relationships using both fossil and genetic evidence by M.S. Lee, in 2013. The following cladogram is based on a large analysis of archosauriforms published by M.D. Ezcurra in 2016.
Biology and health sciences
Reptiles: General
Animals
1554274
https://en.wikipedia.org/wiki/Golden%20jackal
Golden jackal
The golden jackal (Canis aureus), also called the common jackal, is a wolf-like canid that is native to Eurasia. The golden jackal's coat varies in color from a pale creamy yellow in summer to a dark tawny beige in winter. It is smaller and has shorter legs, a shorter tail, a more elongated torso, a less-prominent forehead, and a narrower and more pointed muzzle than the Arabian wolf. It is listed as Least Concern on the IUCN Red List due to its widespread distribution and high density in areas with plenty of available food and optimum shelter. Despite its name, the golden jackal is not closely related to the African black-backed jackal or side-striped jackal, which are part of the genus Lupulella. It is instead closer to wolves and coyotes. The ancestor of the golden jackal is believed to be the extinct Arno river dog that lived in southern Europe . It is described as having been a small, jackal-like canine. Genetic studies indicate that the golden jackal expanded from India around 20,000 years ago, towards the end of the Last Glacial Maximum. The oldest golden jackal fossil, found at the Ksar Akil rock shelter near Beirut, Lebanon, is 7,600 years old. The oldest golden jackal fossils in Europe were found in Greece and are 7,000 years old. There are six subspecies of the golden jackal. It is capable of producing fertile hybrids with both the gray wolf and the African wolf. Jackal–dog hybrids called Sulimov dogs are in service at the Sheremetyevo Airport near Moscow, where they are deployed by the Russian airline Aeroflot for scent-detection. The golden jackal is abundant in valleys and beside rivers and their tributaries, canals, lakes, and seashores; however, the species is rare in foothills and low mountains. It is a social species, the basic social unit of which consists of a breeding pair and any young offspring. It is very adaptable, with the ability to exploit food ranging from fruit and insects to small ungulates. It attacks domestic fowl and domestic mammals up to the size of domestic water buffalo calves. Its competitors are the red fox, steppe wolf, jungle cat, Caucasian wildcat, the raccoon in the Caucasus and in Central Asia, and the Asiatic wildcat. It is expanding beyond its native grounds in Southeast Europe into Central and Northeast Europe, into areas where there are few or no wolves. Etymology and naming The word 'jackal' appeared in the English language around 1600. It derives from the Turkish word çakal, which originates from the Persian word šagāl. It is also known as the common jackal. Taxonomy The biological family Canidae is composed of the South American canids, the fox-like canids, and the wolf-like canids. All species within the wolf-like canids share a similar morphology and possess 78 chromosomes, allowing them potentially to interbreed. Within the wolf-like canids is the jackal group, which includes the three jackals: the black-backed jackal (Lupulella mesomelas), the side-striped jackal (Lupulella adusta), and the golden jackal (Canis aureus). These three species are approximately the same size, possess similar dental and skeletal morphology, and are identified from each other primarily by their coat color. They were once thought to have different distributions across Africa with their ranges overlapping in East Africa (Ethiopia, Kenya, and Tanzania). Although the jackal group has traditionally been considered homogeneous, genetic studies show that jackals are not monophyletic (they do not share a common ancestor), and they are only distantly related.
The accuracy of the colloquial name "jackal" to describe all jackals is therefore questionable. Mitochondrial DNA (mDNA) passes along the maternal line and can date back thousands of years. Thus, phylogenetic analysis of mDNA sequences within a species provides a history of maternal lineages that can be represented as a phylogenetic tree. A 2005 genetic study of the canids found that the gray wolf and dog are the most closely related on this tree. The next most closely related are the coyote (Canis latrans), golden jackal, and Ethiopian wolf (Canis simensis), which have all been shown to hybridize with the dog in the wild. The next closest are the dhole (Cuon alpinus) and African wild dog (Lycaon pictus), which are not members of the genus Canis. These are followed by the black-backed and side-striped jackals, members of the genus Lupulella and the most basal members of this clade. Results from two recent studies of mDNA from golden jackals indicate that the specimens from Africa are genetically closer to the gray wolf than are the specimens from Eurasia. In 2015, a major DNA study of golden jackals concluded that the six C. aureus subspecies found in Africa should be reclassified under the new species C. anthus (African wolf), reducing the number of golden jackal subspecies to seven. The phylogenetic tree generated from this study shows the golden jackal diverging from the wolf/coyote lineage 1.9 million years ago and the African wolf diverging 1.3 million years ago. The study found that the golden jackal and the African wolf shared a very similar skull and body morphology and that this had confused taxonomists into regarding these as one species. The study proposes that the very similar skull and body morphology is due to both species having originated from a larger common ancestor. Evolution The Arno river dog (Canis arnensis) is an extinct species of canine that was endemic to Mediterranean Europe during the Early Pleistocene around 1.9 million years ago. It is described as a small jackal-like dog and probably the ancestor of modern jackals. Its anatomy and morphology relate it more to the modern golden jackal than to the two African jackal species, the black-backed jackal and the side-striped jackal. The oldest golden jackal fossil was found at the Ksar Akil rock shelter located northeast of Beirut, Lebanon. The fragment of a single tooth is dated to approximately 7,600 years ago. The oldest golden jackal fossils found in Europe are from Delphi and Kitsos in Greece and are dated 7,000–6,500 years ago. An unusual fossil of a heel bone found in Azykh Cave, in Nagorno-Karabakh, dates to the Middle Pleistocene and is described as probably belonging to the golden jackal, but its classification is not clear. The fossil is described as being slightly smaller and thinner than that of the cave lynx; it is similar to the fox but too large, and similar to the wolf but too small. As the golden jackal falls between these two in size, the fossil possibly belongs to a golden jackal. The absence of clearly identified golden jackal fossils in the Caucasus region and Transcaucasia, areas where the species currently resides, indicates that the species is a relatively recent arrival. A haplotype is a group of genes found in an organism that is inherited from one of its parents. A haplogroup is a group of similar haplotypes that share a single mutation inherited from their common ancestor.
The mDNA haplotypes of the golden jackal form two haplogroups: the oldest haplogroup is formed by golden jackals from India, and the other, younger, haplogroup diverging from this includes golden jackals from all of the other regions. Indian golden jackals exhibit the highest genetic diversity, and those from northern and western India are the most basal, which indicates that India was the center from which golden jackals spread. The extant golden jackal lineage commenced expanding its population in India 37,000 years ago. During the Last Glacial Maximum, 25,000 to 18,000 years ago, the warmer regions of India and Southeast Asia provided a refuge from colder surrounding areas. At the end of the Last Glacial Maximum and the beginning of the warming cycles, the golden jackal lineage expanded out of India and into Eurasia to reach the Middle East and Europe. Outside of India, golden jackals in the Caucasus and Turkey demonstrate the next highest genetic diversity, while those in Europe indicate low genetic diversity, confirming their more recent expansion into Europe. Genetic data indicates that the golden jackals of the Peloponnese Peninsula in Greece and the Dalmatian coast in Croatia may represent two ancient European populations from 6,000 years ago that have survived into modern times. Jackals were absent from most of Europe until the 19th century, when they started to expand slowly. Jackals were recorded in Hungary with the nearest population known at that time being found in Dalmatia, some 300 kilometers away. This was followed by rapid expansion of jackals towards the end of the 20th century. Golden jackals from both Southeast Europe and the Caucasus are expanding into the Baltic. In the Middle East, golden jackals from Israel have a higher genetic diversity than European jackals. This is thought to be due to Israeli jackals having hybridized with dogs, gray wolves, and African golden wolves, creating a hybrid zone in Israel. Admixture with other Canis species Genetic analysis reveals that mating sometimes occurs between female jackals and gray wolves, producing jackal-wolf hybrids that experts cannot visually distinguish from wolves. Hybridization also occurs between female golden jackals and male dogs, which produces fertile offspring, a jackal–dog hybrid. There was 11–13% of ancient gene flow into the golden jackal from the population that was ancestral to wolves and dogs, and an additional 3% from extant wolf populations. Up to 15% of the Israeli wolf genome is derived from admixture with golden jackals in ancient times. In 2018, whole genome sequencing was used to compare members of the genus Canis. The study supports the African wolf being distinct from the golden jackal, and with the Ethiopian wolf being genetically basal to both. There is evidence of gene flow between African golden wolves, golden jackals, and gray wolves. One African wolf from the Egyptian Sinai Peninsula showed high admixture with the Middle Eastern gray wolves and dogs, highlighting the role of the land bridge between the African and Eurasian continents in canid evolution. There was evidence of gene flow between golden jackals and Middle Eastern wolves, less so with European and Asian wolves, and least with North American wolves. The study proposes that the golden jackal ancestry found in North American wolves may have occurred before the divergence of the Eurasian and North American wolves. 
Subspecies and populations The golden jackal was taxonomically subordinated to the genus Canis by Carl Linnaeus in his 1758 publication Systema Naturae. Thirteen subspecies have been described since then. Description The golden jackal is similar to the gray wolf but is distinguished by its smaller size, lighter weight, more elongated torso, less-prominent forehead, shorter legs and tail, and a muzzle that is narrower and more pointed. The legs are long in relation to its body, and the feet are slender with small pads. Males measure in body length and females . Males weigh and females weigh . The shoulder height is for both. In comparison, the smallest wolf is the Arabian wolf (Canis lupus arabs), which weighs on average . The skull is most like that of the dingo, and is closer to that of the coyote (C. latrans) and the gray wolf (C. lupus) than to that of the black-backed jackal (L. mesomelas), the side-striped jackal (L. adusta), and the Ethiopian wolf (C. simensis). Compared with the wolf, the skull of the golden jackal is smaller and less massive, with a lower nasal region and shorter facial region; the projections of the skull are prominent but weaker than those of the wolf; the canine teeth are large and strong but relatively thinner; and its carnassial teeth are weaker. The golden jackal is a less specialized species than the gray wolf, and these skull features relate to the jackal's diet of small birds, rodents, small vertebrates, insects, carrion, fruit, and some vegetable matter. It was once thought that golden jackals could develop a horny growth on the skull referred to as a "jackal's horn", which usually measured approximately in length and was concealed by fur. Although no evidence of its existence has been found, belief in it remains common in South Asia. This feature was once associated with magical powers by the people of Sri Lanka. The jackal's fur is coarse and relatively short, with the base color golden, varying seasonally from a pale creamy yellow to a dark tawny. The fur on the back is composed of a mixture of black, brown, and white hairs, sometimes giving the appearance of a dark saddle like that seen on the black-backed jackal. The underparts are a light pale ginger to cream color. Individual specimens can be distinguished by their unique light markings on the throat and chest. The coats of jackals from high elevations tend to be more buff-colored than those of their lowland counterparts, while those of jackals in rocky, mountainous areas may exhibit a grayer shade. The bushy tail has a tan to black tip. Melanism can cause a dark-colored coat in some golden jackals, a coloring once fairly common in Bengal. Unlike melanistic wolves and coyotes that received their dark pigmentation from interbreeding with domestic dogs, melanism in golden jackals probably stems from an independent mutation that could be an adaptive trait. What is possibly an albino specimen was photographed in southeastern Iran during 2012. The jackal moults twice a year, in spring and in autumn. In Transcaucasia and Tajikistan, the spring moult begins at the end of winter. If the winter has been warm, the spring moult starts in the middle of February; if the winter has been cold, it begins in the middle of March. The spring moult lasts for 60–65 days; if the animal is sick, it loses only half of its winter fur. The spring moult commences with the head and limbs, extends to the flanks, chest, belly and rump, and ends at the tail. Fur on the underparts is absent.
The autumn moult occurs from mid-September with the growth of winter fur; the shedding of the summer fur occurs at the same time. The development of the autumn coat starts with the rump and tail and spreads to the back, flanks, belly, chest, limbs and head, with full winter fur being attained at the end of November. Ecology The golden jackal inhabits Europe and Southwest, Central, South, and Southeast Asia. The golden jackal's omnivorous diet allows it to eat a large range of foods; this diet, together with its tolerance of dry conditions, enables it to live in different habitats. The jackal's long legs and lithe body allow it to trot over great distances in search of food. It is able to go without water for extended periods and has been observed on islands that have no fresh water. Jackals are abundant in valleys and along rivers and their tributaries, canals, lakes, and seashores, but are rare in foothills and low mountains. In Central Asia they avoid waterless deserts and cannot be found in the Karakum Desert nor the Kyzylkum Desert, but can be found at their edges or in oases. On the other hand, in India they can be found living in the Thar Desert. They are found in dense thickets of prickly bushes, reed flood-lands and forests. They have been known to ascend over up the slopes of the Himalayas; they can withstand temperatures as low as and sometimes . They are not adapted to snow, and in snow country they must travel along paths made by larger animals or humans. In India, they will occupy the surrounding foothills above arable areas, entering human settlements at night to feed on garbage, and have established themselves around hill stations at height above mean sea level. They generally avoid mountainous forests, but may enter alpine and sub-alpine areas during dispersal. In Turkey, the Caucasus, and Transcaucasia they have been observed up to above mean sea level, particularly in areas where the climate supports shrublands in high elevations. The Estonian population, which marks the only population of this species adapted to the boreal region, largely inhabits coastal grasslands, alvars, and reed beds, habitats where wolves are seldom present. Diet The golden jackal fills much the same ecological niche in Eurasia as the coyote does in North America; it is both a predator and a scavenger, and an omnivorous and opportunistic forager with a diet that varies according to its habitat and the season. In Keoladeo National Park, India, over 60% of its diet was measured to consist of rodents, birds, and fruit. In the Kanha Tiger Reserve, 80% of its diet consists of rodents, reptiles and fruit. Vegetable matter forms part of the jackal diet, and in India they feed intensively on the fruits of buckthorn, dogbane, Java plum, and the pods of mesquite and the golden rain tree. The jackal scavenges off the kills made by the lion, tiger, leopard, dhole, and gray wolf. In some regions of Bangladesh and India, golden jackals subsist by scavenging on carrion and garbage, and will cache extra food by burying it. The Irish novelist, playwright and poet, Oliver Goldsmith, wrote about the golden jackal: In the Caucasus and Transcaucasia, golden jackals primarily hunt hares and mouse-like rodents, and also pheasants, francolins, ducks, coots, moorhens, and passerines. Vegetable matter eaten by Jackals in these areas includes fruits, such as pears, hawthorn, dogwood, and the cones of common medlars. The jackal is implicated in the destruction of grape, watermelon, muskmelon, and nut crops. 
Near the Vakhsh River, their spring diet consists almost exclusively of plant bulbs and the roots of wild sugar cane, while during winter they feed on wild stony olives. Around the edges of the Karakum Desert, jackals feed on gerbils, lizards, snakes, fish, muskrats, the fruits of wild stony olives, mulberry, dried apricots, watermelons, muskmelons, tomatoes, and grapes. In Dalmatia, the golden jackal's diet consists of mammals, fruits, vegetables, insects, birds and their eggs, grasses and leaves. Golden jackals change their diet to more readily available foods. In Serbia, their diet is primarily livestock carcasses that are increasingly prevalent due to a lack of removal, and this may have led to the expansion of their population. In Hungary, 55% of their diet is composed of common voles and bank voles, and 41% is composed of wild boar carcasses. Information on the diet of the golden jackal in northeastern Italy is scant, but it is known to prey on small roe deer and hares. In Israel, golden jackals are significant predators of snakes; during a poisoning campaign against golden jackals there was an increase in human snakebite reports, but a decrease when the poisoning ceased. Competition The jackal's competitors are the red fox, wolf, jungle cat, wildcat, and raccoon in the Caucasus, and the steppe wildcat in Central Asia. Wolves dominate jackals, and jackals dominate foxes. In 2017 in Iran, an Indian wolf under study killed a golden jackal. In Europe, the range of wolves and jackals is mutually exclusive, with jackals abandoning their territory with the arrival of a wolf pack. One experiment used loudspeakers to broadcast the calls of jackals, and this attracted wolves at a trotting pace to chase away the perceived competitors. Dogs responded to these calls in the same way while barking aggressively. Unleashed dogs have been observed to immediately chase away jackals when the jackals were detected. In Europe, there are an estimated 12,000 wolves. The jackal's recent expansion throughout eastern and western Europe has been attributed to the extermination of the local wolf populations. The present diffusion of the jackal into the northern Adriatic hinterland is in areas where the wolf is absent or very rare. In the past, jackals competed with tigers and leopards, feeding on the remains of their kills and, in one case, on a dead tiger. Leopards and tigers once hunted jackals, but today, the leopard is rare, and the tiger is extinct in the jackal's range. Eurasian lynxes have also been known to hunt jackals. Red foxes and golden jackals share similar diets. Red foxes fear jackals, which are three times bigger than them. Red foxes will avoid close proximity to jackals and fox populations decrease where jackals are abundant. Foxes can be found only at the fringes of jackal territory. There is however one record of a male golden jackal interacting peacefully with multiple red foxes in southwestern Germany. Striped hyenas prey on golden jackals, and three jackal carcasses were found in one hyena den. A 2022 study indicated that the presence of golden jackals in portions of Eastern Europe leads to a decrease in the population of invasive raccoon dogs (Nyctereutes procyonoides), indicating a potentially positive consequence of the jackal colonization of Europe. Diseases and parasites Some golden jackals carry diseases and parasites harmful to human health. These include rabies, and Donovan's Leishmania that is harmless to jackals but may cause leishmaniasis in people. 
Jackals in southwestern Tajikistan can carry up to 16 species of parasitic cestodes (flatworms), roundworms, and acanthocephalans (thorny-headed worms); these are: Sparganum mansoni, Diphyllobothrium mansonoides, Taenia hydatigena, T.pisiformis, T.ovis, Hydatigera taeniaeformis, Dipylidium caninum, Mesocestoides lineatus, Ancylostoma caninum, Uncinaria stenocephala, Dioctophyma renale, Toxocara canis, Toxascaris leonina, Dracunculus medinensis, Filariata and Macracanthorhynchus catulinum. Jackals infected with Dracunculus medinensis can infect bodies of water with their eggs, which cause dracunculiasis in people who drink from them. Jackals may also play a large part in spreading coenurosis in sheep and cattle, and canine distemper in dogs. In Tajikistan, jackals may carry up to 12 tick species (which include Ixodes, Rhipicephalus turanicus, R.leporis, R.rossicus, R. sanguineus, R.pumilio, R.schulzei, Hyalomma anatolicum, H.scupense and H.asiaticum), four flea species (Pulex irritans, Xenopsylla nesokiae, Ctenocephalides canis and C.felis), and one species of louse (Trichodectes canis). In Iran, some golden jackals carry intestinal worms (helminths) and Echinococcus granulosus. In Israel, some jackals are infected with intestinal helminths and Leishmania tropica. In Romania, a jackal was found to be carrying Trichinella britovi. In northeastern Italy, the jackal is a carrier of the tick species Ixodes ricinus and Dermacentor reticulatus, and the smallest human fluke Metagonimus yokogawai that can be caught from ingesting infected raw fish. In Hungary, some jackals carry dog heartworm Dirofilaria immitis, and some have provided the first record in Hungary of Trichinella spiralis and the first record in Europe of Echinococcus multilocularis. A golden jackal from Iran was found to be a host of an intestinal acanthocephalan worm, Pachysentis canicola. Behavior Social behavior Golden jackals exhibit flexible social organization depending on the availability of food. The breeding pair is the basic social unit, and they are sometimes accompanied by their current litter of pups. In India, observed groups consisted of a single jackal (31%), two jackals (35%), three jackals (14%), and more than three jackals (20%). Family groups of up to 4–5 individuals have been recorded. Scent marking through urination and defecation is common around golden jackal den areas and on the trails they most often use. Scent marking is thought to assist in territorial defense. The hunting ranges of several jackals can overlap. Jackals can travel up to during a single night in search of either food or more suitable habitat. Non-breeding members of a pack may stay near a distant food source, such as a carcass, for up to several days before returning to their home range. Home range sizes can vary between , depending on the available food. Social interactions such as greetings, grooming, and group howling are common in jackals. Howling is more frequent between December and April when pair bonds are being formed and breeding occurs, which suggests that howling has a role in the delineation of territory and in its defense. Adult jackals howl standing and the young or subordinate jackals howl sitting. Jackals are easily induced to howl and a single howl may solicit replies from several jackals in the vicinity. Howling begins with 2–3 low-pitched calls that rise to high-pitched calls. The howl consists of a wail repeated 3–4 times on an ascending scale, followed by three short yelps. 
Jackals typically howl at dawn and in the evening, and sometimes at midday. Adults may howl to accompany the ringing of church bells, with their young responding to sirens or the whistles of steam engines and boats. Social canids such as golden jackals, wolves, and coyotes respond to human imitations of their howls. When there is a change in the weather, jackals will produce a long and continuous chorus. Dominant canids defend their territories against intruders either with a howl to warn them off, by approaching and confronting them, or with a howl followed by an approach. Jackals, wolves and coyotes will always approach a source of howling. Golden jackals give a warning call that is very different from their normal howling when they detect the presence of large carnivores such as wolves and tigers. Reproduction Golden jackals are monogamous and will remain with one partner until death. Female jackals have only one breeding cycle each year. Breeding occurs from October to March in Israel and from February to March in India, Turkmenistan, Bulgaria, and Transcaucasia, with the mating period lasting up to 26–28 days. Females undergoing their first estrus are often pursued by several males that may quarrel among themselves. Mating results in a copulatory tie that lasts for several minutes, as it does with all other canids. Gestation lasts 63 days, and the timing of the births coincides with the annual abundance of food. In India, the golden jackal will take over the dens of the Bengal fox and the Indian crested porcupine, and will use abandoned gray wolf dens. Most breeding pairs are spaced well apart and maintain a core territory around their dens. Den excavations commence from late April to May in India, with dens located in scrub areas. Rivulets, gullies, and road and check-dam embankments are prime denning habitats. Drainage pipes and culverts have been used as dens. Dens are long and deep, with 1–3 openings. Young pups can be moved between 2–4 dens. The male helps with digging the den and raising the pups. In the Caucasus and Transcaucasia, the burrow is located either in thick shrub, on the slopes of gullies, or on flat surfaces. In Dagestan and Azerbaijan, litters are sometimes located within the hollows of fallen trees, among tree roots, and under stones on river banks. In Central Asia, the golden jackal does not dig burrows but constructs lairs in dense tugai thickets. Jackals in the tugais and cultivated lands of Tajikistan construct lairs in long grass, shrubs, and reed openings. In Transcaucasia, golden jackal pups are born from late March to late April, and in northeastern Italy during late April; they can be born at any time of year in Nepal. The number of pups born in a single litter varies geographically. Jackals in Transcaucasia give birth to 3–8 pups, Tajikistan 3–7 pups, Uzbekistan 2–8 pups, and Bulgaria 4–7 pups; in India the average is four pups. The pups are born with closed eyes that open after 8–11 days, with the ears becoming erect after 10–13 days. Their teeth erupt at 11 days after birth, and the eruption of adult dentition is completed after five months. Pups are born with soft fur that ranges in color from light gray to dark brown. At the age of one month, their fur is shed and replaced with a new reddish-colored pelt with black speckles. The pups have a fast growth rate and weigh at two days of age, at one month, and at four months. Females possess four pairs of teats, and lactation lasts for up to 8–10 weeks. The pups begin to eat meat at the age of 15–20 days. 
Dog pups show unrestrained fighting with their siblings from 2 weeks of age, with injury avoided only due to their undeveloped jaw muscles. This fighting gives way to play-chasing with the development of running skills at 4–5 weeks. Wolf pups possess more-developed jaw muscles from 2 weeks of age, when they first show signs of play-fighting with their siblings; serious fighting occurs during 4–6 weeks of age. Compared to wolf and dog pups, golden jackal pups develop aggression at the age of 4–6 weeks, when play-fighting frequently escalates into uninhibited biting intended to harm. This aggression ceases by 10–12 weeks when a hierarchy has formed. Once the lactation period concludes, the female drives off the pups. Pups born late remain with their mother until early autumn, at which time they leave either singly or in groups of two to four individuals. Females reach sexual maturity after 10–11 months and males at 21–22 months. Foraging The golden jackal often hunts alone, and sometimes in pairs, but rarely hunts in a pack. When hunting alone, it trots around an area and occasionally stops to sniff and listen. Once prey is located, the jackal conceals itself, quickly approaches its prey and then pounces on it. Single jackals hunt rodents, hares, and birds. They hunt rodents in grass by locating them with their hearing before leaping into the air and pouncing on them. In India, they can dig Indian gerbils out from their burrows, and they can hunt young, old, and infirm ungulates up to 4–5 times their body weight. Jackals search for hiding blackbuck calves throughout the day during the calving period. The peak times for their searches are the early morning and the late evening. When hunting in pairs or packs, jackals run parallel to their prey and overtake it in unison. When hunting aquatic rodents or birds, they will run along both sides of narrow rivers or streams and drive their prey from one jackal to another. Pack-hunting of langurs is recorded in India. Packs of between 5 and 18 jackals scavenging on the carcasses of large ungulates is recorded in India and Israel. Packs of 8–12 jackals consisting of more than one family have been observed in the summer periods in Transcaucasia. In India, the Montagu's harrier and the Pallid harrier roost in their hundreds in grasslands during their winter migration. Jackals stalk close to these roosting harriers and then rush at them, attempting to catch one before the harriers can take off or gain sufficient height to escape. In Southeastern Asia, golden jackals have been known to hunt alongside dhole packs. They have been observed in the Blackbuck National Park, Velavadar, India, following Indian wolves (Canis lupus pallipes) when these are on a hunt, and they will scavenge off wolf kills without any hostility shown from the wolves. In India, lone jackals expelled from their pack have been known to form commensal relationships with tigers. These solitary jackals, known as kol-bahl, will associate themselves with a particular tiger, trailing it at a safe distance to feed on the big cat's kills. A kol-bahl will even alert a tiger to prey with a loud "pheal". Tigers have been known to tolerate these jackals, with one report describing how a jackal confidently walked in and out between three tigers walking together. Golden jackals and wild boar can occupy the same territory. 
Conservation The golden jackal is listed as Least Concern on the IUCN Red List due to its widespread distribution, with it being common throughout its range and with high densities in those areas where food and shelter are abundant. In Europe, golden jackals are not listed under the 1973 Convention on International Trade in Endangered Species of Wild Fauna and Flora or the 1979 Convention on the Conservation of Migratory Species of Wild Animals. Golden jackals in Europe fall under various international legal instruments. These include the 1979 Berne Convention on the Conservation of European Wildlife and Natural Habitats, the 1992 Convention on Biological Diversity, and the 1992 European Union Council Directive 92/43/EEC on the Conservation of Natural Habitats and of Wild Fauna and Flora. The Council Directive provides both guidance and limits on what participating governments can do when responding to the arrival of expanding jackals. These legislative instruments aim to contribute to conserving native wildlife; some governments argue that the golden jackal is not native wildlife but an invading species. The Golden Jackal Informal Study Group in Europe (GOJAGE) is an organization formed by researchers from across Europe to collect and share information on the golden jackal in Europe. The group also has an interest in the golden jackal's relationship with its environment across Eurasia. Membership is open to anyone who has an interest in golden jackals. In Europe, there are an estimated 70,000 golden jackals. They are fully protected in Albania, North Macedonia, Germany, Italy, Poland and Switzerland. They are unprotected in Belarus, Bosnia and Herzegovina, Czech Republic, Estonia, and Greece. They are hunted in Bosnia and Herzegovina, Bulgaria, Croatia, Hungary, Kosovo, Latvia, Lithuania, Montenegro, Romania, Serbia, Slovakia, Slovenia, and Ukraine. Their protection in Austria and Turkey depends on the part of the country. Their status in Moldova is not known. The Syrian jackal was common in Israel and Lebanon in the 1930s–40s, but its populations were reduced during an anti-rabies campaign. Its current status is difficult to ascertain, due to possible hybridisation with pariah dogs and African golden wolves. The jackal population for the Indian subcontinent is estimated to be over 80,000. In India, the golden jackal occurs in all of India's protected areas apart from those in the higher areas of the Himalayas. It is included in CITES Appendix III, and is listed in the Wildlife Protection Act, 1972, under Schedule III, thus receiving legal protection at the lowest level to help control the trade of pelts and tails in India. Relationships with humans In folklore, mythology and literature Golden jackals appear in Indian folklore and in two ancient texts, the Jatakas and the Panchatantra, where they are portrayed as intelligent and wily creatures. The ancient Hindu text, the Mahabharata, tells the story of a learned jackal who sets his friends the tiger, wolf, mongoose, and mouse against each other so he can eat a gazelle without sharing it. The Panchatantra tells the fable of a jackal who cheats a wolf and a lion out of their shares of a camel. In Buddhist tales, the jackal is regarded as being cunning in a way similar to the fox in European tales. One popular Indian saying describes the jackal as "the sharpest among beasts, the crow among birds, and the barber among men". 
For a person embarking on an early morning journey, hearing a jackal howl was considered to be a sign of impending good fortune, as was seeing a jackal crossing a road from the left side. In Hinduism, the jackal is portrayed as the familiar of several deities with the most common being Chamunda, the emaciated, devouring goddess of the cremation grounds. Another deity associated with jackals is Kali, who inhabits the cremation ground and is surrounded by millions of jackals. According to the Tantrasara scripture, when offered animal flesh, Kali appears in the form of a jackal. The goddess Shivaduti is depicted with a jackal's head. The goddess Durga was often linked to the jackal. Jackals are considered to be the vahanas (vehicles) of various protective Hindu and Buddhist deities, particularly in Tibet. According to the flood myth of the Kamar people in Raipur district, India, the god Mahadeo (Shiva) caused a deluge to dispose of a jackal who had offended him. In Rudyard Kipling's Mowgli stories collected in The Jungle Book, the character Tabaqui is a jackal despised by the Seeonee wolf pack due to his mock cordiality, his scavenging habits, and his subservience to Shere Khan the tiger. Attacks on humans In the Marwahi forest division of the Chhattisgarh state in eastern India, the jackal is of conservation value and there were no jackal attacks reported before 1997. During 1998–2005 there were 220 reported cases of jackal attacks on humans, although none were fatal. The majority of these attacks occurred in villages, followed by forests and crop fields. Jackals build their dens in the bouldery hillocks that surround flat areas, and these areas have been encroached by human agriculture and settlements. This encroachment has led to habitat fragmentation and the need for jackals to enter agricultural areas and villages in search for food, resulting in conflict with humans. People in this region habitually chase jackals from their villages, which leads to the jackals becoming aggressive. Female jackals with pups respond with an attack more often than lone males. In comparison, over twice as many attacks were carried out by sloth bears over the same period. There are no known attacks on humans in Europe. Livestock, game, and crop predation The golden jackal can be a harmful pest that attacks domestic animals such as turkeys, lambs, sheep, goats, domestic water buffalo calves, and valuable game species like newborn roe deer, hares, coypu, pheasants, francolins, grey partridges, bustards and waterfowl. It destroys grape, coffee, maize, sugarcane, and eats watermelons, muskmelons, and nuts. In Greece, golden jackals are not as damaging to livestock as wolves and red foxes but they can become a serious nuisance to small stock when in great numbers. In southern Bulgaria, over 1,000 attacks on sheep and lambs were recorded between 1982 and 1987, along with some damage to newborn deer in game farms. The damage by jackals in Bulgaria was minimal when compared to the livestock losses due to wolves. Approximately 1.5–1.9% of calves born in the Golan Heights die due to predation, mainly by jackals. The high predation rate by jackals in both Bulgaria and Israel is attributable to the lack of preventative measures in those countries and the availability of food in illegal garbage dumps, leading to jackal population explosions. Golden jackals are extremely harmful to fur-bearing rodents, such as coypu and muskrats. Coypu can be completely extirpated in shallow water bodies. 
During 1948–1949 in the Amu Darya, muskrats constituted 12.3% of jackal fecal contents, and 71% of muskrat houses were destroyed by jackals. Jackals also harm the fur industry by eating muskrats caught in traps or taking skins left out to dry. Hunting During British rule in India, sportsmen conducted golden jackal hunting on horseback with hounds, with jackal coursing a substitute for the fox hunting of their native England. They were not considered as beautiful as English red foxes, but were esteemed for their endurance in the chase with one pursuit lasting hours. India's weather and terrain added further challenges to jackal hunters that were not present in England: the hounds of India were rarely in as good condition as English hounds, and although the golden jackal has a strong odor, the terrain of northern India was not good in retaining scent. Also, unlike foxes, jackals sometimes feigned death when caught and could be ferociously protective of their captured packmates. Jackals were hunted in three ways: with greyhounds, with foxhounds, and with mixed packs. Hunting jackals with greyhounds offered poor sport because greyhounds were too fast for jackals, and mixed packs were too difficult to control. From 1946 in Iraq, British diplomats and Iraqi riders conducted jackal coursing together. They distinguished three types of jackal: the "city scavenger", which was described as being slow and so smelly that dogs did not like to follow them; the "village jack", which was described as being faster, more alert, and less odorous; and the "open-country jack", which was described as being the fastest, cleanest, and providing the best sport of all three populations. Some indigenous people of India, such as the Kolis and Vaghirs of Gujarat and Rajasthan and the Narikuravas in Tamil Nadu, hunt and eat golden jackals, but the majority of South Asian cultures consider the animal to be unclean. The orthodox dharma texts forbid the eating of jackals because they have five nails. In the area of the former Soviet Union, jackals are not actively hunted and are usually captured only incidentally during the hunting of other animals by means of traps or shooting during drives. In Transcaucasia, jackals are captured with large fishing hooks baited with meat and suspended from the ground with wire. The jackals can only reach the meat by jumping, and are then hooked by the lip or jaw. Fur use In Russia and the other nations of the former Soviet Union, golden jackals are considered furbearers of low quality because of their sparse, coarse, and monotonously colored fur. Jackal hairs have very little fur fiber; therefore, their pelts have a flat appearance. The jackals of Asia and the Middle East produce the coarsest pelts, though this can be remedied during the dressing process. Elburz in northern Iran produces the softest furs. Jackal skins are not graded to a fur standard, and are made into collars, women's coats, and fur coats. During the 1880s, 200 jackals were captured annually in Mervsk and in the Zakatal area of the Transcaucasus, with 300 jackals being captured there during 1896. In this same period, a total of 10,000 jackals were taken within Russia and their furs sent exclusively to the Nizhegorod fair. In the early 1930s there were 20,000–25,000 jackal skins tanned annually in the Soviet Union, but these could not be utilized within the country, and so the majority were exported to the United States. Commencing from 1949, they were all used within the Soviet Union. 
Sulimov dog The golden jackal may have once been tamed in Neolithic Turkey 11,000 years ago, as there is a sculpture of a man cradling a jackal found in Göbekli Tepe. French explorers during the 19th century noted that people in the Levant kept golden jackals in their homes. The Kalmyk people near the Caspian Sea were known to frequently cross their dogs with jackals, and Balkan shepherds once crossed their sheepdogs with jackals. The Russian military established the Red Star kennels in 1924 to improve the performance of working dogs and to conduct military dog research. The Red Star kennels developed "Laikoid" dogs, which were a cross-breed of Spitz-type Russian Laikas with German Shepherds. By the 1980s, the ability of Russia's bomb- and narcotics-detection dogs was assessed as inadequate. Klim Sulimov, a research scientist with the DS Likhachev Scientific Research Institute for Cultural Heritage and Environmental Protection, began cross-breeding dogs with their wild relatives in an attempt to improve their scent-detection abilities. The researchers assumed that during domestication dogs had lost some of their scent-detection ability because they no longer had to detect prey. Sulimov crossed European jackals with Laikas, and also with fox terriers to add trainability and loyalty to the mix. He used the jackal because he believed that it was the wild ancestor of the dog, that it had superior scent-detecting ability, and, because it was smaller with more endurance than the dog, it could be housed outdoors in the Russian climate. Sulimov favored a mix of one-quarter jackal and three-quarters dog. Sulimov's program continues today with the use of the hybrid Sulimov dogs at the Sheremetyevo Airport near Moscow by the Russian airline Aeroflot. The hybrid program has been criticized, with one of Sulimov's colleagues pointing out that in other tests the Laika performed just as well as the jackal hybrids. The assumption that dogs have lost some of their scent-detection ability may be incorrect, in that dogs need to be able to scent-detect and identify the many humans that they come into contact with in their domesticated environment. Another researcher crossed German Shepherds with wolves and claimed that this hybrid had superior scent-detection abilities. The scientific evidence to support the claims of hybrid researchers is minimal, and more research has been called for.
https://en.wikipedia.org/wiki/Polyol
Polyol
In organic chemistry, a polyol is an organic compound containing multiple hydroxyl groups (). The term "polyol" can have slightly different meanings depending on whether it is used in food science or polymer chemistry. Polyols containing two, three and four hydroxyl groups are diols, triols, and tetrols, respectively. Classification Polyols may be classified according to their chemistry. Some of these chemistries are polyether, polyester, polycarbonate and also acrylic polyols. Polyether polyols may be further subdivided and classified as polyethylene oxide or polyethylene glycol (PEG), polypropylene glycol (PPG) and Polytetrahydrofuran or PTMEG. These have 2, 3 and 4 carbons respectively per oxygen atom in the repeat unit. Polycaprolactone polyols are also commercially available. There is also an increasing trend to use biobased (and hence renewable) polyols. Uses Polyether polyols have numerous uses. As an example, polyurethane foam is a big user of polyether polyols. Polyester polyols can be used to produce rigid foam. They are available in both aromatic and aliphatic versions. They are also available in mixed aliphatic-aromatic versions often made from recycled raw materials, typically polyethylene terephthalate (PET). Acrylic polyols are generally used in higher performance applications where stability to ultraviolet light is required and also lower VOC coatings. Other uses include direct to metal coatings. As they are used where good UV resistance is required, such as automotive coatings, the isocyanate component also tends to be UV resistant and hence isocyanate oligomers or prepolymers based on Isophorone diisocyanate are generally used. Caprolactone-based polyols produce polyurethanes with enhanced hydrolysis resistance. Polycarbonate polyols are more expensive than other polyols and are thus used in more demanding applications. They have been used to make an isophorone diisocyanate based prepolymer which is then used in glass coatings. They may be used in reactive hotmelt adhesives. All polyols may be used to produce polyurethane prepolymers. These then find use in coatings, adhesives, sealants and elastomers. Low molecular weight polyols Low molecular weight polyols are widely used in polymer chemistry where they function as crosslinking agents and chain extenders. Alkyd resins for example, use polyols in their synthesis and are used in paints and in molds for casting. They are the dominant resin or "binder" in most commercial "oil-based" coatings. Approximately 200,000 tons of alkyd resins are produced each year. They are based on linking reactive monomers through ester formation. Polyols used in the production of commercial alkyd resins are glycerol, trimethylolpropane, and pentaerythritol. In polyurethane prepolymer production, a low molecular weight polyol-diol such as 1,4-butanediol may be used as a chain extender to further increase molecular weight though it does increase viscosity because more hydrogen bonding is introduced. Sugar alcohols Sugar alcohols, a class of low molecular weight polyols, are commonly obtained by hydrogenation of sugars. They have the formula (CHOH)nH2, where n = 4–6. Sugar alcohols are added to foods because of their lower caloric content than sugars; however, they are also, in general, less sweet, and are often combined with high-intensity sweeteners. They are also added to chewing gum because they are not broken down by bacteria in the mouth or metabolized to acids, and thus do not contribute to tooth decay. 
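As a worked check of this general formula: for n = 6, (CHOH)6H2 works out to C6H14O6, the molecular formula of sorbitol; n = 5 gives C5H12O5 (xylitol) and n = 4 gives C4H10O4 (erythritol). 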
Maltitol, sorbitol, xylitol, erythritol, and isomalt are common sugar alcohols. Polymeric polyols The term polyol is used for various chemistries of the molecular backbone. Polyols may be reacted with diisocyanates or polyisocyanates to produce polyurethanes. MDI finds considerable use in PU foam production. Polyurethanes are used to make flexible foam for mattresses and seating, rigid foam insulation for refrigerators and freezers, elastomeric shoe soles, fibers (e.g. Spandex), coatings, sealants and adhesives. The term polyol is also attributed to other molecules containing hydroxyl groups. For instance, polyvinyl alcohol is (CH2CHOH)n with n hydroxyl groups where n can be in the thousands. Cellulose is a polymer with many hydroxyl groups, but it is not referred to as a polyol. Polyols from recycled or renewable sources There are polyols based on renewable sources such as plant-based materials including castor oil and cottonseed oil. Vegetable oils and biomass are also potential renewable polyol raw materials. Seed oil can even be used to produce polyester polyols. Properties Since the generic term polyol is only derived from chemical nomenclature and just indicates the presence of several hydroxyl groups, no common properties can be assigned to all polyols. However, polyols are usually viscous at room temperature due to hydrogen bonding.
https://en.wikipedia.org/wiki/Web%202.0
Web 2.0
Web 2.0 (also known as participative (or participatory) web and social web) refers to websites that emphasize user-generated content, ease of use, participatory culture, and interoperability (i.e., compatibility with other products, systems, and devices) for end users. The term was coined by Darcy DiNucci in 1999 and later popularized by Tim O'Reilly and Dale Dougherty at the first Web 2.0 Conference in 2004. Although the term mimics the numbering of software versions, it does not denote a formal change in the nature of the World Wide Web, but merely describes a general change that occurred during this period as interactive websites proliferated and came to overshadow the older, more static websites of the original Web. A Web 2.0 website allows users to interact and collaborate through social media dialogue as creators of user-generated content in a virtual community. This contrasts with the first generation of Web 1.0-era websites, where people were limited to passively viewing content. Examples of Web 2.0 features include social networking sites or social media sites (e.g., Facebook), blogs, wikis, folksonomies ("tagging" keywords on websites and links), video sharing sites (e.g., YouTube), image sharing sites (e.g., Flickr), hosted services, Web applications ("apps"), collaborative consumption platforms, and mashup applications. Whether Web 2.0 is substantially different from prior Web technologies has been challenged by World Wide Web inventor Tim Berners-Lee, who describes the term as jargon. His original vision of the Web was "a collaborative medium, a place where we [could] all meet and read and write". On the other hand, the term Semantic Web (sometimes referred to as Web 3.0) was coined by Berners-Lee to refer to a web of content where the meaning can be processed by machines. History Web 1.0 Web 1.0 is a retronym referring to the first stage of the World Wide Web's evolution, from roughly 1989 to 2004. According to Graham Cormode and Balachander Krishnamurthy, "content creators were few in Web 1.0 with the vast majority of users simply acting as consumers of content". Personal web pages were common, consisting mainly of static pages hosted on ISP-run web servers, or on free web hosting services such as Tripod and the now-defunct GeoCities. With Web 2.0, it became common for average web users to have social-networking profiles (on sites such as Myspace and Facebook) and personal blogs (sites like Blogger, Tumblr and LiveJournal) through either a low-cost web hosting service or through a dedicated host. In general, content was generated dynamically, allowing readers to comment directly on pages in a way that was not common previously. Some Web 2.0 capabilities were present in the days of Web 1.0, but were implemented differently. For example, a Web 1.0 site may have had a guestbook page for visitor comments, instead of a comment section at the end of each page (typical of Web 2.0). During Web 1.0, server performance and bandwidth had to be considered—lengthy comment threads on multiple pages could potentially slow down an entire site. Terry Flew, in his third edition of New Media, described a number of differences between Web 1.0 and Web 2.0; he believed these factors formed the trends that resulted in the onset of the Web 2.0 "craze". Characteristics Some common design elements of a Web 1.0 site include: Static pages rather than dynamic HTML. Content provided from the server's filesystem rather than a relational database management system (RDBMS). 
Pages built using Server Side Includes or Common Gateway Interface (CGI) instead of a web application written in a dynamic programming language such as Perl, PHP, Python or Ruby. The use of HTML 3.2-era elements such as frames and tables to position and align elements on a page. These were often used in combination with spacer GIFs. Proprietary HTML extensions, such as the <blink> and <marquee> tags, introduced during the first browser war. Online guestbooks. GIF buttons, graphics (typically 88×31 pixels in size) promoting web browsers, operating systems, text editors and various other products. HTML forms sent via email. Support for server-side scripting was rare on shared servers during this period. To provide a feedback mechanism for web site visitors, mailto forms were used. A user would fill in a form, and upon clicking the form's submit button, their email client would launch and attempt to send an email containing the form's details. The popularity and complications of the mailto protocol led browser developers to incorporate email clients into their browsers. Web 2.0 The term "Web 2.0" was coined by Darcy DiNucci, an information architecture consultant, in her January 1999 article "Fragmented Future". Writing when Palm Inc. introduced its first web-capable personal digital assistant (supporting Web access with WAP), DiNucci saw the Web "fragmenting" into a future that extended beyond the browser/PC combination it was identified with. She focused on how the basic information structure and hyper-linking mechanism introduced by HTTP would be used by a variety of devices and platforms. As such, her "2.0" designation refers to a next version of the Web and does not directly relate to the term's current use. The term Web 2.0 did not resurface until 2002. Companies such as Amazon, Facebook, Twitter, and Google made it easy to connect and engage in online transactions. Web 2.0 introduced new features, such as multimedia content and interactive web applications, which mainly consisted of two-dimensional screens. Kinsley and Eric focus on the concepts currently associated with the term where, as Scott Dietzen puts it, "the Web becomes a universal, standards-based integration platform". In 2004, the term began to popularize when O'Reilly Media and MediaLive hosted the first Web 2.0 conference. In their opening remarks, John Battelle and Tim O'Reilly outlined their definition of the "Web as Platform", where software applications are built upon the Web as opposed to upon the desktop. The unique aspect of this migration, they argued, is that "customers are building your business for you". They argued that the activities of users generating content (in the form of ideas, text, videos, or pictures) could be "harnessed" to create value. O'Reilly and Battelle contrasted Web 2.0 with what they called "Web 1.0". They associated this term with the business models of Netscape and the Encyclopædia Britannica Online. In short, Netscape focused on creating software, releasing updates and bug fixes, and distributing it to the end users. O'Reilly contrasted this with Google, a company that did not, at the time, focus on producing end-user software, but instead on providing a service based on data, such as the links that Web page authors make between sites. Google exploits this user-generated content to offer Web searches based on reputation through its "PageRank" algorithm. 
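The reputation-based ranking idea can be illustrated with a minimal sketch of the power-iteration scheme commonly used to compute PageRank-style scores. The snippet below is only an illustration of the general algorithm, not Google's implementation; the toy link graph, the damping factor of 0.85, and the function and variable names are assumptions made for the example.

```typescript
// Minimal PageRank-style power iteration over a toy link graph.
// The graph, damping factor, and iteration count are illustrative assumptions.
type LinkGraph = Record<string, string[]>; // page -> pages it links to

function pageRank(graph: LinkGraph, damping = 0.85, iterations = 50): Record<string, number> {
  const pages = Object.keys(graph); // assumes every link target is also a key
  const n = pages.length;
  let rank: Record<string, number> = {};
  for (const p of pages) rank[p] = 1 / n; // start from a uniform distribution

  for (let i = 0; i < iterations; i++) {
    const next: Record<string, number> = {};
    for (const p of pages) next[p] = (1 - damping) / n; // random-surfer "teleport" term

    for (const page of pages) {
      const outLinks = graph[page];
      if (outLinks.length === 0) {
        // A dangling page spreads its score evenly over all pages.
        for (const p of pages) next[p] += (damping * rank[page]) / n;
      } else {
        // Each outgoing link passes on an equal share of the page's score.
        for (const target of outLinks) next[target] += (damping * rank[page]) / outLinks.length;
      }
    }
    rank = next;
  }
  return rank;
}

// Example: "home", which is linked to by both other pages, ends up with the highest score.
const scores = pageRank({ home: ["about"], about: ["home", "blog"], blog: ["home"] });
console.log(scores);
```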
Unlike software, which undergoes scheduled releases, such services are constantly updated, a process called "the perpetual beta". A similar difference can be seen between the Encyclopædia Britannica Online and Wikipedia – while the Britannica relies upon experts to write articles and release them periodically in publications, Wikipedia relies on trust in (sometimes anonymous) community members to constantly write and edit content. Wikipedia editors are not required to have educational credentials, such as degrees, in the subjects in which they are editing. Wikipedia is not based on subject-matter expertise, but rather on an adaptation of the open source software adage "given enough eyeballs, all bugs are shallow". This maxim states that if enough users are able to look at a software product's code (or a website), then these users will be able to fix any "bugs" or other problems. The Wikipedia volunteer editor community produces, edits, and updates articles constantly. Web 2.0 conferences have been held every year since 2004, attracting entrepreneurs, representatives from large companies, tech experts and technology reporters. The popularity of Web 2.0 was acknowledged when TIME magazine named "You" its 2006 Person of the Year. That is, TIME selected the masses of users who were participating in content creation on social networks, blogs, wikis, and media sharing sites. In the cover story, Lev Grossman explained the choice. Characteristics Instead of merely reading a Web 2.0 site, a user is invited to contribute to the site's content by commenting on published articles, or creating a user account or profile on the site, which may enable increased participation. By increasing emphasis on these already-extant capabilities, they encourage users to rely more on their browser for user interface, application software ("apps") and file storage facilities. This has been called "network as platform" computing. Major features of Web 2.0 include social networking websites, self-publishing platforms (e.g., WordPress' easy-to-use blog and website creation tools), "tagging" (which enables users to label websites, videos or photos in some fashion), "like" buttons (which enable a user to indicate that they are pleased by online content), and social bookmarking. Users can provide the data and exercise some control over what they share on a Web 2.0 site. These sites may have an "architecture of participation" that encourages users to add value to the application as they use it. Users can add value in many ways, such as uploading their own content on blogs, consumer-evaluation platforms (e.g. Amazon and eBay), news websites (e.g. responding in the comment section), social networking services, media-sharing websites (e.g. YouTube and Instagram) and collaborative-writing projects. Some scholars argue that cloud computing is an example of Web 2.0 because it is simply an implication of computing on the Internet. Web 2.0 offers almost all users the same freedom to contribute. This can produce effects that members of a given community variously perceive as productive or not, which can lead to emotional distress and disagreement. The impossibility of excluding group members who do not contribute to the provision of goods (i.e., to the creation of a user-generated website) from sharing the benefits (of using the website) gives rise to the possibility that serious members will prefer to withhold their contribution of effort and "free ride" on the contributions of others. 
This requires what is sometimes called radical trust by the management of the Web site. Encyclopaedia Britannica calls Wikipedia "the epitome of the so-called Web 2.0" and describes what many view as the ideal of a Web 2.0 platform as "an egalitarian environment where the web of social software enmeshes users in both their real and virtual-reality workplaces." According to Best, the characteristics of Web 2.0 are rich user experience, user participation, dynamic content, metadata, Web standards, and scalability. Further characteristics, such as openness, freedom, and collective intelligence by way of user participation, can also be viewed as essential attributes of Web 2.0. Some websites require users to contribute user-generated content to have access to the website, to discourage "free riding". The key features of Web 2.0 include: Folksonomy – free classification of information; allows users to collectively classify and find information (e.g. "tagging" of websites, images, videos or links) Rich user experience – dynamic content that is responsive to user input (e.g., a user can "click" on an image to enlarge it or find out more information) User participation – information flows two ways between the site owner and site users by means of evaluation, review, and online commenting. Site users also typically create user-generated content for others to see (e.g., Wikipedia, an online encyclopedia that anyone can write articles for or edit) Software as a service (SaaS) – Web 2.0 sites developed APIs to allow automated usage, such as by a Web "app" (software application) or a mashup Mass participation – near-universal web access leads to differentiation of concerns, from the traditional Internet user base (who tended to be hackers and computer hobbyists) to a wider variety of users, drastically changing the audience of internet users. Technologies The client-side (Web browser) technologies used in Web 2.0 development include Ajax and JavaScript frameworks. Ajax programming uses JavaScript and the Document Object Model (DOM) to update selected regions of the page area without undergoing a full page reload. To allow users to continue interacting with the page, communications such as data requests going to the server are separated from data coming back to the page (asynchronously). Otherwise, the user would have to routinely wait for the data to come back before they could do anything else on that page, just as a user has to wait for a page to complete the reload. This also increases the overall performance of the site, as requests can complete more quickly, independently of the blocking and queueing required to send data back to the client. The data fetched by an Ajax request is typically formatted in XML or JSON (JavaScript Object Notation) format, two widely used structured data formats. Since both of these formats are natively understood by JavaScript, a programmer can easily use them to transmit structured data in their Web application. When this data is received via Ajax, the JavaScript program then uses the Document Object Model to dynamically update the Web page based on the new data, allowing for a rapid and interactive user experience. In short, using these techniques, web designers can make their pages function like desktop applications. For example, Google Docs uses this technique to create a Web-based word processor. 
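A minimal sketch of the Ajax pattern just described, using the browser's standard fetch API and DOM methods, is shown below. The endpoint URL, the JSON shape, and the element ids are hypothetical placeholders invented for the example, not part of any real service.

```typescript
// Minimal Ajax-style update: fetch JSON asynchronously and refresh one region
// of the page without a full reload. Endpoint and element ids are hypothetical.
interface CommentData {
  author: string;
  text: string;
}

async function refreshComments(): Promise<void> {
  // The request is asynchronous, so the rest of the page stays interactive.
  const response = await fetch("/api/comments?limit=10"); // assumed endpoint
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  const comments: CommentData[] = await response.json();

  // Use the DOM to update only the selected region of the page.
  const container = document.getElementById("comments"); // assumed element id
  if (!container) return;
  container.replaceChildren(
    ...comments.map(c => {
      const item = document.createElement("li");
      item.textContent = `${c.author}: ${c.text}`;
      return item;
    })
  );
}

// Re-fetch when the user clicks an (assumed) refresh button.
document.getElementById("refresh-button")?.addEventListener("click", () => {
  refreshComments().catch(console.error);
});
```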
As a widely available plug-in independent of W3C standards (the World Wide Web Consortium is the governing body of Web standards and protocols), Adobe Flash was capable of doing many things that were not possible pre-HTML5. Of Flash's many capabilities, the most commonly used was its ability to integrate streaming multimedia into HTML pages. With the introduction of HTML5 in 2010 and the growing concerns with Flash's security, the role of Flash became obsolete, with browser support ending on December 31, 2020. In addition to Flash and Ajax, JavaScript/Ajax frameworks have recently become a very popular means of creating Web 2.0 sites. At their core, these frameworks use the same technology as JavaScript, Ajax, and the DOM. However, frameworks smooth over inconsistencies between Web browsers and extend the functionality available to developers. Many of them also come with customizable, prefabricated 'widgets' that accomplish such common tasks as picking a date from a calendar, displaying a data chart, or making a tabbed panel. On the server-side, Web 2.0 uses many of the same technologies as Web 1.0. Languages such as Perl, PHP, Python, Ruby, as well as Enterprise Java (J2EE) and Microsoft.NET Framework, are used by developers to output data dynamically using information from files and databases. This allows websites and web services to share machine readable formats such as XML (Atom, RSS, etc.) and JSON. When data is available in one of these formats, another website can use it to integrate a portion of that site's functionality. Concepts Web 2.0 can be described in three parts: Rich web application - defines the experience brought from desktop to browser, whether it is "rich" from a graphical point of view or a usability/interactivity or features point of view. Web-oriented architecture (WOA) - defines how Web 2.0 applications expose their functionality so that other applications can leverage and integrate the functionality providing a set of much richer applications. Examples are feeds, RSS feeds, web services, mashups. Social Web - defines how Web 2.0 websites tend to interact much more with the end user and make the end user an integral part of the website, either by adding his or her profile, adding comments on content, uploading new content, or adding user-generated content (e.g., personal digital photos). As such, Web 2.0 draws together the capabilities of client- and server-side software, content syndication and the use of network protocols. Standards-oriented Web browsers may use plug-ins and software extensions to handle the content and user interactions. Web 2.0 sites provide users with information storage, creation, and dissemination capabilities that were not possible in the environment known as "Web 1.0". Web 2.0 sites include the following features and techniques, referred to as the acronym SLATES by Andrew McAfee: Search Finding information through keyword search. Links to other websites Connects information sources together using the model of the Web. Authoring The ability to create and update content leads to the collaborative work of many authors. Wiki users may extend, undo, redo and edit each other's work. Comment systems allow readers to contribute their viewpoints. Tags Categorization of content by users adding "tags" — short, usually one-word or two-word descriptions — to facilitate searching. For example, a user can tag a metal song as "death metal". 
Collections of tags created by many users within a single system may be referred to as "folksonomies" (i.e., folk taxonomies). Extensions Software that makes the Web an application platform as well as a document server. Examples include Adobe Reader, Adobe Flash, Microsoft Silverlight, ActiveX, Oracle Java, QuickTime, WPS Office and Windows Media. Signals The use of syndication technology, such as RSS feeds to notify users of content changes. While SLATES forms the basic framework of Enterprise 2.0, it does not contradict all of the higher level Web 2.0 design patterns and business models. It includes discussions of self-service IT, the long tail of enterprise IT demand, and many other consequences of the Web 2.0 era in enterprise uses. Social Web A third important part of Web 2.0 is the social web. The social Web consists of a number of online tools and platforms where people share their perspectives, opinions, thoughts and experiences. Web 2.0 applications tend to interact much more with the end user. As such, the end user is not only a user of the application but also a participant by: Podcasting Blogging Tagging Curating with RSS Social bookmarking Social networking Social media Wikis Web content voting: Review site or Rating site The popularity of the term Web 2.0, along with the increasing use of blogs, wikis, and social networking technologies, has led many in academia and business to append a flurry of 2.0's to existing concepts and fields of study, including Library 2.0, Social Work 2.0, Enterprise 2.0, PR 2.0, Classroom 2.0, Publishing 2.0, Medicine 2.0, Telco 2.0, Travel 2.0, Government 2.0, and even Porn 2.0. Many of these 2.0s refer to Web 2.0 technologies as the source of the new version in their respective disciplines and areas. For example, in the Talis white paper "Library 2.0: The Challenge of Disruptive Innovation", Paul Miller argues "Blogs, wikis and RSS are often held up as exemplary manifestations of Web 2.0. A reader of a blog or a wiki is provided with tools to add a comment or even, in the case of the wiki, to edit the content. This is what we call the Read/Write web. Talis believes that Library 2.0 means harnessing this type of participation so that libraries can benefit from increasingly rich collaborative cataloging efforts, such as including contributions from partner libraries as well as adding rich enhancements, such as book jackets or movie files, to records from publishers and others." Here, Miller links Web 2.0 technologies and the culture of participation that they engender to the field of library science, supporting his claim that there is now a "Library 2.0". Many of the other proponents of new 2.0s mentioned here use similar methods. The meaning of Web 2.0 is role dependent. For example, some use Web 2.0 to establish and maintain relationships through social networks, while some marketing managers might use this promising technology to "end-run traditionally unresponsive I.T. department[s]." There is a debate over the use of Web 2.0 technologies in mainstream education. Issues under consideration include the understanding of students' different learning modes; the conflicts between ideas entrenched in informal online communities and educational establishments' views on the production and authentication of 'formal' knowledge; and questions about privacy, plagiarism, shared authorship and the ownership of knowledge and information produced and/or published on line. 
Marketing Web 2.0 is used by companies, non-profit organisations and governments for interactive marketing. A growing number of marketers are using Web 2.0 tools to collaborate with consumers on product development, customer service enhancement, product or service improvement and promotion. Companies can use Web 2.0 tools to improve collaboration with both their business partners and consumers. Among other things, company employees have created wikis – websites that allow users to add, delete, and edit content – to list answers to frequently asked questions about each product, and consumers have added significant contributions. Another marketing Web 2.0 lure is to make sure consumers can use the online community to network among themselves on topics of their own choosing. Mainstream media usage of Web 2.0 is increasing. Saturating media hubs – like The New York Times, PC Magazine and Business Week – with links to popular new Web sites and services is critical to achieving the threshold for mass adoption of those services. User web content can be used to gauge consumer satisfaction. In a recent article for Bank Technology News, Shane Kite describes how Citigroup's Global Transaction Services unit monitors social media outlets to address customer issues and improve products. Destination marketing In tourism industries, social media is an effective channel to attract travellers and promote tourism products and services by engaging with customers. The brand of tourist destinations can be built through marketing campaigns on social media and by engaging with customers. For example, the "Snow at First Sight" campaign launched by the State of Colorado aimed to bring brand awareness to Colorado as a winter destination. The campaign used social media platforms, for example, Facebook and Twitter, to promote this competition, and asked participants to share experiences, pictures and videos on social media platforms. As a result, Colorado enhanced its image as a winter destination and created a campaign worth about $2.9 million. Tourism organisations can earn brand loyalty from interactive marketing campaigns on social media with engaging passive communication tactics. For example, "Moms" advisors of Walt Disney World are responsible for offering suggestions and replying to questions about family trips at Walt Disney World. Because of their Disney expertise, the "Moms" were chosen to represent the campaign. Social networking sites, such as Facebook, can be used as a platform for providing detailed information about the marketing campaign, as well as real-time online communication with customers. Korean Airline Tour created and maintained a relationship with customers by using Facebook for individual communication purposes. Travel 2.0 refers to a model of Web 2.0 applied to tourism industries which provides virtual travel communities. The travel 2.0 model allows users to create their own content and exchange their words through globally interactive features on websites. The users also can contribute their experiences, images and suggestions regarding their trips through online travel communities. For example, TripAdvisor is an online travel community which enables users to autonomously rate and share their reviews and feedback on hotels and tourist destinations. Non pre-associate users can interact socially and communicate through discussion forums on TripAdvisor. Social media, especially Travel 2.0 websites, plays a crucial role in decision-making behaviors of travelers. 
User-generated content on social media tools has a significant impact on travelers' choices and preferences among organisations. Travel 2.0 sparked a radical change in how travelers receive information, shifting from business-to-customer marketing to peer-to-peer reviews. User-generated content became a vital tool for helping many travelers manage their international trips, especially first-time visitors. Travellers tend to trust and rely on peer-to-peer reviews and virtual communication on social media rather than the information provided by travel suppliers. In addition, autonomous review features on social media help travelers reduce risks and uncertainties before the purchasing stages. Social media is also a channel for customer complaints and negative feedback, which can damage the images and reputations of organisations and destinations. For example, a majority of UK travellers read customer reviews before booking hotels, and half of those customers would refrain from booking hotels that receive negative feedback. Therefore, organisations should develop strategic plans to handle and manage negative feedback on social media. Although the user-generated content and rating systems on social media are outside a business's control, the business can monitor those conversations and participate in communities to enhance customer loyalty and maintain customer relationships. Education Web 2.0 could allow for more collaborative education. For example, blogs give students a public space to interact with one another and with the content of the class. Some studies suggest that Web 2.0 can increase the public's understanding of science, which could improve government policy decisions. A 2012 study by researchers at the University of Wisconsin–Madison notes that "...the internet could be a crucial tool in increasing the general public's level of science literacy. This increase could then lead to better communication between researchers and the public, more substantive discussion, and more informed policy decision." Web-based applications and desktops Ajax has prompted the development of Web sites that mimic desktop applications, such as word processing, the spreadsheet, and slide-show presentation. WYSIWYG wiki and blogging sites replicate many features of PC authoring applications. Several browser-based services have emerged, including EyeOS and YouOS (no longer active). Although named operating systems, many of these services are application platforms. They mimic the user experience of desktop operating systems, offering features and applications similar to a PC environment, and are able to run within any modern browser. However, these so-called "operating systems" do not directly control the hardware on the client's computer. Numerous web-based application services appeared during the dot-com bubble of 1997–2001 and then vanished, having failed to gain a critical mass of customers. Distribution of media XML and RSS Many regard syndication of site content as a Web 2.0 feature. Syndication uses standardized protocols to permit end-users to make use of a site's data in another context (such as another Web site, a browser plugin, or a separate desktop application). Protocols permitting syndication include RSS (really simple syndication, also known as Web syndication), RDF (as in RSS 1.1), and Atom, all of which are XML-based formats. Observers have started to refer to these technologies as Web feeds. 
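To make the syndication idea above concrete, the following minimal sketch shows the general shape of an RSS 2.0 document and how a consuming application (a feed reader, browser plugin, or other site) might read it. The feed contents and example.org URLs are invented for illustration; this is not the feed of any particular site.

```python
# Minimal sketch: consuming an RSS 2.0 "Web feed" (feed content is invented).
import xml.etree.ElementTree as ET

rss_document = """<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Example Site Updates</title>
    <link>https://example.org/</link>
    <description>Recent changes, syndicated for use in other contexts.</description>
    <item>
      <title>New article published</title>
      <link>https://example.org/articles/42</link>
      <pubDate>Mon, 01 Jan 2024 12:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>"""

# Parse the XML and list each item, as a feed reader would.
channel = ET.fromstring(rss_document).find("channel")
for item in channel.findall("item"):
    print(item.findtext("title"), "->", item.findtext("link"))
```

The same channel/item structure underlies the notification "signals" discussed earlier: a site publishes the feed once, and any number of other contexts can poll it for new items.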
Specialized protocols such as FOAF and XFN (both for social networking) extend the functionality of sites and permit end-users to interact without centralized Web sites. Web APIs Web 2.0 often uses machine-based interactions such as REST and SOAP. Servers often expose proprietary Application programming interfaces (APIs), but standard APIs (for example, for posting to a blog or notifying a blog update) have also come into use. Most communications through APIs involve XML or JSON payloads. REST APIs, through their use of self-descriptive messages and hypermedia as the engine of application state, should be self-describing once an entry URI is known. Web Services Description Language (WSDL) is the standard way of publishing a SOAP Application programming interface and there are a range of Web service specifications. Trademark In November 2004, CMP Media applied to the USPTO for a service mark on the use of the term "WEB 2.0" for live events. On the basis of this application, CMP Media sent a cease-and-desist demand to the Irish non-profit organisation IT@Cork on May 24, 2006, but retracted it two days later. The "WEB 2.0" service mark registration passed final PTO Examining Attorney review on May 10, 2006, and was registered on June 27, 2006. The European Union application (which would confer unambiguous status in Ireland) was declined on May 23, 2007. Criticism Critics of the term claim that "Web 2.0" does not represent a new version of the World Wide Web at all, but merely continues to use so-called "Web 1.0" technologies and concepts: First, techniques such as Ajax do not replace underlying protocols like HTTP, but add a layer of abstraction on top of them. Second, many of the ideas of Web 2.0 were already featured in implementations on networked systems well before the term "Web 2.0" emerged. Amazon.com, for instance, has allowed users to write reviews and consumer guides since its launch in 1995, in a form of self-publishing. Amazon also opened its API to outside developers in 2002.Previous developments also came from research in computer-supported collaborative learning and computer-supported cooperative work (CSCW) and from established products like Lotus
Technology
Internet
null
1555317
https://en.wikipedia.org/wiki/Gunter%27s%20chain
Gunter's chain
Gunter's chain (also known as Gunter's measurement) is a distance-measuring device used for surveying. It was designed and introduced in 1620 by English clergyman and mathematician Edmund Gunter (1581–1626). It enabled plots of land to be accurately surveyed and plotted, for legal and commercial purposes. Gunter developed an actual measuring chain of 100 links. These, the chain and the link, became statutory measures in England and subsequently the British Empire. Description The chain is divided into 100 links, usually marked off into groups of 10 by brass rings or tags which simplify intermediate measurement. Each link is thus long. A quarter chain, or 25 links, measures and thus measures a rod (or pole). Ten chains measure a furlong and 80 chains measure a statute mile. Gunter's chain reconciled two seemingly incompatible systems: the traditional English land measurements, based on the number four, and decimals based on the number 10. Since an acre measured 10 square chains in Gunter's system, the entire process of land area measurement could be computed using measurements in chains, and then converted to acres by dividing the results by 10. Hence 10 chains by 10 chains (100 square chains) equals 10 acres, 5 chains by 5 chains (25 square chains) equals 2.5 acres. By the 1670s the chain and the link had become statutory units of measurement in England. Method The method of surveying a field or other parcel of land with Gunter's chain is to first determine corners and other significant locations, and then to measure the distance between them, taking two points at a time. The surveyor is assisted by a chainman. A ranging rod (usually a prominently coloured wooden pole) is placed in the ground at the destination point. Starting at the originating point the chain is laid out towards the ranging rod, and the surveyor then directs the chainman to make the chain perfectly straight and pointing directly at the ranging rod. A pin is put in the ground at the forward end of the chain, and the chain is moved forward so that its hind end is at that point, and the chain is extended again towards the destination point. This process is called ranging, or in the US, chaining; it is repeated until the destination rod is reached, when the surveyor notes how many full lengths (chains) have been laid, and he can then directly read how many links (one-hundredth parts of the chain) are in the distance being measured. The chain usually ends in a handle which may or may not be part of the measurement. An inner loop (visible in the NMAH photograph) is the correct place to put the pin for some chains. Many chains were made with the handles as part of the end link and thus were included in the measurement. The whole process is repeated for all the other pairs of points required, and it is a simple matter to make a scale diagram of the plot of land. The process is surprisingly accurate and requires only very low technology. Surveying with a chain is simple if the land is level and continuous—it is not physically practicable to range across large depressions or significant waterways, for example. On sloping land, the chain was to be "leveled" by raising one end as needed, so that undulations did not increase the apparent length of the side or the area of the tract. Unit of length Although link chains were later superseded by the steel ribbon tape (a form of tape measure), its legacy was a new statutory unit of length called the chain, equal to 22 yards (66 feet) of 100 links. 
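As a worked illustration of the chain-to-acre arithmetic described above (a chain being 66 feet of 100 links, and an acre being 10 square chains), the short sketch below converts a rectangular plot measured in chains and links into acres. The plot dimensions are invented for the example.

```python
# Illustrative sketch of Gunter's chain arithmetic (plot dimensions are invented).
# 1 chain = 100 links = 66 feet; 1 acre = 10 square chains.

LINKS_PER_CHAIN = 100
FEET_PER_CHAIN = 66
SQUARE_CHAINS_PER_ACRE = 10

def to_chains(chains, links=0):
    """Express a measurement taken as whole chains plus links in decimal chains."""
    return chains + links / LINKS_PER_CHAIN

def rectangle_acres(length_chains, width_chains):
    """Area of a rectangular plot, in acres, from side lengths in chains."""
    return (length_chains * width_chains) / SQUARE_CHAINS_PER_ACRE

# A plot measured as 12 chains 50 links by 8 chains 25 links:
length = to_chains(12, 50)            # 12.50 chains
width = to_chains(8, 25)              # 8.25 chains
print(rectangle_acres(length, width)) # 10.3125 acres
print(rectangle_acres(10, 10))        # 10.0 acres (100 square chains = 10 acres)
print(FEET_PER_CHAIN * 80)            # 5280 feet, i.e. 80 chains = 1 statute mile
```

The decimal link subdivision is what makes the conversion a matter of simple multiplication and a final division by ten, which was the point of Gunter's design.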
This unit still exists as a location identifier on British railways, as well as all across America in what is called the public land survey system. In the United States (US), for example, Public Lands Survey plats are published in the chain unit to maintain the consistency of a two-hundred-year-old database. In the Midwest of the US it is not uncommon to encounter deeds with references to chains, poles, or rod units, especially in farming country. Minor roads surveyed in Australia and New Zealand in the 19th and early 20th centuries are customarily one chain wide. The length of a cricket pitch is one chain (22 yards). Similar measuring chains A similar American system, of lesser popularity, is Ramsden's or the engineer's system, where the chain consists also of 100 links, each one foot (0.3048 m) long. The original of such chains was that constructed, to very high precision, for the measurement of the baselines of the Anglo-French Survey (1784–1790) and the Principal Triangulation of Great Britain. The even less common Rathborn system, also from the 17th century, is based on a 200-link chain of two rods (33 feet, 10.0584 m) length. Each rod (or perch or pole) consists of 100 links, (1.98 inches, 50.292 mm each), which are called seconds (), ten of which make a prime (, 19.8 inches, 0.503 m). Vincent Wing made chains with 9.90-inch links, most commonly as 33-foot half-chains of 40 links. These chains were sometimes used in the American colonies, particularly Pennsylvania. In India, surveying chains (occasionally 30 metres) in length are used. Links are long. In France after the French Revolution, and later in countries that had adopted the Metric System, 10-metre (32 ft 9.7 in) chains, of 50 links each long were used until the 1950s.
Technology
Surveying tools
null
1555695
https://en.wikipedia.org/wiki/Palm%20%28unit%29
Palm (unit)
The palm is an obsolete anthropic unit of length, originally based on the width of the human palm and then variously standardized. The same name is also used for a second, rather larger unit based on the length of the human hand. The width of the palm was a traditional unit in Ancient Egypt, Israel, Greece, and Rome and in medieval England, where it was also known as the hand, handbreadth, or handsbreadth. The length of the hand—originally the Roman "greater palm"—formed the palm of medieval Italy and France. In Spanish customary units or was the palm, while was the span, the distance between an outstretched thumb and little finger. In Portuguese or was the span. History Ancient Egypt The Ancient Egyptian palm () has been reconstructed as about . The unit is attested as early as the reign of Djer, third pharaoh of the First Dynasty, and appears on many surviving cubit-rods. The palm was subdivided into four digits () of about . Three palms made up the span () or lesser span () of about . Four palms made up the foot () of about . Five made up the of about . Six made up the "Greek cubit" () of about . Seven made up the "royal cubit" () of about . Eight made up the pole () of about . Ancient Israel The palm was not a major unit in ancient Mesopotamia but appeared in ancient Israel as the , , or (, ."a spread"). Scholars were long uncertain as to whether this was reckoned using the Egyptian or Babylonian cubit, but now believe it to have approximated the Egyptian "Greek cubit", giving a value for the palm of about . As in Egypt, the palm was divided into four digits ( or ) of about and three palms made up a span () of about . Six made up the Hebrew cubit ( or ) of about , although the cubits mentioned in Ezekiel follow the royal cubit in consisting of seven palms comprising about . Ancient Greece The Ancient Greek palm (, palaistḗ, , dō̂ron, or , daktylodókhmē) made up ¼ of the Greek foot (poûs), which varied by region between . This gives values for the palm between , with the Attic palm around . These various palms were divided into four digits (dáktylos) or two "middle phalanges" (kóndylos). Two palms made a half-foot (hēmipódion or dikhás); three, a span (spithamḗ); four, a foot (poûs); five, a short cubit (pygōn); and six, a cubit (pē̂khys). The Greeks also had a less common "greater palm" of five digits. Ancient Rome The Roman palm () or lesser palm () made up ¼ of the Roman foot (), which varied in practice between but is thought to have been officially . This would have given the palm a notional value of within a range of a few millimeters. The palm was divided into four digits () of about or three inches () of about . Three made a span ( or "greater palm") of about ; four, a Roman foot; five, a hand-and-a-foot () of about ; six, a cubit () of about . Continental Europe The palms of medieval () and early modern Europe—the Italian, Spanish, and Portuguese and French —were based upon the Roman "greater palm", reckoned as a hand's span or length. In Italy, the palm () varied regionally. The Genovese palm was about ; in the Papal States, the Roman palm about according to Hutton but divided into the Roman "architect's palm" () of about and "merchant's palm" () of about according to Greaves; and the Neapolitan palm reported as by Riccioli but by Hutton's other sources. On Sicily and Malta, it was . In France, the palm ( or ) was about in Pernes-les-Fontaines, Vaucluse, and about in Languedoc. 
Palaiseau gave metric equivalents for the palme or palmo in 1816, and Rose provided English equivalents in 1900. Nineteenth-century Italian sources indicate that:
- the ancient Venetian palm, five of which made a passo (pace), was equivalent to 0.3774 metres;
- the Neapolitan palm was equal to 0.26333670 metres from 1480 to 1840;
- the Neapolitan palm was equal to 0.26455026455 metres under the law of 6 April 1840;
values which differ from the palm equivalents in metres cited previously. England The English palm, handbreadth, or handsbreadth is three inches (7.62 cm) or, equivalently, four digits. The measurement was, however, not always well distinguished from the hand or handful, which became equal to four inches by a 1541 statute of Henry VIII. The palm was excluded from the British Weights and Measures Act 1824 that established the imperial system and is not a standard US customary unit. Elsewhere The Moroccan palm is given by Hutton as about .
Physical sciences
English
Basics and measurement
1556266
https://en.wikipedia.org/wiki/Nail%20%28unit%29
Nail (unit)
A nail, as a unit of cloth measurement, is generally a sixteenth of a yard, or 2¼ inches (5.715 cm). The nail was apparently named after the practice of hammering brass nails into the counter at shops where cloth was sold. On the other hand, R D Connor, in The weights and measures of England (p 84), states that the nail was the 16th part of a Roman foot, i.e., a digitus or finger, although he provides no reference to support this. Zupko's A dictionary of weights and measures for the British Isles (p 256) states that the nail was originally the distance from the thumbnail to the joint at the base of the thumb, or alternatively, from the end of the middle finger to the second joint. An archaic usage of the term nail is as a sixteenth of a (long) hundredweight for mass, or one clove of 7 pounds avoirdupois (3.175 kg). The nail in literature Explanation: Katherine and Petruchio are purchasing new clothes for Bianca's wedding. Petruchio is concerned that Katharine's dress has too many frills, wonders what it will cost, and suspects that he has been cheated. Katherine says she likes it, and complains that Petruchio is making a fool of her. The tailor repeats Katherine's words: Sir, she says you're making a fool of her. Petruchio then launches into the above-quoted tirade. Monstrous may be a double-entendre for cuckold. The half-yard, quarter and nail were divisions of the yard used in cloth measurement. The nail in law
Physical sciences
English
Basics and measurement
9394749
https://en.wikipedia.org/wiki/Temperate%20deciduous%20forest
Temperate deciduous forest
Temperate deciduous or temperate broad-leaf forests are a variety of temperate forest 'dominated' by deciduous trees that lose their leaves each winter. They represent one of Earth's major biomes, making up 9.69% of global land area. These forests are found in areas with distinct seasonal variation that cycle through warm, moist summers, cold winters, and moderate fall and spring seasons. They are most commonly found in the Northern Hemisphere, with particularly large regions in eastern North America, East Asia, and a large portion of Europe, though smaller regions of temperate deciduous forests are also located in South America. Examples of trees typically growing in the Northern Hemisphere's deciduous forests include oak, maple, basswood, beech and elm, while in the Southern Hemisphere, trees of the genus Nothofagus dominate this type of forest. Temperate deciduous forests provide several unique ecosystem services, including habitats for diverse wildlife, and they face a set of natural and human-induced disturbances that regularly alter their structure. Geography Located below the northern boreal forests, temperate deciduous forests make up a significant portion of the land between the Tropic of Cancer (23°N) and latitudes of 50° North, in addition to areas south of the Tropic of Capricorn (23°S). Canada, the United States, China, and several European countries have the largest land area covered by temperate deciduous forests, with smaller portions present throughout South America, specifically Chile and Argentina. Climate Temperate conditions refer to the cycle through four distinct seasons that occurs in areas between the polar regions and tropics. In these regions where temperate deciduous forest are found, warm and cold air circulation accounts for the biome's characteristic seasonal variation. Temperature The average annual temperature tends to be around 10 °Celsius, though this is dependent on the region. Due to shading from the canopy, the microclimate of temperate deciduous forests tends to be about 2.1 °Celsius cooler than the surroundings, whereas winter temperatures are from 0.4 to 0.9 °Celsius warmer within forests as a result of insulation from vegetation strata. Precipitation Annually, temperate deciduous forests experience approximately 750 to 1,500 millimeters of precipitation. As there is no distinct rainy season, precipitation is spread relatively evenly throughout the year. Snow makes up a portion of the precipitation present in temperate deciduous forests in the winter. Tree branches can intercept up to 80% of snowfall, affecting the amount of snow that ultimately reaches and melts on the forest floor. Seasonal variation A factor of temperate deciduous forests is their leaf loss during the transition from fall to winter, an adaptation that arose as a solution for the low sunlight conditions and bitter cold temperatures. In these forests, winter is a time of dormancy for plants, when broadleaf deciduous trees conserve energy and prevent water loss, and many animal species hibernate or migrate. Preceding winter is fruit-bearing autumn, a time when leaves change color to various shades of red, yellow, and orange as chlorophyll breakdown gives rise to anthocyanin, carotene, and xanthophyl pigments. Besides the characteristic colorful autumns and leafless winters, temperate deciduous forests have a lengthy growing season during the spring and summer months that tends to last anywhere from 120 to 250 days. 
Spring in temperate deciduous forests is a period of ground vegetation and seasonal herb growth, a process that starts early in the season before trees have regrown their leaves and when ample sunlight is available. Once a suitable temperature is reached in mid- to late spring, budding and flowering of tall deciduous trees also begins. In the summer, when fully-developed leaves occupy all trees, a moderately-dense canopy creates shade, increasing the humidity of forested areas. Characteristics Soil Though there is latitudinal variation in soil quality of temperate deciduous forests, with those at central latitudes having a higher soil productivity than those more north or south, soil in this biome is overall highly fertile. The fallen leaves from deciduous trees introduce detritus to the forest floor, increasing levels of nutrients and organic matter in the soil. The high soil productivity of temperate deciduous forests puts them at a high risk of conversion to agricultural land for human use. Flora Temperate deciduous forests are characterized by a variety of temperate deciduous tree species that vary based on region. Most tree species present in temperate deciduous forests are broadleaf trees that lose their leaves in the fall, though some coniferous trees such as pines (Pinus) are present in northern temperate deciduous forests. Europe's temperate deciduous forests are rich with oaks of the genus Quercus, European beech trees (Fagus sylvatica), and hornbeams (Fagus grandifolia), while those in Asia tend to have maples of the genus Acer, a variety of ash trees (Fraxinus), and basswoods (Tilia). Similarly to Asia, North American forests have maples, especially Acer saccharum, and basswoods, in addition to hickories (Carya) and American chestnuts (Castanea dentata). Southern beech (Nothofagus) trees are prevalent in the temperate deciduous forests of South America. Elm trees (Ulmus) and willows (Salix) can also be found dispersed throughout the temperate deciduous forests of the world. While a wide variety of tree species can be found throughout the temperate deciduous forest biome, tree species richness is typically moderate in each individual ecosystem, with only 3 to 4 tree species per square kilometer. Besides the old-growth trees that, with their domed tree crowns, form a canopy that lets little light filter through, a sub-canopy of shrubs such as mountain laurel and azaleas is present. These other plant species found in the canopy layers below the 35- to 40-meter mature trees are either adapted to low-light conditions or follow a seasonal schedule of growth that allows them to thrive before the formation of the canopy from mid-spring through mid-fall. Mosses and lichens make up significant ground cover, though they are also found growing on trees. Fauna In addition to characteristic flora, temperate deciduous forests are home to several animal species that rely on the trees and other plant life for shelter and resources, such as squirrels, rabbits, skunks, birds, mountain lions, bobcats, timber wolves, foxes, and black bears. Deer are also present in large populations, though they are clearing rather than true forest animals. Large deer populations have deleterious effects on tree regeneration overall, and grazing also has significant negative effects on the number and kind of herbaceous flowering plants. The continuous increase of deer populations and killing of top carnivores suggests that overgrazing by deer will continue. 
Ecosystem services Temperate deciduous forests provide several provisioning, regulating, supporting, and cultural ecosystem services. With a higher biodiversity than boreal forests, temperate deciduous forests maintain their genetic diversity by providing the supporting service of habitat availability for a variety of plants and animal species dependent on shade. These forests play a role in the regulation of air and soil quality by preventing soil erosion and flooding, while also storing carbon in their soil. Provisioning services provided by temperate deciduous forests include access to sources of drinking water, oxygen, food, timber, and biomass. Humans depend on temperate deciduous forests for cultural services, using them as spaces for recreation and spiritual practices. Disturbances Natural disturbances cause regular renewal of temperate deciduous forests and create a healthy, heterogeneous environment with constantly changing structures and populations. Weather events like snow, storms, and wind can cause varying degrees of change to the structure of forest canopies, creating log habitats for small animals and spaces for less shade-tolerant species to grow where fallen trees once stood. Other abiotic sources of disturbances to temperate deciduous forests include droughts, waterlogging, and fires. Natural surface fire patterns are especially important in pine reproduction. Biotic factors affecting forests take the form of fungal outbreaks in addition to mountain pine beetle and bark beetle infestations. These beetles are particularly prevalent in North America and kill trees by clogging their vascular tissue. Temperate deciduous forests tend to be resilient after minor weather-related disturbances, though major insect infestations, widespread anthropogenic disturbances, and catastrophic weather events can cause century-long succession or even the permanent conversion of the forest into a grassland. Climate change Rising temperatures and increased dryness in temperate deciduous forests have been noted in recent years as the climate changes. As a result, temperate deciduous forests have been experiencing an earlier onset to spring, as well as a global increase in the frequency and intensity of disturbances. They have been experiencing lower ecological resilience in the face of increasing mega-fires, longer droughts, and severe storms. Damaged wood from increased storm disturbance events provides nesting habitats for beetles, concurrently increasing bark beetle damage. Forest cover decreases with continuous severe disturbances, causing habitat loss and lower biodiversity. Human use and impact Humans rely on wood from temperate deciduous forests for use in the timber industry as well as paper and charcoal production. Logging practices emit high levels of carbon while also causing erosion because fewer tree roots are present to provide soil support. During the European colonization of North America, potash made from tree ashes was exported back to Europe as fertilizer. At this time in history, clearcutting of the original temperate deciduous forests was also performed to make space for agricultural land use, so many forests now present are second-growth. Over 50% of temperate deciduous forests are affected by fragmentation, resulting in small fragments dissected by fields and roads; these islands of green often differ substantially from the original forests and cause challenges for species migration. 
Seminatural temperate deciduous forests with developed trail systems serve as sites for tourism and recreational activities, such as hiking and hunting. In addition to fragmentation, human use of land adjacent to temperate deciduous forests is associated with pollution that can stunt the growth rate of trees. Invasive species that outcompete native species and alter forest nutrient cycles, such as common buckthorn (Rhamnus cathartica), are also introduced by humans. The introduction of exotic diseases, especially, continues to be a threat to forest trees and, hence, to the forest. Humans have also introduced earthworms into deciduous forests in North America, which has had a profound impact on the ecosystem and reduced biodiversity. Conservation A method for preserving temperate deciduous forests that has been used in the past is fire suppression. However, preventing fires is associated with a build-up of biomass that ultimately increases the intensity of incidental fires. As an alternative, prescribed burning has been put into practice, in which regular, managed fires are administered to forest ecosystems to imitate the natural disturbances that play a significant role in preserving biodiversity. To combat the effects of deforestation, reforestation has been employed.
Physical sciences
Forests
Earth science
4147723
https://en.wikipedia.org/wiki/Scleromochlus
Scleromochlus
Scleromochlus (from , 'hard' and , 'lever') is an extinct genus of small pterosauromorph archosaurs from the Late Triassic Lossiemouth Sandstone of Scotland. The genus contains the type and only species Scleromochlus taylori, named by Arthur Smith Woodward in 1907. Discovery Its fossils have been found in the Carnian Lossiemouth Sandstone of Scotland. The holotype was discovered around 1900 and is listed as specimen BMNH R3556, a partial skeleton preserved as an impression in sandstone, with portions of the skull and tail missing. Arthur Smith Woodward named and described Scleromochlus taylori in 1907. Description Scleromochlus taylori was about long, with long hind legs; it may have been capable of four-legged and two-legged locomotion. Studies about its gait suggest that it engaged in kangaroo- or springhare-like plantigrade hopping; however, a 2020 reassessment of Scleromochlus by Bennett suggested that it was a "sprawling quadrupedal hopper analogous to frogs." If Scleromochlus is indeed related to pterosaurs, this may offer insight as to how the latter evolved, since early pterosaurs also show adaptations for saltatorial locomotion. Classification A lightly built cursorial animal, its phylogenetic position has been debated; as different analyses have found it to be either the basal-most ornithodiran, the sister-taxon to Pterosauria, or a basal member of Avemetatarsalia that lies outside of Ornithodira. In the phylogenetic analyses conducted by Nesbitt et al. (2017) Scleromochlus was recovered either as a basal member of Dinosauromorpha or as a non-aphanosaurian, non-pterosaur basal avemetatarsalian. However, the authors stressed that scoring Scleromochlus was challenging given the small size and poor preservation of the fossils, and stated that it could not be scored for many of the important characters that optimize near the base of Avemetatarsalia. In 2020, Bennett interpreted Scleromochlus as possessing certain characteristics, including osteoderms and a crurotarsal morphology of the ankle, which suggested that Scleromochlus was not closely related to ornithodirans. Instead, he argued for a position of Scleromochlus among the Doswelliidae or elsewhere among basal members of the Archosauriformes. However, in 2022, Foffa and colleagues reconstructed a complete skeleton using microcomputed tomographic scans of the seven specimens found to date. This enabled a new phylogenetic analysis to be undertaken, which strongly supported the hypothesis that Scleromochlus was a member of the Pterosauromorpha – either as a genus of the Lagerpetidae family (shown to be a part of Pterosauromorpha in 2020) or as the sister group to pterosaurs and lagerpetids. Previous alternative classifications were demonstrated to have been based on misinterpretations of incomplete or ambiguous anatomical features found in the fossil record.
Biology and health sciences
Other prehistoric archosaurs
Animals
10897878
https://en.wikipedia.org/wiki/Mesoscopic%20physics
Mesoscopic physics
Mesoscopic physics is a subdiscipline of condensed matter physics that deals with materials of an intermediate size. These materials range in size from the nanoscale of a small quantity of atoms (such as a molecule) up to materials measuring micrometres. The lower limit can also be defined as being the size of individual atoms. At the macroscopic scale are bulk materials. Both mesoscopic and macroscopic objects contain many atoms. Whereas macroscopic objects are described by average properties derived from their constituent materials and usually obey the laws of classical mechanics, a mesoscopic object is affected by thermal fluctuations around the average, and its electronic behavior may require modeling at the level of quantum mechanics. A macroscopic electronic device, when scaled down to a meso-size, starts revealing quantum mechanical properties. For example, at the macroscopic level the conductance of a wire increases continuously with its diameter. However, at the mesoscopic level, the wire's conductance is quantized: the increases occur in discrete, whole steps. During research, mesoscopic devices are constructed, measured and observed experimentally and theoretically in order to advance understanding of the physics of insulators, semiconductors, metals, and superconductors. The applied science of mesoscopic physics deals with the potential of building nanodevices. Mesoscopic physics also addresses fundamental practical problems which occur when a macroscopic object is miniaturized, as with the miniaturization of transistors in semiconductor electronics. The mechanical, chemical, and electronic properties of materials change as their size approaches the nanoscale, where the percentage of atoms at the surface of the material becomes significant. For bulk materials larger than one micrometre, the percentage of atoms at the surface is insignificant in relation to the number of atoms in the entire material. The subdiscipline has dealt primarily with artificial structures of metal or semiconducting material which have been fabricated by the techniques employed for producing microelectronic circuits. There is no rigid definition for mesoscopic physics, but the systems studied are normally in the range of 100 nm (the size of a typical virus) to 1000 nm (the size of a typical bacterium); 100 nanometers is the approximate upper limit for a nanoparticle. Thus, mesoscopic physics has a close connection to the fields of nanofabrication and nanotechnology. Devices used in nanotechnology are examples of mesoscopic systems. Three categories of new electronic phenomena in such systems are interference effects, quantum confinement effects and charging effects. Quantum confinement effects Quantum confinement effects describe electrons in terms of energy levels, potential wells, valence bands, conduction bands, and electron energy band gaps. Electrons in bulk dielectric materials (larger than 10 nm) can be described by energy bands or electron energy levels. Electrons exist at different energy levels or bands. In bulk materials these energy levels are described as continuous because the difference in energy is negligible. As electrons stabilize at various energy levels, most vibrate in valence bands below a forbidden energy level, named the band gap. This region is an energy range in which no electron states exist. A smaller number have energy levels above the forbidden gap; this is the conduction band. 
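Returning to the conductance quantization mentioned above, a rough numerical illustration (not part of the original article) is the size of the conductance quantum G0 = 2e²/h, the step by which the conductance of a ballistic mesoscopic wire increases as each spin-degenerate channel opens.

```python
# Illustrative sketch: size of the conductance steps in a ballistic mesoscopic wire.
# Conductance is quantized in units of G0 = 2 e^2 / h (spin-degenerate channels).

ELEMENTARY_CHARGE = 1.602176634e-19   # coulombs (exact SI value)
PLANCK_CONSTANT = 6.62607015e-34      # joule-seconds (exact SI value)

G0 = 2 * ELEMENTARY_CHARGE**2 / PLANCK_CONSTANT   # siemens, about 7.75e-5 S

print(f"Conductance quantum G0 = {G0:.4e} S "
      f"(about {1 / G0 / 1000:.2f} kilo-ohms per open channel)")

# Conductance of a wire carrying n fully open channels rises in whole steps of G0:
for n in range(1, 5):
    print(n, "channel(s):", f"{n * G0:.4e} S")
```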
The quantum confinement effect can be observed once the diameter of the particle is of the same magnitude as the wavelength of the electron's wave function. When materials are this small, their electronic and optical properties deviate substantially from those of bulk materials. As the material is miniaturized towards nano-scale the confining dimension naturally decreases. The characteristics are no longer averaged by bulk, and hence continuous, but are at the level of quanta and thus discrete. In other words, the energy spectrum becomes discrete, measured as quanta, rather than continuous as in bulk materials. As a result, the bandgap asserts itself: there is a small and finite separation between energy levels. This situation of discrete energy levels is called quantum confinement. In addition, quantum confinement effects consist of isolated islands of electrons that may be formed at the patterned interface between two different semiconducting materials. The electrons typically are confined to disk-shaped regions termed quantum dots. The confinement of the electrons in these systems changes their interaction with electromagnetic radiation significantly, as noted above. Because the electron energy levels of quantum dots are discrete rather than continuous, the addition or subtraction of just a few atoms to the quantum dot has the effect of altering the boundaries of the bandgap. Changing the geometry of the surface of the quantum dot also changes the bandgap energy, owing again to the small size of the dot, and the effects of quantum confinement. Interference effects In the mesoscopic regime, scattering from defects – such as impurities – induces interference effects which modulate the flow of electrons. The experimental signature of mesoscopic interference effects is the appearance of reproducible fluctuations in physical quantities. For example, the conductance of a given specimen oscillates in an apparently random manner as a function of fluctuations in experimental parameters. However, the same pattern may be retraced if the experimental parameters are cycled back to their original values; in fact, the patterns observed are reproducible over a period of days. These are known as universal conductance fluctuations. Time-resolved mesoscopic dynamics Time-resolved experiments in mesoscopic dynamics: the observation and study, at nanoscales, of condensed phase dynamics such as crack formation in solids, phase separation, and rapid fluctuations in the liquid state or in biologically relevant environments; and the observation and study, at nanoscales, of the ultrafast dynamics of non-crystalline materials. Related s
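The size dependence of the discrete levels described above can be estimated with the textbook particle-in-a-box formula, E_n = n²h²/(8mL²). The sketch below is an idealization under that assumption, not a model of any specific quantum dot; it shows how the gap between the two lowest levels grows as the confining dimension L shrinks from 100 nm toward 1 nm.

```python
# Illustrative particle-in-a-box estimate of quantum confinement:
# E_n = n^2 h^2 / (8 m L^2) for an electron confined to a box of width L.
# An idealization only; real quantum dots require band structure and effective masses.

PLANCK_CONSTANT = 6.62607015e-34    # joule-seconds
ELECTRON_MASS = 9.1093837015e-31    # kilograms
EV = 1.602176634e-19                # joules per electronvolt

def level_ev(n, box_width_m):
    """Energy of the n-th level of a 1-D infinite well, in electronvolts."""
    return n**2 * PLANCK_CONSTANT**2 / (8 * ELECTRON_MASS * box_width_m**2) / EV

for width_nm in (100, 10, 1):
    gap = level_ev(2, width_nm * 1e-9) - level_ev(1, width_nm * 1e-9)
    print(f"L = {width_nm:>3} nm: E2 - E1 = {gap:.3e} eV")

# The level spacing scales as 1/L^2: it is negligible at 100 nm but becomes
# comparable to thermal and optical energies at nanometre sizes, which is why
# the discreteness only "asserts itself" in the mesoscopic-to-nano regime.
```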
Physical sciences
Basics_2
Physics
10903919
https://en.wikipedia.org/wiki/Anal%20hygiene
Anal hygiene
Anal hygiene refers to practices (anal cleansing) that are performed on the anus to maintain personal hygiene, usually immediately or shortly after defecation. Anal cleansing may also occur while showering or bathing. Post-defecation cleansing is rarely discussed academically, partly due to the social taboo surrounding it. The scientific objective of post-defecation cleansing is to prevent exposure to pathogens. The process of post-defecation cleansing involves washing the anus and inner part of the buttocks with water. Water-based cleansing typically involves either the use of running water from a handheld vessel and a hand for washing or the use of pressurized water through a jet device, such as a bidet. In either method, subsequent hand sanitization is essential to achieve the ultimate objectives of post-defecation cleansing. History Materials The ancient Greeks were known to use fragments of ceramic, known as pessoi (πεσσοί), to perform anal cleansing. The ancient Romans used a tersorium (), consisting of a sponge on a wooden stick. The stick would be soaked in a water channel in front of a toilet, and then stuck through the hole built into the front of the toilet for anal cleaning. The tersorium was shared by people using public latrines. To clean the sponge, they washed it in a bucket with water and salt or vinegar. However, this became a breeding ground for bacteria, causing the spread of disease in the latrine. In ancient Japan, wooden skewers known as chuugi ("shit sticks") were used for post-defecation cleaning. The use of toilet paper first started in ancient China around the 2nd century BC. According to Charlier (2012), French physician François Rabelais had argued about the ineffectiveness of toilet paper in the 16th century. The first commercially available toilet paper was invented by American entrepreneur Joseph Gayetty in 1857, with the dawning of the Second Industrial Revolution. Facilities Post-defecation facilities evolved with human civilization, and thus, so did post-defecation cleansing. According to Fernando, there is archeological evidence of toilet use in medieval Sri Lanka, ranging from the 6th-century Abhayagiri Complex in Anuradhapura; the 10th-century Pamsukulika Monastery in Ritigala, and the Baddhasimapasada and the Alahana Pirivena hospital complex in Polonnaruwa; to the 12th-century hospital toilet in Mihintale. These toilets were found to be with a complete system of plumbing and sewage with multi-stage treatment plants. According to Buddhism, toilet etiquettes (Wachchakutti Wattakkandaka in Pali language) were enumerated by Buddha himself in Tripitaka, the earliest collection of Buddhist teachings. Common methods Washing In countries that have predominantly Catholic, Eastern Orthodox, Hindu, Buddhist, or Islamic cultural traditions, water is usually used for anal cleansing. It is also practiced in some Protestant cultures, such as that of Finland. The cleaning process is typically done through either a pressurized device (e.g., a bidet or a bidet shower) or a non-pressurized vessel (e.g., a lota or an aftabeh) alongside a person's hand; many cultures assert that only the left hand is to be used for this task. Washing is sometimes followed by drying the cleaned areas with a cloth towel. Wiping In some parts of the developing world and in other areas where water may not always be usable, such as during camping trips, materials such as vegetable matter (leaves), mudballs, snow, corncobs, and stones are sometimes used for anal cleansing. 
Having hygienic means for anal cleansing available at the toilet or site of defecation is important for overall public health. The absence of proper materials in households can, under some circumstances, be correlated to the number of diarrhea episodes per household. The history of anal hygiene, from the Greco-Roman world to ancient China and ancient Japan, involves the widespread use of sponges and sticks as well as water and paper. The inclusion of anal cleansing facilities is often overlooked in the design of public or shared toilets in developing countries. In most cases, materials for anal cleansing are not made available within those facilities. Ensuring safe disposal of anal cleansing materials is often overlooked, which can lead to unhygienic debris inside or surrounding public toilets that contributes to the spread of diseases. Cultural preferences Water Water with soap cleansing is a reliable and hygienic way of removing fecal remnants. Muslim societies The use of water in Muslim countries is due in part to Islamic toilet etiquette which encourages washing after all instances of defecation. There are flexible provisions for when water is scarce: stones or papers can be used for cleansing after defecation instead. In Turkey, all Western-style toilets have a small nozzle on the centre rear of the toilet rim aiming at the anus. This nozzle is called taharet musluğu and it is controlled by a small tap placed within hand's reach near the toilet. It is used to wash the anus after wiping and drying with toilet paper. Squat toilets in Turkey do not have this kind of nozzle (a small bucket of water from a hand's reach tap or a bidet shower is used instead). Another alternative resembles a miniature shower and is known as a "health faucet", bidet shower, or "bum gun". It is commonly found to the right of the toilet where it is easy to reach. These are commonly used in the Muslim world. In the Indian subcontinent, a lota vessel is often used to cleanse with water, though the shower or nozzle is common among new toilets. Christian societies The use of water in many Christian countries is due in part to the biblical toilet etiquette which encourages washing after all instances of defecation. The bidet is common in predominantly Catholic countries where water is considered essential for anal cleansing. Some people in Europe and the Americas use bidets for anal cleansing with water. Bidets are common bathroom fixtures in many Western and Southern European countries and many South American countries, while bidet showers are more common in Finland and Greece. The availability of bidets varies widely within this group of countries. Furthermore, even where bidets exist, they may have other uses than for anal washing. In Italy, the installation of bidets in every household and hotel became mandatory by law on July 5, 1975. East Asia The first "paperless" toilet seat was invented in Japan in 1980. A spray toilet seat, commonly known by Toto's trademark Washlet, is typically a combination of seat warmer, bidet and drier, controlled by an electronic panel or remote control next to the toilet seat. A nozzle placed at rear of the toilet bowl aims a water jet to the anus and serves the purpose of cleaning. Many models have a separate "bidet" function aimed towards the front for vaginal cleansing. The spray toilet seat is common only in Western-style toilets, and is not incorporated in traditional style squat toilets. 
Some modern Japanese bidet toilets, especially in hotels and public areas, are labeled with pictograms to avoid language problems, and most newer models have a sensor that will refuse to activate the bidet unless someone is sitting on the toilet. Southeast Asia In Southeast Asian countries such as Indonesia, the Philippines, Thailand, Brunei, Malaysia, and East Timor, house bathrooms usually have a medium size wide plastic dipper (called in Indonesia, in the Philippines, () in Thai) or large cup, which is also used in bathing. In Thailand, the "bum gun" is ubiquitous. Some health faucets are metal sets attached to the bowl of the water closet, with the opening pointed at the anus. Toilets in public establishments mainly provide toilet paper for free or dispensed, though the dipper (often a cut up plastic bottle or small jug) is occasionally encountered in some establishments. Owing to its ethnic diversity, restrooms in Malaysia often feature a combination of anal cleansing methods where most public restrooms in cities offer toilet paper as well as a built in bidet or a small hand-held bidet shower (health faucets) connected to the plumbing in the absence of a built-in bidet. In Vietnam, people often use a bidet shower. It is usually available both at general households and public places. Toilet paper Western world and Sub-Saharan Africa In some cultures—such as many Western countries—cleaning after defecation is generally done with toilet paper only, until the person can bathe or shower. Toilet paper is considered a very important household commodity in Western culture, as illustrated by the panic buying of toilet paper in many Western countries during the COVID-19 pandemic. In some parts of the world, especially before toilet paper was available or affordable, the use of newspaper, telephone directory pages, or other paper products was common. In North America, the widely distributed Sears Roebuck catalog was also a popular choice until it began to be printed on glossy paper (at which point some people wrote to the company to complain). With flush toilets, using newspaper as toilet paper is likely to cause blockages. This practice continues today in parts of Africa; while rolls of toilet paper are readily available, they can be fairly expensive, prompting poorer members of the community to use newspapers. People suffering from hemorrhoids may find it more difficult to keep the anal area clean using only toilet paper and may prefer washing with water as well. Although wiping from front to back minimizes the risk of contaminating the urethra, the directionality of wiping varies based on sex, personal preference, and culture. Some people wipe their anal region standing while others wipe theirs sitting. Other methods and materials Wet wipes and gel wipes When cleaning babies' buttocks during diaper changes wet wipes are often used, in combination with water if available. As wet wipes are produced from plastic textiles made of polyester or polypropylene, they are notoriously bad for sewage systems as they do not decompose, although the wet wipe industry maintains they are biodegradable but not "flushable". A product of the 21st century, special foams, sprays and gels can be combined with dry toilet paper as alternatives to wet wipes. A moisturizing gel can be applied to toilet paper for personal hygiene or to reduce skin irritation from diarrhea. This product is called gel wipe. Pre-wipes Novel pre-wipes and methods are disclosed for assisting in the cleaning of skin in the anal area. 
Pre-wipes contain an anti-adherent formulation and are wiped across the anal region before defecation, leaving a film of the formulation on the skin. This film reduces the amount of fecal material retained in the anal region after defecation, and therefore the amount of cleanup required, resulting in cleaner, healthier skin. Natural materials Stones, leaves, corn cobs and similar natural materials may also be used for anal cleansing.
Biology and health sciences
Hygiene and grooming: General
Health
7218230
https://en.wikipedia.org/wiki/Pontederia%20crassipes
Pontederia crassipes
Pontederia crassipes (formerly Eichhornia crassipes), commonly known as common water hyacinth, is an aquatic plant native to South America, naturalized throughout the world, and often invasive outside its native range. It is the sole species of the subgenus Oshunae within the genus Pontederia. Anecdotally, it is known as the "terror of Bengal" due to its invasive growth tendencies. Description Water hyacinth is a free-floating perennial aquatic plant (or hydrophyte) native to tropical and subtropical South America. With broad, thick, glossy, ovate leaves, water hyacinth may rise above the surface of the water as much as in height. The leaves are across on a stem, which is floating by means of buoyant bulb-like nodules at its base above the water surface. They have long, spongy, bulbous stalks. The feathery, freely hanging roots are purple-black. An erect stalk supports a single spike of 8–15 conspicuously attractive flowers, mostly lavender to pink in colour with six petals. When not in bloom, water hyacinth may be mistaken for frogbit (Limnobium spongia) or Amazon frogbit (Limnobium laevigatum). One of the fastest-growing plants known, water hyacinth reproduces primarily by way of runners or stolons, which eventually form daughter plants. Each plant additionally can produce thousands of seeds each year, and these seeds can remain viable for more than 28 years. Common water hyacinths are vigorous growers, and mats can double in size in one to two weeks. In terms of plant count rather than size, they are said to multiply by more than a hundredfold in number in a matter of 23 days. In their native range, the flowers are pollinated by long-tongued bees, and the plants can reproduce both sexually and clonally. The invasiveness of the hyacinth is related to its ability to clone itself, and large patches are likely to all be part of the same genetic form. Water hyacinth has three flower morphs and is termed "tristylous". The flower morphs are named for the length of their pistils: long (L), medium (M), and short (S). Tristylous populations are, however, limited to the native lowland South American range of water hyacinth; in the introduced range, the M-morph prevails, with the L-morph occurring occasionally and the S-morph is absent altogether. This geographical distribution of the floral morphs indicates that founder events have played a prominent role in the species' worldwide spread. Habitat and ecology Its habitat ranges from tropical desert to subtropical or warm, temperate desert to rainforest zones. The temperature tolerance of the water hyacinth is: Its minimum growth temperature is Its optimum growth temperature is Its maximum growth temperature is Its pH tolerance is estimated at 5.0–7.5. Leaves are killed by frost and plants do not tolerate water temperatures more than . Water hyacinths do not grow where the average salinity is greater than 15% that of sea water (around 5 g salt per kg). In brackish water, its leaves show epinasty and chlorosis, and eventually die. Rafts of harvested water hyacinth have been floated to the sea, which kills it. Azotobacter chroococcum, a species of nitrogen-fixing bacteria, is probably concentrated around the bases of the petioles, but the bacteria do not fix nitrogen unless the plant is suffering extreme nitrogen deficiency. Fresh plants contain prickly crystals. This plant is reported to contain hydrogen cyanide, alkaloids, and triterpenoids, and may induce itching. 
Plants sprayed with 2,4-dichlorophenoxyacetic acid (2,4-D) may accumulate lethal doses of nitrates and other harmful elements in polluted environments. Invasive species Water hyacinth grows and reproduces quickly, so it can cover large portions of ponds and lakes. It can easily coexist with other invasive plants and native plants in an area. Particularly vulnerable are bodies of water that have already been affected by human activities, such as artificial reservoirs or eutrophied lakes that receive large amounts of nutrients. It outcompetes native aquatic plants, both floating and submerged. In 2011, Wu Fuqin et al. tracked the results of Yunnan Dianchi Lake and also showed that water hyacinth could affect the photosynthesis of phytoplankton, submerged plants, and algae by water environment quality and inhibit their growth. The decay process depletes dissolved oxygen in the water, often killing fish. Water hyacinth can absorb a large amount of harmful heavy metals and other substances. After death, it rots and sinks to the bottom of the water, causing secondary pollution to the water body, destroying the natural water quality, and may even affect the quality of residents' drinking water in severe cases. Water where water hyacinth grows heavily is often a breeding place for disease vectors (e.g. mosquitoes and snails) and harmful pathogens, posing a potential threat to the health of local residents. It is very critical to monitor areas quickly that are infested in order to efficiently reduce or control the growth of these species. On the other hand, water hyacinth can also provide a food source for goldfish, keep water clean and help to provide oxygen. The invasion of water hyacinth also has socioeconomic consequences. Since water hyacinth is composed of up to 95% water, its evapotranspiration rate is high. As such, small lakes that have been covered with the species can dry out and leave communities without adequate water or food supply. In some areas, dense mats of water hyacinth prevent the use of a waterway, leading to the loss of transportation (both human and cargo), as well as a loss of fishing possibilities. Large sums of money are allocated to the removal of water hyacinth from the water bodies as well as figuring out how to destroy the remains harvested. Harvesting water hyacinth mechanically requires considerable effort. A million tons of fresh biomass would require 75 trucks with a capacity of 40 m3, per day, for 365 days to get rid of it. The water hyacinth would then be transferred to a dumping site and allowed to decompose, which releases CO2, methane, and nitrogen oxides. Water hyacinth has been widely introduced in North America, Europe, Asia, Australia, Africa, and New Zealand. In many areas, it has become an important and pernicious invasive species. In New Zealand, it is listed on the National Pest Plant Accord, which prevents it from being propagated, distributed, or sold. In large water areas such as Louisiana, the Kerala Backwaters in India, Tonlé Sap in Cambodia, and Lake Victoria, it has become a serious pest. The common water hyacinth has become an invasive plant species on Lake Victoria in Africa after it was introduced into the area in the 1980s. A 1.22 Gb/8 chromosome reference genome was assembled to study nuclear and chloroplast genomes between 10 water hyacinth lines from 3 continents. Results indicate the spread of a limited genotype of water hyacinth from South America, where it has the highest genetic diversity. 
The paper proposes the spread potentially originating from ships travelling from Itajaí Port on the Brazilian East Coast. However, genetic studies on samples from Bangladesh and Indonesia demonstrate different genotypes, potentially implicating multiple introduction vectors. Further, the genomic study also revealed the adaptation in four key pathways; plant-pathogen interaction, plant-hormone signal transduction, photosynthesis and abiotic stress tolerance, which provide water hyacinth to expand its niche and compete with other native flora United States Introduction into the U.S. Various accounts are given as to how the water hyacinth was introduced to the United States. The claim that the water hyacinth was introduced to the U.S. in 1884 at the World's Fair in New Orleans, also known as the World Cotton Centennial, has been characterized as the "first authentic account", as well as "local legend". At some point, a legend arose that the plants had been given away as a gift by a Japanese delegation at the fair. This claim is absent in a pertinent article published in a military engineer's trade journal dating to 1940, but appears in a piece penned in 1941 by the director of the wildlife and fisheries division at the Louisiana Department of Conservation, where the author writes, "the Japanese government maintained a Japanese building" at the fair, and the "Japanese staff imported from Venezuela considerable numbers of water hyacinth, which were given away as souvenirs". The claim has been repeated by later writers, with various shifts in the details. Thus National Academy of Sciences fellow Noel D. Vietmeyer (1975) wrote that "Japanese entrepreneurs" introduced the plant into the U.S., and the plants had been "collected from the Orinoco River in Venezuela." This claim was echoed by a pair of NASA researchers (), who asserted that the souvenir plants were carelessly dumped in various waterways. Canadian biologist Spencer C. H. Barrett (2004) meanwhile favored the theory they were first cultivated in garden ponds, after which they multiplied and escaped to the environs. The account gains different details as told by children's story-teller Carole Marsh (1992), who says "Japan gave away water hyacinth seeds" during the exposition, and another Southern raconteur, Gaspar J. "Buddy" Stall (1998), assured his readership that the Japanese gave each family a package of those seeds. One paper has also inquired into the role which catalog sales of seeds and plants may have played in the dissemination of invasive plants. P. crassipes was found to have been offered in the 1884 issue of Bordentown, New Jersey–based Edmund D. Sturtevant's Catalogue of Rare Water Lilies and Other Choice Aquatic Plants, and of Germany has offered the plant since 1864 (when the firm was founded). By 1895, it was offered by seed purveyors in the states of New Jersey, New York, California, and Florida. Harper's Weekly magazine (1895) printed an anecdotal account stating that a certain man from New Orleans collected and brought home water hyacinths from Colombia, around 1892, and the plant proliferated in a matter of 2 years. Infestation and control in the Southeast As the hyacinths multiply into mats, they eliminate the presence of fish, and choke waterways for boating and shipping. This effect was well underway in the state of Louisiana by the turn of the 20th century. The plant invaded Florida in 1890, and an estimated 50 kg/m2 of the plant mass choked Florida's waterways. The clogging of the St. 
Johns River was posing a serious threat, and in 1897 the government dispatched a task force of the United States Army Corps of Engineers to solve the water hyacinth problem plaguing Gulf states such as Florida and Louisiana. Thus, in the early 20th century, the U.S. War Department (i.e., the Army Corps of Engineers) tested various means of eradicating the plants, including the jet-streaming of steam and hot water, application of various strong acids, and application of petroleum followed by incineration. Spraying with saturated salt solution (but not dilute solutions) effectively killed the plants; unfortunately this was considered prohibitively expensive, and the engineers selected Harvesta brand herbicide, whose active ingredient was arsenic acid, as the optimal cost-effective tool for eradication. This herbicide was used until 1905, when it was substituted with a different, white arsenic–based compound. An engineer charged with the spraying did not think the poison to be a matter of concern, stating that the crew of the spraying boat would routinely catch fish from their working areas and consume them. However, spraying had little hope of completely eradicating the water hyacinth, due to the vastness of escaped colonies and the inaccessibility of some of the infested areas, and the engineer suggested that some biological means of control may be needed. In 1910, a bold solution was put forth by the New Foods Society. Their plan was to import and release hippopotamus from Africa into the rivers and bayous of Louisiana. The hippopotamus would then eat the water hyacinth and also produce meat to solve another serious problem at the time, the American meat crisis. Known as the American Hippo Bill, H.R. 23621 was introduced by Louisiana Congressman Robert Broussard and debated by the Agricultural Committee of the U.S. House of Representatives. The chief collaborators in the New Foods Society and proponents of Broussard's bill were Major Frederick Russell Burnham, the celebrated American Scout, and Captain Fritz Duquesne, a South African Scout who later became a notorious spy for Germany. Presenting before the Agricultural Committee, Burnham made the point that none of the animals that Americans ate (chickens, pigs, cattle, sheep, or lambs) were native to the U.S. and had all been imported by European settlers centuries before, so Americans should not hesitate to introduce hippopotamus and other large animals into the American diet. Duquesne, who was born and raised in South Africa, further noted that European settlers on that continent commonly included hippopotamus, ostrich, antelope, and other African wildlife in their diets and suffered no ill effects. The American Hippo Bill fell one vote short of passage. Water hyacinths have also been introduced into waters inhabited by manatees in Florida, for the purpose of bioremediation (cf. §Phytoremediation below) of the waters that have become contaminated and fallen victim to algal blooming. The manatees include the water hyacinth in their diet, but it may not be the food of first choice for them. Legality of sale and shipment in the United States In 1956, E. crassipes was banned for sale or shipment in the United States, subject to a fine and/or imprisonment. This law was repealed by HR133 [116th Congress (2019–2020)] on 12/27/2020. Africa The water hyacinth may have been introduced into Egypt in the late 18th to early 19th century during Muhammad Ali of Egypt's era, but was not recognized as an invasive threat until 1879. 
The invasion into Egypt is dated between 1879 and 1892 by Brij Gopal. The plant (Afrikaans: waterhiasint) arguably invaded South Africa in 1910, although earlier dates have been claimed. A waterbody extensively threatened by water hyacinth is the Hartebeespoort Dam near Brits in North West Province. The plant was introduced by Belgian colonists to Rwanda to beautify their holdings. It then advanced by natural means to Lake Victoria, where it was first sighted in 1988. There, without any natural enemies, it has become an ecological plague, suffocating the lake, diminishing the fish reservoir, and hurting the local economies. It impedes access to Kisumu and other harbors. The water hyacinth has also appeared in Ethiopia, where it was first reported in 1965 at the Koka Reservoir and in the Awash River, where the Ethiopian Electric Light and Power Authority has managed to bring it under moderate control at considerable cost in human labor. Other infestations in Ethiopia include many bodies of water in the Gambela Region, the Blue Nile from Lake Tana into Sudan, and Lake Ellen near Alem Tena. By 2018, it had become a serious problem on Lake Tana in Ethiopia. The water hyacinth is also present on the Shire River in the Liwonde National Park in Malawi. Asia The water hyacinth was introduced to Bengal, India, because of its ornamental flowers and leaf shapes, but became an invasive weed, draining oxygen from water bodies and devastating fish stocks. The water hyacinth was referred to as the "(beautiful) blue devil" in Bengal and "Bengal terror" elsewhere in India; it was called "German weed" (Bengali: Germani pana) in Bangladesh out of a belief that the German Kaiser's submarine mission was involved in introducing it at the outbreak of World War I. Concerted efforts were made to eradicate water hyacinths, which affected navigability in Bengal's rivers. The Bengal Water Hyacinth Act, 1936 prohibited the cultivation of the plants. By 1947, the problem was resolved, and navigability was restored to the rivers, although the plants still exist in wetlands. Water hyacinths were called "Japanese trouble" in Sri Lanka because there was a rumor that the British had planted them to entice Japanese aircraft to land on the insecure pads. The plant entered Japan in 1884 for horticultural appreciation, according to conventional wisdom, but a researcher devoted to the study of the plant has discovered that ukiyo-e artist Utagawa Kunisada (or Utagawa Toyokuni III, d. 1865) produced a wood-block print featuring the water hyacinth, goldfish, and beautiful women, dated to 1855. The plant is floated on the water surface of filled glass fishbowls or glazed earthenware waterlily pots (hibachi pots serving as a substitute). In the 1930s, water hyacinth was introduced into China as a feed, ornamental, and sewage-control plant, and it was widely planted in the south as an animal feed. Beginning in the 1980s, with the rapid development of China's inland industry, the eutrophication of inland waters intensified. With the help of its efficient asexual reproduction and environmental adaptation mechanisms, water hyacinth began to spread widely in river basins. The hyacinths have blocked rivers and hindered water traffic. For example, many waterways in Zhejiang and other provinces have been blocked by the rapidly growing water hyacinth. 
In addition, large masses of water hyacinth floating on the surface block sunlight from entering the water, and their decay consumes dissolved oxygen, pollutes the water, and can kill other aquatic plants. The outbreak of water hyacinth has seriously affected the biodiversity of the local ecosystem and threatened the production, life, and health of community residents. Europe In 2016, the European Union banned all sales of the water hyacinth in the EU. The species features on the list of Invasive Alien Species of Union Concern. This means that not only sales but also importation, cultivation, and intentional release into the environment are forbidden throughout the European Union. Oceania In Papua New Guinea, water hyacinth blocks sunlight to other aquatic organisms, creates habitat for malaria-carrying mosquitoes, clogs waterways to the point that boats cannot get through, and reduces the quality of water for purposes such as cooking, washing, and drinking. People have lost income or even died due to being unable to travel to get food or medical care, or due to diseases from contaminated water or mosquitoes. Control Control depends on the specific conditions of each affected location, such as the extent of water hyacinth infestation, regional climate, and proximity to humans and wildlife. Chemical control Chemical control is the least used of the three methods of controlling water hyacinth, because of its long-term effects on the environment and human health. The use of herbicides requires strict approval from governmental protection agencies and skilled technicians to handle and spray the affected areas. Chemical herbicides are used only in cases of severe infestation of water hyacinth. However, herbicides are most successful when applied to smaller areas of infestation, because in larger areas more mats of water hyacinth are likely to survive the herbicides and can fragment to propagate over a large area again. In addition, chemical control is more cost-effective and less laborious than mechanical control, yet it can harm the environment: herbicides can penetrate the groundwater system and affect not only the hydrological cycle within an ecosystem but also the local water supply and human health. Also of note, herbicides are not strictly selective of water hyacinths; keystone species and vital organisms such as microalgae can perish from the toxins, disrupting fragile food webs. The chemical regulation of water hyacinths can be done using common herbicides such as 2,4-D, glyphosate, and diquat. The herbicides are sprayed on the water hyacinth leaves and lead to direct changes in the physiology of the plant. The herbicide 2,4-D kills water hyacinth by inhibiting the growth of new tissue and triggering cellular apoptosis. Almost a two-week period may be needed before mats of water hyacinth are destroyed with 2,4-D. Between of water hyacinth and alligator weed are treated annually in Louisiana. The herbicide diquat is a liquid bromide salt that can rapidly penetrate the leaves of the water hyacinth and lead to immediate inactivity of plant cells and cellular processes. The herbicide glyphosate has a lower toxicity than the other herbicides, so it takes longer for the water hyacinth mats to be destroyed (about three weeks). The symptoms include steady wilting of the plants and a yellow discoloration of the plant leaves that eventually leads to plant decay. 
Physical control Physical control is performed by land-based machines, such as bucket cranes, draglines, or booms, or by water-based machinery such as aquatic weed harvesters, dredges, or vegetation shredders. Mechanical removal is seen as the best short-term solution to the proliferation of the plant. A project on Lake Victoria in Africa used various pieces of equipment to chop, collect, and dispose of of water hyacinth in a 12-month period. It is, however, costly and requires the use of both land and water vehicles; moreover, it took many years for the lake to fall into poor condition, and reclamation will be a continual process. Mechanical removal can have an annual cost from $6 million to $20 million and is only considered a short-term solution to a long-term problem. Another disadvantage of mechanical harvesting is that it can lead to further fragmentation of water hyacinths when the plants are broken up by the spinning cutters of the plant-harvesting machinery. The fragments of water hyacinth that are left behind in the water can easily reproduce asexually and cause another infestation. Transportation and disposal of the harvested water hyacinth is a challenge, though, because the vegetation is so heavy. The harvested water hyacinth can pose a health risk because of the plant's propensity for absorbing contaminants, and it is considered toxic to humans. Furthermore, the practice of mechanical harvesting is not effective in large-scale infestations, because this aquatic invasive species grows much more rapidly than it can be eliminated. Only of water hyacinth can be mechanically harvested daily because of the vast amounts in the environment. Therefore, the process is very time-intensive. Biological control As chemical and mechanical removals are often too expensive, polluting, and ineffective, researchers have turned to biological control agents to deal with water hyacinth. The effort began in the 1970s, when USDA researchers released into the United States three insect species known to feed on water hyacinth: the weevils Neochetina bruchi and N. eichhorniae, and the water hyacinth borer moth Sameodes albiguttalis. The weevil species were introduced into the Gulf Coast states, such as Louisiana, Texas, and Florida, where thousands of acres were infested by water hyacinth. A decade later, water hyacinth mats were found to have decreased by as much as 33%, but because the weevils' lifecycle is 90 days, biological predation is of limited use for rapidly suppressing water hyacinth growth. These organisms regulate water hyacinth by limiting its size, vegetative propagation, and seed production. They also carry microorganisms that can be pathological to the water hyacinth. The weevils eat stem tissue, which causes the plant to lose buoyancy and eventually sink. Although meeting with limited success, the weevils have since been released in many other countries. However, the most effective control method remains the control of excessive nutrients and prevention of the spread of this species. In May 2010, the USDA's Agricultural Research Service released Megamelus scutellaris as an additional biological control insect for the invasive water hyacinth species. M. scutellaris is a small planthopper insect native to Argentina. Researchers have been studying the effects of the biological control agent in extensive host-range studies since 2006 and concluded that the insect is highly host-specific and will not pose a threat to any plant population other than the targeted water hyacinth. 
Researchers also hope that this biological control will be more resilient than existing biological controls and the herbicides that are already in place to combat the invasive water hyacinth. Another insect being considered as a biological control agent is the semiaquatic grasshopper Cornops aquaticum. This insect is specific to the water hyacinth and its family, and besides feeding on the plant, it introduces a secondary pathogenic infestation. This grasshopper has been introduced into South Africa in controlled trials. The Rhodes University Centre for Biological Control is rearing M. scutellaris and the water hyacinth weevils N. eichhorniae and N. bruchi en masse for biological control at dams in South Africa, including the Hartbeespoort Dam. The moth Niphograpta albiguttalis (Warren) (Lepidoptera: Pyralidae) has been introduced to North America, Africa, and Australia. Larvae of this moth bore in the stems and flower buds of water hyacinth. Uses Since water hyacinth is so prolific, harvesting it for various uses also serves as a means of environmental control. Bioenergy Due to its extremely high rate of development, Pontederia crassipes is an excellent source of biomass. of standing crop thus produces more than of biogas (70% , 30% ). According to Curtis and Duke, of dry matter can yield of biogas, giving a heating value of compared to pure methane (895 Btu/ft3) Wolverton and McDonald report approximately methane, indicating biomass requirements of to attain the yield projected by the National Academy of Sciences (Washington). Ueki and Kobayashi mention more than per year. Reddy and Tucker found an experimental maximum of more than per day. Bengali farmers collect and pile up these plants to dry at the onset of the cold season; they then use the dry water hyacinths as fuel. The ashes are used as fertilizer. In India, of dried water hyacinth yields about 50 liters ethanol and 200 kg residual fiber (7,700 Btu). Bacterial fermentation of yields 26,500 ft3 gas (600 Btu) with 51.6% methane (), 25.4% hydrogen (), 22.1% carbon dioxide (), and 1.2% oxygen (). Gasification of dry matter by air and steam at high temperatures () gives about 40,000 ft3 (1,100 m3) natural gas (143 Btu/ft3) containing 16.6% , 4.8% , 21.7% (carbon monoxide), 4.1% , and 52.8% (nitrogen). The high moisture content of water hyacinth, adding so much to handling costs, tends to limit commercial ventures. A continuous, hydraulic production system could be designed, which would provide a better utilization of capital investments than in conventional agriculture, which is essentially a batch operation. The labor involved in harvesting water hyacinth can be greatly reduced by locating collection sites and processors on impoundments that take advantage of prevailing winds. Wastewater treatment systems could also favorably be added to this operation. The harvested biomass would then be converted to ethanol, biogas, hydrogen, gaseous nitrogen, and/or fertilizer. The byproduct water can be used to irrigate nearby cropland. Phytoremediation, waste water treatment Water hyacinth removes arsenic from arsenic-contaminated drinking water. It may be a useful tool in removing arsenic from tube well water in Bangladesh. Water hyacinth is also observed to enhance nitrification in wastewater treatment cells of living technology. Their root zones are superb micro-sites for bacterial communities. Water hyacinth is a common fodder plant in the third world especially Africa though excessive use can be toxic. 
It is high in protein (nitrogen) and trace minerals, and the feces of goats fed on it are a good source of fertilizer as well. Water hyacinth is reported to remove about 60–80% of nitrogen and about 69% of potassium from water. The roots of water hyacinth were found to remove particulate matter and nitrogen in a natural shallow eutrophicated wetland. The plant is extremely tolerant of, and has a high capacity for, the uptake of heavy metals, including cadmium, chromium, cobalt, nickel, lead, and mercury, which could make it suitable for the biocleaning of industrial wastewater. The roots of Pontederia crassipes naturally absorb some organic compounds believed to be carcinogenic, at concentrations 10,000 times those in the surrounding water. Water hyacinths can be cultivated for waste water treatment (especially dairy waste water). In addition to heavy metals, Pontederia crassipes can also remove other toxins, such as cyanide, which is environmentally beneficial in areas that have endured gold-mining operations. Water hyacinth can take in and degrade ethion, a phosphorus pesticide. Agriculture In places where water hyacinth is invasive, overabundant, and in need of clearing away, these traits make it free for the harvesting and very useful as a source of organic matter for composting in organic farming. It is used internationally for fertilizer and as animal feed and silage for cattle, sheep, geese, pigs, and other livestock. In Bengal, India, the kachuri-pana has been used primarily for fertilizer, compost, or mulch, and secondarily as fodder for livestock and fish. In Bangladesh, farmers in the southwestern region cultivate vegetables on "floating gardens", usually built on a bamboo frame, with a dried mass of water hyacinth covered in soil as bedding. As a large portion of cultivable land goes under water for months during the monsoon in this low-lying region, farmers have used this method for many decades. This method of agriculture is known by many names, including dhap chash and vasoman chash. In Kenya, East Africa, it has been used experimentally as organic fertilizer, although there is controversy stemming from the high alkaline pH value of the fertilizer. Other uses In various places in the world, the plant is used for making furniture, handbags, baskets, rope, and household goods/interior products (lampshades, picture frames) by businesses launched by NGOs and entrepreneurs. Woven products American-Nigerian Achenyo Idachaba has won an award for showing how this plant can be exploited for profit as woven products in Nigeria. Paper Though a study found water hyacinth to be of very limited use for paper production, it is nonetheless being used for paper production on a small scale. Goswami pointed out in his article that water hyacinth has the potential to make tough and strong paper. He found that adding water hyacinth pulp to the raw bamboo pulp used for anti-grease paper can increase the physical strength of the paper. Edibility The plant is used as a carotene-rich table vegetable in Taiwan. The Javanese sometimes cook and eat the green parts and inflorescence. The Vietnamese also cook the plant and sometimes add its young leaves and flowers to their salads. Potential as bioherbicidal agent Water hyacinth leaf extract has been shown to exhibit phytotoxicity against another invasive weed, Mimosa pigra. The extract inhibited the germination of M. pigra seeds in addition to suppressing the root growth of the seedlings. 
Biochemical data suggested that the inhibitory effects may be mediated by enhanced hydrogen peroxide production, inhibition of soluble peroxidase activity, and stimulation of cell wall-bound peroxidase activity in the root tissues of M. pigra.
Biology and health sciences
Monocots
null
7221088
https://en.wikipedia.org/wiki/Air%20conditioning
Air conditioning
Air conditioning, often abbreviated as A/C (US) or air con (UK), is the process of removing heat from an enclosed space to achieve a more comfortable interior temperature and in some cases also controlling the humidity of internal air. Air conditioning can be achieved using a mechanical 'air conditioner' or by other methods, including passive cooling and ventilative cooling. Air conditioning is a member of a family of systems and techniques that provide heating, ventilation, and air conditioning (HVAC). Heat pumps are similar in many ways to air conditioners, but use a reversing valve to allow them both to heat and to cool an enclosed space. Air conditioners, which typically use vapor-compression refrigeration, range in size from small units used in vehicles or single rooms to massive units that can cool large buildings. Air source heat pumps, which can be used for heating as well as cooling, are becoming increasingly common in cooler climates. Air conditioners can reduce mortality rates due to higher temperature. According to the International Energy Agency (IEA) 1.6 billion air conditioning units were used globally in 2016. The United Nations called for the technology to be made more sustainable to mitigate climate change and for the use of alternatives, like passive cooling, evaporative cooling, selective shading, windcatchers, and better thermal insulation. History Air conditioning dates back to prehistory. Double-walled living quarters, with a gap between the two walls to encourage air flow, were found in the ancient city of Hamoukar, in modern Syria. Ancient Egyptian buildings also used a wide variety of passive air-conditioning techniques. These became widespread from the Iberian Peninsula through North Africa, the Middle East, and Northern India. Passive techniques remained widespread until the 20th century when they fell out of fashion and were replaced by powered air conditioning. Using information from engineering studies of traditional buildings, passive techniques are being revived and modified for 21st-century architectural designs. Air conditioners allow the building's indoor environment to remain relatively constant, largely independent of changes in external weather conditions and internal heat loads. They also enable deep plan buildings to be created and have allowed people to live comfortably in hotter parts of the world. Development Preceding discoveries In 1558, Giambattista della Porta described a method of chilling ice to temperatures far below its freezing point by mixing it with potassium nitrate (then called "nitre") in his popular science book Natural Magic. In 1620, Cornelis Drebbel demonstrated "Turning Summer into Winter" for James I of England, chilling part of the Great Hall of Westminster Abbey with an apparatus of troughs and vats. Drebbel's contemporary Francis Bacon, like della Porta a believer in science communication, may not have been present at the demonstration, but in a book published later the same year, he described it as "experiment of artificial freezing" and said that "Nitre (or rather its spirit) is very cold, and hence nitre or salt when added to snow or ice intensifies the cold of the latter, the nitre by adding to its cold, but the salt by supplying activity to the cold of the snow." In 1758, Benjamin Franklin and John Hadley, a chemistry professor at the University of Cambridge, conducted experiments applying the principle of evaporation as a means to cool an object rapidly. 
Franklin and Hadley confirmed that the evaporation of highly volatile liquids (such as alcohol and ether) could be used to drive down the temperature of an object past the freezing point of water. They experimented with the bulb of a mercury-in-glass thermometer as their object. They used a bellows to speed up the evaporation. They lowered the temperature of the thermometer bulb down to while the ambient temperature was . Franklin noted that soon after they passed the freezing point of water , a thin film of ice formed on the surface of the thermometer's bulb and that the ice mass was about thick when they stopped the experiment upon reaching . Franklin concluded: "From this experiment, one may see the possibility of freezing a man to death on a warm summer's day." The 19th century included many developments in compression technology. In 1820, English scientist and inventor Michael Faraday discovered that compressing and liquefying ammonia could chill air when the liquefied ammonia was allowed to evaporate. In 1842, Florida physician John Gorrie used compressor technology to create ice, which he used to cool air for his patients in his hospital in Apalachicola, Florida. He hoped to eventually use his ice-making machine to regulate the temperature of buildings. He envisioned centralized air conditioning that could cool entire cities. Gorrie was granted a patent in 1851, but following the death of his main backer, he was not able to realize his invention. In 1851, James Harrison created the first mechanical ice-making machine in Geelong, Australia, and was granted a patent for an ether vapor-compression refrigeration system in 1855 that produced three tons of ice per day. In 1860, Harrison established a second ice company. He later entered the debate over competing against the American advantage of ice-refrigerated beef sales to the United Kingdom. First devices Electricity made the development of effective units possible. In 1901, American inventor Willis H. Carrier built what is considered the first modern electrical air conditioning unit. In 1902, he installed his first air-conditioning system, in the Sackett-Wilhelms Lithographing & Publishing Company in Brooklyn, New York. His invention controlled both the temperature and humidity, which helped maintain consistent paper dimensions and ink alignment at the printing plant. Later, together with six other employees, Carrier formed The Carrier Air Conditioning Company of America, a business that in 2020 employed 53,000 people and was valued at $18.6 billion. In 1906, Stuart W. Cramer of Charlotte, North Carolina, was exploring ways to add moisture to the air in his textile mill. Cramer coined the term "air conditioning" in a patent claim which he filed that year, where he suggested that air conditioning was analogous to "water conditioning", then a well-known process for making textiles easier to process. He combined moisture with ventilation to "condition" and change the air in the factories; thus, controlling the humidity that is necessary in textile plants. Willis Carrier adopted the term and incorporated it into the name of his company. Domestic air conditioning soon took off. In 1914, the first domestic air conditioning was installed in Minneapolis in the home of Charles Gilbert Gates. It is, however, possible that the considerable device (c. ) was never used, as the house remained uninhabited (Gates had already died in October 1913.) In 1931, H.H. Schultz and J.Q. 
Sherman developed what would become the most common type of individual room air conditioner: one designed to sit on a window ledge. The units went on sale in 1932 at US$10,000 to $50,000 (the equivalent of $ to $ in .) A year later, the first air conditioning systems for cars were offered for sale. Chrysler Motors introduced the first practical semi-portable air conditioning unit in 1935, and Packard became the first automobile manufacturer to offer an air conditioning unit in its cars in 1939. Further development Innovations in the latter half of the 20th century allowed more ubiquitous air conditioner use. In 1945, Robert Sherman of Lynn, Massachusetts, invented a portable, in-window air conditioner that cooled, heated, humidified, dehumidified, and filtered the air. The first inverter air conditioners were released in 1980–1981. In 1954, Ned Cole, a 1939 architecture graduate from the University of Texas at Austin, developed the first experimental "suburb" with inbuilt air conditioning in each house. 22 homes were developed on a flat, treeless tract in northwest Austin, Texas, and the community was christened the 'Austin Air-Conditioned Village.' The residents were subjected to a year-long study of the effects of air conditioning led by the nation's premier air conditioning companies, builders, and social scientists. In addition, researchers from UT's Health Service and Psychology Department studied the effects on the "artificially cooled humans." One of the more amusing discoveries was that each family reported being troubled with scorpions, the leading theory being that scorpions sought cool, shady places. Other reported changes in lifestyle were that mothers baked more, families ate heavier foods, and they were more apt to choose hot drinks. In warmer areas, air conditioner adoption tends to increase once annual household income exceeds around $10,000. Global GDP growth explains around 85% of increased air conditioning adoption by 2050, while the remaining 15% can be explained by climate change. As of 2016, an estimated 1.6 billion air conditioning units were used worldwide, with over half of them in China and the United States, and a total cooling capacity of 11,675 gigawatts. The International Energy Agency predicted in 2018 that the number of air conditioning units would grow to around 4 billion units by 2050 and that the total cooling capacity would grow to around 23,000 GW, with the biggest increases in India and China. Between 1995 and 2004, the proportion of urban households in China with air conditioners increased from 8% to 70%. As of 2015, nearly 100 million homes, or about 87% of US households, had air conditioning systems. In 2019, it was estimated that 90% of new single-family homes constructed in the US included air conditioning (ranging from 99% in the South to 62% in the West). 
Air conditioning equipment will reduce the absolute humidity of the air processed by the system if the surface of the evaporator coil is significantly cooler than the dew point of the surrounding air. An air conditioner designed for an occupied space will typically achieve a 30% to 60% relative humidity in the occupied space. Most modern air-conditioning systems feature a dehumidification cycle during which the compressor runs. At the same time, the fan is slowed to reduce the evaporator temperature and condense more water. A dehumidifier uses the same refrigeration cycle but incorporates both the evaporator and the condenser into the same air path; the air first passes over the evaporator coil, where it is cooled and dehumidified before passing over the condenser coil, where it is warmed again before it is released back into the room. Free cooling can sometimes be selected when the external air is cooler than the internal air. Therefore, the compressor does not need to be used, resulting in high cooling efficiencies for these times. This may also be combined with seasonal thermal energy storage. Heating Some air conditioning systems can reverse the refrigeration cycle and act as an air source heat pump, thus heating instead of cooling the indoor environment. They are also commonly referred to as "reverse cycle air conditioners". The heat pump is significantly more energy-efficient than electric resistance heating, because it moves energy from air or groundwater to the heated space and the heat from purchased electrical energy. When the heat pump is in heating mode, the indoor evaporator coil switches roles and becomes the condenser coil, producing heat. The outdoor condenser unit also switches roles to serve as the evaporator and discharges cold air (colder than the ambient outdoor air). Most air source heat pumps become less efficient in outdoor temperatures lower than 4 °C or 40 °F. This is partly because ice forms on the outdoor unit's heat exchanger coil, which blocks air flow over the coil. To compensate for this, the heat pump system must temporarily switch back into the regular air conditioning mode to switch the outdoor evaporator coil back to the condenser coil, to heat up and defrost. Therefore, some heat pump systems will have electric resistance heating in the indoor air path that is activated only in this mode to compensate for the temporary indoor air cooling, which would otherwise be uncomfortable in the winter. Newer models have improved cold-weather performance, with efficient heating capacity down to . However, there is always a chance that the humidity that condenses on the heat exchanger of the outdoor unit could freeze, even in models that have improved cold-weather performance, requiring a defrosting cycle to be performed. The icing problem becomes much more severe with lower outdoor temperatures, so heat pumps are sometimes installed in tandem with a more conventional form of heating, such as an electrical heater, a natural gas, heating oil, or wood-burning fireplace or central heating, which is used instead of or in addition to the heat pump during harsher winter temperatures. In this case, the heat pump is used efficiently during milder temperatures, and the system is switched to the conventional heat source when the outdoor temperature is lower. Performance The coefficient of performance (COP) of an air conditioning system is a ratio of useful heating or cooling provided to the work required. Higher COPs equate to lower operating costs. 
The COP usually exceeds 1; however, the exact value is highly dependent on operating conditions, especially absolute temperature and relative temperature between sink and system, and is often graphed or averaged against expected conditions. Air conditioner equipment power in the U.S. is often described in terms of "tons of refrigeration", with each approximately equal to the cooling power of one short ton of ice melting in a 24-hour period. The value is equal to 12,000 BTU (IT) per hour, or 3,517 watts. Residential central air systems are usually from 1 to 5 tons (3.5 to 18 kW) in capacity (a worked conversion of these figures is sketched below). The efficiency of air conditioners is often rated by the seasonal energy efficiency ratio (SEER), which is defined by the Air Conditioning, Heating and Refrigeration Institute in its 2008 standard AHRI 210/240, Performance Rating of Unitary Air-Conditioning and Air-Source Heat Pump Equipment. A similar standard is the European seasonal energy efficiency ratio (ESEER). Efficiency is strongly affected by the humidity of the air to be cooled. Dehumidifying the air before attempting to cool it can reduce subsequent cooling costs by as much as 90 percent. Thus, reducing dehumidifying costs can materially affect overall air conditioning costs. Control system Wireless remote control This type of controller uses an infrared LED to relay commands from a remote control to the air conditioner. The output of the infrared LED (like that of any infrared remote) is invisible to the human eye because its wavelength is beyond the range of visible light (940 nm). This system is commonly used on mini-split air conditioners because it is simple and portable. Some window and ducted central air conditioners use it as well. Wired controller A wired controller, also called a "wired thermostat," is a device that controls an air conditioner by switching heating or cooling on or off. It uses different sensors to measure temperatures and actuate control operations. Mechanical thermostats commonly use bimetallic strips, converting a temperature change into mechanical displacement, to actuate control of the air conditioner. Electronic thermostats, instead, use a thermistor or other semiconductor sensor, processing temperature change as electronic signals to control the air conditioner. These controllers are usually used in hotel rooms because they are permanently installed into a wall and hard-wired directly into the air conditioner unit, eliminating the need for batteries. Types Typical capacities are classified in kilowatts as follows: very small: <1.5 kW; small: 1.5–3.5 kW; medium: 4.2–7.1 kW; large: 7.2–14 kW; very large: >14 kW. Mini-split and multi-split systems Ductless systems (often mini-split, though ducted mini-splits now exist) typically supply conditioned and heated air to a single room or a few rooms of a building, without ducts and in a decentralized manner. Multi-zone or multi-split systems are a common application of ductless systems and allow up to eight rooms (zones or locations) to be conditioned independently from each other, each with its own indoor unit, simultaneously served by a single outdoor unit. The first mini-split system was sold in 1961 by Toshiba in Japan, and the first wall-mounted mini-split air conditioner was sold in 1968 in Japan by Mitsubishi Electric, where small home sizes motivated their development. The Mitsubishi model was the first air conditioner with a cross-flow fan. In 1969, the first mini-split air conditioner was sold in the US. 
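As a worked illustration of the coefficient of performance and the "ton of refrigeration" figures given under Performance above, the sketch below converts 12,000 BTU per hour to watts and applies the COP ratio. The 3-ton system size falls within the residential range quoted in the text, but the COP value of 3.5 is an illustrative assumption, not a figure from the text.

```python
# Convert one "ton of refrigeration" (12,000 BTU/h) to watts and relate it to COP.
BTU_TO_JOULES = 1055.06      # 1 BTU (IT) in joules
SECONDS_PER_HOUR = 3600

def btu_per_hour_to_watts(btu_h: float) -> float:
    return btu_h * BTU_TO_JOULES / SECONDS_PER_HOUR

one_ton_w = btu_per_hour_to_watts(12_000)
print(f"1 ton of refrigeration = {one_ton_w:.0f} W")    # ~3517 W, matching the text

# COP = useful cooling (or heating) delivered / electrical work required.
# Illustrative only: a 3-ton unit running at an assumed COP of 3.5.
cooling_output_w = 3 * one_ton_w
assumed_cop = 3.5
electrical_input_w = cooling_output_w / assumed_cop
print(f"Electrical draw at COP 3.5: {electrical_input_w:.0f} W")  # ~3014 W
```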
Multi-zone ductless systems were invented by Daikin in 1973, and variable refrigerant flow systems (which can be thought of as larger multi-split systems) were also invented by Daikin in 1982. Both were first sold in Japan. Variable refrigerant flow systems when compared with central plant cooling from an air handler, eliminate the need for large cool air ducts, air handlers, and chillers; instead cool refrigerant is transported through much smaller pipes to the indoor units in the spaces to be conditioned, thus allowing for less space above dropped ceilings and a lower structural impact, while also allowing for more individual and independent temperature control of spaces. The outdoor and indoor units can be spread across the building. Variable refrigerant flow indoor units can also be turned off individually in unused spaces. The lower start-up power of VRF's DC inverter compressors and their inherent DC power requirements also allow VRF solar-powered heat pumps to be run using DC-providing solar panels. Ducted central systems Split-system central air conditioners consist of two heat exchangers, an outside unit (the condenser) from which heat is rejected to the environment and an internal heat exchanger (the evaporator, or Fan Coil Unit, FCU) with the piped refrigerant being circulated between the two. The FCU is then connected to the spaces to be cooled by ventilation ducts. Floor standing air conditioners are similar to this type of air conditioner but sit within spaces that need cooling. Central plant cooling Large central cooling plants may use intermediate coolant such as chilled water pumped into air handlers or fan coil units near or in the spaces to be cooled which then duct or deliver cold air into the spaces to be conditioned, rather than ducting cold air directly to these spaces from the plant, which is not done due to the low density and heat capacity of air, which would require impractically large ducts. The chilled water is cooled by chillers in the plant, which uses a refrigeration cycle to cool water, often transferring its heat to the atmosphere even in liquid-cooled chillers through the use of cooling towers. Chillers may be air- or liquid-cooled. Portable units A portable system has an indoor unit on wheels connected to an outdoor unit via flexible pipes, similar to a permanently fixed installed unit (such as a ductless split air conditioner). Hose systems, which can be monoblock or air-to-air, are vented to the outside via air ducts. The monoblock type collects the water in a bucket or tray and stops when full. The air-to-air type re-evaporates the water, discharges it through the ducted hose, and can run continuously. Many but not all portable units draw indoor air and expel it outdoors through a single duct, negatively impacting their overall cooling efficiency. Many portable air conditioners come with heat as well as a dehumidification function. Window unit and packaged terminal The packaged terminal air conditioner (PTAC), through-the-wall, and window air conditioners are similar. These units are installed on a window frame or on a wall opening. The unit usually has an internal partition separating its indoor and outdoor sides, which contain the unit's condenser and evaporator, respectively. PTAC systems may be adapted to provide heating in cold weather, either directly by using an electric strip, gas, or other heaters, or by reversing the refrigerant flow to heat the interior and draw heat from the exterior air, converting the air conditioner into a heat pump. 
They may be installed in a wall opening with the help of a special sleeve and a custom grille that is flush with the wall; window air conditioners can also be installed in a window, but without a custom grille. Packaged air conditioner Packaged air conditioners (also known as self-contained units) are central systems that integrate into a single housing all the components of a split central system, and deliver air, possibly through ducts, to the spaces to be cooled. Depending on their construction, they may be installed outdoors or indoors, including on roofs (rooftop units); they may draw the air to be conditioned from inside or outside a building and be water- or air-cooled. Often, outdoor units are air-cooled while indoor units are liquid-cooled using a cooling tower. Types of compressors Reciprocating This compressor consists of a crankcase, crankshaft, piston rod, piston, piston ring, cylinder head, and valves. Scroll This compressor uses two interleaving scrolls to compress the refrigerant. It consists of one fixed and one orbiting scroll. This type of compressor is more efficient because it has 70 percent fewer moving parts than a reciprocating compressor. Screw This compressor uses two very closely meshing spiral rotors to compress the gas. The gas enters at the suction side and moves through the threads as the screws rotate. The meshing rotors force the gas through the compressor, and the gas exits at the end of the screws. The working area is the inter-lobe volume between the male and female rotors. It is larger at the intake end, and decreases along the length of the rotors until the exhaust port. This change in volume is the compression. Capacity modulation technologies There are several ways to modulate the cooling capacity in refrigeration or air conditioning and heating systems. The most common in air conditioning are: on-off cycling, hot gas bypass, use or not of liquid injection, manifold configurations of multiple compressors, mechanical modulation (also called digital), and inverter technology. Hot gas bypass Hot gas bypass involves injecting a quantity of gas from the discharge side to the suction side. The compressor keeps operating at the same speed, but due to the bypass, the refrigerant mass flow circulating through the system is reduced, and thus so is the cooling capacity. This naturally causes the compressor to run uselessly during the periods when the bypass is operating. The turndown capacity varies between 0 and 100%. Manifold configurations Several compressors can be installed in the system to provide the peak cooling capacity. Each compressor can be switched on or off to stage the cooling capacity of the unit. The turndown capacity is 0/33/66/100% for a trio configuration and 0/50/100% for a tandem (see the sketch below). Mechanically modulated compressor This internal mechanical capacity modulation is based on a periodic compression process with a control valve: the two scroll sets move apart, stopping compression for a given period of time. This method varies refrigerant flow by changing the average time of compression, but not the actual speed of the motor. Despite an excellent turndown ratio (from 10 to 100% of the cooling capacity), mechanically modulated scrolls have high energy consumption, as the motor runs continuously. Variable-speed compressor This system uses a variable-frequency drive (also called an inverter) to control the speed of the compressor. The refrigerant flow rate is changed by the change in the speed of the compressor. 
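A minimal sketch of the capacity staging described above: the function simply enumerates the on/off steps of a manifold of equally sized compressors and contrasts them with the continuous range of an inverter drive. The 25% inverter minimum is an assumed value for illustration only.

```python
# Capacity steps for a manifold of equally sized compressors, each either on or off.
def staged_capacities(num_compressors: int) -> list[float]:
    return [100.0 * k / num_compressors for k in range(num_compressors + 1)]

print(staged_capacities(2))  # tandem: [0.0, 50.0, 100.0] percent
print(staged_capacities(3))  # trio:   [0.0, 33.3..., 66.6..., 100.0] percent

# A variable-speed (inverter) compressor instead modulates continuously,
# for example anywhere between an assumed minimum of 25% and 100% of full capacity.
inverter_range_percent = (25.0, 100.0)
```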
The turn down ratio depends on the system configuration and manufacturer. It modulates from 15 or 25% up to 100% at full capacity with a single inverter from 12 to 100% with a hybrid tandem. This method is the most efficient way to modulate an air conditioner's capacity. It is up to 58% more efficient than a fixed speed system. Impact Health effects In hot weather, air conditioning can prevent heat stroke, dehydration due to excessive sweating, electrolyte imbalance, kidney failure, and other issues due to hyperthermia. Heat waves are the most lethal type of weather phenomenon in the United States. A 2020 study found that areas with lower use of air conditioning correlated with higher rates of heat-related mortality and hospitalizations. The August 2003 France heatwave resulted in approximately 15,000 deaths, where 80% of the victims were over 75 years old. In response, the French government required all retirement homes to have at least one air-conditioned room at per floor during heatwaves. Air conditioning (including filtration, humidification, cooling and disinfection) can be used to provide a clean, safe, hypoallergenic atmosphere in hospital operating rooms and other environments where proper atmosphere is critical to patient safety and well-being. It is sometimes recommended for home use by people with allergies, especially mold. However, poorly maintained water cooling towers can promote the growth and spread of microorganisms such as Legionella pneumophila, the infectious agent responsible for Legionnaires' disease. As long as the cooling tower is kept clean (usually by means of a chlorine treatment), these health hazards can be avoided or reduced. The state of New York has codified requirements for registration, maintenance, and testing of cooling towers to protect against Legionella. Economic effects First designed to benefit targeted industries such as the press as well as large factories, the invention quickly spread to public agencies and administrations with studies with claims of increased productivity close to 24% in places equipped with air conditioning. Air conditioning caused various shifts in demography, notably that of the United States starting from the 1970s. In the US, the birth rate was lower in the spring than during other seasons until the 1970s but this difference then declined since then. As of 2007, the Sun Belt contained 30% of the total US population while it was inhabited by 24% of Americans at the beginning of the 20th century. Moreover, the summer mortality rate in the US, which had been higher in regions subject to a heat wave during the summer, also evened out. The spread of the use of air conditioning acts as a main driver for the growth of global demand of electricity. According to a 2018 report from the International Energy Agency (IEA), it was revealed that the energy consumption for cooling in the United States, involving 328 million Americans, surpasses the combined energy consumption of 4.4 billion people in Africa, Latin America, the Middle East, and Asia (excluding China). A 2020 survey found that an estimated 88% of all US households use AC, increasing to 93% when solely looking at homes built between 2010 and 2020. Environmental effects Space cooling including air conditioning accounted globally for 2021 terawatt-hours of energy usage in 2016 with around 99% in the form of electricity, according to a 2018 report on air-conditioning efficiency by the International Energy Agency. 
The report predicts an increase of electricity usage due to space cooling to around 6200 TWh by 2050, and that with the progress currently seen, greenhouse gas emissions attributable to space cooling will double: 1,135 million tons (2016) to 2,070 million tons. There is some push to increase the energy efficiency of air conditioners. United Nations Environment Programme (UNEP) and the IEA found that if air conditioners could be twice as effective as now, 460 billion tons of GHG could be cut over 40 years. The UNEP and IEA also recommended legislation to decrease the use of hydrofluorocarbons, better building insulation, and more sustainable temperature-controlled food supply chains going forward. Refrigerants have also caused and continue to cause serious environmental issues, including ozone depletion and climate change, as several countries have not yet ratified the Kigali Amendment to reduce the consumption and production of hydrofluorocarbons. CFCs and HCFCs refrigerants such as R-12 and R-22, respectively, used within air conditioners have caused damage to the ozone layer, and hydrofluorocarbon refrigerants such as R-410A and R-404A, which were designed to replace CFCs and HCFCs, are instead exacerbating climate change. Both issues happen due to the venting of refrigerant to the atmosphere, such as during repairs. HFO refrigerants, used in some if not most new equipment, solve both issues with an ozone damage potential (ODP) of zero and a much lower global warming potential (GWP) in the single or double digits vs. the three or four digits of hydrofluorocarbons. Hydrofluorocarbons would have raised global temperatures by around by 2100 without the Kigali Amendment. With the Kigali Amendment, the increase of global temperatures by 2100 due to hydrofluorocarbons is predicted to be around . Alternatives to continual air conditioning include passive cooling, passive solar cooling, natural ventilation, operating shades to reduce solar gain, using trees, architectural shades, windows (and using window coatings) to reduce solar gain. Social effects Socioeconomic groups with a household income below around $10,000 tend to have a low air conditioning adoption, which worsens heat-related mortality. The lack of cooling can be hazardous, as areas with lower use of air conditioning correlate with higher rates of heat-related mortality and hospitalizations. Premature mortality in NYC is projected to grow between 47% and 95% in 30 years, with lower-income and vulnerable populations most at risk. Studies on the correlation between heat-related mortality and hospitalizations and living in low socioeconomic locations can be traced in Phoenix, Arizona, Hong Kong, China, Japan, and Italy. Additionally, costs concerning health care can act as another barrier, as the lack of private health insurance during a 2009 heat wave in Australia, was associated with heat-related hospitalization. Disparities in socioeconomic status and access to air conditioning are connected by some to institutionalized racism, which leads to the association of specific marginalized communities with lower economic status, poorer health, residing in hotter neighborhoods, engaging in physically demanding labor, and experiencing limited access to cooling technologies such as air conditioning. A study overlooking Chicago, Illinois, Detroit, and Michigan found that black households were half as likely to have central air conditioning units when compared to their white counterparts. 
Especially in cities, Redlining creates heat islands, increasing temperatures in certain parts of the city. This is due to materials heat-absorbing building materials and pavements and lack of vegetation and shade coverage. There have been initiatives that provide cooling solutions to low-income communities, such as public cooling spaces. Other techniques Buildings designed with passive air conditioning are generally less expensive to construct and maintain than buildings with conventional HVAC systems with lower energy demands. While tens of air changes per hour, and cooling of tens of degrees, can be achieved with passive methods, site-specific microclimate must be taken into account, complicating building design. Many techniques can be used to increase comfort and reduce the temperature in buildings. These include evaporative cooling, selective shading, wind, thermal convection, and heat storage. Passive ventilation Passive cooling Daytime radiative cooling Passive daytime radiative cooling (PDRC) surfaces reflect incoming solar radiation and heat back into outer space through the infrared window for cooling during the daytime. Daytime radiative cooling became possible with the ability to suppress solar heating using photonic structures, which emerged through a study by Raman et al. (2014). PDRCs can come in a variety of forms, including paint coatings and films, that are designed to be high in solar reflectance and thermal emittance. PDRC applications on building roofs and envelopes have demonstrated significant decreases in energy consumption and costs. In suburban single-family residential areas, PDRC application on roofs can potentially lower energy costs by 26% to 46%. PDRCs are predicted to show a market size of ~$27 billion for indoor space cooling by 2025 and have undergone a surge in research and development since the 2010s. Fans Hand fans have existed since prehistory. Large human-powered fans built into buildings include the punkah. The 2nd-century Chinese inventor Ding Huan of the Han dynasty invented a rotary fan for air conditioning, with seven wheels in diameter and manually powered by prisoners. In 747, Emperor Xuanzong (r. 712–762) of the Tang dynasty (618–907) had the Cool Hall (Liang Dian ) built in the imperial palace, which the Tang Yulin describes as having water-powered fan wheels for air conditioning as well as rising jet streams of water from fountains. During the subsequent Song dynasty (960–1279), written sources mentioned the air conditioning rotary fan as even more widely used. Thermal buffering In areas that are cold at night or in winter, heat storage is used. Heat may be stored in earth or masonry; air is drawn past the masonry to heat or cool it. In areas that are below freezing at night in winter, snow and ice can be collected and stored in ice houses for later use in cooling. This technique is over 3,700 years old in the Middle East. Harvesting outdoor ice during winter and transporting and storing for use in summer was practiced by wealthy Europeans in the early 1600s, and became popular in Europe and the Americas towards the end of the 1600s. This practice was replaced by mechanical compression-cycle icemakers. Evaporative cooling In dry, hot climates, the evaporative cooling effect may be used by placing water at the air intake, such that the draft draws air over water and then into the house. 
For this reason, it is sometimes said that the fountain, in the architecture of hot, arid climates, is like the fireplace in the architecture of cold climates. Evaporative cooling also makes the air more humid, which can be beneficial in a dry desert climate. Evaporative coolers tend to feel as if they are not working during times of high humidity, when there is not much dry air with which the coolers can work to make the air as cool as possible for dwelling occupants. Unlike other types of air conditioners, evaporative coolers rely on the outside air to be channeled through cooler pads that cool the air before it reaches the inside of a house through its air duct system; this cooled outside air must be allowed to push the warmer air within the house out through an exhaust opening such as an open door or window.
Technology
Household appliances
null
16074530
https://en.wikipedia.org/wiki/Lakes%20of%20Titan
Lakes of Titan
Lakes of liquid ethane and methane exist on the surface of Titan, Saturn's largest moon. This was confirmed by the Cassini–Huygens space probe, as had been suspected since the 1980s. The large bodies of liquid are known as maria (seas) and the small ones as lacus (lakes). History and discovery The possibility that there are seas on Titan was first suggested based on data from the Voyager 1 and 2 space probes, which flew past Titan in 1980. The data showed Titan to have a thick atmosphere of approximately the correct temperature and composition to support liquid hydrocarbons. Direct evidence was obtained in 1995 when data from the Hubble Space Telescope and other observations suggested the existence of liquid methane on Titan, either in disconnected pockets or on the scale of satellite-wide oceans, similar to water on Earth. The Cassini mission affirmed the former hypothesis, although not immediately. When the probe arrived in the Saturnian system in 2004, it was hoped that hydrocarbon lakes or oceans might be detectable by reflected sunlight from the surface of any liquid bodies, but no specular reflections were initially observed. The possibility remained that liquid ethane and methane might be found in Titan's polar regions, where they were expected to be abundant and stable. In Titan's south polar region, an enigmatic dark feature named Ontario Lacus was the first suspected lake identified, possibly created by clouds that are observed to cluster in the area. A possible shoreline was also identified near the pole via radar imagery. During a flyby on July 22, 2006, the Cassini spacecraft's radar imaged the northern latitudes, which were at the time in winter. A number of large, smooth (and thus dark to radar) patches were seen dotting the surface near the pole. Based on the observations, scientists announced "definitive evidence of lakes filled with methane on Saturn's moon Titan" in January 2007. The Cassini–Huygens team concluded that the imaged features are almost certainly the long-sought hydrocarbon lakes, the first stable bodies of surface liquid found off Earth. Some appear to have channels associated with liquid and lie in topographical depressions. Channels in some regions show surprisingly little erosion, suggesting erosion on Titan is extremely slow, or some other recent phenomena may have wiped out older riverbeds and landforms. Overall, the Cassini radar observations have shown that lakes cover only a few percent of the surface and are concentrated near the poles, making Titan much drier than Earth. The high relative humidity of methane in Titan's lower atmosphere could be maintained by evaporation from lakes covering only 0.002–0.02% of the whole surface. During a Cassini flyby in late February 2007, radar and camera observations revealed several large features in the north polar region interpreted as large expanses of liquid methane and/or ethane, including one, Ligeia Mare, with an area of , slightly larger than Lake Michigan–Huron, the largest freshwater lake on Earth; and another, Kraken Mare, that would later prove to be three times that size. A flyby of Titan's southern polar regions in October 2007 revealed similar, though far smaller, lakelike features. During a close Cassini flyby in December 2007, the visual and infrared mapping instrument observed a lake, Ontario Lacus, in Titan's south polar region. This instrument identifies chemically different materials based on the way they absorb and reflect infrared light.
Radar measurements made in July 2009 and January 2010 indicate that Ontario Lacus is extremely shallow, with an average depth of , and a maximum depth of . It may thus resemble a terrestrial mudflat. In contrast, the northern hemisphere's Ligeia Mare has depths of . Chemical composition and surface roughness of the lakes According to Cassini data, scientists announced on February 13, 2008, that Titan hosts within its polar lakes "hundreds of times more natural gas and other liquid hydrocarbons than all the known oil and natural gas reserves on Earth." The desert sand dunes along the equator, while devoid of open liquid, nonetheless hold more organics than all of Earth's coal reserves. It has been estimated that the visible lakes and seas of Titan contain about 300 times the volume of Earth's proven oil reserves. In June 2008, Cassini's Visual and Infrared Mapping Spectrometer confirmed the presence of liquid ethane beyond doubt in a lake in Titan's southern hemisphere. The exact blend of hydrocarbons in the lakes is unknown. According to a computer model, 3/4 of an average polar lake is ethane, with 10 percent methane, 7 percent propane and smaller amounts of hydrogen cyanide, butane, nitrogen and argon. Benzene is expected to fall like snow and quickly dissolve into the lakes, although the lakes may become saturated just as the Dead Sea on Earth is packed with salt. The excess benzene would then build up in a mud-like sludge on the shores and on the lake floors before eventually being eroded by ethane rain, forming a complex cave-riddled landscape. Salt-like compounds composed of ammonia and acetylene are also predicted to form. However, the chemical composition and physical properties of the lakes probably vary from one lake to another (Cassini observations in 2013 indicate Ligeia Mare is filled with a ternary mixture of methane, ethane, and nitrogen, and consequently the probe's radar signals were able to detect the sea floor below the liquid surface). No waves were initially detected by Cassini as the northern lakes emerged from winter darkness (calculations indicate wind speeds of less than should whip up detectable waves in Titan's ethane lakes, but none were observed). This may be either due to low seasonal winds or solidification of hydrocarbons. Titan has several lakes near its northern pole that vary in size; the smaller area these lakes cover, together with lower wind speeds, could also explain why no surface waves were detected. The area over a liquid that wind blows across is known as fetch. The larger this area is, the larger waves become, as wind has more area to blow across to transfer energy. The smaller the area of fetch, the smaller waves will be. The optical properties of a solid methane surface (close to the melting point) are quite close to those of a liquid surface; however, the viscosity of solid methane, even near the melting point, is many orders of magnitude higher, which might explain the extraordinary smoothness of the surface. Solid methane is denser than liquid methane, so it will eventually sink. It is possible that the methane ice could float for a time, as it probably contains bubbles of nitrogen gas from Titan's atmosphere. Temperatures close to the freezing point of methane could lead to both floating and sinking ice, that is, a hydrocarbon ice crust above the liquid and blocks of hydrocarbon ice on the bottom of the lake bed. The ice is predicted to rise to the surface again at the onset of spring before melting.
Since 2014, Cassini has detected transient features in scattered patches in Kraken Mare, Ligeia Mare and Punga Mare. Laboratory experiments suggest these features (e.g. RADAR-bright "magic islands") might be vast patches of bubbles caused by the rapid release of nitrogen dissolved in the lakes. Bubble outburst events are predicted to occur as the lakes cool and subsequently warm, or whenever methane-rich fluids mix with ethane-rich ones due to heavy rainfall. Bubble outburst events may also influence the formation of Titan's river deltas. An alternative explanation is that the transient features in Cassini VIMS near-infrared data may be shallow, wind-driven capillary waves (ripples) moving at about and at heights of about . Post-Cassini analysis of VIMS data suggests tidal currents may also be responsible for the generation of persistent waves in narrow channels (freta) of Kraken Mare. Cyclones driven by evaporation and involving rain as well as gale-force winds of up to are expected to form over the large northern seas only (Kraken Mare, Ligeia Mare, Punga Mare) in northern summer during 2017, lasting up to ten days. However, a 2017 analysis of Cassini data from 2007 to 2015 indicates waves across these three seas were diminutive, reaching only about high and long. The results call into question the early summer's classification as the beginning of Titan's windy season, because high winds probably would have made for larger waves. A 2019 theoretical study concluded that it is possible that the relatively dense aerosols raining down on Titan's lakes may have liquid-repelling properties, forming a persistent film on the surface of the lakes which would then inhibit the formation of waves larger than a few centimetres in wavelength. Observation of specular reflections On 21 December 2008, Cassini passed directly over Ontario Lacus at an altitude of and was able to observe specular reflection in radar observations. The signals were much stronger than anticipated and saturated the probe's receiver. The conclusion drawn from the strength of the reflection was that the lake level did not vary by more than over a first Fresnel zone reflecting area only wide (smoother than any natural dry surface on Earth). From this it was surmised that surface winds in the area are minimal at that season and/or the lake fluid is more viscous than expected. On 8 July 2009, Cassini's Visual and Infrared Mapping Spectrometer (VIMS) observed a specular reflection in 5 μm infrared light off a northern hemisphere body of liquid at 71° N, 337° W. This has been described as at the southern shoreline of Kraken Mare, but on a combined radar-VIMS image the location is shown as a separate lake (later named Jingpo Lacus). The observation was made shortly after the north polar region emerged from 15 years of winter darkness. Because of the polar location of the reflecting liquid body, the observation required a phase angle close to 180°. Equatorial in-situ observations by the Huygens probe The discoveries in the polar regions contrast with the findings of the Huygens probe, which landed near Titan's equator on January 14, 2005. The images taken by the probe during its descent showed no open areas of liquid, but strongly indicated the presence of liquids in the recent past, showing pale hills crisscrossed with dark drainage channels that lead into a wide, flat, darker region.
It was initially thought that the dark region might be a lake of fluid, or at least a tar-like substance, but it is now clear that Huygens landed on the dark region, and that it is solid without any indication of liquids. A penetrometer studied the composition of the surface as the craft impacted it, and it was initially reported that the surface was similar to wet clay, or perhaps crème brûlée (that is, a hard crust covering a sticky material). Subsequent analysis of the data suggests that this reading was likely caused by Huygens displacing a large pebble as it landed, and that the surface is better described as a "sand" made of ice grains. The images taken after the probe's landing show a flat plain covered in pebbles. The pebbles may be made of water ice and are somewhat rounded, which may indicate the action of fluids. Thermometers indicated that heat was wicked away from Huygens so quickly that the ground must have been damp, and one image shows light reflected by a dewdrop as it falls across the camera's field of view. On Titan, the feeble sunlight allows only about one centimeter of evaporation per year (versus one meter of water on Earth), but the atmosphere can hold the equivalent of about of liquid before rain forms (versus about on Earth). So Titan's weather is expected to feature downpours of several meters (15–20 feet) causing flash floods, interspersed with decades or centuries of drought (whereas typical weather on Earth includes a little rain most weeks). Cassini has observed equatorial rainstorms only once since 2004. Despite this, a number of long-standing tropical hydrocarbon lakes were unexpectedly discovered in 2012 (including one near the Huygens landing site in the Shangri-La region, which is about half the size of Utah's Great Salt Lake, with a depth of at least 1 meter [3'4"]). As on Earth, the likely supplier is underground aquifers; in other words, the arid equatorial regions of Titan contain "oases". Impact of Titan's methane cycle and geology on lake formation Models of oscillations in Titan's atmospheric circulation suggest that over the course of a Saturnian year, liquid is transported from the equatorial region to the poles, where it falls as rain. This might account for the equatorial region's relative dryness. According to a computer model, intense rainstorms should occur in normally rainless equatorial areas during Titan's vernal and autumnal equinoxes—enough liquid to carve out the type of channels that Huygens found. The model also predicts energy from the Sun will evaporate liquid methane from Titan's surface except at the poles, where the relative absence of sunlight makes it easier for liquid methane to accumulate into permanent lakes. The model also apparently explains why there are more lakes in the northern hemisphere. Due to the eccentricity of Saturn's orbit, the northern summer is longer than the southern summer, and consequently the rainy season is longer in the north. However, recent Cassini observations (from 2013) suggest geology may also explain the geographic distribution of the lakes and other surface features. One puzzling feature of Titan is the lack of impact craters at the poles and mid-latitudes, particularly at lower elevations. These areas may be wetlands fed by subsurface ethane and methane springs. Any crater created by meteorites is thus quickly subsumed by wet sediment. The presence of underground aquifers could explain another mystery.
Titan's atmosphere is full of methane, which according to calculations should react with ultraviolet radiation from the sun to produce liquid ethane. Over time, the moon should have built up an ethane ocean hundreds of meters (1,500 to 2,500 feet) deep instead of only a handful of polar lakes. The presence of wetlands would suggest that the ethane soaks into the ground, forming a subsurface liquid layer akin to groundwater on Earth. A possibility is that the formation of materials called clathrates changes the chemical composition of the rainfall runoff that charges the subsurface hydrocarbon "aquifers." This process leads to the formation of reservoirs of propane and ethane that may feed into some rivers and lakes. The chemical transformations taking place underground would affect Titan's surface. Lakes and rivers fed by springs from propane or ethane subsurface reservoirs would show the same kind of composition, whereas those fed by rainfall would be different and contain a significant fraction of methane. 97% of Titan's lakes have been found within a bright unit of terrain covering about near the north pole. The lakes found here have very distinctive shapes—rounded complex silhouettes and steep sides—suggesting deformation of the crust created fissures that could be filled up with liquid. A variety of formation mechanisms have been proposed. The explanations range from the collapse of land after a cryovolcanic eruption to karst terrain, where liquids dissolve soluble ice. Smaller lakes (up to tens of miles across) with steep rims (up to hundreds of feet high) might be analogous to maar lakes, i.e. explosion craters subsequently filled with liquid. The explosions are proposed to result from fluctuations in climate, which lead to pockets of liquid nitrogen accumulating within the crust during colder periods and then exploding when warming caused the nitrogen to rapidly expand as it shifted to a gas state. Titan Mare Explorer Titan Mare Explorer (TiME) was a proposed NASA/ESA lander that would splash down on Ligeia Mare and analyze its surface, shoreline and Titan's atmosphere. However, it was turned down in August 2012, when NASA instead selected the InSight mission to Mars. Named lakes and seas Features labeled lacus are believed to be ethane/methane lakes, while features labeled lacuna are believed to be dry lake beds. Both are named after lakes on Earth. Features labeled sinus are bays within the lakes or seas. They are named after bays and fjords on Earth. Features labeled insula are islands within the body of liquid. They are named after mythical islands. Titanean maria (large hydrocarbon seas) are named after sea monsters in world mythology. The tables are up-to-date as of 2023. Sea names of Titan Lake names of Titan Lakebed names of Titan Bay names of Titan Island names of Titan Image gallery
Physical sciences
Solar System
Astronomy
49598
https://en.wikipedia.org/wiki/Pigment
Pigment
A pigment is a powder used to add color or change visual appearance. Pigments are completely or nearly insoluble and chemically unreactive in water or another medium; in contrast, dyes are colored substances which are soluble or go into solution at some stage in their use. Dyes are often organic compounds whereas pigments are often inorganic. Pigments of prehistoric and historic value include ochre, charcoal, and lapis lazuli. Economic impact In 2006, around 7.4 million tons of inorganic, organic, and special pigments were marketed worldwide. According to an April 2018 report by Bloomberg Businessweek, the estimated value of the pigment industry globally is $30 billion. The value of titanium dioxide – used to enhance the white brightness of many products – was placed at $13.2 billion per year, while the color Ferrari red is valued at $300 million each year. Physical principles Like all materials, the color of pigments arises because they absorb only certain wavelengths of visible light. The bonding properties of the material determine the wavelength and efficiency of light absorption. Light of other wavelengths is reflected or scattered. The reflected light spectrum defines the color that we observe. The appearance of pigments is sensitive to the source light. Sunlight has a high color temperature and a fairly uniform spectrum. Sunlight is considered a standard for white light. Artificial light sources are less uniform. Color spaces used to represent colors numerically must specify their light source. Lab color measurements, unless otherwise noted, assume that the measurement was recorded under a D65 light source, or "Daylight 6500 K", which is roughly the color temperature of sunlight. Other properties of a color, such as its saturation or lightness, may be determined by the other substances that accompany pigments. Binders and fillers can affect the color. History Minerals have been used as colorants since prehistoric times. Early humans used paint for aesthetic purposes such as body decoration. Pigments and paint grinding equipment believed to be between 350,000 and 400,000 years old have been reported in a cave at Twin Rivers, near Lusaka, Zambia. Ochre, an iron oxide, was the first color of paint. A favored blue pigment was derived from lapis lazuli. Pigments based on minerals and clays often bear the name of the city or region where they were originally mined. Raw sienna and burnt sienna came from Siena, Italy, while raw umber and burnt umber came from Umbria. These pigments were among the easiest to synthesize, and chemists created modern colors based on the originals. These were more consistent than colors mined from the original ore bodies, but the place names remained. Also found in many Paleolithic and Neolithic cave paintings are red ochre, anhydrous Fe2O3, and the hydrated yellow ochre (Fe2O3·H2O). Charcoal—or carbon black—has also been used as a black pigment since prehistoric times. The first known synthetic pigment was Egyptian blue, which is first attested on an alabaster bowl in Egypt dated to Naqada III (circa 3250 BC). Egyptian blue (blue frit) is a calcium copper silicate, CaCuSi4O10, made by heating a mixture of quartz sand, lime, a flux and a copper source, such as malachite. Already invented in the Predynastic Period of Egypt, its use became widespread by the 4th Dynasty.
It was the blue pigment par excellence of Roman antiquity; its art-technological traces vanished in the course of the Middle Ages until its rediscovery in the context of the Egyptian campaign and the excavations in Pompeii and Herculaneum. Later premodern synthetic pigments include white lead (basic lead carbonate, (PbCO3)2Pb(OH)2), vermilion, verdigris, and lead-tin yellow. Vermilion, a mercury sulfide, was originally made by grinding a powder of natural cinnabar. From the 17th century on, it was also synthesized from the elements. It was favored by old masters such as Titian. Indian yellow was once produced by collecting the urine of cattle that had been fed only mango leaves. Dutch and Flemish painters of the 17th and 18th centuries favored it for its luminescent qualities, and often used it to represent sunlight. Since mango leaves are nutritionally inadequate for cattle, the practice of harvesting Indian yellow was eventually declared to be inhumane. Modern hues of Indian yellow are made from synthetic pigments. Vermilion has been partially replaced in use by cadmium reds. Because of the cost of lapis lazuli, substitutes were often used. Prussian blue, the oldest modern synthetic pigment, was discovered by accident in 1704. By the early 19th century, synthetic and metallic blue pigments included French ultramarine, a synthetic form of lapis lazuli. Ultramarine was manufactured by treating aluminium silicate with sulfur. Various forms of cobalt blue and cerulean blue were also introduced. In the early 20th century, Phthalo Blue, a synthetic metallo-organic pigment, was prepared. At the same time, Royal Blue, another name once given to tints produced from lapis lazuli, has evolved to signify a much lighter and brighter color, and is usually mixed from Phthalo Blue and titanium dioxide, or from inexpensive synthetic blue dyes. The discovery in 1856 of mauveine, the first aniline dye, was a forerunner for the development of hundreds of synthetic dyes and pigments like azo and diazo compounds. These dyes ushered in the flourishing of organic chemistry, including systematic designs of colorants. The development of organic chemistry diminished the dependence on inorganic pigments. Manufacturing and industrial standards Before the development of synthetic pigments, and the refinement of techniques for extracting mineral pigments, batches of color were often inconsistent. With the development of a modern color industry, manufacturers and professionals have cooperated to create international standards for identifying, producing, measuring, and testing colors. First published in 1905, the Munsell color system became the foundation for a series of color models, providing objective methods for the measurement of color. The Munsell system describes a color in three dimensions: hue, value (lightness), and chroma (color purity), where chroma is the difference from gray at a given hue and value. By the middle of the 20th century, standardized methods for pigment chemistry were available, part of an international movement to create such standards in industry. The International Organization for Standardization (ISO) develops technical standards for the manufacture of pigments and dyes. ISO standards define various industrial and chemical properties, and how to test for them. The principal ISO standards that relate to all pigments are as follows: ISO-787 General methods of test for pigments and extenders. ISO-8780 Methods of dispersion for assessment of dispersion characteristics.
Other ISO standards pertain to particular classes or categories of pigments, based on their chemical composition, such as ultramarine pigments, titanium dioxide, iron oxide pigments, and so forth. Many manufacturers of paints, inks, textiles, plastics, and colors have voluntarily adopted the Colour Index International (CII) as a standard for identifying the pigments that they use in manufacturing particular colors. First published in 1925—and now published jointly on the web by the Society of Dyers and Colourists (United Kingdom) and the American Association of Textile Chemists and Colorists (US)—this index is recognized internationally as the authoritative reference on colorants. It encompasses more than 27,000 products under more than 13,000 generic color index names. In the CII schema, each pigment has a generic index number that identifies it chemically, regardless of proprietary and historic names. For example, Phthalocyanine Blue BN has been known by a variety of generic and proprietary names since its discovery in the 1930s. In much of Europe, phthalocyanine blue is better known as Helio Blue, or by a proprietary name such as Winsor Blue. An American paint manufacturer, Grumbacher, registered an alternate spelling (Thalo Blue) as a trademark. Colour Index International resolves all these conflicting historic, generic, and proprietary names so that manufacturers and consumers can identify the pigment (or dye) used in a particular color product. In the CII, all phthalocyanine blue pigments are designated by a generic color index number as either PB15 or PB16, short for pigment blue 15 and pigment blue 16; these two numbers reflect slight variations in molecular structure, which produce a slightly more greenish or reddish blue. Figures of merit The following are some of the attributes of pigments that determine their suitability for particular manufacturing processes and applications: Lightfastness and sensitivity to damage from ultraviolet light Heat stability Toxicity Tinting strength Staining Dispersion (which can be measured with a Hegman gauge) Opacity or transparency Resistance to alkalis and acids Reactions and interactions between pigments Swatches Swatches are used to communicate colors accurately. The types of swatches are dictated by the media, i.e., printing, computers, plastics, and textiles. Generally, the medium that offers the broadest gamut of color shades is widely used across diverse media. Printed swatches Reference standards are provided by printed swatches of color shades. PANTONE, RAL, Munsell, etc. are widely used standards of color communication across diverse media like printing, plastics, and textiles. Plastic swatches Companies manufacturing color masterbatches and pigments for plastics offer plastic swatches in injection molded color chips. These color chips are supplied to the designer or customer to choose and select the color for their specific plastic products. Plastic swatches are available in various special effects like pearl, metallic, fluorescent, sparkle, mosaic, etc. However, these effects are difficult to replicate on other media like print and computer display. Plastic swatches have been created by 3D modelling to include various special effects. Computer swatches The appearance of pigments in natural light is difficult to replicate on a computer display. Approximations are required. The Munsell Color System provides an objective measure of color in three dimensions: hue, value (or lightness), and chroma.
Computer displays in general fail to show the true chroma of many pigments, but the hue and lightness can be reproduced with relative accuracy. However, when the gamma of a computer display deviates from the reference value, the hue is also systematically biased. The following approximations assume a display device at gamma 2.2, using the sRGB color space. The further a display device deviates from these standards, the less accurate these swatches will be. Swatches are based on the average measurements of several lots of single-pigment watercolor paints, converted from Lab color space to sRGB color space for viewing on a computer display. The appearance of a pigment may depend on the brand and even the batch. Furthermore, pigments have inherently complex reflectance spectra that will render their color appearance greatly different depending on the spectrum of the source illumination, a property called metamerism. Averaged measurements of pigment samples will only yield approximations of their true appearance under a specific source of illumination. Computer display systems use a technique called chromatic adaptation transforms to emulate the correlated color temperature of illumination sources, and cannot perfectly reproduce the intricate spectral combinations originally seen. In many cases, the perceived color of a pigment falls outside of the gamut of computer displays and a method called gamut mapping is used to approximate the true appearance. Gamut mapping trades off any one of lightness, hue, or saturation accuracy to render the color on screen, depending on the priority chosen in the conversion's ICC rendering intent. Biological pigments In biology, a pigment is any colored material of plant or animal cells. Many biological structures, such as skin, eyes, fur, and hair contain pigments (such as melanin). Animal skin coloration often comes about through specialized cells called chromatophores, which animals such as the octopus and chameleon can control to vary the animal's color. Many conditions affect the levels or nature of pigments in plant, animal, some protista, or fungus cells. For instance, the disorder called albinism affects the level of melanin production in animals. Pigmentation in organisms serves many biological purposes, including camouflage, mimicry, aposematism (warning), sexual selection and other forms of signalling, photosynthesis (in plants), and basic physical purposes such as protection from sunburn. Pigment color differs from structural color in that pigment color is the same for all viewing angles, whereas structural color is the result of selective reflection or iridescence, usually because of multilayer structures. For example, butterfly wings typically contain structural color, although many butterflies have cells that contain pigment as well. 
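As a concrete sketch of the kind of conversion described above, the following Python function maps a CIELAB measurement (D65 reference white) to an 8-bit sRGB triple using the standard CIE Lab-to-XYZ formulas and the IEC 61966-2-1 sRGB matrix and transfer curve. It is only a minimal illustration: simple clipping stands in for proper chromatic adaptation and ICC gamut mapping, and the sample Lab values at the end are invented for the example rather than measured from any real paint.

    def lab_to_srgb(L, a, b):
        # 1. CIELAB -> CIE XYZ, relative to the D65 reference white.
        Xn, Yn, Zn = 0.95047, 1.00000, 1.08883
        eps, kappa = 216.0 / 24389.0, 24389.0 / 27.0
        fy = (L + 16.0) / 116.0
        fx = fy + a / 500.0
        fz = fy - b / 200.0
        def f_inv(t):
            return t ** 3 if t ** 3 > eps else (116.0 * t - 16.0) / kappa
        X = Xn * f_inv(fx)
        Y = Yn * (fy ** 3 if L > kappa * eps else L / kappa)
        Z = Zn * f_inv(fz)
        # 2. XYZ -> linear sRGB (standard D65 matrix).
        rl = 3.2406 * X - 1.5372 * Y - 0.4986 * Z
        gl = -0.9689 * X + 1.8758 * Y + 0.0415 * Z
        bl = 0.0557 * X - 0.2040 * Y + 1.0570 * Z
        # 3. Clip out-of-gamut values, then apply the sRGB transfer curve (~gamma 2.2).
        def encode(c):
            c = min(max(c, 0.0), 1.0)
            c = 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1.0 / 2.4) - 0.055
            return round(255.0 * c)
        return encode(rl), encode(gl), encode(bl)

    # A hypothetical Lab measurement of a deep blue, ultramarine-like sample.
    print(lab_to_srgb(30.0, 25.0, -55.0))   # roughly (32, 62, 158)

Because the clipping step simply discards out-of-gamut chroma, a production workflow would replace it with the gamut-mapping and rendering-intent machinery mentioned above.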
Pigments by chemical composition Aluminium pigment: aluminum powder Barium: barium white (lithopone) Cadmium pigments: cadmium yellow, cadmium red, cadmium green, cadmium orange, cadmium sulfoselenide Carbon pigments: carbon black (including vine black, lamp black), ivory black (bone charcoal) Chromium pigments: chrome yellow and chrome green (viridian) Cobalt pigments: cobalt violet, cobalt blue, cerulean blue, aureolin (cobalt yellow) Copper pigments: azurite, Han purple, Han blue, Egyptian blue, malachite, Paris green, Phthalocyanine Blue BN, Phthalocyanine Green G, verdigris Iron oxide pigments: sanguine, caput mortuum, oxide red, red ochre, yellow ochre, Venetian red, Prussian blue, raw sienna, burnt sienna, raw umber, burnt umber Lead pigments: lead white, Naples yellow, red lead, lead-tin yellow Manganese pigments: manganese violet, YInMn blue Mercury pigments: vermilion Sulfur pigments: ultramarine, ultramarine green shade, lapis lazuli Titanium pigments: titanium yellow, titanium white, titanium black Zinc pigments: zinc white, zinc ferrite, zinc yellow Biological and organic Biological origins: alizarin, gamboge, cochineal red, rose madder, indigo, Indian yellow, Tyrian purple Non-biological organic: quinacridone, magenta, phthalo green, phthalo blue, pigment red 170, diarylide yellow
Technology
Artist's tools
null
49600
https://en.wikipedia.org/wiki/Mantophasmatidae
Mantophasmatidae
Mantophasmatidae is a family of carnivorous wingless insects within the monotypic order Mantophasmatodea, which was discovered in Africa in 2001. Recent evidence indicates a sister group relationship with Grylloblattidae (classified in the order Grylloblattodea), and Arillo and Engel have combined the two groups into a single order, Notoptera, with Grylloblattodea and Mantophasmatodea ranked as suborders. Overview The most common vernacular name for this order is gladiators, although they also are called rock crawlers, heelwalkers, mantophasmids, and colloquially, mantos. Their modern centre of endemism is western South Africa and Namibia (Brandberg Massif), although the modern relict population of Tanzaniophasma subsolana in Tanzania and Eocene fossils suggest a wider ancient distribution. Mantophasmatodea are wingless even as adults, making them relatively difficult to identify. They resemble a cross between praying mantises and phasmids, and molecular evidence indicates that they are most closely related to the equally enigmatic group Grylloblattodea. Initially, the gladiators were described from old museum specimens that originally were found in Namibia (Mantophasma zephyra) and Tanzania (M. subsolana), and from a 45-million-year-old specimen of Baltic amber (Raptophasma kerneggeri). Live specimens were found in Namibia by an international expedition in early 2002; Tyrannophasma gladiator was found on the Brandberg Massif, and Mantophasma zephyra was found on the Erongoberg Massif. Since then, a number of new genera and species have been discovered, the most recent being two new genera, Kuboesphasma and Minutophasma, each with a single species, described from Richtersveld in South Africa in 2018. Biology Mantophasmatids are wingless carnivores. During courtship, they communicate using vibrations transmitted through the ground or substrate. Classification The classification of Mantophasmatodea in Arillo & Engel (2006) recognizes numerous genera, including fossils, in a single family Mantophasmatidae: Basal and incertae sedis Genus †Raptophasma Zompro, 2001 – Baltic amber, Eocene Genus †Adicophasma Engel & Grimaldi, 2004 – Baltic amber, Eocene Genus †Juramantophasma Huang et al., 2008 – Daohugou Bed, China, Middle Jurassic (Callovian) Genus ?†Ensiferophasma Zompro, 2005 – Baltic amber, Eocene (assignment to Mantophasmatodea considered dubious) Subfamily Tanzaniophasmatinae Klass, Picker, Damgaard, van Noort, Tojo, 2003 Genus Tanzaniophasma Klass, Picker, Damgaard, van Noort, Tojo, 2003 – Tanzania Species Tanzaniophasma subsolana (Zompro, Klass, Kristensen, & Adis 2002) Subfamily Mantophasmatinae Tribe Tyrannophasmatini Zompro, 2005 Genus Praedatophasma Zompro & Adis, 2002 – Namibia Species Praedatophasma maraisi Zompro & Adis, 2002 Genus Tyrannophasma Zompro, 2003 – Namibia Species Tyrannophasma gladiator Zompro, 2003 Tribe Mantophasmatini Zompro, Klass, Kristensen, Adis, 2002 (paraphyletic?)
Genus Mantophasma Zompro, Klass, Kristensen, Adis, 2002 – Namibia Species Mantophasma gamsbergense Zompro & Adis, 2006 Species Mantophasma kudubergense Zompro & Adis, 2006 Species Mantophasma omatakoense Zompro & Adis, 2006 Species Mantophasma zephyra Zompro, Klass, Kristensen, & Adis 2002 Genus Pachyphasma Wipfler, Pohl, & Predel, 2012 – Namibia Species Pachyphasma brandbergense Wipfler, Pohl, & Predel, 2012 Genus Sclerophasma Klass, Picker, Damgaard, van Noort, Tojo, 2003 – Namibia Species Sclerophasma paresisense Klass, Picker, Damgaard, van Noort, & Tojo 2003 Tribe Austrophasmatini Klass, Picker, Damgaard, van Noort, Tojo, 2003 Genus Austrophasma Klass, Picker, Damgaard, van Noort, Tojo, 2003 – South Africa Species Austrophasma caledonense Klass, Picker, Damgaard, van Noort & Tojo, 2003 Species Austrophasma gansbaaiense Klass, Picker, Damgaard, van Noort & Tojo, 2003 Species Austrophasma rawsonvillense Klass, Picker, Damgaard, van Noort & Tojo, 2003 Genus Hemilobophasma Klass, Picker, Damgaard, van Noort, Tojo, 2003 – South Africa Species Hemilobophasma montaguense Klass, Picker, Damgaard, van Noort & Tojo, 2003 Genus Karoophasma Klass, Picker, Damgaard, van Noort, Tojo, 2003 – South Africa Species Karoophasma biedouwense Klass, Picker, Damgaard, van Noort & Tojo, 2003 Species Karoophasma botterkloofense Klass, Picker, Damgaard, van Noort & Tojo, 2003 Genus Kuboesphasma Wipfler, Theska & Predel, 2018 – South Africa Species Kuboesphasma compactum Wipfler, Theska & Predel, 2018 Genus Lobatophasma Damgaard, Klass, Picker & Buder, 2008 (formerly Lobophasma Klass, Picker, Damgaard, van Noort & Tojo, 2003) – South Africa Species Lobatophasma redelinghuysense (Klass, Picker, Damgaard, van Noort & Tojo, 2003) Genus Minutophasma Wipfler, Theska & Predel, 2018 – South Africa Species Minutophasma richtersveldense Wipfler, Theska & Predel, 2018 Genus Namaquaphasma Klass, Picker, Damgaard, van Noort, Tojo, 2003 – South Africa Species Namaquaphasma ookiepense Klass, Picker, Damgaard, van Noort, Tojo, 2003 Genus Striatophasma Wipfler, Pohl & Predel, 2012 – Namibia Species Striatophasma naukluftense Wipfler, Pohl & Predel, 2012 Genus Viridiphasma Eberhard, Picker, Klass, 2011 – South Africa Species Viridiphasma clanwilliamense Eberhard, Picker, Klass, 2011 Some taxonomists assign full family status to the subfamilies and tribes, and sub-ordinal status to the family. In total, there are 21 extant species described as of 2018.
Biology and health sciences
Insects: General
Animals
49604
https://en.wikipedia.org/wiki/Hearing%20loss
Hearing loss
Hearing loss is a partial or total inability to hear. Hearing loss may be present at birth or acquired at any time afterwards. Hearing loss may occur in one or both ears. In children, hearing problems can affect the ability to acquire spoken language, and in adults it can create difficulties with social interaction and at work. Hearing loss can be temporary or permanent. Hearing loss related to age usually affects both ears and is due to cochlear hair cell loss. In some people, particularly older people, hearing loss can result in loneliness. Hearing loss may be caused by a number of factors, including genetics, ageing, exposure to noise, some infections, birth complications, trauma to the ear, and certain medications or toxins. A common condition that results in hearing loss is chronic ear infections. Certain infections during pregnancy, such as cytomegalovirus, syphilis and rubella, may also cause hearing loss in the child. Hearing loss is diagnosed when hearing testing finds that a person is unable to hear 25 decibels in at least one ear. Testing for poor hearing is recommended for all newborns. Hearing loss can be categorized as mild (25 to 40 dB), moderate (41 to 55 dB), moderate-severe (56 to 70 dB), severe (71 to 90 dB), or profound (greater than 90 dB). There are three main types of hearing loss: conductive hearing loss, sensorineural hearing loss, and mixed hearing loss. About half of hearing loss globally is preventable through public health measures. Such practices include immunization, proper care around pregnancy, avoiding loud noise, and avoiding certain medications. The World Health Organization recommends that young people limit exposure to loud sounds and the use of personal audio players to an hour a day in an effort to limit exposure to noise. Early identification and support are particularly important in children. For many, hearing aids, sign language, cochlear implants and subtitles are useful. Lip reading is another useful skill some develop. Access to hearing aids, however, is limited in many areas of the world. As of 2013, hearing loss affects about 1.1 billion people to some degree. It causes disability in about 466 million people (5% of the global population), and moderate to severe disability in 124 million people. Of those with moderate to severe disability, 108 million live in low- and middle-income countries. Of those with hearing loss, it began during childhood for 65 million. Those who use sign language and are members of Deaf culture may see themselves as having a difference rather than a disability. Many members of Deaf culture oppose attempts to cure deafness, and some within this community view cochlear implants with concern as they have the potential to eliminate their culture. Definition Hearing loss is defined as diminished acuity to sounds which would otherwise be heard normally. The terms hearing impaired or hard of hearing are usually reserved for people who have a relative inability to hear sound in the speech frequencies. Hearing loss occurs when sound waves enter the ears and damage the sensitive tissues. The severity of hearing loss is categorized according to the increase in intensity of sound above the usual level required for the listener to detect it. Deafness is defined as a degree of loss such that a person is unable to understand speech, even in the presence of amplification.
In profound deafness, even the highest intensity sounds produced by an audiometer (an instrument used to measure hearing by producing pure tone sounds through a range of frequencies) may not be detected. In total deafness, no sounds at all, regardless of amplification or method of production, can be heard. Speech perception is another aspect of hearing which involves the perceived clarity of a word rather than the intensity of sound made by the word. In humans, this is usually measured with speech discrimination tests, which measure not only the ability to detect sound, but also the ability to understand speech. There are very rare types of hearing loss that affect speech discrimination alone. One example is auditory neuropathy, a variety of hearing loss in which the outer hair cells of the cochlea are intact and functioning, but sound information is not faithfully transmitted by the auditory nerve to the brain. Use of the terms "hearing impaired", "deaf-mute", or "deaf and dumb" to describe deaf and hard of hearing people is discouraged by many in the deaf community as well as advocacy organizations, as they are offensive to many deaf and hard of hearing people. Hearing standards Human hearing extends in frequency from 20 to 20,000 Hz, and in intensity from 0 dB to 120 dB HL or more. 0 dB does not represent absence of sound, but rather the softest sound an average unimpaired human ear can hear; some people can hear down to −5 or even −10 dB. Sound is generally uncomfortably loud above 90 dB and 115 dB represents the threshold of pain. The ear does not hear all frequencies equally well: hearing sensitivity peaks around 3,000 Hz. There are many qualities of human hearing besides frequency range and intensity that cannot easily be measured quantitatively. However, for many practical purposes, normal hearing is defined by a frequency versus intensity graph, or audiogram, charting sensitivity thresholds of hearing at defined frequencies. Because of the cumulative impact of age and exposure to noise and other acoustic insults, 'typical' hearing may not be normal. Signs and symptoms The presentation is as follows: difficulty using the telephone loss of sound localization difficulty understanding speech, especially of children and women whose voices are of a higher frequency. difficulty understanding speech in the presence of background noise (cocktail party effect) sounds or speech sounding dull, muffled or attenuated need for increased volume on television, radio, music and other audio sources Hearing loss is sensory, but may have accompanying symptoms: pain or pressure in the ears a blocked feeling There may also be accompanying secondary symptoms: hyperacusis, heightened sensitivity with accompanying auditory pain to certain intensities and frequencies of sound, sometimes defined as "auditory recruitment" tinnitus, ringing, buzzing, hissing or other sounds in the ear when no external sound is present vertigo and disequilibrium tympanophonia, also known as autophonia, abnormal hearing of one's own voice and respiratory sounds, usually as a result of a patulous (a constantly open) eustachian tube or dehiscent superior semicircular canals disturbances of facial movement (indicating a possible tumour or stroke) or in persons with Bell's palsy Complications Hearing loss is associated with Alzheimer's disease and dementia. The risk increases with the hearing loss degree. 
There are several hypotheses, including cognitive resources being redistributed to hearing, and social isolation from hearing loss having a negative effect. According to preliminary data, hearing aid usage can slow down the decline in cognitive functions. Hearing loss is responsible for causing thalamocortical dysrhythmia in the brain, which is a cause of several neurological disorders including tinnitus and visual snow syndrome. Cognitive decline Hearing loss is an increasing concern, especially in aging populations. The prevalence of hearing loss increases about two-fold for each decade increase in age after age 40. While the secular trend might decrease individual-level risk of developing hearing loss, the prevalence of hearing loss is expected to rise due to the aging population in the US. Another concern about the aging process is cognitive decline, which may progress to mild cognitive impairment and eventually dementia. The association between hearing loss and cognitive decline has been studied in various research settings. Despite the variability in study design and protocols, the majority of these studies have found a consistent association between age-related hearing loss and cognitive decline, cognitive impairment, and dementia. The association between age-related hearing loss and Alzheimer's disease was found to be nonsignificant, and this finding supports the hypothesis that hearing loss is associated with dementia independent of Alzheimer pathology. There are several hypotheses about the underlying causal mechanism for age-related hearing loss and cognitive decline. One hypothesis is that this association can be explained by common etiology or shared neurobiological pathology with decline in other physiological systems. Another possible cognitive mechanism emphasizes the individual's cognitive load. As people develop hearing loss in the process of aging, the cognitive load demanded by auditory perception increases, which may lead to changes in brain structure and eventually to dementia. One other hypothesis suggests that the association between hearing loss and cognitive decline is mediated through various psychosocial factors, such as a decrease in social contact and an increase in social isolation. Findings on the association between hearing loss and dementia have significant public health implications, since about 9% of dementia cases are associated with hearing loss. Falls People with hearing loss are at a higher risk of falling. There is also a potential dose–response relationship between hearing loss and falls—greater severity of hearing loss is associated with increased difficulties in postural control and increased prevalence of falls. The causal link underlying the association between hearing loss and falls is yet to be elucidated. There are several hypotheses that indicate that there may be a common process between decline in the auditory system and an increase in incident falls, driven by physiological, cognitive, and behavioral factors. This evidence suggests that treating hearing loss has potential to increase health-related quality of life in older adults. Falls have important health implications, especially for an aging population where they can lead to significant morbidity and mortality. Elderly people are particularly vulnerable to the consequences of injuries caused by falls, since older individuals typically have greater bone fragility and poorer protective reflexes. Fall-related injury can also lead to burdens on the financial and health care systems.
In the literature, age-related hearing loss has been found to be significantly associated with incident falls. Depression Hearing loss can contribute to a decrease in health-related quality of life, an increase in social isolation and a decline in social engagement, which are all risk factors for an increased risk of developing depression symptoms. Depression is one of the leading causes of morbidity and mortality worldwide. In older adults, the suicide rate is higher than it is for younger adults, and more suicide cases are attributable to depression. Some chronic diseases have been found to be significantly associated with the risk of developing depression, such as coronary heart disease, pulmonary disease, vision loss and hearing loss. Spoken language ability Prelingual deafness is profound hearing loss that is sustained before the acquisition of language, which can occur due to a congenital condition or through hearing loss before birth or in early infancy. Prelingual deafness impairs a child's ability to acquire a spoken language, but deaf children can acquire spoken language through support from cochlear implants (sometimes combined with hearing aids). Non-signing (hearing) parents of deaf babies (90–95% of cases) usually go with an oral approach without the support of sign language, as these families lack previous experience with sign language and cannot competently provide it to their children without learning it themselves. This may in some cases (late implantation or insufficient benefit from cochlear implants) bring the risk of language deprivation for the deaf baby, because the baby would not have a sign language to fall back on if unable to acquire spoken language successfully. The 5–10% of deaf babies born into signing families have the potential for age-appropriate development of language due to early exposure to a sign language by sign-competent parents; thus they have the potential to meet language milestones, in sign language in lieu of spoken language. Post-lingual deafness is hearing loss that is sustained after the acquisition of language, which can occur due to disease, trauma, or as a side-effect of a medicine. Typically, hearing loss is gradual and often detected by family and friends of affected individuals long before the patients themselves will acknowledge the disability. Post-lingual deafness is far more common than pre-lingual deafness. Those who lose their hearing later in life, such as in late adolescence or adulthood, face their own challenges, living with the adaptations that allow them to live independently. Causes Hearing loss has multiple causes, including ageing, genetics, perinatal problems and acquired causes like noise and disease. For some kinds of hearing loss, the cause is unknown. There is a progressive loss of ability to hear high frequencies with aging known as presbycusis. For men, this can start as early as age 25, and for women at 30. Although genetically variable, it is a normal concomitant of ageing and is distinct from hearing losses caused by noise exposure, toxins or disease agents. Common conditions that can increase the risk of hearing loss in elderly people are high blood pressure, diabetes, or the use of certain medications harmful to the ear. While everyone loses hearing with age, the amount and type of hearing loss is variable. Noise-induced hearing loss (NIHL), also known as acoustic trauma, typically manifests as elevated hearing thresholds (i.e. less sensitivity or muting).
Noise exposure is the cause of approximately half of all cases of hearing loss, causing some degree of problems in 5% of the population globally. The majority of hearing loss is not due to age, but due to noise exposure. Various governmental, industry and standards organizations set noise standards. Many people are unaware of the presence of environmental sound at damaging levels, or of the level at which sound becomes harmful. Common sources of damaging noise levels include car stereos, children's toys, motor vehicles, crowds, lawn and maintenance equipment, power tools, gun use, musical instruments, and even hair dryers. Noise damage is cumulative; all sources of damage must be considered to assess risk. In the US, 12.5% of children aged 6–19 years have permanent hearing damage from excessive noise exposure. The World Health Organization estimates that half of those between 12 and 35 are at risk from using personal audio devices that are too loud. Hearing loss in adolescents may be caused by loud noise from toys, music by headphones, and concerts or events. Hearing loss can be inherited. Around 75–80% of all these cases are inherited by recessive genes, 20–25% are inherited by dominant genes, 1–2% are inherited by X-linked patterns, and fewer than 1% are inherited by mitochondrial inheritance. Syndromic deafness occurs when there are other signs or medical problems aside from deafness in an individual, such as Usher syndrome, Stickler syndrome, Waardenburg syndrome, Alport's syndrome, and neurofibromatosis type 2. Nonsyndromic deafness occurs when there are no other signs or medical problems associated with the deafness in an individual. Fetal alcohol spectrum disorders are reported to cause hearing loss in up to 64% of infants born to alcoholic mothers, from the ototoxic effect on the developing fetus plus malnutrition during pregnancy from the excess alcohol intake. Premature birth can be associated with sensorineural hearing loss because of an increased risk of hypoxia, hyperbilirubinaemia, ototoxic medication and infection as well as noise exposure in the neonatal units. Also, hearing loss in premature babies is often discovered far later than a similar hearing loss would be in a full-term baby because normally babies are given a hearing test within 48 hours of birth, but doctors must wait until the premature baby is medically stable before testing hearing, which can be months after birth. The risk of hearing loss is greatest for those weighing less than 1500 g at birth. Disorders responsible for hearing loss include auditory neuropathy, Down syndrome, Charcot–Marie–Tooth disease variant 1E, autoimmune disease, multiple sclerosis, meningitis, cholesteatoma, otosclerosis, perilymph fistula, Ménière's disease, recurring ear infections, strokes, superior semicircular canal dehiscence, Pierre Robin, Treacher-Collins, Usher Syndrome, Pendred Syndrome, and Turner syndrome, syphilis, vestibular schwannoma, and viral infections such as measles, mumps, congenital rubella (also called German measles) syndrome, several varieties of herpes viruses, HIV/AIDS, and West Nile virus. Some medications may reversibly or irreversibly affect hearing. These medications are considered ototoxic. This includes loop diuretics such as furosemide and bumetanide, non-steroidal anti-inflammatory drugs (NSAIDs) both over-the-counter (aspirin, ibuprofen, naproxen) as well as prescription (celecoxib, diclofenac, etc.), paracetamol, quinine, and macrolide antibiotics. Others may cause permanent hearing loss. 
The most important group is the aminoglycosides (main member gentamicin) and platinum-based chemotherapeutics such as cisplatin and carboplatin. In addition to medications, hearing loss can also result from specific chemicals in the environment: metals, such as lead; solvents, such as toluene (found in crude oil, gasoline and automobile exhaust, for example); and asphyxiants. Combined with noise, these ototoxic chemicals have an additive effect on a person's hearing loss. Hearing loss due to chemicals starts in the high frequency range and is irreversible. It damages the cochlea with lesions and degrades central portions of the auditory system. For some ototoxic chemical exposures, particularly styrene, the risk of hearing loss can be higher than being exposed to noise alone. The effect is greatest when the combined exposure includes impulse noise. A 2018 informational bulletin by the US Occupational Safety and Health Administration (OSHA) and the National Institute for Occupational Safety and Health (NIOSH) introduces the issue, provides examples of ototoxic chemicals, lists the industries and occupations at risk and provides prevention information. There can be damage either to the ear, whether the external or middle ear, to the cochlea, or to the brain centers that process the aural information conveyed by the ears. Damage to the middle ear may include fracture and discontinuity of the ossicular chain. Damage to the inner ear (cochlea) may be caused by temporal bone fracture. People who sustain head injury are especially vulnerable to hearing loss or tinnitus, either temporary or permanent. Pathophysiology Sound waves reach the outer ear and are conducted down the ear canal to the eardrum, causing it to vibrate. The vibrations are transferred by the three tiny ear bones of the middle ear to the fluid in the inner ear. The fluid moves hair cells (stereocilia), and their movement generates nerve impulses, which are then carried to the brain by the cochlear nerve. The auditory nerve takes the impulses to the brainstem, which sends the impulses to the midbrain. Finally, the signal goes to the auditory cortex of the temporal lobe to be interpreted as sound. Hearing loss is most commonly caused by long-term exposure to loud noises, from recreation or from work, that damage the hair cells, which do not grow back on their own. Older people may lose their hearing from long exposure to noise, changes in the inner ear, changes in the middle ear, or from changes along the nerves from the ear to the brain. Diagnosis Identification of a hearing loss is usually conducted by a general practitioner medical doctor, otolaryngologist, certified and licensed audiologist, school or industrial audiometrist, or other audiometric technician. Diagnosis of the cause of a hearing loss is carried out by a specialist physician (audiovestibular physician) or otorhinolaryngologist. Hearing loss is generally measured by playing generated or recorded sounds, and determining whether the person can hear them. Hearing sensitivity varies according to the frequency of sounds. To take this into account, hearing sensitivity can be measured for a range of frequencies and plotted on an audiogram. Another method for quantifying hearing loss is a hearing test using a mobile application or a hearing aid application that includes a hearing test. Hearing diagnosis using a mobile application is similar to the audiometry procedure. Audiograms, obtained using mobile applications, can be used to adjust hearing aid applications.
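As an illustration of how an audiogram relates to the severity grades quoted earlier, the short Python sketch below averages pure-tone thresholds and maps the result onto those ranges. The choice of test frequencies, the function names and the "normal hearing" label for averages of 25 dB HL or better are assumptions made for this example, not part of any formal standard cited in this article.

    def pure_tone_average(thresholds_db_hl):
        # Average of hearing thresholds in dB HL, e.g. measured at 500, 1000,
        # 2000 and 4000 Hz on an audiogram.
        return sum(thresholds_db_hl) / len(thresholds_db_hl)

    def severity(pta_db_hl):
        # Ranges follow the grading quoted earlier in this article.
        if pta_db_hl <= 25:
            return "normal hearing"
        if pta_db_hl <= 40:
            return "mild"
        if pta_db_hl <= 55:
            return "moderate"
        if pta_db_hl <= 70:
            return "moderate-severe"
        if pta_db_hl <= 90:
            return "severe"
        return "profound"

    # Illustrative thresholds at 500, 1000, 2000 and 4000 Hz (average 45 dB HL).
    print(severity(pure_tone_average([30, 35, 50, 65])))   # -> "moderate"

A clinical grading would normally be applied per ear and may weight frequencies differently, so this sketch only shows the arithmetic, not a diagnostic procedure.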
Another method for quantifying hearing loss is a speech-in-noise test, which gives an indication of how well one can understand speech in a noisy environment. An otoacoustic emissions test is an objective hearing test that may be administered to toddlers and children too young to cooperate in a conventional hearing test. Auditory brainstem response testing is an electrophysiological test used to test for hearing deficits caused by pathology within the ear, the cochlear nerve and also within the brainstem. A case history (usually a written form, with questionnaire) can provide valuable information about the context of the hearing loss, and indicate what kind of diagnostic procedures to employ. Examinations include otoscopy, tympanometry, and differential testing with the Weber, Rinne, Bing and Schwabach tests. In case of infection or inflammation, blood or other body fluids may be submitted for laboratory analysis. MRI and CT scans can be useful to identify the pathology of many causes of hearing loss. Hearing loss is categorized by severity, type, and configuration. Furthermore, a hearing loss may exist in only one ear (unilateral) or in both ears (bilateral). Hearing loss can be temporary or permanent, sudden or progressive. The severity of a hearing loss is ranked according to the ranges of nominal thresholds within which a sound must fall before it can be detected by an individual. It is measured in decibels of hearing loss, or dB HL. There are three main types of hearing loss: conductive hearing loss, sensorineural hearing loss, and mixed hearing loss. An additional problem which is increasingly recognised is auditory processing disorder, which is not a hearing loss as such but a difficulty perceiving sound. The shape of an audiogram shows the relative configuration of the hearing loss, such as a Carhart notch for otosclerosis, a "noise" notch for noise-induced damage, high frequency rolloff for presbycusis, or a flat audiogram for conductive hearing loss. In conjunction with speech audiometry, it may indicate central auditory processing disorder, or the presence of a schwannoma or other tumor. People with unilateral hearing loss or single-sided deafness (SSD) have difficulty in hearing conversation on their impaired side, localizing sound, and understanding speech in the presence of background noise. One reason for the hearing problems these patients often experience is the head shadow effect. Idiopathic sudden hearing loss is a condition in which a person has an immediate decrease in the sensitivity of their sensorineural hearing that does not have a known cause. This type of loss is usually only on one side (unilateral) and the severity of the loss varies. A common threshold of a "loss of at least 30 dB in three connected frequencies within 72 hours" is sometimes used; however, there is no universal definition or international consensus for diagnosing idiopathic sudden hearing loss. Prevention It is estimated that half of cases of hearing loss are preventable. About 60% of hearing loss in children under the age of 15 can be avoided. There are a number of effective preventative strategies, including: immunization against rubella to prevent congenital rubella syndrome, immunization against H. influenza and S. pneumoniae to reduce cases of meningitis, and avoiding or protecting against excessive noise exposure. The World Health Organization also recommends immunization against measles, mumps, and meningitis, efforts to prevent premature birth, and avoidance of certain medication as prevention.
World Hearing Day is a yearly event to promote actions to prevent hearing damage. Avoiding exposure to loud noise can help prevent noise-induced hearing loss. 18% of adults exposed to loud noise at work for five years or more report hearing loss in both ears, as compared to 5.5% of adults who were not exposed to loud noise at work. Different programs exist for specific populations such as school-age children, adolescents and workers. However, provision of hearing protection devices without individual selection, training and fit testing does not significantly reduce the risk of hearing loss. The use of antioxidants is being studied for the prevention of noise-induced hearing loss, particularly for scenarios in which noise exposure cannot be reduced, such as during military operations. Workplace noise regulation Noise is widely recognized as an occupational hazard. In the United States, the National Institute for Occupational Safety and Health (NIOSH) and the Occupational Safety and Health Administration (OSHA) work together to provide standards and enforcement on workplace noise levels. The hierarchy of hazard controls demonstrates the different levels of controls to reduce or eliminate exposure to noise and prevent hearing loss, including engineering controls and personal protective equipment (PPE). Other programs and initiatives have been created to prevent hearing loss in the workplace. For example, the Safe-in-Sound Award was created to recognize organizations that can demonstrate results of successful noise control and other interventions. Additionally, the Buy Quiet program was created to encourage employers to purchase quieter machinery and tools. By purchasing less noisy power tools like those found on the NIOSH Power Tools Database and limiting exposure to ototoxic chemicals, great strides can be made in preventing hearing loss. Companies can also provide personal hearing protector devices tailored to both the worker and type of employment. Some hearing protectors universally block out all noise, and some allow for certain noises to be heard. Workers are more likely to wear hearing protector devices when they are properly fitted. Interventions to prevent noise-induced hearing loss often have many components. A 2017 Cochrane review found that stricter legislation might reduce noise levels. Providing workers with information on their sound exposure levels was not shown to decrease exposure to noise. Ear protection, if used correctly, can reduce noise to safer levels, but often, providing it is not sufficient to prevent hearing loss. Engineering noise out and other solutions such as proper maintenance of equipment can lead to noise reduction, but further field studies on resulting noise exposures following such interventions are needed. Other possible solutions include improved enforcement of existing legislation and better implementation of well-designed prevention programmes, which have not yet been proven conclusively to be effective. The conclusion of the Cochrane review was that further research could modify what is now known regarding the effectiveness of the evaluated interventions. The Institute for Occupational Safety and Health of the German Social Accident Insurance has created a hearing impairment calculator based on the ISO 1999 model for studying threshold shift in relatively homogeneous groups of people, such as workers with the same type of job. The ISO 1999 model estimates how much hearing impairment in a group can be ascribed to age and noise exposure.
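Both the workplace limits mentioned above and models such as ISO 1999 start from the same basic exposure arithmetic: a measured A-weighted level and an exposure duration are normalised to an 8-hour equivalent, and the allowable exposure time halves for every fixed step (the exchange rate) above a criterion level. The sketch below illustrates this arithmetic only; the criterion levels and exchange rates shown (85 dBA with a 3 dB exchange rate, commonly cited for NIOSH's recommended limit, and 90 dBA with a 5 dB exchange rate, commonly cited for OSHA's permissible limit) are illustrative assumptions rather than a restatement of either agency's full rules.

```python
import math

# Illustrative occupational noise exposure arithmetic. The criterion levels
# and exchange rates below (85 dBA / 3 dB, 90 dBA / 5 dB) are commonly cited
# values used here purely for illustration.

def eight_hour_equivalent(level_dba, hours):
    """Normalize a steady A-weighted level over `hours` to an 8-hour equivalent."""
    return level_dba + 10 * math.log10(hours / 8)

def allowable_hours(level_dba, criterion_dba, exchange_rate_db):
    """Hours of exposure at `level_dba` that amount to a 100% daily noise dose."""
    return 8 / (2 ** ((level_dba - criterion_dba) / exchange_rate_db))

level, hours = 94.0, 4.0
print(f"L_EX,8h = {eight_hour_equivalent(level, hours):.1f} dBA")
print(f"Allowable time at {level} dBA (85 dBA, 3 dB rate): {allowable_hours(level, 85, 3):.2f} h")
print(f"Allowable time at {level} dBA (90 dBA, 5 dB rate): {allowable_hours(level, 90, 5):.2f} h")
```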
The ISO 1999 estimate is calculated via an algebraic equation that uses the A-weighted sound exposure level, how many years the people were exposed to this noise, how old the people are, and their sex. The model's estimations are only useful for people without hearing loss due to non-job-related exposure and can be used for prevention activities. Screening The United States Preventive Services Task Force recommends neonatal hearing screening for all newborns, as the first three years of life are believed to be the most important for language development. Universal neonatal hearing screenings have now been widely implemented across the U.S., with rates of newborn screening increasing from less than 3% in the early 1990s to 98% in 2009. Newborns whose screening reveals a high index of suspicion of hearing loss are referred for additional diagnostic testing with the goal of providing early intervention and access to language. The American Academy of Pediatrics advises that children should have their hearing tested several times throughout their schooling: when they enter school; at ages 6, 8, and 10; at least once during middle school; and at least once during high school. While the American College of Physicians indicated that there is not enough evidence to determine the utility of screening in adults over 50 years old who do not have any symptoms, the American Speech-Language-Hearing Association recommends that adults should be screened at least every decade through age 50 and at three-year intervals thereafter, to minimize the detrimental effects of the untreated condition on quality of life. For the same reason, the US Office of Disease Prevention and Health Promotion included among its Healthy People 2020 objectives the goal of increasing the proportion of persons who have had a hearing examination. Management Management depends on the specific cause, if known, as well as the extent, type and configuration of the hearing loss. Sudden hearing loss due to an underlying nerve problem may be treated with corticosteroids. Most hearing loss, such as that resulting from age and noise, is progressive and irreversible, and there are currently no approved or recommended treatments. A few specific kinds of hearing loss are amenable to surgical treatment. In other cases, treatment is addressed to underlying pathologies, but any hearing loss incurred may be permanent. Some management options include hearing aids, cochlear implants, middle ear implants, assistive technology, and closed captioning; in movie theaters, a Hearing Impaired (HI) audio track may be available via headphones to better hear dialog. This choice depends on the level of hearing loss, type of hearing loss, and personal preference. Hearing aid applications are one of the options for hearing loss management. For people with bilateral hearing loss, it is not clear if bilateral hearing aids (hearing aids in both ears) are better than a unilateral hearing aid (hearing aid in one ear). Idiopathic sudden hearing loss For people with idiopathic sudden hearing loss, different treatment approaches have been suggested that are usually based on the suspected cause of the sudden hearing loss. Treatment approaches may include corticosteroid medications, rheological drugs, vasodilators, anesthetics, and other medications chosen based on the suspected underlying pathology that caused the sudden hearing loss.
The evidence supporting most treatment options for idiopathic sudden hearing loss is very weak, and the adverse effects of these different medications are a consideration when deciding on a treatment approach. Epidemiology Globally, hearing loss affects about 10% of the population to some degree. It caused moderate to severe disability in 124.2 million people as of 2004 (107.9 million of whom are in low and middle income countries). Of these, 65 million acquired the condition during childhood. At birth, approximately 3 per 1000 in developed countries and more than 6 per 1000 in developing countries have hearing problems. Hearing loss increases with age. In those between 20 and 35, rates of hearing loss are 3%, while in those 44 to 55 it is 11% and in those 65 to 85 it is 43%. A 2017 report by the World Health Organization estimated the costs of unaddressed hearing loss and the cost-effectiveness of interventions, for the health-care sector, for the education sector and as broad societal costs. Globally, the annual cost of unaddressed hearing loss was estimated to be in the range of $750–790 billion international dollars. The International Organization for Standardization (ISO) developed the ISO 1999 standards for the estimation of hearing thresholds and noise-induced hearing impairment. They used data from two noise and hearing study databases, one presented by Burns and Robinson (Hearing and Noise in Industry, Her Majesty's Stationery Office, London, 1970) and the other by Passchier-Vermeer (1968). As factors such as race can affect the expected distribution of pure-tone hearing thresholds, several other national or regional datasets exist, from Sweden, Norway, South Korea, the United States and Spain. In the United States, hearing is one of the health outcomes measured by the National Health and Nutrition Examination Survey (NHANES), a survey research program conducted by the National Center for Health Statistics. It examines the health and nutritional status of adults and children in the United States. Data from the United States in 2011–2012 found that rates of hearing loss had declined among adults aged 20 to 69 years, when compared with the results from an earlier time period (1999–2004). It also found that adult hearing loss is associated with increasing age, sex, ethnicity, educational level, and noise exposure. Nearly one in four adults had audiometric results suggesting noise-induced hearing loss. Almost one in four adults who reported excellent or good hearing had a similar pattern (5.5% on both sides and 18% on one side). Among people who reported exposure to loud noise at work, almost one third had such changes. Social and cultural aspects People with extreme hearing loss may communicate through sign languages. Sign languages convey meaning through manual communication and body language instead of acoustically conveyed sound patterns. This involves the simultaneous combination of hand shapes, orientation and movement of the hands, arms or body, and facial expressions to express a speaker's thoughts. "Sign languages are based on the idea that vision is the most useful tool a deaf person has to communicate and receive information". Deaf culture refers to a tight-knit cultural group of people whose primary language is signed, and who practice social and cultural norms which are distinct from those of the surrounding hearing community. This community does not automatically include all those who are clinically or legally deaf, nor does it exclude every hearing person.
According to Baker and Padden, it includes any person or persons who "identifies him/herself as a member of the Deaf community, and other members accept that person as a part of the community," an example being children of deaf adults with normal hearing ability. It includes the set of social beliefs, behaviors, art, literary traditions, history, values, and shared institutions of communities that are influenced by deafness and which use sign languages as the main means of communication. Members of the Deaf community tend to view deafness as a difference in human experience rather than a disability or disease. When used as a cultural label, especially within the culture, the word deaf is often written with a capital D and referred to as "big D Deaf" in speech and sign. When used as a label for the audiological condition, it is written with a lower case d. There are also multiple educational institutions for both deaf and Deaf people, which usually use sign language as the main language of instruction. Famous institutions include Gallaudet University and the National Technical Institute for the Deaf in the US, and the National University Corporation of Tsukuba University of Technology in Japan. Research Stem cell transplant and gene therapy A 2005 study achieved successful regrowth of cochlea cells in guinea pigs. However, the regrowth of cochlear hair cells does not imply the restoration of hearing sensitivity, as the sensory cells may or may not make connections with neurons that carry the signals from hair cells to the brain. A 2008 study has shown that gene therapy targeting Atoh1 can cause hair cell growth and attract neuronal processes in embryonic mice. Some hope that a similar treatment will one day ameliorate hearing loss in humans. Research reported in 2012 achieved growth of cochlear nerve cells resulting in hearing improvements in gerbils by using stem cells. Also reported, in 2013, was regrowth of hair cells in deaf adult mice using a drug intervention, resulting in hearing improvement. The Hearing Health Foundation in the US has embarked on a project called the Hearing Restoration Project. Action on Hearing Loss in the UK is also aiming to restore hearing. Researchers reported in 2015 that genetically deaf mice which were treated with TMC1 gene therapy recovered some of their hearing. In 2017, additional studies were performed to treat Usher syndrome, and here a recombinant adeno-associated virus seemed to outperform the older vectors. Audition Besides research studies seeking to improve hearing, such as the ones listed above, research studies on the deaf have also been carried out in order to understand more about audition. Pijil and Shwarz (2005) conducted their study on the deaf who lost their hearing later in life and, hence, used cochlear implants to hear. They discovered further evidence for rate coding of pitch, a system that codes frequency information by the rate at which neurons fire in the auditory system, especially for lower frequencies, which are coded by neurons of the basilar membrane firing synchronously at the stimulus frequency. Their results showed that the subjects could identify different pitches that were proportional to the frequency stimulated by a single electrode. The lower frequencies were detected when the basilar membrane was stimulated, providing even further evidence for rate coding.
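The idea of rate coding can be made concrete with a toy simulation: if an idealised neuron fires once per cycle of a low-frequency tone, the tone's pitch can be recovered from the firing pattern alone. The sketch below is purely illustrative and is not a model of the cited study or of real auditory-nerve behaviour.

```python
# Toy illustration of rate coding: a neuron that fires once per cycle of a
# low-frequency tone carries the tone's pitch in its firing pattern alone.
# An illustrative simplification, not a model of the cited study.

def spike_times(tone_hz, duration_s):
    """Spike times (s) for an idealised neuron phase-locked to the tone."""
    period = 1.0 / tone_hz
    n_spikes = int(duration_s * tone_hz)
    return [i * period for i in range(n_spikes)]

def estimated_pitch(spikes):
    """Recover the pitch from the mean inter-spike interval."""
    intervals = [b - a for a, b in zip(spikes, spikes[1:])]
    return 1.0 / (sum(intervals) / len(intervals))

spikes = spike_times(tone_hz=220.0, duration_s=0.1)
print(f"Estimated pitch from firing pattern: {estimated_pitch(spikes):.1f} Hz")
```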
Menopause
Menopause, also known as the climacteric, is the time when menstrual periods permanently stop, marking the end of reproduction. It typically occurs between the ages of 45 and 55, although the exact timing can vary. Menopause is usually a natural change related to a decrease in circulating blood estrogen levels. It can occur earlier in those who smoke tobacco. Other causes include surgery that removes both ovaries, some types of chemotherapy, or anything that leads to a decrease in hormone levels. At the physiological level, menopause happens because of a decrease in the ovaries' production of the hormones estrogen and progesterone. While typically not needed, a diagnosis of menopause can be confirmed by measuring hormone levels in the blood or urine. Menopause is the opposite of menarche, the time when a girl's periods start. In the years before menopause, a woman's periods typically become irregular, which means that periods may be longer or shorter in duration or be lighter or heavier in the amount of flow. During this time, women often experience hot flashes; these typically last from 30 seconds to ten minutes and may be associated with shivering, night sweats, and reddening of the skin. Hot flashes can recur for four to five years. Other symptoms may include vaginal dryness, trouble sleeping, and mood changes. The severity of symptoms varies between women. Menopause before the age of 45 years is considered to be "early menopause" and when ovarian failure/surgical removal of the ovaries occurs before the age of 40 years this is termed "premature ovarian insufficiency". In addition to symptoms (hot flushes/flashes, night sweats, mood changes, arthralgia and vaginal dryness), the physical consequences of menopause include bone loss, increased central abdominal fat, and adverse changes in a woman's cholesterol profile and vascular function. These changes predispose postmenopausal women to increased risks of osteoporosis and bone fracture, and of cardio-metabolic disease (diabetes and cardiovascular disease). Medical professionals often define menopause as having occurred when a woman has not had any menstrual bleeding for a year. It may also be defined by a decrease in hormone production by the ovaries. In those who have had surgery to remove their uterus but still have functioning ovaries, menopause is not considered to have yet occurred. Following the removal of the uterus, symptoms of menopause typically occur earlier. Iatrogenic menopause occurs when both ovaries are surgically removed (Oophorectomy) along with uterus for medical reasons. The primary indications for treatment of menopause are symptoms and prevention of bone loss. Mild symptoms may be improved with treatment. With respect to hot flashes, avoiding smoking, caffeine, and alcohol is often recommended; sleeping naked in a cool room and using a fan may help. The most effective treatment for menopausal symptoms is menopausal hormone therapy (MHT). Non-hormonal therapies for hot flashes include cognitive-behavioral therapy, clinical hypnosis, gabapentin, fezolinetant or selective serotonin reuptake inhibitors. These will not improve symptoms such as joint pain or vaginal dryness which affect over 55% of women. Exercise may help with sleeping problems. Many of the concerns about the use of MHT raised by older studies are no longer considered barriers to MHT in healthy women. High-quality evidence for the effectiveness of alternative medicine has not been found. 
Signs and symptoms During early menopause transition, the menstrual cycles remain regular but the interval between cycles begins to lengthen. Hormone levels begin to fluctuate. Ovulation may not occur with each cycle. The term menopause refers to a point in time that follows one year after the last menstruation. During the menopausal transition and after menopause, women can experience a wide range of symptoms. However, for women who enter the menopause transition without having regular menstrual cycles (due to prior surgery, other medical conditions or ongoing hormonal contraception) the menopause cannot be identified by bleeding patterns and is defined as the permanent loss of ovarian function. Vagina, uterus and bladder (urogenital tract) During the transition to menopause, menstrual patterns can show shorter cycling (by 2–7 days); longer cycles remain possible. There may be irregular bleeding (lighter, heavier, spotting). Dysfunctional uterine bleeding is often experienced by women approaching menopause due to the hormonal changes that accompany the menopause transition. Spotting or bleeding may simply be related to vaginal atrophy, a benign sore (polyp or lesion), or may be a functional endometrial response. The European Menopause and Andropause Society has released guidelines for assessment of the endometrium, which is usually the main source of spotting or bleeding. In post-menopausal women, however, any unscheduled vaginal bleeding is of concern and requires an appropriate investigation to rule out the possibility of malignant diseases. Urogenital symptoms may appear during menopause and continue through postmenopause and include painful intercourse, vaginal dryness and atrophic vaginitis (thinning of the membranes of the vulva, the vagina, the cervix and the outer urinary tract). There may also be considerable shrinking and loss in elasticity of all of the outer and inner genital areas. Urinary urgency may also occur and urinary incontinence in some women. Other physical effects The most common physical symptoms of menopause are heavy night sweats, and hot flashes (also known as vasomotor symptoms). Sleeping problems and insomnia are also common. Other physical symptoms may be reported that are not specific to menopause but may be exacerbated by it, such as lack of energy, joint soreness, stiffness, back pain, breast enlargement, breast pain, heart palpitations, headache, dizziness, dry, itchy skin, thinning, tingling skin, rosacea, weight gain. Mood and memory effects Psychological symptoms are often reported but they are not specific to menopause and can be caused by other factors. They include anxiety, poor memory, inability to concentrate, depressive mood, irritability, mood swings, and less interest in sexual activity. Menopause-related cognitive impairment can be confused with the mild cognitive impairment that precedes dementia. There is evidence of small decreases in verbal memory, on average, which may be caused by the effects of declining estrogen levels on the brain, or perhaps by reduced blood flow to the brain during hot flashes. However, these tend to resolve for most women during the postmenopause. Subjective reports of memory and concentration problems are associated with several factors, such as lack of sleep, and stress. Long-term effects Cardiovascular health Exposure to endogenous estrogen during reproductive years provides women with protection against cardiovascular disease, which is lost around 10 years after the onset of menopause. 
The menopausal transition is associated with an increase in fat mass (predominantly in visceral fat), an increase in insulin resistance, dyslipidaemia, and endothelial dysfunction. Women with vasomotor symptoms during menopause seem to have an especially unfavorable cardiometabolic profile, as do women with premature onset of menopause (before 45 years of age). These risks can be reduced by managing risk factors, such as tobacco smoking, hypertension, increased blood lipids and body weight. Bone health The annual rates of bone mineral density loss are highest starting one year before the final menstrual period and continuing through the two years after it. Thus, post-menopausal women are at increased risk of osteopenia, osteoporosis and fractures. Causes Menopause is a normal event in a woman's life and a natural part of aging. Menopause can also be induced early. Induced menopause occurs as a result of medical treatment such as chemotherapy, radiotherapy, oophorectomy, or complications of tubal ligation, hysterectomy, unilateral or bilateral salpingo-oophorectomy or leuprorelin usage. Age Menopause typically occurs at some point between 47 and 54 years of age. According to various data, more than 95% of women have their last period between the ages of 44 and 56 (median 49–50). In 2% of women the last bleeding occurs before the age of 40, in 5% between the ages of 40 and 45, and in a similar proportion between the ages of 55 and 58. The average age of the last period is 51 years in the United States, 50 years in Russia, 49 years in Greece, 47 years in Turkey, 47 years in Egypt and 46 years in India. Beyond the influence of genetics, these differences are also due to early-life environmental conditions and associated with epigenetic effects. The menopausal transition or perimenopause leading up to menopause usually lasts 3–4 years (sometimes as long as 5–14 years). Undiagnosed and untreated coeliac disease is a risk factor for early menopause. Coeliac disease can present with several non-gastrointestinal symptoms, in the absence of gastrointestinal symptoms, and most cases escape timely recognition and go undiagnosed, leading to a risk of long-term complications. A strict gluten-free diet reduces the risk. Women with early diagnosis and treatment of coeliac disease have a normal duration of fertile life span. Women who have undergone hysterectomy with ovary conservation go through menopause on average 1.5 years earlier than the expected age. Premature ovarian insufficiency In rare cases, a woman's ovaries stop working at a very early age, ranging anywhere from the age of puberty to age 40. This is known as premature ovarian failure or premature ovarian insufficiency (POI) and affects 1 to 2% of women by age 40. It is diagnosed or confirmed by high blood levels of follicle stimulating hormone (FSH) and luteinizing hormone (LH) on at least three occasions at least four weeks apart. Premature ovarian insufficiency may be related to an autoimmune disorder and therefore might co-occur with other autoimmune disorders such as thyroid disease, adrenal insufficiency, and diabetes mellitus. Other causes include chemotherapy, being a carrier of the fragile X syndrome gene, and radiotherapy. However, in about 50–80% of cases of premature ovarian insufficiency, the cause is unknown, i.e., it is generally idiopathic. Early menopause can be related to cigarette smoking, higher body mass index, racial and ethnic factors, illnesses, and the removal of the uterus.
Surgical menopause Menopause can be surgically induced by bilateral oophorectomy (removal of ovaries), which is often, but not always, done in conjunction with removal of the fallopian tubes (salpingo-oophorectomy) and uterus (hysterectomy). Cessation of menses as a result of removal of the ovaries is called "surgical menopause". Surgical treatments, such as the removal of ovaries, might cause periods to stop altogether. The sudden and complete drop in hormone levels may produce extreme withdrawal symptoms such as hot flashes, etc. The symptoms of early menopause may be more severe. Removal of the uterus without removal of the ovaries does not directly cause menopause, although pelvic surgery of this type can often precipitate a somewhat earlier menopause, perhaps because of a compromised blood supply to the ovaries. The time between surgery and possible early menopause is due to the fact that ovaries are still producing hormones. Mechanism The menopausal transition, and postmenopause itself, is a natural change, not usually a disease state or a disorder. The main cause of this transition is the natural depletion and aging of the finite amount of oocytes (ovarian reserve). This process is sometimes accelerated by other conditions and is known to occur earlier after a wide range of gynecologic procedures such as hysterectomy (with and without ovariectomy), endometrial ablation and uterine artery embolisation. The depletion of the ovarian reserve causes an increase in circulating follicle-stimulating hormone (FSH) and luteinizing hormone (LH) levels because there are fewer oocytes and follicles responding to these hormones and producing estrogen. The transition has a variable degree of effects. The stages of the menopause transition have been classified according to a woman's reported bleeding pattern, supported by changes in the pituitary follicle-stimulating hormone (FSH) levels. In younger women, during a normal menstrual cycle the ovaries produce estradiol, testosterone and progesterone in a cyclical pattern under the control of FSH and luteinizing hormone (LH), which are both produced by the pituitary gland. During perimenopause (approaching menopause), estradiol levels and patterns of production remain relatively unchanged or may increase compared to young women, but the cycles become frequently shorter or irregular. The often observed increase in estrogen is presumed to be in response to elevated FSH levels that, in turn, is hypothesized to be caused by decreased feedback by inhibin. Similarly, decreased inhibin feedback after hysterectomy is hypothesized to contribute to increased ovarian stimulation and earlier menopause. The menopausal transition is characterized by marked, and often dramatic, variations in FSH and estradiol levels. Because of this, measurements of these hormones are not considered to be reliable guides to a woman's exact menopausal status. Menopause occurs because of the sharp decrease of estradiol and progesterone production by the ovaries. After menopause, estrogen continues to be produced mostly by aromatase in fat tissues and is produced in small amounts in many other tissues such as ovaries, bone, blood vessels, and the brain where it acts locally. The substantial fall in circulating estradiol levels at menopause impacts many tissues, from brain to skin. 
In contrast to the sudden fall in estradiol during menopause, the levels of total and free testosterone, as well as dehydroepiandrosterone sulfate (DHEAS) and androstenedione appear to decline more or less steadily with age. An effect of natural menopause on circulating androgen levels has not been observed. Thus specific tissue effects of natural menopause cannot be attributed to loss of androgenic hormone production. Hot flashes and other vasomotor and body symptoms accompanying the menopausal transition are associated with estrogen insufficiency and changes that occur in the brain, primarily the hypothalamus and involve complex interplay between the neurotransmitters kisspeptin, neurokinin B, and dynorphin, which are found in KNDy neurons in the infundibular nucleus. Ovarian aging Decreased inhibin feedback after hysterectomy is hypothesized to contribute to increased ovarian stimulation and earlier menopause. Hastened ovarian aging has been observed after endometrial ablation. While it is difficult to prove that these surgeries are causative, it has been hypothesized that the endometrium may be producing endocrine factors contributing to the endocrine feedback and regulation of the ovarian stimulation. Elimination of these factors contributes to faster depletion of the ovarian reserve. Reduced blood supply to the ovaries that may occur as a consequence of hysterectomy and uterine artery embolisation has been hypothesized to contribute to this effect. Impaired DNA repair mechanisms may contribute to earlier depletion of the ovarian reserve during aging. As women age, double-strand breaks accumulate in the DNA of their primordial follicles. Primordial follicles are immature primary oocytes surrounded by a single layer of granulosa cells. An enzyme system is present in oocytes that ordinarily accurately repairs DNA double-strand breaks. This repair system is called "homologous recombinational repair", and it is especially effective during meiosis. Meiosis is the general process by which germ cells are formed in all sexual eukaryotes; it appears to be an adaptation for efficiently removing damages in germ line DNA. Human primary oocytes are present at an intermediate stage of meiosis, termed prophase I (see Oogenesis). Expression of four key DNA repair genes that are necessary for homologous recombinational repair during meiosis (BRCA1, MRE11, Rad51, and ATM) decline with age in oocytes. This age-related decline in ability to repair DNA double-strand damages can account for the accumulation of these damages, that then likely contributes to the depletion of the ovarian reserve. Diagnosis Ways of assessing the impact on women of some of these menopause effects, include the Greene climacteric scale questionnaire, the Cervantes scale and the Menopause rating scale. Perimenopause The term "perimenopause", which literally means "around the menopause", refers to the menopause transition years before the date of the final episode of flow. According to the North American Menopause Society, this transition can last for four to eight years. The Centre for Menstrual Cycle and Ovulation Research describes it as a six- to ten-year phase ending 12 months after the last menstrual period. During perimenopause, estrogen levels average about 20–30% higher than during premenopause, often with wide fluctuations. These fluctuations cause many of the physical changes during perimenopause as well as menopause, especially during the last 1–2 years of perimenopause (before menopause). 
Some of these changes are hot flashes, night sweats, difficulty sleeping, mood swings, vaginal dryness or atrophy, incontinence, osteoporosis, and heart disease. Perimenopause is also associated with a higher likelihood of depression (affecting from 45 percent to 68 percent of perimenopausal women), which is twice as likely to affect those with a history of depression. During this period, fertility diminishes but is not considered to reach zero until the official date of menopause. The official date is determined retroactively, once 12 months have passed after the last appearance of menstrual blood. The menopause transition typically begins between 40 and 50 years of age (average 47.5). The duration of perimenopause may be for up to eight years. Women will often, but not always, start these transitions (perimenopause and menopause) about the same time as their mother did. Some research appears to show that melatonin supplementation in perimenopausal women can improve thyroid function and gonadotropin levels, as well as restoring fertility and menstruation and preventing depression associated with menopause. Postmenopause The term "postmenopausal" describes women who have not experienced any menstrual flow for a minimum of 12 months, assuming that they have a uterus and are not pregnant or lactating. The reason for this delay in declaring postmenopause is that periods are usually erratic during menopause. Therefore, a reasonably long stretch of time is necessary to be sure that the cycling has ceased. At this point a woman is considered infertile; however, the possibility of becoming pregnant has usually been very low (but not quite zero) for a number of years before this point is reached. In women with or without a uterus, menopause or postmenopause can also be identified by a blood test showing a very high follicle-stimulating hormone level, greater than 25 IU/L in a random blood draw; it rises as ovaries become inactive. FSH continues to rise, as its counterpart estradiol continues to drop, for about 2 years after the last menstrual period, after which the levels of each of these hormones stabilize. The stabilization period after the beginning of early postmenopause has been estimated to last 3 to 6 years, so early postmenopause lasts altogether about 5 to 8 years, during which hormone withdrawal effects such as hot flashes disappear. Finally, late postmenopause has been defined as the remainder of a woman's lifespan, when reproductive hormones do not change any more. A period-like flow during postmenopause, even spotting, may be a sign of endometrial cancer. Management Perimenopause is a natural stage of life. It is not a disease or a disorder. Therefore, it does not automatically require any kind of medical treatment. However, in those cases where the physical, mental, and emotional effects of perimenopause are strong enough that they significantly disrupt the life of the woman experiencing them, palliative medical therapy may sometimes be appropriate. Menopausal hormone therapy In the context of the menopause, menopausal hormone therapy (MHT) is the use of estrogen in women without a uterus and estrogen plus progestogen in women who have an intact uterus. MHT may be reasonable for the treatment of menopausal symptoms, such as hot flashes. It is the most effective treatment option, especially when delivered as a skin patch. Its use, however, appears to increase the risk of strokes and blood clots.
When used for menopausal symptoms, the global recommendation is that MHT should be prescribed for as long as there are defined treatment effects and goals for the individual woman. MHT is also effective for preventing bone loss and osteoporotic fracture, but it is generally recommended only for women at significant risk for whom other therapies are unsuitable. MHT may be unsuitable for some women, including those at increased risk of cardiovascular disease, increased risk of thromboembolic disease (such as those with obesity or a history of venous thrombosis) or increased risk of some types of cancer. There is some concern that this treatment increases the risk of breast cancer. Women at increased risk of cardiometabolic disease and VTE may be able to use transdermal estradiol, which does not appear to increase risks in low to moderate doses. Adding testosterone to hormone therapy has a positive effect on sexual function in postmenopausal women, although it may be accompanied by hair growth or acne if used in excess. Transdermal testosterone therapy in appropriate dosing is generally safe. Selective estrogen receptor modulators SERMs are a category of drugs, either synthetically produced or derived from a botanical source, that act selectively as agonists or antagonists on the estrogen receptors throughout the body. The most commonly prescribed SERMs are raloxifene and tamoxifen. Raloxifene exhibits oestrogen agonist activity on bone and lipids, and antagonist activity on breast and the endometrium. Tamoxifen is in widespread use for treatment of hormone-sensitive breast cancer. Raloxifene prevents vertebral fractures in postmenopausal, osteoporotic women and reduces the risk of invasive breast cancer. Other medications Some of the SSRIs and SNRIs appear to provide some relief from vasomotor symptoms. The most effective SSRIs and SNRIs are paroxetine, escitalopram, citalopram, venlafaxine, and desvenlafaxine. They may, however, be associated with appetite and sleeping problems, constipation and nausea. Gabapentin or fezolinetant can also improve the frequency and severity of vasomotor symptoms. Side effects of using gabapentin include drowsiness and headaches. Therapy Cognitive behavioural therapy and clinical hypnosis can decrease the amount women are affected by hot flashes. Mindfulness is not yet proven to be effective in easing vasomotor symptoms. Lifestyle and exercise Exercise has been thought to reduce postmenopausal symptoms through the increase of endorphin levels, which decrease as estrogen production decreases. However, there is insufficient evidence to suggest that exercise helps with the symptoms of menopause. Similarly, yoga has not been shown to be useful as a treatment for vasomotor symptoms. However, a high BMI is a risk factor for vasomotor symptoms in particular. Weight loss may help with symptom management. There is no strong evidence that cooling techniques such as using specific clothing or environment control tools (for example fans) help with symptoms. Paced breathing and relaxation are not effective in easing symptoms. Dietary supplements There is no evidence of consistent benefit of taking any dietary supplements or herbal products for menopausal symptoms. These widely marketed but ineffective supplements include soy isoflavones, pollen extracts, black cohosh and omega-3 fatty acids, among many others. Alternative medicine There is no evidence of consistent benefit of alternative therapies for menopausal symptoms despite their popularity.
As of 2023, there is no evidence to support the efficacy of acupuncture as a management for menopausal symptoms. The Cochrane review found not enough evidence in 2016 to show a difference between Chinese herbal medicine and placebo for the vasomotor symptoms. Other efforts Lack of lubrication is a common problem during and after perimenopause. Vaginal moisturizers can help women with overall dryness, and lubricants can help with lubrication difficulties that may be present during intercourse. It is worth pointing out that moisturizers and lubricants are different products for different issues: some women complain that their genitalia are uncomfortably dry all the time, and they may do better with moisturizers. Those who need only lubricants do well using them only during intercourse. Low-dose prescription vaginal estrogen products such as estrogen creams are generally a safe way to use estrogen topically, to help vaginal thinning and dryness problems (see vaginal atrophy) while only minimally increasing the levels of estrogen in the bloodstream. Individual counseling or support groups can sometimes be helpful to handle sad, depressed, anxious or confused feelings women may be having as they pass through what can be for some a very challenging transition time. Osteoporosis can be minimized by smoking cessation, adequate vitamin D intake and regular weight-bearing exercise. The bisphosphonate drug alendronate may decrease the risk of a fracture, in women that have both bone loss and a previous fracture and less so for those with just osteoporosis. A surgical procedure where a part of one of the ovaries is removed earlier in life and frozen and then over time thawed and returned to the body (ovarian tissue cryopreservation) has been tried. While at least 11 women have undergone the procedure and paid over £6,000, there is no evidence it is safe or effective. Society and culture Attitudes and experiences The menopause transition is a process, involving hormonal, menstrual, and typically vasomotor changes. However, the experience of the menopause as a whole is very much influenced by psychological and social factors, such as past experience, lifestyle, social and cultural meanings of menopause, and a woman's social and material circumstances. Menopause has been described as a biopsychosocial experience, with social and cultural factors playing a prominent role in the way menopause is experienced and perceived. The paradigm within which a woman considers menopause influences the way she views it: women who understand menopause as a medical condition rate it significantly more negatively than those who view it as a life transition or a symbol of aging. There is some evidence that negative attitudes and expectations, held before the menopause, predict symptom experience during the menopause, and beliefs and attitudes toward menopause tend to be more positive in postmenopausal than in premenopausal women. Women with more negative attitudes towards the menopause report more symptoms during this transition. Menopause is a stage of life experienced in different ways. It can be characterized by personal challenges, changes in personal roles within the family and society. Women's approaches to changes during menopause are influenced by their personal, family and sociocultural background. Women from different regions and countries also have different attitudes. Postmenopausal women had more positive attitudes toward menopause compared with peri- or premenopausal women. 
Other influencing factors of attitudes toward menopause include age, menopausal symptoms, psychological and socioeconomical status, and profession and ethnicity. Ethnicity and geography play roles in the experience of menopause. American women of different ethnicities report significantly different types of menopausal effects. One major study found Caucasian women most likely to report what are sometimes described as psychosomatic symptoms, while African-American women were more likely to report vasomotor symptoms. There may be variations in experiences of women from different ethnic backgrounds regarding menopause and care. Immigrant women reported more vasomotor symptoms and other physical symptoms and poorer mental health than non-immigrant women and were mostly dissatisfied with the care they had received. Self-management strategies for menopausal symptoms were also influenced by culture. Two multinational studies of Asian women, found that hot flushes were not the most commonly reported symptoms, instead body and joint aches, memory problems, sleeplessness, irritability and migraines were. In another study comparing experiences of menopause amongst White Australian women and women in Laos, Australian women reported higher rates of depression, as well as fears of aging, weight gain and cancer – fears not reported by Laotian women, who positioned menopause as a positive event. Japanese women experience menopause effects, or kōnenki (更年期), in a different way from American women. Japanese women report lower rates of hot flashes and night sweats; this can be attributed to a variety of factors, both biological and social. Historically, kōnenki was associated with wealthy middle-class housewives in Japan, i.e., it was a "luxury disease" that women from traditional, inter-generational rural households did not report. Menopause in Japan was viewed as a symptom of the inevitable process of aging, rather than a "revolutionary transition", or a "deficiency disease" in need of management. As of 2005, in Japanese culture, reporting of vasomotor symptoms has been on the increase, with research finding that of 140 Japanese participants, hot flashes were prevalent in 22.1%. This was almost double that of 20 years prior. Whilst the exact cause for this is unknown, possible contributing factors include dietary changes, increased medicalisation of middle-aged women and increased media attention on the subject. However, reporting of vasomotor symptoms is still "significantly" lower than in North America. Additionally, while most women in the United States apparently have a negative view of menopause as a time of deterioration or decline, some studies seem to indicate that women from some Asian cultures have an understanding of menopause that focuses on a sense of liberation and celebrates the freedom from the risk of pregnancy. Diverging from these conclusions, one study appeared to show that many American women "experience this time as one of liberation and self-actualization". In some women, menopause may bring about a sense of loss related to the end of fertility. In addition, this change often aligns with other stressors, such as the responsibility of looking after elderly parents or dealing with the emotional challenges of "empty nest syndrome" when children move out of the family home. This situation can be accentuated in cultures where being older is negatively perceived. 
Impact on work Midlife is typically a life stage when men and women may be dealing with demanding life events and responsibilities, such as work, health problems, and caring roles. For example, in 2018 in the UK women aged 45–54 report more work-related stress than men or women of any other age group. Hot flashes are often reported to be particularly distressing at work and lead to embarrassment and worry about potential stigmatisation. A June 2023 study by the Mayo Clinic estimated an annual loss of $1.8 billion in the United States due to workdays missed as a result of menopause symptoms. This was one of the largest studies to date examining the impact of menopause symptoms on work outcomes. The research concluded there was a strong need to improve medical treatment for menopausal women and make the workplace environment more supportive to avoid such productivity losses. Etymology Menopause literally means the "end of monthly cycles" (the end of monthly periods or menstruation), from the Greek word pausis ("pause") and mēn ("month"). This is a medical coinage; the Greek word for menses is actually different. In Ancient Greek, the menses were described in the plural, ("the monthlies"), and its modern descendant has been clipped to ta emmēna. The Modern Greek medical term is emmenopausis in Katharevousa or emmenopausi in Demotic Greek. The Ancient Greeks did not produce medical concepts about any symptoms associated with end of menstruation and did not use a specific word to refer to this time of a woman's life. The word menopause was invented by French doctors at the beginning of the nineteenth century. Greek etymology was reconstructed at this time and it was the Parisian student doctor Charles-Pierre-Louis de Gardanne who invented a variation of the word in 1812, which was edited to its final French form in 1821. Some of them noted that peasant women had no complaints about the end of menses, while urban middle-class women had many troubling symptoms. Doctors at this time considered the symptoms to be the result of urban lifestyles of sedentary behaviour, alcohol consumption, too much time indoors, and over-eating, with a lack of fresh fruit and vegetables. The word "menopause" was coined specifically for female humans, where the end of fertility is traditionally indicated by the permanent stopping of monthly menstruations. However, menopause exists in some other animals, many of which do not have monthly menstruation; in this case, the term means a natural end to fertility that occurs before the end of the natural lifespan. In popular culture, law and politics In the 21st century, celebrities have spoken out about their experiences of the menopause, which has led to it becoming less of a taboo as it has boosted awareness of the debilitating symptoms. Subsequently, TV shows have been running features on the menopause to help women experiencing symptoms. In the UK Lorraine Kelly has been an advocate for getting women to speak about their experiences including sharing her own. This has led to an increase in women seeking treatment such as HRT. Davina McCall also led an awareness campaign based on a documentary on Channel 4. In the UK, Carolyn Harris sponsored the Menopause (Support and Services) Bill in June 2021. 
It was to exempt hormone replacement therapy from National Health Service prescription charges and to make provisions about menopause support and services, including public education and communication in supporting perimenopausal and post-menopausal women, and to raise awareness of menopause and its effects. The bill was withdrawn on 29 October 2021. In the US, David McKinley, Republican from West Virginia introduced the Menopause Research Act in September 2022 for $100 million in 2023 and 2024, but it stalled. Other animals The majority of mammal species reach menopause when they cease the production of ovarian follicles, which contain eggs (oocytes), between one-third and two-thirds of their maximum possible lifespan. However, few live long enough in the wild to reach this point. Humans are joined by a limited number of other species in which females live substantially longer than their ability to reproduce. Examples of others include cetaceans: beluga whales, narwhals, orcas, false killer whales and short-finned pilot whales. Menopause has been reported in a variety of other vertebrate species, but these examples tend to be from captive individuals, and thus are not necessarily representative of what happens in natural populations in the wild. Menopause in captivity has been observed in several species of nonhuman primates, including rhesus monkeys and chimpanzees. Some research suggests that wild chimpanzees do not experience menopause, as their fertility declines are associated with declines in overall health. Menopause has been reported in elephants in captivity and guppies. Dogs do not experience menopause; the canine estrus cycle simply becomes irregular and infrequent. Although older female dogs are not considered good candidates for breeding, offspring have been produced by older animals, see Canine reproduction. Similar observations have been made in cats. Life histories show a varying degree of senescence; rapid senescing organisms (e.g., Pacific salmon and annual plants) do not have a post-reproductive life-stage. Gradual senescence is exhibited by all placental mammalian life histories. Evolution There are various theories on the origin and process of the evolution of the menopause. These attempt to suggest evolutionary benefits to the human species stemming from the cessation of women's reproductive capability before the end of their natural lifespan. It is conjectured that in highly social groups natural selection favors females that stop reproducing and devote that post-reproductive life span to continuing to care for existing offspring, both their own and those of others to whom they are related, especially their granddaughters and grandsons.
Universal Disk Format
Universal Disk Format (UDF) is an open, vendor-neutral file system for computer data storage for a broad range of media. In practice, it has been most widely used for DVDs and newer optical disc formats, supplanting ISO 9660. Due to its design, it is very well suited to incremental updates on both write-once and re-writable optical media. UDF was developed and maintained by the Optical Storage Technology Association (OSTA). In engineering terms, Universal Disk Format is a profile of the specifications known as ISO/IEC 13346 and ECMA-167. Usage Normally, authoring software will master a UDF file system in a batch process and write it to optical media in a single pass. But when packet writing to rewritable media, such as CD-RW, UDF allows files to be created, deleted and changed on-disc just as a general-purpose filesystem would on removable media like floppy disks and flash drives. This is also possible on write-once media, such as CD-R, but in that case the space occupied by the deleted files cannot be reclaimed (and instead becomes inaccessible). Multi-session mastering is also possible in UDF, though some implementations may be unable to read disks with multiple sessions. History The Optical Storage Technology Association standardized the UDF file system to form a common file system for all optical media: both for read-only media and for re-writable optical media. When first standardized, the UDF file system aimed to replace ISO 9660, allowing support for both read-only and writable media. After the release of the first version of UDF, the DVD Consortium adopted it as the official file system for DVD-Video and DVD-Audio. UDF shares the basic volume descriptor format with ISO 9660. A "UDF Bridge" format has been defined since revision 1.50 so that a disc can also contain an ISO 9660 file system making references to files on the UDF part. Revisions Multiple revisions of UDF have been released: Revision 1.00 (24 October 1995). Original release. Revision 1.01 (3 November 1995). Added the DVD Appendix and made a few minor changes. Revision 1.02 (30 August 1996). This format is used by DVD-Video discs. Revision 1.50 (4 February 1997). Added support for CD-R/W packet writing and (virtual) rewritability on CD-R/DVD-R media by introducing the Virtual Allocation Table (VAT) structure. Added sparing tables for defect management on rewritable media such as CD-RW, DVD-RW and DVD+RW. Added the UDF Bridge format. Revision 2.00 (3 April 1998). Added support for Stream Files, Access Control Lists, power calibration, real-time files (for DVD recording) and simplified directory management. VAT support was extended. Revision 2.01 (15 March 2000) is mainly a bugfix release to UDF 2.00. Many of the UDF standard's ambiguities were resolved in version 2.01. Revision 2.50 (30 April 2003). Added the Metadata Partition, facilitating metadata clustering, easier crash recovery and optional duplication of file system information: all metadata like nodes and directory contents are written on a separate partition which can optionally be mirrored. This format is used by some Blu-ray discs and most HD DVD discs. Revision 2.60 (1 March 2005). Added the Pseudo OverWrite method for drives supporting pseudo-overwrite capability on sequentially recordable media. Has read-only compatibility with UDF 2.50 implementations. (Some Blu-ray discs use this format.) UDF revisions are internally encoded as binary-coded decimals; Revision 2.60, for example, is represented as 0x0260.
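This digit-per-nibble packing is easy to work with in code; the helpers below are a small sketch based only on the description above, not a reference implementation.

```python
# Illustrative helpers for UDF's binary-coded-decimal revision fields, where
# each decimal digit of the revision occupies one 4-bit nibble
# (e.g. revision 2.60 <-> 0x0260). A sketch, not a reference implementation.

def encode_udf_revision(major: int, minor: int) -> int:
    """Pack a revision such as 2.60 into its BCD form 0x0260."""
    return (major // 10 << 12) | (major % 10 << 8) | (minor // 10 << 4) | (minor % 10)

def decode_udf_revision(bcd: int) -> str:
    """Unpack a BCD revision field back into a human-readable string."""
    digits = [(bcd >> shift) & 0xF for shift in (12, 8, 4, 0)]
    major = digits[0] * 10 + digits[1]
    minor = digits[2] * 10 + digits[3]
    return f"{major}.{minor:02d}"

assert encode_udf_revision(2, 60) == 0x0260
assert decode_udf_revision(0x0150) == "1.50"
print(decode_udf_revision(0x0201))  # -> "2.01"
```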
Specifications The UDF standard defines three file system variations, called "builds". These are: Plain (Random Read/Write Access), the original format supported in all UDF revisions; Virtual Allocation Table, also known as VAT (Incremental Writing), used specifically for writing to write-once media; and Spared (Limited Random Write Access), used specifically for writing to rewritable media. Plain build Introduced in the first version of the standard, this format can be used on any type of disk that allows random read/write access, such as hard disks, DVD+RW and DVD-RAM media. Metadata (up to v2.50) and file data are addressed more or less directly. In writing to such a disk in this format, any physical block on the disk may be chosen for allocation of new or updated files. Since this is the basic format, practically any operating system or file system driver claiming support for UDF should be able to read this format. VAT build Write-once media such as DVD-R and CD-R have limitations when being written to, in that each physical block can only be written to once, and the writing must happen incrementally. Thus the plain build of UDF can only be written to CD-Rs by pre-mastering the data and then writing all data in one piece to the media, similar to the way an ISO 9660 file system gets written to CD media. To enable a CD-R to be used virtually like a hard disk, whereby the user can add and modify files on a CD-R at will (so-called "drive letter access" on Windows), OSTA added the VAT build to the UDF standard in revision 1.50. The VAT is an additional structure on the disc that allows packet writing; that is, remapping physical blocks when files or other data on the disc are modified or deleted. For write-once media, the entire disc is virtualized, making the write-once nature transparent for the user; the disc can be treated the same way one would treat a rewritable disc. The write-once nature of CD-R or DVD-R media means that when a file is deleted on the disc, the file's data still remains on the disc. It does not appear in the directory any more, but it still occupies the original space where it was stored. Eventually, after using this scheme for some time, the disc will be full, as free space cannot be recovered by deleting files. Special tools can be used to access the previous state of the disc (the state before the delete occurred), making recovery possible. Not all drives fully implement version 1.50 or higher of UDF, and some may therefore be unable to handle VAT builds.
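The remapping that the VAT performs can be pictured with a toy model in which each logical block number points at the most recently written physical block, so a "rewrite" simply burns a new block and updates the table. The sketch below is a deliberately simplified illustration of that idea, and of why deleted data still consumes space; it does not reproduce the actual on-disc VAT structure.

```python
# Toy illustration of VAT-style remapping on write-once media: logical blocks are
# never overwritten in place; each "rewrite" appends a new physical block and the
# table is updated to point at it. Conceptual model only, not the on-disc format.

class WriteOnceDisc:
    def __init__(self):
        self.physical = []          # append-only sequence of burned blocks
        self.vat = {}               # logical block number -> physical block index

    def write(self, logical: int, data: bytes) -> None:
        self.physical.append(data)                  # always burn a fresh block
        self.vat[logical] = len(self.physical) - 1  # remap the logical block

    def read(self, logical: int) -> bytes:
        return self.physical[self.vat[logical]]

disc = WriteOnceDisc()
disc.write(0, b"first version")
disc.write(0, b"second version")    # logically "overwrites" block 0
assert disc.read(0) == b"second version"
assert len(disc.physical) == 2      # but both versions still occupy space on the disc
```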
Spared (RW) build Rewritable media such as DVD-RW and CD-RW have fewer limitations than DVD-R and CD-R media. Sectors can be rewritten at random (though in packets at a time). These media can be erased entirely at any time, making the disc blank again, ready for writing a new UDF or other file system (e.g., ISO 9660 or CD Audio) to it. However, sectors of -RW media may "wear out" after a while, meaning that their data becomes unreliable through having been rewritten too often (typically after a few hundred rewrites with CD-RW). The plain and VAT builds of the UDF format can be used on rewritable media, with some limitations. If the plain build is used on -RW media, file-system-level modification of the data must not be allowed, as this would quickly wear out often-used sectors on the disc (such as those for directory and block allocation data), which would then go unnoticed and lead to data loss. To allow modification of files on the disc, rewritable discs can be used like -R media using the VAT build. This ensures that all blocks get written only once (successively), ensuring that there are no blocks that get rewritten more often than others. This way, an RW disc can be erased and reused many times before it should become unreliable. However, it will eventually become unreliable, with no easy way of detecting it. When using the VAT build, CD-RW/DVD-RW media effectively appear as CD-R or DVD+/-R media to the computer. However, the media may be erased again at any time. The spared build was added in revision 1.50 to address the particularities of rewritable media. This build adds an extra Sparing Table in order to manage the defects that will eventually occur on parts of the disc that have been rewritten too many times. This table keeps track of worn-out sectors and remaps them to working ones. UDF defect management does not apply to systems that already implement another form of defect management, such as Mount Rainier (MRW) for optical discs, or a disk controller for a hard drive. Tools and drives that do not fully support revision 1.50 of UDF will ignore the sparing table, causing them to read the outdated, worn-out sectors and retrieve corrupted data. An overhead that is spread over the entire disc reserves a portion of the data storage space, limiting the usable capacity of a CD-RW with, e.g., 650 MB of original capacity to around 500 MB. Character set The UDF specifications allow only one character set, OSTA CS0, which can store any Unicode code point excluding U+FEFF and U+FFFE. Additional character sets defined in ECMA-167 are not used. Since Errata DCN-5157, the range of code points was expanded to all code points from Unicode 4.0 (or any newer or older version), which includes plane 1–16 characters such as emoji. DCN-5157 also recommends normalizing the strings to Normalization Form C. The OSTA CS0 character set stores a 16-bit Unicode string "compressed" into 8-bit or 16-bit units, preceded by a single-byte "compID" tag to indicate the compression type. The 8-bit storage is functionally equivalent to ISO-8859-1, and the 16-bit storage is UTF-16 in big endian. 8-bit-per-character file names save space because they only require half the space per character, so they should be used if the file name contains no characters that cannot be represented in 8 bits. The reference algorithm neither checks for forbidden code points nor interprets surrogate pairs, so, like NTFS, the string may be malformed. (No specific form of storage is specified by DCN-5157, but UTF-16BE is the only well-known method for storing all of Unicode while being mostly backward compatible with UCS-2.)
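The compID-tagged storage described above is straightforward to illustrate. In the sketch below, the compID values 8 and 16 and their Latin-1-like and UTF-16BE interpretations follow the description in the preceding paragraph; the helper names are hypothetical.

```python
# Illustrative sketch: packing a Unicode file name the way the OSTA CS0 scheme
# described above works -- a one-byte compID followed by either 8-bit (Latin-1-like)
# or 16-bit big-endian code units. Helper names are hypothetical.

def cs0_pack(name: str) -> bytes:
    """Choose 8-bit storage when every character fits, otherwise 16-bit."""
    if all(ord(ch) < 0x100 for ch in name):
        return bytes([8]) + name.encode("latin-1")
    return bytes([16]) + name.encode("utf-16-be")

def cs0_unpack(data: bytes) -> str:
    comp_id, payload = data[0], data[1:]
    if comp_id == 8:
        return payload.decode("latin-1")
    if comp_id == 16:
        return payload.decode("utf-16-be")
    raise ValueError(f"unsupported compID {comp_id}")

assert cs0_pack("README.TXT")[0] == 8            # plain ASCII stays in 8-bit units
assert cs0_unpack(cs0_pack("naïve.txt")) == "naïve.txt"
assert cs0_pack("日本語.txt")[0] == 16            # needs 16-bit units
```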
Compatibility Many DVD players do not support any UDF revision other than version 1.02. Discs created with a newer revision may still work in these players if the ISO 9660 bridge format is used. Even if an operating system claims to be able to read UDF 1.50, it still may only support the plain build and not necessarily either the VAT or Spared UDF builds. Mac OS X 10.4.5 claims to support Revision 1.50 (see man mount_udf), yet it can only mount disks of the plain build properly and provides no virtualization support at all. It cannot mount UDF disks with VAT, as seen with the Sony Mavica issue. Releases before 10.4.11 mount disks with a sparing table but do not read their files correctly; version 10.4.11 fixes this problem. Similarly, Windows XP Service Pack 2 (SP2) cannot read DVD-RW discs that use the UDF 2.00 sparing tables as a defect management system. This problem occurs if the UDF defect management system creates a sparing table that spans more than one sector on the DVD-RW disc. Windows XP SP2 can recognize that a DVD is using UDF, but Windows Explorer displays the contents of a DVD as an empty folder. A hotfix is available for this and is included in Service Pack 3. Due to the default UDF versions and options, a UDF partition formatted by Windows cannot be written under macOS. On the other hand, a partition formatted by macOS cannot be directly written by Windows, due to the requirement of an MBR partition table. In addition, Linux only supports writing to UDF 2.01. A script for Linux and macOS handles these incompatibilities by using UDF 2.01 and adding a fake MBR; on Windows, the best solution is to use a command-line formatting tool.
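Since UDF shares its volume recognition area with ISO 9660, one pragmatic way to see which file systems a disc image advertises is to scan the descriptors starting at byte offset 32768. The sketch below assumes the usual ISO 9660/ECMA-167 layout (2048-byte descriptors with a five-character identifier at byte 1) and hypothetical helper names; treat it as an illustration rather than a normative reference.

```python
# Hypothetical helper: peek at the volume recognition area of a disc image to see
# whether it carries an ISO 9660 descriptor, an ECMA-167/UDF "NSR" descriptor, or
# both (the "UDF Bridge" case mentioned earlier). Offsets follow the usual
# ISO 9660 / ECMA-167 layout and are an assumption, not a normative reference.

def volume_descriptors(path: str, max_descriptors: int = 16) -> list[str]:
    found = []
    with open(path, "rb") as img:
        img.seek(32768)                      # recognition area starts at sector 16
        for _ in range(max_descriptors):
            block = img.read(2048)
            if len(block) < 2048:
                break
            ident = block[1:6].decode("ascii", errors="replace")
            if ident in ("CD001", "BEA01", "NSR02", "NSR03", "TEA01", "BOOT2"):
                found.append(ident)
            if ident == "TEA01":             # terminator of the extended area
                break
    return found

# A UDF Bridge DVD image might yield something like ["CD001", "CD001", "BEA01",
# "NSR02", "TEA01"]; "NSR02" suggests a UDF 1.x volume, "NSR03" a UDF 2.x volume.
```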
Technology
Data storage and memory
null
49658
https://en.wikipedia.org/wiki/Aurora
Aurora
An aurora ( aurorae or auroras), also commonly known as the northern lights (aurora borealis) or southern lights (aurora australis), is a natural light display in Earth's sky, predominantly seen in high-latitude regions (around the Arctic and Antarctic). Auroras display dynamic patterns of brilliant lights that appear as curtains, rays, spirals, or dynamic flickers covering the entire sky. Auroras are the result of disturbances in the Earth's magnetosphere caused by the solar wind. Major disturbances result from enhancements in the speed of the solar wind from coronal holes and coronal mass ejections. These disturbances alter the trajectories of charged particles in the magnetospheric plasma. These particles, mainly electrons and protons, precipitate into the upper atmosphere (thermosphere/exosphere). The resulting ionization and excitation of atmospheric constituents emit light of varying colour and complexity. The form of the aurora, occurring within bands around both polar regions, is also dependent on the amount of acceleration imparted to the precipitating particles. Planets in the Solar System, brown dwarfs, comets, and some natural satellites also host auroras. Etymology The term aurora borealis was coined by Galileo Galilei in 1619, from the Roman Aurora, goddess of the dawn, and the Greek Boreas, god of the cold north wind. The word aurora is derived from the name of the Roman goddess of the dawn, Aurora, who travelled from east to west announcing the coming of the Sun. Aurora was first used in English in the 14th century. The words borealis and australis are derived from the names of the ancient gods of the north wind (Boreas) and the south wind (Auster or australis) in Greco-Roman mythology. Aurora borealis was first used to describe the northern lights by the French philosopher, Pierre Gassendi (also called Petrus Gassendus) in 1621, then entered English in 1828. Occurrence Auroras are most commonly observed in the "auroral zone", a band approximately 6° (~660 km) wide in latitude centered on 67° north and south. The region that currently displays an aurora is called the "auroral oval". The oval is displaced by the solar wind, pushing it about 15° away from the geomagnetic pole (not the geographic pole) in the noon direction and 23° away in the midnight direction. The peak equatorward extent of the oval is displaced slightly from geographic midnight. It is centered about 3–5° nightward of the magnetic pole, so that auroral arcs reach furthest toward the equator when the magnetic pole in question is in between the observer and the Sun, which is called magnetic midnight. Early evidence for a geomagnetic connection comes from the statistics of auroral observations. Elias Loomis (1860), and later Hermann Fritz (1881) and Sophus Tromholt (1881) in more detail, established that the aurora appeared mainly in the auroral zone. In northern latitudes, the effect is known as the aurora borealis or the northern lights. The southern counterpart, the aurora australis or the southern lights, has features almost identical to the aurora borealis and changes simultaneously with changes in the northern auroral zone. The aurora australis is visible from high southern latitudes in Antarctica, the Southern Cone, South Africa, Australasia, the Falkland Islands, and under exceptional circumstances as far north as Uruguay. The aurora borealis is visible from areas around the Arctic such as Alaska, Canada, Iceland, Greenland, the Faroe Islands, Scandinavia, Finland, Scotland, and Russia. 
A geomagnetic storm causes the auroral ovals (north and south) to expand, bringing the aurora to lower latitudes. On rare occasions, the aurora borealis can be seen as far south as the Mediterranean and the southern states of the US, while the aurora australis can be seen as far north as New Caledonia and the Pilbara region in Western Australia. During the Carrington Event, the greatest geomagnetic storm ever observed, auroras were seen even in the tropics. Auroras seen within the auroral oval may be directly overhead. From farther away, they illuminate the poleward horizon as a greenish glow, or sometimes a faint red, as if the Sun were rising from an unusual direction. Auroras also occur poleward of the auroral zone as either diffuse patches or arcs, which can be subvisual. Auroras are occasionally seen in latitudes below the auroral zone, when a geomagnetic storm temporarily enlarges the auroral oval. Large geomagnetic storms are most common during the peak of the 11-year sunspot cycle or during the three years after the peak. An electron spirals (gyrates) about a field line at an angle that is determined by its velocity vectors, parallel and perpendicular, respectively, to the local geomagnetic field vector B. This angle is known as the "pitch angle" of the particle. The distance, or radius, of the electron from the field line at any time is known as its Larmor radius. The pitch angle increases as the electron travels to a region of greater field strength nearer to the atmosphere. Thus, it is possible for some particles to return, or mirror, if the angle becomes 90° before entering the atmosphere to collide with the denser molecules there. Other particles that do not mirror enter the atmosphere and contribute to the auroral display over a range of altitudes. Other types of auroras have been observed from space; for example, "poleward arcs" stretching sunward across the polar cap, the related "theta aurora", and "dayside arcs" near noon. These are relatively infrequent and poorly understood. Other interesting effects occur, such as the pulsating aurora, the "black aurora" and its rarer companion the "anti-black aurora", and subvisual red arcs. In addition to all these, a weak glow (often deep red) is observed around the two polar cusps, the field lines separating the ones that close through Earth from those that are swept into the tail and close remotely. Images Early work on the imaging of the auroras was done in 1949 by the University of Saskatchewan using the SCR-270 radar. The altitudes where auroral emissions occur were revealed by Carl Størmer and his colleagues, who used cameras to triangulate more than 12,000 auroras. They discovered that most of the light is produced between above the ground, while extending at times to more than . Forms According to Clark (2007), there are five main forms that can be seen from the ground, from least to most visible: A mild glow, near the horizon. These can be close to the limit of visibility, but can be distinguished from moonlit clouds because stars can be seen undiminished through the glow. Patches or surfaces that look like clouds. Arcs curve across the sky. Rays are light and dark stripes across arcs, reaching upwards by various amounts. Coronas cover much of the sky and diverge from one point on it. Brekke (1994) also described some auroras as "curtains". The similarity to curtains is often enhanced by folds within the arcs. Arcs can fragment or break up into separate, at times rapidly changing, often rayed features that may fill the whole sky.
These are also known as discrete auroras, which are at times bright enough to read a newspaper by at night. These forms are consistent with auroras being shaped by Earth's magnetic field. The appearances of arcs, rays, curtains, and coronas are determined by the shapes of the luminous parts of the atmosphere and a viewer's position. Colours and wavelengths of auroral light Red: At its highest altitudes, excited atomic oxygen emits at 630 nm (red); low concentration of atoms and lower sensitivity of eyes at this wavelength make this colour visible only under more intense solar activity. The low number of oxygen atoms and their gradually diminishing concentration is responsible for the faint appearance of the top parts of the "curtains". Scarlet, crimson, and carmine are the most often-seen hues of red for the auroras. Green: At lower altitudes, the more frequent collisions suppress the 630 nm (red) mode: rather the 557.7 nm emission (green) dominates. A fairly high concentration of atomic oxygen and higher eye sensitivity in green make green auroras the most common. The excited molecular nitrogen (atomic nitrogen being rare due to the high stability of the N2 molecule) plays a role here, as it can transfer energy by collision to an oxygen atom, which then radiates it away at the green wavelength. (Red and green can also mix together to produce pink or yellow hues.) The rapid decrease of concentration of atomic oxygen below about 100 km is responsible for the abrupt-looking end of the lower edges of the curtains. Both the 557.7 and 630.0 nm wavelengths correspond to forbidden transitions of atomic oxygen, a slow mechanism responsible for the graduality (0.7 s and 107 s respectively) of flaring and fading. Blue: At yet lower altitudes, atomic oxygen is uncommon, and molecular nitrogen and ionized molecular nitrogen take over in producing visible light emission, radiating at a large number of wavelengths in both red and blue parts of the spectrum, with 428 nm (blue) being dominant. Blue and purple emissions, typically at the lower edges of the "curtains", show up at the highest levels of solar activity. The molecular nitrogen transitions are much faster than the atomic oxygen ones. Ultraviolet: Ultraviolet radiation from auroras (within the optical window but not visible to virtually all humans) has been observed with the requisite equipment. Ultraviolet auroras have also been seen on Mars, Jupiter, and Saturn. Infrared: Infrared radiation, in wavelengths that are within the optical window, is also part of many auroras. Yellow and pink are a mix of red and green or blue. Other shades of red, as well as orange and gold, may be seen on rare occasions; yellow-green is moderately common. As red, green, and blue are linearly independent colours, additive synthesis could, in theory, produce most human-perceived colours, but the ones mentioned in this article comprise a virtually exhaustive list. Changes with time Auroras change with time. Over the night they begin with glows and progress toward coronas, although they may not reach them. They tend to fade in the opposite order. Until about 1963, it was thought that these changes are due to the rotation of the Earth under a pattern fixed with respect to the Sun. Later, it was found by comparing all-sky films of auroras from different places (collected during the International Geophysical Year) that they often undergo global changes in a process called auroral substorm. 
They change in a few minutes from quiet arcs all along the auroral oval to active displays along the dark side, and after 1–3 hours they gradually change back. Changes in auroras over time are commonly visualized using keograms. At shorter time scales, auroras can change their appearances and intensity, sometimes so slowly as to be difficult to notice, and at other times rapidly down to the sub-second scale. The phenomenon of pulsating auroras is an example of intensity variations over short timescales, typically with periods of 2–20 seconds. This type of aurora is generally accompanied by decreasing peak emission heights of about 8 km for blue and green emissions, and above-average solar wind speeds. Other auroral radiation In addition, the aurora and associated currents produce a strong radio emission around 150 kHz known as auroral kilometric radiation (AKR), discovered in 1972. Ionospheric absorption makes AKR only observable from space. X-ray emissions, originating from the particles associated with auroras, have also been detected. Noise Aurora noise, similar to a crackling noise, begins about above Earth's surface and is caused by charged particles in an inversion layer of the atmosphere formed during a cold night. The charged particles discharge when particles from the Sun hit the inversion layer, creating the noise. Unusual types STEVE In 2016, more than fifty citizen science observations described what was to them an unknown type of aurora which they named "STEVE", for "Strong Thermal Emission Velocity Enhancement". STEVE is not an aurora but is caused by a wide ribbon of hot plasma at an altitude of , with a temperature of and flowing at a speed of (compared to outside the ribbon). Picket-fence aurora The processes that cause STEVE are also associated with a picket-fence aurora, although the latter can be seen without STEVE. It is an aurora because it is caused by precipitation of electrons in the atmosphere, but it appears outside the auroral oval, closer to the equator than typical auroras. When the picket-fence aurora appears with STEVE, it is below. Dune aurora First reported in 2020, and confirmed in 2021, the dune aurora phenomenon was discovered by Finnish citizen scientists. It consists of regularly-spaced, parallel stripes of brighter emission in the green diffuse aurora which give the impression of sand dunes. The phenomenon is believed to be caused by the modulation of atomic oxygen density by a large-scale atmospheric wave travelling horizontally in a waveguide through an inversion layer in the mesosphere in the presence of electron precipitation. Horse-collar aurora Horse-collar auroras (HCA) are auroral features in which the auroral ellipse shifts poleward during the dawn and dusk portions and the polar cap becomes teardrop-shaped. They form during periods when the interplanetary magnetic field (IMF) is persistently northward, when the IMF clock angle is small. Their formation is associated with the closure of the magnetic flux at the top of the dayside magnetosphere by double lobe reconnection (DLR). There are approximately 8 HCA events per month, with no seasonal dependence, and the IMF must be within 30 degrees of northward. Conjugate auroras Conjugate auroras are nearly exact mirror-image auroras found at conjugate points in the northern and southern hemispheres on the same geomagnetic field lines. These generally happen at the time of the equinoxes, when there is little difference in the orientation of the north and south geomagnetic poles to the Sun.
Attempts were made to image conjugate auroras by aircraft from Alaska and New Zealand in 1967, 1968, 1970, and 1971, with some success. Causes A full understanding of the physical processes which lead to different types of auroras is still incomplete, but the basic cause involves the interaction of the solar wind with Earth's magnetosphere. The varying intensity of the solar wind produces effects of different magnitudes but includes one or more of the following physical scenarios. A quiescent solar wind flowing past Earth's magnetosphere steadily interacts with it and can both inject solar wind particles directly onto the geomagnetic field lines that are 'open', as opposed to being 'closed' in the opposite hemisphere and provide diffusion through the bow shock. It can also cause particles already trapped in the radiation belts to precipitate into the atmosphere. Once particles are lost to the atmosphere from the radiation belts, under quiet conditions, new ones replace them only slowly, and the loss-cone becomes depleted. In the magnetotail, however, particle trajectories seem constantly to reshuffle, probably when the particles cross the very weak magnetic field near the equator. As a result, the flow of electrons in that region is nearly the same in all directions ("isotropic") and assures a steady supply of leaking electrons. The leakage of electrons does not leave the tail positively charged, because each leaked electron lost to the atmosphere is replaced by a low energy electron drawn upward from the ionosphere. Such replacement of "hot" electrons by "cold" ones is in complete accord with the second law of thermodynamics. The complete process, which also generates an electric ring current around Earth, is uncertain. Geomagnetic disturbance from an enhanced solar wind causes distortions of the magnetotail ("magnetic substorms"). These 'substorms' tend to occur after prolonged spells (on the order of hours) during which the interplanetary magnetic field has had an appreciable southward component. This leads to a higher rate of interconnection between its field lines and those of Earth. As a result, the solar wind moves magnetic flux (tubes of magnetic field lines, 'locked' together with their resident plasma) from the day side of Earth to the magnetotail, widening the obstacle it presents to the solar wind flow and constricting the tail on the night-side. Ultimately some tail plasma can separate ("magnetic reconnection"); some blobs ("plasmoids") are squeezed downstream and are carried away with the solar wind; others are squeezed toward Earth where their motion feeds strong outbursts of auroras, mainly around midnight ("unloading process"). A geomagnetic storm resulting from greater interaction adds many more particles to the plasma trapped around Earth, also producing enhancement of the "ring current". Occasionally the resulting modification of Earth's magnetic field can be so strong that it produces auroras visible at middle latitudes, on field lines much closer to the equator than those of the auroral zone. Acceleration of auroral charged particles invariably accompanies a magnetospheric disturbance that causes an aurora. This mechanism, which is believed to predominantly arise from strong electric fields along the magnetic field or wave-particle interactions, raises the velocity of a particle in the direction of the guiding magnetic field. The pitch angle is thereby decreased and increases the chance of it being precipitated into the atmosphere. 
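The connection between pitch angle and precipitation invoked here (and in the earlier description of mirroring) follows from the approximate conservation of a particle's magnetic moment; the lines below sketch the standard textbook mirror condition and loss cone. This is a general plasma-physics result assumed for illustration, not a derivation taken from the article.

```latex
% Standard magnetic-mirror sketch (textbook result, assumed for illustration):
% the magnetic moment of a gyrating particle is an adiabatic invariant,
\[
  \mu = \frac{m v_\perp^{2}}{2B} = \text{const}, \qquad v_\perp = v \sin\alpha ,
\]
% and since the magnetic force does no work, the speed v is constant, giving
\[
  \frac{\sin^{2}\alpha}{B} = \text{const}.
\]
% The pitch angle \alpha therefore grows as the particle moves into stronger field,
% and the particle mirrors (\alpha = 90^\circ) where B = B_0 / \sin^{2}\alpha_0.
% Only particles whose equatorial pitch angle lies inside the loss cone,
\[
  \sin^{2}\alpha_0 < \frac{B_0}{B_{\text{atm}}},
\]
% reach atmospheric densities before mirroring; lowering \alpha_0 (as field-aligned
% accelerating electric fields do) pushes more particles into this cone.
```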
Both electromagnetic and electrostatic waves, produced at the time of greater geomagnetic disturbances, make a significant contribution to the energizing processes that sustain an aurora. Particle acceleration provides a complex intermediate process for transferring energy from the solar wind indirectly into the atmosphere. The details of these phenomena are not fully understood. However, it is clear that the prime source of auroral particles is the solar wind feeding the magnetosphere, the reservoir containing the radiation zones and temporarily magnetically trapped particles confined by the geomagnetic field, coupled with particle acceleration processes. Auroral particles The immediate cause of the ionization and excitation of atmospheric constituents leading to auroral emissions was discovered in 1960, when a pioneering rocket flight from Fort Churchill in Canada revealed a flux of electrons entering the atmosphere from above. Since then, an extensive collection of measurements has been acquired, painstakingly and with steadily improving resolution, by many research teams using rockets and satellites to traverse the auroral zone. The main findings have been that auroral arcs and other bright forms are due to electrons that have been accelerated during the final 10,000 km or so of their plunge into the atmosphere. These electrons often, but not always, exhibit a peak in their energy distribution, and are preferentially aligned along the local direction of the magnetic field. Electrons mainly responsible for diffuse and pulsating auroras have, in contrast, a smoothly falling energy distribution, and an angular (pitch-angle) distribution favouring directions perpendicular to the local magnetic field. Pulsations were discovered to originate at or close to the equatorial crossing point of auroral zone magnetic field lines. Protons are also associated with auroras, both discrete and diffuse. Atmosphere Auroras result from emissions of photons in Earth's upper atmosphere, above , from ionized nitrogen atoms regaining an electron, and oxygen atoms and nitrogen-based molecules returning from an excited state to ground state. They are ionized or excited by the collision of particles precipitated into the atmosphere. Both incoming electrons and protons may be involved. Excitation energy is lost within the atmosphere by the emission of a photon, or by collision with another atom or molecule. Oxygen emissions are green or orange-red, depending on the amount of energy absorbed. Nitrogen emissions are blue, purple or red; blue and purple if the molecule regains an electron after it has been ionized, red if returning to ground state from an excited state. Oxygen is unusual in terms of its return to ground state: it can take 0.7 seconds to emit the 557.7 nm green light and up to two minutes for the red 630.0 nm emission. Collisions with other atoms or molecules absorb the excitation energy and prevent emission; this process is called collisional quenching. Because the highest parts of the atmosphere contain a higher percentage of oxygen and lower particle densities, such collisions are rare enough to allow time for oxygen to emit red light. Collisions become more frequent progressing down into the atmosphere due to increasing density, so that red emissions do not have time to happen, and eventually, even green light emissions are prevented.
This is why there is a colour differential with altitude; at high altitudes oxygen red dominates, then oxygen green and nitrogen blue/purple/red, then finally nitrogen blue/purple/red when collisions prevent oxygen from emitting anything. Green is the most common colour. Then comes pink, a mixture of light green and red, followed by pure red, then yellow (a mixture of red and green), and finally, pure blue. Precipitating protons generally produce optical emissions as incident hydrogen atoms after gaining electrons from the atmosphere. Proton auroras are usually observed at lower latitudes. Ionosphere Bright auroras are generally associated with Birkeland currents (Schield et al., 1969; Zmuda and Armstrong, 1973), which flow down into the ionosphere on one side of the pole and out on the other. In between, some of the current connects directly through the ionospheric E layer (125 km); the rest ("region 2") detours, leaving again through field lines closer to the equator and closing through the "partial ring current" carried by magnetically trapped plasma. The ionosphere is an ohmic conductor, so some consider that such currents require a driving voltage, which an, as yet unspecified, dynamo mechanism can supply. Electric field probes in orbit above the polar cap suggest voltages of the order of 40,000 volts, rising up to more than 200,000 volts during intense magnetic storms. In another interpretation, the currents are the direct result of electron acceleration into the atmosphere by wave/particle interactions. Ionospheric resistance has a complex nature, and leads to a secondary Hall current flow. By a strange twist of physics, the magnetic disturbance on the ground due to the main current almost cancels out, so most of the observed effect of auroras is due to a secondary current, the auroral electrojet. An auroral electrojet index (measured in nanotesla) is regularly derived from ground data and serves as a general measure of auroral activity. Kristian Birkeland deduced that the currents flowed in the east–west directions along the auroral arc, and such currents, flowing from the dayside toward (approximately) midnight were later named "auroral electrojets" (see also Birkeland currents). Ionosphere can contribute to the formation of auroral arcs via the feedback instability under high ionospheric resistance conditions, observed at night time and in dark Winter hemisphere. Interaction of the solar wind with Earth Earth is constantly immersed in the solar wind, a flow of magnetized hot plasma (a gas of free electrons and positive ions) emitted by the Sun in all directions, a result of the two-million-degree temperature of the Sun's outermost layer, the corona. The solar wind reaches Earth with a velocity typically around 400 km/s, a density of around 5 ions/cm3 and a magnetic field intensity of around 2–5 nT (for comparison, Earth's surface field is typically 30,000–50,000 nT). During magnetic storms, in particular, flows can be several times faster; the interplanetary magnetic field (IMF) may also be much stronger. Joan Feynman deduced in the 1970s that the long-term averages of solar wind speed correlated with geomagnetic activity. Her work resulted from data collected by the Explorer 33 spacecraft. The solar wind and magnetosphere consist of plasma (ionized gas), which conducts electricity. 
It is well known (since Michael Faraday's work around 1830) that when an electrical conductor is placed within a magnetic field while relative motion occurs in a direction that the conductor cuts across (or is cut by), rather than along, the lines of the magnetic field, an electric current is induced within the conductor. The strength of the current depends on a) the rate of relative motion, b) the strength of the magnetic field, c) the number of conductors ganged together and d) the distance between the conductor and the magnetic field, while the direction of flow is dependent upon the direction of relative motion. Dynamos make use of this basic process ("the dynamo effect"), any and all conductors, solid or otherwise are so affected, including plasmas and other fluids. The IMF originates on the Sun, linked to the sunspots, and its field lines (lines of force) are dragged out by the solar wind. That alone would tend to line them up in the Sun-Earth direction, but the rotation of the Sun angles them at Earth by about 45 degrees forming a spiral in the ecliptic plane, known as the Parker spiral. The field lines passing Earth are therefore usually linked to those near the western edge ("limb") of the visible Sun at any time. The solar wind and the magnetosphere, being two electrically conducting fluids in relative motion, should be able in principle to generate electric currents by dynamo action and impart energy from the flow of the solar wind. However, this process is hampered by the fact that plasmas conduct readily along magnetic field lines, but less readily perpendicular to them. Energy is more effectively transferred by the temporary magnetic connection between the field lines of the solar wind and those of the magnetosphere. Unsurprisingly this process is known as magnetic reconnection. As already mentioned, it happens most readily when the interplanetary field is directed southward, in a similar direction to the geomagnetic field in the inner regions of both the north magnetic pole and south magnetic pole. Auroras are more frequent and brighter during the intense phase of the solar cycle when coronal mass ejections increase the intensity of the solar wind. Magnetosphere Earth's magnetosphere is shaped by the impact of the solar wind on Earth's magnetic field. This forms an obstacle to the flow, diverting it, at an average distance of about 70,000 km (11 Earth radii or Re), producing a bow shock 12,000 km to 15,000 km (1.9 to 2.4 Re) further upstream. The width of the magnetosphere abreast of Earth is typically 190,000 km (30 Re), and on the night side a long "magnetotail" of stretched field lines extends to great distances (> 200 Re). The high latitude magnetosphere is filled with plasma as the solar wind passes Earth. The flow of plasma into the magnetosphere increases with additional turbulence, density, and speed in the solar wind. This flow is favoured by a southward component of the IMF, which can then directly connect to the high latitude geomagnetic field lines. The flow pattern of magnetospheric plasma is mainly from the magnetotail toward Earth, around Earth and back into the solar wind through the magnetopause on the day-side. In addition to moving perpendicular to Earth's magnetic field, some magnetospheric plasma travels down along Earth's magnetic field lines, gains additional energy and loses it to the atmosphere in the auroral zones. 
The cusps of the magnetosphere, separating geomagnetic field lines that close through Earth from those that close remotely allow a small amount of solar wind to directly reach the top of the atmosphere, producing an auroral glow. On 26 February 2008, THEMIS probes were able to determine, for the first time, the triggering event for the onset of magnetospheric substorms. Two of the five probes, positioned approximately one third the distance to the Moon, measured events suggesting a magnetic reconnection event 96 seconds prior to auroral intensification. Geomagnetic storms that ignite auroras may occur more often during the months around the equinoxes. It is not well understood, but geomagnetic storms may vary with Earth's seasons. Two factors to consider are the tilt of both the solar and Earth's axis to the ecliptic plane. As Earth orbits throughout a year, it experiences an interplanetary magnetic field (IMF) from different latitudes of the Sun, which is tilted at 8 degrees. Similarly, the 23-degree tilt of Earth's axis about which the geomagnetic pole rotates with a diurnal variation changes the daily average angle that the geomagnetic field presents to the incident IMF throughout a year. These factors combined can lead to minor cyclical changes in the detailed way that the IMF links to the magnetosphere. In turn, this affects the average probability of opening a door through which energy from the solar wind can reach Earth's inner magnetosphere and thereby enhance auroras. Recent evidence in 2021 has shown that individual separate substorms may in fact be correlated networked communities. Auroral particle acceleration Just as there are many types of aurora, there are many different mechanisms that accelerate auroral particles into the atmosphere. Electron aurora in Earth's auroral zone (i.e. commonly visible aurora) can be split into two main categories with different immediate causes: diffuse and discrete aurora. Diffuse aurora appear relatively structureless to an observer on the ground, with indistinct edges and amorphous forms. Discrete aurora are structured into distinct features with well-defined edges such as arcs, rays and coronas; they also tend to be much brighter than the diffuse aurora. In both cases, the electrons that eventually cause the aurora start out as electrons trapped by the magnetic field in Earth's magnetosphere. These trapped particles bounce back and forth along magnetic field lines and are prevented from hitting the atmosphere by the magnetic mirror formed by the increasing magnetic field strength closer to Earth. The magnetic mirror's ability to trap a particle depends on the particle's pitch angle: the angle between its direction of motion and the local magnetic field. An aurora is created by processes that decrease the pitch angle of many individual electrons, freeing them from the magnetic trap and causing them to hit the atmosphere. In the case of diffuse auroras, the electron pitch angles are altered by their interaction with various plasma waves. Each interaction is essentially wave-particle scattering; the electron energy after interacting with the wave is similar to its energy before interaction, but the direction of motion is altered. If the final direction of motion after scattering is close to the field line (specifically, if it falls within the loss cone) then the electron will hit the atmosphere. Diffuse auroras are caused by the collective effect of many such scattered electrons hitting the atmosphere. 
The process is mediated by the plasma waves, which become stronger during periods of high geomagnetic activity, leading to increased diffuse aurora at those times. In the case of discrete auroras, the trapped electrons are accelerated toward Earth by electric fields that form at an altitude of about 4000–12000 km in the "auroral acceleration region". The electric fields point away from Earth (i.e. upward) along the magnetic field line. Electrons moving downward through these fields gain a substantial amount of energy (on the order of a few keV) in the direction along the magnetic field line toward Earth. This field-aligned acceleration decreases the pitch angle for all of the electrons passing through the region, causing many of them to hit the upper atmosphere. In contrast to the scattering process leading to diffuse auroras, the electric field increases the kinetic energy of all of the electrons transiting downward through the acceleration region by the same amount. This accelerates electrons starting from the magnetosphere with initially low energies (tens of eV or less) to energies required to create an aurora (100s of eV or greater), allowing that large source of particles to contribute to creating auroral light. The accelerated electrons carry an electric current along the magnetic field lines (a Birkeland current). Since the electric field points in the same direction as the current, there is a net conversion of electromagnetic energy into particle energy in the auroral acceleration region (an electric load). The energy to power this load is eventually supplied by the magnetized solar wind flowing around the obstacle of Earth's magnetic field, although exactly how that power flows through the magnetosphere is still an active area of research. While the energy to power the aurora is ultimately derived from the solar wind, the electrons themselves do not travel directly from the solar wind into Earth's auroral zone; magnetic field lines from these regions do not connect to the solar wind, so there is no direct access for solar wind electrons. Some auroral features are also created by electrons accelerated by dispersive Alfvén waves. At small wavelengths transverse to the background magnetic field (comparable to the electron inertial length or ion gyroradius), Alfvén waves develop a significant electric field parallel to the background magnetic field. This electric field can accelerate electrons to keV energies, significant to produce auroral arcs. If the electrons have a speed close to that of the wave's phase velocity, they are accelerated in a manner analogous to a surfer catching an ocean wave. This constantly-changing wave electric field can accelerate electrons along the field line, causing some of them to hit the atmosphere. Electrons accelerated by this mechanism tend to have a broad energy spectrum, in contrast to the sharply-peaked energy spectrum typical of electrons accelerated by quasi-static electric fields. In addition to the discrete and diffuse electron aurora, proton aurora is caused when magnetospheric protons collide with the upper atmosphere. The proton gains an electron in the interaction, and the resulting neutral hydrogen atom emits photons. The resulting light is too dim to be seen with the naked eye. Other aurora not covered by the above discussion include transpolar arcs (formed poleward of the auroral zone), cusp aurora (formed in two small high-latitude areas on the dayside) and some non-terrestrial auroras. 
Historically significant events The discovery of a 1770 Japanese diary in 2017 depicting auroras above the ancient Japanese capital of Kyoto suggested that the storm may have been 7% larger than the Carrington event, which affected telegraph networks. The auroras that resulted from the Carrington event on both 28 August and 2 September 1859, are thought to be the most spectacular in recent history. In a paper to the Royal Society on 21 November 1861, Balfour Stewart described both auroral events as documented by a self-recording magnetograph at the Kew Observatory and established the connection between the 2 September 1859 auroral storm and the Carrington–Hodgson flare event when he observed that "It is not impossible to suppose that in this case our luminary was taken in the act." The second auroral event, which occurred on 2 September 1859, was a result of the (unseen) coronal mass ejection associated with the exceptionally intense Carrington–Hodgson white light solar flare on 1 September 1859. This event produced auroras so widespread and extraordinarily bright that they were seen and reported in published scientific measurements, ship logs, and newspapers throughout the United States, Europe, Japan, and Australia. It was reported by The New York Times that in Boston on Friday 2 September 1859 the aurora was "so brilliant that at about one o'clock ordinary print could be read by the light". One o'clock EST time on Friday 2 September would have been 6:00 GMT; the self-recording magnetograph at the Kew Observatory was recording the geomagnetic storm, which was then one hour old, at its full intensity. Between 1859 and 1862, Elias Loomis published a series of nine papers on the Great Auroral Exhibition of 1859 in the American Journal of Science where he collected worldwide reports of the auroral event. That aurora is thought to have been produced by one of the most intense coronal mass ejections in history. It is also notable for the fact that it is the first time where the phenomena of auroral activity and electricity were unambiguously linked. This insight was made possible not only due to scientific magnetometer measurements of the era, but also as a result of a significant portion of the of telegraph lines then in service being significantly disrupted for many hours throughout the storm. Some telegraph lines, however, seem to have been of the appropriate length and orientation to produce a sufficient geomagnetically induced current from the electromagnetic field to allow for continued communication with the telegraph operator power supplies switched off. The following conversation occurred between two operators of the American Telegraph Line between Boston and Portland, Maine, on the night of 2 September 1859 and reported in the Boston Traveller: The conversation was carried on for around two hours using no battery power at all and working solely with the current induced by the aurora, and it was said that this was the first time on record that more than a word or two was transmitted in such manner. Such events led to the general conclusion that In May 2024, a series of solar storms caused the aurora borealis to be observed from as far south as Ferdows, Iran. Historical views and folklore The earliest datable record of an aurora was recorded in the Bamboo Annals, a historical chronicle of the history of ancient China, in 977 or 957 BC. An aurora was described by the Greek explorer Pytheas in the 4th century BC. 
Seneca wrote about auroras in the first book of his Naturales Quaestiones, classifying them, for instance, as ('barrel-like'); ('chasm'); ('bearded'); ('like cypress trees'); and describing their manifold colours. He wrote about whether they were above or below the clouds, and recalled that under Tiberius, an aurora formed above the port city of Ostia that was so intense and red that a cohort of the army, stationed nearby for fire duty, galloped to the rescue. It has been suggested that Pliny the Elder depicted the aurora borealis in his Natural History, when he refers to , , "falling red flames", and "daylight in the night". The earliest depiction of the aurora may have been in Cro-Magnon cave paintings of northern Spain dating to 30,000 BC. The oldest known written record of the aurora was in a Chinese legend written around 2600 BC. On an autumn around 2000 BC, according to a legend, a young woman named Fubao was sitting alone in the wilderness by a bay, when suddenly a "magical band of light" appeared like "moving clouds and flowing water", turning into a bright halo around the Big Dipper, which cascaded a pale silver brilliance, illuminating the earth and making shapes and shadows seem alive. Moved by this sight, Fubao became pregnant and gave birth to a son, the Emperor Xuanyuan, known legendarily as the initiator of Chinese culture and the ancestor of all Chinese people. In the , a creature named is described to be like a red dragon shining in the night sky with a body a thousand miles long. In ancient times, the Chinese did not have a fixed word for the aurora, so it was named according to the different shapes of the aurora, such as "Sky Dog" (), "Sword/Knife Star" (), "Chiyou banner" (), "Sky's Open Eyes" (), and "Stars like Rain" (). In Japanese folklore, pheasants were considered messengers from heaven. However, researchers from Japan's Graduate University for Advanced Studies and National Institute of Polar Research claimed in March 2020 that red pheasant tails witnessed across the night sky over Japan in 620 A.D., might be a red aurora produced during a magnetic storm. In the traditions of Aboriginal Australians, the Aurora Australis is commonly associated with fire. For example, the Gunditjmara people of western Victoria called auroras ('ashes'), while the Gunai people of eastern Victoria perceived auroras as bushfires in the spirit world. The Dieri people of South Australia say that an auroral display is , an evil spirit creating a large fire. Similarly, the Ngarrindjeri people of South Australia refer to auroras seen over Kangaroo Island as the campfires of spirits in the 'Land of the Dead'. Aboriginal people in southwest Queensland believe the auroras to be the fires of the Oola Pikka, ghostly spirits who spoke to the people through auroras. Sacred law forbade anyone except male elders from watching or interpreting the messages of ancestors they believed were transmitted through an aurora. Among the Māori people of New Zealand, aurora australis or ("great torches in the sky") were lit by ancestors who sailed south to a "land of ice" (or their descendants); these people were said to be Ui-te-Rangiora's expedition party who had reached the Southern Ocean. around the 7th century. In Scandinavia, the first mention of (the northern lights) is found in the Norwegian chronicle from AD 1230. 
The chronicler has heard about this phenomenon from compatriots returning from Greenland, and he gives three possible explanations: that the ocean was surrounded by vast fires; that the sun flares could reach around the world to its night side; or that glaciers could store energy so that they eventually became fluorescent. Walter William Bryant wrote in his book Kepler (1920) that Tycho Brahe "seems to have been something of a homoeopathist, for he recommends sulfur to cure infectious diseases 'brought on by the sulfurous vapours of the Aurora Borealis. In 1778, Benjamin Franklin theorized in his paper Aurora Borealis, Suppositions and Conjectures towards forming an Hypothesis for its Explanation that an aurora was caused by a concentration of electrical charge in the polar regions intensified by the snow and moisture in the air: Observations of the rhythmic movement of compass needles due to the influence of an aurora were confirmed in the Swedish city of Uppsala by Anders Celsius and Olof Hiorter. In 1741, Hiorter was able to link large magnetic fluctuation to the observation of an aurora overhead. This evidence helped to support their theory that 'magnetic storms' are responsible for such compass fluctuations. A variety of Native American myths surround the spectacle. The European explorer Samuel Hearne travelled with Chipewyan Dene in 1771 and recorded their views on the ('caribou'). According to Hearne, the Dene people saw the resemblance between an aurora and the sparks produced when caribou fur is stroked. They believed that the lights were the spirits of their departed friends dancing in the sky, and when they shone brightly it meant that their deceased friends were very happy. During the night after the Battle of Fredericksburg, an aurora was seen from the battlefield. The Confederate Army took this as a sign that God was on their side, as the lights were rarely seen so far south. The painting Aurora Borealis by Frederic Edwin Church is widely interpreted to represent the conflict of the American Civil War. A mid 19th-century British source says auroras were a rare occurrence before the 18th century. It quotes Halley as saying that before the aurora of 1716, no such phenomenon had been recorded for more than 80 years, and none of any consequence since 1574. It says no appearance is recorded in the Transactions of the French Academy of Sciences between 1666 and 1716; and that one aurora recorded in Berlin Miscellany for 1797 was called a very rare event. One observed in 1723 at Bologna was stated to be the first ever seen there. Celsius (1733) states the oldest residents of Uppsala thought the phenomenon a great rarity before 1716. The period between approximately 1645 and 1715 corresponds to the Maunder minimum in sunspot activity. In Robert W. Service's satirical poem "The Ballad of the Northern Lights" (1908), a Yukon prospector discovers that the aurora is the glow from a radium mine. He stakes his claim, then goes to town looking for investors. In the early 1900s, the Norwegian scientist Kristian Birkeland laid the foundation for the current understanding of geomagnetism and polar auroras. In Sami mythology, the northern lights are caused by the deceased who bled to death cutting themselves, their blood spilling on the sky. 
Many aboriginal peoples of northern Eurasia and North America share similar beliefs of northern lights being the blood of the deceased, some believing they are caused by dead warriors' blood spraying on the sky as they engage in playing games, riding horses or having fun in some other way. Extraterrestrial aurorae Both Jupiter and Saturn have magnetic fields that are stronger than Earth's (Jupiter's equatorial field strength is 4.3 gauss, compared to 0.3 gauss for Earth), and both have extensive radiation belts. Auroras have been observed on both gas planets, most clearly using the Hubble Space Telescope, and the Cassini and Galileo spacecraft, as well as on Uranus and Neptune. The aurorae on Saturn seem, like Earth's, to be powered by the solar wind. However, Jupiter's aurorae are more complex. Jupiter's main auroral oval is associated with the plasma produced by the volcanic moon Io, and the transport of this plasma within the planet's magnetosphere. An uncertain fraction of Jupiter's aurorae are powered by the solar wind. In addition, the moons, especially Io, are also powerful sources of aurora. These arise from electric currents along field lines ("field aligned currents"), generated by a dynamo mechanism due to the relative motion between the rotating planet and the moving moon. Io, which has active volcanism and an ionosphere, is a particularly strong source, and its currents also generate radio emissions, which have been studied since 1955. Using the Hubble Space Telescope, auroras over Io, Europa and Ganymede have all been observed. Auroras have also been observed on Venus and Mars. Venus has no magnetic field and so Venusian auroras appear as bright and diffuse patches of varying shape and intensity, sometimes distributed over the full disc of the planet. A Venusian aurora originates when electrons from the solar wind collide with the night-side atmosphere. An aurora was detected on Mars, on 14 August 2004, by the SPICAM instrument aboard Mars Express. The aurora was located at Terra Cimmeria, in the region of 177° east, 52° south. The total size of the emission region was about 30 km across, and possibly about 8 km high. By analysing a map of crustal magnetic anomalies compiled with data from Mars Global Surveyor, scientists observed that the region of the emissions corresponded to an area where the strongest magnetic field is localized. This correlation indicated that the origin of the light emission was a flux of electrons moving along the crust magnetic lines and exciting the upper atmosphere of Mars. Between 2014 and 2016, cometary auroras were observed on comet 67P/Churyumov–Gerasimenko by multiple instruments on the Rosetta spacecraft. The auroras were observed at far-ultraviolet wavelengths. Coma observations revealed atomic emissions of hydrogen and oxygen caused by the photodissociation (not photoionization, like in terrestrial auroras) of water molecules in the comet's coma. The interaction of accelerated electrons from the solar wind with gas particles in the coma is responsible for the aurora. Since comet 67P has no magnetic field, the aurora is diffusely spread around the comet. Exoplanets, such as hot Jupiters, have been suggested to experience ionization in their upper atmospheres and generate an aurora modified by weather in their turbulent tropospheres. However, there is no current detection of an exoplanet aurora. The first ever extra-solar auroras were discovered in July 2015 over the brown dwarf star LSR J1835+3259. 
The mainly red aurora was found to be a million times brighter than the northern lights, a result of the charged particles interacting with hydrogen in the atmosphere. It has been speculated that stellar winds may be stripping off material from the surface of the brown dwarf to produce its own electrons. Another possible explanation for the auroras is that an as-yet-undetected body around the dwarf star is throwing off material, as is the case with Jupiter and its moon Io.
Physical sciences
Atmospheric optics
null
49718
https://en.wikipedia.org/wiki/Poynting%20vector
Poynting vector
In physics, the Poynting vector (or Umov–Poynting vector) represents the directional energy flux (the energy transfer per unit area, per unit time) or power flow of an electromagnetic field. The SI unit of the Poynting vector is the watt per square metre (W/m²); kg/s³ in base SI units. It is named after its discoverer John Henry Poynting who first derived it in 1884. Nikolay Umov is also credited with formulating the concept. Oliver Heaviside also discovered it independently in the more general form that recognises the freedom of adding the curl of an arbitrary vector field to the definition. The Poynting vector is used throughout electromagnetics in conjunction with Poynting's theorem, the continuity equation expressing conservation of electromagnetic energy, to calculate the power flow in electromagnetic fields. Definition In Poynting's original paper and in most textbooks, the Poynting vector is defined as the cross product S = E × H, where bold letters represent vectors and E is the electric field vector; H is the magnetic field's auxiliary field vector or magnetizing field. This expression is often called the Abraham form and is the most widely used. The Poynting vector is usually denoted by S or N. In simple terms, the Poynting vector S depicts the direction and rate of transfer of energy, that is power, due to electromagnetic fields in a region of space that may or may not be empty. More rigorously, it is the quantity that must be used to make Poynting's theorem valid. Poynting's theorem essentially says that the difference between the electromagnetic energy entering a region and the electromagnetic energy leaving a region must equal the energy converted or dissipated in that region, that is, turned into a different form of energy (often heat). So if one accepts the validity of the Poynting vector description of electromagnetic energy transfer, then Poynting's theorem is simply a statement of the conservation of energy. If electromagnetic energy is not gained from or lost to other forms of energy within some region (e.g., mechanical energy, or heat), then electromagnetic energy is locally conserved within that region, yielding a continuity equation as a special case of Poynting's theorem: ∂u/∂t + ∇ · S = 0, where u is the energy density of the electromagnetic field. This frequent condition holds in the following simple example in which the Poynting vector is calculated and seen to be consistent with the usual computation of power in an electric circuit. Example: Power flow in a coaxial cable Although problems in electromagnetics with arbitrary geometries are notoriously difficult to solve, we can find a relatively simple solution in the case of power transmission through a section of coaxial cable analyzed in cylindrical coordinates as depicted in the accompanying diagram. We can take advantage of the model's symmetry: no dependence on θ (circular symmetry) nor on Z (position along the cable). The model (and solution) can be considered simply as a DC circuit with no time dependence, but the following solution applies equally well to the transmission of radio frequency power, as long as we are considering an instant of time (during which the voltage and current don't change), and over a sufficiently short segment of cable (much smaller than a wavelength, so that these quantities are not dependent on Z). The coaxial cable is specified as having an inner conductor of radius R1 and an outer conductor whose inner radius is R2 (its thickness beyond R2 doesn't affect the following analysis). 
In between R1 and R2 the cable contains an ideal dielectric material of relative permittivity εr and we assume conductors that are non-magnetic (so μ = μ0) and lossless (perfect conductors), all of which are good approximations to real-world coaxial cable in typical situations. The center conductor is held at voltage V and draws a current I toward the right, so we expect a total power flow of P = V · I according to basic laws of electricity. By evaluating the Poynting vector, however, we are able to identify the profile of power flow in terms of the electric and magnetic fields inside the coaxial cable. The electric fields are of course zero inside of each conductor, but in between the conductors (R1 < r < R2) symmetry dictates that they are strictly in the radial direction and it can be shown (using Gauss's law) that they must obey the following form: E_r(r) = W/r for some constant W. That constant can be evaluated by integrating the electric field from r = R1 to r = R2, which must equal the voltage V of the center conductor relative to the outer conductor: V = ∫_{R1}^{R2} (W/r) dr = W ln(R2/R1), so that: W = V / ln(R2/R1). The magnetic field, again by symmetry, can only be non-zero in the θ direction, that is, a vector field looping around the center conductor at every radius between R1 and R2. Inside the conductors themselves the magnetic field may or may not be zero, but this is of no concern since the Poynting vector in these regions is zero due to the electric field's being zero. Outside the entire coaxial cable, the magnetic field is identically zero since paths in this region enclose a net current of zero (+I in the center conductor and −I in the outer conductor), and again the electric field is zero there anyway. Using Ampère's law in the region from R1 to R2, which encloses the current +I in the center conductor but with no contribution from the current in the outer conductor, we find at radius r: H_θ(r) = I / (2πr). Now, from an electric field in the radial direction, and a tangential magnetic field, the Poynting vector, given by the cross-product of these, is only non-zero in the Z direction, along the direction of the coaxial cable itself, as we would expect. Again only a function of r, we can evaluate S(r): S(r) = E_r(r) H_θ(r) = W I / (2π r²), where W is given above in terms of the center conductor voltage V. The total power flowing down the coaxial cable can be computed by integrating over the entire cross section A of the cable in between the conductors: P = ∫_{R1}^{R2} S(r) 2πr dr = W I ln(R2/R1). Substituting the earlier solution for the constant W we find: P = V · I, that is, the power given by integrating the Poynting vector over a cross section of the coaxial cable is exactly equal to the product of voltage and current as one would have computed for the power delivered using basic laws of electricity. Other similar examples in which the P = V · I result can be analytically calculated are: the parallel-plate transmission line, using Cartesian coordinates, and the two-wire transmission line, using bipolar cylindrical coordinates. Other forms In the "microscopic" version of Maxwell's equations, this definition must be replaced by a definition in terms of the electric field E and the magnetic flux density B (described later in the article). It is also possible to combine the electric displacement field D with the magnetic flux B to get the Minkowski form of the Poynting vector, or use D and H to construct yet another version. The choice has been controversial: Pfeifer et al. summarize and to a certain extent resolve the century-long dispute between proponents of the Abraham and Minkowski forms (see Abraham–Minkowski controversy). The Poynting vector represents the particular case of an energy flux vector for electromagnetic energy. 
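As a numerical cross-check of the coaxial-cable result above, the following short sketch (a minimal illustration in Python; the values of V, I, R1 and R2 are assumptions chosen for the example, not taken from the text) integrates the Poynting flux S(r) = W·I/(2πr²) over the annular cross section and compares it with the circuit-theory power V·I.

import numpy as np

# Illustrative (assumed) coaxial-cable parameters -- not taken from the article.
V = 12.0       # center-conductor voltage relative to the outer conductor, volts
I = 2.0        # current carried by the center conductor, amperes
R1 = 1.0e-3    # inner-conductor radius, metres
R2 = 3.5e-3    # inner radius of the outer conductor, metres

W = V / np.log(R2 / R1)            # constant in E_r(r) = W / r

def S_z(r):
    """Axial Poynting vector E_r * H_theta between the conductors, in W/m^2."""
    E_r = W / r                    # radial electric field (Gauss's law)
    H_theta = I / (2 * np.pi * r)  # azimuthal magnetic field (Ampere's law)
    return E_r * H_theta

# Total power: midpoint-rule integral of S_z over the annulus, dA = 2*pi*r dr
N = 100_000
dr = (R2 - R1) / N
r = R1 + (np.arange(N) + 0.5) * dr
P = np.sum(S_z(r) * 2 * np.pi * r) * dr

print(f"Power from Poynting flux: {P:.6f} W")    # ~ 24.000000
print(f"Circuit-theory power V*I: {V * I:.6f} W")

Up to discretization error, the integral over the cross section reproduces P = V · I, exactly as derived above.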
However, any type of energy has its direction of movement in space, as well as its density, so energy flux vectors can be defined for other types of energy as well, e.g., for mechanical energy. The Umov–Poynting vector discovered by Nikolay Umov in 1874 describes energy flux in liquid and elastic media in a completely generalized view. Interpretation The Poynting vector appears in Poynting's theorem (see that article for the derivation), an energy-conservation law: ∂u/∂t = −∇ · S − Jf · E, where Jf is the current density of free charges and u is the electromagnetic energy density for linear, nondispersive materials, given by u = ½(E · D + B · H), where E is the electric field; D is the electric displacement field; B is the magnetic flux density; H is the magnetizing field. The first term in the right-hand side represents the electromagnetic energy flow into a small volume, while the second term subtracts the work done by the field on free electrical currents, which thereby exits from electromagnetic energy as dissipation, heat, etc. In this definition, bound electrical currents are not included in this term and instead contribute to S and u. For light in free space, the linear momentum density is S/c². For linear, nondispersive and isotropic (for simplicity) materials, the constitutive relations can be written as D = εE and B = μH, where ε is the permittivity of the material; μ is the permeability of the material. Here ε and μ are scalar, real-valued constants independent of position, direction, and frequency. In principle, this limits Poynting's theorem in this form to fields in vacuum and nondispersive linear materials. A generalization to dispersive materials is possible under certain circumstances at the cost of additional terms. One consequence of the Poynting formula is that for the electromagnetic field to do work, both magnetic and electric fields must be present. The magnetic field alone or the electric field alone cannot do any work. Plane waves In a propagating electromagnetic plane wave in an isotropic lossless medium, the instantaneous Poynting vector always points in the direction of propagation while rapidly oscillating in magnitude. This can be simply seen given that in a plane wave, the magnitude of the magnetic field H(r,t) is given by the magnitude of the electric field vector E(r,t) divided by η, the intrinsic impedance of the transmission medium: |H(r,t)| = |E(r,t)| / η, where |A| represents the vector norm of A. Since E and H are at right angles to each other, the magnitude of their cross product is the product of their magnitudes. Without loss of generality let us take X to be the direction of the electric field and Y to be the direction of the magnetic field. The instantaneous Poynting vector, given by the cross product of E and H will then be in the positive Z direction: Sz(t) = |E(t)|² / η. Finding the time-averaged power in the plane wave then requires averaging over the wave period (the inverse frequency of the wave): ⟨Sz⟩ = Erms² / η, where Erms is the root mean square (RMS) electric field amplitude. In the important case that E(t) is sinusoidally varying at some frequency with peak amplitude Epeak, Erms is Epeak/√2, with the average Poynting vector then given by: ⟨Sz⟩ = Epeak² / (2η). This is the most common form for the energy flux of a plane wave, since sinusoidal field amplitudes are most often expressed in terms of their peak values, and complicated problems are typically solved considering only one frequency at a time. However, the expression using Erms is totally general, applying, for instance, in the case of noise whose RMS amplitude can be measured but where the "peak" amplitude is meaningless. 
In free space the intrinsic impedance η is simply given by the impedance of free space η0 ≈ 377 Ω. In non-magnetic dielectrics (such as all transparent materials at optical frequencies) with a specified dielectric constant εr, or in optics with a material whose refractive index n = √εr, the intrinsic impedance is found as: η = η0 / n. In optics, the value of radiated flux crossing a surface, thus the average Poynting vector component in the direction normal to that surface, is technically known as the irradiance, more often simply referred to as the intensity (a somewhat ambiguous term). Formulation in terms of microscopic fields The "microscopic" (differential) version of Maxwell's equations admits only the fundamental fields E and B, without a built-in model of material media. Only the vacuum permittivity and permeability are used, and there is no D or H. When this model is used, the Poynting vector is defined as S = (1/μ0) E × B, where μ0 is the vacuum permeability; E is the electric field vector; B is the magnetic flux. This is actually the general expression of the Poynting vector. The corresponding form of Poynting's theorem is ∂u/∂t = −∇ · S − J · E, where J is the total current density and the energy density u is given by u = ½(ε0 E² + B²/μ0), where ε0 is the vacuum permittivity. It can be derived directly from Maxwell's equations in terms of total charge and current and the Lorentz force law only. The two alternative definitions of the Poynting vector are equal in vacuum or in non-magnetic materials, where B = μ0 H. In all other cases, they differ in that S = (1/μ0) E × B and the corresponding u are purely radiative, since the dissipation term −J · E covers the total current, while the E × H definition has contributions from bound currents which are then excluded from the dissipation term. Since only the microscopic fields E and B occur in the derivation of S = (1/μ0) E × B and the energy density, assumptions about any material present are avoided. The Poynting vector and theorem and expression for energy density are universally valid in vacuum and all materials. Time-averaged Poynting vector The above form for the Poynting vector represents the instantaneous power flow due to instantaneous electric and magnetic fields. More commonly, problems in electromagnetics are solved in terms of sinusoidally varying fields at a specified frequency. The results can then be applied more generally, for instance, by representing incoherent radiation as a superposition of such waves at different frequencies and with fluctuating amplitudes. We would thus not be considering the instantaneous E(t) and H(t) used above, but rather a complex (vector) amplitude for each which describes a coherent wave's phase (as well as amplitude) using phasor notation. These complex amplitude vectors are not functions of time, as they are understood to refer to oscillations over all time. A phasor such as Em is understood to signify a sinusoidally varying field whose instantaneous amplitude follows the real part of Em e^{jωt}, where ω is the (radian) frequency of the sinusoidal wave being considered. In the time domain, it will be seen that the instantaneous power flow will be fluctuating at a frequency of 2ω. But what is normally of interest is the average power flow in which those fluctuations are not considered. In the math below, this is accomplished by integrating over a full cycle T = 2π/ω. The following quantity, still referred to as a "Poynting vector", is expressed directly in terms of the phasors as: Sm = ½ Em × Hm*, where ∗ denotes the complex conjugate. 
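To make the phasor expression concrete, the sketch below (an illustrative check in Python; the peak field, frequency and propagation geometry are assumptions for the example, not values from the text) evaluates Sm = ½ Em × Hm* for a plane wave in free space and compares it with a direct time average of E(t) × H(t) and with the Epeak²/(2η) form given earlier.

import numpy as np

eta0 = 376.73            # impedance of free space, ohms (approximate)
E_peak = 100.0           # assumed peak electric field, V/m
omega = 2 * np.pi * 1e9  # assumed angular frequency, rad/s (1 GHz)

# Phasor amplitudes for a plane wave travelling in +z: E along x, H along y.
E_m = np.array([E_peak, 0.0, 0.0], dtype=complex)
H_m = np.array([0.0, E_peak / eta0, 0.0], dtype=complex)

# Time-averaged Poynting vector from the phasor formula S_m = 1/2 E_m x conj(H_m)
S_phasor = 0.5 * np.real(np.cross(E_m, np.conj(H_m)))

# Direct time average of the instantaneous Poynting vector over one full cycle
t = np.linspace(0.0, 2 * np.pi / omega, 20_000, endpoint=False)
E_t = np.real(np.outer(E_m, np.exp(1j * omega * t)))   # 3 x N instantaneous fields
H_t = np.real(np.outer(H_m, np.exp(1j * omega * t)))
S_avg = np.cross(E_t.T, H_t.T).mean(axis=0)

print("1/2 Re(E_m x H_m*) :", S_phasor)                # ~ [13.27, 0, 0] W/m^2
print("time-averaged E x H:", S_avg)
print("E_peak^2 / (2*eta0):", E_peak**2 / (2 * eta0))

All three numbers agree, which also illustrates why the factor of 1/2 is needed when peak (rather than RMS) phasor amplitudes are used.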
The time-averaged power flow (according to the instantaneous Poynting vector averaged over a full cycle, for instance) is then given by the real part of Sm. The imaginary part is usually ignored, however, it signifies "reactive power" such as the interference due to a standing wave or the near field of an antenna. In a single electromagnetic plane wave (rather than a standing wave which can be described as two such waves travelling in opposite directions), Em and Hm are exactly in phase, so Sm is simply a real number according to the above definition. The equivalence of Re(Sm) to the time-average of the instantaneous Poynting vector can be shown as follows. The average of the instantaneous Poynting vector S over time is given by: ⟨S⟩ = (1/T) ∫_0^T E(t) × H(t) dt = (1/T) ∫_0^T [½ Re(Em × Hm*) + ½ Re(Em × Hm e^{2jωt})] dt. The second term is the double-frequency component having an average value of zero, so we find: ⟨S⟩ = ½ Re(Em × Hm*) = Re(Sm). According to some conventions, the factor of 1/2 in the above definition may be left out. Multiplication by 1/2 is required to properly describe the power flow since the magnitudes of Em and Hm refer to the peak fields of the oscillating quantities. If rather the fields are described in terms of their root mean square (RMS) values (which are each smaller by the factor √2), then the correct average power flow is obtained without multiplication by 1/2. Resistive dissipation If a conductor has significant resistance, then, near the surface of that conductor, the Poynting vector would be tilted toward and impinge upon the conductor. Once the Poynting vector enters the conductor, it is bent to a direction that is almost perpendicular to the surface. This is a consequence of Snell's law and the very slow speed of light inside a conductor. The definition and computation of the speed of light in a conductor can be given. Inside the conductor, the Poynting vector represents energy flow from the electromagnetic field into the wire, producing resistive Joule heating in the wire. For a derivation that starts with Snell's law see Reitz page 454. Radiation pressure The density of the linear momentum of the electromagnetic field is S/c², where S is the magnitude of the Poynting vector and c is the speed of light in free space. The radiation pressure exerted by an electromagnetic wave on the surface of a target is given by P = ⟨S⟩ / c. Uniqueness of the Poynting vector The Poynting vector occurs in Poynting's theorem only through its divergence ∇ · S, that is, it is only required that the surface integral of the Poynting vector around a closed surface describe the net flow of electromagnetic energy into or out of the enclosed volume. This means that adding a solenoidal vector field (one with zero divergence) to S will result in another field that satisfies this required property of a Poynting vector field according to Poynting's theorem. Since the divergence of any curl is zero, one can add the curl of any vector field to the Poynting vector and the resulting vector field S′ will still satisfy Poynting's theorem. However even though the Poynting vector was originally formulated only for the sake of Poynting's theorem in which only its divergence appears, it turns out that the above choice of its form is unique. The following section gives an example which illustrates why it is not acceptable to add an arbitrary solenoidal field to E × H. Static fields The consideration of the Poynting vector in static fields shows the relativistic nature of the Maxwell equations and allows a better understanding of the magnetic component of the Lorentz force, q v × B. 
To illustrate, the accompanying picture is considered, which describes the Poynting vector in a cylindrical capacitor, which is located in an H field (pointing into the page) generated by a permanent magnet. Although there are only static electric and magnetic fields, the calculation of the Poynting vector produces a clockwise circular flow of electromagnetic energy, with no beginning or end. While the circulating energy flow may seem unphysical, its existence is necessary to maintain conservation of angular momentum. The momentum of an electromagnetic wave in free space is equal to its power divided by c, the speed of light. Therefore, the circular flow of electromagnetic energy implies an angular momentum. If one were to connect a wire between the two plates of the charged capacitor, then there would be a Lorentz force on that wire while the capacitor is discharging due to the discharge current and the crossed magnetic field; that force would be tangential to the central axis and thus add angular momentum to the system. That angular momentum would match the "hidden" angular momentum, revealed by the Poynting vector, circulating before the capacitor was discharged.
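The circulating energy flow described above can be made concrete with a small sketch (illustrative Python; the field magnitudes and geometry are assumed for the purpose of the example, not taken from the text). It evaluates S = E × H at a few points for a radial electrostatic field crossed with a uniform magnetic field along the axis, and shows that S is everywhere azimuthal, i.e. it circulates around the axis with no beginning or end.

import numpy as np

# Assumed, illustrative static fields: a radial E field (cylindrical capacitor)
# and a uniform H field along -z ("into the page"). Magnitudes are arbitrary;
# only the geometry of S = E x H matters here.
H = np.array([0.0, 0.0, -100.0])                 # A/m

def E_field(point):
    """Outward radial electrostatic field with a 1/r profile, V/m."""
    x, y, _ = point
    r = np.hypot(x, y)
    return (50.0 / r) * np.array([x / r, y / r, 0.0])

for p in [np.array([0.02, 0.0, 0.0]),
          np.array([0.0, 0.02, 0.0]),
          np.array([-0.02, 0.0, 0.0]),
          np.array([0.0, -0.02, 0.0])]:
    S = np.cross(E_field(p), H)                  # Poynting vector, W/m^2
    radial_part = np.dot(S, p) / np.linalg.norm(p)
    print(p[:2], "-> S =", np.round(S), " radial component =", round(radial_part, 6))

At every sampled point the radial component of S is zero: the computed energy flux is purely tangential, circulating around the capacitor axis just as the static-field example above requires.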
Physical sciences
Electromagnetic radiation
Physics
49725
https://en.wikipedia.org/wiki/American%20bison
American bison
The American bison (Bison bison; : bison), commonly known as the American buffalo, or simply buffalo (not to be confused with true buffalo), is a species of bison that is endemic (or native) to North America. It is one of two extant species of bison, along with the European bison. Its historical range circa 9000 BC is referred to as the great bison belt, a tract of rich grassland spanning from Alaska south to the Gulf of Mexico, and east to the Atlantic Seaboard (nearly to the Atlantic tidewater in some areas), as far north as New York, south to Georgia, and according to some sources, further south to northern Florida, with sightings in North Carolina near Buffalo Ford on the Catawba River as late as 1750. Two subspecies or ecotypes have been described: the plains bison (B. b. bison), smaller and with a more rounded hump; and the wood bison (B. b. athabascae), the larger of the two and having a taller, square hump. Furthermore, the plains bison has been suggested to consist of a northern plains (B. b. montanae) and a southern plains (B. b. bison) subspecies, bringing the total to three. However, this is generally not supported. The wood bison is one of the largest wild species of extant bovid in the world, surpassed only by the Asian gaur. Among extant land animals in North America, the bison is the heaviest and the longest, and the second tallest after the moose. Once roaming in vast herds, the species nearly became extinct by a combination of commercial hunting and slaughter in the 19th century and introduction of bovine diseases from domestic cattle. With an estimated population of 60 million in the late 18th century, the species was culled down to just 541 animals by 1889 as part of the subjugation of the Native Americans, because the American bison was a major resource for their traditional way of life (food source, hides for clothing and shelter, and horns and bones for tools). Recovery efforts expanded in the mid-20th century, with a resurgence to roughly 31,000 wild bison as of March 2019. For many years, the population was primarily found in a few national parks and reserves. Through multiple reintroductions, the species now freely roams wild in several regions in the United States, Canada and Mexico. The American bison has also been introduced to Yakutia in Russia. Spanning back millennia, Indigenous peoples of the Great Plains have had cultural and spiritual connections to the American bison. It is the national mammal of the United States. Etymology In American English, both buffalo and bison are considered correct terms for the American bison. However, in British English, the word buffalo is reserved for the African buffalo and water buffalo and not used for the bison. In English usage, the term buffalo was used to refer to the American mammal as early as 1625. The word bison was applied in the 1690s. Buffalo was applied to the American bison by Samuel de Champlain as the French word buffles in 1616 (published 1619), after seeing skins and a drawing. These were shown to him by members of the Nipissing First Nation, who said they traveled forty days (from east of Lake Huron) to trade with another nation who hunted the animals. Buffel in turn comes from Portuguese bufalo (water buffalo), which comes from Latin bufalus (an antelope, gazelle, or wild ox), from Greek boubalos. From the same Greek word boubalos we also get the Bubal hartebeest. 
Bison was borrowed from French bison in the early 17th century, from Latin bison (aurochs), from a Proto-Germanic word similar to wisent and, per Etymonline, first applied to American buffalo in the 1690s. In Plains Indian languages in general, male and female bison are distinguished, with each having a different designation rather than there being a single generic word covering both sexes. Thus: in Arapaho: (bison cow), (bison bull) in Lakota: (bison cow), (bison bull) Such a distinction is not a general feature of the language (for example, Arapaho possesses gender-neutral terms for other large mammals such as elk, mule deer, etc.), and so presumably is due to the special significance of the bison in Plains Indian life and culture. Description A bison has a shaggy, long, dark-brown winter coat, and a lighter-weight, lighter-brown summer coat. Male bison are significantly larger and heavier than females. Plains bison are often in the smaller range of sizes, and wood bison in the larger range. Head-rump lengths at maximum up to for males and for females long and the tail adding . Heights at withers in the species can reach up to for B. b. bison and B. b. athabascae respectively. Typically weights can range from , with medians of (B.b. bison) and (B.b.athabascae) in males, and with medians of in females, although the lowest weights probably representing typical weight around the age of sexual maturity at 2 to 3 years of age. The heaviest wild bull for B.b.bison ever recorded weighed while there had been bulls estimated to be . B.b.athabascae is significantly larger and heavier on average than B.b.bison while the number of recorded samples for the former was limited after the rediscovery of a relatively pure herd. Elk Island National Park, which has wild populations of both wood and plains bison, has recorded maximum weights for bull bison of 1186 kg (plains) and 1099 kg (wood), but noted that 3/4 of all bison over 1000 kg were wood bison. When raised in captivity and farmed for meat, the bison can grow unnaturally heavy and the largest semidomestic bison weighed . The heads and forequarters are massive, and both sexes have short, curved horns that can grow up to long with to width, which they use in fighting for status within the herd and for defense. Bison are herbivores, grazing on the grasses and sedges of the North American prairies. Their daily schedule involves two-hour periods of grazing, resting, and cud chewing, then moving to a new location to graze again. Sexually mature young bulls may try to start mating with cows by the age of two or three years, but if more mature bulls are present, they may not be able to compete until they reach five years of age. For the first two months of life, calves are lighter in color than mature bison. Though extremely rare, white buffalos exist. Evolution Bison are members of the tribe Bovini. Genetic evidence from nuclear DNA indicates that the closest living relatives of bison are yaks, with bison being nested within the genus Bos, rendering Bos without including bison paraphyletic. While nuclear DNA indicates that the two living bison species are each other's closest living relatives, the mitochondrial DNA of European bison is more closely related to that of domestic cattle and aurochs, which is suggested to be the result of either incomplete lineage sorting or ancient introgression. Bison first appeared in Asia during the Early Pleistocene, around 2.6 million years ago. 
Bison only arrived in North America 195,000 to 135,000 years ago, during the late Middle Pleistocene, descending from the widespread Siberian steppe bison (Bison priscus), which had migrated through Beringia. Following their first appearance in North America, the bison rapidly differentiated into new species, such as the largest of all bison, the long-horned Bison latifrons, along with Bison antiquus. The first appearance of bison in North America is considered to define the regional Rancholabrean faunal stage, due to its major impact on the ecology of the continent. Modern American bison are thought to have evolved from B. antiquus at the end of the Late Pleistocene - beginning of the Holocene, with likely intermediates between the species referred to as Bison "occidentalis". The North American bison population experienced demographic stability during the Middle Holocene but began a slow decline in the Late Holocene beginning about 2,700 BP. Differences from European bison Although they are superficially similar, the American and European bison exhibit a number of physical and behavioral differences. Adult American bison are slightly heavier on average because of their less rangy build and have shorter legs, which render them slightly shorter at the shoulder. American bison tend to graze more and browse less than their European relatives because their necks are set differently. Compared to the nose of the American bison, that of the European species is set farther forward than the forehead when the neck is in a neutral position. The body of the American bison is hairier, though its tail has less hair than that of the European bison. The horns of the European bison point forward through the plane of its face, making it more adept at fighting through the interlocking of horns in the same manner as domestic cattle, unlike the American bison, which favors charging. American bison are more easily tamed than the European and breed more readily with domestic cattle. Crossbreeding with cattle During the population bottleneck, after the great slaughter of American bison during the 19th century, the number of bison remaining alive in North America declined to as low as 541. During that period, a handful of ranchers gathered remnants of the existing herds to save the species from extinction. These ranchers bred some of the bison with cattle in an effort to produce "cattalo" or "beefalo". Accidental crossings were also known to occur. Generally, male domestic bulls were crossed with bison cows, producing offspring of which only the females were fertile. The crossbred animals did not demonstrate any form of hybrid vigor, so the practice was abandoned. The proportion of cattle DNA that has been measured in introgressed individuals and bison herds today is typically quite low, ranging from 0.56 to 1.8%. Many claimed "beefalo", even those regarded as pedigree, have no detectable bison ancestry. In the United States, many ranchers are now using DNA testing to cull the residual cattle genetics from their bison herds. The U.S. National Bison Association has adopted a code of ethics which prohibits its members from deliberately crossbreeding bison with any other species. Range and population Population estimates in 2010 ranged from 400,000 to 500,000, with approximately 20,500 animals in 62 conservation herds and the remainder in approximately 6,400 commercial herds. According to the IUCN, roughly 15,000 bison are considered wild, free-range bison not primarily confined by fencing. 
The Nature Conservancy (TNC) has reintroduced bison to over a dozen nature preserves around the United States. In October 2016, TNC established its easternmost bison herd in the country, at Kankakee Sands nature preserve in Morocco, Newton County, Indiana. In 2014, U.S. Tribes and Canadian First Nations signed a treaty to help with the restoration of bison, the first to be signed in nearly 150 years. Habitat and trails American bison live in river valleys, and on prairies and plains. Typical habitat is open or semiopen grasslands, as well as sagebrush, semiarid lands, and scrublands. Some lightly wooded areas are also known historically to have supported bison. Bison also graze in hilly or mountainous areas where the slopes are not steep. Though not particularly known as high-altitude animals, bison in the Yellowstone Park bison herd are frequently found at elevations above , and the Henry Mountains bison herd is found on the plains around the Henry Mountains, Utah, as well as in mountain valleys of the Henry Mountains to an altitude of . Reintroduced plains bison in Banff National Park have been observed to roam mountainous areas, including high ridges and steep drainages, and archaeological finds indicate that some bison historically may have spent their lives within mountain ranges while others may have migrated in and out of mountain ranges. Those in Yukon, Canada, typically summer in alpine plateaus above treeline. The first thoroughfares of North America, except for the time-obliterated paths of mastodon or muskox and the routes of the mound builders, were the traces made by bison and deer in seasonal migration and between feeding grounds and salt licks. Many of these routes, hammered by countless hoofs instinctively following watersheds and the crests of ridges in avoidance of lower places' summer muck and winter snowdrifts, and often following the routes of least resistance across rolling terrain, were followed by the aboriginal North Americans as routes to hunting grounds and as warriors' paths. They were invaluable to explorers and were adopted by pioneers. Bison traces were characteristically north and south along seasonal migration routes, but several key east–west buffalo trails were used later as routes for railways. Some of these include the Cumberland Gap through the Blue Ridge Mountains to upper Kentucky. A heavily used trace crossed the Ohio River at the Falls of the Ohio and ran west, crossing the Wabash River near Vincennes, Indiana. In Senator Thomas Hart Benton's phrase saluting these sagacious path-makers, the bison paved the way for the railroads to the Pacific. Mexico The southern extent of the historic range of the American bison includes northern Mexico and adjoining areas in the United States as documented by archeological records and historical accounts from Mexican archives from 700 CE to the 19th century. The Janos-Hidalgo bison herd has ranged between Chihuahua, Mexico, and New Mexico, United States, since at least the 1920s. The persistence of this herd suggests that habitat for bison is suitable in northern Mexico. In 2009, genetically pure bison were reintroduced to the Janos Biosphere Reserve in northern Chihuahua adding to the Mexican bison population. In 2020, the second herd was formed in Maderas del Carmen. A private reserve named Jagüey de Ferniza has kept bisons since before the above-mentioned reintroductions in Coahuila. 
Introductions to Siberia Since 2006, an outherd of wood bison sent from Alberta's Elk Island National Park was established in Yakutia, Russia as a practice of pleistocene rewilding; wood bison are the most similar to the extinct steppe bison species (Bison priscus). The bison are adapting well to the cold climate, and Yakutia's Red List officially registered the species in 2019; a second herd was formed in 2020. In Pleistocene Park, there are also 24 plains bison as wood bison could not be acquired. Behavior and ecology Bison are migratory and herd migrations can be directional as well as altitudinal in some areas. Bison have usual daily movements between foraging sites during the summer. In the Hayden Valley, Wyoming, bison have been recorded traveling, on average, per day. The summer ranges of bison appear to be influenced by seasonal vegetation changes, interspersion and size of foraging sites, the rut, and the number of biting insects. The size of preserve and availability of water may also be a factor. Bison are largely grazers, eating primarily grasses and sedges. On shortgrass pasture, bison predominately consume warm-season grasses. On mixed prairie, cool-season grasses, including some sedges, apparently compose 79–96% of their diet. In montane and northern areas, sedges are selected throughout the year. Bison also drink water or consume snow on a daily basis. Social behavior and reproduction Female bison live in maternal herds which include other females and their offspring. Male offspring leave their maternal herd when around three years old and either live alone or join other males in bachelor herds. Male and female herds usually do not mingle until the breeding season, which can occur from July through September. However, female herds may also contain a few older males. During the breeding season, dominant bulls maintain a small harem of females for mating. Individual bulls "tend" cows until allowed to mate, by following them around and chasing away rival males. The tending bull shields the female's vision with his body so she will not see any other challenging males. A challenging bull may bellow or roar to get a female's attention, and the tending bull has to bellow or roar back. The most dominant bulls mate in the first 2–3 weeks of the season. More subordinate bulls mate with any remaining estrous cow that has not mated yet. Male bison play no part in raising the young. Bison herds have dominance hierarchies that exist for both males and females. A bison's dominance is related to its birth date. Bison born earlier in the breeding season are more likely to be larger and more dominant as adults. Thus, bison are able to pass on their dominance to their offspring as dominant bison breed earlier in the season. In addition to dominance, the older bison of a generation also have a higher fertility rate than the younger ones. Bison mate in August and September; gestation is 285 days. A single reddish-brown calf nurses until the next calf is born. If the cow is not pregnant, a calf will nurse for 18 months. Cows nurse their calves for at least 7 or 8 months, but most calves seem to be weaned before the end of their first year. At three years of age, bison cows are mature enough to produce a calf. The birthing period for bison in boreal biomes is protracted compared to that of other northern ungulates, such as moose and caribou. Bison have a life expectancy around 15 years in the wild and up to 25 years in captivity. 
However, males and females from a hunted population also subject to wolf predation in northern Canada have been reported to live to 22 and 25 years of age, respectively. Bison have been observed to display homosexual behaviors, males much more so than females. In the case of males, it is unlikely to be related to dominance, but rather to social bonding or gaining sexual experience. Horning Bison mate in late spring and summer in more open plain areas. During fall and winter, bison tend to gather in more wooded areas. During this time, bison partake in horning behaviors. They rub their horns against trees, young saplings, and even utility poles. Aromatic trees like cedars and pine seem to be preferred. Horning appears to be associated with insect defense, as it occurs most often in the fall when the insect population is at its highest. Cedar and pines emit an aroma after bison horn them and this seems to be used as a deterrent for insects. Wallowing behavior A bison wallow is a shallow depression in the soil, which bison use either wet or dry. Bison roll in these depressions, covering themselves with dust or mud. Past and current hypotheses to explain the purpose of wallowing include grooming associated with shedding, male-male interaction (typically rutting), social behavior for group cohesion, play, relief from skin irritation due to biting insects, reduction of ectoparasite (tick and lice) load, and thermoregulation. Bison wallowing has important ecosystem engineering effects and enhances plant and animal diversity on prairies. Predation While often secure from predation because of their size and strength, in some areas, vulnerable individuals are regularly preyed upon by wolves. Wolf predation typically peaks in late winter, when elk migrates south and bison are distressed with heavy snows and shortages of food sources, with attacks usually being concentrated on weakened and injured cows and calves. Wolves more actively target herds with calves than those without. The length of a predation episode varies, ranging from a few minutes to over nine hours. Bison calves use five apparent defense strategies in protecting themselves from wolves: running to a cow, running to a herd, running to the nearest bull, running in the front or center of a stampeding herd, and entering a lake or river or other body of water. When fleeing wolves in open areas, cows with young calves take the lead, while bulls take to the rear of the herds to guard the cows' escape. Bison typically ignore wolves not displaying hunting behavior. Wolf packs specializing in bison tend to have more males because their larger size than females allows them to wrestle prey to the ground more effectively. Healthy, mature bulls in herds rarely fall prey. Grizzly bears are known to feed on carcass and may steal wolves' kills. Grizzlies can sometimes kill calves as well as old, injured, or sick adult bison, but direct killing of adult bison is rare even when grizzlies target lone and injured young individuals. Attacking a healthy bison is risky for a bear, who itself may be killed instead. Dangers to humans Bison are among the most dangerous animals encountered by visitors to the various North American national parks and will attack humans if provoked. They appear slow because of their lethargic movements but can easily outrun humans; bison have been observed running as fast as . Bison may approach people for curiosity. Close encounters, including to touch the animals, can be dangerous, and gunshots do not startle them. 
Between 1980 and 1999, more than three times as many people in Yellowstone National Park were injured by bison than by bears. During this period, bison charged and injured 79 people, with injuries ranging from goring puncture wounds and broken bones to bruises and abrasions. Bears injured 24 people during the same time. Three people died from the injuries inflicted—one person by bison in 1983, and two people by bears in 1984 and 1986. Genetics A major problem that bison face today is a lack of genetic diversity due to the population bottleneck the species experienced during its near-extinction in the late 1800s. Another genetic issue is the entry of genes from domestic cattle into the bison population, through hybridization. Officially, the "American buffalo" is classified by the United States government as a type of cattle, and the government allows private herds to be managed as such. This is a reflection of the characteristics that bison share with cattle. Though the American bison is a separate species and usually regarded as being in a separate genus from domestic cattle (Bos taurus), they have a lot of genetic compatibility with cattle. American bison can interbreed with cattle, although only the female offspring are fertile in the first generation. These female hybrids can be bred back to either bison or domestic bulls, resulting in either 1/4 or 3/4 bison young. Female offspring from this cross are also fertile, but males are not reliably fertile unless they are either bison or domestic. Moreover, when they do interbreed, crossbreed animals in the first generation tend to look very much like purebred bison, so appearance is completely unreliable as a means of determining which is a purebred bison, a crossbred cow and a crossbred bison. Many ranchers have deliberately crossbred their cattle with bison, and some natural hybridization could be expected in areas where cattle and bison occur in the same range. Since cattle and bison eat similar food and tolerate similar conditions, they have often been in the same range together in the past, and opportunity for crossbreeding may sometimes have been common. In recent decades, tests were developed to determine the source of mitochondrial DNA in cattle and bison, and most private "buffalo" herds were actually crossbred with cattle, and even most state and federal buffalo herds had some cattle DNA. With the advent of nuclear microsatellite DNA testing, the number of herds known to contain cattle genes has increased. As of 2011, though about 500,000 bison existed on private ranches and in public herds, perhaps only 15,000 to 25,000 of these bison were pure and not actually bison-cattle hybrids. DNA from domestic cattle (Bos taurus) has been found in almost all examined bison herds. Significant public bison herds that do not appear to have hybridized domestic cattle genes are the Yellowstone Park bison herd, the Henry Mountains bison herd, which was started with bison taken from Yellowstone Park, the Wind Cave bison herd, and the Wood Buffalo National Park bison herd and subsidiary herds started from it, in Canada. A landmark study of bison genetics performed by James Derr of Texas A&M University corroborated this. The Derr study was undertaken in an attempt to determine what genetic problems bison might face as they repopulate former areas, and it noted that bison seem to be adapting successfully, despite their apparent genetic bottleneck. 
One possible explanation for this might be the small amount of domestic cattle genes that are now in most bison populations, though this is not the only possible explanation for bison success. In the study, cattle genes were also found in small amounts throughout most national, state, and private herds. "The hybridization experiments conducted by some of the owners of the five foundation herds of the late 1800s, have left a legacy of a small amount of cattle genetics in many of our existing bison herds," said Derr. "All of the state owned bison herds tested (except for possibly one) contain animals with domestic cattle mtDNA." It appears that the one state herd that had no cattle genes was the Henry Mountains bison herd; the Henry Mountain herd was started initially with transplanted animals from Yellowstone Park. However, the extension of this herd into the Book Cliffs of central Utah involved mixing the founders with additional bison from another source, so it is not known if the Book Cliffs extension of the herd is also free of cattle hybridization. A separate study by Wilson and Strobeck, published in Genome, was done to define the relationships between different herds of bison in the United States and Canada, and to determine whether the bison at Wood Buffalo National Park in Canada and the Yellowstone Park bison herd were possibly separate subspecies. The Wood Buffalo Park bison were determined to actually be crossbreeds between plains and wood bison, but their predominant genetic makeup was that of the expected "wood buffalo". However, the Yellowstone Park bison herd was pure plains bison, and not any of the other previously suggested subspecies. Another finding was that the bison in the Antelope Island herd in Utah appeared to be more distantly related to other plains bison in general than any other plains bison group that was tested, though this might be due to genetic drift caused by the small size of only 12 individuals in the founder population. A side finding of this was that the Antelope Island bison herd appears to be most closely related to the Wood Buffalo National Park bison herd, though the Antelope Island bison are actually plains bison. In order to bolster the genetic diversity of the American bison, the National Park Service alongside the Department of the Interior announced the 2020 Bison Conservation Initiative on May 7, 2020. This initiative focuses on maintaining the genetic diversity of the metapopulation rather than individual herds. Small populations of bison are at considerably larger risk due to their decreased gene pool and are susceptible to catastrophic events more so than larger herds. The 2020 Bison Conservation Initiative aims to translocate up to three bison every five to ten years between the Department of the Interior's herds. Specific smaller herds will require a more intense management plan. Translocated bison will also be screened for any health defects such as infection of brucellosis bacteria as to not put the larger herd at risk. Population bottleneck and near extinction Bison went from numbering an estimated 60 million individuals before the 1870s to becoming nearly extinct in the 1880s. This was due to the mass slaughtering of bison during the 1870s, which caused the plains bison population to undergo a population bottleneck. The bottleneck resulted in a founding population of around 100 individuals, split into six herds, five of which were managed by private ranchers and one managed by the New York Zoological Park (now the Bronx Zoo). 
Additionally, a wild herd consisting of 25 individuals in Yellowstone National Park survived the bottleneck. Each of the privately ranched herds had an initial effective population size (Ne) of an estimated 5 to 7 individuals, for a total combined effective population size of between 30 and 50 individuals, from which all of the modern plains bison descend. While these herds have remained mostly isolated, some more than others, there has been some interbreeding between the herds over the past 150 years. The conservation efforts and copious amounts of data taken on American bison populations allow for American bison to serve as a useful study case of population bottlenecking and its effects. This is especially true of the Texas State Bison Herd, which underwent very extreme genetic bottlenecking, with a founding population of only 5 individuals. Texas State Bison Herd The Texas State Bison Herd (TSBH), also known as the Goodnight herd, was established by Charles Goodnight in the mid-1880s with five wild-caught calves. In 1887, the herd consisted of 13 individuals; in 1910, the population consisted of 125 individuals; and in the 1920s, the population ranged from 200 to 250 individuals. In 1929, Goodnight died and the herd switched hands multiple times, leaving the population of the herd unknown from 1930 until the herd was donated to the State of Texas in 1997, with a population of 36 individuals, solely descended from the original five calves. By 2002, the population of the TSBH consisted of 40 individuals and had concerningly low birth rates and high rates of calf mortality. This led to extra attention being given to this herd by conservationists who then performed significant amounts of genetic testing. Goodnight was an advocate for the hybridization of bison with cattle, in the hopes of creating a stronger and healthier breed. When the herd was donated to the State of Texas, genetic testing revealed that 6 out of 36 individuals still carried cattle mitochondrial DNA. Researchers found that the average number of alleles per locus and the heterozygosity levels (a measure of genetic diversity, where high heterozygosity is representative of high genetic diversity) for the TSBH were significantly lower than that of the Yellowstone National Park bison population and the Theodore Roosevelt National Park bison population. Additionally, of the 54 nuclear microsatellites that were examined, the TSBH had 8 monomorphic loci (i.e., each loci had only one allele), whereas in both the Yellowstone and Theodore Roosevelt herds there was only one monomorphic locus, indicating a much lower level of genetic diversity in the TSBH. The Yellowstone herd had an average number of alleles per locus of 4.75, the Theodore Roosevelt National Park herd had an average of 4.15 alleles per locus, but the TSBH only had an average of 2.54 alleles per locus, statistically significantly lower than the others. The heterozygosity level of the Yellowstone, Theodore Roosevelt, and TSBH populations were 0.63, 0.57, and 0.38 respectively, with the TSBH again having a statistically significantly lower value. This low genetic diversity found in TSBH is likely due to the critically low starting population, several additional bottlenecks throughout the herd's history–leading to inbreeding depression–, and a continuously low population allowing for genetic drift to have a large effect. Before any addition of new individuals, the rate of loss of genetic diversity was estimated to be between 30 and 40% over the proceeding 50 years. 
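The pace at which a small, closed herd loses diversity through drift can be illustrated with the textbook relationship H_t = H_0 · (1 − 1/(2Ne))^t. The short Python sketch below is a generic illustration with assumed starting values; it is not the population model or the parameter set used in the studies cited above.

# Generic illustration of heterozygosity loss under genetic drift.
# The starting heterozygosity and the effective population sizes below are
# assumed for illustration; they are not parameters from the cited studies.
def heterozygosity(h0: float, ne: float, generations: int) -> float:
    """Expected heterozygosity after t generations: H_t = H_0 * (1 - 1/(2*Ne))**t."""
    return h0 * (1.0 - 1.0 / (2.0 * ne)) ** generations

h0 = 0.60                          # assumed founding heterozygosity
for ne in (5, 50, 500):            # effective population sizes to compare
    h20 = heterozygosity(h0, ne, 20)
    lost = 100.0 * (1.0 - h20 / h0)
    print(f"Ne = {ne:3d}: H after 20 generations = {h20:.3f}  ({lost:.1f}% of diversity lost)")

With an effective population size of only a handful of animals, most founding diversity is expected to be lost within a few dozen generations, which is the qualitative point behind the concern over the TSBH's tiny founding population and the later decision to introduce outside animals.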
The inbreeding depression resulting from the multiple extreme population bottlenecks in the TSBH led to a coefficient of inbreeding of 0.367, equal to the level of inbreeding that results from two generations of full-siblings mating. The Texas State Bison Herd is also a useful example of the deleterious effects of extreme population bottlenecking, with an average natality rate of 0.376 offspring per female and a 1st-year mortality rate of 52.6% from 1997 to 2002, compared to an average natality rate of 0.560 offspring per female and a 1st-year mortality rate of 4.2% for the other bison herds. Additionally, if it were not for the intervention of conservationists, the Texas State Bison Herd would have most likely gone extinct, as the population bottleneck would have proven to be too severe. Multiple population models based on the genetics of the TSBH in the early 2000s predicted a 99% chance of extinction of the TSBH in less than 50 years, with an estimation in 2004 giving the TSBH a 99% chance of extinction in 41 years without the introduction of any outside individuals (Halbert et al. 2004). Importantly for conservation, another simulation predicted that the addition of multiple (3–9) outside male bison into the herd would increase genetic diversity enough to give the herd a 100% chance of surviving for another 100 years. Conservation efforts have led the current TSBH population to be at the carrying capacity of their habitat, at around 300 individuals. Yellowstone National Park Bison Herd The Yellowstone National Park Bison herd started with only 25 individuals, and there was evidence of two population bottlenecking events from 1896 to 1912, with a population ranging between 25 and 50 individuals during this time. In 1902, 18 female and 3 male bison from outside herds–the Pablo-Allard herd and Goodnight (TSBH) herds respectively–were introduced to the Yellowstone herd. After the addition of those individuals, the effective population size is estimated to have been Ne=7.2 individuals. The Yellowstone herd was kept completely isolated from 1902 to around 1920, and these previously mentioned founders contributed between 60 and 70% of the genetics of the current bison population at Yellowstone. Similar to the Texas State Bison Herd, the introduction of new individuals into the population in 1902 likely was the savior of this herd, which now numbers around 5,900 individuals as of summer 2022. Population recovery From the late 19th century onwards, the bison population gradually rose from 325 in 1884 to 500,000 in 2017, as a result of careful preservation and a general population boom. Although they are no longer classified as endangered, there are still conservation efforts in order to prevent population crashes down the line. Hunting Buffalo hunting, i.e. hunting of the American bison, was an activity fundamental to the Indigenous peoples of the Great Plains, providing more than 150 uses for all parts of the animal, including being a major food source, hides for clothing and shelter, bones and horns as tools as well as ceremonial and adornment uses. Bison hunting was later adopted by American professional hunters, as well as by the U.S. government, in an effort to sabotage the central resource of some American Indian Nations during the later portions of the American Indian Wars, leading to the near-extinction of the species around 1890. For many tribes the buffalo was an integral part of life—something guaranteed to them by the Creator. 
In fact, for some Plains indigenous peoples, bison are known as the first people. The concept of species extinction was foreign to many tribes. Thus, when the U.S. government began to massacre the buffalo, it was particularly harrowing to the Indigenous people. As Crow chief Plenty Coups described it: "When the buffalo went away the hearts of my people fell to the ground, and they could not lift them up again. After this nothing happened. There was little singing anywhere." Spiritual loss was rampant; bison were an integral part of traditional tribal societies, and they would frequently take part in ceremonies for each bison they killed to honor its sacrifice. In order to boost morale during this time, Sioux and other tribes took part in the Ghost Dance, which consisted of hundreds of people dancing until 100 persons were lying unconscious. Many conservation measures have been taken by Native Americans, with the Inter Tribal Bison Council being one of the most significant. Formed in 1990, it comprises 56 tribes in 19 states. These tribes represent a collective herd of more than 15,000 bison and focus on reestablishing herds on tribal lands in order to promote culture, revitalize spiritual solidarity, and restore the ecosystem. Some Inter Tribal Bison Council members argue that the bison's economic value is one of the main factors driving its resurgence. Bison serve as a low-cost substitute for cattle, and they can withstand the winters in the Plains region far easier than cattle. As livestock Bison are increasingly raised for meat, hide, wool, and dairy products. The majority of American bison in the world are raised for human consumption or fur clothing. Bison meat is generally considered to taste very similar to beef, but is lower in fat and cholesterol, yet higher in protein than beef, which has led to the development of beefalo, a fertile hybrid of bison and domestic cattle. In 2005, about 35,000 bison were processed for meat in the U.S., with the National Bison Association and USDA providing a "Certified American Buffalo" program with birth-to-consumer tracking of bison via RFID ear tags. There is kosher bison meat; these bison are slaughtered at one of the few kosher mammal slaughterhouses in the U.S., and the meat is then distributed nationwide. Bison are found in publicly and privately held herds. Custer State Park in South Dakota is home to 1,500 bison, one of the largest publicly held herds in the world, but some question the genetic purity of the animals. Wildlife officials believe that free roaming herds with minimal cattle introgression on public lands in North America can be found only in: the Yellowstone Park bison herd; the Henry Mountains bison herd at the Book Cliffs and Henry Mountains in Utah; at Wind Cave National Park in South Dakota; Fort Peck Indian Reservation in Montana; Mackenzie Bison Sanctuary in the Northwest Territories; Elk Island National Park and Wood Buffalo National Park in Alberta; Grasslands National Park and Prince Albert National Park in Saskatchewan. Another population, the Antelope Island bison herd on Antelope Island in Utah, consisting of 550 to 700 bison, is also one of the largest and oldest public herds in the United States, but the bison in that herd are considered to be only semifree roaming, since they are confined to the Antelope Island. In addition, recent genetic studies indicate that, like most bison herds, the Antelope Island bison herd has a small number of genes from domestic cattle. 
In 2002, the United States government donated some bison calves from South Dakota and Colorado to the Mexican government. Their descendants live in the Mexican nature reserves El Uno Ranch at Janos and Santa Elena Canyon, Chihuahua, and Boquillas del Carmen, Coahuila, located near the southern banks of the Rio Grande, and around the grassland state line with Texas and New Mexico. Recent genetic studies of privately owned herds of bison show that many of them include animals with genes from domestic cattle. For example, the herd on Santa Catalina Island, California, isolated since 1924 after being brought there for a movie shoot, were found to have cattle introgression. As few as 12,000 to 15,000 pure bison are estimated to remain in the world. The numbers are uncertain because the tests used to date—mitochondrial DNA analysis—indicate only if the maternal line (back from mother to mother) ever included domesticated bovines, thus say nothing about possible male input in the process. Most hybrids were found to look exactly like purebred bison; therefore, appearance is not a good indicator of genetics. The size of the Canadian domesticated herd (genetic questions aside) grew dramatically through the 1990s and 2000s. The 2006 Census of Agriculture reported the Canadian herd at 195,728 head, a 34.9% increase since 2001. Of this total, over 95% were located in Western Canada, and less than 5% in Eastern Canada. Alberta was the province with the largest herd, accounting for 49.7% of the herd and 45.8% of the farms. The next-largest herds were in Saskatchewan (23.9%), Manitoba (10%), and British Columbia (6%). The main producing regions were in the northern parts of the Canadian prairies, specifically in the parkland belt, with the Peace River region (shared between Alberta and British Columbia) being the most important cluster, accounting for 14.4% of the national herd. Canada also exports bison meat, totaling in 2006. A proposal known as Buffalo Commons has been suggested by a handful of academics and policymakers to restore large parts of the drier portion of the Great Plains to native prairie grazed by bison. Proponents argue that current agricultural use of the shortgrass prairie is not sustainable, pointing to periodic disasters, including the Dust Bowl, and continuing significant human population loss over the last 60 years. However, this plan is opposed by some who live in the areas in question. Domestication Despite being the closest relatives of domestic cattle native to North America, bison were never domesticated by Native Americans. Later attempts of domestication by Europeans prior to the 20th century met with limited success. Bison were described as having a "wild and ungovernable temper"; they can jump close to vertically, and run when agitated. This agility and speed, combined with their great size and weight, makes bison herds difficult to confine, as they can easily escape or destroy most fencing systems, including most razor wire. The most successful systems involve large, fences made from welded steel I beams sunk at least into concrete. These fencing systems, while expensive, require very little maintenance. Furthermore, making the fence sections overlap so the grassy areas beyond are not visible prevents the bison from trying to get to new range. 
It has been alleged that the Aztec emperor Moctezuma II kept a bison at his private zoo (Totocalli) in Tenochtitlan, observed by the first Spanish conquistadors in the region; this would provide proof of Native Americans keeping bison in captivity, serve as an extremely far range extension south, and be the very first observation of bison by European colonists. These claims originate from Juan Díaz de Solís's interpretation of Bernal Diaz del Castillo's accounts of the totocalli, in which de Solís claims the conquistadors observed "the Mexican Bull; a wonderful composition of divers Animals." However, further analysis of del Castillo's account shows no such mention of such an animal, and the mention of this "Mexican Bull" was likely an embellishment by de Solís. As a symbol Native Americans Among many Native American tribes, especially the Plains Indians, the bison is considered a sacred animal and religious symbol. According to University of Montana anthropology and Native American studies professor S. Neyooxet Greymorning, "The creation stories of where buffalo came from put them in a very spiritual place among many tribes. The buffalo crossed many different areas and functions, and it was utilized in many ways. It was used in ceremonies, as well as to make tipi covers that provided homes for people, utensils, shields, weapons and parts were used for sewing with the sinew." The Sioux consider the birth of a white buffalo to be the return of White Buffalo Calf Woman, their primary cultural prophet and the bringer of their "Seven Sacred Rites". Among the Mandan and Hidatsa, the White Buffalo Cow Society was the most sacred of societies for women. North America The American bison is often used in North America in official seals, flags, and logos. In 2016, the American bison became the national mammal of the United States. The bison is a popular symbol in the Great Plains states: Kansas, Oklahoma, and Wyoming have adopted the animal as their official state mammal, and many sports teams have chosen the bison as their mascot. In Canada, the bison is the official animal of the province of Manitoba and appears on the Manitoba flag. It is also used in the official coat of arms of the Royal Canadian Mounted Police. Several American coins feature the bison, most famously on the reverse side of the "buffalo nickel" from 1913 to 1938. In 2005, the United States Mint coined a nickel with a new depiction of the bison as part of its "Westward Journey" series. The Kansas and North Dakota state quarters, part of the "50 State Quarter" series, each feature bison. The Kansas state quarter has only the bison and does not feature any writing, while the North Dakota state quarter has two bison. The Montana state quarter prominently features a bison skull over a landscape. The Yellowstone National Park quarter also features a bison standing next to a geyser. Other institutions which have adopted the bison as a symbol or mascot include:
Biology and health sciences
Artiodactyla
null
49786
https://en.wikipedia.org/wiki/Microclimate
Microclimate
A microclimate (or micro-climate) is a local set of atmospheric conditions that differ from those in the surrounding areas, often slightly but sometimes substantially. The term may refer to areas as small as a few square meters or smaller (for example a garden bed, underneath a rock, or a cave) or as large as many square kilometers. Because climate is statistical, which implies spatial and temporal variation of the mean values of the describing parameters, microclimates are identified as statistically distinct conditions which occur and/or persist within a region. Microclimates can be found in most places but are most pronounced in topographically dynamic zones such as mountainous areas, islands, and coastal areas. Microclimates exist, for example, near bodies of water which may cool the local atmosphere, or in heavy urban areas where brick, concrete, and asphalt absorb the sun's energy, heat up, and re-radiate that heat to the ambient air: the resulting urban heat island (UHI) is a kind of microclimate that is additionally driven by relative paucity of vegetation. Background The terminology "micro-climate" first appeared in the 1950s in publications such as Climates in Miniature: A Study of Micro-Climate Environment (Thomas Bedford Franklin, 1955). Examples of microclimates The area in a developed industrial park may vary greatly from a wooded park nearby, as natural flora in parks absorb light and heat in leaves that a building roof or parking lot just radiates back into the air. Advocates of solar energy argue that widespread use of solar collection can mitigate overheating of urban environments by absorbing sunlight and putting it to work instead of heating the foreign surface objects. A microclimate can offer an opportunity as a small growing region for crops that cannot thrive in the broader area; this concept is often used in permaculture practiced in northern temperate climates. Microclimates can be used to the advantage of gardeners who carefully choose and position their plants. Cities often raise the average temperature by zoning, and a sheltered position can reduce the severity of winter. Roof gardening, however, exposes plants to more extreme temperatures in both summer and winter. In an urban area, tall buildings create their own microclimate, both by overshadowing large areas and by channeling strong winds to ground level. Wind effects around tall buildings are assessed as part of a microclimate study. Microclimates can also refer to purpose-made environments, such as those in a room or other enclosure. Microclimates are commonly created and carefully maintained in museum display and storage environments. This can be done using passive methods, such as silica gel, or with active microclimate control devices. Usually, if the inland areas have a humid continental climate, the coastal areas stay much milder during winter months, in contrast to the hotter summers. This is the case in places such as British Columbia, where Vancouver has an oceanic wet winter with rare frosts, but inland areas that average several degrees warmer in summer have cold and snowy winters. Sources and influences on microclimate Two main parameters to define a microclimate within a certain area are temperature and humidity. A source of a drop in temperature and/or humidity can be attributed to different sources or influences. Often a microclimate is shaped by a conglomerate of different influences and is a subject of microscale meteorology. 
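The statistical definition above can be made concrete with a small example. The C sketch below is a minimal, hypothetical illustration: the two temperature series and the use of Welch's t-statistic are assumptions chosen for demonstration, not data or methodology taken from any cited study.

```c
/* Minimal illustration of treating a microclimate as a statistically distinct
 * condition: compare daily mean temperatures from two hypothetical nearby sites.
 * The data and the choice of Welch's t-statistic are illustrative assumptions. */
#include <stdio.h>
#include <math.h>

static void mean_var(const double *x, int n, double *mean, double *var)
{
    double s = 0.0, ss = 0.0;
    for (int i = 0; i < n; i++) s += x[i];
    *mean = s / n;
    for (int i = 0; i < n; i++) ss += (x[i] - *mean) * (x[i] - *mean);
    *var = ss / (n - 1);                       /* sample variance */
}

int main(void)
{
    /* hypothetical July daily means, deg C: a shaded valley floor vs. an open slope */
    double valley[] = {14.1, 13.8, 15.0, 14.4, 13.9, 14.7, 14.2, 14.6};
    double slope[]  = {17.9, 18.4, 17.2, 18.0, 18.6, 17.7, 18.1, 17.8};
    int n = 8;

    double m1, v1, m2, v2;
    mean_var(valley, n, &m1, &v1);
    mean_var(slope,  n, &m2, &v2);

    /* Welch's t-statistic: a large |t| suggests the two sites genuinely differ */
    double t = (m1 - m2) / sqrt(v1 / n + v2 / n);
    printf("valley mean %.1f C, slope mean %.1f C, Welch t = %.1f\n", m1, m2, t);
    return 0;
}
```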
Cold air pool Examples of the cold air pool (CAP) effect are Gstettneralm Sinkhole in Austria (lowest recorded temperature ) and Peter Sinks in the US. The main criterion on the wind speed for warm air flow to penetrate into a CAP is Fr > Fr_c, where Fr = U/(N·H) is the Froude number (U being the wind speed above the valley), N the Brunt–Väisälä frequency, H the depth of the valley, and Fr_c the Froude number at the threshold wind speed. Craters The presence of permafrost close to the surface in a crater creates a unique microclimate environment. Caves Caves are important geologic formations that can house unique and delicate geologic/biological environments. The vast majority of caves found are made of calcium carbonates such as limestone. In these dissolution environments, many species of flora and fauna find a home. The mixture of water content within the cave atmosphere, air pressure, the geochemistry of the cave rock, as well as the waste products from these species can combine to make unique microclimates within cave systems. The speleogenetic effect is an observed and studied process of air circulation within cave environments brought on by convection. In vadose conditions the cave surfaces are exposed to the enclosed air (as opposed to being submerged and interacting with water from the water table in phreatic conditions). This air circulates water particles that condense on cave walls and formations such as speleothems. This condensing water has been found to contribute to cave wall erosion and the formation of morphological features. Some examples of this can be found in the limestone walls of Grotta Giusti, a thermal cave near Monsummano, Lucca, Italy. Any process that leads to an increase or decrease in chemical/physical processes will subsequently impact the environment within that system. Air density within caves, which directly relates to the convection processes, is determined by the air temperature, humidity, and pressure. In enclosed cave environments, the introduction of bacteria, algae, plants, or animals, or human interference, can change any one of these factors, thereby altering the microenvironment within the cave. There are over 750 caves worldwide that are available for people to visit. The constant human traffic through these cave environments can have a negative effect on the microclimates as well as on the geological and archeological findings. Factors that play into the deterioration of these environments include nearby deforestation, agriculture operations, water exploitation, mining, and tourist operations. The speleogenetic effect of normal caves tends to show a slow circulation of air. In unique conditions where acids are present, the effects of erosion and changes to the microenvironment can be drastically enhanced. One example is the effect of the presence of hydrosulfuric acid (H₂S). When the oxidized hydrosulfuric acid chemically alters to sulfuric acid (H₂SO₄), this acid starts to react with the calcium carbonate rock at much higher rates. The water involved in this reaction tends to have a low pH of about 3, which renders the water almost unlivable for many bacteria and algae. An example of this can be found in the Grotta Grande del Vento cave in Ancona, Italy. Plant microclimate As pointed out by meteorologist Rudolf Geiger, not only does climate influence the living plant, but the opposite effect of the interaction of plants on their environment can also take place, which is ultimately known as plant climate. 
This effect has important consequences for forests in the midst of a continent; indeed, if forests were not creating their own clouds and water cycle with their efficient evapotranspiration activity, there would be no forest far away from coasts, as statistically, without any other influence, rainfall occurrence would decrease from the coast towards inland. Planting trees to fight drought has also been proposed in the context of afforestation. Dams Artificial reservoirs as well as natural ones create microclimates and often influence the macroscopic climate as well. Slopes Another contributing factor of microclimate is the slope or aspect of an area. South-facing slopes in the Northern Hemisphere and north-facing slopes in the Southern Hemisphere are exposed to more direct sunlight than opposite slopes and are therefore warmer for longer periods of time, giving the slope a warmer microclimate than the areas around the slope. The lowest area of a glen may sometimes frost sooner or harder than a nearby spot uphill, because cold air sinks, a drying breeze may not reach the lowest bottom, and humidity lingers and precipitates, then freezes. Soil types The type of soil found in an area can also affect microclimates. For example, soils heavy in clay can act like pavement, moderating the near ground temperature. On the other hand, if soil has many air pockets, then the heat could be trapped underneath the topsoil, resulting in the increased possibility of frost at ground level. Cities and regions known for microclimates Americas Northern California above the Bay Area is also well known for microclimates with significant differences of temperatures. The coastline typically has daytime temperatures of during summer months along that coastline, but inland towns not far from the ocean such as Lakeport, can be as hot as in an average summer day, in spite of being just around inland. Even as far north as the Klamath River valley around the 41st parallel north between Willow Creek and Eureka averages such temperatures, which is extremely hot for such northerly areas. At this parallel, the temperature at the coast is so cool that Willow Creek beats Eureka's all-time record temperature on average 79 times per year. This is in spite of the areas being less than from each other. San Francisco is a city with various microclimates. Due to the city's varied topography and influence from the prevailing summer marine layer, weather conditions can vary by as much as 9 °F (5 °C) from block to block and a full 30 °F (17 °C) between the coastal fog belt and the heat island of downtown. The Noe Valley district for example, is typically warmer and sunnier than adjacent areas because the surrounding hills block some of the cool fog from the Pacific. The region as a whole, known as the San Francisco Bay Area can have a wide range of extremes in temperature. In the basins and valleys adjoining the coast, climate is subject to wide variations within short distances as a result of the influence of topography on the circulation of marine air. The San Francisco Bay Area offers many varieties of climate within a few miles. In the Bay Area, for example, the average maximum temperature in July is about at Half Moon Bay on the coast, at Walnut Creek only inland, and at Tracy, just inland. The Los Angeles and San Diego areas are also subject to phenomena typical of a microclimate. 
The temperatures can vary as much as ) between inland areas and the coast, with a temperature gradient of over one degree per mile (1.6 km) from the coast inland. Hills and mountains can also block coastal air masses. The San Fernando Valley is usually much warmer in summer than most of Los Angeles, because the Santa Monica Mountains usually block the cool ocean breezes and fog. Southern California has also a weather phenomenon called "June Gloom" or "May Grey", which sometimes gives overcast or foggy skies in the morning at the coast, but usually gives sunny skies by noon, during late spring and early summer. The Big Island of Hawaii is also an area known for microclimates, as Kailua-Kona and Hilo, Hawaii, experience rainfall of and per year, respectively, despite being just from each other. Calgary, Alberta, is also known for its microclimates. Especially notable are the differences between the downtown and river valley/flood plain regions and the areas to the west and north. This is largely due to an elevation difference within the city's boundaries of over , but can also be somewhat attributed to the effects of the seasonal Chinooks. Halifax, Nova Scotia, also has numerous microclimates. Coastal temperatures and weather conditions can differ considerably from areas located just inland. This is true in all seasons. Varying elevations are common throughout the city, and it is even possible to experience several microclimates while traveling on a single highway due to these changing elevations. Vancouver and its metro area also has many microclimates. North Vancouver and other regions situated on the mountain slopes get over of precipitation a year on average, while other regions to the south get around , although they are less than away. Temperatures in the Fraser Valley inland may be up to 10 °C (18 °F) warmer than the coast, while in winter they are several degrees colder. Chesapeake Bay is also known for its subtropical microclimate. It is most notable for its mild climatic effects on the area east and west of the lowlands of Maryland and Delmarva. Having over of water; (most of which is a mix of fresh and salt water) creates higher levels of humidity and heat in the spring and summer months. An example of this effect is the survival of subtropical palm trees and plants such as water hyacinths in the area. Chile Chico and Los Antiguos on the southern shores of General Carrera Lake have favourable conditions for agriculture despite being in inner Patagonia. New York City and its surrounding metro area feature an extensive urban heat island, and influence from the Atlantic Ocean. These factors cause it to be the northernmost major city in the US that Köppen describes as humid subtropical, with the city being in the 7a/7b/8a USDA zones, compared to nearby cities south of it, which feature lower zones. Europe The Ticino region in Switzerland has a microclimate in which palm trees and banana trees can grow. Gran Canaria is called "Miniature Continent" for its rich variety of microclimates. Tenerife is known for its wide variety of microclimates. Istanbul exhibits a multitude of distinct microclimates because of its hilly topography and maritime influences. Within the city, average summer mean temperatures range from depending on proximity to the Black Sea, with more significant differences on certain days. Rainfall also varies widely owing to the rain shadow of the hills in Istanbul, from around on the southern fringe at Florya to on the northern fringe at Bahçeköy. 
Furthermore, while the city itself lies in USDA hardiness zones 9a to 9b, its inland suburbs lie in zone 8b with isolated pockets of zone 8a, restricting the cultivation of cold-hardy subtropical plants to the coasts. Leeds, located in Yorkshire, England, is known to have a number of microclimates because of the number of valleys surrounding the city centre. The central west coast of Portugal, similarly to California, has huge differences in summer temperatures from the surrounding inland regions. In less than , average daily summer temperatures can vary through as much as 10 degrees Celsius/18 degrees Fahrenheit, from in Peniche or São Pedro de Moel to around in Santarém or Tomar. This phenomenon is caused by local upwelling created by the northern Nortada winds. The coastal areas in the Andalusia region of Spain has a microclimate. Further north along the coast, Cádiz has a summer average of with warm nights, whereas nearby Jerez de la Frontera has summer highs of with inland areas further north such as Seville being even hotter. Sorana, a commune in Italy's Pescia Valley with a microclimate considered ideal for growing the Sorana bean. The Nizza (Nice) district of Frankfurt-am-Main, Germany is a small area on the north bank of the River Main where wind shelter and sunlight reflected off the river produces a Mediterranean climate and supports one of the largest gardens of southern European plants north of the Alps. Asia and Oceania Amman, Jordan, has extreme examples of microclimate, and almost every neighbourhood exhibits its own weather. It is known among locals that some boroughs such as the northern and western suburbs are among the coldest in the city, and can be experiencing frost or snow whilst other warmer districts such as the city centre can be at much warmer temperatures at the same time. Sydney, Australia, has a microclimate occurring prominently in the warmer months. Inland, in Sydney's western suburbs, the climate is drier and significantly hotter with temperatures generally around above Sydney CBD and Eastern Suburbs (the coast), as sea breezes do not penetrate further inland. In summer, the coast averages at , while inland varies between , depending on the suburb. In extreme occasions, the Coast would have a temperature of , while a suburb ) inland bakes in heat. However, winter lows in the West are around cooler than the coastal suburbs, and may provide mild to moderate frost. Within the city and surrounds, rainfall varies, from around in the far-west to at Observatory Hill (the east or the coast).
Physical sciences
Climatology: General
Earth science
49803
https://en.wikipedia.org/wiki/IBM%20PC%20compatible
IBM PC compatible
An IBM PC compatible is any personal computer that is hardware- and software-compatible with the IBM Personal Computer (IBM PC) and its subsequent models. Like the original IBM PC, an IBM PC–compatible computer uses an x86-based central processing unit, sourced either from Intel or a second source like AMD, Cyrix or other vendors such as Texas Instruments, Fujitsu, OKI, Mitsubishi or NEC and is capable of using interchangeable commodity hardware such as expansion cards. Initially such computers were referred to as PC clones, IBM clones or IBM PC clones, but the term "IBM PC compatible" is now a historical description only, as the vast majority of microcomputers produced since the 1990s are IBM compatible. IBM itself no longer sells personal computers, having sold its division to Lenovo in 2005. "Wintel" is a similar description that is more commonly used for modern computers. The designation "PC", as used in much of personal computer history, has not meant "personal computer" generally, but rather an x86 computer capable of running the same software that a contemporary IBM or Lenovo PC could. The term was initially in contrast to the variety of home computer systems available in the early 1980s, such as the Apple II, TRS-80, and Commodore 64. Later, the term was primarily used in contrast to Commodore's Amiga and Apple's Macintosh computers. Overview These "clones" duplicated almost all the significant features of the original IBM PC architectures. This was facilitated by IBM's choice of commodity hardware components, which were cheap, and by various manufacturers' ability to reverse-engineer the BIOS firmware using a "clean room design" technique. Columbia Data Products built the first clone of the IBM personal computer, the MPC 1600 by a clean-room reverse-engineered implementation of its BIOS. Other rival companies, Corona Data Systems, Eagle Computer, and the Handwell Corporation were threatened with legal action by IBM, who settled with them. Soon after in 1982, Compaq released the very successful Compaq Portable, also with a clean-room reverse-engineered BIOS, and also not challenged legally by IBM. Early IBM PC compatibles used the same computer buses as their IBM counterparts, switching from the 8-bit IBM PC and XT bus to the 16-bit IBM AT bus with the release of the AT. IBM's introduction of the proprietary Micro Channel architecture (MCA) in its Personal System/2 (PS/2) series resulted in the establishment of the Extended Industry Standard Architecture bus open standard by a consortium of IBM PC compatible vendors, redefining the 16-bit IBM AT bus as the Industry Standard Architecture (ISA) bus. Additional bus standards were subsequently adopted to improve compatibility between IBM PC compatibles, including the VESA Local Bus (VLB), Peripheral Component Interconnect (PCI), and the Accelerated Graphics Port (AGP). Descendants of the x86 IBM PC compatibles, namely 64-bit computers based on "x86-64/AMD64" chips comprise the majority of desktop computers on the market as of 2021, with the dominant operating system being Microsoft Windows. Interoperability with the bus structure and peripherals of the original PC architecture may be limited or non-existent. Many modern computers are unable to use old software or hardware that depends on portions of the IBM PC compatible architecture which are missing or do not have equivalents in modern computers. 
For example, computers which boot using Unified Extensible Firmware Interface-based firmware that lack a Compatibility Support Module (CSM), required to emulate the old BIOS-based firmware interface, or that have their CSMs disabled, cannot natively run MS-DOS since MS-DOS depends on a BIOS interface to boot. Only the Macintosh had kept significant market share without having compatibility with the IBM PC, although that changed during the Intel Macs era running Mac OS X, which often dual-booted Windows with Boot Camp. Origins IBM decided in 1980 to market a low-cost single-user computer as quickly as possible. On August 12, 1981, the first IBM PC went on sale. There were three operating systems (OS) available for it. The least expensive and most popular was PC DOS, made by Microsoft. In a crucial concession, IBM's agreement allowed Microsoft to sell its own version, MS-DOS, for non-IBM computers. The only component of the original PC architecture exclusive to IBM was the BIOS (Basic Input/Output System). IBM at first asked developers to avoid writing software that addressed the computer's hardware directly and to instead make standard calls to BIOS functions that carried out hardware-dependent operations. This software would run on any machine using MS-DOS or PC DOS. Software that directly addressed the hardware instead of making standard calls was faster, however; this was particularly relevant to games. Software addressing IBM PC hardware in this way would not run on MS-DOS machines with different hardware (for example, the PC-98). The IBM PC was sold in high enough volumes to justify writing software specifically for it, and this encouraged other manufacturers to produce machines that could use the same programs, expansion cards, and peripherals as the PC. The x86 computer marketplace rapidly excluded all machines which were not hardware-compatible or software-compatible with the PC. The 640 KB barrier on "conventional" system memory available to MS-DOS is a legacy of that period; other non-clone machines, while subject to a limit, could exceed 640 KB. Rumors of "lookalike", compatible computers, created without IBM's approval, began almost immediately after the IBM PC's release. InfoWorld remarked on these rumors on the first anniversary of the IBM PC. By June 1983 PC Magazine defined a PC "clone" as "a computer [that can] accommodate the user who takes a disk home from an IBM PC, walks across the room, and plugs it into the 'foreign' machine". Demand for the PC by then was so strong that dealers received 60% or less of the inventory they wanted, and many customers purchased clones instead. Columbia Data Products produced the first computer more or less compatible with the IBM PC standard during June 1982, soon followed by Eagle Computer. Compaq announced its first product, an IBM PC compatible, in November 1982: the Compaq Portable. The Compaq was the first sewing machine-sized portable computer that was essentially 100% PC-compatible. The court decision in Apple v. Franklin was that BIOS code was protected by copyright law, but a competitor could still legally reverse-engineer the IBM BIOS and then write its own BIOS using clean room design. Note that this was over a year after Compaq released the Portable. The money and research put into reverse-engineering the BIOS was a calculated risk. 
Compatibility issues Non-compatible MS-DOS computers: Workalikes At the same time, many manufacturers such as Tandy/RadioShack, Xerox, Hewlett-Packard, Digital Equipment Corporation, Sanyo, Texas Instruments, Tulip, Wang and Olivetti introduced personal computers that supported MS-DOS, but were not completely software- or hardware-compatible with the IBM PC. Tandy described the Tandy 2000, for example, as having a "'next generation' true 16-bit CPU", and with "More speed. More disk storage. More expansion" than the IBM PC or "other MS-DOS computers". While admitting in 1984 that many PC DOS programs did not work on the computer, the company stated that "the most popular, sophisticated software on the market" was available, either immediately or "over the next six months". Like IBM, Microsoft's apparent intention was that application writers would write to the application programming interfaces in MS-DOS or the firmware BIOS, and that this would form what would now be termed a hardware abstraction layer. Each computer would have its own Original Equipment Manufacturer (OEM) version of MS-DOS, customized to its hardware. Any software written for MS-DOS would operate on any MS-DOS computer, despite variations in hardware design. This expectation seemed reasonable in the computer marketplace of the time. Until then Microsoft's business was based primarily on computer languages such as BASIC. The established small system operating software was CP/M from Digital Research which was in use both at the hobbyist level and by the more professional of those using microcomputers. To achieve such widespread use, and thus make the product viable economically, the OS had to operate across a range of machines from different vendors that had widely varying hardware. Those customers who needed other applications than the starter programs could reasonably expect publishers to offer their products for a variety of computers, on suitable media for each. Microsoft's competing OS was intended initially to operate on a similar varied spectrum of hardware, although all based on the 8086 processor. Thus, MS-DOS was for several years sold only as an OEM product. There was no Microsoft-branded MS-DOS: MS-DOS could not be purchased directly from Microsoft, and each OEM release was packaged with the trade dress of the given PC vendor. Malfunctions were to be reported to the OEM, not to Microsoft. However, as machines that were compatible with IBM hardware—thus supporting direct calls to the hardware—became widespread, it soon became clear that the OEM versions of MS-DOS were virtually identical, except perhaps for the provision of a few utility programs. MS-DOS provided adequate functionality for character-oriented applications such as those that could have been implemented on a text-only terminal. Had the bulk of commercially important software been of this nature, low-level hardware compatibility might not have mattered. However, in order to provide maximum performance and leverage hardware features (or work around hardware bugs), PC applications quickly developed beyond the simple terminal applications that MS-DOS supported directly. Spreadsheets, WYSIWYG word processors, presentation software and remote communication software established new markets that exploited the PC's strengths, but required capabilities beyond what MS-DOS provided. 
Thus, from very early in the development of the MS-DOS software environment, many significant commercial software products were written directly to the hardware, for a variety of reasons: MS-DOS itself did not provide any way to position the text cursor other than to advance it after displaying each letter (teletype mode). While the BIOS video interface routines were adequate for rudimentary output, they were necessarily less efficient than direct hardware addressing, as they added extra processing; they did not have "string" output, but only character-by-character teletype output, and they inserted delays to prevent CGA hardware "snow" (a display artifact of CGA cards produced when writing directly to screen memory), an especially bad limitation since they were called via IRQs, which made multitasking very difficult. A program that wrote directly to video memory could achieve output rates 5 to 20 times faster than making system calls, as illustrated in the sketch below. Turbo Pascal used this technique from its earliest versions. Graphics capability was not taken seriously in the original IBM design brief; graphics were considered only from the perspective of generating static business graphics such as charts and graphs. MS-DOS did not have an API for graphics, and the BIOS only included rudimentary graphics functions such as changing screen modes and plotting single points. Making a BIOS call for every point drawn or modified increased overhead considerably, making the BIOS interface notoriously slow. Because of this, line-drawing, arc-drawing, and blitting had to be performed by the application to achieve acceptable speed, which was usually done by bypassing the BIOS and accessing video memory directly. Software written to address IBM PC hardware directly would run on any IBM clone, but would have to be rewritten especially for each non-PC-compatible MS-DOS machine. Video games, even early ones, mostly required a true graphics mode. They also performed any machine-dependent trick the programmers could think of in order to gain speed. Though initially the major market for the PC was for business applications, games capability became an important factor motivating PC purchases as prices decreased. The availability and quality of games could mean the difference between the purchase of a PC compatible or a different platform with the ability to exchange data, like the Amiga. Communications software directly accessed the UART serial port chip, because the MS-DOS API and the BIOS did not provide full support and were too slow to keep up with hardware which could transfer data at 19,200 bit/s. Even for standard business applications, speed of execution was a significant competitive advantage. Integrated software Context MBA preceded Lotus 1-2-3 to market and included more functions. Context MBA was written in UCSD p-System, making it very portable but too slow to be truly usable on a PC. 1-2-3 was written in x86 assembly language and performed some machine-dependent tricks. It was so much faster that it quickly surpassed Context MBA's sales. Disk copy-protection schemes, in common use at the time, worked by reading nonstandard data patterns on the diskette to verify originality. These patterns were impossible to detect using standard DOS or BIOS calls, so direct access to the disk controller hardware was necessary for the protection to work. Some software was designed to run only on a true IBM PC, and checked for an actual IBM BIOS. 
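The gap between BIOS output and direct video-memory writes described above can be made concrete with a short DOS-era example. The sketch below is an illustration only: it assumes a 16-bit real-mode DOS compiler (Turbo C or Open Watcom style dos.h, far pointers, MK_FP and int86), so it will not build with a modern 32/64-bit toolchain, and the 0xB800 segment applies to the colour text modes of CGA-compatible adapters.

```c
/* DOS-era text output two ways: via the BIOS teletype service (INT 10h, AH=0Eh)
 * and by writing character/attribute pairs directly into CGA-compatible text memory.
 * Requires a 16-bit real-mode compiler (Turbo C / Open Watcom); dos.h is not portable. */
#include <dos.h>

#define VIDEO_SEG 0xB800u          /* CGA/EGA/VGA colour text buffer segment */

static void bios_putc(char c)
{
    union REGS r;
    r.h.ah = 0x0E;                 /* BIOS teletype output */
    r.h.al = c;
    r.h.bh = 0;                    /* display page 0 */
    int86(0x10, &r, &r);           /* one software interrupt per character */
}

static void direct_puts(const char *s, int row, int col, unsigned char attr)
{
    /* each screen cell is two bytes: the character, then its colour attribute */
    unsigned char far *cell =
        (unsigned char far *)MK_FP(VIDEO_SEG, (row * 80 + col) * 2);
    while (*s) {
        *cell++ = (unsigned char)*s++;
        *cell++ = attr;
    }
}

int main(void)
{
    const char *msg = "Hello from the video buffer";
    int i;
    for (i = 0; msg[i]; i++)
        bios_putc(msg[i]);         /* slow, but works through any PC-compatible BIOS */
    direct_puts(msg, 5, 0, 0x1F);  /* fast, but assumes IBM-compatible video hardware */
    return 0;
}
```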
First-generation PC workalikes by IBM competitors "Operationally Compatible" In May 1983, Future Computing defined four levels of compatibility: Operationally Compatible. Can run "the top selling" IBM PC software, use PC expansion boards, and read and write PC disks. Has "complementary features" like portability or lower price that distinguish computer from the PC, which is sold in the same store. Examples: (Best) Columbia Data Products, Compaq; (Better) Corona; (Good) Eagle. Functionally Compatible. Runs own version of popular PC software. Cannot use PC expansion boards but can read and write PC disks. Cannot become Operationally Compatible. Example: TI Professional. Data Compatible. May not run top PC software. Can read and/or write PC disks. Can become Functionally Compatible. Examples: NCR Decision Mate, Olivetti M20, Wang PC, Zenith Z-100. Incompatible. Cannot read PC disks. Can become Data Compatible. Examples: Altos 586, DEC Rainbow 100, Grid Compass, Victor 9000. During development, Compaq engineers found that Microsoft Flight Simulator would not run because of what subLOGIC's Bruce Artwick described as "a bug in one of Intel's chips", forcing them to make their new computer bug compatible with the IBM PC. At first, few clones other than Compaq's offered truly full compatibility. Jerry Pournelle purchased an IBM PC in mid-1983, "rotten keyboard and all", because he had "four cubic feet of unevaluated software, much of which won't run on anything but an IBM PC. Although a lot of machines claim to be 100 percent IBM PC compatible, I've yet to have one arrive ... Alas, a lot of stuff doesn't run with Eagle, Z-100, Compupro, or anything else we have around here". Columbia Data Products's November 1983 sales brochure stated that during tests with retail-purchased computers in October 1983, its own and Compaq's products were compatible with all tested PC software, while Corona and Eagle's were less compatible. Columbia University reported in January 1984 that Kermit ran without modification on Compaq and Columbia Data Products clones, but not on those from Eagle or Seequa. Other MS-DOS computers also required custom code. By December 1983 Future Computing stated that companies like Compaq, Columbia Data Products, and Corona that emphasized IBM PC compatibility had been successful, while non-compatible computers had hurt the reputations of others like TI and DEC despite superior technology. At a San Francisco meeting it warned 200 attendees, from many American and foreign computer companies as well as IBM itself, to "Jump on the IBM PC-compatible bandwagon—quickly, and as compatibly as possible". Future Computing said in February 1984 that some computers were "press-release compatible", exaggerating their actual compatibility with the IBM PC. Many companies were reluctant to have their products' PC compatibility tested. When PC Magazine requested samples from computer manufacturers that claimed to produce compatibles for an April 1984 review, 14 of 31 declined. Corona specified that "Our systems run all software that conforms to IBM PC programming standards. And the most popular software does." When a BYTE journalist asked to test Peachtext at the Spring 1983 COMDEX, Corona representatives "hemmed and hawed a bit, but they finally led me ... off in the corner where no one would see it should it fail". The magazine reported that "Their hesitancy was unnecessary. The disk booted up without a problem". 
Zenith Data Systems was bolder, bragging that its Z-150 ran all applications people brought to test with at the 1984 West Coast Computer Faire. Creative Computing in 1985 stated, "we reiterate our standard line regarding the IBM PC compatibles: try the package you want to use before you buy the computer." Companies modified their computers' BIOS to work with newly discovered incompatible applications, and reviewers and users developed stress tests to measure compatibility; by 1984 the ability to operate Lotus 1-2-3 and Flight Simulator became the standard, with compatibles specifically designed to run them. IBM believed that some companies such as Eagle, Corona, and Handwell infringed on its copyright, and after Apple Computer, Inc. v. Franklin Computer Corp. successfully forced the clone makers to stop using the BIOS. The Phoenix BIOS in 1984, however, and similar products such as AMI BIOS, permitted computer makers to legally build essentially 100%-compatible clones without having to reverse-engineer the PC BIOS themselves. A September 1985 InfoWorld chart listed seven compatibles with RAM, two disk drives, and monochrome monitors for to , while the equivalent IBM PC cost . The Zenith Z-150 and inexpensive Leading Edge Model D are even compatible with IBM proprietary diagnostic software, unlike the Compaq Portable. By 1986 Compute! stated that "clones are generally reliable and about 99 percent compatible", and a 1987 survey in the magazine of the clone industry did not mention software compatibility, stating that "PC by now has come to stand for a computer capable of running programs that are managed by MS-DOS". The decreasing influence of IBM In February 1984 Byte wrote that "IBM's burgeoning influence in the PC community is stifling innovation because so many other companies are mimicking Big Blue", but The Economist stated in November 1983, "The main reason why an IBM standard is not worrying is that it can help competition to flourish". By 1983, IBM had about 25% of sales of personal computers between and , and computers with some PC compatibility were another 25%. As the market and competition grew IBM's influence diminished. In November 1985 PC Magazine stated "Now that it has created the [PC] market, the market doesn't necessarily need IBM for the machines. It may depend on IBM to set standards and to develop higher-performance machines, but IBM had better conform to existing standards so as to not hurt users". In January 1987, Bruce Webster wrote in Byte of rumors that IBM would introduce proprietary personal computers with a proprietary operating system: "Who cares? If IBM does it, they will most likely just isolate themselves from the largest marketplace, in which they really can't compete anymore anyway". He predicted that in 1987 the market "will complete its transition from an IBM standard to an Intel/MS-DOS/expansion bus standard ... Folks aren't so much concerned about IBM compatibility as they are about Lotus 1-2-3 compatibility". By 1992, Macworld stated that because of clones, "IBM lost control of its own market and became a minor player with its own technology". The Economist predicted in 1983 that "IBM will soon be as much a prisoner of its standards as its competitors are", because "Once enough IBM machines have been bought, IBM cannot make sudden changes in their basic design; what might be useful for shedding competitors would shake off even more customers". 
After the Compaq Deskpro 386 became the first 80386-based PC, PC wrote that owners of the new computer did not need to fear that future IBM products would be incompatible with the Compaq, because such changes would also affect millions of real IBM PCs: "In sticking it to the competition, IBM would be doing the same to its own people". After IBM announced the OS/2-oriented PS/2 line in early 1987, sales of existing DOS-compatible PC compatibles rose, in part because the proprietary operating system was not available. In 1988, Gartner Group estimated that the public purchased 1.5 clones for every IBM PC. By 1989 Compaq was so influential that industry executives spoke of "Compaq compatible", with observers stating that customers saw the company as IBM's equal or superior. After 1987, IBM PC compatibles dominated both the home and business markets of commodity computers, with other notable alternative architectures being used in niche markets, like the Macintosh computers offered by Apple Inc. and used mainly for desktop publishing at the time, the aging 8-bit Commodore 64 which was selling for $150 by this time and became the world's bestselling computer, the 32-bit Commodore Amiga line used for television and video production and the 32-bit Atari ST used by the music industry. However, IBM itself lost the main role in the market for IBM PC compatibles by 1990. A few events in retrospect are important: IBM designed the PC with an open architecture which permitted clone makers to use freely available non-proprietary components. Microsoft included a clause in its contract with IBM which permitted the sale of the finished PC operating system (PC DOS) to other computer manufacturers. These IBM competitors licensed it, as MS-DOS, in order to offer PC compatibility for less cost. The 1982 introduction of the Columbia Data Products MPC 1600, the first 100% IBM PC compatible computer. The 1983 introduction of the Compaq Portable, providing portability unavailable from IBM at the time. An Independent Business Unit (IBU) within IBM developed the IBM PC and XT. IBUs did not share in corporate R&D expense. After the IBU became the Entry Systems Division it lost this benefit, greatly decreasing margins. The availability by 1986 of sub- "Turbo XT" PC XT compatibles, including early offerings from Dell Computer, reducing demand for IBM's models. It was possible to buy two of these "generic" systems for less than the cost of one IBM-branded PC AT, and many companies did just that. By integrating more peripherals into the computer itself, compatibles like the Model D have more free ISA slots than the PC. Compaq was the first to release an Intel 80386-based computer, almost a year before IBM, with the Compaq Deskpro 386. Bill Gates later said that it was "the first time people started to get a sense that it wasn't just IBM setting the standards". IBM's 1987 introduction of the incompatible and proprietary MicroChannel Architecture (MCA) computer bus, for its Personal System/2 (PS/2) line. The split of the IBM-Microsoft partnership in development of OS/2. Tensions caused by the market success of Windows 3.0 ruptured the joint effort because IBM was committed to the 286's protected mode, which stunted OS/2's technical potential. Windows could take full advantage of the modern and increasingly affordable 386 / 386SX architecture. As well, there were cultural differences between the partners, and Windows was often bundled with new computers while OS/2 was only available for extra cost. 
The split left IBM the sole steward of OS/2 and it failed to keep pace with Windows. The 1988 introduction by the "Gang of Nine" companies of a rival bus, Extended Industry Standard Architecture, intended to compete with, rather than copy, MCA. The duelling expanded memory (EMS) and extended memory (XMS) standards of the late 1980s, both developed without input from IBM. Despite popularity of its ThinkPad set of laptop PC's, IBM finally relinquished its role as a consumer PC manufacturer during April 2005, when it sold its laptop and desktop PC divisions (ThinkPad/ThinkCentre) to Lenovo for . As of October 2007, Hewlett-Packard and Dell had the largest shares of the PC market in North America. They were also successful overseas, with Acer, Lenovo, and Toshiba also notable. Worldwide, a huge number of PCs are "white box" systems assembled by myriad local systems builders. Despite advances of computer technology, the IBM PC compatibles remained very much compatible with the original IBM PC computers, although most of the components implement the compatibility in special backward compatibility modes used only during a system boot. It was often more practical to run old software on a modern system using an emulator rather than relying on these features. In 2014 Lenovo acquired IBM's x86-based server (System x) business for . Expandability One of the strengths of the PC-compatible design is its modular hardware design. End-users could readily upgrade peripherals and, to some degree, processor and memory without modifying the computer's motherboard or replacing the whole computer, as was the case with many of the microcomputers of the time. However, as processor speed and memory width increased, the limits of the original XT/AT bus design were soon reached, particularly when driving graphics video cards. IBM did introduce an upgraded bus in the IBM PS/2 computer that overcame many of the technical limits of the XT/AT bus, but this was rarely used as the basis for IBM-compatible computers since it required license payments to IBM both for the PS/2 bus and any prior AT-bus designs produced by the company seeking a license. This was unpopular with hardware manufacturers and several competing bus standards were developed by consortiums, with more agreeable license terms. Various attempts to standardize the interfaces were made, but in practice, many of these attempts were either flawed or ignored. Even so, there were many expansion options, and despite the confusion of its users, the PC compatible design advanced much faster than other competing designs of the time, even if only because of its market dominance. "IBM PC compatible" becomes "Wintel" During the 1990s, IBM's influence on PC architecture started to decline. "IBM PC compatible" becomes "Standard PC" in 1990s, and later "ACPI PC" in 2000s. An IBM-brand PC became the exception rather than the rule. Instead of placing importance on compatibility with the IBM PC, vendors began to emphasize compatibility with Windows. In 1993, a version of Windows NT was released that could operate on processors other than the x86 set. While it required that applications be recompiled, which most developers did not do, its hardware independence was used for Silicon Graphics (SGI) x86 workstations–thanks to NT's Hardware abstraction layer (HAL), they could operate NT (and its vast application library). 
No mass-market personal computer hardware vendor dared to be incompatible with the latest version of Windows, and Microsoft's annual WinHEC conferences provided a setting in which Microsoft could lobby for—and in some cases dictate—the pace and direction of the hardware of the PC industry. Microsoft and Intel had become so important to the ongoing development of PC hardware that industry writers began using the word Wintel to refer to the combined hardware-software system. This terminology itself is becoming a misnomer, as Intel has lost absolute control over the direction of x86 hardware development with AMD's AMD64. Additionally, non-Windows operating systems like macOS and Linux have established a presence on the x86 architecture. Design limitations and more compatibility issues Although the IBM PC was designed for expandability, the designers could not anticipate the hardware developments of the 1980s, nor the size of the industry they would engender. To make things worse, IBM's choice of the Intel 8088 for the CPU introduced several limitations for developing software for the PC compatible platform. For example, the 8088 processor only had a 20-bit memory addressing space. To expand PCs beyond one megabyte, Lotus, Intel, and Microsoft jointly created expanded memory (EMS), a bank-switching scheme to allow more memory provided by add-in hardware, and accessed by a set of four 16-kilobyte "windows" inside the 20-bit addressing. Later, Intel CPUs had larger address spaces and could directly address 16 MB (80286) or more, causing Microsoft to develop extended memory (XMS) which did not require additional hardware. "Expanded" and "extended" memory have incompatible interfaces, so anyone writing software that used more than one megabyte had to provide for both systems for the greatest compatibility until MS-DOS began including EMM386, which simulated EMS memory using XMS memory. A protected mode OS can also be written for the 80286, but DOS application compatibility was more difficult than expected, not only because most DOS applications accessed the hardware directly, bypassing BIOS routines intended to ensure compatibility, but also that most BIOS requests were made by the first 32 interrupt vectors, which were marked as "reserved" for protected mode processor exceptions by Intel. Video cards suffered from their own incompatibilities. There was no standard interface for using higher-resolution SVGA graphics modes supported by later video cards. Each manufacturer developed their own methods of accessing the screen memory, including different mode numberings and different bank switching arrangements. The latter were used to address large images within a single 64 KB segment of memory. Previously, the VGA standard had used planar video memory arrangements to the same effect, but this did not easily extend to the greater color depths and higher resolutions offered by SVGA adapters. An attempt at creating a standard named VESA BIOS Extensions (VBE) was made, but not all manufacturers used it. When the 386 was introduced, again a protected mode OS could be written for it. This time, DOS compatibility was much easier because of virtual 8086 mode. Unfortunately programs could not switch directly between them, so eventually, some new memory-model APIs were developed, VCPI and DPMI, the latter becoming the most popular. Because of the great number of third-party adapters and no standard for them, programming the PC could be difficult. 
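The 640 KB barrier and the 20-bit address space mentioned above follow directly from real-mode segment:offset arithmetic. The portable C sketch below only works through that arithmetic; it is an illustration of the addressing scheme, not code that touches real hardware, and the example addresses are the conventional ones from the PC memory map.

```c
/* Real-mode 8088 addressing: physical = segment * 16 + offset, a 20-bit result.
 * Portable C, illustrating the arithmetic behind the 640 KB / 1 MB limits. */
#include <stdio.h>

static unsigned long physical_address(unsigned int segment, unsigned int offset)
{
    return ((unsigned long)segment << 4) + offset;   /* 20 significant bits */
}

int main(void)
{
    /* top of conventional memory: 0x9FFF:0x000F -> 0x9FFFF, just under 640 KB */
    printf("0x9FFF:0x000F -> 0x%05lX\n", physical_address(0x9FFF, 0x000F));
    /* the colour text buffer sits above conventional RAM, in the reserved 384 KB */
    printf("0xB800:0x0000 -> 0x%05lX\n", physical_address(0xB800, 0x0000));
    /* absolute ceiling of a 20-bit address space: 2^20 = 1,048,576 bytes */
    printf("address space  = %lu bytes\n", 1UL << 20);
    return 0;
}
```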
Professional developers would operate a large test-suite of various known-to-be-popular hardware combinations. Meanwhile, consumers were overwhelmed by the competing, incompatible standards and many different combinations of hardware on offer. To give them some idea of what sort of PC they would need to operate their software, the Multimedia PC (MPC) standard was set during 1990. A PC that met the minimum MPC standard could be marketed with the MPC logo, giving consumers an easy-to-understand specification to look for. Software that could operate on the most minimally MPC-compliant PC would be guaranteed to operate on any MPC. The MPC level 2 and MPC level 3 standards were set later, but the term "MPC compliant" never became popular. After MPC level 3 during 1996, no further MPC standards were established. Challenges to Wintel domination By the late 1990s, the success of Microsoft Windows had driven rival commercial operating systems into near-extinction, and had ensured that the "IBM PC compatible" computer was the dominant computing platform. This meant that if a developer made their software only for the Wintel platform, they would still be able to reach the vast majority of computer users. The only major competitor to Windows with more than a few percentage points of market share was Apple Inc.'s Macintosh. The Mac started out billed as "the computer for the rest of us", but high prices and closed architecture drove the Macintosh into an education and desktop publishing niche, from which it only emerged in the mid-2000s. By the mid-1990s the Mac's market share had dwindled to around 5% and introducing a new rival operating system had become too risky a commercial venture. Experience had shown that even if an operating system was technically superior to Windows, it would be a failure in the market (BeOS and OS/2 for example). In 1989, Steve Jobs said of his new NeXT system, "It will either be the last new hardware platform to succeed, or the first to fail." Four years later in 1993, NeXT announced it was ending production of the NeXTcube and porting NeXTSTEP to Intel processors. Very early on in PC history, some companies introduced their own XT-compatible chipsets. For example, Chips and Technologies introduced their 82C100 XT Controller which integrated and replaced six of the original XT circuits: one 8237 DMA controller, one 8253 interrupt timer, one 8255 parallel interface controller, one 8259 interrupt controller, one 8284 clock generator, and one 8288 bus controller. Similar non-Intel chipsets appeared for the AT-compatibles, for example OPTi's 82C206 or 82C495XLC which were found in many 486 and early Pentium systems. The x86 chipset market was very volatile though. In 1993, VLSI Technology had become the dominant market player only to be virtually wiped out by Intel a year later. Intel has been the uncontested leader ever since. As the "Wintel" platform gained dominance Intel gradually abandoned the practice of licensing its technologies to other chipset makers; in 2010 Intel was involved in litigation related to their refusal to license their processor bus and related technologies to other companies like Nvidia. Companies such as AMD and Cyrix developed alternative x86 CPUs that were functionally compatible with Intel's. Towards the end of the 1990s, AMD was taking an increasing share of the CPU market for PCs. 
AMD even ended up playing a significant role in directing the development of the x86 platform when its Athlon line of processors continued to develop the classic x86 architecture as Intel deviated with its NetBurst architecture for the Pentium 4 CPUs and the IA-64 architecture for the Itanium line of server CPUs. AMD developed AMD64, the first major extension of the architecture not created by Intel, which Intel later adopted under the names EM64T and, eventually, Intel 64. In 2006 Intel began abandoning NetBurst with the release of its "Core" processors, which represented a development of the earlier Pentium III. A major challenge to Wintel domination has been the rise of alternative operating systems since the early 2000s, which marked the start of the post-PC era. This includes both the rapid growth of smartphones (running Android or iOS) as an alternative to the personal computer, and the increasing prevalence of Linux and other Unix-like operating systems in the server farms of large corporations such as Google and Amazon. The IBM PC compatible today The term "IBM PC compatible" is no longer in common use, because nearly all current mainstream desktop and laptop computers are based on the PC architecture and IBM no longer makes PCs. The competing hardware architectures have either been discontinued or, like the Amiga, have been relegated to niche, enthusiast markets. In the past, the most successful exception was Apple's Macintosh platform, which used non-Intel processors from its inception. Although the Macintosh was initially based on the Motorola 68000 series and then transitioned to the PowerPC architecture, Macintosh computers moved to Intel processors beginning in 2006. From then until 2020, Macintosh computers shared the same system architecture as their Wintel counterparts and could boot Microsoft Windows without a DOS Compatibility Card. However, with the transition to the internally developed ARM-based Apple silicon, they are again an exception to PC compatibility. The processor speed and memory capacity of modern PCs are many orders of magnitude greater than they were for the original IBM PC, and yet backwards compatibility has been largely maintained: a 32-bit operating system can still run many of the simpler programs written for the operating systems of the early 1980s without needing an emulator, though an emulator such as DOSBox now offers near-native performance at full speed (and is necessary for certain games which may run too fast on modern processors). Additionally, many modern PCs can still run DOS directly, although special options such as USB legacy mode and SATA-to-PATA emulation may need to be set in the BIOS setup utility. Computers using UEFI firmware may need to be set to a legacy BIOS mode to be able to boot DOS. However, the BIOS/UEFI options in most mass-produced consumer-grade computers are very limited and cannot be configured to truly handle operating systems such as the original variants of DOS. The spread of the x86-64 architecture has further distanced the internals of current computers and operating systems from those of the original IBM PC by introducing yet another processor mode with an instruction set modified for 64-bit addressing, but x86-64-capable processors also retain standard x86 compatibility.
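As a small illustration of the backward compatibility just described, a program can ask the processor at run time whether it implements the 64-bit long-mode extension (AMD64/x86-64) even while running 32-bit code. The sketch below is only an example under stated assumptions: it assumes GCC or Clang on an x86 target, which supply the <cpuid.h> helper, and it tests the documented "LM" flag (bit 29 of EDX) returned by CPUID leaf 0x80000001.

```c
/* Sketch: detect the x86-64 (AMD64) long-mode extension via CPUID.
 * Assumes GCC or Clang on an x86 processor, which provide <cpuid.h>. */
#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* Extended leaf 0x80000001: EDX bit 29 is the "LM" (long mode) flag. */
    if (!__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx)) {
        puts("extended CPUID leaf not available");
        return 1;
    }
    puts(edx & (1u << 29) ? "CPU implements 64-bit long mode (x86-64)"
                          : "CPU is 32-bit only");
    return 0;
}
```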
Technology
Computer hardware
null
49871
https://en.wikipedia.org/wiki/Eucalyptus
Eucalyptus
Eucalyptus () is a genus of more than 700 species of flowering plants in the family Myrtaceae. Most species of Eucalyptus are trees, often mallees, and a few are shrubs. Along with several other genera in the tribe Eucalypteae, including Corymbia and Angophora, they are commonly known as eucalypts or "gum trees". Plants in the genus Eucalyptus have bark that is either smooth, fibrous, hard, or stringy and leaves that have oil glands. The sepals and petals are fused to form a "cap" or operculum over the stamens, hence the name from Greek eû ("well") and kaluptós ("covered"). The fruit is a woody capsule commonly referred to as a "gumnut". Most species of Eucalyptus are native to Australia, and every state and territory has representative species. About three-quarters of Australian forests are eucalypt forests. Many eucalypt species have adapted to wildfire, are able to resprout after fire, or have seeds that survive fire. A few species are native to islands north of Australia, and a smaller number are only found outside the continent. Eucalypts have been grown in plantations in many other countries because they are fast-growing, have valuable timber, or can be used for pulpwood, honey production, or essential oils. In some countries, however, they have been removed because of the danger of forest fires due to their high flammability. Description Size and habit Eucalypts vary in size and habit from shrubs to tall trees. Trees usually have a single main stem or trunk but many eucalypts are mallees that are multistemmed from ground level and rarely taller than . There is no clear distinction between a mallee and a shrub but in eucalypts, a shrub is a mature plant less than tall and growing in an extreme environment. Eucalyptus vernicosa in the Tasmanian highlands, E. yalatensis on the Nullarbor and E. surgens growing on coastal cliffs in Western Australia are examples of eucalypt shrubs. The terms "mallet" and "marlock" are only applied to Western Australian eucalypts. A mallet is a tree with a single thin trunk with a steeply branching habit but lacks both a lignotuber and epicormic buds. Eucalyptus astringens is an example of a mallet. A marlock is a shrub or small tree with a single, short trunk, that lacks a lignotuber and has spreading, densely leafy branches that often reach almost to the ground. Eucalyptus platypus is an example of a marlock. Eucalyptus trees, including mallets and marlocks, are single-stemmed and include Eucalyptus regnans, the tallest known flowering plant on Earth. The tallest reliably measured tree in Europe, Karri Knight, can be found in Coimbra, Portugal in Vale de Canas. It is a Eucalyptus diversicolor of 72.9 meters height and of 5.71 meters girth. Tree sizes follow the convention of: Small: to in height Medium-sized: Tall: Very tall: over Bark All eucalypts add a layer of bark every year and the outermost layer dies. In about half of the species, the dead bark is shed exposing a new layer of fresh, living bark. The dead bark may be shed in large slabs, in ribbons or in small flakes. These species are known as "smooth barks" and include E. sheathiana, E. diversicolor, E. cosmophylla and E. cladocalyx. The remaining species retain the dead bark which dries out and accumulates. In some of these species, the fibres in the bark are loosely intertwined (in stringybarks such as E. macrorhyncha or peppermints such as E. radiata) or more tightly adherent (as in the "boxes" such as E. leptophleba). In some species (the "ironbarks" such as E. crebra and E. 
jensenii) the rough bark is infused with gum resin. Many species are 'half-barks' or 'blackbutts' in which the dead bark is retained in the lower half of the trunks or stems—for example, E. brachycalyx, E. ochrophloia, and E. occidentalis—or only in a thick, black accumulation at the base, as in E. clelandii. In some species in this category, for example E. youngiana and E. viminalis, the rough basal bark is very ribbony at the top, where it gives way to the smooth upper stems. The smooth upper bark of the half-barks and that of the completely smooth-barked trees and mallees can produce remarkable colour and interest, for example E. deglupta. E. globulus bark cells are able to photosynthesize in the absence of foliage, conferring an "increased capacity to re-fix internal CO2 following partial defoliation". This allows the tree to grow in less-than-ideal climates, in addition to providing a better chance of recovery from damage sustained to its leaves in an event such as a fire. Different commonly recognised types of bark include: Stringybark—consists of long fibres and can be pulled off in long pieces. It is usually thick with a spongy texture. Ironbark—is hard, rough, and deeply furrowed. It is impregnated with dried kino (a sap exuded by the tree) which gives a dark red or even black colour. Tessellated—bark is broken up into many distinct flakes. They are corkish and can flake off. Box—has short fibres. Some also show tessellation. Ribbon—has the bark coming off in long, thin pieces, but is still loosely attached in some places. They can be long ribbons, firmer strips, or twisted curls. Leaves Nearly all Eucalyptus are evergreen, but some tropical species lose their leaves at the end of the dry season. As in other members of the myrtle family, Eucalyptus leaves are covered with oil glands. The copious oils produced are an important feature of the genus. Although mature Eucalyptus trees may be towering and fully leafed, their shade is characteristically patchy because the leaves usually hang downwards. The leaves on a mature Eucalyptus plant are commonly lanceolate, petiolate, apparently alternate and waxy or glossy green. In contrast, the leaves of seedlings are often opposite, sessile and glaucous. But many exceptions to this pattern exist. Many species such as E. melanophloia and E. setosa retain the juvenile leaf form even when the plant is reproductively mature. Some species, such as E. macrocarpa, E. rhodantha, and E. crucis, are sought-after ornamentals due to this lifelong juvenile leaf form. A few species, such as E. petraea, E. dundasii, and E. lansdowneana, have shiny green leaves throughout their life cycle. Eucalyptus caesia exhibits the opposite pattern of leaf development to most Eucalyptus, with shiny green leaves in the seedling stage and dull, glaucous leaves in mature crowns. The contrast between juvenile and adult leaf phases is valuable in field identification. Four leaf phases are recognised in the development of a Eucalyptus plant: the 'seedling', 'juvenile', 'intermediate', and 'adult' phases. However, no definite transitional point occurs between the phases. The intermediate phase, when the largest leaves are often formed, links the juvenile and adult phases. In all except a few species, the leaves form in pairs on opposite sides of a square stem, consecutive pairs being at right angles to each other (decussate). In some narrow-leaved species, for example E. 
oleosa, the seedling leaves after the second leaf pair are often clustered in a detectable spiral arrangement about a five-sided stem. After the spiral phase, which may last from several to many nodes, the arrangement reverts to decussate by the absorption of some of the leaf-bearing faces of the stem. In those species with opposite adult foliage the leaf pairs, which have been formed opposite at the stem apex, become separated at their bases by unequal elongation of the stem to produce the apparently alternate adult leaves. Flowers and fruits The most readily recognisable characteristics of Eucalyptus species are the distinctive flowers and fruit (capsules or "gumnuts"). Flowers have numerous fluffy stamens which may be white, cream, yellow, pink, or red; in bud, the stamens are enclosed in a cap known as an operculum which is composed of the fused sepals or petals, or both. Thus, flowers have no petals, but instead decorate themselves with the many showy stamens. As the stamens expand, the operculum is forced off, splitting away from the cup-like base of the flower; this is one of the features that unites the genus. The woody fruits or capsules are roughly cone-shaped and have valves at the end which open to release the seeds, which are waxy, rod-shaped, about 1 mm in length, and yellow-brown in colour. Most species do not flower until adult foliage starts to appear; E. cinerea and E. perriniana are notable exceptions. Taxonomy The genus Eucalyptus was first formally described in 1789 by Charles Louis L'Héritier de Brutelle who published the description in his book Sertum Anglicum, seu, Plantae rariores quae in hortis juxta Londinum along with a description of the type species, Eucalyptus obliqua. The name Eucalyptus is derived from the Ancient Greek words "eu" meaning 'well' and "calyptos" 'covered', referring to the operculum covering the flower buds. The type specimen was collected in 1777 by David Nelson, the gardener-botanist on Cook's third voyage. He collected the specimen on Bruny Island and sent it to de Brutelle who was working in London at that time. History Although eucalypts must have been seen by the very early European explorers and collectors, no botanical collections of them are known to have been made until 1770 when Joseph Banks and Daniel Solander arrived at Botany Bay with Captain James Cook. There they collected specimens of E. gummifera and later, near the Endeavour River in northern Queensland, E. platyphylla; neither of these species was named as such at the time. In 1777, on Cook's third expedition, David Nelson collected a eucalypt on Bruny Island in southern Tasmania. This specimen was taken to the British Museum in London, and was named Eucalyptus obliqua by the French botanist L'Héritier, who was working in London at the time. He coined the generic name from the Greek roots eu and calyptos, meaning "well" and "covered" in reference to the operculum of the flower bud which protects the developing flower parts as the flower develops and is shed by the pressure of the emerging stamens at flowering. The name obliqua was derived from the Latin obliquus, meaning "oblique", which is the botanical term describing a leaf base where the two sides of the leaf blade are of unequal length and do not meet the petiole at the same place. E. obliqua was published in 1788–89, which coincided with the European colonisation of Australia. Between then and the turn of the 19th century, several more species of Eucalyptus were named and published. 
Most of these were by the English botanist James Edward Smith and most were, as might be expected, trees of the Sydney region. These include the economically valuable E. pilularis, E. saligna and E. tereticornis. The first endemic Western Australian Eucalyptus to be collected and subsequently named was the Yate (E. cornuta) by the French botanist Jacques Labillardière, who collected in what is now the Esperance area in 1792. Several Australian botanists were active during the 19th century, particularly Ferdinand von Mueller, whose work on eucalypts contributed greatly to the first comprehensive account of the genus in George Bentham's Flora Australiensis in 1867. The account is the most important early systematic treatment of the genus. Bentham divided it into five series whose distinctions were based on characteristics of the stamens, particularly the anthers (Mueller, 1879–84), work elaborated by Joseph Henry Maiden (1903–33) and still further by William Faris Blakely (1934). The anther system became too complex to be workable and more recent systematic work has concentrated on the characteristics of buds, fruits, leaves and bark. Species and hybrids Over 700 species of Eucalyptus are known. Some have diverged from the mainstream of the genus to the extent that they are quite isolated genetically and are able to be recognised by only a few relatively invariant characteristics. Most, however, may be regarded as belonging to large or small groups of related species, which are often in geographical contact with each other and between which gene exchange still occurs. In these situations, many species appear to grade into one another, and intermediate forms are common. In other words, some species are relatively fixed genetically, as expressed in their morphology, while others have not diverged completely from their nearest relatives. Hybrid individuals have not always been recognised as such on first collection and some have been named as new species, such as E. chrysantha (E. preissiana × E. sepulcralis) and E. "rivalis" (E. marginata × E. megacarpa). Hybrid combinations are not particularly common in the field, but some other published species frequently seen in Australia have been suggested to be hybrid combinations. For example, Eucalyptus × erythrandra is believed to be E. angulosa × E. teraptera and due to its wide distribution is often referred to in texts. Renantherin, a phenolic compound present in the leaves of some Eucalyptus species, allows chemotaxonomic discrimination in the sections renantheroideae and renantherae and the ratio of the amount of leucoanthocyanins varies considerably in certain species. Related genera Eucalyptus is one of three similar genera that are commonly referred to as "eucalypts", the others being Corymbia and Angophora. Many species, though by no means all, are known as gum trees because they exude copious kino from any break in the bark (e.g., scribbly gum). The generic name is derived from the Greek words ευ (eu) "well" and καλύπτω (kalýpto) "to cover", referring to the operculum on the calyx that initially conceals the flower. Distribution There are more than 700 species of Eucalyptus and most are native to Australia; a very small number are found in adjacent areas of New Guinea and Indonesia. One species, Eucalyptus deglupta, ranges as far north as the Philippines. Of the 15 species found outside Australia, just nine are exclusively non-Australian. 
Species of Eucalyptus are cultivated widely in the tropical and temperate world, including the Americas, Europe, Africa, the Mediterranean Basin, the Middle East, China, and the Indian subcontinent. However, the range over which many eucalypts can be planted in the temperate zone is constrained by their limited cold tolerance. Australia is covered by of eucalypt forest, comprising three quarters of the area covered by native forest. The Blue Mountains of southeastern Australia have been a centre of eucalypt diversification; their name is in reference to the blue haze prevalent in the area, believed derived from the volatile terpenoids emitted by these trees. Fossil record The oldest definitive Eucalyptus fossils are from Patagonia in South America, where eucalypts are no longer native, though they have been introduced from Australia. The fossils are from the early Eocene (51.9 Mya), and were found in the Laguna del Hunco Formation in Chubut Province in Argentina. This shows that the genus had a Gondwanan distribution. Fossil leaves also occur in the Miocene of New Zealand, where the genus is not native today, but again have been introduced from Australia. Despite the prominence of Eucalyptus in modern Australia, estimated to contribute some 75% of the modern vegetation, the fossil record is very scarce throughout much of the Cenozoic, and suggests that this rise to dominance is a geologically more recent phenomenon. The oldest reliably dated macrofossil of Eucalyptus is a 21-million-year-old tree-stump encased in basalt in the upper Lachlan Valley in New South Wales. Other fossils have been found, but many are either unreliably dated or else unreliably identified. It is useful to consider where Eucalyptus fossils have not been found. Extensive research has gone into the fossil floras of the Paleocene to Oligocene of South-Eastern Australia, and has failed to uncover a single Eucalyptus specimen. Although the evidence is sparse, the best hypothesis is that in the mid-Tertiary, the continental margins of Australia only supported more mesic noneucalypt vegetation, and that eucalypts probably contributed to the drier vegetation of the arid continental interior. With the progressive drying out of the continent since the Miocene, eucalypts were displaced to the continental margins, and much of the mesic and rainforest vegetation that was once there was eliminated. The current superdominance of Eucalyptus in Australia may be an artefact of human influence on its ecology. In more recent sediments, numerous findings of a dramatic increase in the abundance of Eucalyptus pollen are associated with increased charcoal levels. Though this occurs at different rates throughout Australia, it is compelling evidence for a relationship between the artificial increase of fire frequency with the arrival of Aboriginals and increased prevalence of this exceptionally fire-tolerant genus. Tall timber Several eucalypt species are among the tallest trees in the world. Eucalyptus regnans, the Australian 'mountain ash', is the tallest of all flowering plants (angiosperms); today, the tallest measured specimen named Centurion is tall. Coast Douglas-fir is about the same height; only coast redwood is taller, and they are conifers (gymnosperms). Six other eucalypt species exceed 80 metres in height: Eucalyptus obliqua, Eucalyptus delegatensis, Eucalyptus diversicolor, Eucalyptus nitens, Eucalyptus globulus and Eucalyptus viminalis. Frost intolerance Most eucalypts are not tolerant of severe cold. 
Eucalypts do well in a range of climates but are usually damaged by anything beyond a light frost of ; the hardiest are the snow gums, such as Eucalyptus pauciflora, which is capable of withstanding cold and frost down to about . Two subspecies, E. pauciflora subsp. niphophila and E. pauciflora subsp. debeuzevillei in particular are even hardier and can tolerate even quite severe winters. Several other species, especially from the high plateau and mountains of central Tasmania such as Eucalyptus coccifera, Eucalyptus subcrenulata and Eucalyptus gunnii, have also produced extreme cold-hardy forms and it is seed procured from these genetically hardy strains that are planted for ornament in colder parts of the world. Animal relationships An essential oil extracted from Eucalyptus leaves contains compounds that are powerful natural disinfectants and can be toxic in large quantities. Several marsupial herbivores, notably koalas and some possums, are relatively tolerant of it. The close correlation of these oils with other more potent toxins called formylated phloroglucinol compounds (euglobals, macrocarpals and sideroxylonals) allows koalas and other marsupial species to make food choices based on the smell of the leaves. For koalas, these compounds are the most important factor in leaf choice. A wide variety of insects also feed exclusively on Eucalyptus leaves, such as beetles in the genus Paropsisterna. The eusocial beetle Austroplatypus incompertus makes and defends its galleries exclusively inside eucalypts, including some species of Eucalyptus and Corymbia. Diseases on plants Fungal species Mycosphaerella and Teratosphaeria have been associated with leaf disease on various Eucalyptus species. Several fungal species from Teratosphaeriaceae family are causal agents in leaf diseases and stem cankers of Eucalyptus in Uruguay, and Australia. Adaptation to fire Eucalypts originated between 35 and 50 million years ago, not long after Australia-New Guinea separated from Gondwana, their rise coinciding with an increase in fossil charcoal deposits (suggesting that fire was a factor even then), but they remained a minor component of the Tertiary rainforest until about 20 million years ago, when the gradual drying of the continent and depletion of soil nutrients led to the development of a more open forest type, predominantly Casuarina and Acacia species. The two valuable timber trees, alpine ash E. delegatensis and Australian mountain ash E. regnans, are killed by fire and only regenerate from seed. The same 2003 bushfire that had little impact on forests around Canberra resulted in thousands of hectares of dead ash forests. However, a small amount of ash survived and put out new ash trees as well. Fire hazard Eucalyptus oil is highly flammable and at high enough temperatures the oil expands quickening the spread of wildfires. Bushfires can travel easily through the oil-rich air of the tree crowns. Eucalypts obtain long-term fire survivability from their ability to regenerate from epicormic buds situated deep within their thick bark, or from lignotubers, or by producing serotinous fruits. In seasonally dry climates oaks are often fire-resistant, particularly in open grasslands, as a grass fire is insufficient to ignite the scattered trees. 
In contrast, a Eucalyptus forest tends to promote fire because of the volatile and highly combustible oils produced by the leaves, as well as the production of large amounts of litter that is high in phenolics, preventing its breakdown by fungi and thus accumulating as large amounts of dry, combustible fuel. Consequently, dense eucalypt plantings may be subject to catastrophic firestorms. In fact, almost thirty years before the Oakland firestorm of 1991, a study of Eucalyptus in the area warned that the litter beneath the trees builds up very rapidly and should be regularly monitored and removed. It has been estimated that 70% of the energy released through the combustion of vegetation in the Oakland fire was due to Eucalyptus. In a National Park Service study, it was found that the fuel load (in tons per acre) of non-native Eucalyptus woods is almost three times as great as that of native oak woodland. During World War II, one California town cut down its Eucalyptus trees to "about a third of their height in the vicinity of anti-aircraft guns" because of the known fire-fueling qualities of the trees, with the mayor telling a newspaper reporter, "If a shell so much as hits a leaf, it's supposed to explode." Falling branches Some species of Eucalyptus drop branches unexpectedly. In Australia, Parks Victoria warns campers not to camp under river red gums. Some councils in Australia, such as Gosnells, Western Australia, have removed eucalypts after reports of damage from dropped branches, even in the face of lengthy, well-publicised protests to protect particular trees. A former Australian National Botanic Gardens director and consulting arborist, Robert Boden, has been quoted referring to "summer branch drop". Dropping of branches is recognised in Australian literature through the fictional death of Judy in Seven Little Australians. Although all large trees can drop branches, the density of Eucalyptus wood is high due to its high resin content, increasing the hazard. Cultivation and uses Eucalypts were introduced from Australia to the rest of the world following the Cook expedition in 1770. Collected by Sir Joseph Banks, botanist on the expedition, they were subsequently introduced to many parts of the world, notably California, southern Europe, Africa, the Middle East, South Asia and South America. About 250 species are under cultivation in California. In Portugal and also in Spain, eucalypts have been grown in plantations for the production of pulpwood. Eucalypts are the basis of several industries, such as sawmilling, pulp, charcoal and others. Several species have become invasive and are causing major problems for local ecosystems, mainly due to the absence of wildlife corridors and rotation management. Eucalypts have many uses which have made them economically important trees, and they have become a cash crop in poor areas such as Timbuktu, Mali, and the Peruvian Andes, despite concerns that the trees are invasive in some environments like those of South Africa. Best-known are perhaps the varieties karri and yellow box. Due to their fast growth, the foremost benefit of these trees is their wood. They can be chopped off at the root and grow back again. They provide many desirable characteristics for use as ornament, timber, firewood and pulpwood. Eucalyptus wood is also used in a number of industries, from fence posts (where the oil-rich wood's high resistance to decay is valued) and charcoal to cellulose extraction for biofuels.
Fast growth also makes eucalypts suitable as windbreaks and for reducing erosion. Some Eucalyptus species have attracted attention from horticulturists, global development researchers, and environmentalists because of desirable traits such as being fast-growing sources of wood, producing oil that can be used for cleaning and as a natural insecticide, or an ability to be used to drain swamps and thereby reduce the risk of malaria. Eucalyptus oil has many uses, including in fuels and fragrances, as an insect repellent, and as an antimicrobial agent. Eucalyptus trees show allelopathic effects; they release compounds which inhibit other plant species from growing nearby. Outside their natural ranges, eucalypts are both lauded for their beneficial economic impact on poor populations and criticised for being "water-guzzling" aliens, leading to controversy over their total impact. Eucalypts draw a tremendous amount of water from the soil through the process of transpiration. They have been planted (or re-planted) in some places to lower the water table and reduce soil salination. Eucalypts have also been used as a way of reducing malaria by draining the soil in Algeria, Lebanon, Sicily, elsewhere in Europe, in the Caucasus (Western Georgia), and in California. Drainage removes swamps which provide a habitat for mosquito larvae, but can also destroy ecologically productive areas. This drainage is not limited to the soil surface, because Eucalyptus roots are up to in length and can, depending on the location, even reach the phreatic zone. Pulpwood Eucalyptus is the most common short-fibre source of pulpwood for making pulp. The types most often used in papermaking are Eucalyptus globulus (in temperate areas) and the Eucalyptus urophylla x Eucalyptus grandis hybrid (in the tropics). The fibre length of Eucalyptus is relatively short and uniform, with low coarseness compared with other hardwoods commonly used as pulpwood. The fibres are slender, yet relatively thick-walled. This gives uniform paper formation and the high opacity that are important for all types of fine papers. The low coarseness is important for high-quality coated papers. Eucalyptus is suitable for many tissue papers, as the short and slender fibres give a high number of fibres per gram and the low coarseness contributes to softness. Eucalyptus oil Eucalyptus oil is readily steam distilled from the leaves and can be used for cleaning and as an industrial solvent, as an antiseptic, for deodorising, and in very small quantities in food supplements, especially sweets, cough drops, toothpaste and decongestants. It has insect-repellent properties, and serves as an active ingredient in some commercial mosquito repellents. Aromatherapists have adopted Eucalyptus oils for a wide range of purposes. Eucalyptus globulus is the principal source of Eucalyptus oil worldwide. Musical instruments Eucalypt wood is also commonly used to make didgeridoos, a traditional Australian Aboriginal wind instrument. The trunk of the tree is hollowed out by termites, and then cut down if the bore is of the correct size and shape. Eucalypt wood is also used as a tonewood and a fingerboard material for acoustic guitars, notably by the California-based Taylor company. Dyes All parts of Eucalyptus may be used to make dyes that are substantive on protein fibres (such as silk and wool), simply by processing the plant part with water. Colours achieved range from yellow and orange through green, tan, chocolate and deep rust red.
Prospecting Eucalyptus trees in the Australian outback draw up gold from tens of metres underground through their root systems and deposit it as particles in their leaves and branches. A Maia detector for X-ray elemental imaging at the Australian Synchrotron clearly showed deposits of gold and other metals in the structure of Eucalyptus leaves from the Kalgoorlie region of Western Australia that would have been untraceable using other methods. The microscopic leaf-bound "nuggets", about 8 micrometres wide on average, are not worth collecting in themselves, but may provide an environmentally benign way of locating subsurface mineral deposits. Eucalyptus as plantation species In the 20th century, scientists around the world experimented with Eucalyptus species. They hoped to grow them in the tropics, but most of these experiments failed until breakthroughs in the 1960s-1980s in species selection, silviculture, and breeding programs "unlocked" the potential of eucalypts in the tropics. Prior to then, as Brett Bennett noted in a 2010 article, eucalypts were something of the "El Dorado" of forestry. Today, Eucalyptus is the most widely planted type of tree in plantations around the world, in South America (mainly in Brazil, Argentina, Paraguay and Uruguay), South Africa, Australia, India, Galicia, Portugal and many other places. North America California In the 1850s, Eucalyptus trees were introduced to California by Australians during the California Gold Rush. Much of California is similar in climate to parts of Australia. By the early 1900s, thousands of acres of eucalypts had been planted with the encouragement of the state government. It was hoped that they would provide a renewable source of timber for construction, furniture making and railway sleepers. It was soon found that for the latter purpose Eucalyptus was particularly unsuitable, as ties made from Eucalyptus had a tendency to twist while drying, and the dried ties were so tough that it was nearly impossible to hammer rail spikes into them. It was later noted that the promise of Eucalyptus in California had been based on the old virgin forests of Australia, and this proved a mistake: the young trees being harvested in California could not compare in quality to the centuries-old Eucalyptus timber of Australia, and they reacted differently to harvesting, splitting and warping in ways the old-growth timber had not. This difference would doom the California Eucalyptus industry. The species E. camaldulensis, E. tereticornis, and E. cladocalyx are all present in California, but the blue gum E. globulus makes up by far the largest population in the state. One way in which Eucalyptus, mainly the blue gum E. globulus, proved valuable in California was in providing windbreaks for highways, orange groves, and farms in the mostly treeless central part of the state. They are also admired as shade and ornamental trees in many cities and gardens. Eucalyptus plantations in California have been criticised because they compete with native plants and typically do not support native animals. Eucalyptus has historically been planted to replace California's coast live oak population, and the new Eucalyptus is not as hospitable to native flora and fauna as the oaks. In appropriately foggy conditions on the California coast, Eucalyptus can spread at a rapid rate. The absence of natural inhibitors such as the koala or pathogens native to Australia has aided the spread of California Eucalyptus trees.
This is less of an issue further inland, but on the coast invasive eucalypts can disrupt native ecosystems. Eucalyptus may have adverse effects on local streams due to their chemical composition, and their dominance threatens species that rely on native trees. Nevertheless, some native species have been known to adapt to the Eucalyptus trees; notable examples are herons, the great horned owl, and the monarch butterfly, which use Eucalyptus groves as habitat. Despite these successes, eucalypts generally have a net negative impact on the overall balance of the native ecosystem. A major concern regarding eucalypts in California is their status as a fire hazard. Eucalyptus trees were a catalyst for the spread of the 1923 fire in Berkeley, which destroyed 568 homes. The 1991 Oakland Hills firestorm, which caused US$1.5 billion in damage, destroyed almost 3,000 homes, and killed 25 people, was partly fueled by large numbers of eucalypts close to the houses. Despite these issues, there are calls to preserve the Eucalyptus plants in California. Advocates for the tree claim its fire risk has been overstated; some even claim that the Eucalyptus's absorption of moisture makes it a barrier against fire. These experts believe that the herbicides used to remove the Eucalyptus would negatively impact the ecosystem, and that the loss of the trees would release carbon into the atmosphere unnecessarily. There is also an aesthetic argument for keeping the Eucalyptus: the trees are viewed by many as an attractive and iconic part of the California landscape. Many say that although the tree is not native, it has been in California long enough to become an essential part of the ecosystem and therefore should not be attacked as invasive. These arguments have caused experts and citizens in California, especially in the San Francisco Bay Area, to debate the merits of Eucalyptus removal versus preservation. However, the general consensus remains that some areas urgently require Eucalyptus management to stave off potential fire hazards. Efforts to remove some of California's Eucalyptus trees have been met with a mixed reaction from the public, and there have been protests against removal. Removing Eucalyptus trees can be expensive and often requires machinery or the use of herbicides. The trees struggle to reproduce on their own outside of the foggy regions of coastal California, and therefore some inland Eucalyptus forests are predicted to die out naturally. In some parts of California, eucalypt plantations are being removed and native trees and plants restored. Individuals have also illegally destroyed some trees and are suspected of introducing insect pests from Australia which attack the trees. Certain Eucalyptus species may also be grown for ornament in warmer parts of the Pacific Northwest—western Washington, western Oregon and southwestern British Columbia. South America Argentina Eucalyptus was introduced to Argentina around 1870 by President Domingo F. Sarmiento, who had brought the seeds from Australia, and it quickly became very popular. The most widely planted species were E. globulus, E. viminalis and E. rostrata. Currently, the Humid Pampas region has small forests and Eucalyptus barriers, some up to 80 years old, about 50 meters high and a maximum of one meter in diameter. Uruguay Antonio Lussich introduced Eucalyptus into Uruguay in approximately 1896, throughout what is now Maldonado Department, and it has spread all over the south-eastern and eastern coast.
There had been no trees in the area because it consisted of dry sand dunes and stones. Lussich also introduced many other trees, particularly Acacia and pines, but they have not expanded so extensively. Uruguayan forestry crops using Eucalyptus species have been promoted since 1989, when the new National Forestry Law established that 20% of the national territory would be dedicated to forestry. As the main landscape of Uruguay is grassland (140,000 km2, 87% of the national territory), most of the forestry plantations would be established in prairie regions. The planting of Eucalyptus sp. has been criticised because of concerns that soil would be degraded by nutrient depletion and other biological changes. During the last ten years, in the northwestern regions of Uruguay the Eucalyptus sp. plantations have reached annual forestation rates of 300%. That zone has a potential forested area of 1 million hectares, approximately 29% of the national territory dedicated to forestry, of which approximately 800,000 hectares are currently forested by monoculture of Eucalyptus spp. It is expected that the radical and durable substitution of vegetation cover leads to changes in the quantity and quality of soil organic matter. Such changes may also influence soil fertility and soil physical and chemical properties. The soil quality effects associated with Eucalyptus sp. plantations could have adverse effects on soil chemistry; for example: soil acidification, iron leaching, allelopathic activities and a high C:N ratio of litter. Additionally, as most scientific understanding of land cover change effects is related to ecosystems where forests were replaced by grasslands or crops, or grassland was replaced by crops, the environmental effects of the current Uruguayan land cover changes are not well understood. The first scientific publication on soil studies in western zone tree plantations (focused on pulp production) appeared in 2004 and described soil acidification and soil carbon changes, similar to a podzolisation process, and destruction of clay (illite-like minerals), which is the main reservoir of potassium in the soil. Although these studies were carried out in an important zone for forest cultivation, they cannot define the current situation in the rest of the land area under eucalyptus cultivation. Moreover, recently Jackson and Jobbagy have proposed another adverse environmental impact that may result from Eucalyptus culture on prairie soils—stream acidification. The Eucalyptus species most planted are E. grandis, E. globulus and E. dunnii; they are used mainly for pulp mills. Approximately 80,000 ha of E. grandis situated in the departments of Rivera, Tacuarembó and Paysandú is primarily earmarked for the solid wood market, although a portion of it is used for sawlogs and plywood. The current area under commercial forest plantation is 6% of the total. The main uses of the wood produced are elemental chlorine free pulp mill production (for cellulose and paper), sawlogs, plywood and bioenergy (thermoelectric generation). Most of the products obtained from sawmills and pulp mills, as well as plywood and logs, are exported. This has raised the income of this sector with respect to traditional products from other sectors. Uruguayan forestry plantations have rates of growth of 30 cubic metres per hectare per year and commercial harvesting occurs after nine years. Brazil Eucalypts were introduced to Brazil in 1910, for timber substitution and the charcoal industry. 
It has thrived in the local environment, and today there are around 7 million hectares planted. The wood is highly valued by the charcoal and pulp and paper industries. The short rotation allows larger wood production and supplies wood for several other activities, helping to preserve the native forests from logging. When well managed, the plantation soils can sustain endless replanting. Eucalyptus plantings are also used as windbreaks. Brazil's plantations have world-record rates of growth, typically over 40 cubic metres per hectare per year, and commercial harvesting occurs after five years. Due to continual development and governmental funding, year-on-year growth is consistently being improved. Eucalyptus can produce up to 100 cubic metres per hectare per year. Brazil has become the top exporter and producer of Eucalyptus round wood and pulp, and has played an important role in developing the Australian market through the country's committed research in this area. The local iron producers in Brazil rely heavily on sustainably grown Eucalyptus for charcoal; this has greatly pushed up the price of charcoal in recent years. The plantations are generally owned and operated for national and international industry by timber asset companies such as Thomson Forestry and Greenwood Management, or by cellulose producers such as Aracruz Cellulose and Stora Enso. Overall, South America was expected to produce 55% of the world's Eucalyptus round-wood by 2010. Many environmental NGOs have criticised the use of exotic tree species for forestry in Latin America. Africa Angola In the east of Angola, the Benguela railway company created eucalyptus plantations for firing its steam locomotives. Ethiopia Eucalypts were introduced to Ethiopia in either 1894 or 1895, either by Emperor Menelik II's French advisor Mondon-Vidailhet or by the Englishman Captain O'Brian. Menelik II endorsed their planting around his new capital city of Addis Ababa because of the massive deforestation around the city for firewood. According to Richard R.K. Pankhurst, "The great advantage of the eucalypts was that they were fast growing, required little attention and when cut down grew up again from the roots; it could be harvested every ten years. The tree proved successful from the onset". Plantations of eucalypts spread from the capital to other growing urban centres such as Debre Marqos. Pankhurst reports that the most common species found in Addis Ababa in the mid-1960s was E. globulus, although he also found E. melliodora and E. rostrata in significant numbers. David Buxton, writing of central Ethiopia in the mid-1940s, observed that eucalyptus trees "have become an integral -- and a pleasing -- element in the Shoan landscape and has largely displaced the slow-growing native 'cedar' (Juniperus procera)." It was commonly believed that the thirst of the Eucalyptus "tended to dry up rivers and wells", creating such opposition to the species that in 1913 a proclamation was issued ordering a partial destruction of all standing trees, and their replacement with mulberry trees. Pankhurst reports, "The proclamation however remained a dead letter; there is no evidence of eucalypts being uprooted, still less of mulberry trees being planted." Eucalypts remain a defining feature of Addis Ababa. Madagascar Much of Madagascar's original native forest has been replaced with Eucalyptus, threatening biodiversity by isolating remaining natural areas such as Andasibe-Mantadia National Park.
South Africa Numerous Eucalyptus species have been introduced into South Africa, mainly for timber and firewood but also for ornamental purposes. They are popular with beekeepers for the honey they provide. However, in South Africa they are considered invasive, with their water-sucking capabilities threatening water supplies. They also release a chemical into the surrounding soil which kills native competitors. Eucalyptus seedlings are usually unable to compete with the indigenous grasses, but after a fire, when the grass cover has been removed, a seed-bed may be created. The following Eucalyptus species have been able to become naturalised in South Africa: E. camaldulensis, E. cladocalyx, E. diversicolor, E. grandis and E. lehmannii. Zimbabwe As in South Africa, many Eucalyptus species have been introduced into Zimbabwe, mainly for timber and firewood, and E. robusta and E. tereticornis have been recorded as having become naturalised there. Europe Portugal Eucalypts have been grown in Portugal since the first half of the 19th century, the first thought to be a specimen of E. obliqua introduced to Vila Nova de Gaia in 1829. Grown first as ornamentals but soon afterwards in plantations, these eucalypts are prized for their long, upright trunks, rapid growth and ability to regrow after cutting. These plantations now occupy around 800,000 hectares, 10% of the country's total land area, with 90% of the trees being E. globulus. As of the late 20th century, there were an estimated 120 species of Eucalyptus in Portugal. The genus has also been subject to various controversies. Despite representing a large part of the agricultural economy, eucalypt plantations contribute to soil degradation, inducing resistance to water infiltration and increasing the risks of erosion and soil loss; they are also highly flammable, aggravating the risk of wildfires. Various Portuguese laws on eucalypt plantations have been formed and reformed to better suit both sides. There are various Eucalyptus specimens of public interest in Portugal, namely a karri in Coimbra's Mata Nacional de Vale de Canas, considered to be Europe's tallest tree at high. Italy In Italy, the Eucalyptus only arrived at the turn of the 19th century, and large-scale plantations were started at the beginning of the 20th century with the aim of drying up swampy ground to defeat malaria. During the 1930s, Benito Mussolini had thousands of eucalypts planted in the marshes around Rome. This, together with their rapid growth in the Italian climate and excellent function as windbreaks, has made them a common sight in the south of the country, including the islands of Sardinia and Sicily. They are also valued for the honey with a characteristic smell and taste that is produced from them. The variety of Eucalyptus most commonly found in Italy is E. camaldulensis. Greece In Greece, eucalypts are widely found, especially in southern Greece and Crete. They are cultivated and used for various purposes, including as an ingredient in pharmaceutical products (e.g., creams, elixirs and sprays) and for leather production. They were imported in 1862 by the botanist Theodoros Georgios Orphanides. The principal species is E. globulus. Ireland Eucalyptus has been grown in Ireland since trials in the 1930s and now grows wild in south-western Ireland in the mild climate. Asia Eucalyptus seeds of the species E. globulus were imported into Palestine in the 1860s, but did not acclimatise well. Later, E. camaldulensis was introduced more successfully and it is still a very common tree in Israel.
The use of Eucalyptus trees to drain swampy land was a common practice in the late nineteenth and early twentieth centuries. The German Templer colony of Sarona had begun planting Eucalyptus for this purpose by 1874, though it is not known where the seeds came from. Many Zionist colonies also adopted the practice in the following years under the guidance of the Mikveh Israel Agricultural School. Eucalyptus trees are now considered an invasive species in the region. In India, the Institute of Forest Genetics and Tree Breeding, Coimbatore started a Eucalyptus breeding program in the 1990s. The organisation released four varieties of conventionally bred, high yielding and genetically improved clones for commercial and research interests in 2010. Eucalyptus trees were introduced to Sri Lanka in the late 19th century by tea and coffee planters, for wind protection, shade and fuel. Forestry replanting of Eucalyptus began in the 1930s in deforested mountain areas, and currently there are about 10 species present in the island. They account for 20% of major reforestation plantings. They provide railway sleepers, utility poles, sawn timber and fuelwood, but are controversial because of their adverse effect on biodiversity, hydrology and soil fertility. They are associated with another invasive species, the eucalyptus gall wasp, Leptocybe invasa. Pacific Islands In Hawaii, some 90 species of Eucalyptus have been introduced to the islands, where they have displaced some native species due to their higher maximum height, fast growth and lower water needs. Particularly noticeable is the rainbow eucalyptus (Eucalyptus deglupta), native to Indonesia and the Philippines, whose bark falls off to reveal a trunk that can be green, red, orange, yellow, pink and purple. Non-native Eucalyptus and biodiversity Due to similar favourable climatic conditions, Eucalyptus plantations have often replaced oak woodlands, for example in California, Spain and Portugal. The resulting monocultures have raised concerns about loss of biological diversity, through loss of acorns that mammals and birds feed on, absence of hollows that in oak trees provide shelter and nesting sites for birds and small mammals and for bee colonies, as well as lack of downed trees in managed plantations. A study of the relationship between birds and Eucalyptus in the San Francisco Bay Area found that bird diversity was similar in native forest versus Eucalyptus forest, but the species were different. One way in which the avifauna (local assortment of bird species) changes is that cavity-nesting birds including woodpeckers, owls, chickadees, wood ducks, etc. are depauperate in Eucalyptus groves because the decay-resistant wood of these trees prevents cavity formation by decay or excavation. Also, those bird species that glean insects from foliage, such as warblers and vireos, experience population declines when Eucalyptus groves replace oak forest. Birds that thrive in Eucalyptus groves in California tend to prefer tall vertical habitat. These avian species include herons and egrets, which also nest in redwoods. The Point Reyes Bird Observatory observes that sometimes short-billed birds like the ruby-crowned kinglet are found dead beneath Eucalyptus trees with their nostrils clogged with pitch. Monarch butterflies use Eucalyptus in California for overwintering, but in some locations have a preference for Monterey pines. 
Eucalyptus as an invasive species Eucalyptus trees are considered invasive in local ecosystems and negatively impact water resources in countries where they have been introduced. South Africa In South Africa, the Eucalyptus species E. camaldulensis, E. cladocalyx, E. conferruminata, E. diversicolor, E. grandis and E. tereticornis are listed as Category 1b invaders under the National Environmental Management: Biodiversity Act. This means that most activities involving these species are prohibited (such as importing, propagating, translocating or trading) and that it must be ensured they do not spread beyond a plantation's boundaries. E. cladocalyx and E. diversicolor are considered fynbos invaders and use up to 20% more water than the native fynbos vegetation; invasive species, including Eucalyptus, that reduce Cape Town's water supply by 55 billion litres (about two months' worth) are being cleared.
Biology and health sciences
Myrtales
null
49890
https://en.wikipedia.org/wiki/Herring
Herring
Herring are various species of forage fish belonging to the order Clupeiformes. Herring often move in large schools around fishing banks and near the coast, and are found particularly in shallow, temperate waters of the North Pacific and North Atlantic Oceans, including the Baltic Sea, as well as off the west coast of South America. Three species of Clupea (the type genus of the herring family Clupeidae) are recognised, and comprise about 90% of all herrings captured in fisheries. The most abundant of these species is the Atlantic herring, which comprises over half of all herring capture. Fish called herring are also found in the Arabian Sea, Indian Ocean, and Bay of Bengal. Herring played an important role in the history of marine fisheries in Europe, and early in the 20th century, their study was fundamental to the development of fisheries science. These oily fish also have a long history as an important food fish, and are often salted, smoked, or pickled. Herring were also known as "silver darlings" in the United Kingdom. Species A number of different species, most belonging to the family Clupeidae, are commonly referred to as herrings. The origin of the term "herring" is somewhat unclear, though it may derive from the same source as the Old High German heri, meaning a "host, multitude", in reference to the large schools they form. The type genus of the herring family Clupeidae is Clupea, which contains three species: the Atlantic herring (the type species) found in the North Atlantic, the Pacific herring mainly found in the North Pacific, and the Araucanian herring found off the west coast of South America. Subspecific divisions have been suggested for both the Atlantic and Pacific herrings, but their biological basis remains unclear. In addition, a number of related species, all in the Clupeidae, are commonly referred to as herrings, including those members of the family referred to by FishBase as herrings which have been assessed by the International Union for Conservation of Nature. Also, a number of other species are called herrings, which may be related to clupeids or may just share some characteristics of herrings (such as the lake herring, which is a salmonid). Just which of these species are called herrings can vary with locality, so what might be called a herring in one locality might be called something else in another. Characteristics The species of Clupea belong to the larger family Clupeidae (herrings, shads, sardines, menhadens), which comprises some 200 species that share similar features. These silvery-coloured fish have a single dorsal fin, which is soft, without spines. They have no lateral line and have a protruding lower jaw. Their size varies between subspecies: the Baltic herring (Clupea harengus membras) is small, 14 to 18 cm (about 5.5 to 7 inches); the proper Atlantic herring (Clupea harengus harengus) can grow to about and weigh up ; and Pacific herring grow to about . Life cycle At least one stock of Atlantic herring spawns in every month of the year. Each spawns at a different time and place (spring, summer, autumn, and winter herrings). Greenland populations spawn in of water, while North Sea (bank) herrings spawn at down to in autumn. Eggs are laid on the sea bed, on rock, stones, gravel, sand or beds of algae. Females may deposit from 20,000 to 40,000 eggs, according to age and size, averaging about 30,000. In sexually mature herring, the genital organs grow before spawning, reaching about one-fifth of the fish's total weight.
The eggs sink to the bottom, where they stick in layers or clumps to gravel, seaweed, or stones, by means of their mucous coating, or to any other objects on which they chance to settle. If the egg layers are too thick, they suffer from oxygen depletion and often die, entangled in a maze of mucus. They need substantial water microturbulence, generally provided by wave action or coastal currents. Survival is highest in crevices and behind solid structures, because predators feast on openly exposed eggs. The individual eggs are in diameter, depending on the size of the parent fish and also on the local race. Incubation time is about 40 days at , 15 days at , or 11 days at . Eggs die at temperatures above . The larvae are long at hatching, with a small yolk sac that is absorbed by the time the larvae reach . Only the eyes are well pigmented; the rest of the body is nearly transparent, virtually invisible under water and in natural lighting conditions. The dorsal fin forms at , the anal fin at about ; the ventral fins are visible and the tail becomes well forked at 30 to ; at about , the larva begins to look like a herring. Herring larvae are very slender and can easily be distinguished from all other young fish of their range by the location of the vent, which lies close to the base of the tail; however, distinguishing clupeoids one from another in their early stages requires critical examination, especially telling herring from sprats. At one year, they are about long, and they first spawn at three years. Ecology Prey Herrings consume copepods, arrow worms, pelagic amphipods, mysids, and krill in the pelagic zone. Conversely, they are a central prey item or forage fish for higher trophic levels. The reasons for this success are still enigmatic; one speculation attributes their dominance to the huge, extremely fast cruising schools they form. Herring feed on phytoplankton, and as they mature, they start to consume larger organisms. They also feed on zooplankton, tiny animals found in oceanic surface waters, and on small fish and fish larvae. Copepods and other tiny crustaceans are the most common zooplankton eaten by herring. During daylight, herring stay in the safety of deep water, feeding at the surface only at night, when the chance of being seen by predators is lower. They swim along with their mouths open, filtering the plankton from the water as it passes through their gills. Young herring mostly hunt copepods individually, by means of "particulate feeding" or "raptorial feeding", a feeding method also used by adult herring on larger prey items like krill. If prey concentrations reach very high levels, as in microlayers, at fronts, or directly below the surface, herring become filter feeders, driving several metres forward with wide-open mouths and far-expanded opercula, then closing and cleaning the gill rakers for a few milliseconds. Copepods, the primary zooplankton, are a major item on the forage fish menu. Copepods are typically long, with a teardrop-shaped body. Some scientists say they form the largest animal biomass on the planet. Copepods are very alert and evasive. They have large antennae; when they spread their antennae, they can sense the pressure wave from an approaching fish and jump with great speed over a few centimetres. If copepod concentrations reach high levels, schooling herrings adopt a method called ram feeding.
They swim with their mouths wide open and their operculae fully expanded. The fish swim in a grid where the distance between them is the same as the jump length of their prey, as indicated in the animation above right. In the animation, juvenile herring hunt the copepods in this synchronised way. The copepods sense with their antennae the pressure wave of an approaching herring and react with a fast escape jump. The length of the jump is fairly constant. The fish align themselves in a grid with this characteristic jump length. A copepod can dart about 80 times before it tires. After a jump, it takes it 60 milliseconds to spread its antennae again, and this time delay becomes its undoing, as the almost endless stream of herring allows a herring to eventually snap up the copepod. A single juvenile herring could never catch a large copepod. Other pelagic prey eaten by herring includes fish eggs, larval snails, diatoms by herring larvae below , tintinnids by larvae below , molluscan larvae, menhaden larvae, krill, mysids, smaller fishes, pteropods, annelids, Calanus spp., Centropagidae, and Meganyctiphanes norvegica. Herrings, along with Atlantic cod and sprat, are the most important commercial species to humans in the Baltic Sea. The analysis of the stomach contents of these fish indicate Atlantic cod is the top predator, preying on the herring and sprat. Sprat are competitive with herring for the same food resources. This is evident in the two species' vertical migration in the Baltic Sea, where they compete for the limited zooplankton available and necessary for their survival. Sprat are highly selective in their diet and eat only zooplankton, while herring are more eclectic, adjusting their diet as they grow in size. In the Baltic, copepods of the genus Acartia can be present in large numbers. However, they are small in size with a high escape response, so herring and sprat avoid trying to catch them. These copepods also tend to dwell more in surface waters, whereas herring and sprat, especially during the day, tend to dwell in deeper waters. Predators Predators of herring include seabirds, marine mammals such as dolphins, porpoises, whales, seals, and sea lions, predatory fish such as sharks, billfish, tuna, salmon, striped bass, cod, and halibut. Fishermen also catch and eat herring. The predators often cooperate in groups, using different techniques to panic or herd a school of herring into a tight bait ball. Different predatory species then use different techniques to pick the fish off in the bait ball. The sailfish raises its sail to make it appear much larger. Swordfish charge at high speed through the bait balls, slashing with their swords to kill or stun prey. They then turn and return to consume their "catch". Thresher sharks use their long tails to stun the shoaling fish. These sharks compact their prey school by swimming around them and splashing the water with their tails, often in pairs or small groups. They then strike them sharply with the upper lobe of their tails to stun them. Spinner sharks charge vertically through the school, spinning on their axes with their mouths open and snapping all around. The sharks' momentum at the end of these spiraling runs often carries them into the air. Some whales lunge feed on bait balls. Lunge feeding is an extreme feeding method, where the whale accelerates from below the bait ball to a high velocity and then opens its mouth to a large gape angle. 
This generates the water pressure required to expand its mouth and engulf and filter a huge amount of water and fish. Lunge feeding by rorquals, a family of huge baleen whales that includes the blue whale, is said to be the largest biomechanical event on Earth. Fisheries Adult herring are harvested for their flesh and eggs, and they are often used as baitfish. The trade in herring is an important sector of many economies around the world. In Europe, the fish has been called the "silver of the sea", and its trade has been so significant to many countries that it has been regarded as the most commercially important fishery in history. As food Herring has been a staple food source since at least 3000 BC. The fish is served numerous ways, and many regional recipes are used: eaten raw, fermented, pickled, or cured by other techniques, such as being smoked as kippers. Herring are very high in the long-chain omega-3 fatty acids EPA and DHA. They are a source of vitamin D. Water pollution influences the amount of herring that may be safely consumed. For example, large Baltic herring slightly exceeds recommended limits with respect to PCB and dioxin, although some sources point out that the cancer-reducing effect of omega-3 fatty acids is statistically stronger than the carcinogenic effect of PCBs and dioxins. The contaminant levels depend on the age of the fish which can be inferred from their size. Baltic herrings larger than may be eaten twice a month, while herrings smaller than 17 cm can be eaten freely. Mercury in fish also influences the amount of fish that women who are pregnant or planning to be pregnant within the next one or two years may safely eat. History The herring has played a highly significant role in history both socially and economically. During the Middle Ages, herring prompted the founding of Great Yarmouth and Copenhagen and played a critical role in the medieval development of Amsterdam. In 1274, while on his deathbed at the monastery of Fossanova (south of Rome, Italy), when encouraged to eat something to regain his strength, Thomas Aquinas asked for fresh herring.
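The escape-jump arithmetic described in the Ecology section above (a budget of roughly 80 jumps and a 60 millisecond antenna-recovery time against an effectively endless stream of herring) lends itself to a simple toy calculation. The Python sketch below is purely illustrative and not a published model: the jump budget and recovery time come from the text, while the herring encounter interval is an assumed free parameter.

```python
# Toy model of synchronised ram feeding: a steady stream of herring
# eventually exhausts a copepod whose escape jumps are limited in number.
# Figures from the text: ~80 jumps before tiring, ~60 ms to re-spread the
# antennae after each jump. The encounter interval is an assumption.

def time_until_caught(encounter_interval_ms, max_jumps=80, recovery_ms=60.0):
    """Return the time (ms) until the copepod can no longer escape.

    The copepod escapes an approaching herring only if it still has jumps
    left and has fully re-spread its antennae; otherwise it is caught.
    """
    t = 0.0
    jumps_used = 0
    ready_at = 0.0  # time at which the antennae are spread again
    while True:
        t += encounter_interval_ms          # next herring arrives
        if t < ready_at or jumps_used >= max_jumps:
            return t                        # caught: mid-recovery or exhausted
        jumps_used += 1                     # successful escape jump
        ready_at = t + recovery_ms

if __name__ == "__main__":
    for interval in (50.0, 100.0, 500.0):   # assumed encounter spacings (ms)
        caught = time_until_caught(interval)
        print(f"encounter every {interval:>5.0f} ms -> caught after {caught / 1000:.1f} s")
```

Under these assumptions, encounters spaced more tightly than the recovery time catch the copepod almost immediately, while slower streams still exhaust the jump budget within seconds to tens of seconds, which is the point the passage makes.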
https://en.wikipedia.org/wiki/Chrysoberyl
Chrysoberyl
The mineral or gemstone chrysoberyl is an aluminate of beryllium with the formula BeAl2O4. The name chrysoberyl is derived from the Greek words χρυσός chrysos and βήρυλλος beryllos, meaning "a gold-white spar". Despite the similarity of their names, chrysoberyl and beryl are two completely different gemstones, although they both contain beryllium. Chrysoberyl is the third-hardest frequently encountered natural gemstone and lies at 8.5 on the Mohs scale of mineral hardness, between corundum (9) and topaz (8). An interesting feature of its crystals is the cyclic twins called trillings. These twinned crystals have a hexagonal appearance, but are the result of a triplet of twins with each "twin" oriented at 120° to its neighbors and taking up 120° of the cyclic trilling. If only two of the three possible twin orientations are present, a V-shaped twin results. Ordinary chrysoberyl is yellowish-green and transparent to translucent. When the mineral exhibits good pale green to yellow color and is transparent, it is used as a gemstone. The three main varieties of chrysoberyl are: ordinary yellow-to-green chrysoberyl, cat's eye or cymophane, and alexandrite. Yellow-green chrysoberyl was referred to as "chrysolite" during the Victorian and Edwardian eras, which caused confusion since that name has also been used for the mineral olivine ("peridot" as a gemstone); that name is no longer used in the gemological nomenclature. Alexandrite, a strongly pleochroic (trichroic) gem, will exhibit emerald green, red and orange-yellow colors depending on viewing direction in partially polarised light. However, its most distinctive property is that it also changes color in artificial (tungsten/halogen) light compared to daylight. The color change from red to green is due to strong absorption of light in a narrow yellow portion of the spectrum, while allowing large bands of more blue-green and red wavelengths to be transmitted. Which of these prevails to give the perceived hue depends on the spectral balance of the illumination. Fine-quality alexandrite has a green to bluish-green color in daylight (relatively blue illumination of high color temperature), changing to a red to purplish-red color in incandescent light (relatively yellow illumination). However, fine-color material is extremely rare. Less-desirable stones may have daylight colors of yellowish-green and incandescent colors of brownish red. Cymophane is popularly known as "cat's eye". This variety exhibits pleasing chatoyancy or opalescence that reminds one of the eye of a cat. When cut to produce a cabochon, the mineral forms a light-green specimen with a silky band of light extending across the surface of the stone. Occurrence Chrysoberyl forms as a result of pegmatitic processes. Melting in the Earth's crust produces relatively low-density molten magma which can rise upwards towards the surface. As the main magma body cools, water originally present in low concentrations becomes more concentrated in the molten rock because it cannot be incorporated into the crystallization of solid minerals. The remnant magma thus becomes richer in water, and also in rare elements that similarly do not fit in the crystal structures of major rock-forming minerals. The water extends the temperature range downwards before the magma becomes completely solid, allowing concentration of rare elements to proceed so far that they produce their own distinctive minerals. 
The resulting rock is igneous in appearance but formed at a low temperature from a water-rich melt, with large crystals of the common minerals such as quartz and feldspar, but also with elevated concentrations of rare elements such as beryllium, lithium, or niobium, often forming their own minerals; this is called a pegmatite. The high water content of the magma made it possible for the crystals to grow quickly, so pegmatite crystals are often quite large, which increases the likelihood of gem specimens forming. Chrysoberyl can also grow in the country rocks near to pegmatites, when Be- and Al-rich fluids from the pegmatite react with surrounding minerals. Hence, it can be found in mica schists and in contact with metamorphic deposits of dolomitic marble. Because it is a hard, dense mineral that is resistant to chemical alteration, it can be weathered out of rocks and deposited in river sands and gravels in alluvial deposits with other gem minerals such as diamond, corundum, topaz, spinel, garnet, and tourmaline. When found in such placers, it will have rounded edges instead of sharp, wedge-shape forms. Much of the chrysoberyl mined in Brazil and Sri Lanka is recovered from placers, as the host rocks have been intensely weathered and eroded. If the pegmatite fluid is rich in beryllium, crystals of beryl or chrysoberyl could form. Beryl has a high ratio of beryllium to aluminium, while the opposite is true for chrysoberyl. Both are stable with the common mineral quartz. For alexandrite to form, some chromium would also have had to be present. However, beryllium and chromium do not tend to occur in the same types of rock. Chromium is most common in mafic and ultramafic rocks in which beryllium is extremely rare. Beryllium becomes concentrated in felsic pegmatites in which chromium is almost absent. Therefore, the only situation where an alexandrite can grow is when Be-rich pegmatitic fluids react with Cr-rich country rock. This unusual requirement explains the rarity of this chrysoberyl variety. Alexandrite The alexandrite variety displays a color change dependent upon the nature of ambient lighting. Alexandrite results from small scale replacement of aluminium by chromium ions in the crystal structure, which causes intense absorption of light over a narrow range of wavelengths in the yellow region (520–620 nm) of the visible light spectrum. Because human vision is most sensitive to green light and least sensitive to red light, alexandrite appears greenish in daylight where the full spectrum of visible light is present, and reddish in incandescent light which emits less green and blue light. This color change is independent of any change of hue with viewing direction through the crystal that would arise from pleochroism. Alexandrite from the Ural Mountains in Russia can be green by daylight and red by incandescent light. Other varieties of alexandrite may be yellowish or pink in daylight and a columbine or raspberry red by incandescent light. Stones that show a dramatic color change and strong colors (e.g., red-to-green) are rare and sought-after, but stones that show less distinct colors (e.g. yellowish green changing to brownish yellow) may also be considered "alexandrite" by gem labs such as the Gemological Institute of America. According to a popular but controversial story, alexandrite was discovered by the Finnish mineralogist Nils Gustaf Nordenskiöld (1792–1866), and named alexandrite in honor of the future Emperor of All Russia Alexander II Romanov. 
Nordenskiöld's initial discovery occurred as a result of an examination of a newly found mineral sample he had received from Perovskii, which he identified as emerald at first. The first emerald mine had been opened in 1831. However, recent research suggests that the stone was discovered by Yakov Kokovin. Alexandrite and larger were traditionally thought to be found only in the Ural Mountains, but have since been found in larger sizes in Brazil. Other deposits are located in India (Andhra Pradesh), Madagascar, Tanzania and Sri Lanka. Alexandrite in sizes over three carats are very rare. Today, several labs can produce synthetic lab-grown stones with the same chemical and physical properties as natural alexandrite. Several methods can produce flux-grown alexandrite, Czochralski (or pulled) alexandrite, and hydrothermally-produced alexandrite. Flux-grown gems are fairly difficult to distinguish from natural alexandrite as they contain inclusions that seem natural. Czochralski or pulled alexandrite is easier to identify because it is very clean and contains curved striations visible under magnification. Although the color change in pulled stones can be from blue to red, the color change does not truly resemble that of natural alexandrite from any deposit. Hydrothermal lab-grown alexandrite has identical physical and chemical properties to real alexandrite. Some gemstones falsely described as lab-grown synthetic alexandrite are actually corundum laced with trace elements (e.g., vanadium) or color-change spinel and are not actually chrysoberyl. As a result, they would be more accurately described as simulated alexandrite rather than "synthetic". This alexandrite-like sapphire material has been around for almost 100 years and shows a characteristic purple-mauve colour change, which does not really look like alexandrite because there is never any green. Cymophane Translucent yellowish chatoyant chrysoberyl is called cymophane or cat's eye. Cymophane has its derivation also from the Greek words meaning 'wave' and 'appearance', in reference to the haziness that visually distorts what would normally be viewed as a well defined surface of a cabochon. This effect may be combined with a cat eye effect. In this variety, microscopic tubelike cavities or needle-like inclusions of rutile occur in an orientation parallel to the c-axis, producing a chatoyant effect visible as a single ray of light passing across the crystal. This effect is best seen in gemstones cut in cabochon forms perpendicular to the c-axis. The color in yellow chrysoberyl is due to Fe3+ impurities. Although other minerals such as tourmaline, scapolite, corundum, spinel and quartz can form "cat's eye" stones similar in appearance to cymophane, the jewelry industry designates these stones as "quartz cat's eyes", or "ruby cat's eyes" and only chrysoberyl can be referred to as "cat's eye" with no other designation. Gems lacking the silky inclusions required to produce the cat's eye effect are usually faceted. An alexandrite cat's eye is a chrysoberyl cat's eye that changes color. "Milk and honey" is a term commonly used to describe the color of the best cat's eyes. The effect refers to the sharp milky ray of white light normally crossing the cabochon as a center line along its length and overlying the honey-colored background. The honey color is considered to be top-grade by many gemologists but the lemon yellow colors are also popular and attractive. 
Cat's eye material is found as a small percentage of the overall chrysoberyl production wherever chrysoberyl is found. Cat's eye became significantly more popular by the end of the 19th century when the Duke of Connaught gave a ring with a cat's eye as an engagement token; this was sufficient to make the stone more popular and increase its value greatly. Until that time, cat's eye had predominantly been present in gem and mineral collections. The increased demand in turn created an intensified search for it in Sri Lanka.
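The daylight/incandescent mechanism described in the Alexandrite section above (a narrow absorption band near 520–620 nm combined with illuminants of different spectral balance) can be illustrated with a rough numerical sketch. The Python snippet below is a simplified toy calculation, not gemological software: it models daylight and incandescent light as black bodies at assumed temperatures of about 6500 K and 2850 K, applies a crude notch absorber over the 520–620 nm band quoted above, and compares the transmitted energy in nominal green and red bands. The band edges, absorption depth and illuminant temperatures are illustrative assumptions, and the eye's differing sensitivity to green and red (mentioned above) is ignored, so only the relative shift between the two illuminants is meaningful.

```python
# Rough illustration of the alexandrite effect: a notch absorber in the
# yellow (520-620 nm, from the text) passes relatively more green under
# blue-rich daylight and relatively more red under incandescent light.
# Illuminant temperatures, band edges and absorption depth are assumptions.
import numpy as np

H, C, K = 6.626e-34, 2.998e8, 1.381e-23   # Planck, speed of light, Boltzmann

def planck(wl_m, temp_k):
    """Black-body spectral radiance (arbitrary units) at wavelengths wl_m."""
    return (2 * H * C**2 / wl_m**5) / (np.exp(H * C / (wl_m * K * temp_k)) - 1)

def transmission(wl_nm):
    """Toy alexandrite transmission: strong absorption in the 520-620 nm band."""
    return np.where((wl_nm >= 520) & (wl_nm <= 620), 0.15, 0.95)

wl_nm = np.linspace(400, 700, 601)
wl_m = wl_nm * 1e-9
green = (wl_nm >= 495) & (wl_nm < 570)     # nominal green band
red = (wl_nm >= 620) & (wl_nm <= 700)      # nominal red band

for label, temp in (("daylight ~6500 K", 6500.0), ("incandescent ~2850 K", 2850.0)):
    spectrum = planck(wl_m, temp) * transmission(wl_nm)
    ratio = spectrum[green].sum() / spectrum[red].sum()
    print(f"{label}: transmitted green/red energy ratio = {ratio:.2f}")
```

With these assumed numbers the green-to-red ratio of the transmitted light is several times higher under the blue-rich illuminant than under the incandescent one, which is the shift the text attributes to the perceived change from greenish to reddish.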
https://en.wikipedia.org/wiki/Rainforest
Rainforest
Rainforests are forests characterized by a closed and continuous tree canopy, moisture-dependent vegetation, the presence of epiphytes and lianas and the absence of wildfire. Rainforests can be generally classified as tropical rainforests or temperate rainforests, but other types have been described. Estimates vary from 40% to 75% of all biotic species being indigenous to the rainforests. There may be many millions of species of plants, insects and microorganisms still undiscovered in tropical rainforests. Tropical rainforests have been called the "jewels of the Earth" and the "world's largest pharmacy", because over one quarter of natural medicines have been discovered there. Rainforests as well as endemic rainforest species are rapidly disappearing due to deforestation, the resulting habitat loss and pollution of the atmosphere. Definition Rainforests are characterized by a closed and continuous tree canopy, high humidity, the presence of moisture-dependent vegetation, a moist layer of leaf litter, the presence of epiphytes and lianas and the absence of wildfire. The largest areas of rainforest are tropical or temperate rainforests, but other vegetation associations including subtropical rainforest, littoral rainforest, cloud forest, vine thicket and even dry rainforest have been described. Tropical rainforest Tropical rainforests are characterized by a warm and wet climate with no substantial dry season: typically found within 10 degrees north and south of the equator. Mean monthly temperatures exceed during all months of the year. Average annual rainfall is no less than and can exceed although it typically lies between and . Many of the world's tropical forests are associated with the location of the monsoon trough, also known as the Intertropical Convergence Zone. The broader category of tropical moist forests are located in the equatorial zone between the Tropic of Cancer and Tropic of Capricorn. Tropical rainforests exist in Southeast Asia (from Myanmar (Burma)) to the Philippines, Malaysia, Indonesia, Papua New Guinea and Sri Lanka; also in Sub-Saharan Africa from the Cameroon to the Congo (Congo Rainforest), South America (e.g. the Amazon rainforest), Central America (e.g. Bosawás, the southern Yucatán Peninsula-El Peten-Belize-Calakmul), Australia, and on Pacific Islands (such as Hawaii). Tropical forests have been called the "Earth's lungs", although it is now known that rainforests contribute little net oxygen addition to the atmosphere through photosynthesis. Temperate rainforest Tropical forests cover a large part of the globe, but temperate rainforests only occur in a few regions around the world. Temperate rainforests are rainforests in temperate regions. They occur in North America (in the Pacific Northwest in Alaska, British Columbia, Washington, Oregon and California), in Europe (parts of the British Isles such as the coastal areas of Ireland and Scotland, southern Norway, parts of the western Balkans along the Adriatic coast, as well as in Galicia and coastal areas of the eastern Black Sea, including Georgia and coastal Turkey), in East Asia (in southern China, Highlands of Taiwan, much of Japan and Korea, and on Sakhalin Island and the adjacent Russian Far East coast), in South America (southern Chile) and also in Australia and New Zealand. Dry rainforest Dry rainforests have a more open canopy layer than other rainforests, and are found in areas of lower rainfall (). They generally have two layers of trees. 
Layers A tropical rainforest typically has a number of layers, each with different plants and animals adapted for life in that particular area. Examples include the emergent, canopy, understory and forest floor layers. Emergent layer The emergent layer contains a small number of very large trees called emergents, which grow above the general canopy, reaching heights of 45–55 m, although on occasion a few species will grow to 70–80 m tall. They need to be able to withstand the hot temperatures and strong winds that occur above the canopy in some areas. Eagles, butterflies, bats and certain monkeys inhabit this layer. Canopy layer The canopy layer contains the majority of the largest trees, typically to tall. The densest areas of biodiversity are found in the forest canopy, a more or less continuous cover of foliage formed by adjacent treetops. The canopy, by some estimates, is home to 50 percent of all plant species. Epiphytic plants attach to trunks and branches, and obtain water and minerals from rain and debris that collects on the supporting plants. The fauna is similar to that found in the emergent layer but more diverse. A quarter of all insect species are believed to exist in the rainforest canopy. Scientists have long suspected the richness of the canopy as a habitat, but have only recently developed practical methods of exploring it. As long ago as 1917, naturalist William Beebe declared that "another continent of life remains to be discovered, not upon the Earth, but one to two hundred feet above it, extending over thousands of square miles." A true exploration of this habitat only began in the 1980s, when scientists developed methods to reach the canopy, such as firing ropes into the trees using crossbows. Exploration of the canopy is still in its infancy, but other methods include the use of balloons and airships to float above the highest branches and the building of cranes and walkways planted on the forest floor. The science of accessing tropical forest canopy using airships or similar aerial platforms is called dendronautics. Understory layer The understory or understorey layer lies between the canopy and the forest floor. It is home to a number of birds, snakes and lizards, as well as predators such as jaguars, boa constrictors and leopards. The leaves are much larger at this level and insect life is abundant. Many seedlings that will grow to the canopy level are present in the understory. Only about 5% of the sunlight shining on the rainforest canopy reaches the understory. This layer can be called a shrub layer, although the shrub layer may also be considered a separate layer. Forest floor The forest floor, the bottom-most layer, receives only 2% of the sunlight. Only plants adapted to low light can grow in this region. Away from riverbanks, swamps and clearings, where dense undergrowth is found, the forest floor is relatively clear of vegetation because of the low sunlight penetration. It also contains decaying plant and animal matter, which disappears quickly, because the warm, humid conditions promote rapid decay. Many forms of fungi growing here help decay the animal and plant waste. Flora and fauna More than half of the world's species of plants and animals are found in rainforests. Rainforests support a very broad array of fauna, including mammals, reptiles, amphibians, birds and invertebrates. Mammals may include primates, felids and other families. 
Reptiles include snakes, turtles, chameleons and other families, while birds include such families as Vangidae and Cuculidae. Dozens of families of invertebrates are found in rainforests. Fungi are also very common in rainforest areas as they can feed on the decomposing remains of plants and animals. The great diversity in rainforest species is in large part the result of diverse and numerous physical refuges, i.e. places in which plants are inaccessible to many herbivores, or in which animals can hide from predators. Having numerous refuges available also results in much higher total biomass than would otherwise be possible. Some species of fauna show a trend towards declining populations in rainforests, for example, reptiles that feed on amphibians and other reptiles. This trend requires close monitoring. The seasonality of rainforests affects the reproductive patterns of amphibians, and this in turn can directly affect the species of reptiles that feed on these groups, particularly species with specialized feeding, since these are less likely to use alternative resources. Soils Despite the growth of vegetation in a tropical rainforest, soil quality is often quite poor. Rapid bacterial decay prevents the accumulation of humus. The concentration of iron and aluminium oxides by the laterization process gives the oxisols a bright red colour and sometimes produces mineral deposits such as bauxite. Most trees have roots near the surface because there are insufficient nutrients below the surface; most of the trees' minerals come from the top layer of decomposing leaves and animals. On younger substrates, especially of volcanic origin, tropical soils may be quite fertile. If rainforest trees are cleared, rain can accumulate on the exposed soil surfaces, creating run-off, and beginning a process of soil erosion. Eventually, streams and rivers form and flooding becomes possible. There are several reasons for the poor soil quality. The first is that the soil is highly acidic. The roots of plants rely on an acidity difference between the roots and the soil in order to absorb nutrients. When the soil is acidic, there is little difference, and therefore little absorption of nutrients from the soil. Second, the type of clay particles present in tropical rainforest soil has a poor ability to trap nutrients and stop them from washing away. Even if humans artificially add nutrients to the soil, the nutrients mostly wash away and are not absorbed by the plants. Finally, these soils are poor because the high volume of rain in tropical rainforests washes nutrients out of the soil more quickly than in other climates. Effect on global climate A natural rainforest emits and absorbs vast quantities of carbon dioxide. On a global scale, long-term fluxes are approximately in balance, so that an undisturbed rainforest would have a small net impact on atmospheric carbon dioxide levels, though it may have other climatic effects (on cloud formation, for example, by recycling water vapour). No rainforest today can be considered to be undisturbed. Human-induced deforestation plays a significant role in causing rainforests to release carbon dioxide, as do other factors, whether human-induced or natural, which result in tree death, such as burning and drought. Some climate models operating with interactive vegetation predict a large loss of Amazonian rainforest around 2050 due to drought, forest dieback and the subsequent release of more carbon dioxide. 
Human uses Tropical rainforests provide timber as well as animal products such as meat and hides. Rainforests also have value as tourism destinations and for the ecosystem services provided. Many foods originally came from tropical forests, and are still mostly grown on plantations in regions that were formerly primary forest. Also, plant-derived medicines are commonly used for fever, fungal infections, burns, gastrointestinal problems, pain, respiratory problems, and wound treatment. At the same time, rainforests are usually not used sustainably by non-native peoples but are being exploited or removed for agricultural purposes. Native people On 18 January 2007, FUNAI also reported that it had confirmed the presence of 67 different uncontacted tribes in Brazil, up from 40 in 2005. With this addition, Brazil has now overtaken the island of New Guinea as the country having the largest number of uncontacted tribes. The province of Irian Jaya or West Papua on the island of New Guinea is home to an estimated 44 uncontacted tribal groups. The tribes are in danger because of the deforestation, especially in Brazil. The Central African rainforest is home to the Mbuti pygmies, one of the hunter-gatherer peoples living in equatorial rainforests characterised by their short height (below one and a half metres, or 59 inches, on average). They were the subject of a study by Colin Turnbull, The Forest People, in 1962. Pygmies who live in Southeast Asia are, amongst others, referred to as "Negrito". There are many tribes in the rainforests of the Malaysian state of Sarawak. Sarawak is part of Borneo, the third largest island in the world. Some of the other tribes in Sarawak are the Kayan, Kenyah, Kejaman, Kelabit, Punan Bah, Tanjong, Sekapan, and the Lahanan. Collectively, they are referred to as Dayaks or Orang Ulu, which means "people of the interior". About half of Sarawak's 1.5 million people are Dayaks. Most Dayaks, it is believed by anthropologists, came originally from the South-East Asian mainland. Their mythologies support this. Deforestation Tropical and temperate rainforests have been subjected to heavy legal and illegal logging for their valuable hardwoods and to agricultural clearance (slash-and-burn, clearcutting) throughout the 20th century, and the area covered by rainforests around the world is shrinking. Biologists have estimated that large numbers of species are being driven to extinction (possibly more than 50,000 a year; at that rate, says E. O. Wilson of Harvard University, a quarter or more of all species on Earth could be exterminated within 50 years) due to the removal of habitat with destruction of the rainforests. Another factor causing the loss of rainforest is expanding urban areas. Littoral rainforest growing along coastal areas of eastern Australia is now rare due to ribbon development to accommodate the demand for seachange lifestyles. Forests are being destroyed at a rapid pace. Almost 90% of West Africa's rainforest has been destroyed. Since the arrival of humans, Madagascar has lost two thirds of its original rainforest. At present rates, tropical rainforests in Indonesia would be logged out in 10 years and Papua New Guinea in 13 to 16 years. According to Rainforest Rescue, an important reason for the increasing deforestation rate, especially in Indonesia, is the expansion of oil palm plantations to meet growing demand for cheap vegetable fats and biofuels. 
In Indonesia, palm oil is already cultivated on nine million hectares and, together with Malaysia, the island nation produces about 85 percent of the world's palm oil. Several countries, notably Brazil, have declared their deforestation a national emergency. Amazon deforestation jumped by 69% in 2008 compared to 2007's twelve months, according to official government data. However, a 30 January 2009 New York Times article stated, "By one estimate, for every acre of rainforest cut down each year, more than 50 acres of new forest are growing in the tropics." The new forest includes secondary forest on former farmland and so-called degraded forest.
https://en.wikipedia.org/wiki/Polar%20climate
Polar climate
The polar climate regions are characterized by a lack of warm summers but with varying winters. Every month a polar climate has an average temperature of less than . Regions with a polar climate cover more than 20% of the Earth's area. Most of these regions are far from the equator and near the poles, and in this case, winter days are extremely short and summer days are extremely long (they could last for the entirety of each season or longer). A polar climate consists of cool summers and very cold winters (or, in the case of ice cap climates, no real summer at all), which results in treeless tundras, glaciers, or a permanent or semi-permanent layer of ice. It is identified with the letter E in the Köppen climate classification. Subtypes There are two types of polar climate: ET, or tundra climate; and EF, or ice cap climate. A tundra climate is characterized by having at least one month whose average temperature is above , while an ice cap climate has no months averaging above . In a tundra climate, even coniferous trees cannot grow, but other specialized plants such as the arctic poppy can grow. In an ice cap climate, no plants can grow, and ice gradually accumulates until it flows or slides elsewhere. Many high altitude locations on Earth have a climate where no month has an average temperature of or higher, but as this is due to elevation, this climate is referred to as Alpine climate. Alpine climate can mimic either tundra or ice cap climate. Locations On Earth, the only continent where the ice cap polar climate is predominant is Antarctica. All but a few isolated coastal areas on the island of Greenland also have the ice cap climate. Summits of many high mountains also have ice cap climate due to their high elevation. Coastal regions of Greenland that do not have permanent ice sheets have the less extreme tundra climates. The northernmost part of the Eurasian land mass, from the extreme northeastern coast of Scandinavia and eastwards to the Bering Strait, large areas of northern Siberia and northern Iceland have tundra climate as well. Large areas in northern Canada and northern Alaska have tundra climate, changing to ice cap climate in the most northern parts of Canada. Southernmost Argentina (Tierra del Fuego where it abuts the Drake Passage) and such subantarctic islands such as the South Shetland Islands and the Falkland Islands have tundra climates of slight temperature range in which no month is as warm as . These subantarctic lowlands are found closer to the equator than the coastal tundras of the Arctic basin. Summits of many mountains of Earth also have polar climates, due to their higher elevations. Arctic Some parts of the Arctic are covered by ice (sea ice, glacial ice, or snow) year-round, especially at the most poleward parts; and nearly all parts of the Arctic experience long periods with some form of ice or snow on the surface. Average January temperatures range from about , and winter temperatures can drop below over large parts of the Arctic. Average July temperatures range from about , with some land areas occasionally exceeding in summer. The Arctic consists of ocean that is almost surrounded by landmasses like Russia and Canada. As such, the climate of much of the Arctic is moderated by the ocean water, which can never have a temperature below . 
In winter, this relatively warm water, even though covered by the polar ice pack, keeps the North Pole from being the coldest place in the Northern Hemisphere, and it is also part of the reason that Antarctica is so much colder than the Arctic. In summer, the presence of the nearby water keeps coastal areas from warming as much as they might otherwise, just as it does in temperate regions with maritime climates. Antarctica The climate of Antarctica is the coldest on Earth. Antarctica has the lowest naturally occurring temperature ever recorded: at Vostok Station in 1983. It is also extremely dry (technically a desert, or so called polar desert), averaging of precipitation per year, as weather fronts rarely penetrate far into the continent. Mountains Summits of most mountains also have polar climates, despite being in lower latitudes, due to their high elevations. All mountains of the Rocky Mountains, Alps, and the Caucasus have tundra climate. Some mountains of the Andes, the Saint Elias Mountains, and most mountains of the Himalayas, the Karakoram, the Hindu Kush Range, Pamir Mountains, the Tian Shan Mountains, and the Alaska Range also have ice cap climates at extremely high elevations, in addition to tundra climates at relatively lower elevations. Only the summit of Mount Rainier has an ice cap climate in the Cascade Range. Quantifying polar climate There have been several attempts at quantifying what constitutes a polar climate. Climatologist Wladimir Köppen demonstrated a relationship between the Arctic and Antarctic tree lines and the summer isotherm; i.e., places where the average temperature in the warmest calendar month of the year is below the fixed threshold of cannot support forests. See Köppen climate classification for more information. Otto Nordenskjöld theorized that winter conditions also play a role: His formula is , where W is the average temperature in the warmest month and C the average of the coldest month, both in degrees Celsius. For example, if a particular location had an average temperature of in its coldest month, the warmest month would need to average or higher for trees to be able to survive there as . Nordenskiöld's line tends to run to the north of Köppen's near the west coasts of the Northern Hemisphere continents, south of it in the interior sections, and at about the same latitude along the east coasts of both Asia and North America. In the Southern Hemisphere, all of Tierra del Fuego lies outside the polar region in Nordenskiöld's system, but part of the island (including Ushuaia, Argentina) is reckoned as being within the Antarctic under Köppen's. In 1947, Holdridge improved on these schemes, by defining biotemperature: the mean annual temperature, where all temperatures below (and above ) are treated as 0 °C (because it makes no difference to plant life, being dormant). If the mean biotemperature is between , Holdridge quantifies the climate as subpolar (or alpine, if the low temperature is caused by elevation).
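The classification rules sketched in this section can be expressed compactly in code. The Python sketch below is a rough illustration under stated assumptions: the Köppen polar threshold and the tundra/ice-cap split are taken at the commonly cited 10 °C and 0 °C warmest-month values, Nordenskjöld's condition is written in its usual form W = 9 − 0.1C, and Holdridge's subpolar band is taken as a mean biotemperature of roughly 1.5–3 °C. None of these numbers appear explicitly in the text above, so treat them as assumptions, and the station data is invented purely for the example.

```python
# Sketch of the polar-climate criteria discussed above, using commonly cited
# threshold values (assumptions; the text leaves the numbers implicit).

def koppen_polar_subtype(monthly_means_c):
    """Return 'EF', 'ET', or None from 12 monthly mean temperatures (deg C)."""
    warmest = max(monthly_means_c)
    if warmest < 0.0:
        return "EF"          # ice cap: no month averages above freezing
    if warmest < 10.0:
        return "ET"          # tundra: warmest month between 0 and 10 C
    return None              # not a polar (E) climate

def nordenskjold_treeless(warmest_c, coldest_c):
    """Nordenskjold's condition in its usual form: treeless if W < 9 - 0.1*C."""
    return warmest_c < 9.0 - 0.1 * coldest_c

def holdridge_biotemperature(monthly_means_c):
    """Mean annual temperature, with values below 0 C or above 30 C treated as 0 C."""
    return sum(t if 0.0 <= t <= 30.0 else 0.0 for t in monthly_means_c) / len(monthly_means_c)

if __name__ == "__main__":
    # Hypothetical high-latitude station (illustrative numbers only).
    station = [-25, -24, -20, -12, -3, 4, 8, 6, 1, -8, -18, -23]
    bio = holdridge_biotemperature(station)
    print(koppen_polar_subtype(station))                      # -> 'ET'
    print(nordenskjold_treeless(max(station), min(station)))  # -> True (treeless)
    print(round(bio, 2), 1.5 <= bio < 3.0)                    # biotemperature, subpolar band (assumed)
```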
https://en.wikipedia.org/wiki/Subarctic%20climate
Subarctic climate
The subarctic climate (also called subpolar climate, or boreal climate) is a continental climate with long, cold (often very cold) winters, and short, warm to cool summers. It is found on large landmasses, often away from the moderating effects of an ocean, generally at latitudes from 50°N to 70°N, poleward of the humid continental climates. Like other Class D climates, they are rare in the Southern Hemisphere, only found at some isolated highland elevations. Subarctic or boreal climates are the source regions for the cold air that affects temperate latitudes to the south in winter. These climates represent Köppen climate classification Dfc, Dwc, Dsc, Dfd, Dwd and Dsd. Description This type of climate offers some of the most extreme seasonal temperature variations found on the planet: in winter, temperatures can drop to below and in summer, the temperature may exceed . However, the summers are short; no more than three months of the year (but at least one month) must have a 24-hour average temperature of at least to fall into this category of climate, and the coldest month should average below (or ). Record low temperatures can approach . With 5–7 consecutive months when the average temperature is below freezing, all moisture in the soil and subsoil freezes solidly to depths of many feet. Summer warmth is insufficient to thaw more than a few surface feet, so permafrost prevails under most areas not near the southern boundary of this climate zone. Seasonal thaw penetrates from , depending on latitude, aspect, and type of ground. Some northern areas with subarctic climates located near oceans (southern Alaska, northern Norway, Sakhalin Oblast and Kamchatka Oblast), have milder winters and no permafrost, and are more suited for farming unless precipitation is excessive. The frost-free season is very short, varying from about 45 to 100 days at most, and a freeze can occur anytime outside the summer months in many areas. Description The first D indicates continentality, with the coldest month below (or ). The second letter denotes precipitation patterns: s: A dry summer—the driest month in the high-sun half of the year (April to September in the Northern Hemisphere, October to March in the Southern Hemisphere) has less than / of rainfall and has exactly or less than the precipitation of the wettest month in the low-sun half of the year (October to March in the Northern Hemisphere, April to September in the Southern Hemisphere), w: A dry winter—the driest month in the low-sun half of the year has exactly or less than one‑tenth of the precipitation found in the wettest month in the summer half of the year, f: No dry season—does not meet either of the alternative specifications above; precipitation and humidity are often high year-round. The third letter denotes temperature: c: Regular subarctic, only one–three months above , coldest month between (or ) and . d: Severely cold subarctic, only one–three months above , coldest month at or below . Precipitation Most subarctic climates have little precipitation, typically no more than over an entire year due to the low temperatures and evapotranspiration. Away from the coasts, precipitation occurs mostly in the summer months, while in coastal areas with subarctic climates the heaviest precipitation is usually during the autumn months when the relative warmth of sea vis-à-vis land is greatest. 
Low precipitation, by the standards of more temperate regions with longer summers and warmer winters, is typically sufficient in view of the very low evapotranspiration to allow a water-logged terrain in many areas of subarctic climate and to permit snow cover during winter, which is generally persistent for an extended period. A notable exception to this pattern is that subarctic climates occurring at high elevations in otherwise temperate regions have extremely high precipitation due to orographic lift. Mount Washington, with temperatures typical of a subarctic climate, receives an average rain-equivalent of of precipitation per year. Coastal areas of Khabarovsk Krai also have much higher precipitation in summer due to orographic influences (up to in July in some areas), whilst the mountainous Kamchatka peninsula and Sakhalin island are even wetter, since orographic moisture isn't confined to the warmer months and creates large glaciers in Kamchatka. Labrador, in eastern Canada, is similarly wet throughout the year due to the semi-permanent Icelandic Low and can receive up to of rainfall equivalent per year, creating a snow cover of up to that does not melt until June. Vegetation and land use Vegetation in regions with subarctic climates is generally of low diversity, as only hardy tree species can survive the long winters and make use of the short summers. Trees are mostly limited to conifers, as few broadleaved trees are able to survive the very low temperatures in winter. This type of forest is also known as taiga, a term which is sometimes applied to the climate found therein as well. Even though the diversity may be low, the area and numbers are high, and the taiga (boreal) forest is the largest forest biome on the planet, with most of the forests located in Russia and Canada. The process by which plants become acclimated to cold temperatures is called hardening. Agricultural potential is generally poor, due to the natural infertility of soils and the prevalence of swamps and lakes left by departing ice sheets, and short growing seasons prohibit all but the hardiest of crops. Despite the short season, the long summer days at such latitudes do permit some agriculture. In some areas, ice has scoured rock surfaces bare, entirely stripping off the overburden. Elsewhere, rock basins have been formed and stream courses dammed, creating countless lakes. Neighboring regions Should one go northward or even toward a polar sea, one finds that the warmest month has an average temperature of less than , and the subarctic climate grades into a tundra climate not at all suitable for trees. Southward, this climate grades into the humid continental climates with longer summers (and usually less-severe winters) allowing broadleaf trees; in a few locations close to a temperate sea (as in northern Norway and southern Alaska), this climate can grade into a short-summer version of an oceanic climate, the subpolar oceanic climate, as the sea is approached where winter temperatures average near or above freezing despite maintaining the short, cool summers. In China and Mongolia, as one moves southwestwards or towards lower elevations, temperatures increase but precipitation is so low that the subarctic climate grades into a cold semi-arid climate. 
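Before turning to where these climates occur, the letter scheme described under Description above is essentially a small decision procedure, and a sketch helps make it concrete. In the Python sketch below, the one-tenth rule for dry winters comes from the text, while the 30 mm and one-third dry-summer cut-offs, the 10 °C "summer month" threshold and the −38 °C coldest-month boundary between c and d are the commonly cited Köppen figures, left blank in the text and therefore assumptions here; a full Köppen classification involves further checks that are omitted.

```python
# Sketch of the second- and third-letter tests for subarctic (D..c / D..d)
# climates described above. The one-tenth winter rule is from the text; the
# 30 mm, one-third, 10 C and -38 C thresholds are assumed standard values.

def seasonal_letters(monthly_temp_c, monthly_precip_mm, southern_hemisphere=False):
    """Return (precip_letter, temp_letter) for a 12-month record (Jan..Dec)."""
    # High-sun (summer) half: Apr-Sep in the north, Oct-Mar in the south.
    summer_idx = list(range(3, 9)) if not southern_hemisphere else list(range(9, 12)) + list(range(0, 3))
    winter_idx = [m for m in range(12) if m not in summer_idx]

    summer_p = [monthly_precip_mm[m] for m in summer_idx]
    winter_p = [monthly_precip_mm[m] for m in winter_idx]

    if min(summer_p) < 30.0 and min(summer_p) <= max(winter_p) / 3.0:
        precip_letter = "s"                     # dry summer
    elif min(winter_p) <= max(summer_p) / 10.0:
        precip_letter = "w"                     # dry winter
    else:
        precip_letter = "f"                     # no dry season

    warm_months = sum(1 for t in monthly_temp_c if t >= 10.0)
    coldest = min(monthly_temp_c)
    if 1 <= warm_months <= 3:
        temp_letter = "d" if coldest <= -38.0 else "c"
    else:
        temp_letter = None                      # not a subarctic third letter
    return precip_letter, temp_letter

if __name__ == "__main__":
    # Hypothetical strongly continental, dry-winter station (illustrative numbers only).
    temps = [-42, -38, -25, -10, 2, 12, 15, 11, 4, -12, -30, -40]
    precip = [3, 3, 4, 9, 15, 35, 50, 45, 25, 10, 5, 4]
    print(seasonal_letters(temps, precip))      # -> ('w', 'd')
```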
Distribution Dfc and Dfd distribution The Dfc climate, by far the most common subarctic type, is found in the following areas:
Northern Eurasia
- The majority of Siberia (notable cities: Yakutsk, Surgut, Norilsk, Magadan)
- The Kamchatka Peninsula and the northern and central parts of the Kuril Islands and Sakhalin Island (notable cities: Petropavlovsk-Kamchatsky)
- The northern half of Fennoscandia and European Russia (milder winters in coastal areas) and higher elevations further south (notable cities: Oulu, Umeå, Tromsø, Murmansk, Arkhangelsk)
- The Western Alps between , and the Eastern Alps between
- Central Romania
- Some parts of central Germany and Poland
- The Tatra Mountains in Poland and Slovakia, above .
- The Pyrenees, between
- The Northeastern Anatolia Region and the Pontic Alps, between
- Mountain summits in Scotland, most notably in the Cairngorms and the Nevis Range
- The far northeast of Turkey
Further north and east in Siberia, continentality increases so much that winters can be exceptionally severe, averaging below , even though the hottest month still averages more than . This creates Dfd climates, which are mostly found in the Sakha Republic:
- Northeast Siberian taiga
- Central Yakutian Lowland
- Oymyakon
- Verkhoyansk
North America
- Most of Interior, Western and Southcentral Alaska (notable cities and towns: Anchorage, Wasilla, Nome, Fort Yukon)
- The high Rocky Mountains in Colorado, Wyoming, Idaho, Utah, Montana and the White Mountains of New Hampshire (notable cities: Fraser, Brian Head)
- Much of Canada from about 53–55°N to the tree line, including:
  - Southern Labrador (notable cities: Labrador City)
  - Certain areas within the Newfoundland interior and along its northern coast
  - Quebec: Jamésie, Côte-Nord and far southern Nunavik
  - Far northern Ontario
  - The northern Prairie Provinces
  - Near, but not including, the city of Edmonton
  - The Rocky Mountain Foothills in Alberta and British Columbia
  - Most of the Yukon (notable cities: Whitehorse, Dawson City)
  - Most of the Northwest Territories (notable cities: Yellowknife, Inuvik)
  - Southwestern Nunavut
In the Southern Hemisphere, the Dfc climate is found only in small, isolated pockets in the Snowy Mountains of Australia, the Southern Alps of New Zealand, and the Lesotho Highlands. In South America, this climate occurs on the western slope of the central Andes in Chile and Argentina, where climatic conditions are notably more humid compared to the eastern slope. The presence of the Andes mountain range contributes to a wetter climate on the western slope by capturing moisture from the Pacific Ocean, resulting in increased precipitation, especially during the winter months. This climate zone supports the presence of temperate rainforests, mostly on highest areas of the Valdivian rainforest in Chile and the subantarctic forest in Argentina. Dsc and Dsd distribution Climates classified as Dsc or Dsd, with a dry summer, are rare, occurring in very small areas at high elevation around the Mediterranean Basin, Iran, Kyrgyzstan, Tajikistan, Alaska and other parts of the northwestern United States (Eastern Washington, Eastern Oregon, Southern Idaho, California's Eastern Sierra), the Russian Far East, Akureyri, Iceland, Seneca, Oregon, and Atlin, British Columbia. Turkey and Afghanistan are exceptions; Dsc climates are common in Northeast Anatolia, in the Taurus and Köroğlu Mountains, and the Central Afghan highlands. 
In the Southern Hemisphere, the Dsc climate is present in South America as a subarctic climate influenced by Mediterranean characteristics, often considered a high-altitude variant of the Mediterranean climate. It is located on the eastern slopes of the central Argentine Andes and in some sections on the Chilean side. While there are no major settlements exhibiting this climate, several localities in the vicinity experience it, such as San Carlos de Bariloche, Villa La Angostura, San Martín de los Andes, Balmaceda, Punta de Vacas, and Termas del Flaco. Dwc and Dwd distribution Climates classified as Dwc or Dwd, with a dry winter, are found in parts of East Asia, like China, where the Siberian High makes the winters colder than places like Scandinavia or Alaska interior but extremely dry (typically with around of rainfall equivalent per month), meaning that winter snow cover is very limited. The Dwc climate can be found in:
- Much of northern Mongolia
- Russia:
  - Most of Khabarovsk Krai except the south
  - Southeastern Sakha Republic
  - Southern Magadan Oblast
  - Northern Amur Oblast
  - Northern Buryatia
  - Zabaykalsky Krai
  - Irkutsk Oblast
- China:
  - Tahe County and Mohe County in Heilongjiang
  - Northern Hulunbuir in Inner Mongolia
  - Gannan in Gansu (due to extreme elevation)
  - Huangnan, eastern Hainan and eastern Guoluo in Qinghai (due to extreme elevation)
  - Most of Garzê and Ngawa Autonomous Prefectures (due to extreme elevation) in Sichuan
  - Most of Qamdo Prefecture (due to extreme elevation) in the Tibet Autonomous Region
- Parts of Ladakh (including Siachen Glacier) and Spiti regions of India
- Middle reaches of the Himalayas in Nepal, Bhutan, Myanmar, and Northeast India
- Parts of Kaema Plateau (including Mount Baekdu, Samjiyon, Musan) in North Korea
- Southeast Fairbanks Census Area in Alaska
In the Southern Hemisphere, small pockets of the Lesotho Highlands and the Drakensberg Mountains have a Dwc classification. Charts of selected sites
https://en.wikipedia.org/wiki/Temperate%20climate
Temperate climate
In geography, the temperate climates of Earth occur in the middle latitudes (approximately 23.5° to 66.5° N/S of Equator), which span between the tropics and the polar regions of Earth. These zones generally have wider temperature ranges throughout the year and more distinct seasonal changes compared to tropical climates, where such variations are often small; they usually differ only in the amount of precipitation. In temperate climates, not only do latitudinal positions influence temperature changes, but various sea currents, prevailing wind direction, continentality (how large a landmass is) and altitude also shape temperate climates. The Köppen climate classification defines a climate as "temperate" C, when the mean temperature is above but below in the coldest month to account for the persistence of frost. However, some adaptations of Köppen set the minimum at . Continental climates are classified as D and considered to be varieties of temperate climates, having more extreme temperatures, with mean temperatures in the coldest month usually being below . Zones and climates The north temperate zone extends from the Tropic of Cancer (approximately 23.5° north latitude) to the Arctic Circle (approximately 66.5° north latitude). The south temperate zone extends from the Tropic of Capricorn (approximately 23.5° south latitude) to the Antarctic Circle (at approximately 66.5° south latitude). In some climate classifications, the temperate zone may be divided into several smaller climate zones, based on monthly temperatures, the coldest month, and rainfall. These can include the subtropical zone (humid subtropical and Mediterranean climate), and the cool temperate zone (oceanic and continental climates). Subtropical zone These climates are typically found in the more equatorial regions of the temperate zone, between 23.5° and 35° north or south. They are influenced more by the tropics than by other temperate climate types, usually experiencing warmer temperatures throughout the year, with longer, hotter summers and shorter, milder winters. Freezing precipitation is uncommon in this part of the temperate zone. Humid subtropical (Cfa) and monsoon subtropical (Cwa) climates Humid subtropical climates generally have long, hot and humid summers with frequent convective showers in summer, and a peak seasonal rainfall in the hottest months. Winters are normally mild and above freezing in the humid subtropics. Warm ocean currents are usually found in coastal areas with humid subtropical climates. This type of climate is normally located along leeward lower east coasts of continents such as in southeast and central Argentina, Uruguay and south of Brazil, Northern Vietnam, the southeast portions of East Asia, southern and portions of the northeast and midwestern United States and portions of, South Africa, Ethiopia, and eastern Australia. In some areas with a humid subtropical climate (most notably southeast China and North India), there is an even sharper wet-dry season, called a monsoon subtropical climate or subtropical monsoon (Cwa). In these regions, winters are quite chilly and dry and summers have very heavy rainfall. Some Cwa areas in southern China report more than 80% of annual precipitation in the five warmest months (southwest monsoon). Mediterranean climates (Csa, Csb) Mediterranean climates have the opposite rainfall pattern to dry-winter climates, with a dry summer and wet winter. 
This climate occurs mostly at the western edges and coasts of the continents and are bounded by arid deserts on their equatorward sides that brings dry winds causing the dry season of summer, and oceanic climates to the poleward sides that are influenced by cool ocean currents and air masses that bring the rainfall of winter. The five main Mediterranean regions of the world are the Mediterranean basin in North Africa, Southern Europe, and West Asia, coastal California in the United States, the South and West states of Australia, the Western Cape of South Africa and the south and southwestern coast of Chile. Subtropical highland climates (Cwb, Cfb) Subtropical highland climates are climate variants often grouped together with oceanic climates found in some mountainous areas of either the tropics or subtropics. They have characteristically mild temperatures year-round, featuring the four seasons in the subtropics and no marked seasons in the tropics, the latter usually remaining mild to cool through most of the year. Subtropical highland climates under the Cfb classification usually have rainfall spread relatively evenly in all months of the year similar to most oceanic climates while climates under the Cwb classification have significant monsoon influence, usually having dry winters and wet summers. Middle latitude zone These climates occur in the middle latitudes, between approximately 35° and 66.5° north and south of the equator. There is an equal climatic influence from both the polar and tropical zones in this climate region. Two types of climates are in this zone, a milder oceanic one and more severe seasonal continental one. Most prototypical temperate climates have a distinct four-season pattern, especially in the continental climate sector. Oceanic climates (Cfb) Oceanic climates are created by the on-shore flow from the cool high latitude oceans to their west. This causes the climate to have mild summers and cool (but not cold) winters, and relative humidity and precipitation evenly distributed throughout the year. These climates are frequently cloudy and cool, and winters are milder than those in the continental climate. Regions with oceanic climates include northwestern Europe, northwestern North America, southeastern and southwestern South America, southeastern Australia and most of New Zealand. Humid continental climates (Dfa, Dfb, Dwa, Dwb, Dsa, Dsb) Humid continental climates are considered as a variety of temperate climates due to lying in the temperate zones, although they are classified separately from other temperate climates in the Köppen climate classification. In contrast to oceanic climates, they are created by large land masses and seasonal changes in wind direction. This causes humid continental climates to have severe temperatures for the season compared to other temperate climates, meaning a hot summer and cold winter. Precipitation may be evenly distributed throughout the year, while in some locations there is a summer accent on rainfall. Regions with humid continental climates include southeastern Canada, the upper portions of the eastern United States, portions of eastern Europe, parts of China, Japan and the Korean Peninsula. Subpolar zone These are temperate climates that compared to the subtropics are on the poleward edge of the temperate zone. Therefore, they still have four marked seasons including a warmer one, but are far more influenced by the polar zones than any other but the very polar climates (tundra and ice cap climate). 
Subpolar oceanic and cold subtropical highland climates (Cfc, Cwc) Areas with subpolar oceanic climates feature an oceanic climate but are usually located closer to the polar regions. As a result of their location, these regions tend to be on the cool end of oceanic climates. Snowfall tends to be more common here than in other oceanic climates. Subpolar oceanic climates are less prone to temperature extremes than subarctic or continental climates, featuring milder winters than these climates but still with similar summers. This variant of an oceanic climate is found in parts of coastal Iceland, the Faroe Islands, parts of Scotland, northwestern coastal areas of Norway such as Lofoten (reaching to 70° north on some islands), uplands near the coast of southwestern Norway, the Aleutian Islands of Alaska and northern parts of the Alaskan Panhandle, some parts of southern Argentina and Chile (though most regions there are still classified as continental subantarctic), and a few highland areas of Tasmania, the Australian Alps and the Southern Alps of New Zealand. This type of climate is even found in tropical areas such as the Papuan Highlands in Indonesia. Cfc is the categorization for this regime. Even in the middle of summer, temperatures exceeding 20 °C (68 °F) are exceptional weather events in the most maritime of the locations with this regime. In some parts of this climate, temperatures as high as 30 °C (86 °F) have been recorded on rare occasions, and unusually low temperatures have likewise been recorded on rare occasions. A cold variant of the monsoon-influenced subtropical highland climate, similar to subpolar oceanic climates, occurs in small areas of the Chinese provinces of Sichuan and Yunnan and in parts of the Altiplano between Bolivia, Peru and Chile, where summers are sufficiently short to be classified Cwc, with fewer than four months having mean temperatures over 10 °C (50 °F), owing to the high altitudes at these locations. El Alto, Bolivia, is one of the few confirmed towns that features this variation of the subtropical highland climate. Cold-summer Mediterranean climates (Csc) Cold-summer Mediterranean climates (Csc) are present in high-elevation areas around coastal Csb climate areas, where the strong maritime influence prevents the average winter monthly temperature from dropping below 0 °C. Despite the maritime influence, they are classified alongside other Mediterranean climates in the Köppen classification, rather than with oceanic climates like subtropical highland climates, because of the opposite rainfall pattern. This climate is rare and is predominantly found in climate fringes and isolated areas of the Cascades and Andes Mountains, as the dry-summer climate extends further poleward in the Americas than elsewhere. Human aspects Demography, fauna and flora The vast majority of the world's human population resides in temperate zones, especially in the Northern Hemisphere, due to its greater mass of land and lack of extreme temperatures. The largest number of described taxa in a temperate region is found in southern Africa, where some 24,000 taxa (species and infraspecific taxa) have been described. Agriculture Farming is practiced on a large scale in the temperate regions (except for boreal/subarctic regions) because of the plentiful rainfall and warm summers. Since most agricultural activity occurs in the spring and summer, cold winters have only a small effect on agricultural production. Extreme winters or summers, which are less common, have a large impact on agricultural productivity.
Urbanization Temperate regions hold the majority of the world's population, which leads to large cities. Several factors explain why the climate of large urban landscapes differs from that of rural areas. One factor is that buildings and asphalt absorb more solar radiation than natural land cover. Another large factor is the burning of fossil fuels by buildings and vehicles. These factors have made the average climate of cities warmer than that of the surrounding areas.
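A minimal illustrative sketch (not part of the original article) of how the coldest-month thresholds described above could be turned into a rough temperature-only classifier. The function name, parameter names and example values are assumptions for illustration; the full Köppen scheme also applies precipitation rules (for arid B climates and the s/w/f subtypes) that are omitted here.

```python
# Rough temperature-only Köppen group classifier, assuming the -3 °C boundary
# discussed above; the 0 °C variant used by some adaptations is a parameter.
def koppen_temperature_group(coldest_month_c, warmest_month_c, cold_boundary=-3.0):
    """Return 'A', 'C', 'D' or 'E' from monthly mean temperatures only.

    Precipitation rules (arid B climates, s/w/f subtypes) are ignored,
    so this is only a partial, illustrative classifier.
    """
    if warmest_month_c < 10.0:            # no real summer: polar (E) climates
        return "E"
    if coldest_month_c >= 18.0:           # no cool season: tropical (A) climates
        return "A"
    if coldest_month_c > cold_boundary:   # mild winter: temperate (C) climates
        return "C"
    return "D"                            # severe winter: continental (D) climates

# Hypothetical examples: a mild oceanic station vs. a continental one.
print(koppen_temperature_group(4.0, 17.0))    # -> C
print(koppen_temperature_group(-10.0, 21.0))  # -> D
```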
Physical sciences
Climatology
null
50111
https://en.wikipedia.org/wiki/Zebu
Zebu
The zebu (Bos indicus), sometimes known in the plural as indicine cattle, camel cows or humped cattle, is a species or subspecies of domestic cattle originating in South Asia. Zebu, like many Sanga cattle breeds, differ from taurine cattle in having a fatty hump on their shoulders, a large dewlap, and sometimes drooping ears. They are well adapted to withstanding high temperatures and are farmed throughout the tropics. Zebu are used as draught and riding animals, dairy cattle and beef cattle, as well as for byproducts such as hides and dung for fuel and manure. Some small breeds, such as the Nadudana (also known as the miniature zebu), are also kept as pets. In some regions, zebu have significant religious meaning. Taxonomy Both scientific names, Bos taurus and Bos indicus, were introduced by Carl Linnaeus in 1758, with the latter used to describe humped cattle in China. The zebu was classified as a distinct species by Juliet Clutton-Brock in 1999, but as a subspecies of the domestic cattle, Bos taurus indicus, by both Clutton-Brock and Colin Groves in 2004 and by Peter Grubb in 2005. In 2011, Groves and Grubb classified it as a distinct species again. The American Society of Mammalogists considers it part of the species Bos taurus, in analogy to Sanga cattle (Bos taurus africanus). The International Commission on Zoological Nomenclature has not yet published a ruling on the classification of domestic derivatives, and no scientific body advocates the abolition of the biological species concept for domestic animals. As of 2024, zebu are accordingly described under the 2011 classification as Bos indicus rather than Bos taurus indicus, being treated as a species distinct from Bos taurus. The extinct wild aurochs (Bos primigenius) population diverged into two distinct genetic strains: the humpless Bos taurus (taurine) and the humped Bos indicus (indicine or zebu). Origin Zebu cattle were found to derive from the Indian form of the aurochs and were first domesticated between 7,000 and 6,000 YBP at Mehrgarh, in present-day Pakistan, by people linked to or coming from Mesopotamia. Their wild ancestor, the Indian aurochs, became extinct during the Indus Valley civilisation, likely due to habitat loss caused by expanding pastoralism and to interbreeding with domestic zebu. Its most recent known remains were dated to 3,800 YBP, making it the first of the three aurochs subspecies to die out. Archaeological evidence, including depictions on pottery and rocks, suggests that humped cattle, likely imported from the Near East, were present in Egypt around 4,000 YBP. Their first appearance in sub-Saharan Africa is dated to after 700 AD, and they were introduced to the Horn of Africa around 1000. Phylogenetic analysis revealed that all the zebu Y chromosome haplotype groups are found in three different lineages: Y3A, the most predominant and cosmopolitan lineage; Y3B, only observed in West Africa; and Y3C, predominant in south and northeast India. Characteristics Zebu, as well as many Sanga cattle, have humps on the shoulders, large dewlaps and droopy ears. Compared to taurine cattle, the zebu is well adapted to hot tropical savanna and steppe environments. These adaptations result in higher tolerance for drought, heat and sunlight exposure. Behaviour and ecology Studies on the natural weaning of zebu cattle have shown that cows wean their calves over a 2-week period, but after that continue to show strong affiliatory behavior with their offspring, preferentially choosing them for grooming and as grazing partners for at least 4–5 years.
Reproduction Zebu are generally mature enough to give birth when they are 29 months old. This reflects the time their bodies need to develop enough to withstand the strain of carrying a calf and of lactation. Early reproduction can place too much stress on the body and possibly shorten lifespans. The gestation period averages 285 days, but varies depending on the age and nutrition of the mother. The sex of the calf may also affect the carrying time, as male calves are carried for a longer period than females. Location, breed, body weight, and season affect the overall health of the animal and in turn may also affect the gestation period. Health and diseases The zebu is susceptible to nagana, as it does not exhibit trypanotolerance. It is said to be resilient to parasites. Breeds and hybrids Zebu are very common in much of Asia, including Pakistan, India, Nepal, Bangladesh and China. In Asia, taurine cattle are mainly found in the northern regions such as Japan, Korea, northern China and Mongolia. In China, taurine cattle are most common in northern breeds and zebu more common in southern breeds, with hybrids in between. Geneticists at the International Livestock Research Institute (ILRI) in Nairobi, Kenya, and in Addis Ababa, Ethiopia, discovered that cattle had been domesticated in Africa independently of domestication in the Near East. They concluded that the southern African cattle populations derive originally from East Africa rather than from a southbound migration of taurine cattle. The results are inconclusive as to whether domestication occurred first in Africa or the Near East. Sanga cattle breeds are considered to have originated from hybridization of zebu with taurine cattle, leading to the Afrikaner, Red Fulani, Ankole, Boran and many other breeds. Some 75 breeds of zebu are known, split about evenly between African and Indian breeds. Other breeds of zebu are quite local, like the Hariana from Haryana, Punjab, or the Rath from Alwar district, Rajasthan. Zebu, which are adapted to high temperatures, were imported into Brazil in the early 20th century. Their importation marked a change in cattle ranching in Brazil, as they were considered "ecological" since their meat was lean and free of chemical residues. In the early 20th century in Brazil, zebu were crossbred with Charolais cattle, a European taurine breed. The resulting breed, 63% Charolais and 37% zebu, is called the Canchim. It has a better meat quality than the zebu and better heat resistance than European cattle. The zebu breeds used were primarily Indo-Brazilian, with some Nelore and Guzerat. Another Charolais crossbreed, with Brahmans, is called the Australian Charbray and is recognised as a breed in some countries. From the 1960s onwards, the Nelore, which is derived from the Ongole cattle breed, became the primary breed of cattle in Brazil because of its hardiness and heat resistance, and because it thrives on poor-quality forage and breeds easily, with the calves rarely requiring human intervention to survive. Currently more than 80% of beef cattle in Brazil (approximately 167,000,000 animals) are either purebred or hybrid Ongole cattle, a breed that originated in the Ongole region of Andhra Pradesh. Uses Zebu are used as draught and riding animals, beef cattle and dairy cattle, as well as for byproducts such as hides, dung for fuel and manure, and horn for knife handles and the like. Zebu, mostly miniature zebu, are kept as pets. In India, the number of draft cattle in 1998 was estimated at 65.7 million head. Zebu cows commonly have low milk production.
They do not produce milk until they mature, which happens relatively late in their lives, and even then they do not produce much. When zebus are crossed with taurine cattle, milk production generally increases. In Madagascar, zebu outnumber people, and there are an "astonishing" 6,813 Malagasy proverbs, common sayings, and expressions referring to zebu in everyday speech on the island. Zebu are wrestled by young men in a competitive ritual of courtship called tolon'omby. Within the Indian state of Tamil Nadu, zebu are used for jallikattu. In 1999, researchers at Texas A&M University successfully cloned a zebu. Hindu tradition Zebu are venerated in the Hinduism of India. In the historical Vedic religion they were a symbol of plenty. In later times they gradually acquired their present status. According to the Mahabharata, they are to be treated with the same respect 'as one's mother'. In the middle of the first millennium, the consumption of beef began to be disfavoured by lawgivers. Milk and milk products were used in Vedic rituals. In the post-Vedic period, products such as milk, curd and ghee, but also cow dung and urine (gomutra), or the combination of these five (panchagavya), began to assume an increasingly important role in ritual purification and expiation.
Biology and health sciences
Bovidae
Animals
50185
https://en.wikipedia.org/wiki/Jellyfish
Jellyfish
Jellyfish, also known as sea jellies, are the medusa-phase of certain gelatinous members of the subphylum Medusozoa, which is a major part of the phylum Cnidaria. Jellyfish are mainly free-swimming marine animals, although a few are anchored to the seabed by stalks rather than being motile. They consist of an umbrella-shaped main body made of mesoglea, known as the bell, and a collection of trailing tentacles on the underside. Via pulsating contractions, the bell can provide propulsion for locomotion through open water. The tentacles are armed with stinging cells and may be used to capture prey or to defend against predators. Jellyfish have a complex life cycle, and the medusa is normally the sexual phase, which produces planula larvae. These then disperse widely and enter a sedentary polyp phase which may include asexual budding before reaching sexual maturity. Jellyfish are found all over the world, from surface waters to the deep sea. Scyphozoans (the "true jellyfish") are exclusively marine, but some hydrozoans with a similar appearance live in fresh water. Large, often colorful, jellyfish are common in coastal zones worldwide. The medusae of most species are fast-growing, and mature within a few months and then die soon after breeding, but the polyp stage, attached to the seabed, may be much more long-lived. Jellyfish have been in existence for at least 500 million years, and possibly 700 million years or more, making them the oldest multi-organ animal group. Jellyfish are eaten by humans in certain cultures. They are considered a delicacy in some Asian countries, where species in the Rhizostomeae order are pressed and salted to remove excess water. Australian researchers have described them as a "perfect food": sustainable and protein-rich but relatively low in food energy. They are also used in cell and molecular biology research, especially the green fluorescent protein used by some species for bioluminescence. This protein has been adapted as a fluorescent reporter for inserted genes and has had a large impact on fluorescence microscopy. The stinging cells used by jellyfish to subdue their prey can injure humans. Thousands of swimmers worldwide are stung every year, with effects ranging from mild discomfort to serious injury or even death. When conditions are favourable, jellyfish can form vast swarms, which may damage fishing gear by filling fishing nets, and sometimes clog the cooling systems of power and desalination plants which draw their water from the sea. Names The name jellyfish, in use since 1796, has traditionally been applied to medusae and all similar animals including the comb jellies (ctenophores, another phylum). The term jellies or sea jellies is more recent, having been introduced by public aquaria in an effort to avoid use of the word "fish" with its modern connotation of an animal with a backbone, though shellfish, cuttlefish and starfish are not vertebrates either. In scientific literature, "jelly" and "jellyfish" have been used interchangeably. Many sources refer to only scyphozoans as "true jellyfish". A group of jellyfish is called a "smack" or a "smuck". Mapping to taxonomic groups Phylogeny Definition The term jellyfish broadly corresponds to medusae, that is, a life-cycle stage in the Medusozoa. The American evolutionary biologist Paulyn Cartwright and the Merriam-Webster dictionary both give general definitions, but given that jellyfish is a common name, its mapping to biological groups is inexact.
Some authorities have called the comb jellies and certain salps jellyfish, though other authorities state that neither of these are jellyfish, which they consider should be limited to certain groups within the Medusozoa. The non-medusozoan clades called jellyfish by some but not all authorities (both agreeing and disagreeing citations are given in each case) are indicated with "???" on the accompanying cladogram of the animal kingdom. Medusozoan jellyfish Jellyfish are not a clade, as they include most of the Medusozoa, barring some of the Hydrozoa. The medusozoan groups included by authorities are indicated on the accompanying phylogenetic tree by the presence of citations. Names of included jellyfish, in English where possible, are shown in boldface; the presence of a named and cited example indicates that at least that species within its group has been called a jellyfish. Taxonomy The subphylum Medusozoa includes all cnidarians with a medusa stage in their life cycle. The basic cycle is egg, planula larva, polyp, medusa, with the medusa being the sexual stage. The polyp stage is sometimes secondarily lost. The subphylum includes the major taxa Scyphozoa (large jellyfish), Cubozoa (box jellyfish) and Hydrozoa (small jellyfish), and excludes Anthozoa (corals and sea anemones). This suggests that the medusa form evolved after the polyps. Medusozoans have tetramerous symmetry, with parts in fours or multiples of four. The four major classes of medusozoan Cnidaria are: Scyphozoa are sometimes called true jellyfish, though they are no more truly jellyfish than the others listed here. They have tetra-radial symmetry. Most have tentacles around the outer margin of the bowl-shaped bell, and long, oral arms around the mouth in the center of the subumbrella. Cubozoa (box jellyfish) have a (rounded) box-shaped bell, and their velarium assists them to swim more quickly. Box jellyfish may be related more closely to scyphozoan jellyfish than either group is to the Hydrozoa. Hydrozoa medusae also have tetra-radial symmetry, nearly always have a velum (diaphragm used in swimming) attached just inside the bell margin, do not have oral arms but rather a much smaller central stalk-like structure, the manubrium, with a terminal mouth opening, and are distinguished by the absence of cells in the mesoglea. Hydrozoa show great diversity of lifestyle; some species maintain the polyp form for their entire life and do not form medusae at all (such as Hydra, which is hence not considered a jellyfish), and a few are entirely medusal and have no polyp form. Staurozoa (stalked jellyfish) are characterized by a medusa form that is generally sessile, oriented upside down and with a stalk emerging from the apex of the "calyx" (bell), which attaches to the substrate. At least some Staurozoa also have a polyp form that alternates with the medusoid portion of the life cycle. Until recently, Staurozoa were classified within the Scyphozoa. There are over 200 species of Scyphozoa, about 50 species of Staurozoa, about 50 species of Cubozoa, and the Hydrozoa includes about 1000–1500 species that produce medusae, but many more species that do not. Fossil history Since jellyfish have no hard parts, fossils are rare. The oldest unambiguous fossil of a free-swimming medusa is Burgessomedusa from the mid-Cambrian Burgess Shale of Canada, which is likely either a stem group of box jellyfish (Cubozoa) or Acraspeda (the clade including Staurozoa, Cubozoa, and Scyphozoa).
Other claimed records from the Cambrian of China and Utah in the United States are uncertain, and possibly represent ctenophores instead. Anatomy The main feature of a true jellyfish is the umbrella-shaped bell. This is a hollow structure consisting of a mass of transparent jelly-like matter known as mesoglea, which forms the hydrostatic skeleton of the animal. The mesoglea is 95% or more composed of water, and also contains collagen and other fibrous proteins, as well as wandering amebocytes that can engulf debris and bacteria. The mesogloea is bordered by the epidermis on the outside and the gastrodermis on the inside. The edge of the bell is often divided into rounded lobes known as lappets, which allow the bell to flex. In the gaps or niches between the lappets are dangling rudimentary sense organs known as rhopalia, and the margin of the bell often bears tentacles. On the underside of the bell is the manubrium, a stalk-like structure hanging down from the centre, with the mouth, which also functions as the anus, at its tip. There are often four oral arms connected to the manubrium, streaming away into the water below. The mouth opens into the gastrovascular cavity, where digestion takes place and nutrients are absorbed. This is subdivided by four thick septa into a central stomach and four gastric pockets. The four pairs of gonads are attached to the septa, and close to them four septal funnels open to the exterior, perhaps supplying good oxygenation to the gonads. Near the free edges of the septa, gastric filaments extend into the gastric cavity; these are armed with nematocysts and enzyme-producing cells and play a role in subduing and digesting the prey. In some scyphozoans, the gastric cavity is joined to radial canals which branch extensively and may join a marginal ring canal. Cilia in these canals circulate the fluid in a regular direction. The box jellyfish is largely similar in structure. It has a squarish, box-like bell. A short pedalium or stalk hangs from each of the four lower corners. One or more long, slender tentacles are attached to each pedalium. The rim of the bell is folded inwards to form a shelf known as a velarium which restricts the bell's aperture and creates a powerful jet when the bell pulsates, allowing box jellyfish to swim faster than true jellyfish. Hydrozoans are also similar, usually with just four tentacles at the edge of the bell, although many hydrozoans are colonial and may not have a free-living medusal stage. In some species, a non-detachable bud known as a gonophore is formed that contains a gonad but is missing many other medusal features such as tentacles and rhopalia. Stalked jellyfish are attached to a solid surface by a basal disk, and resemble a polyp, the oral end of which has partially developed into a medusa with tentacle-bearing lobes and a central manubrium with four-sided mouth. Most jellyfish do not have specialized systems for osmoregulation, respiration and circulation, and do not have a central nervous system. Nematocysts, which deliver the sting, are located mostly on the tentacles; true jellyfish also have them around the mouth and stomach. Jellyfish do not need a respiratory system because sufficient oxygen diffuses through the epidermis. They have limited control over their movement, but can navigate with the pulsations of the bell-like body; some species are active swimmers most of the time, while others largely drift. 
The rhopalia contain rudimentary sense organs which are able to detect light, water-borne vibrations, odour and orientation. A loose network of nerves called a "nerve net" is located in the epidermis. Although traditionally thought not to have a central nervous system, nerve net concentration and ganglion-like structures could be considered to constitute one in most species. A jellyfish detects stimuli, and transmits impulses both throughout the nerve net and around a circular nerve ring, to other nerve cells. The rhopalial ganglia contain pacemaker neurones which control swimming rate and direction. In many species of jellyfish, the rhopalia include ocelli, light-sensitive organs able to tell light from dark. These are generally pigment spot ocelli, which have some of their cells pigmented. The rhopalia are suspended on stalks with heavy crystals of calcium carbonate at one end, acting like gyroscopes to orient the eyes skyward. Certain jellyfish look upward at the mangrove canopy while making a daily migration from mangrove swamps into the open lagoon, where they feed, and back again. Box jellyfish have more advanced vision than the other groups. Each individual has 24 eyes, two of which are capable of seeing colour, and four parallel information processing areas that act in competition, supposedly making them one of the few kinds of animal to have a 360-degree view of their environment. Box jellyfish eye The study of jellyfish eye evolution is a step toward a better understanding of how visual systems evolved on Earth. Jellyfish exhibit immense variation in visual systems, ranging from photoreceptive cell patches seen in simple photoreceptive systems to more derived complex eyes seen in box jellyfish. Major topics of jellyfish visual system research (with an emphasis on box jellyfish) include the evolution of jellyfish vision from simple to complex visual systems, the eye morphology and molecular structures of box jellyfish (including comparisons to vertebrate eyes), and various uses of vision including task-guided behaviors and niche specialization. Evolution Experimental evidence for photosensitivity and photoreception in cnidarians dates from before the mid-1900s, and a rich body of research has since covered the evolution of visual systems in jellyfish. Jellyfish visual systems range from simple photoreceptive cells to complex image-forming eyes. More ancestral visual systems incorporate extraocular vision (vision without eyes) that encompasses numerous receptors dedicated to single-function behaviors. More derived visual systems comprise perception that is capable of multiple task-guided behaviors. Although they lack a true brain, cnidarian jellyfish have a "ring" nervous system that plays a significant role in motor and sensory activity. This net of nerves is responsible for muscle contraction and movement and culminated in the emergence of photosensitive structures. Across Cnidaria, there is large variation in the systems that underlie photosensitivity. Photosensitive structures range from non-specialized groups of cells to more "conventional" eyes similar to those of vertebrates. The general evolutionary steps to develop complex vision include (from more ancestral to more derived states): non-directional photoreception, directional photoreception, low-resolution vision, and high-resolution vision. Increased habitat and task complexity has favored the high-resolution visual systems common in derived cnidarians such as box jellyfish.
Basal visual systems observed in various cnidarians exhibit photosensitivity representative of a single task or behavior. Extraocular photoreception (a form of non-directional photoreception) is the most basic form of light sensitivity and guides a variety of behaviors among cnidarians. It can function to regulate circadian rhythm (as seen in eyeless hydrozoans) and other light-guided behaviors responsive to the intensity and spectrum of light. Extraocular photoreception can function additionally in positive phototaxis (in planula larvae of hydrozoans), as well as in avoiding harmful amounts of UV radiation via negative phototaxis. Directional photoreception (the ability to perceive the direction of incoming light) allows for more complex phototactic responses to light, and likely evolved by means of membrane stacking. The resulting behavioral responses can range from guided spawning events timed by moonlight to shadow responses for potential predator avoidance. Light-guided behaviors are observed in numerous scyphozoans including the common moon jelly, Aurelia aurita, which migrates in response to changes in ambient light and solar position even though it lacks proper eyes. The low-resolution visual system of box jellyfish is more derived than directional photoreception, and thus box jellyfish vision represents the most basic form of true vision, in which multiple directional photoreceptors combine to create the first imaging and spatial resolution. This is different from the high-resolution vision observed in the camera or compound eyes of vertebrates and cephalopods, which rely on focusing optics. Critically, the visual systems of box jellyfish are responsible for guiding multiple tasks or behaviors, in contrast to less derived visual systems in other jellyfish that guide single behavioral functions. These behaviors include phototaxis based on sunlight (positive) or shadows (negative), obstacle avoidance, and control of swim-pulse rate. Box jellyfish possess "proper eyes" (similar to vertebrates) that allow them to inhabit environments that less derived medusae cannot. In fact, they are considered the only class in the clade Medusozoa that has behaviors necessitating spatial resolution and genuine vision. However, the lenses in their eyes are functionally more similar to the cup eyes exhibited by low-resolution organisms, and have very little to no focusing capability. The lack of the ability to focus is due to the focal length exceeding the distance to the retina, thus generating unfocused images and limiting spatial resolution. The visual system is still sufficient for box jellyfish to produce an image to help with tasks such as object avoidance. Utility as a model organism Box jellyfish eyes are a visual system that is sophisticated in numerous ways. These intricacies include the considerable variation within the morphology of box jellyfishes' eyes (including their task/behavior specification), and the molecular makeup of their eyes, including photoreceptors, opsins, lenses, and synapses. The comparison of these attributes to more derived visual systems can allow for a further understanding of how the evolution of more derived visual systems may have occurred, and puts into perspective how box jellyfish can serve as an evolutionary/developmental model for all visual systems. Characteristics Box jellyfish visual systems are both diverse and complex, comprising multiple photosystems.
There is likely considerable variation in visual properties between species of box jellyfish given the significant inter-species morphological and physiological variation. Eyes tend to differ in size and shape, along with number of receptors (including opsins), and physiology across species of box jellyfish. Box jellyfish have a series of intricate lensed eyes that are similar to those of more derived multicellular organisms such as vertebrates. Their 24 eyes fit into four different morphological categories. These categories consist of two large, morphologically different medial eyes (a lower and upper lensed eye) containing spherical lenses, a lateral pair of pigment slit eyes, and a lateral pair of pigment pit eyes. The eyes are situated on rhopalia (small sensory structures) which serve sensory functions of the box jellyfish and arise from the cavities of the exumbrella (the surface of the body) on the side of the bells of the jellyfish. The two large eyes are located on the mid-line of the club and are considered complex because they contain lenses. The four remaining eyes lie laterally on either side of each rhopalia and are considered simple. The simple eyes are observed as small invaginated cups of epithelium that have developed pigmentation. The larger of the complex eyes contains a cellular cornea created by a mono ciliated epithelium, cellular lens, homogenous capsule to the lens, vitreous body with prismatic elements, and a retina of pigmented cells. The smaller of the complex eyes is said to be slightly less complex given that it lacks a capsule but otherwise contains the same structure as the larger eye. Box jellyfish have multiple photosystems that comprise different sets of eyes. Evidence includes immunocytochemical and molecular data that show photopigment differences among the different morphological eye types, and physiological experiments done on box jellyfish to suggest behavioral differences among photosystems. Each individual eye type constitutes photosystems that work collectively to control visually guided behaviors. Box jellyfish eyes primarily use c-PRCs (ciliary photoreceptor cells) similar to that of vertebrate eyes. These cells undergo phototransduction cascades (process of light absorption by photoreceptors) that are triggered by c-opsins. Available opsin sequences suggest that there are two types of opsins possessed by all cnidarians including an ancient phylogenetic opsin, and a sister ciliary opsin to the c-opsins group. Box jellyfish could have both ciliary and cnidops (cnidarian opsins), which is something not previously believed to appear in the same retina. Nevertheless, it is not entirely evident whether cnidarians possess multiple opsins that are capable of having distinctive spectral sensitivities. Comparison with other organisms Comparative research on genetic and molecular makeup of box jellyfishes' eyes versus more derived eyes seen in vertebrates and cephalopods focuses on: lenses and crystallin composition, synapses, and Pax genes and their implied evidence for shared primordial (ancestral) genes in eye evolution. Box jellyfish eyes are said to be an evolutionary/developmental model of all eyes based on their evolutionary recruitment of crystallins and Pax genes. Research done on box jellyfish including Tripedalia cystophora has suggested that they possess a single Pax gene, PaxB. PaxB functions by binding to crystallin promoters and activating them. PaxB in situ hybridization resulted in PaxB expression in the lens, retina, and statocysts. 
These results and the rejection of the prior hypothesis that Pax6 was an ancestral Pax gene in eyes have led to the conclusion that PaxB was a primordial gene in eye evolution, and that the eyes of all organisms likely share a common ancestor. The lens structure of box jellyfish appears very similar to that of other organisms, but the crystallins are distinct in both function and appearance. Weak reactions were seen within the sera and there were very weak sequence similarities within the crystallins among vertebrate and invertebrate lenses. This is likely due to differences in lower molecular weight proteins and the subsequent lack of immunological reactions with antisera that other organisms' lenses exhibit. All four of the visual systems of the box jellyfish species investigated in detail (Carybdea marsupialis, Chiropsalmus quadrumanus, Tamoya haplonema and Tripedalia cystophora) have invaginated synapses, but only in the upper and lower lensed eyes. Different densities were found between the upper and lower lenses, and between species. Four types of chemical synapses have been discovered within the rhopalia, which could help in understanding neural organization: clear unidirectional, dense-core unidirectional, clear bidirectional, and clear and dense-core bidirectional. The synapses of the lensed eyes could be useful as markers to learn more about the neural circuit in box jellyfish retinal areas. Evolution as a response to natural stimuli The primary adaptive responses to environmental variation observed in box jellyfish eyes include pupillary constriction speeds in response to light environments, as well as photoreceptor tuning and lens adaptations to better respond to shifts between light environments and darkness. Some box jellyfish species' eyes appear to have evolved more focused vision in response to their habitat. Pupillary contraction appears to have evolved in response to variation in the light environment across the ecological niches of three species of box jellyfish (Chironex fleckeri, Chiropsella bronzie, and Carukia barnesi). Behavioral studies suggest that faster pupil contraction rates allow for greater object avoidance, and in fact, species with more complex habitats exhibit faster rates. Ch. bronzie inhabits shallow beach fronts that have low visibility and very few obstacles; thus, faster pupil contraction in response to objects in its environment is not important. Ca. barnesi and Ch. fleckeri are found in more three-dimensionally complex environments like mangroves with an abundance of natural obstacles, where faster pupil contraction is more adaptive. Behavioral studies support the idea that faster pupillary contraction rates assist with obstacle avoidance as well as depth adjustments in response to differing light intensities. Light/dark adaptation via pupillary light reflexes is an additional form of evolutionary response to the light environment. This relates to the pupil's response to shifts in light intensity (generally from sunlight to darkness). In the process of light/dark adaptation, the upper and lower lens eyes of different box jellyfish species vary in specific function. The lower lens-eyes contain pigmented photoreceptors and long pigment cells with dark pigments that migrate on light/dark adaptation, while the upper-lens eyes play a concentrated role in light direction and phototaxis given that they face upward towards the water surface (towards the sun or moon). The upper lens of Ch.
bronzie does not exhibit any considerable optical power while Tr. cystophora (a box jellyfish species that tends to live in mangroves) does. The ability to use light to visually guide behavior is not of as much importance to Ch. bronzie as it is to species in more obstacle-filled environments. Differences in visually guided behavior serve as evidence that species that share the same number and structure of eyes can exhibit differences in how they control behavior. Largest and smallest Jellyfish range from about one millimeter in bell height and diameter, to nearly in bell height and diameter; the tentacles and mouth parts usually extend beyond this bell dimension. The smallest jellyfish are the peculiar creeping jellyfish in the genera Staurocladia and Eleutheria, which have bell disks from to a few millimeters in diameter, with short tentacles that extend out beyond this, which these jellyfish use to move across the surface of seaweed or the bottoms of rocky pools; many of these tiny creeping jellyfish cannot be seen in the field without a hand lens or microscope. They can reproduce asexually by fission (splitting in half). Other very small jellyfish, which have bells about one millimeter, are the hydromedusae of many species that have just been released from their parent polyps; some of these live only a few minutes before shedding their gametes in the plankton and then dying, while others will grow in the plankton for weeks or months. The hydromedusae Cladonema radiatum and Cladonema californicum are also very small, living for months, yet never growing beyond a few mm in bell height and diameter. The lion's mane jellyfish, Cyanea capillata, was long-cited as the largest jellyfish, and arguably the longest animal in the world, with fine, thread-like tentacles that may extend up to long (though most are nowhere near that large). They have a moderately painful, but rarely fatal, sting. The increasingly common giant Nomura's jellyfish, Nemopilema nomurai, found in some, but not all years in the waters of Japan, Korea and China in summer and autumn is another candidate for "largest jellyfish", in terms of diameter and weight, since the largest Nomura's jellyfish in late autumn can reach in bell (body) diameter and about in weight, with average specimens frequently reaching in bell diameter and about in weight. The large bell mass of the giant Nomura's jellyfish can dwarf a diver and is nearly always much greater than the Lion's Mane, whose bell diameter can reach . The rarely encountered deep-sea jellyfish Stygiomedusa gigantea is another candidate for "largest jellyfish", with its thick, massive bell up to wide, and four thick, "strap-like" oral arms extending up to in length, very different from the typical fine, threadlike tentacles that rim the umbrella of more-typical-looking jellyfish, including the Lion's Mane. Desmonema glaciale, which lives in the Antarctic region, can reach a very large size (several meters). Purple-striped jelly (Chrysaora colorata) can also be extremely long (up to 15 feet). Life history and behavior Life cycle Jellyfish have a complex life cycle which includes both sexual and asexual phases, with the medusa being the sexual stage in most instances. Sperm fertilize eggs, which develop into larval planulae, become polyps, bud into ephyrae and then transform into adult medusae. In some species certain stages may be skipped. Upon reaching adult size, jellyfish spawn regularly if there is a sufficient supply of food. 
In most species, spawning is controlled by light, with all individuals spawning at about the same time of day; in many instances this is at dawn or dusk. Jellyfish are usually either male or female (with occasional hermaphrodites). In most cases, adults release sperm and eggs into the surrounding water, where the unprotected eggs are fertilized and develop into larvae. In a few species, the sperm swim into the female's mouth, fertilizing the eggs within her body, where they remain during early development stages. In moon jellies, the eggs lodge in pits on the oral arms, which form a temporary brood chamber for the developing planula larvae. The planula is a small larva covered with cilia. When sufficiently developed, it settles onto a firm surface and develops into a polyp. The polyp generally consists of a small stalk topped by a mouth that is ringed by upward-facing tentacles. The polyps resemble those of closely related anthozoans, such as sea anemones and corals. The jellyfish polyp may be sessile, living on the bottom of boat hulls or other substrates, or it may be free-floating or attached to tiny bits of free-living plankton or rarely, fish or other invertebrates. Polyps may be solitary or colonial. Most polyps are only millimetres in diameter and feed continuously. The polyp stage may last for years. After an interval and stimulated by seasonal or hormonal changes, the polyp may begin reproducing asexually by budding and, in the Scyphozoa, is called a segmenting polyp, or a scyphistoma. Budding produces more scyphistomae and also ephyrae. Budding sites vary by species; from the tentacle bulbs, the manubrium (above the mouth), or the gonads of hydromedusae. In a process known as strobilation, the polyp's tentacles are reabsorbed and the body starts to narrow, forming transverse constrictions, in several places near the upper extremity of the polyp. These deepen as the constriction sites migrate down the body, and separate segments known as ephyra detach. These are free-swimming precursors of the adult medusa stage, which is the life stage that is typically identified as a jellyfish. The ephyrae, usually only a millimeter or two across initially, swim away from the polyp and grow. Limnomedusae polyps can asexually produce a creeping frustule larval form, which crawls away before developing into another polyp. A few species can produce new medusae by budding directly from the medusan stage. Some hydromedusae reproduce by fission. Lifespan Little is known of the life histories of many jellyfish as the places on the seabed where the benthic forms of those species live have not been found. However, an asexually reproducing strobila form can sometimes live for several years, producing new medusae (ephyra larvae) each year. An unusual species, Turritopsis dohrnii, formerly classified as Turritopsis nutricula, might be effectively immortal because of its ability under certain circumstances to transform from medusa back to the polyp stage, thereby escaping the death that typically awaits medusae post-reproduction if they have not otherwise been eaten by some other organism. So far this reversal has been observed only in the laboratory. Locomotion Using the moon jelly Aurelia aurita as an example, jellyfish have been shown to be the most energy-efficient swimmers of all animals. They move through the water by radially expanding and contracting their bell-shaped bodies to push water behind them. They pause between the contraction and expansion phases to create two vortex rings. 
Muscles are used for the contraction of the body, which creates the first vortex and pushes the animal forward, but the mesoglea is so elastic that the expansion is powered exclusively by relaxing the bell, which releases the energy stored from the contraction. Meanwhile, the second vortex ring starts to spin faster, sucking water into the bell and pushing against the centre of the body, giving a secondary and "free" boost forward. The mechanism, called passive energy recapture, only works in relatively small jellyfish moving at low speeds, allowing the animal to travel 30 percent farther on each swimming cycle. Jellyfish achieved a 48 percent lower cost of transport (food and oxygen intake versus energy spent in movement) than other animals in similar studies. One reason for this is that most of the gelatinous tissue of the bell is inactive, using no energy during swimming. Ecology Diet Jellyfish are, like other cnidarians, generally carnivorous (or parasitic), feeding on planktonic organisms, crustaceans, small fish, fish eggs and larvae, and other jellyfish, ingesting food and voiding undigested waste through the mouth. They hunt passively using their tentacles as drift lines, or sink through the water with their tentacles spread widely; the tentacles, which contain nematocysts to stun or kill the prey, may then flex to help bring it to the mouth. Their swimming technique also helps them to capture prey; when their bell expands it sucks in water which brings more potential prey within reach of the tentacles. A few species such as Aglaura hemistoma are omnivorous, feeding on microplankton, which is a mixture of zooplankton and phytoplankton (microscopic plants) such as dinoflagellates. Others harbour mutualistic algae (Zooxanthellae) in their tissues; the spotted jellyfish (Mastigias papua) is typical of these, deriving part of its nutrition from the products of photosynthesis, and part from captured zooplankton. The upside-down jellyfish (Cassiopea andromeda) also has a symbiotic relationship with microalgae, but captures tiny animals to supplement its diet. This is done by releasing tiny balls of living cells composed of mesoglea. These use cilia to propel themselves through the water and stinging cells to stun the prey. The blobs also seem to have digestive capabilities. Predation Other species of jellyfish are among the most common and important jellyfish predators. Sea anemones may eat jellyfish that drift into their range. Other predators include tunas, sharks, swordfish, sea turtles and penguins. Jellyfish washed up on the beach are consumed by foxes, other terrestrial mammals and birds. In general, however, few animals prey on jellyfish; they can broadly be considered to be top predators in the food chain. Once jellyfish have become dominant in an ecosystem, for example through overfishing which removes predators of jellyfish larvae, there may be no obvious way for the previous balance to be restored: they eat fish eggs and juvenile fish, and compete with fish for food, preventing fish stocks from recovering. Symbiosis Some small fish are immune to the stings of jellyfish and live among the tentacles, serving as bait in a fish trap; they are safe from potential predators and are able to share the fish caught by the jellyfish. The cannonball jellyfish has a symbiotic relationship with ten different species of fish, and with the longnose spider crab, which lives inside the bell, sharing the jellyfish's food and nibbling its tissues.
Blooms Jellyfish form large masses or blooms in certain environmental conditions of ocean currents, nutrients, sunshine, temperature, season, prey availability, reduced predation and oxygen concentration. Currents collect jellyfish together, especially in years with unusually high populations. Jellyfish can detect marine currents and swim against the current to congregate in blooms. Jellyfish are better able to survive in nutrient-rich, oxygen-poor water than competitors, and thus can feast on plankton without competition. Jellyfish may also benefit from saltier waters, as saltier waters contain more iodine, which is necessary for polyps to turn into jellyfish. Rising sea temperatures caused by climate change may also contribute to jellyfish blooms, because many species of jellyfish are able to survive in warmer waters. Increased nutrients from agricultural or urban runoff with nutrients including nitrogen and phosphorus compounds increase the growth of phytoplankton, causing eutrophication and algal blooms. When the phytoplankton die, they may create dead zones, so-called because they are hypoxic (low in oxygen). This in turn kills fish and other animals, but not jellyfish, allowing them to bloom. Jellyfish populations may be expanding globally as a result of land runoff and overfishing of their natural predators. Jellyfish are well placed to benefit from disturbance of marine ecosystems. They reproduce rapidly; they prey upon many species, while few species prey on them; and they feed via touch rather than visually, so they can feed effectively at night and in turbid waters. It may be difficult for fish stocks to re-establish themselves in marine ecosystems once they have become dominated by jellyfish, because jellyfish feed on plankton, which includes fish eggs and larvae. As suspected at the turn of this century, jellyfish blooms are increasing in frequency. Between 2013 and 2020 the Mediterranean Science Commission monitored on a weekly basis the frequency of such outbreaks in coastal waters from Morocco to the Black Sea, revealing a relatively high frequency of these blooms nearly all year round, with peaks observed from March to July and often again in the autumn. The blooms are caused by different jellyfish species, depending on their localisation within the Basin: one observes a clear dominance of Pelagia noctiluca and Velella velella outbreaks in the western Mediterranean, of Rhizostoma pulmo and Rhopilema nomadica outbreaks in the eastern Mediterranean, and of Aurelia aurita and Mnemiopsis leidyi outbreaks in the Black Sea. Some jellyfish populations that have shown clear increases in the past few decades are invasive species, newly arrived from other habitats: examples include the Black Sea, Caspian Sea, Baltic Sea, central and eastern Mediterranean, Hawaii, and tropical and subtropical parts of the West Atlantic (including the Caribbean, Gulf of Mexico and Brazil). Jellyfish blooms can have significant impact on community structure. Some carnivorous jellyfish species prey on zooplankton while others graze on primary producers. Reductions in zooplankton and ichthyoplankton due to a jellyfish bloom can ripple through the trophic levels. High-density jellyfish populations can outcompete other predators and reduce fish recruitment. Increased grazing on primary producers by jellyfish can also interrupt energy transfer to higher trophic levels. During blooms, jellyfish significantly alter the nutrient availability in their environment. 
Blooms require large amounts of available organic nutrients in the water column to grow, limiting availability for other organisms. Some jellyfish have a symbiotic relationship with single-celled dinoflagellates, allowing them to assimilate inorganic carbon, phosphorus, and nitrogen, creating competition with phytoplankton. Their large biomass makes them an important source of dissolved and particulate organic matter for microbial communities through excretion, mucus production, and decomposition. The microbes break down the organic matter into inorganic ammonium and phosphate. However, low carbon availability shifts the process from production to respiration, creating low-oxygen areas and making the dissolved inorganic nitrogen and phosphorus largely unavailable for primary production. These blooms have very real impacts on industries. Jellyfish can outcompete fish by utilizing open niches in over-fished fisheries. Catches of jellyfish can strain fishing gear and lead to expenses relating to damaged gear. Power plants have been shut down due to jellyfish blocking the flow of cooling water. Blooms have also been harmful to tourism, causing a rise in stings and sometimes the closure of beaches. Jellyfish form a component of jelly-falls, events where gelatinous zooplankton fall to the seafloor, providing food for the benthic organisms there. In temperate and subpolar regions, jelly-falls usually follow immediately after a bloom. Habitats Most jellyfish are marine animals, although a few hydromedusae inhabit freshwater. The best known freshwater example is the cosmopolitan hydrozoan jellyfish, Craspedacusta sowerbii. It is less than an inch (2.5 cm) in diameter, colorless and does not sting. Some jellyfish populations have become restricted to coastal saltwater lakes, such as Jellyfish Lake in Palau. Jellyfish Lake is a marine lake where millions of golden jellyfish (Mastigias spp.) migrate horizontally across the lake daily. Although most jellyfish live well off the ocean floor and form part of the plankton, a few species are closely associated with the bottom for much of their lives and can be considered benthic. The upside-down jellyfish in the genus Cassiopea typically lie on the bottom of shallow lagoons, where they sometimes pulsate gently with their umbrella top facing down. Even some deep-sea species of hydromedusae and scyphomedusae are usually collected on or near the bottom. All of the stauromedusae are found attached to either seaweed or rocky or other firm material on the bottom. Some species explicitly adapt to tidal flux. In Roscoe Bay, jellyfish ride the current at ebb tide until they hit a gravel bar, and then descend below the current. They remain in still waters until the tide rises, ascending and allowing it to sweep them back into the bay. They also actively avoid fresh water from mountain snowmelt, diving until they find enough salt. Parasites Jellyfish are hosts to a wide variety of parasitic organisms. They act as intermediate hosts of endoparasitic helminths, with the infection being transferred to the definitive host fish after predation. Some digenean trematodes, especially species in the family Lepocreadiidae, use jellyfish as their second intermediate hosts. Fish become infected by the trematodes when they feed on infected jellyfish. Relation to humans Fisheries Jellyfish have long been eaten in some parts of the world.
Fisheries have begun harvesting the American cannonball jellyfish, Stomolophus meleagris, along the southern Atlantic coast of the United States and in the Gulf of Mexico for export to Asia. Jellyfish are also harvested for their collagen, which is being investigated for use in a variety of applications including the treatment of rheumatoid arthritis. Aquaculture and fisheries of other species often suffer severe losses – and so losses of productivity – due to jellyfish. Products Aristotle stated in the Parts of Animals IV, 6 that jellyfish (sea-nettles) were eaten in wintertime in a fish stew. In some countries, including China, Japan, and Korea, jellyfish are a delicacy. The jellyfish is dried to prevent spoiling. Only some 12 species of scyphozoan jellyfish belonging to the order Rhizostomeae are harvested for food, mostly in southeast Asia. Rhizostomes, especially Rhopilema esculentum in China (hǎizhé, 'sea stingers') and Stomolophus meleagris (cannonball jellyfish) in the United States, are favored because of their larger and more rigid bodies and because their toxins are harmless to humans. Traditional processing methods, carried out by a jellyfish master, involve a 20- to 40-day multi-phase procedure in which, after removing the gonads and mucous membranes, the umbrella and oral arms are treated with a mixture of table salt and alum, and compressed. Processing makes the jellyfish drier and more acidic, producing a crisp texture. Jellyfish prepared this way retain 7–10% of their original weight, and the processed product consists of approximately 94% water and 6% protein. Freshly processed jellyfish has a white, creamy color and turns yellow or brown during prolonged storage. In China, processed jellyfish are desalted by soaking in water overnight and eaten cooked or raw. The dish is often served shredded with a dressing of oil, soy sauce, vinegar and sugar, or as a salad with vegetables. In Japan, cured jellyfish are rinsed, cut into strips and served with vinegar as an appetizer. Desalted, ready-to-eat products are also available. Biotechnology Pliny the Elder reported in his Natural History that the slime of the jellyfish produced light when rubbed on a walking stick. In 1961, Osamu Shimomura extracted green fluorescent protein (GFP) and another bioluminescent protein, called aequorin, from the large and abundant hydromedusa Aequorea victoria, while studying photoproteins that cause bioluminescence in this species. Three decades later, Douglas Prasher sequenced and cloned the gene for GFP. Martin Chalfie figured out how to use GFP as a fluorescent marker of genes inserted into other cells or organisms. Roger Tsien later chemically manipulated GFP to produce other fluorescent colors to use as markers. In 2008, Shimomura, Chalfie and Tsien won the Nobel Prize in Chemistry for their work with GFP. Man-made GFP became widely used as a fluorescent tag to show which cells or tissues express specific genes. The genetic engineering technique fuses the gene of interest to the GFP gene. The fused DNA is then put into a cell, to generate either a cell line or (via IVF techniques) an entire animal bearing the gene. In the cell or animal, the artificial gene turns on in the same tissues and at the same time as the normal gene, producing a fusion of the normal protein with GFP attached to the end. Illuminating the animal or cell then reveals which tissues express that protein, or at what stage of development. The fluorescence shows where the gene is expressed.
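As a rough worked example of the processing yields quoted above (the 7–10% weight retention and ~6% protein content come from the text; the starting weight and all code names are illustrative assumptions):

```python
# Illustrative arithmetic only: estimate the yield of salt-and-alum-processed
# jellyfish from the retention and composition figures quoted above.
fresh_weight_kg = 10.0            # hypothetical starting weight of fresh jellyfish
retention_range = (0.07, 0.10)    # 7-10% of the original weight is retained
protein_fraction = 0.06           # processed product is ~6% protein, ~94% water

for retention in retention_range:
    processed_kg = fresh_weight_kg * retention
    protein_g = processed_kg * protein_fraction * 1000
    print(f"{retention:.0%} retention: {processed_kg:.2f} kg product, ~{protein_g:.0f} g protein")

# 7% retention: 0.70 kg product, ~42 g protein
# 10% retention: 1.00 kg product, ~60 g protein
```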
Aquarium display Jellyfish are displayed in many public aquariums. Often the tank's background is blue and the animals are illuminated by side light, increasing the contrast between the animal and the background. In natural conditions, many jellies are so transparent that they are nearly invisible. Jellyfish are not adapted to closed spaces. They depend on currents to transport them from place to place. Professional exhibits, such as those at the Monterey Bay Aquarium, feature precise water flows, typically in circular tanks to avoid trapping specimens in corners. The outflow is spread out over a large surface area and the inflow enters as a sheet of water in front of the outflow, so the jellyfish do not get sucked into it. As of 2009, jellyfish were becoming popular in home aquariums, where they require similar equipment. Stings Jellyfish are armed with nematocysts, a type of specialized stinging cell. Contact with a jellyfish tentacle can trigger millions of nematocysts to pierce the skin and inject venom, but only some species' venom causes an adverse reaction in humans. In a study published in Communications Biology, researchers found that a jellyfish species, Cassiopea xamachana, when triggered, releases tiny balls of cells that swim around the jellyfish, stinging everything in their path. Researchers described these as "self-propelling microscopic grenades" and named them cassiosomes. The effects of stings range from mild discomfort to extreme pain and death. Most jellyfish stings are not deadly, but stings of some box jellyfish (Irukandji jellyfish), such as the sea wasp, can be deadly. Stings may cause anaphylaxis (a form of shock), which can be fatal. Jellyfish kill 20 to 40 people a year in the Philippines alone. In 2006, the Spanish Red Cross treated 19,000 stung swimmers along the Costa Brava. Vinegar (3–10% aqueous acetic acid) may help with box jellyfish stings but not the stings of the Portuguese man o' war. Clearing the area of jelly and tentacles reduces nematocyst firing. Scraping the affected skin, such as with the edge of a credit card, may remove remaining nematocysts. Once the skin has been cleaned of nematocysts, hydrocortisone cream applied locally reduces pain and inflammation. Antihistamines may help to control itching. Immunobased antivenins are used for serious box jellyfish stings. On Elba Island and in Corsica, Dittrichia viscosa is now used by residents and tourists to treat stings from jellyfish, bees and wasps by pressing fresh leaves onto the skin, with quick results. Mechanical issues Jellyfish in large quantities can fill and split fishing nets and crush captured fish. They can clog cooling equipment, having disabled power stations in several countries; jellyfish caused a cascading blackout in the Philippines in 1999, as well as damaging the Diablo Canyon Power Plant in California in 2008. They can also stop desalination plants and ships' engines.
Biology and health sciences
Cnidarians
null
50245
https://en.wikipedia.org/wiki/Sugar%20beet
Sugar beet
A sugar beet is a plant whose root contains a high concentration of sucrose and that is grown commercially for sugar production. In plant breeding, it is known as the Altissima cultivar group of the common beet (Beta vulgaris). Together with other beet cultivars, such as beetroot and chard, it belongs to the subspecies Beta vulgaris subsp. vulgaris but is classified as var. saccharifera. Its closest wild relative is the sea beet (Beta vulgaris subsp. maritima). Sugar beets are grown in climates that are too cold for sugarcane. In 2020, Russia, the United States, Germany, France and Turkey were the world's five largest sugar beet producers. In 2010–2011, Europe and North America (excluding Arctic territories) failed to supply their overall domestic demand for sugar and were all net importers of sugar. The US harvested of sugar beets in 2008. Sugar beets accounted for 20% of the world's sugar production in 2009 and nearly 30% by 2013. Sugarcane accounts for most of the rest of the sugar produced globally. In February 2015, a USDA factsheet reported that sugar beets generally account for about 55 percent of domestically produced sugar, and sugar cane for about 45 percent. Description The sugar beet has a conical, white, fleshy root (a taproot) with a flat crown. The plant consists of the root and a rosette of leaves. Sugar is formed by photosynthesis in the leaves and is then stored in the root. The root of the beet contains 75% water, about 20% sugar, and 5% pulp. The exact sugar content can vary between 12% and 21%, depending on the cultivar and growing conditions. Sugar is the primary value of sugar beet as a cash crop. The pulp, insoluble in water and mainly composed of cellulose, hemicellulose, lignin, and pectin, is used in animal feed. The byproducts of the sugar beet crop, such as pulp and molasses, add another 10% to the value of the harvest. Sugar beets grow exclusively in the temperate zone, in contrast to sugarcane, which grows exclusively in the tropical and subtropical zones. The average weight of a sugar beet ranges between . Sugar beet foliage has a rich, brilliant green color and grows to a height of about . The leaves are numerous and broad and grow in a tuft from the crown of the beet, which is usually level with or just above the ground surface. History of the sugar beet Discovery of beet sugar The beet species consists of several cultivar groups. The 16th-century French scientist Olivier de Serres discovered a process for preparing sugar syrup from (red) beetroot. He wrote: "The beet-root, when being boiled, yields a juice similar to syrup of sugar, which is beautiful to look at on account of its vermilion colour" (1575). Because crystallized cane sugar was already available and had a better taste, this process did not become popular. Modern sugar beets date to mid-18th-century Silesia, where Frederick the Great, king of Prussia, subsidized experiments to develop processes for sugar extraction. In 1747, Andreas Sigismund Marggraf, professor of physics in the Academy of Science of Berlin, isolated sugar from beetroots and found it at concentrations of 1.3–1.6%. He also demonstrated that the sugar that could be extracted from beets was identical to that produced from cane. He found the best of these vegetable sources for sugar was the white beet. Despite Marggraf's success in isolating sugar from beets, it did not lead to commercial sugar production. 
Development of the sugar beet Marggraf's student and successor Franz Karl Achard began plant breeding sugar beet in Kaulsdorf near Berlin in 1786. Achard started his plant breeding by evaluating 23 varieties of beet for sugar content. In the end he selected a local strain from Halberstadt in modern-day Saxony-Anhalt, Germany. Moritz Baron von Koppy and his son further selected white, conical tubers from this strain. The selection was named weiße schlesische Zuckerrübe, meaning white Silesian sugar beet. In about 1800, this cultivar boasted about 5–6% sucrose by (dry) weight. It would go on to be the progenitor of all modern sugar beets. The plant breeding process has continued since then, leading to a sucrose content of around 18% in modern varieties. History of the beet sugar industry Franz Karl Achard opened the world's first beet sugar factory in 1801, at Kunern, Silesia (now Konary, Poland). The idea to produce sugar from beet was soon introduced to France, whence the European sugar beet industry rapidly expanded. By 1840, about 5% of the world's sugar was derived from sugar beets, and by 1880, this number had risen more than tenfold to over 50%. In North America, the first commercial production started in 1879 at a farm in Alvarado, California. The sugar beet was introduced to Chile by German settlers around 1850. Culture The sugar beet, like sugarcane, needs a particular soil and a proper climate for its successful cultivation. The most important requirements are that the soil must contain a large supply of nutrients, be rich in humus, and be able to contain a great deal of moisture. A certain amount of alkali is not necessarily detrimental, as sugar beets are not especially susceptible to injury by some alkali. The ground should be fairly level and well-drained, especially where irrigation is practiced. Generous crops can be grown in both sandy soil and heavy loams, but the ideal soil is a sandy loam, i.e., a mixture of organic matter, clay and sand. A subsoil of gravel, or the presence of hardpan, is not desirable, as cultivation to a depth of from is necessary to produce the best results. Climatic conditions, temperature, sunshine, rainfall and winds have an important bearing upon the success of sugar beet agriculture. A temperature ranging from during the growing months is most favorable. In the absence of adequate irrigation, of rainfall are necessary to raise an average crop. High winds are harmful, as they generally crust the land and prevent the young beets from coming through the ground. The best results are obtained along the coast of southern California, where warm, sunny days succeeded by cool, foggy nights seem to meet sugar beet's favored growth conditions. Sunshine of long duration but not of great intensity is the most important factor in the successful cultivation of sugar beets. Near the equator, the shorter days and the greater heat of the sun sharply reduce the sugar content in the beet. In high elevation regions such as those of Idaho, Colorado and Utah, where the temperature is high during the daytime, but where the nights are cool, the quality of the sugar beet is excellent. In Michigan, the long summer days from the relatively high latitude (the Lower Peninsula, where production is concentrated, lies between the 41st and 46th parallels North) and the influence of the Great Lakes result in satisfactory climatic conditions for sugar beet culture. Sebewaing, Michigan, lies in the Thumb region of Michigan; both the region and state are major sugar beet producers. 
Sebewaing is home to one of four Michigan Sugar Company factories. The town sponsors an annual Michigan Sugar Festival. To cultivate beets successfully, the land must be properly prepared. Deep ploughing is the first principle of beet culture. It allows the roots to penetrate the subsoil without much obstruction, thereby preventing the beet from growing out of the ground, besides enabling it to extract considerable nourishment and moisture from the lower soil. If the latter is too hard, the roots will not penetrate it readily and, as a result, the plant will be pushed up and out of the earth during the process of growth. A hard subsoil is impervious to water and prevents proper drainage. It should not be too loose, however, as this allows the water to pass through more freely than is desirable. Ideally, the soil should be deep, fairly fine and easily penetrable by the roots. It should also be capable of retaining moisture and at the same time admit of a free circulation of air and good drainage. Sugar beet crops exhaust the soil rapidly. Crop rotation is recommended and necessary. Normally, beets are grown in the same ground every third year, peas, beans or grain being raised the other two years. In most temperate climates, beets are planted in the spring and harvested in the autumn. At the northern end of its range, growing seasons as short as 100 days can produce commercially viable sugar beet crops. In warmer climates, such as in California's Imperial Valley, sugar beets are a winter crop, planted in the autumn and harvested in the spring. In recent years, Syngenta has developed the so-called tropical sugar beet. It allows the plant to grow in tropical and subtropical regions. Beets are planted from a small seed; of beet seed comprises 100,000 seeds and will plant over of ground ( will plant about . Until the latter half of the 20th century, sugar beet production was highly labor-intensive, as weed control was managed by densely planting the crop, which then had to be manually thinned two or three times with a hoe during the growing season. Harvesting also required many workers. Although the roots could be lifted by a plough-like device that could be pulled by a horse team, the rest of the preparation was by hand. One laborer grabbed the beets by their leaves, knocked them together to shake free loose soil, and then laid them in a row, root to one side, greens to the other. A second worker equipped with a beet hook (a short-handled tool between a billhook and a sickle) followed behind, and would lift the beet and swiftly chop the crown and leaves from the root with a single action. Working this way, he would leave a row of beets that could be forked into the back of a cart. Today, mechanical sowing, herbicide application for weed control, and mechanical harvesting have displaced this reliance on manual farm work. A root beater uses a series of blades to chop the leaf and crown (which is high in nonsugar impurities) from the root. The beet harvester lifts the root, and removes excess soil from the root in a single pass over the field. A modern harvester is typically able to cover six rows at the same time. The beets are dumped into trucks as the harvester rolls down the field, and then delivered to the factory. The conveyor then removes more soil. If the beets are to be left for later delivery, they are formed into clamps. Straw bales are used to shield the beets from the weather. 
Provided the clamp is well built with the right amount of ventilation, the beets do not significantly deteriorate. Beets that freeze and then defrost produce complex carbohydrates that cause severe production problems in the factory. In the UK, loads may be hand-examined at the factory gate before being accepted. In the US, the fall harvest begins with the first hard frost, which arrests photosynthesis and the further growth of the root. Depending on the local climate, it may be carried out over the course of a few weeks or be prolonged throughout the winter months. The harvest and processing of the beet is referred to as "the campaign", reflecting the organization required to deliver the crop at a steady rate to processing factories that run 24 hours a day for the duration of the harvest and processing (for the UK, the campaign lasts about five months). In the Netherlands, this period is known as , a time to be careful when driving on local roads in the area while the beets are being harvested, because the naturally high clay content of the soil tends to cause slippery roads when soil falls from the trailers during transport. Production statistics The world harvested of sugar beets in 2022. The world's largest producer was Russia, with a harvest. The average yield of sugar beet crops worldwide was 60.8 tonnes per hectare. The most productive sugar beet farms in the world, in 2022, were in Chile, with a nationwide average yield of 106.2 tonnes per hectare. Imperial Valley (California) farmers have achieved yields of about 160 tonnes per hectare and over 26 tonnes of sugar per hectare. Imperial Valley farms benefit from high intensities of incident sunlight and intensive use of irrigation and fertilizers. From sugar beet to white sugar Most sugar beets are used to produce white sugar. This is done in a beet sugar factory, often abbreviated to sugar factory. Nowadays these usually also act as sugar refineries, but historically the beet sugar factory produced raw sugar and the sugar refinery refined raw sugar to create white sugar. Sugar factory In the 1960s, beet sugar processing was described as consisting of these steps: harvesting and storage in a way that preserves the beets while they wait to be processed; washing and scrubbing to remove soil and debris; slicing the beets into small pieces called cossettes or chips; and removing the sugar from the beets in a diffusion process, resulting in raw juice and beet pulp. Nowadays, most sugar factories then refine the raw juice themselves, without moving it to a sugar refinery. The beet pulp is processed on site to become cattle fodder. Sugar refinery The next steps to produce white sugar are not specific to producing sugar from sugar beet. They also apply to producing white sugar from sugar cane. As such, they belong to the sugar refining process, not to the beet sugar production process per se. Purification: the raw juice undergoes a chemical process to remove impurities and create thin juice. Evaporation: the thin juice is concentrated by evaporation to make a "thick juice", roughly 60% sucrose by weight (a rough mass balance for this step is sketched below). Crystallization: by boiling under reduced pressure, the sugar liquor is turned into crystals and a remaining liquor. Centrifugation: in a centrifuge, the white sugar crystals are separated from the remaining sugar liquor. The remaining liquor is then boiled and centrifuged, producing a lower grade of crystallised sugar (which is redissolved to feed the white sugar pans) and molasses. 
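To make the evaporation step above concrete, here is a minimal back-of-the-envelope mass balance. The 60% sucrose figure for thick juice comes from the text; the 15% sucrose content assumed for the incoming thin juice is purely illustrative and is not given in the source.

```python
# Rough mass balance for concentrating thin juice into "thick juice".
thin_juice_kg = 100.0
thin_sucrose_fraction = 0.15       # assumed, illustrative only (not from the text)
thick_sucrose_fraction = 0.60      # "roughly 60% sucrose by weight" (from the text)

sucrose_kg = thin_juice_kg * thin_sucrose_fraction        # 15 kg of sucrose
thick_juice_kg = sucrose_kg / thick_sucrose_fraction      # 25 kg of thick juice
water_evaporated_kg = thin_juice_kg - thick_juice_kg      # 75 kg of water removed

print(f"{water_evaporated_kg:.0f} kg of water evaporated per {thin_juice_kg:.0f} kg of thin juice")
```

Under this assumption, roughly three quarters of the mass of the thin juice has to be boiled off as water, which is why evaporation is such an energy-intensive part of the process.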
Further sugar can be recovered from the molasses by methods such as the Steffen Process. Ethanol and alcohol From molasses There are two main methods to produce alcohol (ethanol) from sugar beet. The first method produces alcohol as a byproduct of manufacturing sugar: it involves fermenting the sugar beet molasses that are left after the (second) centrifugation. This strongly resembles the manufacture of rum from sugar cane molasses. In a number of countries, notably the Czech Republic and Slovakia, this analogy led to making a rum-like distilled spirit called Tuzemak. On the Åland Islands, a similar drink is made under the brand name Kobba Libre. From sugar beet The second method is to ferment the sugar beets themselves, i.e. without attempting to produce sugar. The idea of distilling alcohol from the beet came up soon after the first beet sugar factory was established. Between 1852 and 1854, Champonnois devised an effective system to distill alcohol from sugar beet. Within a few years, a large beet-distilling industry was created in France. The current process to produce alcohol by fermenting and distilling sugar beet consists of these steps: adding starch milk; liquefaction and saccharification; fermentation in fermentation vats; distillation; dehydration, which results in bioethanol; rectification; and refining, which yields a highly pure alcohol. Large sugar beet distilleries remain limited to Europe. In 2023, Tereos had eight beet sugar distilleries, located in France, Czechia and Romania. In many European countries, rectified spirit from sugar beet is used to make liquors such as vodka and gin. Other uses Sugary syrup An unrefined sugary syrup can be produced directly from sugar beet. This thick, dark syrup is produced by cooking shredded sugar beet for several hours, then pressing the resulting mash and concentrating the juice produced until it has a consistency similar to that of honey. No other ingredients are used. In Germany, particularly the Rhineland area, and in the Netherlands, this sugar beet syrup (called Zuckerrüben-Sirup or Zapp in German, or Suikerstroop in Dutch) is used as a spread for sandwiches, as well as for sweetening sauces, cakes and desserts. Dutch people generally top their pancakes with stroop. Suikerstroop made according to the Dutch tradition is a Traditional Speciality Guaranteed under EU and UK law. Commercially, if the syrup has a dextrose equivalency (DE) above 30, the product has to be hydrolyzed and converted to a high-fructose syrup, much like high-fructose corn syrup, or isoglucose syrup in the EU. Uridine Uridine can be isolated from sugar beet. Alternative fuel BP and Associated British Foods plan to use agricultural surpluses of sugar beet to produce biobutanol in East Anglia in the United Kingdom. The feedstock-to-yield ratio for sugar beet is 56:9. Therefore, it takes 6.22 kg of sugar beet to produce 1 kg of ethanol (approximately 1.27 L at room temperature); see the short calculation below. In 2006, it was found that producing ethanol from sugar beet or cane became profitable when market prices for ethanol were close to $4 per gallon. According to Atlantic Biomass president Robert Kozak, a study at the University of Maryland Eastern Shore indicates sugar beets appear capable of producing 860 to 900 gallons (3,256 to 3,407 liters) of ethanol per acre. Cattle feed In New Zealand, sugar beet is widely grown and harvested as feed for dairy cattle. It is regarded as superior to fodder beet, because it has a lower water content (resulting in better storage properties). 
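The feedstock-to-yield figures quoted above can be checked with a couple of lines of arithmetic. The 56:9 ratio is from the text; the ethanol density of about 0.789 kg/L used to convert mass to volume is a standard room-temperature value assumed here, not stated in the source.

```python
# Checking the sugar-beet-to-ethanol figures quoted above.
feedstock_to_yield = 56 / 9            # kg of beet per kg of ethanol (56:9, from the text)
ethanol_density_kg_per_l = 0.789       # assumed room-temperature density, not from the text

print(round(feedstock_to_yield, 2))            # 6.22 kg of beet per kg of ethanol
print(round(1 / ethanol_density_kg_per_l, 2))  # 1.27 L of ethanol per kg
```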
Both the beet bulb and the leaves (with 25% protein) are fed to cattle. Although long considered toxic to cattle, harvested beet bulbs can be fed to them if the animals are appropriately transitioned to their new diet. Dairy cattle in New Zealand can thrive on just pasture and beets, without silage or other supplementary feed. The crop is also now grown in some parts of Australia as cattle feed. Monosodium glutamate Molasses can be used to produce monosodium glutamate (MSG). Agriculture Sugar beets are an important part of a crop rotation cycle. Sugar beet plants are susceptible to rhizomania ("root madness"), which turns the bulbous tap root into many small roots, making the crop economically unprocessable. Strict controls are enforced in European countries to prevent its spread, but it is already present in some areas. The crop is also susceptible to the beet leaf curl virus, which causes crinkling and stunting of the leaves, and to beet yellows virus. Continual research looks for varieties with resistance, as well as increased sugar yield. Sugar beet breeding research in the United States is most prominently conducted at various USDA Agricultural Research Stations, including one in Fort Collins, Colorado, headed by Linda Hanson and Leonard Panella; one in Fargo, North Dakota, headed by John Wieland; and one at Michigan State University in East Lansing, Michigan, headed by Rachel Naegele. Other economically important members of the subfamily Chenopodioideae include beetroot, chard, and the mangelwurzel or fodder beet. Genetic modification In the United States, Monsanto developed genetically modified sugar beets engineered for resistance to glyphosate, a herbicide marketed as Roundup. In 2005, the US Department of Agriculture-Animal and Plant Health Inspection Service (USDA-APHIS) deregulated glyphosate-resistant sugar beets after it conducted an environmental assessment and determined glyphosate-resistant sugar beets were highly unlikely to become a plant pest. Sugar from glyphosate-resistant sugar beets has been approved for human and animal consumption in multiple countries, but commercial production of biotech beets has been approved only in the United States and Canada. Studies have concluded the sugar from glyphosate-resistant sugar beets has the same nutritional value as sugar from conventional sugar beets. After deregulation in 2005, glyphosate-resistant sugar beets were extensively adopted in the United States. About 95% of sugar beet acres in the US were planted with glyphosate-resistant seed in 2011. Weeds may be chemically controlled using glyphosate without harming the crop. After planting sugar beet seed, weeds emerge in fields and growers apply glyphosate to control them. Glyphosate is commonly used in field crops because it controls a broad spectrum of weed species and has a low toxicity. A study from the UK suggests yields of genetically modified beet were greater than those of conventional beet, while another from the North Dakota State University extension service found lower yields. The introduction of glyphosate-resistant sugar beets may contribute to the growing number of glyphosate-resistant weeds, so Monsanto has developed a program to encourage growers to use different herbicide modes of action to control their weeds. In 2008, the Center for Food Safety, the Sierra Club, the Organic Seed Alliance and High Mowing Seeds filed a lawsuit against USDA-APHIS regarding their decision to deregulate glyphosate-resistant sugar beets in 2005. 
The organizations expressed concerns that glyphosate-resistant sugar beets could cross-pollinate with conventional sugar beets. U.S. District Judge Jeffrey S. White of the US District Court for the Northern District of California revoked the deregulation of glyphosate-resistant sugar beets and declared it unlawful for growers to plant them in the spring of 2011. Believing a sugar shortage would occur, USDA-APHIS developed three options in the environmental assessment to address the concerns of environmentalists. In 2011, a federal appeals court for the Northern District of California in San Francisco overturned the ruling. In July 2012, after completing an environmental impact assessment and a plant pest risk assessment, the USDA deregulated Monsanto's Roundup Ready sugar beets. Genome and genetics The sugar beet genome shares a triplication event that occurred somewhere above the Caryophyllales and at or below the eudicots. The genome has been sequenced, and two reference genome sequences have been generated. The genome size of the sugar beet is approximately 731 (714–758) megabases, and sugar beet DNA is packaged in 18 metacentric chromosomes (2n=2x=18). All sugar beet centromeres are made up of a single satellite DNA family and centromere-specific LTR retrotransposons. More than 60% of sugar beet's DNA is repetitive, mostly distributed in a dispersed way along the chromosomes. Crop wild beet populations (B. vulgaris ssp. maritima) have been sequenced as well, allowing for identification of the resistance gene Rz2 in the wild progenitor. Rz2 confers resistance to rhizomania, commonly known as the sugar beet root madness disease. Breeding Sugar beets have been bred for increased sugar content, from 8% to 18% over roughly 200 years, as well as for resistance to viral and fungal diseases, increased taproot size, monogermy, and less bolting. Breeding has been eased by the discovery of a cytoplasmic male sterility line, which has been especially useful in yield breeding.
Biology and health sciences
Caryophyllales
null
50263
https://en.wikipedia.org/wiki/Domain%20of%20a%20function
Domain of a function
In mathematics, the domain of a function is the set of inputs accepted by the function. It is sometimes denoted by dom(f) or dom f, where f is the function. In layman's terms, the domain of a function can generally be thought of as "what x can be". More precisely, given a function f : X → Y, the domain of f is X. In modern mathematical language, the domain is part of the definition of a function rather than a property of it. In the special case that X and Y are both sets of real numbers, the function f can be graphed in the Cartesian coordinate system. In this case, the domain is represented on the x-axis of the graph, as the projection of the graph of the function onto the x-axis. For a function f : X → Y, the set Y is called the codomain: the set to which all outputs must belong. The set of specific outputs the function assigns to elements of X is called its range or image; the image of f is a subset of Y. Any function can be restricted to a subset of its domain. The restriction of f to A, where A ⊆ X, is written as f|A. Natural domain If a real function f is given by a formula, it may not be defined for some values of the variable. In this case, it is a partial function, and the set of real numbers on which the formula can be evaluated to a real number is called the natural domain or domain of definition of f. In many contexts, a partial function is called simply a function, and its natural domain is called simply its domain. Examples The function f defined by f(x) = 1/x cannot be evaluated at 0. Therefore, the natural domain of f is the set of real numbers excluding 0, which can be denoted by ℝ \ {0} or (−∞, 0) ∪ (0, ∞). A piecewise function f can be defined so that its natural domain is the whole set of real numbers. The square root function has as its natural domain the set of non-negative real numbers, which can be denoted by ℝ≥0, the interval [0, ∞), or {x ∈ ℝ : x ≥ 0}. The tangent function, denoted tan, has as its natural domain the set of all real numbers which are not of the form π/2 + kπ for some integer k, which can be written as ℝ \ {π/2 + kπ : k ∈ ℤ}. Other uses The term domain is also commonly used in a different sense in mathematical analysis: a domain is a non-empty connected open set in a topological space. In particular, in real and complex analysis, a domain is a non-empty connected open subset of the real coordinate space ℝⁿ or the complex coordinate space ℂⁿ. Sometimes such a domain is used as the domain of a function, although functions may be defined on more general sets. The two concepts are sometimes conflated as in, for example, the study of partial differential equations: in that case, a domain is the open connected subset of ℝⁿ where a problem is posed, making it both an analysis-style domain and also the domain of the unknown function(s) sought. Set theoretical notions It is sometimes convenient in set theory to permit the domain of a function to be a proper class X, in which case there is formally no such thing as a triple (X, Y, G). With such a definition, functions do not have a domain, although some authors still use it informally after introducing a function in the form f : X → Y.
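To make the idea of a natural domain concrete in code, here is a minimal sketch in Python. The helper name in_natural_domain and the use of floating-point evaluation are illustrative choices, not anything from the article; note in particular the comment about π/2, where floating-point rounding hides the gap in the domain of the tangent function.

```python
import math

def in_natural_domain(f, x):
    """Return True if the formula f can be evaluated to a finite real number at x."""
    try:
        return math.isfinite(f(x))
    except (ValueError, ZeroDivisionError):
        return False

print(in_natural_domain(lambda t: 1 / t, 0.0))   # False: 0 is excluded from the domain of 1/x
print(in_natural_domain(math.sqrt, -2.0))        # False: negative reals are excluded for sqrt
print(in_natural_domain(math.sqrt, 2.0))         # True
# Caveat: math.pi / 2 is only an approximation of pi/2, so tan evaluates to a huge but
# finite number here even though pi/2 itself lies outside the natural domain of tan.
print(in_natural_domain(math.tan, math.pi / 2))  # True (numerically), illustrating the caveat
```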
Mathematics
Functions: General
null
50337
https://en.wikipedia.org/wiki/Fructose
Fructose
Fructose (), or fruit sugar, is a ketonic simple sugar found in many plants, where it is often bonded to glucose to form the disaccharide sucrose. It is one of the three dietary monosaccharides, along with glucose and galactose, that are absorbed by the gut directly into the blood of the portal vein during digestion. The liver then converts most fructose and galactose into glucose for distribution in the bloodstream or deposition into glycogen. Fructose was discovered by French chemist Augustin-Pierre Dubrunfaut in 1847. The name "fructose" was coined in 1857 by the English chemist William Allen Miller. Pure, dry fructose is a sweet, white, odorless, crystalline solid, and is the most water-soluble of all the sugars. Fructose is found in honey, tree and vine fruits, flowers, berries, and most root vegetables. Commercially, fructose is derived from sugar cane, sugar beets, and maize. High-fructose corn syrup is a mixture of glucose and fructose as monosaccharides. Sucrose is a compound with one molecule of glucose covalently linked to one molecule of fructose. All forms of fructose, including those found in fruits and juices, are commonly added to foods and drinks for palatability and taste enhancement, and for browning of some foods, such as baked goods. As of 2004, about 240,000 tonnes of crystalline fructose were being produced annually. Excessive consumption of sugars, including fructose, (especially from sugar-sweetened beverages) may contribute to insulin resistance, obesity, elevated LDL cholesterol and triglycerides, leading to metabolic syndrome. The European Food Safety Authority (EFSA) stated in 2011 that fructose may be preferable over sucrose and glucose in sugar-sweetened foods and beverages because of its lower effect on postprandial blood sugar levels, while also noting the potential downside that "high intakes of fructose may lead to metabolic complications such as dyslipidaemia, insulin resistance, and increased visceral adiposity". The UK's Scientific Advisory Committee on Nutrition in 2015 disputed the claims of fructose causing metabolic disorders, stating that "there is insufficient evidence to demonstrate that fructose intake, at levels consumed in the normal UK diet, leads to adverse health outcomes independent of any effects related to its presence as a component of total and free sugars." Etymology The word "fructose" was coined in 1857 from the Latin for fructus (fruit) and the generic chemical suffix for sugars, -ose. It is also called fruit sugar and levulose or laevulose, due to its ability to rotate plane polarised light in a laevorotary fashion (anti-clockwise/to the left) when a beam is shone through it in solution. Likewise, dextrose (an isomer of glucose) is given its name due to its ability to rotate plane polarised light in a dextrorotary fashion (clockwise/to the right). Chemical properties Fructose is a 6-carbon polyhydroxyketone. Crystalline fructose adopts a cyclic six-membered structure, called β--fructopyranose, owing to the stability of its hemiketal and internal hydrogen-bonding. In solution, fructose exists as an equilibrium mixture of the tautomers β--fructopyranose, β--fructofuranose, α--fructofuranose, α--fructopyranose and keto--fructose (the non-cyclic form). The distribution of -fructose tautomers in solution is related to several variables, such as solvent and temperature. -Fructopyranose and -fructofuranose distributions in water have been identified multiple times as roughly 70% fructopyranose and 22% fructofuranose. 
Reactions Fructose and fermentation Fructose may be anaerobically fermented by yeast and bacteria. Yeast enzymes convert sugar (sucrose, glucose, and fructose, but not lactose) to ethanol and carbon dioxide. Some of the carbon dioxide produced during fermentation will remain dissolved in water, where it will reach equilibrium with carbonic acid. The dissolved carbon dioxide and carbonic acid produce the carbonation in some fermented beverages, such as champagne. Fructose and Maillard reaction Fructose undergoes the Maillard reaction, non-enzymatic browning, with amino acids. Because fructose exists to a greater extent in the open-chain form than does glucose, the initial stages of the Maillard reaction occur more rapidly than with glucose. Therefore, fructose has potential to contribute to changes in food palatability, as well as other nutritional effects, such as excessive browning, volume and tenderness reduction during cake preparation, and formation of mutagenic compounds. Dehydration Fructose readily dehydrates to give hydroxymethylfurfural ("HMF", ), which can be processed into liquid dimethylfuran (). This process, in the future, may become part of a low-cost, carbon-neutral system to produce replacements for petrol and diesel from plants. Physical and functional properties Sweetness of fructose The primary reason that fructose is used commercially in foods and beverages, besides its low cost, is its high relative sweetness. It is the sweetest of all naturally occurring carbohydrates. The relative sweetness of fructose has been reported in the range of 1.2–1.8 times that of sucrose. However, it is the 6-membered ring form of fructose that is sweeter; the 5-membered ring form tastes about the same as usual table sugar. Warming fructose leads to formation of the 5-membered ring form. Therefore, the relative sweetness decreases with increasing temperature. However, it has been observed that the absolute sweetness of fructose is identical at 5 °C as 50 °C and thus the relative sweetness to sucrose is not due to anomeric distribution but a decrease in the absolute sweetness of sucrose at higher temperatures. The sweetness of fructose is perceived earlier than that of sucrose or glucose, and the taste sensation reaches a peak (higher than that of sucrose), and diminishes more quickly than that of sucrose. Fructose can also enhance other flavors in the system. Fructose exhibits a sweetness synergy effect when used in combination with other sweeteners. The relative sweetness of fructose blended with sucrose, aspartame, or saccharin is perceived to be greater than the sweetness calculated from individual components. Fructose solubility and crystallization Fructose has higher water solubility than other sugars, as well as other sugar alcohols. Fructose is, therefore, difficult to crystallize from an aqueous solution. Sugar mixes containing fructose, such as candies, are softer than those containing other sugars because of the greater solubility of fructose. Fructose hygroscopicity and humectancy Fructose is quicker to absorb moisture and slower to release it to the environment than sucrose, glucose, or other nutritive sweeteners. Fructose is an excellent humectant and retains moisture for a long period of time even at low relative humidity (RH). Therefore, fructose can contribute a more palatable texture, and longer shelf life to the food products in which it is used. 
Freezing point Fructose has a greater effect on freezing point depression than disaccharides or oligosaccharides, which may protect the integrity of cell walls of fruit by reducing ice crystal formation. However, this characteristic may be undesirable in soft-serve or hard-frozen dairy desserts. Fructose and starch functionality in food systems Fructose increases starch viscosity more rapidly and achieves a higher final viscosity than sucrose because fructose lowers the temperature required during gelatinizing of starch, causing a greater final viscosity. Although some artificial sweeteners are not suitable for home baking, many traditional recipes use fructose. Food sources Natural sources of fructose include fruits, vegetables (including sugar cane), and honey. Fructose is often further concentrated from these sources. The highest dietary sources of fructose, besides pure crystalline fructose, are foods containing white sugar (sucrose), high-fructose corn syrup, agave nectar, honey, molasses, maple syrup, fruit and fruit juices, as these have the highest percentages of fructose (including fructose in sucrose) per serving compared to other common foods and ingredients. Fructose exists in foods either as a free monosaccharide or bound to glucose as sucrose, a disaccharide. Fructose, glucose, and sucrose may all be present in food; however, different foods will have varying levels of each of these three sugars. The sugar contents of common fruits and vegetables are presented in Table 1. In general, in foods that contain free fructose, the ratio of fructose to glucose is approximately 1:1; that is, foods with fructose usually contain about an equal amount of free glucose. A value that is above 1 indicates a higher proportion of fructose to glucose and below 1 a lower proportion. Some fruits have larger proportions of fructose to glucose compared to others. For example, apples and pears contain more than twice as much free fructose as glucose, while for apricots the proportion is less than half as much fructose as glucose. Apple and pear juices are of particular interest to pediatricians because the high concentrations of free fructose in these juices can cause diarrhea in children. The cells (enterocytes) that line children's small intestines have less affinity for fructose absorption than for glucose and sucrose. Unabsorbed fructose creates higher osmolarity in the small intestine, which draws water into the gastrointestinal tract, resulting in osmotic diarrhea. This phenomenon is discussed in greater detail in the Health Effects section. Table 1 also shows the amount of sucrose found in common fruits and vegetables. Sugarcane and sugar beet have a high concentration of sucrose, and are used for commercial preparation of pure sucrose. Extracted cane or beet juice is clarified, removing impurities; and concentrated by removing excess water. The end product is 99.9%-pure sucrose. Sucrose-containing sugars include common white sugar and powdered sugar, as well as brown sugar. The carbohydrate figure is calculated in FoodData Central and does not always correspond to the sum of the sugars, the starch, and the "dietary fiber". All data with a unit of g (gram) are based on 100 g of a food item. The fructose/glucose ratio is calculated by dividing the sum of free fructose plus half sucrose by the sum of free glucose plus half sucrose. 
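As a worked illustration of the ratio defined in the last sentence above, here is a short sketch. The function name and the sample figures are hypothetical, chosen only to show the calculation; they are not values taken from Table 1.

```python
def fructose_glucose_ratio(free_fructose_g, free_glucose_g, sucrose_g):
    """Ratio as defined above: (free fructose + half of the sucrose) divided by
    (free glucose + half of the sucrose), since sucrose is one part glucose and
    one part fructose. Inputs are grams per 100 g of the food."""
    return (free_fructose_g + sucrose_g / 2) / (free_glucose_g + sucrose_g / 2)

# Hypothetical, apple-like figures purely for illustration (not from Table 1):
ratio = fructose_glucose_ratio(free_fructose_g=5.9, free_glucose_g=2.4, sucrose_g=2.1)
print(round(ratio, 2))  # about 2.0, i.e. roughly twice as much fructose as glucose
```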
Fructose is also found in the manufactured sweetener, high-fructose corn syrup (HFCS), which is produced by treating corn syrup with enzymes, converting glucose into fructose. The common designations for fructose content, HFCS-42 and HFCS-55, indicate the percentage of fructose present in HFCS. HFCS-55 is commonly used as a sweetener for soft drinks, whereas HFCS-42 is used to sweeten processed foods, breakfast cereals, bakery foods, and some soft drinks. Carbohydrate content of commercial sweeteners (percent on dry basis) for HFCS, and USDA for fruits and vegetables and the other refined sugars. Cane and beet sugars have been used as the major sweetener in food manufacturing for centuries. However, with the development of HFCS, a significant shift occurred in the type of sweetener consumption in certain countries, particularly the United States. Contrary to the popular belief, however, with the increase of HFCS consumption, the total fructose intake relative to the total glucose intake has not dramatically changed. Granulated sugar is 99.9%-pure sucrose, which means that it has equal ratio of fructose to glucose. The most commonly used forms of HFCS, HFCS-42, and HFCS-55, have a roughly equal ratio of fructose to glucose, with minor differences. HFCS has simply replaced sucrose as a sweetener. Therefore, despite the changes in the sweetener consumption, the ratio of glucose to fructose intake has remained relatively constant. Nutritional information Providing 368 kcal per 100 grams of dry powder (table), fructose has 95% the caloric value of sucrose by weight. Fructose powder is 100% carbohydrates and supplies no other nutrients in significant amount (table). Fructose digestion and absorption in humans Fructose exists in foods either as a monosaccharide (free fructose) or as a unit of a disaccharide (sucrose). Free fructose is a ketonic simple sugar and one of the three dietary monosaccharides absorbed directly by the intestine. When fructose is consumed in the form of sucrose, it is digested (broken down) and then absorbed as free fructose. As sucrose comes into contact with the membrane of the small intestine, the enzyme sucrase catalyzes the cleavage of sucrose to yield one glucose unit and one fructose unit, which are then each absorbed. After absorption, it enters the hepatic portal vein and is directed toward the liver. The mechanism of fructose absorption in the small intestine is not completely understood. Some evidence suggests active transport, because fructose uptake has been shown to occur against a concentration gradient. However, the majority of research supports the claim that fructose absorption occurs on the mucosal membrane via facilitated transport involving GLUT5 transport proteins. Since the concentration of fructose is higher in the lumen, fructose is able to flow down a concentration gradient into the enterocytes, assisted by transport proteins. Fructose may be transported out of the enterocyte across the basolateral membrane by either GLUT2 or GLUT5, although the GLUT2 transporter has a greater capacity for transporting fructose, and, therefore, the majority of fructose is transported out of the enterocyte through GLUT2. Capacity and rate of absorption The absorption capacity for fructose in monosaccharide form ranges from less than 5 g to 50 g (per individual serving) and adapts with changes in dietary fructose intake. Studies show the greatest absorption rate occurs when glucose and fructose are administered in equal quantities. 
When fructose is ingested as part of the disaccharide sucrose, absorption capacity is much higher because fructose exists in a 1:1 ratio with glucose. It appears that the GLUT5 transfer rate may be saturated at low levels, and absorption is increased through joint absorption with glucose. One proposed mechanism for this phenomenon is a glucose-dependent cotransport of fructose. In addition, fructose transfer activity increases with dietary fructose intake. The presence of fructose in the lumen causes increased mRNA transcription of GLUT5, leading to increased transport proteins. High-fructose diets (>2.4 g/kg body wt) increase the transport proteins within three days of intake. Malabsorption Several studies have measured the intestinal absorption of fructose using the hydrogen breath test. These studies indicate that fructose is not completely absorbed in the small intestine. When fructose is not absorbed in the small intestine, it is transported into the large intestine, where it is fermented by the colonic flora. Hydrogen is produced during the fermentation process and dissolves into the blood of the portal vein. This hydrogen is transported to the lungs, where it is exchanged across the lungs and is measurable by the hydrogen breath test. The colonic flora also produces carbon dioxide, short-chain fatty acids, organic acids, and trace gases in the presence of unabsorbed fructose. The presence of gases and organic acids in the large intestine causes gastrointestinal symptoms such as bloating, diarrhea, flatulence, and gastrointestinal pain. Exercise immediately after consumption can exacerbate these symptoms by decreasing transit time in the small intestine, resulting in a greater amount of fructose emptied into the large intestine. Fructose metabolism All three dietary monosaccharides are transported into the liver by the GLUT2 transporter. Fructose and galactose are phosphorylated in the liver by fructokinase (Km= 0.5 mM) and galactokinase (Km = 0.8 mM), respectively. By contrast, glucose tends to pass through the liver (Km of hepatic glucokinase = 10 mM) and can be metabolised anywhere in the body. Uptake of fructose by the liver is not regulated by insulin. However, insulin is capable of increasing the abundance and functional activity of GLUT5, fructose transporter, in skeletal muscle cells. Fructolysis The initial catabolism of fructose is sometimes referred to as fructolysis, in analogy with glycolysis, the catabolism of glucose. In fructolysis, the enzyme fructokinase initially produces fructose 1-phosphate, which is split by aldolase B to produce the trioses dihydroxyacetone phosphate (DHAP) and glyceraldehyde. Unlike glycolysis, in fructolysis the triose glyceraldehyde lacks a phosphate group. A third enzyme, triokinase, is therefore required to phosphorylate glyceraldehyde, producing glyceraldehyde 3-phosphate. The resulting trioses are identical to those obtained in glycolysis and can enter the gluconeogenic pathway for glucose or glycogen synthesis, or be further catabolized through the lower glycolytic pathway to pyruvate. Metabolism of fructose to DHAP and glyceraldehyde The first step in the metabolism of fructose is the phosphorylation of fructose to fructose 1-phosphate by fructokinase, thus trapping fructose for metabolism in the liver. 
Fructose 1-phosphate then undergoes hydrolysis by aldolase B to form DHAP and glyceraldehydes; DHAP can either be isomerized to glyceraldehyde 3-phosphate by triosephosphate isomerase or undergo reduction to glycerol 3-phosphate by glycerol 3-phosphate dehydrogenase. The glyceraldehyde produced may also be converted to glyceraldehyde 3-phosphate by glyceraldehyde kinase or further converted to glycerol 3-phosphate by glycerol 3-phosphate dehydrogenase. The metabolism of fructose at this point yields intermediates in the gluconeogenic pathway leading to glycogen synthesis as well as fatty acid and triglyceride synthesis. Synthesis of glycogen from DHAP and glyceraldehyde 3-phosphate The resultant glyceraldehyde formed by aldolase B then undergoes phosphorylation to glyceraldehyde 3-phosphate. Increased concentrations of DHAP and glyceraldehyde 3-phosphate in the liver drive the gluconeogenic pathway toward glucose and subsequent glycogen synthesis. It appears that fructose is a better substrate for glycogen synthesis than glucose and that glycogen replenishment takes precedence over triglyceride formation. Once liver glycogen is replenished, the intermediates of fructose metabolism are primarily directed toward triglyceride synthesis. Synthesis of triglyceride from DHAP and glyceraldehyde 3-phosphate Carbons from dietary fructose are found in both the free fatty acid and glycerol moieties of plasma triglycerides. High fructose consumption can lead to excess pyruvate production, causing a buildup of Krebs cycle intermediates. Accumulated citrate can be transported from the mitochondria into the cytosol of hepatocytes, converted to acetyl CoA by citrate lyase and directed toward fatty acid synthesis. In addition, DHAP can be converted to glycerol 3-phosphate, providing the glycerol backbone for the triglyceride molecule. Triglycerides are incorporated into very-low-density lipoproteins (VLDL), which are released from the liver destined toward peripheral tissues for storage in both fat and muscle cells. Potential health effects In 2022, the European Food Safety Authority stated that there is research evidence that fructose and other added free sugars may be associated with increased risk of several chronic diseases: the risk is moderate for obesity and dyslipidemia (more than 50%), and low for non-alcoholic fatty liver disease, type 2 diabetes (from 15% to 50%) and hypertension. EFSA further stated that clinical research did "not support a positive relationship between the intake of dietary sugars, in isocaloric exchange with other macronutrients, and any of the chronic metabolic diseases or pregnancy-related endpoints assessed" but advised "the intake of added and free sugars should be as low as possible in the context of a nutritionally adequate diet." Cardiometabolic diseases When fructose is consumed in excess as a sweetening agent in foods or beverages, it may be associated with increased risk of obesity, diabetes, and cardiovascular disorders that are part of metabolic syndrome. Compared with sucrose Fructose was found to increase triglycerides in type-2 but not type-1 diabetes and moderate use of it has previously been considered acceptable as a sweetener for diabetics, possibly because it does not trigger the production of insulin by pancreatic β cells. For a 50 gram reference amount, fructose has a glycemic index of 23, compared with 100 for glucose and 60 for sucrose. Fructose is also 73% sweeter than sucrose at room temperature, allowing diabetics to use less of it per serving. 
Fructose consumed before a meal may reduce the glycemic response of the meal. Fructose-sweetened food and beverage products cause less of a rise in blood glucose levels than do those manufactured with either sucrose or glucose.
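The practical meaning of the "73% sweeter than sucrose at room temperature" figure quoted earlier can be shown with a one-line calculation; the 100 g serving below is an arbitrary example, not a value from the text.

```python
# Amount of fructose with roughly the same sweetness as 100 g of sucrose,
# using the "73% sweeter at room temperature" figure quoted above.
relative_sweetness = 1.73              # fructose relative to sucrose
sucrose_g = 100                        # arbitrary example serving
equally_sweet_fructose_g = sucrose_g / relative_sweetness
print(round(equally_sweet_fructose_g, 1))   # about 57.8 g
```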
Biology and health sciences
Biochemistry and molecular biology
null
50357
https://en.wikipedia.org/wiki/Larva
Larva
A larva (plural: larvae) is a distinct juvenile form many animals undergo before metamorphosis into their next life stage. Animals with indirect development, such as insects, some arachnids, amphibians, or cnidarians, typically have a larval phase of their life cycle. A larva's appearance is generally very different from the adult form (e.g. caterpillars and butterflies), including unique structures and organs that do not occur in the adult form. Their diet may also be considerably different. In the case of smaller primitive arachnids, the larval stage differs by having three instead of four pairs of legs. Larvae are frequently adapted to environments different from those of adults. For example, some larvae such as tadpoles live almost exclusively in aquatic environments, but can live outside water as adult frogs. By living in a distinct environment, larvae may be given shelter from predators and reduce competition for resources with the adult population. Animals in the larval stage will consume food to fuel their transition into the adult form. In some organisms like polychaetes and barnacles, adults are immobile but their larvae are mobile, and use their mobile larval form to distribute themselves. These larvae used for dispersal are either planktotrophic (feeding) or lecithotrophic (non-feeding). Some larvae are dependent on adults to feed them. In many eusocial Hymenoptera species, the larvae are fed by female workers. In Ropalidia marginata (a paper wasp), males are also capable of feeding larvae, but they are much less efficient, spending more time and delivering less food to the larvae. The larvae of some organisms (for example, some newts) can become sexually mature without developing further into the adult form. This is a type of neoteny. It is a misunderstanding that the larval form always reflects the group's evolutionary history. This could be the case, but often the larval stage has evolved secondarily, as in insects. In these cases, the larval form may differ more than the adult form from the group's common origins. Selected types of larvae Insect larvae Within insects, only endopterygotes show complete metamorphosis, including a distinct larval stage. Several classifications have been suggested by entomologists; the following is based on Antonio Berlese's classification of 1913. There are four main types of endopterygote larvae: Apodous larvae – have no legs at all and are poorly sclerotized. All Apocrita are apodous. Three apodous forms are recognized, based on sclerotization: Eucephalous – with a well-sclerotized head capsule. Found in the Nematocera, Buprestidae and Cerambycidae. Hemicephalus – with a reduced head capsule, retractable into the thorax. Found in the Tipulidae and Brachycera. Acephalus – without a head capsule. Found in the Cyclorrhapha. Protopod larvae – take many different forms, often unlike a normal insect form. They hatch from eggs which contain very little yolk, e.g. the first-instar larvae of parasitic Hymenoptera. Polypod larvae – also known as eruciform larvae, these larvae have abdominal prolegs in addition to the usual thoracic legs. They are poorly sclerotized and relatively inactive. They live in close contact with their food. The best examples are the caterpillars of lepidopterans. Oligopod larvae – have a well-developed head capsule and mouthparts similar to those of the adult, but lack compound eyes. They have six legs. No abdominal prolegs. Two types can be seen: Campodeiform – well-sclerotized, dorso-ventrally flattened body. 
Usually long-legged predators with prognathous mouthparts (lacewings, trichopterans, mayflies and some coleopterans). Scarabaeiform – poorly sclerotized, with a flat thorax and abdomen. Usually short-legged, inactive burrowing forms (Scarabaeoidea and other coleopterans).
Biology and health sciences
Animal ontogeny
null
50378
https://en.wikipedia.org/wiki/High-speed%20rail
High-speed rail
High-speed rail (HSR) is a type of rail transport network utilising trains that run significantly faster than those of traditional rail, using an integrated system of specialised rolling stock and dedicated tracks. While there is no single definition or standard that applies worldwide, lines built to handle speeds above or upgraded lines in excess of are generally considered to be high-speed. The first high-speed rail system, the Tōkaidō Shinkansen, began operations in Honshu, Japan, in 1964. Due to the streamlined spitzer-shaped nose cone of the trains, the system also became known by its English nickname bullet train. Japan's example was followed by several European countries, initially in Italy with the Direttissima line, followed shortly thereafter by France, Germany, and Spain. Today, much of Europe has an extensive network with numerous international connections. More recent construction since the 21st century has led to China taking a leading role in high-speed rail. , China's HSR network accounted for over two-thirds of the world's total. In addition to these, many other countries have developed high-speed rail infrastructure to connect major cities, including: Austria, Belgium, Denmark, Finland, Greece, Indonesia, Morocco, the Netherlands, Norway, Poland, Portugal, Russia, Saudi Arabia, Serbia, South Korea, Sweden, Switzerland, Taiwan, Turkey, the United Kingdom, the United States, and Uzbekistan. Only in continental Europe and Asia does high-speed rail cross international borders. High-speed trains mostly operate on standard gauge tracks of continuously welded rail on grade-separated rights of way with large radii. However, certain regions with wider legacy railways, including Russia and Uzbekistan, have sought to develop a high-speed railway network in Russian gauge. There are no narrow gauge high-speed railways. Countries whose legacy network is entirely or mostly of a different gauge than 1435mm – including Japan and Spain – have however often opted to build their high speed lines to standard gauge instead of the legacy railway gauge. High-speed rail is the fastest and most efficient ground-based method of commercial transportation. However, due to requirements for large track curves, gentle gradients and grade separated track the construction of high-speed rail is more costly than conventional rail and therefore does not always present an economical advantage over conventional speed rail. Definitions Multiple definitions for high-speed rail are in use worldwide, with various international organisations and regional bodies establishing different standards. Several countries have also developed their own legal definitions and technical standards for high-speed rail. International Union of Railways definition The International Union of Railways (UIC) identifies three categories of high-speed rail: Category I: New tracks specially constructed for high speeds, allowing a maximum running speed of at least 250 km/h (155 mph). Category II: Existing tracks specially upgraded for high speeds, allowing a maximum running speed of at least 200 km/h (124 mph). Category III: Existing tracks specially upgraded for high speeds, allowing a maximum running speed of at least 200 km/h, but with some sections having a lower allowable speed (for example due to topographic constraints, or passage through urban areas). 
A third definition of high-speed and very high-speed rail requires simultaneous fulfilment of the following two conditions: Maximum achievable running speed in excess of , or for very high-speed, Average running speed across the corridor in excess of , or for very high-speed. The International Union of Railways prefers to use "definitions" (plural) because they consider that there is no single standard definition of high-speed rail, nor even standard usage of the terms ("high speed", or "very high speed"). They make use of the European EC Directive 96/48, stating that high speed is a combination of all the elements which constitute the system: infrastructure, rolling stock and operating conditions. The International Union of Railways states that high-speed rail is a set of unique features, not merely a train travelling above a particular speed. Many conventionally hauled trains are able to reach in commercial service but are not considered to be high-speed trains. These include the French SNCF Intercités and German DB IC. The criterion of is selected for several reasons; above this speed, the impacts of geometric defects are intensified, track adhesion is decreased, aerodynamic resistance is greatly increased, pressure fluctuations within tunnels cause passenger discomfort, and it becomes difficult for drivers to identify trackside signalling. Standard signaling equipment is often limited to speeds below , with the traditional limits of in the US, in Germany and in Britain. Above those speeds positive train control or the European Train Control System becomes necessary or legally mandatory. European Union definition The European Union Directive 96/48/EC, Annex 1 (see also Trans-European high-speed rail network) defines high-speed rail in terms of: Infrastructure: Track built specially for high-speed travel or specially upgraded for high-speed travel. Minimum speed limit: Minimum speed of on lines specially built for high speed and of about on existing lines which have been specially upgraded. This must apply to at least one section of the line. Rolling stock must be able to reach a speed of at least 200 km/h to be considered high speed. Operating conditions: Rolling stock must be designed alongside its infrastructure for complete compatibility, safety and quality of service. National legal definitions Some national legal definitions of high-speed rail include: Australia According to the High Speed Rail Authority Act 2022, high-speed rail in Australia is defined as a railway capable of supporting trains that can travel at speeds exceeding 250 km/h. As of 2025, Australia does not have any any railways which meet this definition. China According to China's Ministry of Railways Order No. 34 (2013), high-speed rail refers to new passenger rail lines designed to operate at speeds of 250 km/h or higher, with initial service running at least 200 km/h. Japan The first law defining high-speed rail was Japan's "Law number 71 for Construction of Nation-Wide High-Speed Railways", adopted on May 18, 1970. Article 2 of this law provided the following definition: "An artery railway that is capable of operating at the speed of 200km/h or more in its predominating section." This law formalised the definition of high-speed railways in Japan and established a framework for the Shinkansen network, which had started in operation since 1964. 
South Korea South Korea legally defines high-speed rail through the Railway Service Act (2004), which categorises railway lines and trains into three types: High-speed railway lines: Can run at speeds of 300 km/h or more on the majority of tracks. Semi-high-speed railway lines: Can run at speeds between 200 km/h to 300 km/h on the majority of tracks. Conventional Lines: Can run at a maximum speed of less than 200 km/h on the majority of tracks. The Act also categorises trains into corresponding types based on their maximum speeds. United States United States federal law defines high-speed rail is as intercity passenger rail service expected to reach speeds of at least . History Railways were the first form of rapid land transportation and had an effective monopoly on long-distance passenger traffic until the development of the motor car and airliners in the early-mid 20th century. Speed had always been an important factor for railroads and they constantly tried to achieve higher speeds and decrease journey times. Rail transportation in the late 19th century was not much slower than non-high-speed trains today, and many railroads regularly operated relatively fast express trains which averaged speeds of around . Early research First experiments High-speed rail development began in Germany in 1899 when the Prussian state railway joined with ten electrical and engineering firms and electrified of military owned railway between Marienfelde and Zossen. The line used three-phase current at 10 kilovolts and 45 Hz. The Van der Zypen & Charlier company of Deutz, Cologne built two railcars, one fitted with electrical equipment from Siemens-Halske, the second with equipment from Allgemeine Elektrizitäts-Gesellschaft (AEG), that were tested on the Marienfelde–Zossen line during 1902 and 1903 (see Experimental three-phase railcar). On 23 October 1903, the S&H-equipped railcar achieved a speed of and on 27 October the AEG-equipped railcar achieved . These trains demonstrated the feasibility of electric high-speed rail; however, regularly scheduled electric high-speed rail travel was still more than 30 years away. High-speed aspirations After the breakthrough of electric railroads, it was clearly the infrastructure – especially the cost of it – which hampered the introduction of high-speed rail. Several disasters happened – derailments, head-on collisions on single-track lines, collisions with road traffic at grade crossings, etc. The physical laws were well-known, i.e. if the speed was doubled, the curve radius should be quadrupled; the same was true for the acceleration and braking distances. In 1891 engineer Károly Zipernowsky proposed a high-speed line from Vienna to Budapest for electric railcars at . In 1893 Wellington Adams proposed an air-line from Chicago to St. Louis of , at a speed of only . Alexander C. Miller had greater ambitions. In 1906, he launched the Chicago-New York Electric Air Line Railroad project to reduce the running time between the two big cities to ten hours by using electric locomotives. After seven years of effort, however, less than of arrow-straight track was finished. A part of the line is still used as one of the last interurbans in the US. High-speed interurbans In the US, some of the interurbans (i.e. trams or streetcars which run from city to city) of the early 20th century were very high-speed for their time (also Europe had and still does have some interurbans). Several high-speed rail technologies have their origin in the interurban field. 
In 1903 – 30 years before the conventional railways started to streamline their trains – the officials of the Louisiana Purchase Exposition organised the Electric Railway Test Commission to conduct a series of tests to develop a carbody design that would reduce wind resistance at high speeds. A long series of tests was carried. In 1905, St. Louis Car Company built a railcar for the traction magnate Henry E. Huntington, capable of speeds approaching . Once it ran between Los Angeles and Long Beach in 15 minutes, an average speed of . However, it was too heavy for much of the tracks, so Cincinnati Car Company, J. G. Brill and others pioneered lightweight constructions, use of aluminium alloys, and low-level bogies which could operate smoothly at extremely high speeds on rough interurban tracks. Westinghouse and General Electric designed motors compact enough to be mounted on the bogies. From 1930 on, the Red Devils from Cincinnati Car Company and a some other interurban rail cars reached about in commercial traffic. The Red Devils weighed only 22 tons though they could seat 44 passengers. Extensive wind tunnel research – the first in the railway industry – was done before J. G. Brill in 1931 built the Bullet cars for Philadelphia and Western Railroad (P&W). They were capable of running at . Some of them were almost 60 years in service. P&W's Norristown High Speed Line is still in use, almost 110 years after P&W in 1907 opened their double-track Upper Darby–Strafford line without a single grade crossing with roads or other railways. The entire line was governed by an absolute block signal system. Early German high-speed network On 15 May 1933, the Deutsche Reichsbahn-Gesellschaft company introduced the diesel-powered "Fliegender Hamburger" in regular service between Hamburg and Berlin (), thereby achieving a new top speed for a regular service, with a top speed of . This train was a streamlined multi-powered unit, albeit diesel, and used Jakobs bogies. Following the success of the Hamburg line, the steam-powered Henschel-Wegmann Train was developed and introduced in June 1936 for service from Berlin to Dresden, with a regular top speed of . Incidentally no train service since the cancelation of this express train in 1939 has traveled between the two cities in a faster time . In August 2019, the travel time between Dresden-Neustadt and Berlin-Südkreuz was 102 minutes. See Berlin–Dresden railway. Further development allowed the usage of these "Fliegenden Züge" (flying trains) on a rail network across Germany. The "Diesel-Schnelltriebwagen-Netz" (diesel high-speed-vehicle network) had been in the planning since 1934 but it never reached its envisaged size. All high-speed service stopped in August 1939 shortly before the outbreak of World War II. American Streamliners On 26 May 1934, one year after Fliegender Hamburger introduction, the Burlington Railroad set an average speed record on long distance with their new streamlined train, the Zephyr, at with peaks at . The Zephyr was made of stainless steel and, like the Fliegender Hamburger, was diesel powered, articulated with Jacobs bogies, and could reach as commercial speed. The new service was inaugurated 11 November 1934, traveling between Kansas City and Lincoln, but at a lower speed than the record, on average speed . In 1935, the Milwaukee Road introduced the Morning Hiawatha service, hauled at by steam locomotives. 
In 1939, the largest railroad of the world, the Pennsylvania Railroad introduced a duplex steam engine Class S1, which was designed to be capable of hauling 1200 tons passenger trains at . The S1 engine was assigned to power the popular all-coach overnight premier train the Trail Blazer between New York and Chicago since the late 1940s and it consistently reached in its service life. These were the last "high-speed" trains to use steam power. In 1936, the Twin Cities Zephyr entered service, from Chicago to Minneapolis, with an average speed of . Many of these streamliners posted travel times comparable to or even better than their modern Amtrak successors, which are limited to top speed on most of the network. Italian electric and the last steam record The German high-speed service was followed in Italy in 1938 with an electric-multiple-unit ETR 200, designed for , between Bologna and Naples. It too reached in commercial service, and achieved a world mean speed record of between Florence and Milan in 1938. In Great Britain in the same year, the streamlined steam locomotive Mallard achieved the official world speed record for steam locomotives at . The external combustion engines and boilers on steam locomotives were large, heavy and time and labor-intensive to maintain, and the days of steam for high speed were numbered. Introduction of the Talgo system In 1945, a Spanish engineer, Alejandro Goicoechea, developed a streamlined, articulated train that was able to run on existing tracks at higher speeds than contemporary passenger trains. This was achieved by providing the locomotive and cars with a unique axle system that used one axle set per car end, connected by a Y-bar coupler. Amongst other advantages, the centre of mass was only half as high as usual. This system became famous under the name of Talgo (Tren Articulado Ligero Goicoechea Oriol), and for half a century was the main Spanish provider of high-speed trains. First above 300 km/h developments In the early 1950s, the French National Railway started to receive their new powerful CC 7100 electric locomotives, and began to study and evaluate running at higher speeds. In 1954, the CC 7121 hauling a full train achieved a record during a test on standard track. The next year, two specially tuned electric locomotives, the CC 7107 and the prototype BB 9004, broke previous speed records, reaching respectively and , again on standard track. For the first time, was surpassed, allowing the idea of higher-speed services to be developed and further engineering studies commenced. Especially, during the 1955 records, a dangerous hunting oscillation, the swaying of the bogies which leads to dynamic instability and potential derailment was discovered. This problem was solved by yaw dampers which enabled safe running at high speeds today. Research was also made about "current harnessing" at high-speed by the pantographs, which was solved 20 years later by the Zébulon TGV's prototype. Breakthrough: Shinkansen Japanese research and development With some 45 million people living in the densely populated Tokyo–Osaka corridor, congestion on road and rail became a serious problem after World War II, and the Japanese government began thinking about ways to transport people in and between cities. Because Japan was resource limited and did not want to import petroleum for security reasons, energy-efficient high-speed rail was an attractive potential solution. 
Japanese National Railways (JNR) engineers began to study the development of a high-speed regular mass transit service. In 1955, they were present at the Lille's Electrotechnology Congress in France, and during a 6-month visit, the head engineer of JNR accompanied the deputy director Marcel Tessier at the DETE (SNCF Electric traction study department). JNR engineers returned to Japan with a number of ideas and technologies they would use on their future trains, including alternating current for rail traction, and international standard gauge. First narrow-gauge Japanese high-speed service In 1957, the engineers at the private Odakyu Electric Railway in Greater Tokyo Area launched the Odakyu 3000 series SE EMU. This EMU set a world record for narrow gauge trains at , giving the Odakyu engineers confidence they could safely and reliably build even faster trains at standard gauge. Conventional Japanese railways up until that point had largely been built in the Cape gauge, however widening the tracks to standard gauge () would make very high-speed rail much simpler due to improved stability of the wider rail gauge, and thus standard gauge was adopted for high-speed service. With the sole exceptions of Russia, Finland, and Uzbekistan all high-speed rail lines in the world are still standard gauge, even in countries where the preferred gauge for legacy lines is different. A new train on a new line The new service, named Shinkansen (meaning new main line) would provide a new alignment, 25% wider standard gauge utilising continuously welded rails between Tokyo and Osaka with new rolling stock, designed for . However, the World Bank, whilst supporting the project, considered the design of the equipment as unproven for that speed, and set the maximum speed to . After initial feasibility tests, the plan was fast-tracked and construction of the first section of the line started on 20 April 1959. In 1963, on the new track, test runs hit a top speed of . Five years after the beginning of the construction work, in October 1964, just in time for the Olympic Games, the first modern high-speed rail, the Tōkaidō Shinkansen, was opened between the two cities; a line between Tokyo and Ōsaka. As a result of its speeds, the Shinkansen earned international publicity and praise, and it was dubbed the "bullet train." The first Shinkansen trains, the 0 Series Shinkansen, built by Kawasaki Heavy Industriesin English often called "Bullet Trains", after the original Japanese name outclassed the earlier fast trains in commercial service. They traversed the distance in 3 hours 10 minutes, reaching a top speed of and sustaining an average speed of with stops at Nagoya and Kyoto. High-speed rail for the masses Speed was not only a part of the Shinkansen revolution: the Shinkansen offered high-speed rail travel to the masses. The first Bullet trains had 12 cars and later versions had up to 16, and double-deck trains further increased the capacity. After three years, more than 100 million passengers had used the trains, and the milestone of the first one billion passengers was reached in 1976. In 1972, the line was extended a further , and further construction has resulted in the network expanding to of high speed lines as of 2024, with a further of extensions currently under construction and due to open in 2038. The cumulative patronage on the entire system since 1964 is over 10 billion, the equivalent of approximately 140% of the world's population, without a single train passenger fatality. 
(Suicides, passengers falling off the platforms, and industrial accidents have resulted in fatalities.) Since their introduction, Japan's Shinkansen systems have been undergoing constant improvement, not only increasing line speeds. Over a dozen train models have been produced, addressing diverse issues such as tunnel boom noise, vibration, aerodynamic drag, lines with lower patronage ("Mini shinkansen"), earthquake and typhoon safety, braking distance, problems due to snow, and energy consumption (newer trains are twice as energy-efficient as the initial ones despite greater speeds). Future developments After decades of research and successful testing on a test track, in 2014 JR Central began constructing a Maglev Shinkansen line, which is known as the Chūō Shinkansen. These Maglev trains still have the traditional underlying tracks and the cars have wheels. This serves a practical purpose at stations and a safety purpose out on the lines in the event of a power failure. However, in normal operation, the wheels are raised up into the car as the train reaches certain speeds where the magnetic levitation effect takes over. It is proposed to link Tokyo and Osaka by 2037, with the section from Tokyo to Nagoya expected to be operational by 2034. Maximum speed is anticipated at . The first generation train can be ridden by tourists visiting the test track. China is developing two separate high-speed maglev systems. the CRRC 600, is based on the Transrapid technology and is being developed by the CRRC under license from Thyssen-Krupp. A test track has been operating since 2006 at the Jiading Campus of Tongji University, northwest of Shanghai. A prototype vehicle was developed in 2019 and was tested in June 2020. In July 2021 a four car train was unveiled. A high-speed test track is under development and in April 2021 there was consideration given to re-opening the Emsland test facility in Germany. An incompatible system has been developed at Southwest Jiaotong University in Chengdu, the design uses high-temperature super conducting magnets, which the university has been researching since 2000, and is capable of . A prototype was demonstrated in January 2021 on a test track. Europe and North America First demonstrations at In Europe, high-speed rail began during the International Transport Fair in Munich in June 1965, when Dr Öpfering, the director of Deutsche Bundesbahn (German Federal Railways), performed 347 demonstrations at between Munich and Augsburg by DB Class 103 hauled trains. The same year the Aérotrain, a French hovercraft monorail train prototype, reached within days of operation. Le Capitole After the successful introduction of the Japanese Shinkansen in 1964, at , the German demonstrations up to in 1965, and the proof-of-concept jet-powered Aérotrain, SNCF ran its fastest trains at . In 1966, French Infrastructure Minister Edgard Pisani consulted engineers and gave the French National Railways twelve months to raise speeds to . The classic line Paris–Toulouse was chosen, and fitted, to support rather than . Some improvements were set, notably the signals system, development of on board "in-cab" signalling system, and curve revision. The next year, in May 1967, a regular service at was inaugurated by the TEE Le Capitole between Paris and Toulouse, with specially adapted SNCF Class BB 9200 locomotives hauling classic UIC cars, and a full red livery. It averaged over the . At the same time, the Aérotrain prototype 02 reached on a half-scale experimental track. 
In 1969, it achieved on the same track. On 5 March 1974, the full-scale commercial prototype Aérotrain I80HV, jet powered, reached . US Metroliner trains In the United States, following the creation of Japan's first high-speed Shinkansen, President Lyndon B. Johnson as part of his Great Society infrastructure building initiatives asked the Congress to devise a way to increase speeds on the railroads. Congress delivered the High Speed Ground Transportation Act of 1965 which passed with overwhelming bipartisan support and helped to create regular Metroliner service between New York City, Philadelphia, and Washington, D.C. The new service was inaugurated in 1969, with top speeds of and averaging along the route, with the travel time as little as 2 hours 30 minutes. In a 1967 competition with a GE powered Metroliner on Penn Central's mainline, the United Aircraft Corporation TurboTrain set a record of . United Kingdom, Italy and Germany In 1976, British Rail introduced a high-speed service able to reach using the InterCity 125 diesel-electric trainsets under the brand name of High Speed Train (HST). It was the fastest diesel-powered train in regular service and it improved upon its forerunners in speed and acceleration. As of 2019 it is still in regular service as the fastest diesel-powered train. The train was as a reversible multi-car set having driving power-cars at both ends and a fixed formation of passenger cars between them. Journey times were reduced by an hour for example on the East Coast Main Line, and passenger numbers increased. As of 2019 many of these trains are still in service, private operators have often preferred to rebuild the units with new engines rather than replace them. Prior to COVID-19, ridership of the UK's High Speed Intercity Services had exceeded 40 million journeys per annum. The next year, in 1977, Germany finally introduced a new service at , on the Munich–Augsburg line. That same year, Italy inaugurated the first European High-Speed line, the Direttissima between Rome and Florence, designed for , but used by FS E444 hauled train at . In France this year also saw the abandonment for political reasons of the Aérotrain project, in favour of the TGV. Evolution in Europe Italy The earliest European high-speed railway to be built was the Italian Florence–Rome high-speed railway (also called "Direttissima") in 1977. High-speed trains in Italy were developed during the 1960s. E444 locomotives were the first standard locomotives capable of top speed of , while an ALe 601 electrical multiple unit (EMU) reached a speed of during a test. Other EMUs, such as the ETR 220, ETR 250 and ETR 300, were also updated for speeds up to . The braking systems of cars were updated to match the increased travelling speeds. On 25 June 1970, work was started on the Rome–Florence Direttissima, the first high-speed line in Italy and in Europe. It included the bridge on the Paglia river, then the longest in Europe. Works were completed in the early 1990s. In 1975, a program for a widespread updating of the rolling stock was launched. However, as it was decided to put more emphasis on local traffic, this caused a shifting of resources from the ongoing high-speed projects, with their subsequent slowing or, in some cases, total abandonment. Therefore, 160 E.656 electric and 35 D.345 locomotives for short-medium range traffic were acquired, together with 80 EMUs of the ALe 801/940 class, 120 ALn 668 diesel railcars. Some 1,000 much-needed passenger and 7,000 freight cars were also ordered. 
In the 1990s, work started on the Treno Alta Velocità (TAV) project, which involved building a new high-speed network on the routes Milan – (Bologna–Florence–Rome–Naples) – Salerno, Turin – (Milan–Verona–Venice) – Trieste and Milan–Genoa. Most of the planned lines have already been opened, while international links with France, Switzerland, Austria and Slovenia are underway. Most of the Rome–Naples line opened in December 2005, the Turin–Milan line partially opened in February 2006 and the Milan–Bologna line opened in December 2008. The remaining sections of the Rome–Naples and the Turin–Milan lines and the Bologna–Florence line were completed in December 2009. All these lines are designed for speeds up to . Since then, it is possible to travel from Turin to Salerno (ca. ) in less than 5 hours. More than 100 trains per day are operated. Other proposed high-speed lines are Salerno-Reggio Calabria (connected to Sicily with the future bridge over the Strait of Messina), Palermo-Catania and Naples–Bari. The main public operator of high-speed trains (alta velocità AV, formerly Eurostar Italia) is Trenitalia, part of FSI. Trains are divided into three categories (called "Le Frecce"): Frecciarossa ("Red arrow") trains operate at a maximum of on dedicated high-speed tracks; Frecciargento (Silver arrow) trains operate at a maximum of on both high-speed and mainline tracks; Frecciabianca (White arrow) trains operate at a maximum of on mainline tracks only. Since 2012, a new and Italy's first private train operator, NTV (branded as Italo), run high-speed services in competition with Trenitalia. Even nowadays, Italy is the only country in Europe with a private high-speed train operator. Construction of the Milan-Venice high-speed line has begun in 2013 and in 2016 the Milan-Treviglio section has been opened to passenger traffic; the Milan-Genoa high-speed line (Terzo Valico dei Giovi) is also under construction. Today it is possible to travel from Rome to Milan in less than 3 hours (2h 55') with the Frecciarossa 1000, the new high-speed train. To cover this route, there's a train every 30 minutes. France Following the 1955 records, two divisions of the SNCF began to study high-speed services. In 1964, the DETMT (petrol-engine traction studies department of SNCF) investigated the use of gas turbines: a diesel-powered railcar was modified with a gas-turbine, and was called "TGV" (Turbotrain Grande Vitesse). It reached in 1967, and served as a basis for the future Turbotrain and the real TGV. At the same time, the new "SNCF Research Department", created in 1966, was studying various projects, including one code-named "C03: Railways possibilities on new infrastructure (tracks)". In 1969, the "C03 project" was transferred to public administration while a contract with Alstom was signed for the construction of two gas-turbine high-speed train prototypes, named "TGV 001". The prototype consisted of a set of five carriages, plus a power car at each end, both powered by two gas-turbine engines. The sets used Jacobs bogies, which reduce drag and increase safety. In 1970, the DETMT's Turbotrain began operations on the Paris–Cherbourg line, and operated at despite being designed for usage at . It used gas-turbine powered multiple elements and was the basis for future experimentation with TGV services, including shuttle services and regular high rate schedules. In 1971, the "C03" project, now known as "TGV Sud-Est", was validated by the government, against Bertin's Aerotrain. 
Until this date, there was a rivalry between the French Land Settlement Commission (DATAR), supporting the Aérotrain, and the SNCF and its ministry, supporting conventional rail. The "C03 project" included a new High-Speed line between Paris and Lyon, with new multi-engined trains running at . At that time, the classic Paris-Lyon line was already congested and a new line was required; this busy corridor, neither too short (where high speeds give limited reductions in end to end times) nor too long (where planes are faster in city center to city center travel time), was the best choice for the new service. The 1973 oil crisis substantially increased oil prices. In the continuity of the De Gaulle "energy self-sufficiency" and nuclear-energy policy (Pierre Messmer then French Prime Minister announced an ambitious buildout of nuclear power in France in 1974), a ministry decision switched the future TGV from now costly gas-turbine to full electric energy in 1974. An electric railcar named Zébulon was developed for testing at very high speeds, reaching a speed of . It was used to develop pantographs capable of withstanding speeds of over . After intensive tests with the gas-turbine "TGV 001" prototype, and the electric "Zébulon", in 1977, the SNCF placed an order to the group Alstom–Francorail–MTE for 87 TGV Sud-Est trainsets. They used the "TGV 001" concept, with a permanently coupled set of eight cars, sharing Jacobs bogies, and hauled by two electric-power cars, one at each end. In 1981, the first section of the new Paris–Lyon High-Speed line was inaugurated, with a top speed (then soon after). Being able to use both dedicated high-speed and conventional lines, the TGV offered the ability to join every city in the country at shorter journey times. After the introduction of the TGV on some routes, air traffic on these routes decreased and in some cases disappeared. The TGV set a publicised speed records in 1981 at , in 1990 at , and then in 2007 at , although these were test speeds, rather than operation train speeds. Germany Following the ETR 450 and Direttissima in Italy and French TGV, in 1991 Germany was the third country in Europe to inaugurate a high-speed rail service, with the launch of the Intercity-Express (ICE) on the new Hannover–Würzburg high-speed railway, operating at a top speed of . The German ICE train was similar to the TGV, with dedicated streamlined power cars at both ends, but a variable number of trailers between them. Unlike the TGV, the trailers had two conventional bogies per car, and could be uncoupled, allowing the train to be lengthened or shortened. This introduction was the result of ten years of study with the ICE-V prototype, originally called Intercity Experimental, which broke the world speed record in 1988, reaching . Spain In 1992, just in time for the Barcelona Olympic Games and Seville Expo '92, the Madrid–Seville high-speed rail line opened in Spain with 25 kV AC electrification, and standard gauge, differing from all other Spanish lines which used Iberian gauge. This allowed the AVE rail service to begin operations using Class 100 trainsets built by Alstom, directly derived in design from the French TGV trains. The service was very popular and development continued on high-speed rail in Spain. In 2005, the Spanish government announced an ambitious plan, (PEIT 2005–2020) envisioning that by 2020, 90 percent of the population would live within of a station served by AVE. 
Spain began building the largest HSR network in Europe: , five of the new lines have opened (Madrid–Zaragoza–Lleida–Tarragona–Barcelona, Córdoba–Malaga, Madrid–Toledo, Madrid–Segovia–Valladolid, Madrid–Cuenca–Valencia) and another were under construction. Opened in early 2013, the Perpignan–Barcelona high-speed rail line provides a link with neighbouring France with trains running to Paris, Lyon, Montpellier and Marseille. , the Spanish high-speed rail network is the longest HSR network in Europe with and the second longest in the world, after China's. Turkey In 2009, Turkey inaugurated a high-speed service between Ankara and Eskişehir. This has been followed up by an Ankara – Konya route, and the Eskisehir line has been extended to Istanbul (European part). In this extension, Europe and Asia were connected by an undersea tunnel, Marmaray in the Bosphorus. The first connection between two continents in the world as a high-speed train line was made in Istanbul. The last station of this line in Europe is Halkalı station. An extension to Sivas was opened in April 2023. United States In 1992, the United States Congress passed the Amtrak Authorization and Development Act that authorised Amtrak to start working on service improvements on the segment between Boston and New York City of the Northeast Corridor. The primary objectives were to electrify the line north of New Haven, Connecticut, to eliminate grade crossings and replace the then 30-year-old Metro liners with new trains, so that the distance between Boston and New York City could be covered in 3 hours or less. Amtrak started testing two trains, the Swedish X2000 and the German ICE 1, in the same year along its fully electrified segment between New York City and Washington, D.C. The officials favored the X2000 as it had a tilting mechanism. However, the Swedish manufacturer never bid on the contract as the burdensome United States railroad regulations required them to heavily modify the train resulting in added weight, among other things. Eventually, a custom-made tilting train derived from TGV, manufactured by Alstom and Bombardier, won the contract and was put into service in December 2000. The new service was named "Acela Express" and linked Boston, New York City, Philadelphia, Baltimore, and Washington, D.C. The service did not meet the 3-hour travel time objective between Boston and New York City. The time was 3 hours and 24 minutes as it partially ran on regular lines, limiting its average speed, with a maximum speed of being reached on a small section of its route through Rhode Island and Massachusetts. As of November 2021, the U.S. has one high-speed rail line under construction (California High-Speed Rail) in California, and advanced planning by a company called Texas Central Railway in Texas, higher-speed rail projects in the Pacific Northwest, Midwest and Southeast, as well as upgrades on the high-speed Northeast Corridor. The private higher speed rail venture Brightline in Florida started operations along part of its route in early 2018. The top speed is but most of the line still runs at . Expansion in East Asia For four decades from its opening in 1964, the Japanese Shinkansen was the only high-speed rail service outside of Europe. In the 2000s a number of new high-speed rail services started operating in East Asia. Chinese CRH and CR High-speed rail was introduced to China in 2003 with the Qinhuangdao–Shenyang high-speed railway. 
The Chinese government made high-speed rail construction a cornerstone of its economic stimulus program in order to combat the effects of the 2008 global financial crisis and the result has been a rapid development of the Chinese rail system into the world's most extensive high-speed rail network. By 2013 the system had of operational track, accounting for about half of the world's total at the time. By the end of 2018, the total high-speed railway (HSR) in China had risen to over . Over 1.71 billion trips were made in 2017, more than half of China's total railway passenger delivery, making it the world's busiest network. State planning for high-speed railway began in the early 1990s, and the country's first high-speed rail line, the Qinhuangdao–Shenyang Passenger Railway, was built in 1999 and opened to commercial operation in 2003. This line could accommodate commercial trains running at up to . Planners also considered Germany's Transrapid maglev technology and built the Shanghai maglev train, which runs on a track linking the Pudong, the city's financial district, and the Pudong International Airport. The maglev train service began operating in 2004 with trains reaching a top speed of , and remains the fastest high-speed service in the world. Maglev, however, was not adopted nationally and all subsequent expansion features high-speed rail on conventional tracks. In the 1990s, China's domestic train production industry designed and produced a series of high-speed train prototypes but few were used in commercial operation and none were mass-produced. The Chinese Ministry of Railways (MOR) then arranged for the purchase of foreign high-speed trains from French, German, and Japanese manufacturers along with certain technology transfers and joint ventures with domestic trainmakers. In 2007, the MOR introduced the China Railways High-speed (CRH) service, also known as "Harmony Trains", a version of the German Siemens Velaro high-speed train. In 2008, high-speed trains began running at a top speed of on the Beijing–Tianjin intercity railway, which opened during the 2008 Summer Olympics in Beijing. The following year, trains on the newly opened Wuhan–Guangzhou high-speed railway set a world record for average speed over an entire trip, at over . A collision of high-speed trains on 23 July 2011 in Zhejiang province killed 40 and injured 195, raising concerns about operational safety. A credit crunch later that year slowed the construction of new lines. In July 2011, top train speeds were lowered to . But by 2012, the high-speed rail boom had renewed with new lines and new rolling stock by domestic producers that had indigenised foreign technology. On 26 December 2012, China opened the Beijing–Guangzhou–Shenzhen–Hong Kong high-speed railway, the world's longest high-speed rail line, which runs from Beijing West railway station to Shenzhen North Railway Station. The network set a target to create the 4+4 National high-speed rail Grid by 2015, and continues to rapidly expand with the July 2016 announcement of the 8+8 National high-speed rail Grid. In 2017, services resumed on the Beijing–Shanghai high-speed railway, once again refreshing the world record for average speed with select services running between Beijing South to Nanjing South reaching average speeds of . South Korean KTX In South Korea, construction of the high-speed line from Seoul to Busan began in 1992. The Seoul–Busan corridor is Korea's busiest running between the two largest cities. 
In 1982, it represented 65.8% of South Korea's population, a number that grew to 73.3% by 1995, along with 70% of freight traffic and 66% of passenger traffic. With both the Gyeongbu Expressway and Korail's Gyeongbu Line congested as of the late 1970s, the government saw the pressing need for another form of transportation. The line known as Korea Train Express (KTX) was launched on 1 April 2004, using French (TGV) technology. Top speed for trains in regular service is currently , though the infrastructure is designed for . The initial rolling stock was based on Alstom's TGV Réseau, and was partly built in Korea. The domestically developed HSR-350x, which achieved in tests, resulted in a second type of high-speed trains now operated by Korail, the KTX Sancheon. The next generation KTX train, HEMU-430X, achieved in 2013, making South Korea the world's fourth country after France, Japan, and China to develop a high-speed train running on conventional rail above . Taiwan HSR Taiwan High Speed Rail's first and only HSR line opened for service on 5 January 2007, using Japanese trains with a top speed of . The service traverses from to in as little as 105 minutes. While it contains only one line, its route covers Western Taiwan where over 90% of Taiwan's population live; connecting most major cities of Taiwan: Taipei, New Taipei, Taoyuan, Hsinchu, Taichung, Chiayi, Tainan, and Kaohsiung. Once THSR began operations, almost all passengers switched from airlines flying parallel routes while road traffic was also reduced. Middle East and Central Asia Saudi Arabia Uzbekistan Uzbekistan opened the Afrosiyob service from Tashkent to Samarkand in 2011, which was upgraded in 2013 to an average operational speed of and peak speed of . The Talgo 250 service has been extended to Karshi as of August 2015 whereby the train travels in 3 hours. As of August 2016, the train service was extended to Bukhara, and the extension will take 3 hours and 20 minutes down from 7 hours. Africa Egypt , there are no operational high-speed rail lines in Egypt. Plans have been announced for three lines, aiming to connect the Nile river valley, the Mediterranean coast, and the Red Sea. Construction had started on at least two lines. Morocco In November 2007, the Moroccan government decided to undertake the construction of a high-speed rail line between the economic capital Casablanca and Tangier, one of the largest harbour cities on the Strait of Gibraltar. The line will also serve the capital Rabat and Kenitra. The first section of the line, the Kenitra–Tangier high-speed rail line, was completed in 2018. Future projects include expansions south to Marrakech and Agadir, and east to Meknes, Fes and Oujda. Network Maps Technologies Continuous welded rail is generally used to reduce track vibrations and misalignment. Almost all high-speed lines are electrically driven via overhead lines, have in-cab signalling, and use advanced switches using very low entry and frog angles. HSR tracks may also be designed to reduce vibrations originating from high speed rail use. Road-rail parallel layout The road-rail parallel layout uses land beside highways for railway lines. Examples include Paris/Lyon and Köln–Frankfurt in which 15% and 70% of the track runs beside highways, respectively. 
There are synergies to be achieved from such a setup as noise mitigation measures for the road benefit the railway and vice versa and furthermore less land must be taken through expropriation as land may have already been acquired for the construction of the other infrastructure. In addition to that, habitats of local wildlife are disrupted only once (by the combined rail/road right of way) instead of at multiple points. However, downsides include the fact that roads usually allow steeper grades and sharper turns than high-speed rail lines and thus co-locating them may not always be suitable. Moreover, both roads and railways often make use of narrow river valleys or mountain passes which do not allow a lot of infrastructure to be sited next to each other. Track sharing In China, high-speed lines at speeds between may carry freight or passengers, while lines operating at speeds over are used only by passenger CRH/CR trains. In the United Kingdom, HS1 is also used by regional trains run by Southeastern at speeds of up to , and occasionally freight trains that run to central Europe. In Germany, some lines are shared with Inter-City and regional trains at day and freight trains at night. In France, some lines are shared with regional trains that travel at , for example TER Nantes-Laval. Mixing trains of vastly different speeds and/or stopping patterns on the same tracks drastically reduces capacity, so usually a temporal separation (e.g. freight trains use the high-speed line only at night when no or only a few passenger trains operate) is employed or the slower train has to wait at a station or passing siding for the faster train to overtake - even if the faster train is delayed, thus delaying the slower train, too. Cost The cost per kilometre in Spain was estimated at between €9 million (Madrid–Andalucía) and €22 million (Madrid–Valladolid). In Italy, the cost was between €24 million (Roma–Napoli) and €68 million (Bologna–Firenze). In the 2010s, costs per kilometre in France ranged from €18 million (BLP Brittany) to €26 million (Sud Europe Atlantique). The World Bank estimated in 2019 that the Chinese HSR network was built at an average cost of $17–21 million per km. Freight high-speed rail All high-speed trains have been designed to carry passengers only. There are very few high-speed freight services in the world; they all use trains that were originally designed to carry passengers. During the planning of the Tokaido Shinkansen, the Japanese National Railways were planning for freight services along the route. This plan was discarded before the line opened, but since 2019 light freight has been carried on some Shinkansen services. The French TGV La Poste was for a long time the sole very high-speed train service, transporting mail in France for La Poste at a maximum top speed of , between 1984 and 2015. The trainsets were either specifically adapted and built, either converted, passenger TGV Sud-Est trainsets. In Italy, Mercitalia Fast is a high-speed freight service launched in October 2018 by Mercitalia. It uses converted passenger ETR 500 trainsets to carry goods at average speeds of , at first between Caserta and Bologna, with plans to extend the network throughout Italy. In some countries, high-speed rail is integrated with courier services to provide fast door-to-door intercity deliveries. 
For example, China Railways has partnered with SF Express for high-speed cargo deliveries and offers express deliveries within Germany as well as to some major cities outside the country on the ICE network. Rather than using dedicated freight trains, these use luggage racks and other unused space in passenger trains. Non-high-speed freight trains running on high-speed lines is much more common; for example, High Speed 1 sees weekly freight services. However, high speed lines tend to be steeper than regular (non-mountain) railways, which poses a problem for most freight trains as they have a lower power to weight ratio and thus more difficulty climbing steep slopes. For example, the Frankfurt Cologne high speed line has inclines up to 40‰. If a high-speed line through even somewhat hilly terrain is to be usable for freight, expensive engineered structures will need to be built, as is the case with the Hannover Würzburg high-speed line which contains the longest and the second longest mainline rail tunnel in Germany and altogether runs on tunnels or bridges for roughly half of its length. Rolling stock Key technologies used in high-speed train rolling stock include tilting trainsets, aerodynamic designs (to reduce drag, lift, and noise), air brakes, regenerative braking, engine technology and dynamic weight shifting. Notable high-speed train manufacturers include Alstom, Hitachi, Kawasaki, Siemens, Stadler Rail, and CRRC. Comparison with other modes of transport Optimal distance While commercial high-speed trains have lower maximum speeds than jet aircraft, they offer shorter total trip times than air travel for short distances. They typically connect city centre rail stations to each other, while air transport connects airports that are typically farther from city centres. High-speed rail (HSR) is best suited for journeys of 1 to hours (about ), for which the train can beat air and car trip time. For trips under about , the process of checking in and going through airport security, as well as travelling to and from the airport, makes the total air journey time equal to or slower than HSR. European authorities treat HSR as competitive with passenger air for HSR trips under hours. HSR eliminated air transport from routes such as Paris–Lyon, Paris–Brussels, Cologne–Frankfurt, Nanjing–Wuhan, Chongqing–Chengdu, Taipei–Kaohsiung, Tokyo–Nagoya, Tokyo–Sendai and Tokyo–Niigata, while also greatly reducing air traffic on routes such as Amsterdam–Brussels, Barcelona-Madrid and Naples–Rome–Milan. China Southern Airlines, China's largest airline, expects the construction of China's high-speed railway network to impact (through increased competition and falling revenues) 25% of its route network in the coming years. Market shares European data indicate that air traffic is more sensitive than road traffic (car and bus) to competition from HSR, at least on journeys of and more. TGV Sud-Est reduced the travel time Paris–Lyon from almost four to about two hours. Market share rose from 40 to 72%. Air and road market shares shrunk from 31 to 7% and from 29 to 21%, respectively. On the Madrid–Seville link, the AVE connection increased share from 16 to 52%; air traffic shrunk from 40 to 13%; road traffic from 44 to 36%, hence the rail market amounted to 80% of combined rail and air traffic. This figure increased to 89% in 2009, according to Spanish rail operator Renfe. 
According to Peter Jorritsma, the rail market share s, as compared to planes, can be computed approximately as a function of the travelling time in minutes t by the logistic formula According to this formula, a journey time of three hours yields a 65% market share, not taking into account any price differential in tickets. In Japan, there is a so-called "4-hour wall" in high-speed rail's market share: If the high-speed rail journey time exceeds 4 hours, then people likely choose planes over high-speed rail. For instance, from Tokyo to Osaka, a 2h22m-journey by Shinkansen, high-speed rail has an 85% market share whereas planes have 15%. From Tokyo to Hiroshima, a 3h44m-journey by Shinkansen, high-speed rail has a 67% market share whereas planes have 33%. The situation is the reverse on the Tokyo to Fukuoka route where high-speed rail takes 4h47m and rail only has 10% market share and planes 90%. In Taiwan, China Airlines cancelled all flights to Taichung Airport within a year of Taiwan high-speed rail starting operations. Completion of the high-speed railway in 2007 led to drastically fewer flights along the island's west coast, with flights between Taipei and Kaohsiung ceasing altogether in 2012. Energy efficiency Travel by rail is more competitive in areas of higher population density or where gasoline is expensive because conventional trains are more fuel-efficient than cars when ridership is high, similar to other forms of mass transit. Very few high-speed trains consume diesel or other fossil fuels but the power stations that provide electric trains with electricity can consume fossil fuels. In Japan (prior to the Fukushima Daiichi nuclear disaster) and France, with very extensive high-speed rail networks, a large proportion of electricity comes from nuclear power. On the Eurostar, which primarily runs off the French grid, emissions from traveling by train from London to Paris are 90% lower than by flying. In Germany 38.5% of all electricity was produced from renewable sources in 2017, however railways run on their own grid partially independent from the general grid and relying in part on dedicated power plants. Even using electricity generated from coal, fossil gas or oil, high-speed trains are significantly more fuel-efficient per passenger per kilometer traveled (despite the greater resistance to motion of the railcars at higher speeds) than the typical automobile because of economies of scale in generator technology and trains themselves, as well as lower air friction and rolling resistance at the same speed. Automobiles and buses High-speed rail can accommodate more passengers at far higher speeds than automobiles. Generally, the longer the journey, the better the time advantage of rail over the road if going to the same destination. However, high-speed rail can be competitive with cars on shorter distances, , for example for commuting, especially if the car users experience road congestion or expensive parking fees. In Norway, the Gardermoen Line has made the rail market share for passengers from Oslo to the airport (42 km) rise to 51% in 2014, compared to 17% for buses and 28% for private cars and taxis. On such short lines−particularly services which call at stations close to one another−the acceleration capabilities of the trains may be more important than their maximum speed. Extreme commuting has been enabled by high-speed rail with commuters covering distances by rail daily that they would not usually by car. 
Furthermore, stations in less densely populated areas within the larger conurbation of larger cities, like Montabaur railway station and Limburg Süd railway station between Frankfurt and Cologne, are attractive for commuters as the housing prices are more affordable than in the central cities - even when taking into account the price of a yearly ticket for the train. Consequently, Montabaur has the highest per capita rate of Bahn Card 100 in Germany — a ticket that allows unlimited travel on all trains in Germany for a fixed yearly price. Moreover, a typical passenger rail carries 2.83 times as many passengers per hour per meter width as a road. A typical capacity is the Eurostar, which provides capacity for 12 trains per hour and 800 passengers per train, totaling 9,600 passengers per hour in each direction. By contrast, the Highway Capacity Manual gives a maximum capacity of 2,250 passenger cars per hour per lane, excluding other vehicles, assuming an average vehicle occupancy of 1.57 people. A standard twin track railway has a typical capacity 13% greater than a 6-lane highway (3 lanes each way), while requiring only 40% of the land (1.0/3.0 versus 2.5/7.5 hectares per kilometre of direct/indirect land consumption). The Tokaido Shinkansen line in Japan, has a much higher ratio (with as many as 20,000 passengers per hour per direction). Similarly, commuter roads tend to carry fewer than 1.57 persons per vehicle (Washington State Department of Transportation, for instance, uses 1.2 persons per vehicle) during commute times. Compare this to the capacity of typical small to mid-sized airliners like the Airbus A320 which in a high-density arrangement has 186 seats or the Boeing 737-800 which has an absolute maximum seated capacity of 189 in a high-density single-class layout - as employed for example by Ryanair. If a business or first class section is provided, those airliners will have lower seating capacities than that. Air travel HSR advantages Less boarding infrastructure: Although air transit moves at higher speeds than high-speed rail, total time to destination can be increased by travel to/from far out airports, check-in, baggage handling, security, and boarding, which may also increase cost to air travel. Short range advantages: Trains may be preferred in short to mid-range distances since rail stations are typically closer to urban centers than airports. Likewise, air travel needs longer distances to have a speed advantage after accounting for both processing time and transit to the airport. Urban centers: Particularly for dense city centers, short-hop air travel may not be ideal to serve these areas as airports tend to be far out of the city, due to land scarcity, short runway limitations, building heights, as well as airspace issues. Weather: Rail travel also requires less weather dependency than air travel. A well-designed and operated rail system can only be affected by severe weather conditions, such as heavy snow, heavy fog, and major storm. Flights however, often face cancellations or delays under less severe conditions. Comfort: High-speed trains also have comfort advantages, since train passengers are allowed to move freely about the train at any point in the journey. Since airlines have complicated calculations to try to minimise weight to save fuel or to allow takeoff at certain runway lengths, rail seats are also less subject to weight restrictions than on planes, and as such may have more padding and legroom. 
Technology advances such as continuously welded rail have minimised the vibration found on slower railways, while air travel remains affected by turbulence when adverse wind conditions arise. Trains can also accommodate intermediate stops at lower time and energetic costs than planes, though this applies less to HSR than to the slower conventional trains. Delays: On particular busy air-routes – those that HSR has historically been most successful on – trains are also less prone to delays due to congested airports, or in the case of China, airspace. A train that is late by a couple of minutes will not have to wait for another slot to open up, unlike airplanes at congested airports. Furthermore, many airlines see short-haul flights as increasingly uneconomic and in some countries airlines rely on high-speed rail instead of short-haul flights for connecting services. De-icing: HSR does not need to spend time deicing as planes do, which is time-consuming but critical; it can dent airline profitability as planes remain on the ground and pay airport fees by the hour, as well as take up parking space and contributing to congestive delays. Hot and high: Some airlines have cancelled or move their flights to takeoff at night due to hot and high conditions. Such is the case for Hainan Airlines in Las Vegas in 2017, which moved its long haul takeoff slot to after midnight. Similarly, Norwegian Air Shuttle cancelled all its Europe-bound flights during summer due to heat. High-speed rail may complement airport operations during hot hours when takeoffs become uneconomical or otherwise problematic. Noise and pollution: Major airports are heavy polluters, downwind of LAX particulate pollution doubles, even accounting for Port of LA/Long Beach shipping and heavy freeway traffic. Trains may run on renewable energy, and electric trains produce no local pollution in critical urban areas at any rate. Noise also is an issue for residents. Ability to serve multiple stops: An airplane spends significant amounts of time loading and unloading cargo and/or passengers as well as landing, taxiing and starting again. Trains spend only a few minutes stopping at intermediate stations, often greatly enhancing the business case at little cost. Energy: high-speed trains are more fuel-efficient per passenger space offered than planes. Furthermore, they usually run on electricity, which can be produced from a wider range of sources than kerosene. Disadvantages HSR usually requires land acquisition, for example in Fresno, California, where it was caught up in legal paperwork. HSR is subject to land subsidence, where expensive fixes sent costs soaring in Taiwan. HSR is affected by topography of the terrain as crossing mountain ranges or large bodies of water requires expensive tunnels and bridges. HSR is costly due to required specialised infrastructure as well as advanced technologies and multiple safety systems. The infrastructure is fixed hence the services provided are limited and can not be changed in response to changing market conditions. However, for passengers this can present an advantage as services are less likely to be withdrawn from railways compared to flight routes. As the infrastructure can be extremely expensive, it is not possible to create a direct route between every major city. This means that a train might be transiting or stopping in intermediate stations, increasing the length and duration of a journey. Railways require the security and cooperation of all geographies and governments involved. 
As most high-speed railways are electrified, they require an extensive electricity grid to supply the overhead lines. Pollution High-speed rail usually uses electric power, and therefore its energy sources can be distant or renewable. The use of electric power in high-speed rail can thereby reduce air pollutants, as shown in a case study of China's high-speed railways throughout their development. This is an advantage over air travel, which currently uses fossil fuels and is a major source of pollution. Studies of busy airports such as LAX have shown that over an area of about downwind of the airport, where hundreds of thousands of people live or work, the particle number concentration was at least twice that of nearby urban areas, showing that airplane pollution far exceeded road pollution, even from heavy freeway traffic. Safety HSR is much simpler to control due to its predictable course. High-speed rail systems reduce (but do not eliminate) collisions with automobiles or people by using grade-separated track and eliminating grade-level crossings. To date, the only three deadly accidents involving a high-speed train on high-speed tracks in revenue service were the 1998 Eschede train disaster, the 2011 Wenzhou train collision (in which speed was not a factor), and the 2020 Livraga derailment. Shinkansen trains have anti-derailment devices installed under passenger cars, which do not strictly prevent derailment but keep the train from travelling far from the tracks if a derailment occurs. Accidents In general, travel by high-speed rail has been demonstrated to be remarkably safe. The first high-speed rail network, the Japanese Shinkansen, has not had any fatal accidents involving passengers since it began operating in 1964. Notable major accidents involving high-speed trains include the following. 1998 Eschede accident In 1998, after over thirty years of high-speed rail operations worldwide without fatal accidents, the Eschede accident occurred in Germany: a poorly designed ICE 1 wheel fractured at a speed of near Eschede, resulting in the derailment and destruction of almost the entire set of 16 cars, and the deaths of 101 people. The derailment began at a switch; the accident was made worse when the derailed cars travelling at high speed struck and collapsed a road bridge located just past the switch. 2011 Wenzhou accident On 23 July 2011, 13 years after the Eschede train accident, a Chinese CRH2 travelling at collided with a CRH1 which was stopped on a viaduct in the suburbs of Wenzhou, Zhejiang province, China. The two trains derailed, and four cars fell off the viaduct. Forty people were killed and at least 192 were injured, 12 of them severely. The disaster led to a number of changes in the management and operation of high-speed rail in China. Although speed itself was not a factor in the cause of the accident, one of the major changes was to further lower maximum speeds on China's high-speed and higher-speed railways, with lines re-rated to lower tiers such as 200 km/h and 160 km/h. Six years later, speeds started to be restored to their original levels. 2013 Santiago de Compostela accident In July 2013, a high-speed train in Spain travelling at attempted to negotiate a curve whose speed limit is . The train derailed and overturned, resulting in 78 fatalities. 
Normally high-speed rail has automatic speed-limiting systems, but this section of track was conventional, and in this case the automatic speed limiting was said to have been disabled by the driver several kilometers before the station. A few days later, the railway workers' union claimed that the speed limiter did not work properly because of a lack of proper funding, pointing to budget cuts made by the government. Two days after the accident, the driver was provisionally charged with homicide by negligence. This was the first accident involving a Spanish high-speed train, but it occurred on a section that was not high-speed, and, as mentioned, the safety equipment mandatory on high-speed track would have prevented the accident. 2015 Eckwersheim accident On 14 November 2015, a specialised TGV EuroDuplex was performing commissioning tests on the unopened second phase of the LGV Est high-speed line in France when it entered a curve, overturned, and struck the parapet of a bridge over the Marne–Rhine Canal. The rear power car came to rest in the canal, while the remainder of the train came to rest in the grassy median between the northern and southern tracks. Approximately 50 people were on board, consisting of SNCF technicians and, reportedly, some unauthorised guests. Eleven were killed and 37 were injured. The train was performing tests at 10 percent above the planned speed limit for the line and should have slowed from to before entering the curve. Officials have indicated that excessive speed may have caused the accident. During testing, some safety features that usually prevent accidents like this one are switched off. 2018 Ankara train collision On 13 December 2018, a high-speed passenger train travelling at and a locomotive collided near Yenimahalle in Ankara Province, Turkey. Three cars (carriages/coaches) of the passenger train derailed in the collision. Three railroad engineers and five passengers were killed at the scene, and 84 people were injured. Another injured passenger later died, and 34 passengers, including two in critical condition, were treated in several hospitals. 2020 Livraga derailment On 6 February 2020, a high-speed train travelling at derailed at Livraga, Lombardy, Italy. The two drivers were killed and a number of passengers were injured. The cause, as reported by investigators, was that a faulty set of junction points was in the reverse position but was reported by the signaling system as being in the normal – i.e. straight – position. Ridership High-speed rail ridership has been increasing rapidly since 2000. At the beginning of the century, the largest share of ridership was on the Japanese Shinkansen network. In 2000, the Shinkansen was responsible for about 85% of the cumulative world ridership up to that point. This has since been surpassed by the Chinese high-speed rail network, which has been the largest contributor to global ridership growth since its inception. As of 2018, annual ridership of the Chinese high-speed rail network is over five times larger than that of the Shinkansen. 
Records Speed There are several definitions of "maximum speed": The maximum speed at which a train is allowed to run by law or policy in daily service (MOR) The maximum speed at which an unmodified train is proved to be capable of running The maximum speed at which a specially modified train is proved to be capable of running Absolute speed record Overall rail record The speed record for a pre-production unconventional passenger train was set by a seven-car L0 series manned maglev train at on 21 April 2015 in Yamanashi Prefecture, Japan. Conventional rail Since 1955, when France set a world speed record of 331 km/h, it has almost continuously held the absolute world speed record for conventional rail. The latest record is held by a TGV POS trainset, which reached in 2007, on the newly constructed LGV Est high-speed line. This run was for proof of concept and engineering, not to test normal passenger service. Maximum speed in service , the fastest trains currently in commercial operation are: Shanghai Maglev : (in China, on the lone maglev track) CR400AF/KCIC400AF, CR400BF, CRH2C, CRH3C, CRH380A & AL, CRH380B, BL & CL, CRH380D : (in China and Indonesia) TGV Duplex, TGV Réseau, TGV POS, TGV Euroduplex : (in France) Eurostar e320 : (in France and GB) E5 Series Shinkansen, E6 Series Shinkansen, H5 Series Shinkansen: (in Japan) ICE 3 Class 403, 406, 407 : (in Germany) AVE Class 103 : (in Spain) KTX-I, KTX-II, KTX-III : (in South Korea) AGV 575, ETR 1000 (Frecciarossa 1000): (in Italy) ETR 500: (in Italy) Many of these trains and their networks are technically capable of higher speeds, but they are capped for economic and commercial reasons (cost of electricity, increased maintenance, resulting ticket prices, etc.) Levitation trains The Shanghai Maglev Train reaches during its daily service on its dedicated line, holding the speed record for commercial train service. Conventional rail The fastest operating conventional trains are the Chinese CR400A and CR400B running on the Beijing–Shanghai HSR, after China relaunched 350 km/h operation on selected services effective 21 September 2017. In China, from July 2011 until September 2017, the maximum speed was officially , but a tolerance was acceptable, and trains often reached . Before that, from August 2008 to July 2011, China Railway High-speed trains held the highest commercial operating speed record with on some lines such as the Wuhan–Guangzhou high-speed railway. The speed of the service was reduced in 2011 due to high costs and safety concerns; the top speeds in China were reduced to on 1 July 2011. Six years later they started to be restored to their original high speeds. Other fast conventional trains are the French TGV POS, German ICE 3, and Japanese E5 and E6 Series Shinkansen with a maximum commercial speed of , the former two on some French high-speed lines, and the latter on part of the Tohoku Shinkansen line. In Spain, on the Madrid–Barcelona HSL, maximum speed is . Service distance The China Railway G403/4, G405/6 and D939/40 Beijing–Kunming trains (, 10 hours 43 minutes to 14 hours 54 minutes), which began service on 28 December 2016, are the longest high-speed rail services in the world. Existing systems by country and region The early high-speed lines, built in France, Japan, Italy and Spain, were between pairs of large cities. In France this was Paris–Lyon; in Japan, Tokyo–Osaka; in Italy, Rome–Florence; and in Spain, Madrid–Seville (later Barcelona). 
In European and East Asian countries, dense networks of urban subways and railways provide connections with high-speed rail lines. Asia China China has the largest network of high-speed railways in the world. It encompassed over of high-speed rail, or over two-thirds of the world's total. It is also the world's busiest, with an annual ridership of over 1.44 billion in 2016 and 2.01 billion in 2018, more than 60% of total passenger rail volume. By the end of 2018, the cumulative number of passengers carried by high-speed trains was reported to be over 9 billion. According to Railway Gazette International, select trains between Beijing South and Nanjing South on the Beijing–Shanghai high-speed railway have the fastest average operating speed in the world at . The improved mobility and interconnectivity created by these new high-speed rail lines have generated a whole new high-speed commuter market around some urban areas. Commutes via high-speed rail from surrounding Hebei and Tianjin into Beijing have become increasingly common, as have commutes between the cities surrounding Shanghai, Shenzhen and Guangzhou. Hong Kong An entirely underground express rail link connects Hong Kong West Kowloon railway station near Kwun Chung to the border with the Chinese mainland, where the railway continues onwards to Shenzhen's Futian station. A depot and the stabling sidings are located in Shek Kong. Parts of the West Kowloon station are not under the jurisdiction of Hong Kong, to facilitate co-location of border clearance. Indonesia Indonesia operates a high-speed rail line, the Whoosh HSR, connecting its two largest cities in western Java, with an operational speed of . Operations commenced in October 2023. It is the first high-speed railway in Southeast Asia and the Southern Hemisphere. Japan In Japan, the Shinkansen was the first high-speed railway and has a cumulative ridership of more than 10 billion passengers, with zero passenger fatalities due to operational accidents in its 60+ years of operation. It is the second-largest high-speed rail system in Asia, with of high-speed lines. Saudi Arabia Service in Saudi Arabia was planned as a phased opening, starting with the route from Medina to King Abdullah Economic City and followed by the rest of the line to Mecca the following year. The Haramain high-speed railway opened in 2018. South Korea Between its opening in 2004 and April 2013, KTX carried over 360 million passengers; it is now Asia's third-largest system, with of rail lines. For journeys above , KTX has secured a market share of 57%, by far the largest among competing modes of transport. Taiwan Taiwan has a single north–south high-speed line, Taiwan High Speed Rail. It is approximately long, running along the west coast of Taiwan from the national capital Taipei to the southern city of Kaohsiung. Construction was managed by the Taiwan High Speed Rail Corporation, and the total cost of the project was US$18 billion. The private company operates the line fully, and the system is based primarily on Japan's Shinkansen technology. Eight initial stations were built during the construction of the high-speed rail system: Taipei, Banqiao, Taoyuan, Hsinchu, Taichung, Chiayi, Tainan, and Zuoying (Kaohsiung). The line now has 12 total stations (Nangang, Taipei, Banqiao, Taoyuan, Hsinchu, Miaoli, Taichung, Changhua, Yunlin, Chiayi, Tainan and Zuoying) as of August 2018. There are planned and approved extensions to Yilan and Pingtung, which are set to enter service by 2030. 
Uzbekistan Uzbekistan has a single high-speed rail line, the Tashkent–Samarkand high-speed rail line, which allows trains to reach up to with of rail lines. There are also electrified extensions at lower speeds to Bukhara and Dehkanabad. Africa Morocco In November 2007, the Moroccan government decided to undertake the construction of a high-speed rail line between the economic capital Casablanca and Tangier, one of the largest harbour cities on the Strait of Gibraltar. The line will also serve the capital Rabat and Kenitra. The first section of the line, the Kenitra–Tangier high-speed rail line, was completed in 2018. Europe In Europe, several nations are interconnected by cross-border high-speed rail, such as London-Paris, Paris-Brussels-Rotterdam and Madrid-Perpignan, and further connecting projects are planned. France France has of high-speed rail lines, making it one of the largest networks in Europe and the world. Market segmentation has principally focused on business travel. France's original focus on business travellers is reflected in the early design of the TGV trains. Pleasure travel was a secondary market; now many of the French extensions connect with vacation beaches on the Atlantic and Mediterranean, as well as major amusement parks and ski resorts in France and Switzerland. Friday evenings are the peak time for TGVs (train à grande vitesse). The system lowered prices on long-distance travel to compete more effectively with air services, and as a result some cities within an hour of Paris by TGV have become commuter communities, increasing the market while restructuring land use. On the Paris–Lyon service, the number of passengers grew sufficiently to justify the introduction of double-decker coaches. Later high-speed rail lines, such as the LGV Atlantique, the LGV Est, and most high-speed lines in France, were designed as feeder routes branching into conventional rail lines, serving a larger number of medium-sized cities. Germany Germany's first high-speed lines ran north–south for historical reasons; east–west lines were developed later, after German reunification. In the early 1900s, Germany became the first country to run a prototype electric train at speeds in excess of 200 km/h, and during the 1930s several steam and diesel trains achieved revenue speeds of 160 km/h in daily service. The InterCityExperimental briefly held the world speed record for a steel-wheel-on-steel-rail vehicle during the 1980s. The InterCityExpress entered revenue service in 1991 and serves purpose-built high-speed lines (), upgraded legacy lines (), and unmodified legacy lines. Lufthansa, Germany's flag carrier, has entered into a codeshare agreement with Deutsche Bahn under which ICEs run as "feeder flights" bookable with a Lufthansa flight number under the AIRail program. Greece In 2022, Greece's first high-speed train began operations between Athens and Thessaloniki. The 512 km (318 mi) route is covered in 3 to 4 hours, with trains reaching speeds of up to 250 km/h (160 mph). The 180 km (112 mi) line from Athens to Patras is also being upgraded to high speed, with completion expected by 2026. The route between Athens and Thessaloniki was previously among the busiest passenger air routes in Europe. Italy During the 1920s and 1930s, Italy was one of the first countries to develop the technology for high-speed rail. 
The country constructed the Direttissime railways connecting major cities on dedicated electrified high-speed track (although at speeds lower than what would today be considered high-speed rail) and developed the fast ETR 200 trainset. After the Second World War and the fall of the fascist regime, interest in high-speed rail dwindled, with successive governments considering it too costly and instead developing the tilting Pendolino to run at medium-high speeds (up to ) on conventional lines. A true dedicated high-speed rail network was developed during the 1980s and the 1990s, and of high-speed rail were fully operational by 2010. Frecciarossa services are operated with ETR 500 and ETR 1000 non-tilting trains powered at 25 kV AC, 50 Hz. The operational speed of the service is . Over 100 million passengers used the Frecciarossa between the service's introduction and the first months of 2012. The high-speed rail system carried about 20 billion passenger-km per year as of 2016. Italian high-speed services are profitable without government funding. Nuovo Trasporto Viaggiatori, the world's first private open-access operator of high-speed rail, has been operating in Italy since 2012. Norway As of 2015, Norway's fastest trains had a commercial top speed of , and the FLIRT trains may attain . A speed of is permitted on the Gardermoen Line, which links Gardermoen airport to Oslo and forms part of the main line northwards to Trondheim. Some parts of the trunk railways around Oslo are being renewed and built for : the Follo Line southwards from Oslo, a new Oslo–Ski line on the Østfold Line corridor, mainly in tunnel, planned to be ready in 2021; the Holm–Holmestrand–Nykirke section of the Vestfold Line (west to southwest of Oslo); and the Farriseidet project, between Larvik and Porsgrunn on the Vestfold Line, in tunnel. Russia The existing Saint Petersburg–Moscow Railway can operate at maximum speeds of 250 km/h; the Helsinki–Saint Petersburg route, whose cross-border service was discontinued after the 2022 Russian invasion of Ukraine, was capable of a maximum of 200 km/h. A new Moscow–Saint Petersburg high-speed railway, designed specifically for high-speed rail, is currently under construction: once completed, it is expected to have a maximum speed of 400 km/h. Future plans include freight lines, such as the Trans-Siberian Railway, which would allow three-day Far East–Europe freight service, fitting in between the months taken by ship and the hours taken by air. Serbia A high-speed line served by SOKO (, meaning "falcon") trains connects the country's two most populous cities: Belgrade, the national capital, and Novi Sad, the capital of Vojvodina. In contrast to the slower Stadler FLIRT trains used on Regio services, the Stadler KISS trains take 36 minutes to travel between the two cities. In addition to the two main stations, the trains stop only in New Belgrade. The line is currently being extended to reach Subotica, Serbia's northernmost city. The work is expected to be finished by the end of 2024, with an anticipated travel time between Belgrade and Subotica of around 70 minutes. Spain Spain has built an extensive high-speed rail network, with a length of (2021), the longest in Europe. It uses standard gauge as opposed to the Iberian gauge used in most of the national railway network, meaning that the high-speed tracks are separated and not shared with local trains or freight. 
Although standard gauge is the norm for Spanish high-speed rail, a regional high-speed service running on Iberian gauge with special trains has connected the cities of Ourense, Santiago de Compostela, A Coruña, and Vigo in northwestern Spain since 2011. Connections to the French network have existed since 2013, with direct trains from Paris to Barcelona, although on the French side conventional-speed tracks are used between Perpignan and Montpellier. Switzerland High-speed north–south freight lines in Switzerland are under construction, intended to avoid slow mountainous truck traffic and lower labour costs. The new lines, in particular the Gotthard Base Tunnel, are built for , but the short high-speed sections and the mix with freight traffic lower the average speeds. The limited size of the country gives fairly short domestic travel times anyway. Switzerland is also investing in lines on French and German soil to give better access from Switzerland to the high-speed rail networks of those countries. Turkey The Turkish State Railways started building high-speed rail lines in 2003. The first section of the line, between Ankara and Eskişehir, was inaugurated on 13 March 2009. It is a part of the Istanbul to Ankara high-speed rail line. A subsidiary of Turkish State Railways, Yüksek Hızlı Tren is the sole commercial operator of high-speed trains in Turkey. The construction of three separate high-speed lines from Ankara to Istanbul, Konya and Sivas, as well as taking an Ankara–İzmir line to the launch stage, forms part of the Turkish Ministry of Transport's strategic aims and targets. United Kingdom The UK's fastest high-speed line (High Speed 1) connects London St Pancras with Brussels, Paris and Amsterdam through the Channel Tunnel. At speeds of up to , it is the only high-speed line in Britain with an operating speed of more than . The Great Western Main Line, South Wales Main Line, West Coast Main Line, Midland Main Line, Cross Country Route and East Coast Main Line all have maximum speed limits of . Attempts to increase speeds to on both the West Coast Main Line and East Coast Main Line were abandoned in the 1980s because trains operating on those lines were not equipped for cab signalling, which was made a legal requirement in the UK for any service operating at speeds greater than , owing to the impracticality of observing lineside signals at such speeds. North America United States The United States has domestic definitions of high-speed rail that vary between jurisdictions. The United States Code defines high-speed rail as services "reasonably expected to reach sustained speeds of more than ", while the Federal Railroad Administration uses a definition of top speeds at and above. The Congressional Research Service uses the term "higher-speed rail" for speeds up to and "very high-speed rail" for rail on dedicated tracks with speeds over 150 mph. Amtrak's Acela Express (reaching ), Northeast Regional, Keystone Service, Silver Star, Vermonter and certain MARC Penn Line express trains (the three reaching ) are currently the only high-speed services on the American continent according to the American definition, although they are not considered high-speed by international standards. These services are all limited to the Northeast Corridor. The Acela Express links Boston, New York City, Philadelphia, Baltimore, and Washington, D.C.; Northeast Regional trains travel the whole of the same route but make more station stops. 
All other high-speed rail services travel over portions of the route. As of 2024, there are two high-speed rail projects under construction in the United States. The California High-Speed Rail project, eventually linking the five largest cities in California, is planned to have its first operating segment, between Merced and Bakersfield, begin passenger service as soon as 2030. The Brightline West project is planned to be privately operated and link the Las Vegas Valley and Rancho Cucamonga in the Greater Los Angeles area, with service set to begin as soon as 2028. Inter-city effects With high-speed rail there has been an increase in accessibility between cities. It allows for urban regeneration, access to cities near and far, and efficient inter-city relationships. Better inter-city relationships lead to high-level services for companies, advanced technology, and marketing. The most important effect of HSR is the increase in accessibility due to shorter travel times. HSR lines have been used to create long-distance routes which in many cases cater to business travellers. However, there have also been short-distance routes that have revolutionised the concept of HSR: they create commuting relationships between cities, opening up more opportunities. Using both longer-distance and shorter-distance high-speed rail in one country offers the greatest scope for economic development, widening the labor and residential market of a metropolitan area and extending it to smaller cities. HSR is therefore closely tied to urban development: it attracts offices and start-ups, induces industrial displacement, and promotes firm innovation. Closures The KTX Incheon International Airport–Seoul service (operated over the Incheon AREX line) was closed in 2018 due to a mix of issues, including poor ridership and track sharing. The AREX was not constructed as high-speed rail, resulting in a cap of 150 km/h on KTX services over that section. In China, many conventional lines that had been upgraded to 200 km/h had their high-speed services shifted to parallel high-speed lines. These lines, which often pass through towns and have level crossings, are still used for local and freight trains. For example, all (passenger) EMU services on the Hankou–Danjiangkou railway were routed over the Wuhan–Shiyan high-speed railway on its opening to free up capacity for freight trains on the slower railway.
Technology
Trains
null
50397
https://en.wikipedia.org/wiki/Cerebellum
Cerebellum
The cerebellum (plural: cerebella or cerebellums; Latin for "little brain") is a major feature of the hindbrain of all vertebrates. Although usually smaller than the cerebrum, in some animals such as the mormyrid fishes it may be as large as the cerebrum or even larger. In humans, the cerebellum plays an important role in motor control and cognitive functions such as attention and language, as well as in emotional control such as regulating fear and pleasure responses, but its movement-related functions are the most solidly established. The human cerebellum does not initiate movement, but contributes to coordination, precision, and accurate timing: it receives input from sensory systems of the spinal cord and from other parts of the brain, and integrates these inputs to fine-tune motor activity. Cerebellar damage produces disorders in fine movement, equilibrium, posture, and motor learning in humans. Anatomically, the human cerebellum has the appearance of a separate structure attached to the bottom of the brain, tucked underneath the cerebral hemispheres. Its cortical surface is covered with finely spaced parallel grooves, in striking contrast to the broad irregular convolutions of the cerebral cortex. These parallel grooves conceal the fact that the cerebellar cortex is actually a thin, continuous layer of tissue tightly folded in the style of an accordion. Within this thin layer are several types of neurons with a highly regular arrangement, the most important being Purkinje cells and granule cells. This complex neural organization gives rise to a massive signal-processing capability, but almost all of the output from the cerebellar cortex passes through a set of small deep nuclei lying in the white matter interior of the cerebellum. In addition to its direct role in motor control, the cerebellum is necessary for several types of motor learning, most notably learning to adjust to changes in sensorimotor relationships. Several theoretical models have been developed to explain sensorimotor calibration in terms of synaptic plasticity within the cerebellum. These models derive from those formulated by David Marr and James Albus, based on the observation that each cerebellar Purkinje cell receives two dramatically different types of input: one comprises thousands of weak inputs from the parallel fibers of the granule cells; the other is an extremely strong input from a single climbing fiber. The basic concept of the Marr–Albus theory is that the climbing fiber serves as a "teaching signal", which induces a long-lasting change in the strength of parallel fiber inputs. Observations of long-term depression in parallel fiber inputs have provided some support for theories of this type, but their validity remains controversial. Structure At the level of gross anatomy, the cerebellum consists of a tightly folded layer of cortex, with white matter underneath and a fluid-filled ventricle at the base. Four deep cerebellar nuclei are embedded in the white matter. Each part of the cortex consists of the same small set of neuronal elements, laid out in a highly stereotyped geometry. At an intermediate level, the cerebellum and its auxiliary structures can be separated into several hundred or thousand independently functioning modules called "microzones" or "microcompartments". Gross anatomy The cerebellum is located in the posterior cranial fossa. The fourth ventricle, pons and medulla are in front of the cerebellum. 
It is separated from the overlying cerebrum by a layer of leathery dura mater, the cerebellar tentorium; all of its connections with other parts of the brain travel through the pons. Anatomists classify the cerebellum as part of the metencephalon, which also includes the pons; the metencephalon is the upper part of the rhombencephalon or "hindbrain". Like the cerebral cortex, the cerebellum is divided into two cerebellar hemispheres; it also contains a narrow midline zone (the vermis). A set of large folds is, by convention, used to divide the overall structure into 10 smaller "lobules". Because of its large number of tiny granule cells, the cerebellum contains more neurons than the total from the rest of the brain, but takes up only 10% of the total brain volume. The number of neurons in the cerebellum is related to the number of neurons in the neocortex. There are about 3.6 times as many neurons in the cerebellum as in the neocortex, a ratio that is conserved across many different mammalian species. The unusual surface appearance of the cerebellum conceals the fact that most of its volume is made up of a very tightly folded layer of gray matter: the cerebellar cortex. Each ridge or gyrus in this layer is called a folium. High‑resolution MRI finds the adult human cerebellar cortex has an area of 730 square cm, packed within a volume of dimensions 6 cm × 5 cm × 10 cm. Underneath the gray matter of the cortex lies white matter, made up largely of myelinated nerve fibers running to and from the cortex. Embedded within the white matter—which is sometimes called the arbor vitae (tree of life) because of its branched, tree-like appearance in cross-section—are four deep cerebellar nuclei, composed of gray matter. Connecting the cerebellum to different parts of the nervous system are three paired cerebellar peduncles. These are the superior cerebellar peduncle, the middle cerebellar peduncle and the inferior cerebellar peduncle, named by their position relative to the vermis. The superior cerebellar peduncle is mainly an output to the cerebral cortex, carrying efferent fibers via thalamic nuclei to upper motor neurons in the cerebral cortex. The fibers arise from the deep cerebellar nuclei. The middle cerebellar peduncle is connected to the pons and receives all of its input from the pons mainly from the pontine nuclei. The input to the pons is from the cerebral cortex and is relayed from the pontine nuclei via transverse pontine fibers to the cerebellum. The middle peduncle is the largest of the three and its afferent fibers are grouped into three separate fascicles taking their inputs to different parts of the cerebellum. The inferior cerebellar peduncle receives input from afferent fibers from the vestibular nuclei, spinal cord and the tegmentum. Output from the inferior peduncle is via efferent fibers to the vestibular nuclei and the reticular formation. The whole of the cerebellum receives modulatory input from the inferior olivary nucleus via the inferior cerebellar peduncle. Subdivisions Based on the surface appearance, three lobes can be distinguished within the cerebellum: the anterior lobe (above the primary fissure), the posterior lobe (below the primary fissure), and the flocculonodular lobe (below the posterior fissure). These lobes divide the cerebellum from rostral to caudal (in humans, top to bottom). In terms of function, however, there is a more important distinction along the medial-to-lateral dimension. 
Leaving out the flocculonodular lobe, which has distinct connections and functions, the cerebellum can be parsed functionally into a medial sector called the spinocerebellum and a larger lateral sector called the cerebrocerebellum. A narrow strip of protruding tissue along the midline is called the cerebellar vermis. (Vermis is Latin for "worm".) The smallest region, the flocculonodular lobe, is often called the vestibulocerebellum. It is the oldest part in evolutionary terms (archicerebellum) and participates mainly in balance and spatial orientation; its primary connections are with the vestibular nuclei, although it also receives visual and other sensory input. Damage to this region causes disturbances of balance and gait. The medial zone of the anterior and posterior lobes constitutes the spinocerebellum, also known as paleocerebellum. This sector of the cerebellum functions mainly to fine-tune body and limb movements. It receives proprioceptive input from the dorsal columns of the spinal cord (including the spinocerebellar tract) and from the cranial trigeminal nerve, as well as from visual and auditory systems. It sends fibers to deep cerebellar nuclei that, in turn, project to both the cerebral cortex and the brain stem, thus providing modulation of descending motor systems. The lateral zone, which in humans is by far the largest part, constitutes the cerebrocerebellum, also known as neocerebellum. It receives input exclusively from the cerebral cortex (especially the parietal lobe) via the pontine nuclei (forming cortico-ponto-cerebellar pathways), and sends output mainly to the ventrolateral thalamus (in turn connected to motor areas of the premotor cortex and primary motor area of the cerebral cortex) and to the red nucleus. There is disagreement about the best way to describe the functions of the lateral cerebellum: It is thought to be involved in planning movement that is about to occur, in evaluating sensory information for action, and in a number of purely cognitive functions, such as determining the verb which best fits with a certain noun (as in "sit" for "chair"). Microanatomy Two types of neuron play dominant roles in the cerebellar circuit: Purkinje cells and granule cells. Three types of axons also play dominant roles: mossy fibers and climbing fibers (which enter the cerebellum from outside), and parallel fibers (which are the axons of granule cells). There are two main pathways through the cerebellar circuit, originating from mossy fibers and climbing fibers, both eventually terminating in the deep cerebellar nuclei. Mossy fibers project directly to the deep nuclei, but also give rise to the following pathway: mossy fibers → granule cells → parallel fibers → Purkinje cells → deep nuclei. Climbing fibers project to Purkinje cells and also send collaterals directly to the deep nuclei. The mossy fiber and climbing fiber inputs each carry fiber-specific information; the cerebellum also receives dopaminergic, serotonergic, noradrenergic, and cholinergic inputs that presumably perform global modulation. The cerebellar cortex is divided into three layers. At the bottom lies the thick granular layer, densely packed with granule cells, along with interneurons, mainly Golgi cells but also including Lugaro cells and unipolar brush cells. In the middle lies the Purkinje layer, a narrow zone that contains the cell bodies of Purkinje cells and Bergmann glial cells. 
At the top lies the molecular layer, which contains the flattened dendritic trees of Purkinje cells, along with the huge array of parallel fibers penetrating the Purkinje cell dendritic trees at right angles. This outermost layer of the cerebellar cortex also contains two types of inhibitory interneuron: stellate cells and basket cells. Both stellate and basket cells form GABAergic synapses onto Purkinje cell dendrites. Layers of the cerebellar cortex Molecular layer The top, outermost layer of the cerebellar cortex is the molecular layer. This layer contains the flattened dendritic trees of Purkinje cells, and the huge array of parallel fibers, from the granular layer, that penetrate the Purkinje cell dendritic trees at right angles. The molecular layer also contains two types of inhibitory interneuron: stellate cells and basket cells. Both stellate and basket cells form GABAergic synapses onto Purkinje cell dendrites. Purkinje layer Purkinje cells are among the most distinctive neurons in the brain, and one of the earliest types to be recognized—they were first described by the Czech anatomist Jan Evangelista Purkyně in 1837. They are distinguished by the shape of their dendritic tree: the dendrites branch very profusely, but are severely flattened in a plane perpendicular to the cerebellar folds. Thus, the dendrites of a Purkinje cell form a dense planar net, through which parallel fibers pass at right angles. The dendrites are covered with dendritic spines, each of which receives synaptic input from a parallel fiber. Purkinje cells receive more synaptic inputs than any other type of cell in the brain—estimates of the number of spines on a single human Purkinje cell run as high as 200,000. The large, spherical cell bodies of Purkinje cells are packed into a narrow layer (one cell thick) of the cerebellar cortex, called the Purkinje layer. After emitting collaterals that affect nearby parts of the cortex, their axons travel into the deep cerebellar nuclei, where they make on the order of 1,000 contacts each with several types of nuclear cells, all within a small domain. Purkinje cells use GABA as their neurotransmitter, and therefore exert inhibitory effects on their targets. Purkinje cells form the heart of the cerebellar circuit, and their large size and distinctive activity patterns have made it relatively easy to study their response patterns in behaving animals using extracellular recording techniques. Purkinje cells normally emit action potentials at a high rate even in the absence of the synaptic input. In awake, behaving animals, mean rates averaging around 40 Hz are typical. The spike trains show a mixture of what are called simple and complex spikes. A simple spike is a single action potential followed by a refractory period of about 10 ms; a complex spike is a stereotyped sequence of action potentials with very short inter-spike intervals and declining amplitudes. Physiological studies have shown that complex spikes (which occur at baseline rates around 1 Hz and never at rates much higher than 10 Hz) are reliably associated with climbing fiber activation, while simple spikes are produced by a combination of baseline activity and parallel fiber input. Complex spikes are often followed by a pause of several hundred milliseconds during which simple spike activity is suppressed. A specific, recognizable feature of Purkinje neurons is the expression of calbindin. 
Calbindin staining of rat brain after unilateral chronic sciatic nerve injury suggests that Purkinje neurons may be newly generated in the adult brain, initiating the organization of new cerebellar lobules. Granular layer Cerebellar granule cells, in contrast to Purkinje cells, are among the smallest neurons in the brain. They are also the most numerous neurons in the brain: In humans, estimates of their total number average around 50 billion, which means that about 3/4 of the brain's neurons are cerebellar granule cells. Their cell bodies are packed into a thick layer at the bottom of the cerebellar cortex. A granule cell emits only four to five dendrites, each of which ends in an enlargement called a dendritic claw. These enlargements are sites of excitatory input from mossy fibers and inhibitory input from Golgi cells. The thin, unmyelinated axons of granule cells rise vertically to the upper (molecular) layer of the cortex, where they split in two, with each branch traveling horizontally to form a parallel fiber; the splitting of the vertical branch into two horizontal branches gives rise to a distinctive "T" shape. A human parallel fiber runs for an average of 3 mm in each direction from the split, for a total length of about 6 mm (about 1/10 of the total width of the cortical layer). As they run along, the parallel fibers pass through the dendritic trees of Purkinje cells, contacting one of every 3–5 that they pass, making a total of 80–100 synaptic connections with Purkinje cell dendritic spines. Granule cells use glutamate as their neurotransmitter, and therefore exert excitatory effects on their targets. Granule cells receive all of their input from mossy fibers, but outnumber them by 200 to 1 (in humans). Thus, the information in the granule cell population activity state is the same as the information in the mossy fibers, but recoded in a much more expansive way. Because granule cells are so small and so densely packed, it is difficult to record their spike activity in behaving animals, so there is little data to use as a basis for theorizing. The most popular concept of their function was proposed in 1969 by David Marr, who suggested that they could encode combinations of mossy fiber inputs. The idea is that with each granule cell receiving input from only 4–5 mossy fibers, a granule cell would not respond if only a single one of its inputs were active, but would respond if more than one were active. This combinatorial coding scheme would potentially allow the cerebellum to make much finer distinctions between input patterns than the mossy fibers alone would permit. Mossy fibers Mossy fibers enter the granular layer from their points of origin, many arising from the pontine nuclei, others from the spinal cord, vestibular nuclei etc. In the human cerebellum, the total number of mossy fibers has been estimated at 200 million. These fibers form excitatory synapses with the granule cells and the cells of the deep cerebellar nuclei. Within the granular layer, a mossy fiber generates a series of enlargements called rosettes. The contacts between mossy fibers and granule cell dendrites take place within structures called glomeruli. Each glomerulus has a mossy fiber rosette at its center, and up to 20 granule cell dendritic claws contacting it. Terminals from Golgi cells infiltrate the structure and make inhibitory synapses onto the granule cell dendrites. The entire assemblage is surrounded by a sheath of glial cells. 
Each mossy fiber sends collateral branches to several cerebellar folia, generating a total of 20–30 rosettes; thus a single mossy fiber makes contact with an estimated 400–600 granule cells. Climbing fibers Purkinje cells also receive input from the inferior olivary nucleus on the contralateral side of the brainstem via climbing fibers. Although the inferior olive lies in the medulla oblongata and receives input from the spinal cord, brainstem and cerebral cortex, its output goes entirely to the cerebellum. A climbing fiber gives off collaterals to the deep cerebellar nuclei before entering the cerebellar cortex, where it splits into about 10 terminal branches, each of which gives input to a single Purkinje cell. In striking contrast to the 100,000-plus inputs from parallel fibers, each Purkinje cell receives input from exactly one climbing fiber; but this single fiber "climbs" the dendrites of the Purkinje cell, winding around them and making a total of up to 300 synapses as it goes. The net input is so strong that a single action potential from a climbing fiber is capable of producing an extended complex spike in the Purkinje cell: a burst of several spikes in a row, with diminishing amplitude, followed by a pause during which activity is suppressed. The climbing fiber synapses cover the cell body and proximal dendrites; this zone is devoid of parallel fiber inputs. Climbing fibers fire at low rates, but a single climbing fiber action potential induces a burst of several action potentials in a target Purkinje cell (a complex spike). The contrast between parallel fiber and climbing fiber inputs to Purkinje cells (over 100,000 of one type versus exactly one of the other type) is perhaps the most provocative feature of cerebellar anatomy, and has motivated much of the theorizing. In fact, the function of climbing fibers is the most controversial topic concerning the cerebellum. There are two schools of thought, one following Marr and Albus in holding that climbing fiber input serves primarily as a teaching signal, the other holding that its function is to shape cerebellar output directly. Both views have been defended in great length in numerous publications. In the words of one review, "In trying to synthesize the various hypotheses on the function of the climbing fibers, one has the sense of looking at a drawing by Escher. Each point of view seems to account for a certain collection of findings, but when one attempts to put the different views together, a coherent picture of what the climbing fibers are doing does not appear. For the majority of researchers, the climbing fibers signal errors in motor performance, either in the usual manner of discharge frequency modulation or as a single announcement of an 'unexpected event'. For other investigators, the message lies in the degree of ensemble synchrony and rhythmicity among a population of climbing fibers." Deep nuclei The deep nuclei of the cerebellum are clusters of gray matter lying within the white matter at the core of the cerebellum. They are, with the minor exception of the nearby vestibular nuclei, the sole sources of output from the cerebellum. These nuclei receive collateral projections from mossy fibers and climbing fibers as well as inhibitory input from the Purkinje cells of the cerebellar cortex. The four nuclei (dentate, globose, emboliform, and fastigial) each communicate with different parts of the brain and cerebellar cortex. 
(The globose and emboliform nuclei together are also referred to as the interposed nucleus.) The fastigial and interposed nuclei belong to the spinocerebellum. The dentate nucleus, which in mammals is much larger than the others, is formed as a thin, convoluted layer of gray matter, and communicates exclusively with the lateral parts of the cerebellar cortex. The flocculus of the flocculonodular lobe is the only part of the cerebellar cortex that does not project to the deep nuclei—its output goes to the vestibular nuclei instead. The majority of neurons in the deep nuclei have large cell bodies and spherical dendritic trees with a radius of about 400 μm, and use glutamate as their neurotransmitter. These cells project to a variety of targets outside the cerebellum. Intermixed with them are a lesser number of small cells, which use GABA as a neurotransmitter and project exclusively to the inferior olivary nucleus, the source of climbing fibers. Thus, the nucleo-olivary projection provides an inhibitory feedback to match the excitatory projection of climbing fibers to the nuclei. There is evidence that each small cluster of nuclear cells projects to the same cluster of olivary cells that send climbing fibers to it; there is strong and matching topography in both directions. When a Purkinje cell axon enters one of the deep nuclei, it branches to make contact with both large and small nuclear cells, but the total number of cells contacted is only about 35 (in cats). Conversely, a single deep nuclear cell receives input from approximately 860 Purkinje cells (again in cats). Compartments From the viewpoint of gross anatomy, the cerebellar cortex appears to be a homogeneous sheet of tissue, and, from the viewpoint of microanatomy, all parts of this sheet appear to have the same internal structure. There are, however, a number of respects in which the structure of the cerebellum is compartmentalized. There are large compartments that are generally known as zones; these can be divided into smaller compartments known as microzones. The first indications of compartmental structure came from studies of the receptive fields of cells in various parts of the cerebellar cortex. Each body part maps to specific points in the cerebellum, but there are numerous repetitions of the basic map, forming an arrangement that has been called "fractured somatotopy". A clearer indication of compartmentalization is obtained by immunostaining the cerebellum for certain types of protein. The best-known of these markers are called "zebrins", because staining for them gives rise to a complex pattern reminiscent of the stripes on a zebra. The stripes generated by zebrins and other compartmentalization markers are oriented perpendicular to the cerebellar folds—that is, they are narrow in the mediolateral direction, but much more extended in the longitudinal direction. Different markers generate different sets of stripes, and the widths and lengths vary as a function of location, but they all have the same general shape. Oscarsson in the late 1970s proposed that these cortical zones can be partitioned into smaller units called microzones. A microzone is defined as a group of Purkinje cells all having the same somatotopic receptive field. Microzones were found to contain on the order of 1000 Purkinje cells each, arranged in a long, narrow strip, oriented perpendicular to the cortical folds. 
Thus, as the adjoining diagram illustrates, Purkinje cell dendrites are flattened in the same direction as the microzones extend, while parallel fibers cross them at right angles. It is not only receptive fields that define the microzone structure: The climbing fiber input from the inferior olivary nucleus is equally important. The branches of a climbing fiber (usually numbering about 10) usually activate Purkinje cells belonging to the same microzone. Moreover, olivary neurons that send climbing fibers to the same microzone tend to be coupled by gap junctions, which synchronize their activity, causing Purkinje cells within a microzone to show correlated complex spike activity on a millisecond time scale. Also, the Purkinje cells belonging to a microzone all send their axons to the same small cluster of output cells within the deep cerebellar nuclei. Finally, the axons of basket cells are much longer in the longitudinal direction than in the mediolateral direction, causing them to be confined largely to a single microzone. The consequence of all this structure is that cellular interactions within a microzone are much stronger than interactions between different microzones. In 2005, Richard Apps and Martin Garwicz summarized evidence that microzones themselves form part of a larger entity they call a multizonal microcomplex. Such a microcomplex includes several spatially separated cortical microzones, all of which project to the same group of deep cerebellar neurons, plus a group of coupled olivary neurons that project to all of the included microzones as well as to the deep nuclear area. Blood supply The cerebellum is provided with blood from three paired major arteries: the superior cerebellar artery (SCA), the anterior inferior cerebellar artery (AICA), and the posterior inferior cerebellar artery (PICA). The SCA supplies the upper region of the cerebellum. It divides at the upper surface and branches into the pia mater where the branches anastomose with those of the anterior and posterior inferior cerebellar arteries. The AICA supplies the front part of the undersurface of the cerebellum. The PICA arrives at the undersurface, where it divides into a medial branch and a lateral branch. The medial branch continues backward to the cerebellar notch between the two hemispheres of the cerebellum; while the lateral branch supplies the under surface of the cerebellum, as far as its lateral border, where it anastomoses with the AICA and the SCA. Function The strongest clues to the function of the cerebellum have come from examining the consequences of damage to it. Animals and humans with cerebellar dysfunction show, above all, problems with motor control, on the same side of the body as the damaged part of the cerebellum. They continue to be able to generate motor activity but lose precision, producing erratic, uncoordinated, or incorrectly timed movements. A standard test of cerebellar function is to reach with the tip of the finger for a target at arm's length: A healthy person will move the fingertip in a rapid straight trajectory, whereas a person with cerebellar damage will reach slowly and erratically, with many mid-course corrections. Deficits in non-motor functions are more difficult to detect. Thus, the general conclusion reached decades ago is that the basic function of the cerebellum is to calibrate the detailed form of a movement, not to initiate movements or to decide which movements to execute. 
Prior to the 1990s the function of the cerebellum was almost universally believed to be purely motor-related, but newer findings have brought that view into question. Functional imaging studies have shown cerebellar activation in relation to language, attention, and mental imagery; correlation studies have shown interactions between the cerebellum and non-motor areas of the cerebral cortex; and a variety of non-motor symptoms have been recognized in people with damage that appears to be confined to the cerebellum. In particular, the cerebellar cognitive affective syndrome or Schmahmann's syndrome has been described in adults and children. Estimates based on functional mapping of the cerebellum using functional MRI suggest that more than half of the cerebellar cortex is interconnected with association zones of the cerebral cortex. Kenji Doya has argued that the cerebellum's function is best understood not in terms of the behaviors it affects, but in terms of the neural computations it performs; the cerebellum consists of a large number of more or less independent modules, all with the same geometrically regular internal structure, and therefore all, it is presumed, performing the same computation. If the input and output connections of a module are with motor areas (as many are), then the module will be involved in motor behavior; but, if the connections are with areas involved in non-motor cognition, the module will show other types of behavioral correlates. Thus the cerebellum has been implicated in the regulation of many differing functional traits such as affection, emotion (including the perception of emotional body language), and behavior. The cerebellum, Doya proposes, is best understood as a device for supervised learning, carrying out predictive action selection based on "internal models" of the environment, in contrast to the basal ganglia, which perform reinforcement learning, and the cerebral cortex, which performs unsupervised learning. Three decades of brain research have led to the proposal that the cerebellum generates optimized mental models and interacts closely with the cerebral cortex, where updated internal models are experienced as creative intuition ("aha") in working memory. 
As Eccles, Ito, and Szentágothai wrote, "This elimination in the design of all possibility of reverberatory chains of neuronal excitation is undoubtedly a great advantage in the performance of the cerebellum as a computer, because what the rest of the nervous system requires from the cerebellum is presumably not some output expressing the operation of complex reverberatory circuits in the cerebellum but rather a quick and clear response to the input of any particular set of information." Divergence and convergence: In the human cerebellum, information from 200 million mossy fiber inputs is expanded to 40 billion granule cells, whose parallel fiber outputs then converge onto 15 million Purkinje cells. Because of the way that they are lined up longitudinally, the 1000 or so Purkinje cells belonging to a microzone may receive input from as many as 100 million parallel fibers, and focus their own output down to a group of less than 50 deep nuclear cells. Thus, the cerebellar network receives a modest number of inputs, processes them very extensively through its rigorously structured internal network, and sends out the results via a very limited number of output cells. Modularity: The cerebellar system is functionally divided into more or less independent modules, which probably number in the hundreds to thousands. All modules have a similar internal structure, but different inputs and outputs. A module (a multizonal microcompartment in the terminology of Apps and Garwicz) consists of a small cluster of neurons in the inferior olivary nucleus, a set of long narrow strips of Purkinje cells in the cerebellar cortex (microzones), and a small cluster of neurons in one of the deep cerebellar nuclei. Different modules share input from mossy fibers and parallel fibers, but in other respects they appear to function independently—the output of one module does not appear to significantly influence the activity of other modules. Plasticity: The synapses between parallel fibers and Purkinje cells, and the synapses between mossy fibers and deep nuclear cells, are both susceptible to modification of their strength. In a single cerebellar module, input from as many as a billion parallel fibers converges onto a group of less than 50 deep nuclear cells, and the influence of each parallel fiber on those nuclear cells is adjustable. This arrangement gives tremendous flexibility for fine-tuning the relationship between the cerebellar inputs and outputs. Learning There is considerable evidence that the cerebellum plays an essential role in some types of motor learning. The tasks where the cerebellum most clearly comes into play are those in which it is necessary to make fine adjustments to the way an action is performed. There has, however, been much dispute about whether learning takes place within the cerebellum itself, or whether it merely serves to provide signals that promote learning in other brain structures. Most theories that assign learning to the circuitry of the cerebellum are derived from the ideas of David Marr and James Albus, who postulated that climbing fibers provide a teaching signal that induces synaptic modification in parallel fiber–Purkinje cell synapses. Marr assumed that climbing fiber input would cause synchronously activated parallel fiber inputs to be strengthened. Most subsequent cerebellar-learning models, however, have followed Albus in assuming that climbing fiber activity would be an error signal, and would cause synchronously activated parallel fiber inputs to be weakened. 
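The feedforward wiring and the Albus-style learning rule described above can be caricatured in a few lines of code. The sketch below is a toy illustration only, not Marr's or Albus's actual model: the layer sizes, the learning rate, the error signal, and every name in it are assumptions made for the example.

```python
import numpy as np

# Toy sketch of one cerebellar "module": mossy-fiber input is expanded onto many
# granule cells (divergence), a Purkinje-like unit sums parallel-fiber activity
# through adjustable weights, and a climbing-fiber teaching signal adjusts the
# weights of the parallel fibers that were just active. All numbers are arbitrary.
rng = np.random.default_rng(0)

n_mossy, n_granule = 20, 500                        # few inputs, many granule cells
expansion = rng.standard_normal((n_granule, n_mossy)) / np.sqrt(n_mossy)
pf_weights = np.full(n_granule, 0.5)                # plastic parallel-fiber weights
learning_rate = 0.1

def module_output(mossy_input):
    """Feedforward pass: mossy fibers -> granule cells -> Purkinje-like output."""
    granule = np.tanh(expansion @ mossy_input)      # expansion recoding
    return granule, float(pf_weights @ granule)

def climbing_fiber_update(granule, error):
    """Albus-style teaching signal: synapses of recently active parallel fibers
    are adjusted against the climbing-fiber error (depressed when error > 0)."""
    global pf_weights
    pf_weights -= learning_rate * error * np.clip(granule, 0, None)

# One trial: compute the output, compare it with a target, apply the teaching signal.
x = rng.standard_normal(n_mossy)
granule, out = module_output(x)
climbing_fiber_update(granule, error=out - 1.0)     # drive the output toward 1.0
```

Repeating such trials drives the Purkinje-like output toward the target, which is the sense in which the climbing-fiber signal acts as a teacher in this caricature.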
Some of these later models, such as the Adaptive Filter model of Fujita made attempts to understand cerebellar function in terms of optimal control theory. The idea that climbing fiber activity functions as an error signal has been examined in many experimental studies, with some supporting it but others casting doubt. In a pioneering study by Gilbert and Thach from 1977, Purkinje cells from monkeys learning a reaching task showed increased complex spike activity—which is known to reliably indicate activity of the cell's climbing fiber input—during periods when performance was poor. Several studies of motor learning in cats observed complex spike activity when there was a mismatch between an intended movement and the movement that was actually executed. Studies of the vestibulo-ocular reflex (which stabilizes the visual image on the retina when the head turns) found that climbing fiber activity indicated "retinal slip", although not in a very straightforward way. One of the most extensively studied cerebellar learning tasks is the eyeblink conditioning paradigm, in which a neutral conditioned stimulus (CS) such as a tone or a light is repeatedly paired with an unconditioned stimulus (US), such as an air puff, that elicits a blink response. After such repeated presentations of the CS and US, the CS will eventually elicit a blink before the US, a conditioned response or CR. Experiments showed that lesions localized either to a specific part of the interposed nucleus (one of the deep cerebellar nuclei) or to a few specific points in the cerebellar cortex would abolish learning of a conditionally timed blink response. If cerebellar outputs are pharmacologically inactivated while leaving the inputs and intracellular circuits intact, learning takes place even while the animal fails to show any response, whereas, if intracerebellar circuits are disrupted, no learning takes place—these facts taken together make a strong case that the learning, indeed, occurs inside the cerebellum. Theories and computational models The large base of knowledge about the anatomical structure and behavioral functions of the cerebellum have made it a fertile ground for theorizing—there are perhaps more theories of the function of the cerebellum than of any other part of the brain. The most basic distinction among them is between "learning theories" and "performance theories"—that is, theories that make use of synaptic plasticity within the cerebellum to account for its role in learning, versus theories that account for aspects of ongoing behavior on the basis of cerebellar signal processing. Several theories of both types have been formulated as mathematical models and simulated using computers. Perhaps the earliest "performance" theory was the "delay line" hypothesis of Valentino Braitenberg. The original theory put forth by Braitenberg and Roger Atwood in 1958 proposed that slow propagation of signals along parallel fibers imposes predictable delays that allow the cerebellum to detect time relationships within a certain window. Experimental data did not support the original form of the theory, but Braitenberg continued to argue for modified versions. The hypothesis that the cerebellum functions essentially as a timing system has also been advocated by Richard Ivry. 
Another influential "performance" theory is the Tensor network theory of Pellionisz and Llinás, which provided an advanced mathematical formulation of the idea that the fundamental computation performed by the cerebellum is to transform sensory into motor coordinates. Theories in the "learning" category almost all derive from publications by Marr and Albus. Marr's 1969 paper proposed that the cerebellum is a device for learning to associate elemental movements encoded by climbing fibers with mossy fiber inputs that encode the sensory context. Albus proposed in 1971 that a cerebellar Purkinje cell functions as a perceptron, a neurally inspired abstract learning device. The most basic difference between the Marr and Albus theories is that Marr assumed that climbing fiber activity would cause parallel fiber synapses to be strengthened, whereas Albus proposed that they would be weakened. Albus also formulated his version as a software algorithm he called a CMAC (Cerebellar Model Articulation Controller), which has been tested in a number of applications. Clinical significance Damage to the cerebellum often causes motor-related symptoms, the details of which depend on the part of the cerebellum involved and how it is damaged. Damage to the flocculonodular lobe may show up as a loss of equilibrium and in particular an altered, irregular walking gait, with a wide stance caused by difficulty in balancing. Damage to the lateral zone typically causes problems in skilled voluntary and planned movements which can cause errors in the force, direction, speed and amplitude of movements. Other manifestations include hypotonia (decreased muscle tone), dysarthria (problems with speech articulation), dysmetria (problems judging distances or ranges of movement), dysdiadochokinesia (inability to perform rapid alternating movements such as walking), impaired check reflex or rebound phenomenon, and intention tremor (involuntary movement caused by alternating contractions of opposing muscle groups). Damage to the midline portion may disrupt whole-body movements, whereas damage localized more laterally is more likely to disrupt fine movements of the hands or limbs. Damage to the upper part of the cerebellum tends to cause gait impairments and other problems with leg coordination; damage to the lower part is more likely to cause uncoordinated or poorly aimed movements of the arms and hands, as well as difficulties in speed. This complex of motor symptoms is called ataxia. To identify cerebellar problems, neurological examination includes assessment of gait (a broad-based gait being indicative of ataxia), finger-pointing tests and assessment of posture. If cerebellar dysfunction is indicated, a magnetic resonance imaging scan can be used to obtain a detailed picture of any structural alterations that may exist. The list of medical problems that can produce cerebellar damage is long, including stroke, hemorrhage, swelling of the brain (cerebral edema), tumors, alcoholism, physical trauma such as gunshot wounds or explosives, and chronic degenerative conditions such as olivopontocerebellar atrophy. Some forms of migraine headache may also produce temporary dysfunction of the cerebellum, of variable severity. Infection can result in cerebellar damage in such conditions as the prion diseases and Miller Fisher syndrome, a variant of Guillain–Barré syndrome. Aging The human cerebellum changes with age. These changes may differ from those of other parts of the brain. 
The cerebellum is the youngest brain region (and body part) in centenarians according to an epigenetic biomarker of tissue age known as epigenetic clock: it is about 15 years younger than expected in a centenarian. Further, gene expression patterns in the human cerebellum show less age-related alteration than that in the cerebral cortex. Some studies have reported reductions in numbers of cells or volume of tissue, but the amount of data relating to this question is not very large. Developmental and degenerative disorders Congenital malformation, hereditary disorders, and acquired conditions can affect cerebellar structure and, consequently, cerebellar function. Unless the causative condition is reversible, the only possible treatment is to help people live with their problems. Visualization of the fetal cerebellum by ultrasound scan at 18 to 20 weeks of pregnancy can be used to screen for fetal neural tube defects with a sensitivity rate of up to 99%. In normal development, endogenous sonic hedgehog signaling stimulates rapid proliferation of cerebellar granule neuron progenitors (CGNPs) in the external granule layer (EGL). Cerebellar development occurs during late embryogenesis and the early postnatal period, with CGNP proliferation in the EGL peaking during early development (postnatal day 7 in the mouse). As CGNPs terminally differentiate into cerebellar granule cells (also called cerebellar granule neurons, CGNs), they migrate to the internal granule layer (IGL), forming the mature cerebellum (by post-natal day 20 in the mouse). Mutations that abnormally activate Sonic hedgehog signaling predispose to cancer of the cerebellum (medulloblastoma) in humans with Gorlin Syndrome and in genetically engineered mouse models. Congenital malformation or underdevelopment (hypoplasia) of the cerebellar vermis is a characteristic of both Dandy–Walker syndrome and Joubert syndrome. In very rare cases, the entire cerebellum may be absent. The inherited neurological disorders Machado–Joseph disease, ataxia telangiectasia, and Friedreich's ataxia cause progressive neurodegeneration linked to cerebellar loss. Congenital brain malformations outside the cerebellum can, in turn, cause herniation of cerebellar tissue, as seen in some forms of Arnold–Chiari malformation. Other conditions that are closely linked to cerebellar degeneration include the idiopathic progressive neurological disorders multiple system atrophy and Ramsay Hunt syndrome type I, and the autoimmune disorder paraneoplastic cerebellar degeneration, in which tumors elsewhere in the body elicit an autoimmune response that causes neuronal loss in the cerebellum. Cerebellar atrophy can result from an acute deficiency of vitamin B1 (thiamine) as seen in beriberi and in Wernicke–Korsakoff syndrome, or vitamin E deficiency. Cerebellar atrophy has been observed in many other neurological disorders including Huntington's disease, multiple sclerosis, essential tremor, progressive myoclonus epilepsy, and Niemann–Pick disease. Cerebellar atrophy can also occur as a result of exposure to toxins including heavy metals or pharmaceutical or recreational drugs. Pain There is a general consensus that the cerebellum is involved in pain processing. The cerebellum receives pain input from both descending cortico-cerebellar pathways and ascending spino-cerebellar pathways, through the pontine nuclei and inferior olives. Some of this information is transferred to the motor system inducing a conscious motor avoidance of pain, graded according to pain intensity. 
These direct pain inputs, as well as indirect inputs, are thought to induce long-term pain avoidance behavior that results in chronic posture changes and consequently, in functional and anatomical remodeling of vestibular and proprioceptive nuclei. As a result, chronic neuropathic pain can induce macroscopic anatomical remodeling of the hindbrain, including the cerebellum. The magnitude of this remodeling and the induction of neuron progenitor markers suggest the contribution of adult neurogenesis to these changes. Comparative anatomy and evolution The circuits in the cerebellum are similar across all classes of vertebrates, including fish, reptiles, birds, and mammals. There is also an analogous brain structure in cephalopods with well-developed brains, such as octopuses. This has been taken as evidence that the cerebellum performs functions important to all animal species with a brain. There is considerable variation in the size and shape of the cerebellum in different vertebrate species. In amphibians, it is little developed, and in lampreys, and hagfish, the cerebellum is barely distinguishable from the brain-stem. Although the spinocerebellum is present in these groups, the primary structures are small, paired-nuclei corresponding to the vestibulocerebellum. The cerebellum is a bit larger in reptiles, considerably larger in birds, and larger still in mammals. The large paired and convoluted lobes found in humans are typical of mammals, but the cerebellum is, in general, a single median lobe in other groups, and is either smooth or only slightly grooved. In mammals, the neocerebellum is the major part of the cerebellum by mass, but, in other vertebrates, it is typically the spinocerebellum. The cerebellum of cartilaginous and bony fishes is extraordinarily large and complex. In at least one important respect, it differs in internal structure from the mammalian cerebellum: The fish cerebellum does not contain discrete deep cerebellar nuclei. Instead, the primary targets of Purkinje cells are a distinct type of cell distributed across the cerebellar cortex, a type not seen in mammals. In mormyrid fish (a family of weakly electrosensitive freshwater fish), the cerebellum is considerably larger than the rest of the brain. The largest part of it is a special structure called the valvula, which has an unusually regular architecture and receives much of its input from the electrosensory system. The hallmark of the mammalian cerebellum is an expansion of the lateral lobes, whose main interactions are with the neocortex. As monkeys evolved into great apes, the expansion of the lateral lobes continued, in tandem with the expansion of the frontal lobes of the neocortex. In ancestral hominids, and in Homo sapiens until the middle Pleistocene period, the cerebellum continued to expand, but the frontal lobes expanded more rapidly. The most recent period of human evolution, however, may actually have been associated with an increase in the relative size of the cerebellum, as the neocortex reduced its size somewhat while the cerebellum expanded. The size of the human cerebellum, compared to the rest of the brain, has been increasing in size while the cerebrum decreased in size With both the development and implementation of motor tasks, visual-spatial skills and learning taking place in the cerebellum, the growth of the cerebellum is thought to have some form of correlation to greater human cognitive abilities. 
The lateral hemispheres of the cerebellum are now 2.7 times greater in both humans and apes than they are in monkeys. These changes in the cerebellum size cannot be explained by greater muscle mass. They show that either the development of the cerebellum is tightly linked to that of the rest of the brain or that neural activities taking place in the cerebellum were important during Hominidae evolution. Due to the cerebellum's role in cognitive functions, the increase in its size may have played a role in cognitive expansion. Cerebellum-like structures Most vertebrate species have a cerebellum and one or more cerebellum-like structures, brain areas that resemble the cerebellum in terms of cytoarchitecture and neurochemistry. The only cerebellum-like structure found in mammals is the dorsal cochlear nucleus (DCN), one of the two primary sensory nuclei that receive input directly from the auditory nerve. The DCN is a layered structure, with the bottom layer containing granule cells similar to those of the cerebellum, giving rise to parallel fibers that rise to the superficial layer and travel across it horizontally. The superficial layer contains a set of GABAergic neurons called cartwheel cells that resemble Purkinje cells anatomically and chemically—they receive parallel fiber input, but do not have any inputs that resemble climbing fibers. The output neurons of the DCN are pyramidal cells. They are glutamatergic, but also resemble Purkinje cells in some respects—they have spiny, flattened superficial dendritic trees that receive parallel fiber input, but they also have basal dendrites that receive input from auditory nerve fibers, which travel across the DCN in a direction at right angles to the parallel fibers. The DCN is most highly developed in rodents and other small animals, and is considerably reduced in primates. Its function is not well understood; the most popular speculations relate it to spatial hearing in one way or another. Most species of fish and amphibians possess a lateral line system that senses pressure waves in water. One of the brain areas that receives primary input from the lateral line organ, the medial octavolateral nucleus, has a cerebellum-like structure, with granule cells and parallel fibers. In electrosensitive fish, the input from the electrosensory system goes to the dorsal octavolateral nucleus, which also has a cerebellum-like structure. In ray-finned fishes (by far the largest group), the optic tectum has a layer—the marginal layer—that is cerebellum-like. All of these cerebellum-like structures appear to be primarily sensory-related rather than motor-related. All of them have granule cells that give rise to parallel fibers that connect to Purkinje-like neurons with modifiable synapses, but none have climbing fibers comparable to those of the cerebellum—instead they receive direct input from peripheral sensory organs. None has a demonstrated function, but the most influential speculation is that they serve to transform sensory inputs in some sophisticated way, perhaps to compensate for changes in body posture. In fact, James M. Bower and others have argued, partly on the basis of these structures and partly on the basis of cerebellar studies, that the cerebellum itself is fundamentally a sensory structure, and that it contributes to motor control by moving the body in a way that controls the resulting sensory signals. Despite Bower's viewpoint, there is also strong evidence that the cerebellum directly influences motor output in mammals. 
History Descriptions Even the earliest anatomists were able to recognize the cerebellum by its distinctive appearance. Aristotle and Herophilus (quoted in Galen) called it the παρεγκεφαλίς (parenkephalis), as opposed to the ἐγκέφαλος (enkephalos) or brain proper. Galen's extensive description is the earliest that survives. He speculated that the cerebellum was the source of motor nerves. Further significant developments did not come until the Renaissance. Vesalius discussed the cerebellum briefly, and the anatomy was described more thoroughly by Thomas Willis in 1664. More anatomical work was done during the 18th century, but it was not until early in the 19th century that the first insights into the function of the cerebellum were obtained. Luigi Rolando in 1809 established the key finding that damage to the cerebellum results in motor disturbances. Jean Pierre Flourens in the first half of the 19th century carried out detailed experimental work, which revealed that animals with cerebellar damage can still move, but with a loss of coordination (strange movements, awkward gait, and muscular weakness), and that recovery after the lesion can be nearly complete unless the lesion is very extensive. By the beginning of the 20th century, it was widely accepted that the primary function of the cerebellum relates to motor control; the first half of the 20th century produced several detailed descriptions of the clinical symptoms associated with cerebellar disease in humans. Etymology The name cerebellum is a diminutive of cerebrum (brain); it can be translated literally as little brain. The Latin name is a direct translation of the Ancient Greek παρεγκεφαλίς (parenkephalis), which was used in the works of Aristotle, the first known writer to describe the structure. No other name is used in the English-language literature, but historically a variety of Greek or Latin-derived names have been used, including cerebrum parvum, encephalion, encranion, cerebrum posterius, and parencephalis.
Biology and health sciences
Nervous system
null
50408
https://en.wikipedia.org/wiki/Computer%20engineering
Computer engineering
Computer engineering (CoE or CpE) is a branch of electrical engineering that integrates several fields of electrical engineering, electronics engineering and computer science required to develop computer hardware and software. Computer engineering is referred to as electrical and computer engineering or computer science and engineering at some universities. Computer engineers require training in electrical engineering, electronic engineering, physics, computer science, hardware-software integration, software design, and software engineering. It uses the techniques and principles of electrical engineering and computer science, and can encompass areas such as electromagnetism, artificial intelligence (AI), robotics, computer networks, computer architecture and operating systems. Computer engineers are involved in many hardware and software aspects of computing, from the design of individual microcontrollers, microprocessors, personal computers, and supercomputers, to circuit design. This field of engineering not only focuses on how computer systems themselves work, but also on how to integrate them into the larger picture. Robotics is one of the applications of computer engineering. Computer engineering usually deals with areas including writing software and firmware for embedded microcontrollers, designing VLSI chips, analog sensors, mixed-signal circuit boards, thermodynamics, and control systems. Computer engineers are also suited for robotics research, which relies heavily on using digital systems to control and monitor electrical systems like motors, communications, and sensors. In many institutions of higher learning, computer engineering students are allowed to choose areas of in-depth study in their junior and senior years because the full breadth of knowledge used in the design and application of computers is beyond the scope of an undergraduate degree. Other institutions may require engineering students to complete one or two years of general engineering before declaring computer engineering as their primary focus. History Computer engineering began in 1939 when John Vincent Atanasoff and Clifford Berry began developing the world's first electronic digital computer through physics, mathematics, and electrical engineering. John Vincent Atanasoff had taught physics and mathematics at Iowa State University, and Clifford Berry was a graduate student there in electrical engineering and physics. Together, they created the Atanasoff-Berry computer, also known as the ABC, which took five years to complete. While the original ABC was dismantled and discarded in the 1940s, a tribute was made to the late inventors: a replica of the ABC was built in 1997 by a team of researchers and engineers, taking four years and $350,000 to complete. The modern personal computer emerged in the 1970s, after several breakthroughs in semiconductor technology. These include the first working transistor by William Shockley, John Bardeen and Walter Brattain at Bell Labs in 1947; silicon dioxide surface passivation by Carl Frosch and Lincoln Derick in 1955; the first planar silicon dioxide transistors by Frosch and Derick in 1957; the planar process by Jean Hoerni; the monolithic integrated circuit chip by Robert Noyce at Fairchild Semiconductor in 1959; the metal–oxide–semiconductor field-effect transistor (MOSFET, or MOS transistor) demonstrated by a team at Bell Labs in 1960; and the single-chip microprocessor (Intel 4004) by Federico Faggin, Marcian Hoff, Masatoshi Shima and Stanley Mazor at Intel in 1971.
History of computer engineering education The first computer engineering degree program in the United States was established in 1971 at Case Western Reserve University in Cleveland, Ohio. , there were 250 ABET-accredited computer engineering programs in the U.S. In Europe, accreditation of computer engineering schools is done by a variety of agencies as part of the EQANIE network. Due to increasing job requirements for engineers who can concurrently design hardware, software, firmware, and manage all forms of computer systems used in industry, some tertiary institutions around the world offer a bachelor's degree generally called computer engineering. Both computer engineering and electronic engineering programs include analog and digital circuit design in their curriculum. As with most engineering disciplines, having a sound knowledge of mathematics and science is necessary for computer engineers. Education Computer engineering is referred to as computer science and engineering at some universities. Most entry-level computer engineering jobs require at least a bachelor's degree in computer engineering, electrical engineering or computer science. Typically one must learn an array of mathematics such as calculus, linear algebra and differential equations, along with computer science. Degrees in electronic or electric engineering also suffice due to the similarity of the two fields. Because hardware engineers commonly work with computer software systems, a strong background in computer programming is necessary. According to BLS, "a computer engineering major is similar to electrical engineering but with some computer science courses added to the curriculum". Some large firms or specialized jobs require a master's degree. It is also important for computer engineers to keep up with rapid advances in technology. Therefore, many continue learning throughout their careers. This can be helpful, especially when it comes to learning new skills or improving existing ones. For example, as the relative cost of fixing a bug increases the further along it is in the software development cycle, there can be greater cost savings attributed to developing and testing for quality code as soon as possible in the process, particularly before release. Professions A person with a profession in computer engineering is called a computer engineer. Applications and practice There are two major focuses in computer engineering: hardware and software. Computer hardware engineering According to the BLS, Job Outlook employment for computer hardware engineers, the expected ten-year growth from 2019 to 2029 for computer hardware engineering was an estimated 2% and a total of 71,100 jobs. ("Slower than average" in their own words when compared to other occupations)". This is a decrease from the 2014 to 2024 BLS computer hardware engineering estimate of 3% and a total of 77,700 jobs; "and is down from 7% for the 2012 to 2022 BLS estimate and is further down from 9% in the BLS 2010 to 2020 estimate." Today, computer hardware is somewhat equal to electronic and computer engineering (ECE) and has been divided into many subcategories, the most significant being embedded system design. Computer software engineering According to the U.S. 
Bureau of Labor Statistics (BLS), "computer applications software engineers and computer systems software engineers are projected to be among the faster than average growing occupations" The expected ten-year growth for computer software engineering was an estimated seventeen percent and there was a total of 1,114,000 jobs that same year. This is down from the 2012 to 2022 BLS estimate of 22% for software developers. And, further down from the 30% 2010 to 2020 BLS estimate. In addition, growing concerns over cybersecurity add up to put computer software engineering high above the average rate of increase for all fields. However, some of the work will be outsourced in foreign countries. Due to this, job growth will not be as fast as during the last decade, as jobs that would have gone to computer software engineers in the United States would instead go to computer software engineers in countries such as India. In addition, the BLS Job Outlook for Computer Programmers, 2014–24 has an −8% (a decline, in their words), then a Job Outlook, 2019-29 of -9% (Decline), then a 10% decline for 2021-2031 and now an 11% decline for 2022-2032 for those who program computers (i.e. embedded systems) who are not computer application developers. Furthermore, women in software fields has been declining over the years even faster than other engineering fields. Computer engineering licensing and practice Computer engineering is generally practiced within larger product development firms, and such practice may not be subject to licensing. However, independent consultants who advertise computer engineering, just like any form of engineering, may be subject to state laws which restrict professional engineer practice to only those who have received the appropriate License. The National Council of Examiners for Engineering and Surveying (NCEES) first offered a Principles and Practice of Engineering Examination for computer engineering in 2003. Specialty areas There are many specialty areas in the field of computer engineering. Processor design Processor design process involves choosing an instruction set and a certain execution paradigm (e.g. VLIW or RISC) and results in a microarchitecture, which might be described in e.g. VHDL or Verilog. CPU design is divided into design of the following components: datapaths (such as ALUs and pipelines), control unit: logic which controls the datapaths, memory components such as register files, caches, clock circuitry such as clock drivers, PLLs, clock distribution networks, pad transceiver circuitry, logic gate cell library which is used to implement the logic. Coding, cryptography, and information protection Computer engineers work in coding, applied cryptography, and information protection to develop new methods for protecting various information, such as digital images and music, fragmentation, copyright infringement and other forms of tampering by, for example, digital watermarking. Communications and wireless networks Those focusing on communications and wireless networks, work advancements in telecommunications systems and networks (especially wireless networks), modulation and error-control coding, and information theory. High-speed network design, interference suppression and modulation, design, and analysis of fault-tolerant system, and storage and transmission schemes are all a part of this specialty. Compilers and operating systems This specialty focuses on compilers and operating systems design and development. 
Engineers in this field develop new operating system architecture, program analysis techniques, and new techniques to assure quality. Examples of work in this field include post-link-time code transformation algorithm development and new operating system development. Computational science and engineering Computational science and engineering is a relatively new discipline. According to the Sloan Career Cornerstone Center, individuals working in this area, "computational methods are applied to formulate and solve complex mathematical problems in engineering and the physical and the social sciences. Examples include aircraft design, the plasma processing of nanometer features on semiconductor wafers, VLSI circuit design, radar detection systems, ion transport through biological channels, and much more". Computer networks, mobile computing, and distributed systems In this specialty, engineers build integrated environments for computing, communications, and information access. Examples include shared-channel wireless networks, adaptive resource management in various systems, and improving the quality of service in mobile and ATM environments. Some other examples include work on wireless network systems and fast Ethernet cluster wired systems. Computer systems: architecture, parallel processing, and dependability Engineers working in computer systems work on research projects that allow for reliable, secure, and high-performance computer systems. Projects such as designing processors for multithreading and parallel processing are included in this field. Other examples of work in this field include the development of new theories, algorithms, and other tools that add performance to computer systems. Computer architecture includes CPU design, cache hierarchy layout, memory organization, and load balancing. Computer vision and robotics In this specialty, computer engineers focus on developing visual sensing technology to sense an environment, representation of an environment, and manipulation of the environment. The gathered three-dimensional information is then implemented to perform a variety of tasks. These include improved human modeling, image communication, and human-computer interfaces, as well as devices such as special-purpose cameras with versatile vision sensors. Embedded systems Individuals working in this area design technology for enhancing the speed, reliability, and performance of systems. Embedded systems are found in many devices from a small FM radio to the space shuttle. According to the Sloan Cornerstone Career Center, ongoing developments in embedded systems include "automated vehicles and equipment to conduct search and rescue, automated transportation systems, and human-robot coordination to repair equipment in space." , computer embedded systems specializations include system-on-chip design, the architecture of edge computing and the Internet of things. Integrated circuits, VLSI design, testing and CAD This specialty of computer engineering requires adequate knowledge of electronics and electrical systems. Engineers working in this area work on enhancing the speed, reliability, and energy efficiency of next-generation very-large-scale integrated (VLSI) circuits and microsystems. An example of this specialty is work done on reducing the power consumption of VLSI algorithms and architecture. 
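The datapath and control-unit split described under Processor design above can be illustrated with a toy simulator. The sketch below is only an illustration: the three-instruction set, the register count, and all names are invented here, and real processor design is carried out in hardware description languages such as VHDL or Verilog rather than Python.

```python
# Illustrative toy machine: a register file and ALU form the datapath, while a
# simple decode loop plays the role of the control unit.

def alu(op, a, b):
    """Datapath element: a two-function arithmetic logic unit."""
    return (a + b) if op == "ADD" else (a - b)

def run(program):
    regs = [0] * 4                      # tiny register file
    for instr in program:               # control unit: decode and steer the datapath
        op, rd, *rest = instr
        if op == "LOADI":               # load an immediate value into a register
            regs[rd] = rest[0]
        else:                           # ADD/SUB route two registers through the ALU
            rs1, rs2 = rest
            regs[rd] = alu(op, regs[rs1], regs[rs2])
    return regs

# r0 = 5; r1 = 3; r2 = r0 + r1; r3 = r2 - r1
print(run([("LOADI", 0, 5), ("LOADI", 1, 3), ("ADD", 2, 0, 1), ("SUB", 3, 2, 1)]))
# -> [5, 3, 8, 5]
```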
Signal, image and speech processing Computer engineers in this area develop improvements in human–computer interaction, including speech recognition and synthesis, medical and scientific imaging, or communications systems. Other work in this area includes computer vision development such as recognition of human facial features. Quantum computing This area integrates the quantum behaviour of small particles such as superposition, interference and entanglement, with classical computers to solve complex problems and formulate algorithms much more efficiently. Individuals focus on fields like Quantum cryptography, physical simulations and quantum algorithms. Benefits of Engineering in Society An accessible avenue for obtaining information and opportunities in technology, especially for young students, is through digital platforms, enabling learning, exploration, and potential income generation at minimal cost and in regional languages, none of which would be possible without engineers. Computer engineering is important in the changes involved in industry 4.0, with engineers responsible for designing and optimizing the technology that surrounds our lives, from big data to AI. Their work not only facilitates global connections and knowledge access, but also plays a pivotal role in shaping our future, as technology continues to evolve rapidly, leading to a growing demand for skilled computer engineers. Engineering contributes to improving society by creating devices and structures impacting various aspects of our lives, from technology to infrastructure. Engineers also address challenges such as environmental protection and sustainable development, while developing medical treatments. As of 2016, the median annual wage across all BLS engineering categories was over $91,000. Some were much higher, with engineers working for petroleum companies at the top (over $128,000). Other top jobs include: Computer Hardware Engineer – $115,080, Aerospace Engineer – $109,650, Nuclear Engineer – $102,220.
Technology
Disciplines
null
50416
https://en.wikipedia.org/wiki/Differential%20calculus
Differential calculus
In mathematics, differential calculus is a subfield of calculus that studies the rates at which quantities change. It is one of the two traditional divisions of calculus, the other being integral calculus—the study of the area beneath a curve. The primary objects of study in differential calculus are the derivative of a function, related notions such as the differential, and their applications. The derivative of a function at a chosen input value describes the rate of change of the function near that input value. The process of finding a derivative is called differentiation. Geometrically, the derivative at a point is the slope of the tangent line to the graph of the function at that point, provided that the derivative exists and is defined at that point. For a real-valued function of a single real variable, the derivative of a function at a point generally determines the best linear approximation to the function at that point. Differential calculus and integral calculus are connected by the fundamental theorem of calculus. This states that differentiation is the reverse process to integration. Differentiation has applications in nearly all quantitative disciplines. In physics, the derivative of the displacement of a moving body with respect to time is the velocity of the body, and the derivative of the velocity with respect to time is acceleration. The derivative of the momentum of a body with respect to time equals the force applied to the body; rearranging this derivative statement leads to the famous equation associated with Newton's second law of motion. The reaction rate of a chemical reaction is a derivative. In operations research, derivatives determine the most efficient ways to transport materials and design factories. Derivatives are frequently used to find the maxima and minima of a function. Equations involving derivatives are called differential equations and are fundamental in describing natural phenomena. Derivatives and their generalizations appear in many fields of mathematics, such as complex analysis, functional analysis, differential geometry, measure theory, and abstract algebra. Derivative The derivative of at the point is the slope of the tangent to . In order to gain an intuition for this, one must first be familiar with finding the slope of a linear equation, written in the form . The slope of an equation is its steepness. It can be found by picking any two points and dividing the change in by the change in , meaning that . For, the graph of has a slope of , as shown in the diagram below: For brevity, is often written as , with being the Greek letter delta, meaning 'change in'. The slope of a linear equation is constant, meaning that the steepness is the same everywhere. However, many graphs such as vary in their steepness. This means that you can no longer pick any two arbitrary points and compute the slope. Instead, the slope of the graph can be computed by considering the tangent line—a line that 'just touches' a particular point. The slope of a curve at a particular point is equal to the slope of the tangent to that point. For example, has a slope of at because the slope of the tangent line to that point is equal to : The derivative of a function is then simply the slope of this tangent line. Even though the tangent line only touches a single point at the point of tangency, it can be approximated by a line that goes through two points. This is known as a secant line. 
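For concreteness, the slope and secant constructions described here can be written out in standard notation (the particular symbols are assumed for illustration):

\[
m = \frac{\Delta y}{\Delta x} \qquad \text{for the line } y = mx + b,
\]
\[
\text{slope of the secant through } (x,\, f(x)) \text{ and } (x+h,\, f(x+h)) \;=\; \frac{f(x+h) - f(x)}{h},
\]

and letting the two points approach each other gives the derivative

\[
f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h},
\]

which for $f(x) = x^2$ evaluates to $f'(x) = 2x$.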
If the two points that the secant line goes through are close together, then the secant line closely resembles the tangent line, and, as a result, its slope is also very similar: The advantage of using a secant line is that its slope can be calculated directly. Consider the two points on the graph and , where is a small number. As before, the slope of the line passing through these two points can be calculated with the formula . This gives As gets closer and closer to , the slope of the secant line gets closer and closer to the slope of the tangent line. This is formally written as The above expression means 'as gets closer and closer to 0, the slope of the secant line gets closer and closer to a certain value'. The value that is being approached is the derivative of ; this can be written as . If , the derivative can also be written as , with representing an infinitesimal change. For example, represents an infinitesimal change in x. In summary, if , then the derivative of is provided such a limit exists. We have thus succeeded in properly defining the derivative of a function, meaning that the 'slope of the tangent line' now has a precise mathematical meaning. Differentiating a function using the above definition is known as differentiation from first principles. Here is a proof, using differentiation from first principles, that the derivative of is : As approaches , approaches . Therefore, . This proof can be generalised to show that if and are constants. This is known as the power rule. For example, . However, many other functions cannot be differentiated as easily as polynomial functions, meaning that sometimes further techniques are needed to find the derivative of a function. These techniques include the chain rule, product rule, and quotient rule. Other functions cannot be differentiated at all, giving rise to the concept of differentiability. A closely related concept to the derivative of a function is its differential. When and are real variables, the derivative of at is the slope of the tangent line to the graph of at . Because the source and target of are one-dimensional, the derivative of is a real number. If and are vectors, then the best linear approximation to the graph of depends on how changes in several directions at once. Taking the best linear approximation in a single direction determines a partial derivative, which is usually denoted . The linearization of in all directions at once is called the total derivative. History of differentiation The concept of a derivative in the sense of a tangent line is a very old one, familiar to ancient Greek mathematicians such as Euclid (c. 300 BC), Archimedes (c. 287–212 BC), and Apollonius of Perga (c. 262–190 BC). Archimedes also made use of indivisibles, although these were primarily used to study areas and volumes rather than derivatives and tangents (see The Method of Mechanical Theorems). The use of infinitesimals to compute rates of change was developed significantly by Bhāskara II (1114–1185); indeed, it has been argued that many of the key notions of differential calculus can be found in his work, such as "Rolle's theorem". The mathematician, Sharaf al-Dīn al-Tūsī (1135–1213), in his Treatise on Equations, established conditions for some cubic equations to have solutions, by finding the maxima of appropriate cubic polynomials. 
He obtained, for example, that the maximum (for positive ) of the cubic occurs when , and concluded therefrom that the equation has exactly one positive solution when , and two positive solutions whenever . The historian of science, Roshdi Rashed, has argued that al-Tūsī must have used the derivative of the cubic to obtain this result. Rashed's conclusion has been contested by other scholars, however, who argue that he could have obtained the result by other methods which do not require the derivative of the function to be known. The modern development of calculus is usually credited to Isaac Newton (1643–1727) and Gottfried Wilhelm Leibniz (1646–1716), who provided independent and unified approaches to differentiation and derivatives. The key insight, however, that earned them this credit, was the fundamental theorem of calculus relating differentiation and integration: this rendered obsolete most previous methods for computing areas and volumes. For their ideas on derivatives, both Newton and Leibniz built on significant earlier work by mathematicians such as Pierre de Fermat (1607-1665), Isaac Barrow (1630–1677), René Descartes (1596–1650), Christiaan Huygens (1629–1695), Blaise Pascal (1623–1662) and John Wallis (1616–1703). Regarding Fermat's influence, Newton once wrote in a letter that "I had the hint of this method [of fluxions] from Fermat's way of drawing tangents, and by applying it to abstract equations, directly and invertedly, I made it general." Isaac Barrow is generally given credit for the early development of the derivative. Nevertheless, Newton and Leibniz remain key figures in the history of differentiation, not least because Newton was the first to apply differentiation to theoretical physics, while Leibniz systematically developed much of the notation still used today. Since the 17th century many mathematicians have contributed to the theory of differentiation. In the 19th century, calculus was put on a much more rigorous footing by mathematicians such as Augustin Louis Cauchy (1789–1857), Bernhard Riemann (1826–1866), and Karl Weierstrass (1815–1897). It was also during this period that the differentiation was generalized to Euclidean space and the complex plane. The 20th century brought two major steps towards our present understanding and practice of derivation : Lebesgue integration, besides extending integral calculus to many more functions, clarified the relation between derivation and integration with the notion of absolute continuity. Later the theory of distributions (after Laurent Schwartz) extended derivation to generalized functions (e.g., the Dirac delta function previously introduced in Quantum Mechanics) and became fundamental to nowadays applied analysis especially by the use of weak solutions to partial differential equations. Applications of derivatives Optimization If is a differentiable function on (or an open interval) and is a local maximum or a local minimum of , then the derivative of at is zero. Points where are called critical points or stationary points (and the value of at is called a critical value). If is not assumed to be everywhere differentiable, then points at which it fails to be differentiable are also designated critical points. If is twice differentiable, then conversely, a critical point of can be analysed by considering the second derivative of at : if it is positive, is a local minimum; if it is negative, is a local maximum; if it is zero, then could be a local minimum, a local maximum, or neither. 
(For example, has a critical point at , but it has neither a maximum nor a minimum there, whereas has a critical point at and a minimum and a maximum, respectively, there.) This is called the second derivative test. An alternative approach, called the first derivative test, involves considering the sign of the on each side of the critical point. Taking derivatives and solving for critical points is therefore often a simple way to find local minima or maxima, which can be useful in optimization. By the extreme value theorem, a continuous function on a closed interval must attain its minimum and maximum values at least once. If the function is differentiable, the minima and maxima can only occur at critical points or endpoints. This also has applications in graph sketching: once the local minima and maxima of a differentiable function have been found, a rough plot of the graph can be obtained from the observation that it will be either increasing or decreasing between critical points. In higher dimensions, a critical point of a scalar valued function is a point at which the gradient is zero. The second derivative test can still be used to analyse critical points by considering the eigenvalues of the Hessian matrix of second partial derivatives of the function at the critical point. If all of the eigenvalues are positive, then the point is a local minimum; if all are negative, it is a local maximum. If there are some positive and some negative eigenvalues, then the critical point is called a "saddle point", and if none of these cases hold (i.e., some of the eigenvalues are zero) then the test is considered to be inconclusive. Calculus of variations One example of an optimization problem is: Find the shortest curve between two points on a surface, assuming that the curve must also lie on the surface. If the surface is a plane, then the shortest curve is a line. But if the surface is, for example, egg-shaped, then the shortest path is not immediately clear. These paths are called geodesics, and one of the most fundamental problems in the calculus of variations is finding geodesics. Another example is: Find the smallest area surface filling in a closed curve in space. This surface is called a minimal surface and it, too, can be found using the calculus of variations. Physics Calculus is of vital importance in physics: many physical processes are described by equations involving derivatives, called differential equations. Physics is particularly concerned with the way quantities change and develop over time, and the concept of the "time derivative" — the rate of change over time — is essential for the precise definition of several important concepts. In particular, the time derivatives of an object's position are significant in Newtonian physics: velocity is the derivative (with respect to time) of an object's displacement (distance from the original position) acceleration is the derivative (with respect to time) of an object's velocity, that is, the second derivative (with respect to time) of an object's position. For example, if an object's position on a line is given by then the object's velocity is and the object's acceleration is which is constant. Differential equations A differential equation is a relation between a collection of functions and their derivatives. An ordinary differential equation is a differential equation that relates functions of one variable to their derivatives with respect to that variable. 
A partial differential equation is a differential equation that relates functions of more than one variable to their partial derivatives. Differential equations arise naturally in the physical sciences, in mathematical modelling, and within mathematics itself. For example, Newton's second law, which describes the relationship between acceleration and force, can be stated as the ordinary differential equation The heat equation in one space variable, which describes how heat diffuses through a straight rod, is the partial differential equation Here is the temperature of the rod at position and time and is a constant that depends on how fast heat diffuses through the rod. Mean value theorem The mean value theorem gives a relationship between values of the derivative and values of the original function. If is a real-valued function and and are numbers with , then the mean value theorem says that under mild hypotheses, the slope between the two points and is equal to the slope of the tangent line to at some point between and . In other words, In practice, what the mean value theorem does is control a function in terms of its derivative. For instance, suppose that has derivative equal to zero at each point. This means that its tangent line is horizontal at every point, so the function should also be horizontal. The mean value theorem proves that this must be true: The slope between any two points on the graph of must equal the slope of one of the tangent lines of . All of those slopes are zero, so any line from one point on the graph to another point will also have slope zero. But that says that the function does not move up or down, so it must be a horizontal line. More complicated conditions on the derivative lead to less precise but still highly useful information about the original function. Taylor polynomials and Taylor series The derivative gives the best possible linear approximation of a function at a given point, but this can be very different from the original function. One way of improving the approximation is to take a quadratic approximation. That is to say, the linearization of a real-valued function at the point is a linear polynomial , and it may be possible to get a better approximation by considering a quadratic polynomial . Still better might be a cubic polynomial , and this idea can be extended to arbitrarily high degree polynomials. For each one of these polynomials, there should be a best possible choice of coefficients , , , and that makes the approximation as good as possible. In the neighbourhood of , for the best possible choice is always , and for the best possible choice is always . For , , and higher-degree coefficients, these coefficients are determined by higher derivatives of . should always be , and should always be . Using these coefficients gives the Taylor polynomial of . The Taylor polynomial of degree is the polynomial of degree which best approximates , and its coefficients can be found by a generalization of the above formulas. Taylor's theorem gives a precise bound on how good the approximation is. If is a polynomial of degree less than or equal to , then the Taylor polynomial of degree equals . The limit of the Taylor polynomials is an infinite series called the Taylor series. The Taylor series is frequently a very good approximation to the original function. Functions which are equal to their Taylor series are called analytic functions. 
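In the standard notation (the letters are assumed here for illustration), writing the cubic approximation near $x_0$ as $a + b(x - x_0) + c(x - x_0)^2 + d(x - x_0)^3$, the best choices referred to above are $a = f(x_0)$, $b = f'(x_0)$, $c = f''(x_0)/2$ and $d = f'''(x_0)/6$, and the degree-$n$ Taylor polynomial is

\[
T_n(x) = f(x_0) + f'(x_0)(x - x_0) + \frac{f''(x_0)}{2!}(x - x_0)^2 + \cdots + \frac{f^{(n)}(x_0)}{n!}(x - x_0)^n .
\]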
It is impossible for functions with discontinuities or sharp corners to be analytic; moreover, there exist smooth functions which are also not analytic. Implicit function theorem Some natural geometric shapes, such as circles, cannot be drawn as the graph of a function. For instance, if , then the circle is the set of all pairs such that . This set is called the zero set of , and is not the same as the graph of , which is a paraboloid. The implicit function theorem converts relations such as into functions. It states that if is continuously differentiable, then around most points, the zero set of looks like graphs of functions pasted together. The points where this is not true are determined by a condition on the derivative of . The circle, for instance, can be pasted together from the graphs of the two functions . In a neighborhood of every point on the circle except and , one of these two functions has a graph that looks like the circle. (These two functions also happen to meet and , but this is not guaranteed by the implicit function theorem.) The implicit function theorem is closely related to the inverse function theorem, which states when a function looks like graphs of invertible functions pasted together.
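A standard concrete form of the circle example discussed above, with the usual choice of function assumed here, is

\[
f(x, y) = x^2 + y^2 - 1, \qquad \{(x, y) : f(x, y) = 0\} \text{ is the unit circle},
\]

and away from the points $(-1, 0)$ and $(1, 0)$ this zero set is locally the graph of one of the two functions

\[
y = \sqrt{1 - x^2} \qquad \text{or} \qquad y = -\sqrt{1 - x^2}.
\]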
Mathematics
Calculus and analysis
50425
https://en.wikipedia.org/wiki/Quantum%20Hall%20effect
Quantum Hall effect
The quantum Hall effect (or integer quantum Hall effect) is a quantized version of the Hall effect which is observed in two-dimensional electron systems subjected to low temperatures and strong magnetic fields, in which the Hall resistance R_xy exhibits steps that take on the quantized values R_xy = V_Hall/I_channel = h/(e²ν), where V_Hall is the Hall voltage, I_channel is the channel current, e is the elementary charge and h is the Planck constant. The divisor ν can take on either integer (ν = 1, 2, 3, …) or fractional (ν = 1/3, 2/5, 3/7, …) values. Here, ν is roughly but not exactly equal to the filling factor of Landau levels. The quantum Hall effect is referred to as the integer or fractional quantum Hall effect depending on whether ν is an integer or a fraction, respectively.

The striking feature of the integer quantum Hall effect is the persistence of the quantization (i.e. the Hall plateau) as the electron density is varied. Since the electron density remains constant when the Fermi level is in a clean spectral gap, this situation corresponds to one where the Fermi level is an energy with a finite density of states, though these states are localized (see Anderson localization).

The fractional quantum Hall effect is more complicated and is still considered an open research problem. Its existence relies fundamentally on electron–electron interactions. In 1988, it was proposed that there was a quantum Hall effect without Landau levels. This quantum Hall effect is referred to as the quantum anomalous Hall (QAH) effect. There is also a newer concept, the quantum spin Hall effect, an analogue of the quantum Hall effect in which spin currents flow instead of charge currents.

Applications

Electrical resistance standards

The quantization of the Hall conductance (σ_xy = νe²/h) has the important property of being exceedingly precise. Actual measurements of the Hall conductance have been found to be integer or fractional multiples of e²/h to better than one part in a billion. It has allowed for the definition of a new practical standard for electrical resistance, based on the resistance quantum given by the von Klitzing constant R_K = h/e². This is named after Klaus von Klitzing, the discoverer of exact quantization. The quantum Hall effect also provides an extremely precise independent determination of the fine-structure constant, a quantity of fundamental importance in quantum electrodynamics.

In 1990, a fixed conventional value R_K-90 = 25812.807 Ω was defined for use in resistance calibrations worldwide. On 16 November 2018, the 26th meeting of the General Conference on Weights and Measures decided to fix exact values of h (the Planck constant) and e (the elementary charge), superseding the 1990 conventional value with an exact permanent value (an intrinsic standard) R_K = h/e² = 25812.80745… Ω; a numerical check using the exact SI values is sketched below.

Research status

The fractional quantum Hall effect is considered part of exact quantization. Exact quantization in full generality is not completely understood, but it has been explained as a very subtle manifestation of the principle of gauge invariance in combination with another symmetry (see Anomalies). The integer quantum Hall effect is instead considered a solved research problem, understood within the scope of the TKNN formula and Chern–Simons Lagrangians. The fractional quantum Hall effect is still considered an open research problem. It can also be understood as an integer quantum Hall effect, although not of electrons but of charge–flux composites known as composite fermions. Other models to explain the fractional quantum Hall effect also exist.
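To connect the exact SI constants fixed in 2018 with the von Klitzing constant and the plateau resistances quoted in the resistance-standards paragraph above, here is a minimal numerical sketch (an illustration, not part of the original article).

h = 6.62607015e-34      # Planck constant, J*s (exact since the SI redefinition)
e = 1.602176634e-19     # elementary charge, C (exact)

R_K = h / e**2          # von Klitzing constant, ohms
print(R_K)              # approximately 25812.80745 ohms

# Hall plateau resistances R_xy = h / (e^2 * nu) for a few integer and fractional fillings
for nu in (1, 2, 3, 4, 1/3):
    print(nu, R_K / nu)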
Currently it is considered an open research problem because no single, confirmed and agreed list of fractional quantum numbers exists, nor is there a single agreed model that explains all of them, although there are such claims within the scope of composite fermions and non-Abelian Chern–Simons Lagrangians.

History

In 1957, Carl Frosch and Lincoln Derick were able to manufacture the first silicon dioxide field effect transistors at Bell Labs, the first transistors in which drain and source were adjacent at the surface. Subsequently, a team demonstrated a working MOSFET at Bell Labs in 1960. This enabled physicists to study electron behavior in a nearly ideal two-dimensional gas. In a MOSFET, conduction electrons travel in a thin surface layer, and a "gate" voltage controls the number of charge carriers in this layer. This allows researchers to explore quantum effects by operating high-purity MOSFETs at liquid helium temperatures.

The integer quantization of the Hall conductance was originally predicted by University of Tokyo researchers Tsuneya Ando, Yukio Matsumoto and Yasutada Uemura in 1975, on the basis of an approximate calculation which they themselves did not believe to be true. In 1978, the Gakushuin University researchers Jun-ichi Wakabayashi and Shinji Kawaji subsequently observed the effect in experiments carried out on the inversion layer of MOSFETs. In 1980, Klaus von Klitzing, working at the high magnetic field laboratory in Grenoble with silicon-based MOSFET samples developed by Michael Pepper and Gerhard Dorda, made the unexpected discovery that the Hall resistance was exactly quantized. For this finding, von Klitzing was awarded the 1985 Nobel Prize in Physics. A link between exact quantization and gauge invariance was subsequently proposed by Robert Laughlin, who connected the quantized conductivity to the quantized charge transport in a Thouless charge pump. Most integer quantum Hall experiments are now performed on gallium arsenide heterostructures, although many other semiconductor materials can be used. In 2007, the integer quantum Hall effect was reported in graphene at temperatures as high as room temperature, and in the magnesium zinc oxide heterostructure ZnO–MgₓZn₁₋ₓO.

Integer quantum Hall effect

Landau levels

In two dimensions, when classical electrons are subjected to a magnetic field they follow circular cyclotron orbits. When the system is treated quantum mechanically, these orbits are quantized. To determine the values of the energy levels the Schrödinger equation must be solved. Since the system is subjected to a magnetic field, the field has to be introduced as an electromagnetic vector potential in the Schrödinger equation. The system considered is an electron gas that is free to move in the x and y directions, but is tightly confined in the z direction. Then a magnetic field B is applied along the z direction and, in the Landau gauge, the electromagnetic vector potential is A = (0, Bx, 0) and the scalar potential is φ = 0. Thus the Schrödinger equation for a particle of charge q and effective mass m* in this system is

(1/(2m*)) (p − qA)² ψ + V(z) ψ = ε ψ,

where p is the canonical momentum, which is replaced by the operator −iħ∇, and ε is the total energy. To solve this equation it is possible to separate it into two equations, since the magnetic field affects only the motion along the x and y axes. The total energy then becomes the sum of two contributions, ε = ε_z + ε_xy. The corresponding equation in the z axis is

[−(ħ²/(2m*)) d²/dz² + V(z)] χ(z) = ε_z χ(z).

To simplify things, the confinement is treated as an infinite well. The solutions for the z direction are then the energies ε_z = n_z²π²ħ²/(2m*L²), with n_z = 1, 2, 3, … and L the width of the well, and the wavefunctions are sinusoidal.
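As a small numerical sketch (an illustration, not from the article), the infinite-well energies ε_z = n_z²π²ħ²/(2m*L²) show how strongly the z motion is frozen out in a thin layer; the well width and the GaAs effective mass used here are example values.

import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
e = 1.602176634e-19      # elementary charge, C (used to convert J to eV)
m_e = 9.1093837015e-31   # electron mass, kg

m_star = 0.067 * m_e     # GaAs conduction-band effective mass (example value)
L = 10e-9                # well width of 10 nm (example value)

for n_z in (1, 2, 3):
    eps_z = (n_z * math.pi * hbar / L) ** 2 / (2 * m_star)
    print(n_z, eps_z / e * 1e3, "meV")
# The subband spacing is far larger than the thermal energy at liquid-helium
# temperatures, so only the lowest subband is occupied and the electron gas
# is effectively two-dimensional.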
For the x and y directions, the solution of the Schrödinger equation can be chosen to be the product of a plane wave in the y direction with some unknown function of x, i.e., ψ(x, y) = e^(iky) u(x). This is because the vector potential does not depend on y and the momentum operator p_y therefore commutes with the Hamiltonian. By substituting this Ansatz into the Schrödinger equation one gets the one-dimensional harmonic oscillator equation centered at x_k = k l_B² = ħk/(eB),

[−(ħ²/(2m*)) d²/dx² + (1/2) m* ω_c² (x − x_k)²] u(x) = ε_xy u(x),

where ω_c = eB/m* is defined as the cyclotron frequency and l_B = √(ħ/(eB)) the magnetic length. The energies are ε_n = ħω_c (n + 1/2), with n = 0, 1, 2, …, and the wavefunctions for the motion in the xy plane are given by the product of a plane wave in y and Hermite polynomials attenuated by a Gaussian function in x, which are the wavefunctions of a harmonic oscillator. From the expression for the Landau levels one notices that the energy depends only on n, not on k. States with the same n but different k are degenerate.

Density of states

At zero field, the density of states per unit surface for the two-dimensional electron gas, taking into account the degeneracy due to spin, is independent of the energy: n_2D = m*/(πħ²). As the field is turned on, the density of states collapses from this constant to a Dirac comb, a series of Dirac δ functions, corresponding to the Landau levels separated by ħω_c. At finite temperature, however, the Landau levels acquire a width Γ = ħ/τ_i, with τ_i being the time between scattering events. Commonly it is assumed that the precise shape of the Landau levels is a Gaussian or Lorentzian profile.

Another feature is that the wave functions form parallel strips in the y direction, spaced equally along the x axis along the lines of the vector potential A. Since there is nothing special about any direction in the xy plane, if the vector potential had been chosen differently one should find circular symmetry. Given a sample of dimensions L_x × L_y and applying periodic boundary conditions in the y direction, k = 2πj/L_y with j an integer, each parabolic potential is placed at a value x_k = ħk/(eB). The number of states for each Landau level and each value of k can be calculated from the ratio between the total magnetic flux that passes through the sample and the magnetic flux corresponding to a single state. Thus the density of states per unit surface of each Landau level is n_B = eB/h. Note the dependence of the density of states on the magnetic field: the larger the magnetic field is, the more states fit in each Landau level. As a consequence, there is more confinement in the system, since fewer energy levels are occupied. Rewriting the last expression as n_B = (ħω_c/2)(m*/(πħ²)), it is clear that each Landau level contains as many states as the zero-field 2DEG does in an energy interval of ħω_c/2.

Given the fact that electrons are fermions, each state available in the Landau levels accommodates two electrons, one for each value of the spin s = ±1/2. However, if a large magnetic field is applied, the energies split into two levels due to the magnetic moment associated with the alignment of the spin with the magnetic field. The difference in the energies is ΔE = ±(1/2) g μ_B B, with g a factor which depends on the material (g ≈ 2 for free electrons) and μ_B the Bohr magneton. The sign + is taken when the spin is parallel to the field and − when it is antiparallel. This fact, called spin splitting, implies that the density of states for each level is reduced by a half. Note that ΔE is proportional to the magnetic field, so the larger the magnetic field is, the more relevant the splitting becomes. In order to get the number of occupied Landau levels, one defines the so-called filling factor as the ratio between the density of the 2DEG and the density of states in the Landau levels, ν = n_e/n_B = n_e h/(eB), where n_e is the electron density per unit surface. In general the filling factor ν is not an integer.
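A small numerical sketch (an illustration, not from the article) of the Landau-level degeneracy n_B = eB/h and the filling factor ν = n_e h/(eB); the 2DEG electron density used here is an assumed, typical example value.

e = 1.602176634e-19    # elementary charge, C
h = 6.62607015e-34     # Planck constant, J*s

n_e = 3.0e15           # example 2DEG electron density, per m^2 (i.e. 3e11 cm^-2)

for B in (2.0, 4.0, 6.0, 12.0):    # magnetic field in tesla
    n_B = e * B / h                # states per unit area in one Landau level
    nu = n_e / n_B                 # filling factor
    print(B, n_B, round(nu, 2))
# As B grows, each Landau level holds more states and the filling factor drops,
# eventually approaching the magnetic quantum limit discussed next.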
It happens to be an integer when there is an exact number of filled Landau levels. Instead, it becomes a non-integer when the top level is not fully occupied. In actual experiments, one either varies the magnetic field and fixes the electron density (and not the Fermi energy!) or varies the electron density and fixes the magnetic field. Both cases correspond to a continuous variation of the filling factor, and one cannot expect ν to be an integer. Since n_B is proportional to B, by increasing the magnetic field the Landau levels move up in energy and the number of states in each level grows, so fewer electrons occupy the top level until it becomes empty. If the magnetic field keeps increasing, eventually all electrons will be in the lowest Landau level (ν < 1), and this is called the magnetic quantum limit.

Longitudinal resistivity

It is possible to relate the filling factor to the resistivity and hence to the conductivity of the system. When ν is an integer, the Fermi energy lies in between Landau levels, where there are no states available for carriers, so the conductivity becomes zero (it is assumed that the magnetic field is big enough that there is no overlap between Landau levels; otherwise there would be a few electrons available and the conductivity would only be approximately zero). Consequently, the resistivity becomes zero too (at very high magnetic fields it is proven that longitudinal conductivity and resistivity are proportional). Inverting the resistivity tensor, σ_xx = ρ_xx/(ρ_xx² + ρ_xy²), one finds that if the longitudinal resistivity is zero and the transverse one is finite, then σ_xx = 0. Thus both the longitudinal conductivity and resistivity become zero.

Instead, when ν is a half-integer, the Fermi energy is located at the peak of the density distribution of some Landau level. This means that the conductivity will have a maximum. This distribution of minima and maxima corresponds to "quantum oscillations" called Shubnikov–de Haas oscillations, which become more relevant as the magnetic field increases. Obviously, the height of the peaks is larger as the magnetic field increases, since the density of states increases with the field, so there are more carriers which contribute to the resistivity. It is interesting to notice that if the magnetic field is very small, the longitudinal resistivity is a constant, which means that the classical result is recovered.

Transverse resistivity

From the classical relation for the transverse resistivity, ρ_xy = B/(e n_e), and substituting n_e = ν eB/h, one finds the quantization of the transverse resistivity and conductivity: ρ_xy = h/(νe²) and σ_xy = νe²/h. One concludes, then, that the transverse resistivity is a multiple of the inverse of the so-called conductance quantum e²/h if the filling factor is an integer. In experiments, however, plateaus are observed over whole ranges of filling values ν, which indicates that there are in fact electron states between the Landau levels. These states are localized in, for example, impurities of the material, where they are trapped in orbits, so they cannot contribute to the conductivity. That is why the resistivity remains constant in between Landau levels. Again, if the magnetic field decreases, one recovers the classical result in which the resistivity is proportional to the magnetic field.

Photonic quantum Hall effect

The quantum Hall effect, in addition to being observed in two-dimensional electron systems, can be observed in photons. Photons do not possess inherent electric charge, but through the manipulation of discrete optical resonators and coupling phases or on-site phases, an artificial magnetic field can be created.
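As an illustrative numerical sketch (not part of the article), the quantized plateau values h/(νe²) can be compared with the classical Hall resistivity B/(n_e e) at the fields B_ν = n_e h/(νe) where exactly ν Landau levels are filled; the electron density is an example value.

e = 1.602176634e-19    # elementary charge, C
h = 6.62607015e-34     # Planck constant, J*s
n_e = 3.0e15           # example 2DEG electron density, per m^2

for nu in (1, 2, 3, 4):
    B_nu = n_e * h / (nu * e)          # field at which exactly nu levels are filled
    rho_classical = B_nu / (n_e * e)   # classical Hall resistivity B/(n_e*e)
    rho_plateau = h / (nu * e**2)      # quantized plateau value h/(nu*e^2)
    print(nu, round(B_nu, 2), rho_classical, rho_plateau)
# At these fields the classical straight line and the plateau values coincide;
# in between, the measured rho_xy stays pinned at the nearest plateau value.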
This process can be expressed through a metaphor of photons bouncing between multiple mirrors. By shooting the light across multiple mirrors, the photons are routed and gain additional phase proportional to their angular momentum. This creates an effect as if they were in a magnetic field.

Topological classification

The integers that appear in the Hall effect are examples of topological quantum numbers. They are known in mathematics as the first Chern numbers and are closely related to Berry's phase. A striking model of much interest in this context is the Azbel–Harper–Hofstadter model, whose quantum phase diagram is the Hofstadter butterfly shown in the figure. The vertical axis is the strength of the magnetic field and the horizontal axis is the chemical potential, which fixes the electron density. The colors represent the integer Hall conductances: warm colors represent positive integers and cold colors negative integers. Note, however, that the density of states in these regions of quantized Hall conductance is zero; hence, they cannot produce the plateaus observed in the experiments. The phase diagram is fractal and has structure on all scales; in the figure there is an obvious self-similarity. In the presence of disorder, which is the source of the plateaus seen in the experiments, this diagram is very different and the fractal structure is mostly washed away. Also, the experiments control the filling factor and not the Fermi energy. If this diagram is plotted as a function of filling factor, all the features are completely washed away; hence, it has very little to do with the actual Hall physics.

Concerning physical mechanisms, impurities and/or particular states (e.g., edge currents) are important for both the integer and fractional effects. In addition, the Coulomb interaction is also essential in the fractional quantum Hall effect. The observed strong similarity between integer and fractional quantum Hall effects is explained by the tendency of electrons to form bound states with an even number of magnetic flux quanta, called composite fermions.

Bohr atom interpretation of the von Klitzing constant

The value of the von Klitzing constant may be obtained already on the level of a single atom within the Bohr model, by looking at it as a single-electron Hall effect. While during the cyclotron motion on a circular orbit the centrifugal force is balanced by the Lorentz force responsible for the transverse induced voltage and the Hall effect, one may look at the Coulomb potential difference in the Bohr atom as the induced single-atom Hall voltage and the periodic electron motion on a circle as a Hall current. Defining the single-atom Hall current as the rate at which a single electron charge makes Kepler revolutions with angular frequency ω, I = eω/(2π), and the induced Hall voltage as the difference between the hydrogen nucleus Coulomb potential at the electron orbital point and at infinity, U = e/(4πε₀r), one obtains the quantization of the defined Bohr orbit Hall resistance in steps of the von Klitzing constant as R(n) = U/I = n·h/e², which for the Bohr atom is linear, but not inverse, in the integer n.

Relativistic analogs

Relativistic examples of the integer quantum Hall effect and quantum spin Hall effect arise in the context of lattice gauge theory.
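As a minimal numerical sketch (an illustration, not part of the article), the Bohr-orbit Hall resistance R(n) = U/I can be checked against n·h/e² using textbook Bohr-model formulas; the variable names are illustrative.

import math

e = 1.602176634e-19      # elementary charge, C
h = 6.62607015e-34       # Planck constant, J*s
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
m_e = 9.1093837015e-31   # electron mass, kg
hbar = h / (2 * math.pi)
a0 = 4 * math.pi * eps0 * hbar**2 / (m_e * e**2)   # Bohr radius

for n in (1, 2, 3):
    r = n**2 * a0                                  # Bohr orbit radius
    v = e**2 / (4 * math.pi * eps0 * hbar * n)     # orbital speed in the Bohr model
    I = e * v / (2 * math.pi * r)                  # single-atom "Hall current"
    U = e / (4 * math.pi * eps0 * r)               # Coulomb potential at the orbit
    print(n, U / I, n * h / e**2)                  # the two columns agree
# The resistance grows linearly with n, in steps of the von Klitzing constant h/e^2.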
Physical sciences
Electrodynamics
Physics