Underwater tunnel
https://en.wikipedia.org/wiki/Underwater%20tunnel
An underwater tunnel is a tunnel that is partly or wholly constructed under the sea or a river. Such tunnels are often used where building a bridge or operating a ferry link is unviable, or to provide competition or relief for existing bridges or ferry links. While short tunnels are often road tunnels which may admit motorized traffic, unmotorized traffic or both, concerns with ventilation lead to the longest tunnels (such as the Channel Tunnel or the Seikan Tunnel) being electrified rail tunnels.
Types of tunnel
Various methods are used to construct underwater tunnels, including an immersed tube and a submerged floating tunnel. The immersed tube method involves steel tube segments that are positioned in a trench in the sea floor and joined together. The trench is then covered and the water pumped from the tunnel. Submerged floating tunnels use the law of buoyancy to remain submerged, with the tunnel attached to the sea bed by columns or tethers, or hung from pontoons on the surface.
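As a rough illustration of the buoyancy balance behind a submerged floating tunnel (a simplified sketch; the symbols ρ_water, g, A, D and w_tube are introduced here for illustration and do not appear in the article), the net upward force per unit length is the weight of water displaced by the tube minus the weight of the tube itself, and the tethers, columns or pontoons must carry that difference:

```latex
F_{\mathrm{net}} \;=\; \rho_{\mathrm{water}}\, g\, A \;-\; w_{\mathrm{tube}},
\qquad A = \frac{\pi D^{2}}{4}
```

If the tube is lighter than the water it displaces (F_net > 0), tethers or columns must anchor it to the sea bed; if it is heavier (F_net < 0), it must instead be hung from pontoons on the surface.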
Advantages
Compared with bridges
One advantage of a tunnel over a bridge is that it does not obstruct shipping. A low bridge would need an opening or swing section to allow shipping to pass, which can cause traffic congestion, while a bridge high enough to clear shipping may be unsightly and opposed by the public, and is also generally more expensive than a lower one. Bridges can also be closed due to harsh weather such as high winds.
Tunneling makes excavated soil available that can be used to create new land (see land reclamation). This was done with the rock excavated for the Channel Tunnel, which was used to create Samphire Hoe.
Compared with ferry links
As with bridges, though with greater likelihood, ferry links may be closed during adverse weather; strong winds or tidal limits can also affect the operation of a ferry crossing. Travelling through a tunnel is significantly quicker than using a ferry link, as shown by the times for crossing the English Channel (75–90 minutes by ferry versus about 21 minutes through the Channel Tunnel on the Eurostar). Ferries also offer much lower frequency and capacity, and travel times tend to be longer with a ferry than through a tunnel. In addition, ferries usually burn fossil fuels, emitting greenhouse gases, while most long rail tunnels are electrified. In the Baltic Sea, one of the busiest areas for passenger ferries in the world, sea ice causes seasonal disruption or requires expensive ice-breaking ships. In the Øresund region, the construction of the bridge–tunnel has been cited as enhancing regional integration and producing an economic boost not possible with the previous ferry links; similar arguments are used by proponents of the Helsinki–Tallinn tunnel in the Talsinki region. Both tunnels and ferries raise safety issues: in tunnels, fire is a particular hazard, with several fires having broken out in the Channel Tunnel, while the free surface effect is a significant safety risk for RORO ferries, as seen in the sinking of MS Estonia. Tunnels that exclude dangerous, combustible freight, as well as the fuel and lithium-ion batteries carried aboard motor vehicles, can significantly reduce fire risk.
Disadvantages
Compared with bridges
Tunnels have far higher construction and security costs than bridges, which may mean that over short distances bridges are preferred to tunnels (for example, the Dartford Crossing). As stated earlier, bridges may not allow shipping to pass, so combined solutions such as the Øresund Bridge, part of a bridge–tunnel crossing, have been constructed.
Compared with ferry links
As with bridges, ferry links are far cheaper to construct than tunnels, though not to operate. Tunnels also lack the flexibility of a ferry, whose route can easily be changed as transport demand shifts over time, without the cost of a new vessel. However, this flexibility can be a downside for customers who come to rely on a ferry service only to see it abandoned; fixed infrastructure such as a bridge or tunnel represents a much firmer commitment to sustained service.
List of notable examples
Proposed
Road
Rogfast tunnel in Norway – construction started in 2018; at 27 km long and 392 m deep, it will be the longest road tunnel and the deepest undersea tunnel in the world.
Karnaphuli Tunnel (Bangabandhu Sheikh Mujibur Rahman Tunnel) in Bangladesh – an underwater expressway tunnel under the Karnaphuli River in the port city of Chittagong.
Underwater Road Tunnel Salamina island–Perama – planned road tunnel in Attica, Greece, currently at the second stage of the tender from which the concessionaire will be selected.
India-Sri Lanka Sea Tunnel (proposed)
Penang Undersea Tunnel in Malaysia – to open in 2025
Western Harbour Tunnel in Sydney, New South Wales, Australia – to open in 2028
Suðuroyartunnilin in the Faroe Islands – at least 25 km in length, it would connect the islands of Suðuroy and Skúgvoy to Sandoy, which is part of the fixed-link interconnected Faroese "mainland".
Rail
Bohai Strait tunnel in China between Dalian and Yantai (decided, construction to start 'as soon as possible'.)
Helsinki to Tallinn Tunnel under the Gulf of Finland (proposed)
Irish Sea Tunnel (suggested)
Rio de Janeiro Metro Bay Tunnel (Line 3 – Rio de Janeiro-Niterói) (proposed)
Fehmarn Belt Fixed Link between Denmark and Germany (decided, construction started in January 2021)
Mumbai–Ahmedabad high-speed rail corridor of India (decided, construction start November 2018)
Taiwan Strait Tunnel - if built, it would become the longest rail tunnel in the world; engineering challenges and the unresolved political status of Taiwan make construction unlikely.
Strait of Gibraltar Tunnel - linking Gibraltar or the Spanish mainland to the African mainland; if built, it would most likely become the deepest tunnel ever constructed.
Smoking pipe
https://en.wikipedia.org/wiki/Smoking%20pipe
A smoking pipe is used to taste the smoke of a burning substance; most common is a tobacco pipe. Pipes are commonly made from briar, heather, corncob, meerschaum, clay, cherry, glass, porcelain, ebonite and acrylic.
Dutch pipe smoking
During the 17th century, pipe smoking became a new trend among young Dutch people, specifically upper- and middle-class students. These students copied the Spanish sailors and soldiers in the area by joining them in pipe smoking. They were particularly drawn to the novelty it offered, namely the taste of smoke, and at the time the only way to smoke tobacco was through a pipe. The habit spread and became mainstream among the Dutch during this period: "In a relatively short period of time, from 1590 to 1650, the Dutch Republic had gone from being a country of non-smokers to being a tobaccophile of Europe." Typically, these young people did their smoking as a social habit, in the company of other smokers, in smoking rooms or parlors also known as "tobacco houses." "It took more than a century for this new practice to come into fashion." The popularity of pipes also attracted the interest of artists. Although the pipe had once been associated with the lower class, it became a symbol of prestige and vanity, and images of pipes can be found in numerous paintings of the period. For example, in Willem Buytewech's painting The Merry Company (circa 1620–1622), three young men and a woman sit around a table with a tobacco pipe lying in the middle, while the smokers in Adriaen Brouwer's portrait The Smokers (1636) are shown sucking on their pipes.
Types
Bowl (smoking), pipes of various designs for smoking cannabis
Bong, also known as a water pipe
Ceremonial pipe, used by some Native American peoples
Chalice, a pipe used by Rastafari in cannabis rituals
Chibouk, a long-stemmed Turkish tobacco pipe with a clay bowl, often ornamented with precious stones
Chillum (pipe), conical smoking pipe originally from India
Hookah, tall stemmed pipe in which the smoke is cooled and filtered by passing through water, also known as a water pipe
Kiseru, Japanese pipe traditionally used for smoking finely shredded tobacco
Love rose, a pipe for smoking crack cocaine
Midwakh, small smoking pipe of Arabian origin
Pizzo (pipe), a pipe designed for freebasing drugs
Sebsi, traditional Moroccan smoking pipe
TEC pipe, a thermoelectric cooling pipe
Wentletrap
https://en.wikipedia.org/wiki/Wentletrap
Wentletraps are small, often white, very high-spired, predatory or ectoparasitic sea snails, marine gastropod mollusks in the family Epitoniidae.
The word wentletrap originated in Dutch (wenteltrap), and it means spiral staircase. These snails are sometimes also called "staircase shells", and "ladder shells".
The family Epitoniidae belongs to the superfamily Epitonioidea. Since 2017 this family also includes the former families Janthinidae (the pelagic purple snails) and Nystiellidae, all part of the informal group Ptenoglossa.
Epitoniidae is a rather large family, with an estimated number of species about 630.
Distribution
Wentletraps inhabit all seas and oceans worldwide, from the tropical zones to the Arctic and Antarctic zones.
Shell description
Most species of wentletrap are white, and have a porcelain-like appearance. They are notable for their intricately geometric shell architecture, and the shells of the larger species are prized by collectors.
The more or less turret-shaped shell consists of tightly wound (sometimes loosely coiled), convex whorls, which create a high, conical spiral. Fine or microscopic spiral sculpture (also called "striae") is present in many species. The shells sometimes feature an umbilicus. Wentletrap shells have a roundish or oval aperture, but the inner lip is often reduced to a strip of callus. The round and horny operculum is paucispiral and fits the aperture tightly. Most of the species in the family are small to minute, although some are larger, and overall the adult shell length in the family varies between 0.6 and 11.7 cm.
Within the genus Epitonium, the type genus of the family, the shell has predominantly axial sculpture of high, sharply ribbed "costae". These costae may offer some protection against other predatory snails, which would find it difficult or impossible to bore a hole in a shell with such obstructions.
Ecology
Wentletraps are usually found on sandy bottoms near sea anemones or corals, which serve as a food source for them. Some species are foragers and search for anemones.
Little is known about the biology of most wentletraps. Keen (1958) is most often cited. He observed that many wentletraps reveal a hint of purple body color, suggestive of carnivorous feeding. The animal can exude through its salivary gland a pink or purplish dye that may have an anaesthetic effect on its prey.
Keen also cited direct observation of a wentletrap feeding by insertion of its proboscis into a sea anemone.
A sequence of a wentletrap feeding on an anemone has been published. These snails also prey on corals and other coelenterates.
Female wentletraps lay egg capsules that are bound together with a supple string. The young emerge from these capsules as free-swimming larvae.
Genera
Genera within the family Epitoniidae include:
Acirsa Mörch, 1857
Acrilla H. Adams, 1860
Acrilloscala Sacco 1891
Alexania Strand, 1928
Alora (H. Adams, 1861)
Amaea H. & A. Adams, 1853
Boreoscala Kobelt, 1902 (possibly a synonym of Cirsotrema)
† Cavoscala Whitfield, 1892
† Cerithiscala de Boury, 1887
Chuniscala Thiele, 1928
Cingulacirsa Higo & Goto, 1993 (unaccepted: nomen nudum)
† Circuloscala de Boury, 1886
Cirsotrema Mörch, 1852
Clathroscala de Boury, 1890
† Clathrus Agassiz, 1837
Claviscala de Boury, 1909
† Confusiscala de Boury, 1909
Couthouyella Bartsch 1909
Crebriscala de Boury, 1909
Cycloscala Dall, 1889
Cylindriscala de Boury, 1909
Ecclesiogyra Dall, 1892
Eglisia Gray, 1842
Epidendrium A. Gittenberger & E. Gittenberger, 2005
Epifungium A. Gittenberger & E. Gittenberger, 2005
Epitonium Röding, 1798
Eulima Risso, 1826
Filiscala de Boury, 1911
† Foratiscala de Boury, 1887
Fragilopalia Azuma, 1972
Funiscala de Boury, 1890
† Gibboscala Kollmann, 2005
Globiscala de Boury, 1909
† Goniscala Marwick, 1943
Gregorioiscala Cossmann, 1912
Gyroscala de Boury, 1887
Iphitus Jeffreys, 1883
Janthina Röding, 1798
Kurodacirsa Masahito & Habe, 1975
† Liapinella Guzhov, 2006
Mammiscala de Boury, 1909
Minabescala Nakayama, 1994
Murdochella H. J. Finlay, 1926
Narrimania Taviani, 1984
Narvaliscala Iredale, 1936
Opalia H. & A. Adams, 1853
Opaliopsis Thiele, 1928
Periapta Bouchet & Waren, 1986
† Plicacerithium Gerasimov, 1992
Propescala Cotton & Godfrey, 1931
† Proscala Cossmann, 1912
Punctiscala Philippi, 1844
Recluzia Petit de la Saussaye, 1853
Rectacirsa Iredale, 1936
Rutelliscala Kilburn, 1985
Sthenorhytis Conrad 1862
Striatiscala de Boury, 1909
Surrepifungium A. Gittenberger & E. Gittenberger, 2005
Tenuiscala de Boury, 1887
Tumidiacirsa de Boury, 1911
† Turriscala de Boury, 1890
Variciscala de Boury, 1909
Varicopalia Kuroda MS, 1960 (nomen nudum)
Synonyms
Acrilla H. Adams, 1860: synonym of Amaea H. Adams & A. Adams, 1853
Acutiscala de Boury, 1909 : synonym of Epitonium Röding, 1798
Amiciscala Jousseaume 1912 : synonym of Epitonium Röding, 1798
Asperiscala de Boury, 1909: synonym of Epitonium Röding, 1798
Cinctiscala de Boury 1909 : synonym of Asperiscala de Boury, 1909
Cirratiscala de Boury, 1909 : synonym of Epitonium Röding, 1798
Clathroscala de Boury 1889 : synonym of Amaea H. Adams & A. Adams, 1853
Clathrus Oken 1815 : synonym of Epitonium Röding, 1798
Compressiscala Masahito (Prince) & Habe 1976 : synonym of Gregorioiscala Cossmann, 1912
Dannevigena Iredale 1936 : synonym of Cirsotrema Mörch, 1852
Depressiscala de Boury 1909 : synonym of Epitonium Röding, 1798
Foliaceiscala de Boury 1912 : synonym of Epitonium Röding, 1798
Fragiliscala Azuma 1962 : synonym of Amaea H. Adams & A. Adams, 1853
Fragilopalia Azuma, 1972 : synonym of Amaea H. Adams & A. Adams, 1853
Glabriscala de Boury 1909 : synonym of Epitonium Röding, 1798
Lampropalia Kuroda & Ito, 1961 : synonym of Cylindriscala de Boury, 1909
Mazescala Iredale 1936 : synonym of Epitonium Röding, 1798
Nipponoscala Masahito (Prince) & Habe 1973 : synonym of Epitonium Röding, 1798
Nodiscala de Boury 1889 : synonym of Opalia H. Adams & A. Adams, 1853
Nystiella Clench & Turner, 1952 : synonym of Opaliopsis Thiele, 1928
Plastiscala Iredale, 1936 : synonym of Acirsa Mörch, 1857 (junior subjective synonym)
Problitora Iredale, 1931 : synonym of Alexania Strand, 1928 (uncertain synonym)
Sagamiscala Masahito, Kuroda & Habe, 1971 : synonym of Globiscala de Boury, 1909
Scala Mörch, 1852 : synonym of Epitonium Röding, 1798
Scalina Conrad, 1865 : synonym of Amaea H. Adams & A. Adams, 1853
Spiniscala de Boury, 1909 : synonym of Epitonium Röding, 1798
Turbiniscala de Boury 1909 : synonym of Epitonium Röding, 1798
Viciniscala de Boury 1909 : synonym of Epitonium Röding, 1798
Rambouillet sheep
https://en.wikipedia.org/wiki/Rambouillet%20sheep
The Rambouillet is a breed of sheep (Ovis aries). It is also known as the Rambouillet Merino or the French Merino.
History
The development of the Rambouillet breed started in 1786, when Louis XVI purchased over 300 Spanish Merinos (318 ewes, 41 rams, seven wethers) from his cousin Charles III of Spain. The flock was subsequently developed on an experimental royal farm, the Bergerie royale (now Bergerie nationale) built during the reign of Louis XVI, at his request, on his domain of Rambouillet, 50 km southwest of Paris, which Louis XVI had purchased in December 1783 from his cousin Louis Jean Marie de Bourbon, Duke of Penthièvre. The flock was raised exclusively at the Bergerie, with no sheep being sold for years, well into the 19th century.
Outcrossing with English long-wool breeds and selection produced a well-defined breed, differing in several important points from the original Spanish Merino. The size was greater, with full-grown ewes weighing up to 200 lb and rams up to 300 lb. The wool clips were larger and the wool length had increased to greater than 3 in (80 mm).
In 1889, a Rambouillet Association was formed in the United States by Larmon Bronson Townsend and Larmon George Townsend in Ionia, Michigan, with the aim of preserving the breed. An estimated 50% of the sheep on the US western ranges are of Rambouillet blood. The Rambouillet stud has also had an enormous influence on the development of the Australian Merino industry, through Emperor and the Peppin Merino stud.
Value of the fleece
The fleece was valuable in the manufacture of cloth, at times being woven in a mixed fabric of cotton warp and wool weft.
Uses
The breed is well known for its wool, but also for its meat, both lamb and mutton. It has been described as a dual-purpose breed, with superior wool and near-mutton breed characteristics. This breed was also used for the development of the "Barbado" or American Blackbelly sheep, which was crossed with Barbados Blackbelly and mouflon for their horns at hunting ranches.
Winter squash
https://en.wikipedia.org/wiki/Winter%20squash
Winter squash is an annual fruit representing several squash species within the genus Cucurbita. Late-growing, less symmetrical, odd-shaped, rough or warty varieties, small to medium in size, but with long-keeping qualities and hard rinds, are usually called winter squash. They differ from summer squash in that they are harvested and eaten in the mature stage, when their seeds within have matured fully and their skin has hardened into a tough rind. At this stage, most varieties of this vegetable can be stored for use during the winter. Winter squash is generally cooked before being eaten, and the skin or rind is not usually eaten as it is with summer squash.
Varieties
Four species in the genus Cucurbita yield cultivars that are grown as winter squashes: C. argyrosperma, C. maxima, C. moschata, and C. pepo.
Cultivars of winter squash that are round and orange are called pumpkins. In New Zealand and Australian English, the term pumpkin generally refers to the broader category called winter squash elsewhere.
Planting and harvesting
Squash is a frost-tender plant, meaning that the seeds do not germinate in cold soil. Winter squash seeds germinate best in warm soil, with the warmer end of the suitable temperature range being optimal. The fruit is harvested once it has turned a deep, solid color and the skin is hard. Most winter squash is harvested in September or October in the Northern Hemisphere, before the danger of heavy frosts.
Although winter squashes are grown in many regions, they are relatively economically unimportant, with few exceptions. They are grown extensively in tropical America, Japan, Northern Italy, and certain areas of the United States. The calabazas of the West Indies and the forms grown by the people of Mexico and Central America are not uniform, pure varieties but extremely variable in size, shape, and color. Since these species are normally cross-pollinated, it is now difficult to keep a variety pure.
Nutritional value
Raw winter squash (such as acorn or butternut squash) is 90% water, 9% carbohydrates and 1% protein, and contains negligible fat except in the oil-rich seeds. In a 100-gram reference amount, it supplies 34 calories and is a moderate source (10–19% of the Daily Value, DV) of vitamin C (15% DV) and vitamin B6 (12% DV), with no other micronutrients in significant content. It is also a source of the provitamin A carotenoid beta-carotene.
NGC 5548
https://en.wikipedia.org/wiki/NGC%205548
NGC 5548 is a Type I Seyfert galaxy with a bright, active nucleus. This activity is caused by matter flowing onto a supermassive black hole of about 65 million solar masses at the core. Morphologically, this is an unbarred lenticular galaxy with tightly wound spiral arms, while shell and tidal tail features suggest that it has undergone a cosmologically recent merger or interaction event. NGC 5548 is approximately 245 million light years away and appears in the constellation Boötes. The apparent visual magnitude of NGC 5548 is approximately 13.3 in the V band.
In 1943, this galaxy was one of twelve nebulae listed by the American astronomer Carl Keenan Seyfert that showed broad emission lines in their nuclei. Members of this class of objects became known as Seyfert galaxies, and they were noted to have a higher than normal surface brightness in their nuclei. Observations of NGC 5548 during the 1960s with radio telescopes showed an enhanced level of radio emission. Spectrograms of the nucleus made in 1966 showed that the energized region was confined to a volume a few parsecs across, containing hot plasma with a dispersion velocity of ±450 km/s.
Among astronomers, the accepted explanation for the active nucleus in NGC 5548 is the accretion of matter onto a supermassive black hole (SMBH) at the core. This object is surrounded by an orbiting disk of accreted matter drawn in from the surroundings. As material is drawn into the outer parts of this disk, it becomes photoionized, producing broad emission lines in the optical and ultraviolet bands of the electromagnetic spectrum. A wind of ionized matter, organized in filamentary structures at distances of 1–14 light days from the center, is flowing outward in the direction perpendicular to the accretion disk plane.
The mass of the central black hole can be estimated from the properties of the emission lines in the core region. Combined measurements yield an estimated mass of about 65 million times the mass of the Sun, a result consistent with other methods of estimating the mass of the SMBH in the nucleus of NGC 5548. Matter is falling onto this black hole, while mass flows outward from the core at an equal or greater estimated rate each year. The inner part of the accretion disk surrounding the SMBH forms a thick, hot corona spanning several light hours that emits X-rays. When this radiation reaches the optically thick part of the accretion disk at a radius of around 1–2 light days, the X-rays are converted into heat.
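Black-hole masses quoted from emission-line properties, as above, are usually derived with the virial relation used in reverberation mapping (a general sketch of the standard method, not a reconstruction of the specific measurements cited here; the symbols R_BLR, Δv and f are introduced purely for illustration): the radius of the broad-line region, inferred from the time delay between continuum and emission-line variations, is combined with the velocity width of the broad lines:

```latex
M_{\mathrm{BH}} \;\approx\; f\,\frac{R_{\mathrm{BLR}}\,(\Delta v)^{2}}{G}
```

Here G is the gravitational constant and f is a dimensionless factor of order unity that absorbs the unknown geometry and orientation of the line-emitting gas.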
Organophosphorus chemistry
https://en.wikipedia.org/wiki/Organophosphorus%20chemistry
Organophosphorus chemistry is the scientific study of the synthesis and properties of organophosphorus compounds, which are organic compounds containing phosphorus. They are used primarily in pest control as an alternative to chlorinated hydrocarbons that persist in the environment. Some organophosphorus compounds are highly effective insecticides, although some are extremely toxic to humans, including sarin and VX nerve agents.
Phosphorus, like nitrogen, is in group 15 of the periodic table, and thus phosphorus compounds and nitrogen compounds have many similar properties. The definition of organophosphorus compounds is variable, which can lead to confusion. In industrial and environmental chemistry, an organophosphorus compound need contain only an organic substituent, but need not have a direct phosphorus-carbon (P-C) bond. Thus a large proportion of pesticides (e.g., malathion), are often included in this class of compounds.
Phosphorus can adopt a variety of oxidation states, and it is common to classify organophosphorus compounds as derivatives of phosphorus(V) or phosphorus(III), which are the predominant classes of compounds. In a descriptive but only intermittently used nomenclature, phosphorus compounds are identified by their coordination number σ and their valency λ. In this system, a phosphine is a σ3λ3 compound.
Organophosphorus(V) compounds, main categories
Phosphate esters and amides
Phosphate esters, with the general structure P(=O)(OR)3, feature P(V). Such species are of technological importance as flame retardant agents and plasticizers. Lacking a P−C bond, these compounds are in the technical sense not organophosphorus compounds but esters of phosphoric acid. Many derivatives are found in nature, such as phosphatidylcholine. Phosphate esters are synthesized by alcoholysis of phosphorus oxychloride. A variety of mixed amido-alkoxo derivatives are known, one medically significant example being the anti-cancer drug cyclophosphamide. Derivatives containing the thiophosphoryl group (P=S) include the pesticide malathion. The organophosphates prepared on the largest scale are the zinc dithiophosphates, used as additives for motor oil; several million kilograms of this coordination complex are produced annually by the reaction of phosphorus pentasulfide with alcohols.
Phosphoryl thioates are thermodynamically much more stable than thiophosphates, which can rearrange to the former at high temperature or in the presence of a catalytic alkylating agent:
SP(OR)3 → OP(OR)2SR
In the environment, all these phosphorus(V) compounds break down via hydrolysis to eventually afford phosphate and the organic alcohol or amine from which they are derived.
Phosphonic and phosphinic acids and their esters
Phosphonates are esters of phosphonic acid and have the general formula RP(=O)(OR')2. Phosphonates have many technical applications, a well-known member being glyphosate, better known as Roundup. With the formula (HO)2P(O)CH2NHCH2CO2H, this derivative of glycine is one of the most widely used herbicides. Bisphosphonates are a class of drugs to treat osteoporosis. The nerve gas agent sarin, containing both C–P and F–P bonds, is a phosphonate.
Phosphinates feature two P–C bonds, with the general formula R2P(=O)(OR'). A commercially significant member is the herbicide glufosinate. Similar to glyphosate mentioned above, it has the structure CH3P(O)(OH)CH2CH2CH(NH2)CO2H.
The Michaelis–Arbuzov reaction is the main method for the synthesis of these compounds. For example, dimethylmethylphosphonate (see figure above) arises from the rearrangement of trimethylphosphite, which is catalyzed by methyl iodide. In the Horner–Wadsworth–Emmons reaction and the Seyferth–Gilbert homologation, phosphonates are used in reactions with carbonyl compounds. The Kabachnik–Fields reaction is a method for the preparation of aminophosphonates. These compounds contain a very inert bond between phosphorus and carbon. Consequently, they hydrolyze to give phosphonic and phosphinic acid derivatives, but not phosphate.
Phosphine oxides, imides, and chalcogenides
Phosphine oxides (designation σ4λ5) have the general structure R3P=O with formal oxidation state V. Phosphine oxides form hydrogen bonds and some are therefore soluble in water. The P=O bond is very polar with a dipole moment of 4.51 D for triphenylphosphine oxide.
Compounds related to phosphine oxides include phosphine imides (R3PNR') and related chalcogenides (R3PE, where E = S, Se, Te). These compounds are some of the most thermally stable organophosphorus compounds. In general, they are less basic than the corresponding phosphine oxides, which can form adducts with thiophosphoryl halides:
R3PO + X3PS → R3P+–O–P+X2–S− + X−
Some phosphorus sulfides can undergo a reverse Arbuzov rearrangement to a dialkylthiophosphinate ester.
Phosphonium salts and phosphoranes
Compounds with the formula [PR4+]X− comprise the phosphonium salts. These species are tetrahedral phosphorus(V) compounds. From the commercial perspective, the most important member is tetrakis(hydroxymethyl)phosphonium chloride, [P(CH2OH)4]Cl, which is used as a fire retardant in textiles. Approximately 2 million kg of the chloride and the related sulfate are produced annually. They are generated by the reaction of phosphine with formaldehyde in the presence of a mineral acid:
PH3 + HX + 4 CH2O → [P(CH2OH)4+]X−
A variety of phosphonium salts can be prepared by alkylation and arylation of organophosphines:
PR3 + R'X → [PR3R'+]X−
The methylation of triphenylphosphine is the first step in the preparation of the Wittig reagent.
The parent phosphorane (σ5λ5) is PH5, which is unknown. Related compounds containing both halide and organic substituents on phosphorus are fairly common. Those with five organic substituents are rare, although P(C6H5)5 is known, being derived from P(C6H5)4+ by reaction with phenyllithium.
Phosphorus ylides are unsaturated phosphoranes, known as Wittig reagents, e.g. CH2P(C6H5)3. These compounds feature tetrahedral phosphorus(V) and are considered relatives of phosphine oxides. They also are derived from phosphonium salts, but by deprotonation not alkylation.
Organophosphorus(III) compounds, main categories
Phosphites, phosphonites, and phosphinites
Phosphites, sometimes called phosphite esters, have the general structure P(OR)3 with oxidation state +3. Such species arise from the alcoholysis of phosphorus trichloride:
PCl3 + 3 ROH → P(OR)3 + 3 HCl
The reaction is general, thus a vast number of such species are known. Phosphites are employed in the Perkow reaction and the Michaelis–Arbuzov reaction. They also serve as ligands in organometallic chemistry.
Intermediate between phosphites and phosphines are the phosphonites (P(OR)2R') and phosphinites (P(OR)R'2). Such species arise via alcoholysis of the corresponding phosphonous and phosphinous chlorides (PCl2R' and PClR'2, respectively). The latter are produced by the reaction of phosphorus trichloride with a poor-metal alkyl complex, e.g. organomercury, organolead, or a mixed lithium–organoaluminum compound.
Phosphines
The parent compound of the phosphines is PH3, called phosphine in the US and the British Commonwealth, but phosphane elsewhere. Replacement of one or more hydrogen centers by organic substituents (alkyl, aryl) gives PH3−xRx, an organophosphine, generally referred to as a phosphine.
From the commercial perspective, the most important phosphine is triphenylphosphine, several million kilograms being produced annually. It is prepared from the reaction of chlorobenzene, PCl3, and sodium. Phosphines of a more specialized nature are usually prepared by other routes. Phosphorus halides undergo nucleophilic displacement by organometallic reagents such as Grignard reagents. Organophosphines are nucleophiles and ligands. Two major applications are as reagents in the Wittig reaction and as supporting phosphine ligands in homogeneous catalysis.
Their nucleophilicity is evidenced by their reactions with alkyl halides to give phosphonium salts. Phosphines are nucleophilic catalysts in organic synthesis, e.g. the Rauhut–Currier reaction and Baylis-Hillman reaction. Phosphines are reducing agents, as illustrated in the Staudinger reduction for the conversion of organic azides to amines and in the Mitsunobu reaction for converting alcohols into esters. In these processes, the phosphine is oxidized to phosphorus(V). Phosphines have also been found to reduce activated carbonyl groups, for instance the reduction of an α-keto ester to an α-hydroxy ester.
A few halophosphines are known, although phosphorus' strong nucleophilicity predisposes them to decomposition, and dimethylphosphinyl fluoride spontaneously disproportionates to dimethylphosphine trifluoride and tetramethylbiphosphine. One common synthesis adds halogens to tetramethylbiphosphine disulfide. Alternatively alkylation of phosphorus trichloride gives a halophosphonium cation, which metals reduce to halophosphines.
Phosphaalkenes and phosphaalkynes
Compounds with carbon phosphorus(III) multiple bonds are called phosphaalkenes (R2C=PR) and phosphaalkynes (RC≡P). They are similar in structure, but not in reactivity, to imines (R2C=NR) and nitriles (RC≡N), respectively. In the compound phosphorine, one carbon atom in benzene is replaced by phosphorus. Species of this type are relatively rare but for that reason are of interest to researchers. A general method for the synthesis of phosphaalkenes is by 1,2-elimination of suitable precursors, initiated thermally or by base such as DBU, DABCO, or triethylamine:
Thermolysis of Me2PH generates CH2=PMe, an unstable species in the condensed phase.
Organophosphorus(0), (I), and (II) compounds
Compounds where phosphorus exists in a formal oxidation state of less than III are uncommon, but examples are known for each class. Organophosphorus(0) species are debatably illustrated by the carbene adducts, [P(NHC)]2, where NHC is an N-heterocyclic carbene. With the formulae (RP)n and (R2P)2, respectively, compounds of phosphorus(I) and (II) are generated by reduction of the related organophosphorus(III) chlorides:
5 PhPCl2 + 5 Mg → (PhP)5 + 5 MgCl2
2 Ph2PCl + Mg → Ph2P-PPh2 + MgCl2
Diphosphenes, with the formula R2P2, formally contain phosphorus-phosphorus double bonds. These phosphorus(I) species are rare but are stable provided that the organic substituents are large enough to prevent catenation. Bulky substituents also stabilize phosphorus radicals.
Many mixed-valence compounds are known, e.g. the cage P7(CH3)3.
OpenDocument
https://en.wikipedia.org/wiki/OpenDocument
The Open Document Format for Office Applications (ODF), also known as OpenDocument, standardized as ISO 26300, is an open file format for word-processing documents, spreadsheets, presentations and graphics, using ZIP-compressed XML files. It was developed with the aim of providing an open, XML-based file format specification for office applications.
The standard is developed and maintained by a technical committee in the Organization for the Advancement of Structured Information Standards (OASIS) consortium. It was based on the Sun Microsystems specification for OpenOffice.org XML, the default format for OpenOffice.org and LibreOffice. It was originally developed for StarOffice "to provide an open standard for office documents."
In addition to being an OASIS standard, it is published as an ISO/IEC international standard ISO/IEC 26300 Open Document Format for Office Applications (OpenDocument).
As of March 2024, the current version is 1.4.
Specifications
The most common filename extensions used for OpenDocument documents are:
.odt and .fodt for word processing (text) documents
.ods and .fods for spreadsheets
.odp and .fodp for presentations
.odg and .fodg for graphics
.odf for formula, mathematical equations
The original OpenDocument format consists of an XML document that has <document> as its root element. OpenDocument files can also take the format of a ZIP compressed archive containing a number of files and directories; these can contain binary content and benefit from ZIP's lossless compression to reduce file size. OpenDocument benefits from separation of concerns by separating the content, styles, metadata, and application settings into four separate XML files.
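As an illustration of this package layout, the short sketch below (the file name "example.odt" is a hypothetical placeholder, and the exact member list varies between producing applications) treats an OpenDocument file as an ordinary ZIP archive and inspects the XML streams described above:

```python
import zipfile
from xml.etree import ElementTree

# "example.odt" is a hypothetical file name; any .odt/.ods/.odp package works the same way.
with zipfile.ZipFile("example.odt") as odf:
    # Typical members: mimetype, content.xml, styles.xml, meta.xml,
    # settings.xml and META-INF/manifest.xml.
    print(odf.namelist())

    # The 'mimetype' member identifies the document type,
    # e.g. application/vnd.oasis.opendocument.text for a word-processing file.
    print(odf.read("mimetype").decode("utf-8"))

    # The document body lives in content.xml; parse it as plain XML.
    content = ElementTree.fromstring(odf.read("content.xml"))
    print(content.tag)
```

Because the package is plain ZIP plus XML, the same inspection can be done with any ZIP tool; the flat .fodt/.fods/.fodp variants listed above instead store the whole document as a single XML file.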
There is a comprehensive set of example documents in OpenDocument format available. The whole test suite is available under the Creative Commons Attribution 2.5 license.
History
Conception
The OpenDocument standard was developed by a Technical Committee (TC) under the Organization for the Advancement of Structured Information Standards (OASIS) industry consortium. The ODF-TC has members from a diverse set of companies and individuals. Active TC members have voting rights. Members associated with Sun and IBM have sometimes had a large voting influence. The standardization process involved the developers of many office suites or related document systems.
The first official ODF-TC meeting to discuss the standard was 16 December 2002. OASIS approved OpenDocument as an OASIS standard on 1 May 2005. OASIS submitted the ODF specification to ISO/IEC Joint Technical Committee 1 (JTC 1) on 16 November 2005, under Publicly Available Specification (PAS) rules. ISO/IEC standardization for an open document standard including text, spreadsheet and presentation was proposed for the first time in DKUUG 28 August 2001.
After a six-month review period, on 3 May 2006, OpenDocument unanimously passed its six-month DIS (Draft International Standard) ballot in JTC 1 (ISO/IEC JTC 1/SC 34), with broad participation, after which the OpenDocument specification was "approved for release as an ISO and IEC International Standard" under the name ISO/IEC 26300:2006.
After responding to all written ballot comments, and a 30-day default ballot, the OpenDocument international standard went to publication in ISO, officially published 30 November 2006.
In 2006, Gary Edwards, a member of the OASIS TC since 2002, along with Sam Hiser and Paul "Marbux" E. Merrell, founded the OpenDocument Foundation. The aim of this project was to act as an open-source representative of the format within OASIS, and its immediate goal was to develop software that would convert legacy Microsoft Office documents to ODF. By October 2007 the project had failed: conversion of Microsoft Office documents could not be achieved. By this time, the foundation was convinced that ODF was not moving in a direction that it supported, and as a result it announced the decision to abandon its namesake format in favor of the W3C's Compound Document Format (CDF), which was then in the early stages of its development. The foundation never acted on this decision, however, and was soon dissolved; CDF was never designed for this purpose either.
Further standardization
Further standardization work with OpenDocument includes:
The OASIS Committee Specification OpenDocument 1.0 (second edition) corresponds to the published ISO/IEC 26300:2006 standard. The content of ISO/IEC 26300 and OASIS OpenDocument v1.0 2nd ed. is identical. It includes the editorial changes made to address JTC1 ballot comments. It is available in ODF, HTML and PDF formats.
OpenDocument 1.1 includes additional features to address accessibility concerns. It was approved as an OASIS Standard on 2007-02-01 following a call for vote issued on 2007-01-16. The public announcement was made on 2007-02-13. This version was not initially submitted to ISO/IEC, because it is considered to be a minor update to ODF 1.0 only, and OASIS were working already on ODF 1.2 at the time ODF 1.1 was approved. However it was later submitted to ISO/IEC and published in March 2012 as "ISO/IEC 26300:2006/Amd 1:2012 – Open Document Format for Office Applications (OpenDocument) v1.1".
OpenDocument 1.2 includes additional accessibility features, RDF-based metadata, a spreadsheet formula specification based on OpenFormula, support for digital signatures and some features suggested by the public. It consists of three parts: Part 1: OpenDocument Schema, Part 2: Recalculated Formula (OpenFormula) Format and Part 3: Packages. Version 1.2 of the specification was approved as an OASIS Standard on 29 September 2011. It was submitted to the relevant ISO committee under the Publicly Available Specification (PAS) procedure in March 2014. In October 2014, it was unanimously approved as a Draft International Standard. Some comments were raised in the process that needed to be addressed before OpenDocument 1.2 could proceed to become an International Standard. OpenDocument 1.2 was published as ISO/IEC standard on 17 June 2015.
OpenDocument 1.3 includes additional features for digital signatures, encryption, change-tracking and interoperability. Version 1.3 of the OpenDocument specification was approved as an OASIS Standard in April 2021. The specification was completed as the result of the COSM crowdfunding project seeded by The Document Foundation.
Application support
Software
The OpenDocument format is used in free software and in proprietary software. This includes office suites (both stand-alone and web-based) and individual applications such as word-processors, spreadsheets, presentation, and data management applications. Prominent text editors, word processors and office suites supporting OpenDocument fully or partially include:
AbiWord
Adobe Buzzword
Apache OpenOffice
Bean (software)
Calibre ebook viewer, converter, editor, and manager
Calligra Suite
Collabora Office and Collabora Online
Corel WordPerfect Office X6
Dropbox
Evince
Gnumeric
Google Docs
IBM Lotus Symphony
Inkscape exports .odg
KOffice
LibreOffice
Microsoft Office 2003 and Office XP (with the Open Source OpenXML/ODF Translator Add-in for Office)
Microsoft Office 2007 (with Service Pack 2 or 3) supports ODF 1.1 (Windows only)
Microsoft Office 2010 supports ODF 1.1 (Windows only)
Microsoft Office 2013 supports ODF 1.2 (Windows only)
Microsoft Office 2016 and 2019 support ODF 1.2 (Windows: read/write; OS X: read-only after online conversion)
Microsoft Office 2021 supports ODF 1.3 (Windows and MacOS)
Microsoft OneDrive / Office Web Apps
NeoOffice
Okular
ONLYOFFICE
OpenOffice.org
Quarto and R Markdown can export to .odt
Scribus imports .odt and .odg
SoftMaker Office
Sun Microsystems StarOffice
TextEdit
WordPad (Windows 7 and later, Windows Server 2008 R2 and later) supports ODF 1.1
Zoho Office Suite
Various organizations have announced the development of conversion software (including plugins and filters) to support OpenDocument in Microsoft's products; nine such conversion packages have been produced. Microsoft first released support for the OpenDocument Format in Office 2007 SP2, but the implementation faced substantial criticism, and the ODF Alliance and others claimed that the third-party plugins provided better support. Microsoft Office 2010 can open and save OpenDocument Format documents natively, although not all features are supported. In July 2024, Microsoft announced support for ODF 1.4 (prior to its release) in Microsoft 365 apps, starting with version 2404 on Windows and 16.84 on macOS.
Starting with Mac OS X 10.5, the TextEdit application and Quick Look preview feature support the OpenDocument Text format.
Accessibility
Licensing
Public access to the standard
Versions of the OpenDocument Format approved by OASIS are available for free download and use. The ITTF has added ISO/IEC 26300 to its "list of freely available standards"; anyone may download and use this standard free-of-charge under the terms of a click-through license.
Additional royalty-free licensing
Obligated members of the OASIS ODF TC have agreed to make deliverables available to implementors under the OASIS Royalty Free with Limited Terms policy.
Key contributor Sun Microsystems made an irrevocable intellectual property covenant, providing all implementers with the guarantee that Sun will not seek to enforce any of its enforceable U.S. or foreign patents against any implementation of the OpenDocument specification in which development Sun participates to the point of incurring an obligation.
A second contributor to ODF development, IBM – which, for instance, has contributed Lotus spreadsheet documentation – has made their patent rights available through their Interoperability Specifications Pledge in which "IBM irrevocably covenants to you that it will not assert any Necessary Claims against you for your making, using, importing, selling, or offering for sale Covered Implementations."
The Software Freedom Law Center has examined whether there are any legal barriers to the use of the OpenDocument Format (ODF) in free and open source software arising from the standardization process. In their opinion ODF is free of legal encumbrances that would prevent its use in free and open source software, as distributed under licenses authored by Apache and the FSF.
Response
Support for OpenDocument
Several governments, companies, organizations and software products support the OpenDocument format. For example:
The OpenDoc Society runs frequent ODF Plugfests in association with industry groups and Public Sector organisations. The 10th Plugfest was hosted by the UK Government Digital Service in conjunction with industry associations including the OpenForum Europe and OpenUK (formerly Open Source Consortium).
An output of the 10th Plugfest was an ODF toolkit which includes "Open Document Format principles for Government Technology" that has the purpose of simply explaining the case for ODF directed at the "average civil servant" and includes an extract from the UK Government policy relating to Open Document Format.
The toolkit also includes a single page graphical image designed to articulate the consequences of not choosing Open Document Format. The illustration has now been translated into more than 10 languages.
Information technology companies like Apple Inc., Adobe Systems, Google, IBM, Intel, Microsoft, Nokia, Novell, Red Hat, Oracle as well as other companies who may or may not be working inside the OASIS OpenDocument Adoption Technical Committee.
Over 600 companies and organizations promote OpenDocument format through The OpenDocument Format Alliance.
NATO with its 26 members uses ODF as a mandatory standard for all members.
The TAC (Telematics between Administrations Committee), composed of e-government policy-makers from the 25 European Union Member States, endorsed a set of recommendations for promoting the use of open document formats in the public sector.
The free office suites Apache OpenOffice, Calligra, KOffice, NeoOffice and LibreOffice all use OpenDocument as their default file format.
Several organisations, such as the OpenDocument Fellowship and OpenDoc Society were founded to support and promote OpenDocument.
The UK government has adopted ODF as the standard for all documents in the UK civil service
The Russian government has recommended adopting ODF as the standard in the public sector as by GOST R ISO/MEK 26300-2010
The Wikimedia Foundation supports ODF export from MediaWiki, which powers Wikipedia and a number of other Internet wiki-based sites.
The default text processing applications in Windows 10 (WordPad) and Mac OS 10.9 (TextEdit) support OpenDocument Text.
On 4 November 2005, IBM and Sun Microsystems convened the "OpenDocument (ODF) Summit" in Armonk, New York, to discuss how to boost OpenDocument adoption. The ODF Summit brought together representatives from several industry groups and technology companies, including Oracle, Google, Adobe, Novell, Red Hat, Computer Associates, Corel, Nokia, Intel, and Linux e-mail company Scalix (LaMonica, 10 November 2005). The providers committed resources to technically improve OpenDocument through existing standards bodies and to promote its usage in the marketplace, possibly through a stand-alone foundation. Scholars have suggested that the "OpenDocument standard is the wedge that can hold open the door for competition, particularly with regard to the specific concerns of the public sector." Indeed, adoption by the public sector has risen considerably since the promulgation of the OpenDocument format in the 2005/2006 time period.
Different applications using ODF as a standard document format have different methods of providing macro/scripting capabilities. There is no macro language specified in ODF. Users and developers differ on whether inclusion of a standard scripting language would be desirable.
The ODF specification for tracked changes is limited and does not fully specify all cases, resulting in implementation-specific behaviors. In addition, OpenDocument does not support change tracking in elements like tables or MathML.
It is not permitted to use generic ODF formatting style elements (like font information) for the MathML elements.
Adoption
One objective of open formats like OpenDocument is to guarantee long-term access to data without legal or technical barriers, and some governments have come to view open formats as a public policy issue. Several governments around the world have introduced policies of partial or complete adoption. What this means varies from case to case; in some cases, it means that the ODF standard has a national standard identifier; in some cases, it means that the ODF standard is permitted to be used where national regulation says that non-proprietary formats must be used, and in still other cases, it means that some government body has actually decided that ODF will be used in some specific context. The following is an incomplete list:
International
NATO
European Union
Argentina
Belgium
Brazil
Croatia
Finland
Denmark
France
Germany
Hungary
India
Italy
Japan
Latvia
Malaysia
Netherlands
Norway
Poland
Portugal
Russia
Slovakia
Sweden
Serbia
South Africa
South Korea
Switzerland
Taiwan
Turkey
United Kingdom
Uruguay
Venezuela
Subnational
Andalusia, Spain
Assam, India
Extremadura, Spain
Hong Kong, China
Kerala, India
Massachusetts, United States
Misiones, Argentina
Munich, Bavaria, Germany
Paraná, Brazil
Monal
https://en.wikipedia.org/wiki/Monal
A monal is a bird of genus Lophophorus of the pheasant family, Phasianidae.
Description
The males all have colorful, iridescent plumage. Their physique is rather plump, and their diet consists of plant matter such as roots and bulbs, as well as insects. Males are polygamous and mate with several females, while each female mates only with her selected male, entering into a monogamous relationship. Due to habitat destruction and hunting, they have become rare and their population is endangered.
Species
There are three species and several subspecies: the Himalayan monal (Lophophorus impejanus), Sclater's monal (Lophophorus sclateri) and the Chinese monal (Lophophorus lhuysii).
Television set
https://en.wikipedia.org/wiki/Television%20set
A television set or television receiver (more commonly called TV, TV set, television, telly, or tele) is an electronic device for viewing and hearing television broadcasts, or for use as a computer monitor. It combines a tuner, display, and loudspeakers. Introduced in the late 1920s in mechanical form, television sets became a popular consumer product after World War II in electronic form, using cathode-ray tube (CRT) technology. The addition of color to broadcast television after 1953 further increased the popularity of television sets in the 1960s, and an outdoor antenna became a common feature of suburban homes. The ubiquitous television set became the display device for the first recorded media for consumer use in the 1970s, such as Betamax and VHS; these were later succeeded by DVD. It has been used as a display device since the first generation of home computers (e.g. Timex Sinclair 1000) and dedicated video game consoles (e.g. Atari) in the 1980s. By the early 2010s, flat-panel television incorporating liquid-crystal display (LCD) technology, especially LED-backlit LCD technology, largely replaced CRT and other display technologies. Modern flat-panel TVs are typically capable of high-definition display (720p, 1080i, 1080p, 4K, 8K) and can also play content from a USB device. In the late 2010s, most flat-panel TVs began offering 4K and 8K resolutions.
History
Early television
Mechanical televisions were commercially sold from 1928 to 1934 in the United Kingdom, France, the United States, and the Soviet Union. The earliest commercially made televisions were radios with the addition of a television device consisting of a neon tube behind a mechanically spinning disk with a spiral of apertures that produced a red postage-stamp size image, enlarged to twice that size by a magnifying glass. The Baird "Televisor" (sold in 1930–1933 in the UK) is considered the first mass-produced television, selling about a thousand units.
Karl Ferdinand Braun was the first to conceive the use of a CRT as a display device in 1897. The "Braun tube" became the foundation of 20th century TV. In 1926, Kenjiro Takayanagi demonstrated the first TV system that employed a cathode-ray tube (CRT) display, at Hamamatsu Industrial High School in Japan. This was the first working example of a fully electronic television receiver. His research toward creating a production model was halted by the US after Japan lost World War II.
The first commercially made electronic televisions with CRTs were manufactured by Telefunken in Germany in 1934, followed by other makers in France (1936), Britain (1936), and the US (1938). The cheapest model was $445. An estimated 19,000 electronic televisions were manufactured in Britain, and about 1,600 in Germany, before World War II. About 7,000–8,000 electronic sets were made in the U.S. before the War Production Board halted manufacture in April 1942, with production resuming in August 1945. Television usage in the western world skyrocketed after World War II with the lifting of the manufacturing freeze, war-related technological advances, the drop in television prices caused by mass production, increased leisure time, and additional disposable income. While only 0.5% of U.S. households had a television in 1946, 55.7% had one in 1954, and 90% by 1962. In Britain, there were 15,000 television households in 1947, 1.4 million in 1952, and 15.1 million by 1968.
Transistorised television
Early electronic television sets were large and bulky, with analog circuits made of vacuum tubes. As an example, the RCA CT-100 color TV set used 36 vacuum tubes. Following the invention of the first working transistor at Bell Labs, Sony founder Masaru Ibuka predicted in 1952 that the transition to electronic circuits made of transistors would lead to smaller and more portable television sets. The first fully transistorized, portable solid-state television set was the Sony TV8-301, developed in 1959 and released in 1960. By the 1970s, television manufacturers utilized this push for miniaturization to create small, console-styled sets which their salesmen could easily transport, pushing demand for television sets out into rural areas. However, the first fully transistorized color TV set, the HMV Colourmaster Model 2700, was released in 1967 by the British Radio Corporation. This began the transformation of television viewership from a communal viewing experience to a solitary viewing experience. By 1960, Sony had sold over 4 million portable television sets worldwide.
By the late 1960s and early 1970s, color television had come into wide use. In Britain, BBC1, BBC2 and ITV were regularly broadcasting in colour by 1969.
Late model CRT TVs used highly integrated electronics such as a Jungle chip which performs the functions of many transistors.
LCD television
Paul K. Weimer at RCA developed the thin-film transistor (TFT) in 1962; the idea of a TFT-based liquid-crystal display (LCD) was later conceived by Bernard Lechner of RCA Laboratories in 1968. Lechner, F. J. Marlowe, E. O. Nester and J. Tults demonstrated the concept in 1968 with a dynamic scattering LCD that used standard discrete MOSFETs.
In 1973, T. Peter Brody, J. A. Asars and G. D. Dixon at Westinghouse Research Laboratories demonstrated the first thin-film-transistor liquid-crystal display (TFT LCD). Brody and Fang-Chen Luo demonstrated the first flat active-matrix liquid-crystal display (AM LCD) in 1974.
By 1982, pocket LCD TVs based on AM LCD technology were developed in Japan. The Epson ET-10 (Epson Elf) was the first color LCD pocket TV, released in 1984. In 1988, a Sharp research team led by engineer T. Nagayasu demonstrated a full-color LCD display, which convinced the electronics industry that LCD would eventually replace the CRT as the standard television display technology. The first wall-mountable TV was introduced by Sharp Corporation in 1992.
During the first decade of the 21st century, CRT "picture tube" display technology was almost entirely supplanted worldwide by flat-panel displays: first plasma displays around 1997, then LCDs. By the early 2010s, LCD TVs, which increasingly used LED-backlit LCDs, accounted for the overwhelming majority of television sets being manufactured.
In 2014, curved OLED TVs were released to the market; they were intended to offer improved image quality, but the effect was visible only from a certain position relative to the TV.
Rollable OLED TVs were introduced in 2020, which allow the display panel of the TV to be hidden.
2023 saw the release of wireless TVs which connect to other devices solely through a transmitter box with an antenna that transmits information wirelessly to the TV. Demos of transparent TVs have also been made. There are TVs that are offered to users for free, but are paid for by showing ads to users and collecting user data.
TV sizes
Cambridge's Clive Sinclair created a mini TV in 1967 that could be held in the palm of a hand; it was the world's smallest television at the time, though it never took off commercially because the design was complex. In 2019, Samsung launched its largest television to date. The average size of TVs has grown over time.
In 2024, the sales of large-screen televisions significantly increased. Between January and September, approximately 38 million televisions with a screen size of or larger were sold globally. This surge in popularity can be attributed to several factors, including technological advancements and decreasing prices.
The availability of larger screen sizes at more affordable prices has driven consumer demand. For example, Samsung, a leading electronics manufacturer, introduced its first television in 2019 with a price tag of $99,000. In 2024, the company offered four models starting at $4,000. This trend is reflected in the overall market, with the average price of a television exceeding , declining from $6,662 in 2023 to $3,113 in 2024. As technology advances, even larger screen sizes, such as , are becoming increasingly accessible to consumers.
Display
Television sets may employ one of several available display technologies. As of mid-2019, LCDs overwhelmingly predominate in new merchandise, but OLED displays are claiming an increasing market share as they become more affordable and DLP technology continues to offer some advantages in projection systems. The production of plasma and CRT displays has been completely discontinued.
There are four primary competing TV technologies:
CRT
LCD (multiple variations of LCD screens are called QLED, quantum dot, LED, LCD TN, LCD IPS, LCD PLS, LCD VA, etc.)
OLED
Plasma
CRT
The cathode-ray tube (CRT) is a vacuum tube containing a so-called electron gun (or three for a color television) and a fluorescent screen where the television image is displayed. The electron gun accelerates electrons in a beam which is deflected in both the vertical and horizontal directions using varying electric or (usually, in television sets) magnetic fields, in order to scan a raster image onto the fluorescent screen. The CRT requires an evacuated glass envelope, which is rather deep (well over half of the screen size), fairly heavy, and breakable. As a matter of radiation safety, both the face (panel) and back (funnel) were made of thick lead glass in order to reduce human exposure to harmful ionizing radiation (in the form of x-rays) produced when electrons accelerated using a high voltage () strike the screen. By the early 1970s, most color TVs replaced leaded glass in the face panel with vitrified strontium oxide glass, which also blocked x-ray emissions but allowed better color visibility. This also eliminated the need for cadmium phosphors in earlier color televisions. Leaded glass, which is less expensive, continued to be used in the funnel glass, which is not visible to the consumer.
In television sets (and most computer monitors that used CRTs), the entire screen area is scanned repetitively (completing a full frame 25 or 30 times a second) in a fixed pattern called a raster. The image information is received in real time from a video signal which controls the beam current of the electron gun, or, in color televisions, of each of the three electron guns whose beams land on phosphors of the three primary colors (red, green, and blue). Except in the very early days of television, magnetic deflection has been used to scan the image onto the face of the CRT; this involves a varying current applied to both the vertical and horizontal deflection coils placed around the neck of the tube just beyond the electron gun(s).
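To make the raster timing concrete, the deflection circuitry has to trace every line of every frame each second. The following minimal Python sketch is illustrative only (the 625- and 525-line figures are standard broadcast line counts assumed here, not stated in this article) and simply multiplies lines per frame by frames per second.

```python
# Minimal illustrative sketch (not from the article): the horizontal scan rate a
# CRT's deflection circuit must sustain is lines per frame times frames per second.
# The 625- and 525-line counts below are standard broadcast values assumed here.

def line_scan_rate(lines_per_frame: int, frames_per_second: float) -> float:
    """Horizontal lines traced per second for a given raster."""
    return lines_per_frame * frames_per_second

if __name__ == "__main__":
    print(line_scan_rate(625, 25))  # 15625 lines/s for 625-line, 25-frame systems
    print(line_scan_rate(525, 30))  # 15750 lines/s for 525-line, 30-frame (monochrome) timing
```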
DLP
Digital light processing (DLP) is a type of video projector technology that uses a digital micromirror device. Some DLPs have a TV tuner, which makes them a type of TV display. It was originally developed in 1987 by Larry Hornbeck of Texas Instruments. While the DLP imaging device was invented by Texas Instruments, the first DLP based projector was introduced by Digital Projection Ltd in 1997. Digital Projection and Texas Instruments were both awarded Emmy Awards in 1998 for the DLP projector technology. DLP is used in a variety of display applications from traditional static displays to interactive displays and also non-traditional embedded applications including medical, security, and industrial uses.
DLP technology is used in DLP front projectors (standalone projection units, primarily for classrooms and business), DLP rear projection television sets, and digital signs. It is also used in about 85% of digital cinema projection, and in additive manufacturing as a light source in some SLA 3D printers to cure resins into solid 3D objects.
Rear projection
Rear-projection televisions (RPTVs) became very popular in the early days of television, when the ability to practically produce tubes with a large display size did not exist. In 1936, the largest convenient size for a tube that could be mounted horizontally in the television cabinet was limited by the tube's required length: because of the low deflection angles of CRTs produced in the era, CRTs with large front sizes would also have needed to be very deep, which caused such CRTs to be installed at an angle to reduce the cabinet depth of the TV set. Larger tubes and TV sets were available, but the tubes were so long (deep) that they were mounted vertically and viewed via a mirror in the top of the TV set cabinet, usually under a hinged lid, reducing considerably the depth of the set but making it taller. These mirror-lid televisions were large pieces of furniture.
As a solution, Philips introduced a television set in 1937 that relied on back-projecting an image from a tube onto a screen. This required the tube to be driven very hard (at unusually high voltages and currents) to produce an extremely bright image on its fluorescent screen. Further, Philips decided to use a green phosphor on the tube face, as it was brighter than the white phosphors of the day. In fact these early tubes were not up to the job, and by November of that year Philips decided that it was cheaper to buy the sets back than to provide replacement tubes under warranty every couple of weeks or so. Substantial improvements were very quickly made to these small tubes, and a more satisfactory tube design was available the following year, helped by Philips's decision to use a smaller screen size of . In 1950 a more efficient tube with vastly improved technology and a more efficient white phosphor, along with smaller and less demanding screen sizes, was able to provide an acceptable image, though the life of the tubes was still shorter than that of contemporary direct-view tubes.
However, in the early to mid-2000s RPTV systems made a comeback as a cheaper alternative to contemporary LCD and plasma TVs. They were larger and lighter than contemporary CRT TVs and had a flat screen like LCD and plasma sets, but they were often dimmer, had lower contrast ratios and narrower viewing angles, their image quality was affected by room lighting and suffered when compared with direct-view CRTs, and they were still bulky like CRTs. These TVs worked by having a DLP, LCoS or LCD projector at the bottom of the unit and using a mirror to project the image onto a screen. The screen may be a Fresnel lens to increase brightness at the cost of viewing angles. Some early units used CRT projectors and were heavy, weighing up to 500 pounds. Most RPTVs used ultra-high-performance lamps as their light source, which required periodic replacement, partly because they dimmed with use but mainly because the operating bulb glass became weaker with ageing, to the point where the bulb could eventually shatter, often damaging the projection system. Those that used CRTs or lasers did not require lamp replacement.
Plasma
A plasma display panel (PDP) is a type of flat-panel display common to large TV displays or larger. They are called "plasma" displays because the technology utilizes small cells containing electrically charged ionized gases, in essence chambers similar to those in fluorescent lamps. Around 2014, television manufacturers were largely phasing out plasma TVs, because plasma TVs were more expensive and more difficult to make in 4K resolution than LED-backlit LCD TVs.
In 1997, Philips introduced at CES and CeBIT the first large () commercially available flat-panel TV, using Fujitsu plasma displays.
LCD
Liquid-crystal-display televisions (LCD TV) are television sets that use liquid-crystal displays to produce images. LCD televisions are much thinner and lighter than CRTs of similar display size and are available in much larger sizes (e.g., diagonal). When manufacturing costs fell, this combination of features made LCDs practical for television receivers.
In 2007, LCD televisions surpassed sales of CRT-based televisions globally for the first time, and their sales figures relative to other technologies accelerated. LCD TVs quickly displaced the only major competitors in the large-screen market, the plasma display panel and rear-projection television. In the mid-2010s LCDs became, by far, the most widely produced and sold television display type.
LCDs also have disadvantages. Other technologies address these weaknesses, including OLEDs, FED and SED. LCDs can have quantum dots and mini-LED backlights to enhance image quality.
OLED
An OLED (organic light-emitting diode) is a light-emitting diode (LED) in which the emissive electroluminescent layer is a film of organic compound which emits light in response to an electric current. This layer of organic semiconductor is situated between two electrodes. Generally, at least one of these electrodes is transparent. OLEDs are used to create digital displays in devices such as television screens. They are also used in computer monitors and portable systems such as mobile phones, handheld game consoles and PDAs.
There are two main families of OLED: those based on small molecules and those employing polymers. Adding mobile ions to an OLED creates a light-emitting electrochemical cell or LEC, which has a slightly different mode of operation. OLED displays can use either passive-matrix (PMOLED) or active-matrix addressing schemes. Active-matrix OLEDs (AMOLED) require a thin-film transistor backplane to switch each individual pixel on or off, but allow for higher resolution and larger display sizes.
An OLED display works without a backlight. Thus, it can display deep black levels and can be thinner and lighter than a liquid crystal display (LCD). In low ambient light conditions such as a dark room, an OLED screen can achieve a higher contrast ratio than an LCD, whether the LCD uses cold cathode fluorescent lamps or LED backlight.
Television types
While most televisions are designed for consumers in the household, there are several markets that demand variations including hospitality, healthcare, and other commercial settings.
Hospitality television
Televisions made for the hospitality industry are part of an establishment's internal television system designed to be used by its guests. Therefore, settings menus are hidden and locked by a password. Other common software features include volume limiting, a customizable power-on splash image, and channel hiding. These TVs are typically controlled by a set-back box using one of the data ports on the rear of the TV. The set-back box may offer channel lists, pay-per-view, video on demand, and casting from a smartphone or tablet.
Hospitality spaces are insecure with respect to content piracy, so many content providers require the use of Digital rights management. Hospitality TVs decrypt the industry standard Pro:Idiom when no set back box is used. While H.264 is not part of the ATSC 1.0 standard in North America, TV content in hospitality can include H.264 encoded video, so hospitality TVs include H.264 decoding. Managing dozens or hundreds of TVs can be time consuming, so hospitality TVs can be cloned by storing settings on a USB drive and restoring those settings quickly. Additionally, server-based and cloud-based management systems can monitor and configure an entire fleet of TVs.
Healthcare television
Healthcare televisions include the provisions of hospitality TVs with additional features for usability and safety. They are designed for use in a healthcare setting in which the user may have limited mobility and audio/visual impairment. A key feature is the pillow speaker connection. Pillow speakers combine nurse call functions, TV remote control and a speaker for audio. In multiple occupancy rooms where several TVs are used in close proximity, the televisions can be programmed to respond to a remote control with unique codes so that each remote only controls one TV. Smaller TVs, also called bedside infotainment systems, have a full function keypad below the screen. This allows direct interaction without the use of a pillow speaker or remote. These TVs typically have antimicrobial surfaces and can withstand daily cleaning using disinfectants. In the US, the UL safety standard for televisions, UL 62368-1, contains a special section (annex DVB) which outlines additional safety requirements for televisions used in healthcare.
Outdoor television
Outdoor television sets are designed for outdoor use and are usually found in the outdoor sections of bars, sports fields, or other community facilities. Most outdoor televisions use high-definition television technology. Their bodies are more robust. The screens are designed to remain clearly visible even in sunny outdoor lighting, and they have anti-reflective coatings to prevent glare. They are weather-resistant and often also have anti-theft brackets. Outdoor TV models can also be connected with BD players and PVRs for greater functionality.
Replacing
In the United States, the average consumer replaces their television every 6.9 years, but research suggests that due to advanced software and apps, the replacement cycle may be shortening.
Recycling and disposal
Due to recent changes in electronic waste legislation, economical and environmentally friendly television disposal has become increasingly available in the form of television recycling. Challenges with recycling television sets include proper HAZMAT disposal, landfill pollution, and illegal international trade.
Major manufacturers
Global statistics for LCD TVs in 2016.
| Technology | Broadcasting | null |
628721 | https://en.wikipedia.org/wiki/Trans-European%20Transport%20Network | Trans-European Transport Network | The Trans-European Transport Network (TEN-T) is a planned network of roads, railways, airports and water infrastructure in the European Union. The TEN-T network is part of a wider system of Trans-European Networks (TENs), including a telecommunications network (eTEN) and a proposed energy network (TEN-E or Ten-Energy). The European Commission adopted the first action plans on trans-European networks in 1990.
TEN-T envisages coordinated improvements to primary roads, railways, inland waterways, airports, seaports, inland ports and traffic management systems, providing integrated and intermodal long-distance, high-speed routes. A decision to adopt TEN-T was made by the European Parliament and Council in July 1996. The EU works to promote the networks by a combination of leadership, coordination, issuance of guidelines and funding aspects of development.
These projects are technically and financially managed by the Innovation and Networks Executive Agency (INEA), which superseded the Trans-European Transport Network Executive Agency (TEN-T EA) on 31 December 2013. The tenth and newest project, the Rhine-Danube Corridor, was announced for the 2014–2020 financial period.
History
TEN-T guidelines were initially adopted on 23 July 1996, with Decision No 1692/96/EC of the European Parliament and of the Council on Community guidelines for the development of the trans-European transport network. In May 2001, the European Parliament and the Council adopted a Decision No 1346/2001/EC, which amended the TEN-T Guidelines with respect to seaports, inland ports and intermodal terminals.
In April 2004, the European Parliament and the Council adopted Decision No 884/2004/EC, amending Decision No 1692/96/EC on Community guidelines for the development of the trans-European transport network. The April 2004 revision was a more fundamental change to TEN-T policies, intended to accommodate EU enlargement and consequent changes in traffic flows.
The evolution of the TEN-T was facilitated by a proposal in 1994 which included a series of priority projects.
In December 2013, with Regulations (EU) 1315/2013 (TEN-T Guidelines) and (EU) 1316/2013 (Connecting Europe Facility 1), the TEN-T network was defined on three levels: the Comprehensive network, the Core network, and, within the Core network, the nine Core network corridors.
On 17 October 2013, nine Core network corridors (replacing the 30 TEN-T Priority Projects) were announced. These were:
the Baltic–Adriatic Corridor (Poland–Czechia/Slovakia–Austria–Italy);
the North Sea–Baltic Corridor (Finland–Estonia–Latvia–Lithuania–Poland–Germany–Netherlands/Belgium);
the Mediterranean Corridor (Spain–France–Northern Italy–Slovenia–Croatia–Hungary);
the Orient/East–Med Corridor (Germany–Czechia–Austria/Slovakia–Hungary–Romania–Bulgaria–Greece–Cyprus);
the Scandinavian–Mediterranean Corridor (Finland–Sweden–Denmark–Germany–Austria–Italy);
the Rhine–Alpine Corridor (Netherlands/Belgium–Germany–Switzerland–Italy);
the Atlantic Corridor (formerly known as Lisboa–Strasbourg Corridor) (Portugal–Spain–France);
the North Sea–Mediterranean Corridor (Ireland–UK–Netherlands–Belgium–Luxembourg–Marseille (France));
the Rhine–Danube Corridor (Germany–Austria–Slovakia–Hungary–Romania, with a branch Germany–Czechia–Slovakia).
In July 2021, with Regulation (EU) 2021/1153 (Connecting Europe Facility 2), the nine Core network corridors were extended, in some cases significantly (e.g. the Atlantic, North Sea–Baltic and Scandinavian–Mediterranean corridors), while, because of Brexit, the North Sea–Mediterranean corridor was changed to Ireland–Belgium/Netherlands and Ireland–France.
In December 2021, the European Commission's proposal for a new Regulation on TEN-T guidelines (COM 2021/821) proposed, inter alia, the future dissolution of selected Core network corridors (Orient/East–Med, North Sea–Mediterranean), their integration into other corridors (Rhine–Danube, North Sea–Alpine) and the creation of newly aligned corridors (Baltic–Black–Aegean Seas, Western Balkans).
Connections to neighbours
The development of TEN-T in the Balkans (Albania, Bosnia and Herzegovina, Kosovo, Montenegro, North Macedonia, and Serbia) was assigned in 2017 to the Southeast Europe Transport Community.
In 2017, it was decided that the Trans-European Transport Networks would be extended further into Eastern Europe and would include Eastern Partnership member states. The furthest eastern expansion of the Trans-European Transport Network reached Armenia in February 2019.
As per the 2021 proposal, connections shall also lead to the UK, Switzerland, the South Mediterranean, Turkey, and the Western Balkans.
In July 2022, it was agreed to link four European Transport Corridors with Moldova and Ukraine and to drop Russia and Belarus from the TEN-T map. An August 2023 report recommended TEN-T be extended to Moldova and Ukraine with a standard gauge (1,435 mm) rail line, to assist in their integration with EU rail networks, with some lines running alongside the 1,520 mm lines to avoid disruption during construction.
Core Network Corridors
This is the complete list of the TEN-T Core Network Corridors.
Funding timeline
Financial support for the implementation of TEN-T guidelines stems from the following rules:
Regulation (EC) No 2236/95 of 18 September 1995 contains general rules for the granting of Community financial aid in the field of trans-European networks.
Regulation (EC) No 1655/1999 of the European Parliament and of the Council of 19 July 1999 amends Regulation (EC) No 2236/95.
Regulation (EC) No 807/2004 of the European Parliament and of the Council of 21 April 2004 amends Council Regulation (EC) No 2236/95.
Regulation (EC) No 680/2007 of the European Parliament and of the Council of 20 June 2007 supplies general rules for granting Community financial aid for trans-European transport and energy networks.
In general, TEN-T projects are mostly funded by national or state governments. Other funding sources include: European Community funds (ERDF, Cohesion Funds, TEN-T budget), loans from international financial institutions (e.g. the European Investment Bank), and private funding.
List of transport networks
Each transportation mode has a network. The networks are:
Trans-European road network
Trans-European Rail network, which includes the Trans-European high-speed rail network as well as the Trans-European conventional rail network
Trans-European Inland Waterway network and inland ports
Trans-European Seaport network
Motorways of the Sea (added by Decision No 884/2004/EC)
Trans-European Airport network
Trans-European Combined Transport network
Trans-European Shipping Management and Information network
Trans-European Air Traffic Management network, which includes the Single European Sky and SESAR concepts
Trans-European Positioning and Navigation network, which includes the Galileo satellite navigation system
Previous priorities
At its meeting in Essen in 1994, the European Council endorsed a list of 14 TEN-T ‘specific’ projects, drawn up by a group chaired by then Commission Vice-President Henning Christophersen. Following the 2003 recommendations from the Van Miert TEN-T high-level group, the Commission compiled a list of 30 priority projects to be launched before 2010.
The 30 axes and priority projects were:
As of 2019, several of them are finished, e.g. nos. 2, 5 and 11, others are ongoing, e.g. nos. 12 and 17, and some have not been started, e.g. no. 27.
Related networks
In addition to the various TENs, there are ten Pan-European corridors, which are paths between major urban centres and ports, mainly in Eastern Europe, that have been identified as requiring major investment.
The international E-road network is a naming system for major roads in Europe managed by the United Nations Economic Commission for Europe. It numbers roads with a designation beginning with "E" (such as "E1").
| Technology | Ground transportation networks | null |
628811 | https://en.wikipedia.org/wiki/Chlorella | Chlorella | Chlorella is a genus of about thirteen species of single-celled or colonial green algae of the division Chlorophyta. The cells are spherical in shape, about 2 to 10 μm in diameter, and are without flagella. Their chloroplasts contain the green photosynthetic pigments chlorophyll-a and -b. In ideal conditions cells of Chlorella multiply rapidly, requiring only carbon dioxide, water, sunlight, and a small amount of minerals to reproduce.
The name Chlorella is taken from the Greek χλώρος, chlōros/ khlōros, meaning green, and the Latin diminutive suffix -ella, meaning small. German biochemist and cell physiologist Otto Heinrich Warburg, awarded with the Nobel Prize in Physiology or Medicine in 1931 for his research on cell respiration, also studied photosynthesis in Chlorella. In 1961, Melvin Calvin of the University of California received the Nobel Prize in Chemistry for his research on the pathways of carbon dioxide assimilation in plants using Chlorella.
Chlorella has been considered as a source of food and energy because its photosynthetic efficiency can reach 8%, which exceeds that of other highly efficient crops such as sugar cane.
Description
Chlorella consists of small, rounded cells which are spherical, subspherical, or ellipsoidal, and may be surrounded by a layer of mucilage. The cells contain a single chloroplast which is parietal (lying against the inner side of the cell membrane), with a single pyrenoid that is surrounded by grains of starch.
Reproduction occurs by the formation of autospores; zoospores or gametes are not known to be produced in Chlorella. Autospores are released by a tear in the cell wall. The daughter cell may remain attached to the parent cell wall, thereby forming colonies of cells.
Taxonomy
Chlorella was first described by Martinus Beijerinck in 1890. Since then, over a hundred taxa have been described within the genus. However, biochemical and genomic data have revealed that many of these species are not closely related to each other, with some even being placed in a separate class, Chlorophyceae. In other words, the "green ball" form of Chlorella appears to be a product of convergent evolution and not a natural taxon. Identifying Chlorella-like algae based on morphological features alone is generally not possible.
Some strains of "Chlorella" used for food are incorrectly identified, or correspond to genera that were classified out of true Chlorella. For example, Heterochlorella luteoviridis is typically known as Chlorella luteoviridis which is no longer considered a valid name.
As a food source
When first harvested, Chlorella was suggested as an inexpensive protein supplement to the human diet. According to the American Cancer Society, "available scientific studies do not support its effectiveness for preventing or treating cancer or any other disease in humans".
Under certain growing conditions, Chlorella yields oils that are high in polyunsaturated fats—Chlorella minutissima has yielded eicosapentaenoic acid at 39.9% of total lipids.
History
Following global fears of an uncontrollable human population boom during the late 1940s and the early 1950s, Chlorella was seen as a new and promising primary food source and as a possible solution to the then-current world hunger crisis. Many people during this time thought hunger would be an overwhelming problem and saw Chlorella as a way to end this crisis by providing large amounts of high-quality food for a relatively low cost.
Many institutions began to research the algae, including the Carnegie Institution, the Rockefeller Foundation, the NIH, UC Berkeley, the Atomic Energy Commission, and Stanford University. Following World War II, many Europeans were starving, and many Malthusians attributed this not only to the war, but also to the inability of the world to produce enough food to support the increasing population. According to a 1946 FAO report, the world would need to produce 25 to 35% more food in 1960 than in 1939 to keep up with the increasing population, while health improvements would require a 90 to 100% increase. Because meat was costly and energy-intensive to produce, protein shortages were also an issue. Increasing cultivated area alone would go only so far in providing adequate nutrition to the population. The USDA calculated that, to feed the U.S. population by 1975, it would have to add 200 million acres (800,000 km2) of land, but only 45 million were available. One way to combat national food shortages was to increase the land available for farmers, yet the American frontier and its farmland had long since disappeared, given over to expansion and urban life. Hopes rested solely on new agricultural techniques and technologies. Because of these circumstances, an alternative solution was needed.
To cope with the upcoming postwar population boom in the United States and elsewhere, researchers decided to tap into the unexploited sea resources. Initial testing by the Stanford Research Institute showed Chlorella (when growing in warm, sunny, shallow conditions) could convert 20% of solar energy into a plant that, when dried, contains 50% protein. In addition, Chlorella contains fat and vitamins. The plant's photosynthetic efficiency allows it to yield more protein per unit area than any plant—one scientist predicted 10,000 tons of protein a year could be produced with just 20 workers staffing a 1000-acre (4-km2) Chlorella farm. The pilot research performed at Stanford and elsewhere led to immense press from journalists and newspapers, yet did not lead to large-scale algae production. Chlorella seemed like a viable option because of the technological advances in agriculture at the time and the widespread acclaim it got from experts and scientists who studied it. Algae researchers had even hoped to add a neutralized Chlorella powder to conventional food products, as a way to fortify them with vitamins and minerals.
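To give a sense of the scale of that prediction, the following back-of-the-envelope Python sketch (my own arithmetic on the figures quoted above, not a calculation from the cited researchers) divides the forecast annual protein output by the farm's area and workforce.

```python
# Back-of-the-envelope check of the 1950s prediction quoted above:
# 10,000 tons of protein per year from a 1,000-acre farm staffed by 20 workers.
tons_protein_per_year = 10_000
acres = 1_000
workers = 20

print(tons_protein_per_year / acres)    # 10.0 tons of protein per acre per year
print(tons_protein_per_year / workers)  # 500.0 tons of protein per worker per year
```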
When the preliminary laboratory results were published, the scientific community at first backed the possibilities of Chlorella. Science News Letter praised the optimistic results in an article entitled "Algae to Feed the Starving". John Burlew, the editor of the Carnegie Institution of Washington book Algal Culture-from Laboratory to Pilot Plant, stated, "the algae culture may fill a very real need", which Science News Letter turned into "future populations of the world will be kept from starving by the production of improved or educated algae related to the green scum on ponds". The cover of the magazine also featured Arthur D. Little's Cambridge laboratory, which was a supposed future food factory. A few years later, the magazine published an article entitled "Tomorrow's Dinner", which stated, "There is no doubt in the mind of scientists that the farms of the future will actually be factories." Science Digest also reported, "common pond scum would soon become the world's most important agricultural crop." However, in the decades since those claims were made, algae have not been cultivated on that large of a scale.
Current status
Since the growing world food problem of the 1940s was solved by better crop efficiency and other advances in traditional agriculture, Chlorella has not seen the kind of public and scientific interest that it had in the 1940s. Chlorella has only a niche market for companies promoting it as a dietary supplement.
Production difficulties
The experimental research was carried out in laboratories, rather than in the field, and scientists discovered that Chlorella would be much more difficult to produce than previously thought. To be practical, the algae grown would have to be placed either in artificial light or in shade to produce at its maximum photosynthetic efficiency. In addition, for the Chlorella to be as productive as the world would require, it would have to be grown in carbonated water, which would have added millions to the production cost. A sophisticated process, at additional cost, was required to harvest the crop, and for Chlorella to be a viable food source its cell walls would have to be pulverized. The plant could reach its nutritional potential only in highly modified artificial situations. Another problem was developing sufficiently palatable food products from Chlorella.
Although the production of Chlorella looked promising and involved creative technology, it has not to date been cultivated on the scale some had predicted. It has not been sold on the scale of Spirulina, soybean products, or whole grains. Costs have remained high, and Chlorella has for the most part been sold as a health food, for cosmetics, or as animal feed. After a decade of experimentation, studies showed that following exposure to sunlight, Chlorella captured just 2.5% of the solar energy, not much better than conventional crops. Chlorella, too, was found by scientists in the 1960s to be impossible for humans and other animals to digest in its natural state due to the tough cell walls encapsulating the nutrients, which presented further problems for its use in American food production.
Use in carbon dioxide reduction and oxygen production
In 1965, the Russian CELSS experiment BIOS-3 determined that 8 m2 of exposed Chlorella could remove carbon dioxide and replace oxygen within the sealed environment for a single human. The algae were grown in vats underneath artificial light.
Dietary supplement
Chlorella is consumed as a dietary supplement. Some manufacturers of Chlorella products have falsely asserted that it has health benefits, including an ability to treat cancer, for which the American Cancer Society stated "available scientific studies do not support its effectiveness for preventing or treating cancer or any other disease in humans". The United States Food and Drug Administration has issued warning letters to supplement companies for falsely advertising health benefits of consuming chlorella products, such as one company in October 2020.
There is some support from animal studies of chlorella's ability to detoxify insecticides. Chlorella protothecoides accelerated the detoxification of rats poisoned with chlordecone, a persistent insecticide, decreasing the half-life of the toxin from 40 to 19 days. The ingested algae passed through the gastrointestinal tract unharmed, interrupted the enteric recirculation of the persistent insecticide, and subsequently eliminated the bound chlordecone with the feces.
Health concerns
A 2002 study showed that Chlorella cell walls contain lipopolysaccharides, endotoxins found in Gram-negative bacteria that affect the immune system and may cause inflammation. However, more recent studies have found that the lipopolysaccharides in organisms other than Gram-negative bacteria, for example in cyanobacteria, are considerably different from the lipopolysaccharides in Gram-negative bacteria.
| Biology and health sciences | Green algae | Plants |
629026 | https://en.wikipedia.org/wiki/Meridian%20%28astronomy%29 | Meridian (astronomy) | In astronomy, the meridian is the great circle passing through the celestial poles, as well as the zenith and nadir of an observer's location. Consequently, it contains also the north and south points on the horizon, and it is perpendicular to the celestial equator and horizon. Meridians, celestial and geographical, are determined by the pencil of planes passing through the Earth's rotation axis. For a location not on this axis, there is a unique meridian plane in this axial-pencil through that location. The intersection of this plane with Earth's surface defines two geographical meridians (either one east and one west of the prime meridian, or else the prime meridian itself and its anti-meridian), and the intersection of the plane with the celestial sphere is the celestial meridian for that location and time.
There are several ways to divide the meridian into semicircles. In one approach, the observer's upper meridian extends from a celestial pole and passes through the zenith to contact the opposite pole, while the lower meridian passes through the nadir to contact both poles at the opposite ends. In another approach known as the horizontal coordinate system, the meridian is divided into the local meridian, the semicircle that contains the observer's zenith and the north and south points of their horizon, and the opposite semicircle, which contains the nadir and the north and south points of their horizon.
On any given (sidereal) day/night, a celestial object will appear to drift across, or transit, the observer's upper meridian as Earth rotates, since the meridian is fixed to the local horizon. At culmination, the object contacts the upper meridian and reaches its highest point in the sky. An object's right ascension and the local sidereal time can be used to determine the time of its culmination (see hour angle).
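That relationship can be written down directly: the hour angle is the local sidereal time minus the object's right ascension, and upper culmination occurs when the hour angle is zero. The short Python sketch below is an illustrative aid (the example right ascension and sidereal time are assumed values, not from the article), with all quantities expressed in hours.

```python
# Illustrative sketch: hour angle H = LST - RA (all in hours);
# an object transits the upper meridian (culminates) when H = 0.

def hour_angle(lst_hours: float, ra_hours: float) -> float:
    """Hour angle in hours, wrapped into the range [-12, +12)."""
    return (lst_hours - ra_hours + 12.0) % 24.0 - 12.0

def hours_until_culmination(lst_hours: float, ra_hours: float) -> float:
    """Sidereal hours until the object next crosses the upper meridian."""
    return (ra_hours - lst_hours) % 24.0

if __name__ == "__main__":
    lst, ra = 3.0, 5.583  # assumed local sidereal time and right ascension (hours)
    print(hour_angle(lst, ra))               # about -2.58 h: object is east of the meridian
    print(hours_until_culmination(lst, ra))  # about 2.58 sidereal hours until culmination
```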
The term meridian comes from the Latin meridies, which means both "midday" and "south", as the celestial equator appears to tilt southward from the Northern Hemisphere.
| Physical sciences | Celestial sphere: General | Astronomy |
629684 | https://en.wikipedia.org/wiki/Immunodeficiency | Immunodeficiency | Immunodeficiency, also known as immunocompromisation, is a state in which the immune system's ability to fight infectious diseases and cancer is compromised or entirely absent. Most cases are acquired ("secondary") due to extrinsic factors that affect the patient's immune system. Examples of these extrinsic factors include HIV infection and environmental factors, such as nutrition. Immunocompromisation may also be due to genetic diseases/flaws such as SCID.
In clinical settings, immunosuppression by some drugs, such as steroids, can either be an adverse effect or the intended purpose of the treatment. Examples of such use is in organ transplant surgery as an anti-rejection measure and in patients with an overactive immune system, as in autoimmune diseases. Some people are born with intrinsic defects in their immune system, or primary immunodeficiency.
A person who has an immunodeficiency of any kind is said to be immunocompromised. An immunocompromised individual may particularly be vulnerable to opportunistic infections, in addition to normal infections that could affect anyone. It also decreases cancer immunosurveillance, in which the immune system scans the body's cells and kills neoplastic ones. They are also more susceptible to infectious diseases owing to the reduced protection afforded by vaccines.
Types
By affected component
Humoral immune deficiency (including B cell deficiency or dysfunction), with signs or symptoms depending on the cause, but generally include signs of hypogammaglobulinemia (decrease of one or more types of antibodies) with presentations including repeated mild respiratory infections, and/or agammaglobulinemia (lack of all or most antibody production) which results in frequent severe infections and is often fatal.
T cell deficiency, often causes secondary disorders such as acquired immune deficiency syndrome (AIDS).
Granulocyte deficiency, including decreased numbers of granulocytes (called granulocytopenia or, if they are absent, agranulocytosis), such as of neutrophil granulocytes (termed neutropenia). Granulocyte deficiencies also include decreased function of individual granulocytes, such as in chronic granulomatous disease.
Asplenia, where there is no function of the spleen
Complement deficiency is where the function of the complement system is deficient
In reality, immunodeficiency often affects multiple components, with notable examples including severe combined immunodeficiency (which is primary) and acquired immune deficiency syndrome (which is secondary).
Primary or secondary
The distinction between primary versus secondary immunodeficiencies is based on, respectively, whether the cause originates in the immune system itself or is, in turn, due to insufficiency of a supporting component of it or an external decreasing factor of it.
Primary immunodeficiency
A number of rare diseases feature a heightened susceptibility to infections from childhood onward. Primary immunodeficiencies are also known as congenital immunodeficiencies. Many of these disorders are hereditary and are autosomal recessive or X-linked. There are over 95 recognised primary immunodeficiency syndromes; they are generally grouped by the part of the immune system that is malfunctioning, such as lymphocytes or granulocytes.
The treatment of primary immunodeficiencies depends on the nature of the defect, and may involve antibody infusions, long-term antibiotics and (in some cases) stem cell transplantation. Lacking and/or impaired antibody function is characteristic of illnesses such as X-linked agammaglobulinemia and common variable immune deficiency.
Secondary immunodeficiencies
Secondary immunodeficiencies, also known as acquired immunodeficiencies, can result from various immunosuppressive agents, for example, malnutrition, aging, particular medications (e.g., chemotherapy, disease-modifying antirheumatic drugs, immunosuppressive drugs after organ transplants, glucocorticoids) and environmental toxins like mercury and other heavy metals, pesticides and petrochemicals like styrene, dichlorobenzene, xylene, and ethylphenol. For medications, the term immunosuppression generally refers to both beneficial and potential adverse effects of decreasing the function of the immune system, while the term immunodeficiency generally refers solely to the adverse effect of increased risk for infection.
Many specific diseases directly or indirectly cause immunosuppression. This includes many types of cancer, particularly those of the bone marrow and blood cells (leukemia, lymphoma, multiple myeloma), and certain chronic infections. Immunodeficiency is also the hallmark of acquired immunodeficiency syndrome (AIDS), caused by the human immunodeficiency virus (HIV). HIV directly infects a small number of T helper cells, and also impairs other immune system responses indirectly.
Various hormonal and metabolic disorders can also result in immune deficiency including anemia, hypothyroidism and hyperglycemia.
Smoking, alcoholism and drug abuse also depress immune response.
Heavy schedules of training and competition in athletes increase their risk of immune deficiencies.
Causes
The cause of immunodeficiency varies depending on the nature of the disorder. The cause can be either genetic or acquired through malnutrition and poor sanitary conditions. The exact genes involved are known for only some genetic causes.
Immunodeficiency and autoimmunity
There are a large number of immunodeficiency syndromes that present clinical and laboratory characteristics of autoimmunity. The decreased ability of the immune system to clear infections in these patients may be responsible for causing autoimmunity through perpetual immune system activation.
One example is common variable immunodeficiency (CVID) where multiple autoimmune diseases are seen, e.g., inflammatory bowel disease, autoimmune thrombocytopenia, and autoimmune thyroid disease.
Familial hemophagocytic lymphohistiocytosis, an autosomal recessive primary immunodeficiency, is another example. Low blood levels of red blood cells, white blood cells, and platelets, rashes, lymph node enlargement, and enlargement of the liver and spleen are commonly seen in these patients. Presence of multiple uncleared viral infections due to lack of perforin are thought to be responsible.
In addition to chronic and/or recurrent infections many autoimmune diseases including arthritis, autoimmune hemolytic anemia, scleroderma and type 1 diabetes are also seen in X-linked agammaglobulinemia (XLA).
Recurrent bacterial and fungal infections and chronic inflammation of the gut and lungs are seen in chronic granulomatous disease (CGD) as well. CGD is caused by a decreased production of nicotinamide adenine dinucleotide phosphate (NADPH) oxidase by neutrophils.
Hypomorphic RAG mutations are seen in patients with midline granulomatous disease; an autoimmune disorder that is commonly seen in patients with granulomatosis with polyangiitis and NK/T cell lymphomas.
Wiskott–Aldrich syndrome (WAS) patients also present with eczema, autoimmune manifestations, recurrent bacterial infections and lymphoma.
Autoimmunity and infections also coexist in autoimmune polyendocrinopathy-candidiasis-ectodermal dystrophy (APECED): organ-specific autoimmune manifestations (e.g., hypoparathyroidism and adrenocortical failure) and chronic mucocutaneous candidiasis.
Finally, IgA deficiency is also sometimes associated with the development of autoimmune and atopic phenomena.
Antibody vulnerability period in children
The period following birth is critical for the development of a child's immune system. Initially, a newborn relies heavily on passive immunity transferred from the mother, primarily through the placenta and breastfeeding.
As breastfeeding frequency declines, immune protection gradually wanes, making the child more vulnerable and increasingly reliant on their developing immune system. This transitional phase, known as the "antibody vulnerability period", lasts until approximately three to four years of age, during which the child's immune system matures and becomes fully functional.
To combat pathogens, it is important for babies to develop their own specific antibodies that recognize the pathogens' antigens. These antibodies are known as immunoglobulins; immunoglobulin G (IgG) is one of them.
Babies are unable to make their own IgG antibodies at birth and rely on maternal transfer of IgG via placenta during the third trimester. Other types of immunoglobulins (IgA, IgM, IgE and IgD) do not cross the placenta. It is believed that IgG is important in protecting babies against infections.
Naturally bioactive immunoglobulin G is found in breast milk, where it plays a significant role during this vulnerable period of early life. The Y-shaped structure of immunoglobulin G allows it to effectively identify and combat pathogens, providing antibody protection to the child.
Research indicates that maintaining adequate levels of IgG during early childhood may help mitigate the risks associated with this immune vulnerability. Such support can enhance the infant's ability to fend off infections and other health threats during the critical years when the immune system is still developing. The importance of this period underscores the need for targeted nutritional interventions to support overall immune health in young children.
Classes of immunoglobulins (Igs)
The immune system produces several classes of immunoglobulins (Ig), such as IgA, IgD, IgE, IgG, and IgM. Each class helps protect the body from infection in a different way (see below).
Immunoglobulin Subclasses and Their Properties
Diagnosis
Medical History and Physical Examination:
A physician will inquire about past illnesses and family history of immune disorders to identify inherited conditions. A detailed physical examination helps recognize symptoms indicative of an immune disorder.
Blood tests: these are instrumental in diagnosing immunodeficiency, as they measure:
Infection-fighting proteins (immunoglobulins): essential for robust immune defense, these protein levels are measured to evaluate immune function.
Blood cell counts: deviations in specific blood cells can point to an immune system anomaly.
Immune system cells: these assessments measure the levels of various immune cells.
Genetic testing involves collecting samples from patients for molecular analysis when there is a suspicion of inborn errors of immunity. Most primary immunodeficiency disorders (PIDs) are inherited as single-gene defects. Key genes associated with immunodeficiency diseases include CD40L, CD40, RAG1, RAG2, IL2RG, and ADA. Here is a summary of some methods used to identify genetic anomalies:
Sanger Sequencing of Single Genes:
Sanger sequencing is widely recognized as the benchmark method for accurately identifying individual nucleotide changes, as well as small-scale insertions or deletions in DNA. It is particularly valuable for confirming known familial genetic variations, for validating findings from next-generation sequencing technologies, and in specific scenarios that require sequencing of single genes. An example is its use to confirm mutations in the Bruton tyrosine kinase (BTK) gene, which are linked to X-linked agammaglobulinemia (XLA)
• Targeted Gene Sequencing Panels (tNGS): This technology is ideal for examining genes in specific pathways or for follow-up (targeted resequencing) of findings from whole genome sequencing (WGS). It is rapid and more cost-effective than WGS, and it allows for deeper sequencing.
• Whole Exome Sequencing (WES): a commonly used method which captures the majority of the coding regions of the genome for sequencing, as these regions contain the majority of disease-causing mutations. It is useful for identifying mutations in specific genes.
• Trio or Whole-Family Analyses: In some cases, analyzing the DNA of the patient, parents, and siblings (trio analysis) or the entire family (whole-family analysis) can reveal inheritance patterns and identify causative mutations.
Treatment
Available treatment falls into two modalities: treating infections and boosting the immune system.
Prevention of Pneumocystis pneumonia using trimethoprim/sulfamethoxazole is useful in those who are immunocompromised. In the early 1950s, immunoglobulin (Ig) was used by doctors to treat patients with primary immunodeficiency through intramuscular injection. Ig replacement therapy consists of infusions that can be administered either subcutaneously or intravenously, resulting in higher Ig levels for about three to four weeks, although this varies with each patient.
Prognosis
Prognosis depends greatly on the nature and severity of the condition. Some deficiencies cause early mortality (before age one), others with or even without treatment are lifelong conditions that cause little mortality or morbidity. Newer stem cell transplant technologies may lead to gene based treatments of debilitating and fatal genetic immune deficiencies. Prognosis of acquired immune deficiencies depends on avoiding or treating the causative agent or condition (like AIDS).
| Biology and health sciences | Concepts | Health |
630020 | https://en.wikipedia.org/wiki/Dung%20beetle | Dung beetle | Dung beetles are beetles that feed on feces. Some species of dung beetles can bury dung 250 times their own mass in one night.
Many dung beetles, known as rollers, roll dung into round balls, which are used as a food source or breeding chambers. Other dung beetles like Euoniticellus intermedius, known as tunnelers, bury the dung wherever they find it. A third group, the dwellers, neither roll nor burrow: they simply live in dung. They are often attracted by the feces collected by burrowing owls. There are dung beetle species of various colors and sizes, and some functional traits such as body mass (or biomass) and leg length can have high levels of variability.
All the species belong to the superfamily Scarabaeoidea, most of them to the subfamilies Scarabaeinae and Aphodiinae of the family Scarabaeidae (scarab beetles). As most species of Scarabaeinae feed exclusively on feces, that subfamily is often dubbed true dung beetles. There are dung-feeding beetles which belong to other families, such as the Geotrupidae (the earth-boring dung beetle). The Scarabaeinae alone comprises more than 5,000 species.
The nocturnal African dung beetle Scarabaeus satyrus is one of the few known invertebrate animals that navigate and orient themselves using the Milky Way. The daily dung of one elephant can support 2,000,000 beetles.
Taxonomy
Dung beetles are not a single taxonomic group; dung feeding is found in a number of families of beetles, so the behaviour cannot be assumed to have evolved only once.
Coleoptera (order), beetles
Scarabaeoidea (superfamily), scarabs (most families in the group do not use dung)
Geotrupidae (family), "earth-boring dung beetles"
Scarabaeidae (family), "scarab beetles" (not all species use dung)
Scarabaeinae (subfamily), "true dung beetles"
Aphodiinae (subfamily), "small dung beetles" (not all species use dung)
Ecology and behavior
Dung beetles live in many habitats, including desert, grasslands and savannas, farmlands, and native and planted forests. They are highly influenced by the environmental context, and do not prefer extremely cold or dry weather. They are found on all continents except Antarctica. They eat the dung of herbivores and omnivores, and prefer that produced by the latter. Many of them also feed on mushrooms and decaying leaves and fruits. The Neotropical Deltochilum valgum, D. kolbei and D. viridescens are carnivores with a strong preference for preying upon millipedes. Two other species from Brazil, Canthon dives and Canthon virens, prey on queens and other winged forms of leafcutter ants. One species from the Iberian Peninsula, Thorectes lusitanicus, feeds on acorns. Dung beetles do not necessarily have to eat or drink anything else, because the dung provides all the necessary nutrients.
Most dung beetles search for dung using their sensitive sense of smell. Some smaller species simply attach themselves to the dung-providers to wait for the dung. After capturing the dung, a dung beetle rolls it, following a straight line despite all obstacles. Sometimes, dung beetles try to steal the dung ball from another beetle, so the dung beetles have to move rapidly away from a dung pile once they have rolled their ball to prevent it from being stolen. Dung beetles can roll up to 10 times their weight. Male Onthophagus taurus beetles can pull 1,141 times their own body weight: the equivalent of an average person pulling six double-decker buses full of people.
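As a rough sanity check of that bus comparison, the sketch below scales the 1,141-times pulling factor to an assumed 75 kg adult and compares the result with an assumed 13-tonne loaded double-decker bus; both masses are round-number assumptions of mine, not figures from the article.

```python
# Rough scaling check of the "six double-decker buses" comparison (assumed masses).
pull_factor = 1_141   # times body weight (Onthophagus taurus, as quoted above)
person_kg = 75        # assumed average adult mass
bus_kg = 13_000       # assumed mass of a loaded double-decker bus

scaled_pull_kg = pull_factor * person_kg
print(scaled_pull_kg)            # 85575 kg, i.e. roughly 86 tonnes
print(scaled_pull_kg / bus_kg)   # about 6.6 buses, of the same order as "six buses"
```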
A species of dung beetle (the African Scarabaeus zambesianus) navigates by polarization patterns in moonlight, the first animal known to do so. Dung beetles can also navigate when only the Milky Way or clusters of bright stars are visible, making them the only insects known to orient themselves by the Milky Way. Research using a 1 kg bolus of elephant dung found that a larger number of beetles exploit it during the night (13,700) than during the day (3,330). The eyes of dung beetles are superposition compound eyes, typical of many scarabaeid beetles.
Cambefort and Hanski (1991) classified dung beetles into three functional types based on their feeding and nesting strategies: rollers, tunnelers and dwellers. The "rollers" roll and bury a dung ball either for food storage or for making a brooding ball. In the latter case, two beetles, one male and one female, stay around the dung ball during the rolling process. Usually it is the male that rolls the ball, while the female hitch-hikes or simply follows behind. In some cases, the male and the female roll together. When a spot with soft soil is found, they stop and bury the ball, then mate underground. After the mating, one or both of them prepares the brooding ball. When the ball is finished, the female lays eggs inside it, a form of mass provisioning.
Some species do not leave after this stage, but remain to safeguard their offspring. The dung beetle goes through a complete metamorphosis. The larvae live in brood balls made with dung prepared by their parents. During the larval stage, the beetle feeds on the dung surrounding it.
The behavior of the beetles was poorly understood until the studies of Jean Henri Fabre in the late 19th century. For example, Fabre corrected the myth that a dung beetle would seek aid from other dung beetles when confronted by obstacles. By observation and experiment, he found the seeming helpers were in fact awaiting an opportunity to steal the roller's food source.
They are widely used in ecological research as a good bioindicator group to examine the impacts of climate disturbances, such as extreme droughts and associated fires, and human activities on tropical biodiversity and ecosystem functioning, such as seed dispersal, soil bioturbation and nutrient cycling.
Benefits and uses
Dung beetles play a role in agriculture and tropical forests. By burying and consuming dung, they improve nutrient recycling and soil structure. Dung beetles have been further shown to improve soil conditions and plant growth on rehabilitated coal mines in South Africa. They are also important for the dispersal of seeds present in animals' dung, influencing seed burial and seedling recruitment in tropical forests. They can protect livestock, such as cattle, by removing the dung which, if left, could provide habitat for pests such as flies. Therefore, many countries have introduced the creatures for the benefit of animal husbandry. The American Institute of Biological Sciences reports that dung beetles save the United States cattle industry an estimated US$380 million annually through burying above-ground livestock feces.
In Australia, the Commonwealth Scientific and Industrial Research Organisation (CSIRO) commissioned the Australian Dung Beetle Project (1965–1985) which, led by George Bornemissza, sought to introduce species of dung beetles from South Africa and Europe. The successful introduction of 23 species was made, most notably Digitonthophagus gazella and Euoniticellus intermedius, which has resulted in improvement of the quality and fertility of Australian cattle pastures, along with a reduction in the population of pestilent Australian bush flies by around 90%. In 1995 it was reported that dung beetles were being trialled in the Sydney beach suburb of Curl Curl to deal with dog droppings.
An application made by Landcare Research to import up to 11 species of dung beetle into New Zealand was approved in 2011. As well as improving pasture soils, the Dung Beetle Release Strategy Group said that it would result in a reduction in emissions of nitrous oxide (a greenhouse gas) from agriculture. There was, however, strong opposition from some at the University of Auckland, and a few others, based on the risks of the dung beetles acting as vectors of disease. There were public health researchers at the University of Auckland who agreed with the Environmental Protection Authority's risk assessment. Several Landcare programmes in Australia involved schoolchildren collecting dung beetles.
The African dung beetle (D. gazella) was introduced at several locations in North and South America and has been spreading to other regions through natural dispersal and accidental transportation; it is now probably naturalized in most countries between Mexico and Argentina. The exotic species might be useful for controlling diseases of livestock in commercial areas, but might also displace native species in modified landscapes; however, the data are not conclusive about its effect on native species in natural environments, and further monitoring is required.
The Mediterranean dung beetle (Bubas bison) has been used in conjunction with biochar stock fodder to reduce emissions of nitrous oxide and carbon dioxide, which are both greenhouse gases. The beetles work the biochar-enriched dung into the soil without the use of machines.
In 1965, scientists in Canberra found that dung beetles (scarabaeids), specifically Onthophagus australis Guérin-Méneville, improve plant yields by working dung into the soil. Japanese millet was grown and data on nutrient uptake were collected. The plants were placed in pots lacking nitrogen, phosphorus, and sulfur; cow dung was then added to treatment groups with or without O. australis, and some treatment groups also had two of the three nutrients supplemented. Comparisons of the treatment and control groups showed that top growth and root mass increased significantly when the dung was well mixed into the soil. The dung had little effect on its own, but in combination with the dung beetles its nutritional value to the plants increased greatly. This suggests that dung beetle activity has many positive implications for the environment, including a beneficial role in supporting plant life.
In culture
Some dung beetles are used as food in South East Asia and a variety of dung beetle species have been used therapeutically (and are still being used in traditionally living societies) in potions and folk medicines to treat a number of illnesses and disorders.
In Isan, Northeastern Thailand, the local people eat many different kinds of insects, including the dung beetle. There is an Isan song กุดจี่หายไปใหน "Where Did the Dung Beetle Go", which relates the replacement of water buffalo with the "metal" buffalo, which does not provide the dung needed for the dung beetle and has led to the increasing rarity of the dung beetle in the agricultural region.
Ancient Egypt
Several species of the dung beetle, most notably the species Scarabaeus sacer (often referred to as the sacred scarab), enjoyed a sacred status among the ancient Egyptians.
Egyptian hieroglyphic script uses the image of the beetle to represent a triliteral phonetic that Egyptologists transliterate as xpr or ḫpr and translate as "to come into being", "to become" or "to transform". The derivative term xprw or ḫpr(w) is variously translated as "form", "transformation", "happening", "mode of being" or "what has come into being", depending on the context. It may have existential, fictional, or ontologic significance.
The scarab was linked to Khepri ("he who has come into being"), the god of the rising sun. The ancients believed that the dung beetle was only male-sexed, and reproduced by depositing semen into a dung ball. The supposed self-creation of the beetle resembles that of Khepri, who creates himself out of nothing. Moreover, the dung ball rolled by a dung beetle resembles the sun. Plutarch wrote:
The ancient Egyptians believed that Khepri renewed the sun every day before rolling it above the horizon, then carried it through the other world after sunset, only to renew it, again, the next day. Some New Kingdom royal tombs exhibit a threefold image of the sun god, with the beetle as symbol of the morning sun. The astronomical ceiling in the tomb of Ramses VI portrays the nightly "death" and "rebirth" of the sun as being swallowed by Nut, goddess of the sky, and re-emerging from her womb as Khepri.
The image of the scarab, conveying ideas of transformation, renewal, and resurrection, is ubiquitous in ancient Egyptian religious and funerary art.
Excavations of ancient Egyptian sites have yielded images of the scarab in bone, ivory, stone, Egyptian faience, and precious metals, dating from the Sixth Dynasty and up to the period of Roman rule. They are generally small, bored to allow stringing on a necklace, and the base bears a brief inscription or cartouche. Some have been used as seals. Pharaohs sometimes commissioned the manufacture of larger images with lengthy inscriptions, such as the commemorative scarab of Queen Tiye. Massive sculptures of scarabs can be seen at Luxor Temple, at the Serapeum in Alexandria (see Serapis) and elsewhere in Egypt.
The scarab was of prime significance in the funerary cult of ancient Egypt. Scarabs, generally, though not always, were cut from green stone, and placed on the chest of the deceased. Perhaps the most famous example of such "heart scarabs" is the yellow-green pectoral scarab found among the entombed provisions of Tutankhamen. It was carved from a large piece of Libyan desert glass. The purpose of the "heart scarab" was to ensure that the heart would not bear witness against the deceased at judgement in the Afterlife. Other possibilities are suggested by the "transformation spells" of the Coffin Texts, which affirm that the soul of the deceased may transform (xpr) into a human being, a god, or a bird and reappear in the world of the living.
One scholar comments on other traits of the scarab connected with the theme of death and rebirth:
In contrast to funerary contexts, some of ancient Egypt's neighbors adopted the scarab motif for seals of varying types. The best known of these are Judean LMLK seals (8 of 21 designs contained scarab beetles), which were used exclusively to stamp impressions on storage jars during the reign of Hezekiah.
The scarab remains an item of popular interest thanks to modern fascination with the art and beliefs of ancient Egypt. Scarab beads in semiprecious stones or glazed ceramics can be purchased at most bead shops, while at Luxor Temple a massive ancient scarab has been roped off to discourage visitors from rubbing the base of the statue "for luck".
In literature
In Aesop's fable "The Eagle and the Beetle", the eagle kills a hare that has asked a beetle for sanctuary. The beetle takes revenge by twice destroying the eagle's eggs. The eagle, in despair, flies up to Olympus and places her latest eggs in Zeus's lap, beseeching the god to protect them. When the beetle finds out what the eagle has done, it stuffs itself with dung, goes straight up to Zeus and flies right into his face. Startled at the sight of the unpleasant creature, Zeus jumps to his feet, and the eggs are broken. On learning the origin of their feud, Zeus attempts to mediate and, when his efforts fail, changes the breeding season of the eagle to a time when the beetles are not above ground.
Aristophanes alluded to Aesop's fable several times in his plays. In Peace, the hero rides up to Olympus to free the goddess Peace from her prison. His steed is an enormous dung beetle which has been fed so much dung that it has grown to monstrous size.
Hans Christian Andersen's "The Dung Beetle" tells the story of a dung beetle who lives in the stable of the king's horses in an imaginary kingdom. When he demands golden shoes like those the king's horse wears and is refused, he flies away and has a series of adventures, which are often precipitated by his feeling of superiority to other animals. He finally returns to the stable having decided (against all logic) that it is for him that the king's horse wears golden shoes.
In Franz Kafka's The Metamorphosis, the transformed character of Gregor Samsa is called an "old dung beetle" (alter Mistkäfer) by a charwoman.
| Biology and health sciences | Beetles (Coleoptera) | null |
630426 | https://en.wikipedia.org/wiki/Baton%20round | Baton round | Baton rounds, also known as kinetic impact projectiles (KIPs), are a less lethal alternative to traditional bullets. Baton rounds are designed to impact rather than to penetrate and are typically used for riot control.
Common types of baton round have included the:
Bean bag round, a less-lethal projectile fired from a normal 12-gauge shotgun
Plastic baton round or plastic bullet, a less-lethal projectile fired from a specialised gun
Rubber baton round, commonly called the rubber bullet, a rubber-coated projectile with a metal or ceramic core.
Wooden baton round (meant to be skipped off the ground into the targeted area), also called a wooden bullet (a bullet being a direct-impact round)
Foam baton round, also called a sponge grenade
Such munitions are meant to cause pain and incapacitation without penetrating flesh. However, baton rounds can cause death and serious injuries, including damage to internal organs and permanent disabilities such as blindness, especially when fired at close range at the head, neck, chest, or abdomen.
History
The use of baton rounds dates back to the 1880s, when police in Singapore fired sections of broom handle at demonstrators. The Hong Kong police subsequently developed wooden baton rounds, but these were liable to splinter and cause wounds.
Rubber bullets were invented by the British Ministry of Defence for use against rioters in Northern Ireland during The Troubles, and were first used there in 1970.
Rubber bullets tend to bounce uncontrollably and have largely been replaced by other types of baton rounds, including plastic bullets: solid PVC cylinders 10 cm long, 38 mm in diameter, and weighing 135 g. They were invented by Porton Down scientists, intended for use against rioters in Northern Ireland, and first used there in 1973.
Injuries
In a 1975 study of 90 patients injured by rubber baton rounds, 1 died, 17 suffered permanent disabilities or deformities, and 41 required hospital treatment. A review of studies covering multiple munition types and designs examined 1,984 people injured by kinetic impact projectiles and found that 53 died and 300 were permanently disabled. Baton rounds can cause blindness, as shown by their use by police in the 2019–2020 Chilean protests: during the first 3–4 months of the protests, rubber bullets contributed to a toll of 427 people with eye injuries, an extremely high number compared with other protests or conflict zones around the world.
| Technology | Less-lethal weapons | null |
630558 | https://en.wikipedia.org/wiki/Puya%20%28plant%29 | Puya (plant) | Puya is a genus of the botanical family Bromeliaceae. It is the sole genus of the subfamily Puyoideae, and is composed of 226 species. These terrestrial plants are native to the Andes Mountains of South America and southern Central America. Many of the species are monocarpic, with the parent plant dying after one flower and seed production event.
The species Puya raimondii is notable as the largest species of bromeliad known, reaching 3 m tall in vegetative growth with a flower spike 9–10 m tall. The other species are also large, with the flower spikes mostly reaching 1–4 m tall.
The name Puya was derived from the Mapuche Indian word meaning "point".
Taxonomy
The genus is commonly divided into two subgenera, Puya, containing eight species, and Puyopsis, containing the remainder. The subgenera can be distinguished by the sterile apices of the inflorescence branches in Puya, which are fertile in Puyopsis.
Species
Plants of the World Online accepted the following species:
Cultivation and use
Some species of Puya in Chile, locally known as chagual, are used to make salads from the bases of their young leaves or stems. A common species is Puya chilensis.
| Biology and health sciences | Poales | null |
5705108 | https://en.wikipedia.org/wiki/Borohydride | Borohydride | Borohydride refers to the anion [BH4]−, which is also called tetrahydridoborate, and its salts. Borohydride or hydroborate is also the term used for compounds containing [BH4−nXn]−, where n is an integer from 0 to 3, for example cyanoborohydride or cyanotrihydroborate [BH3(CN)]− and triethylborohydride or triethylhydroborate [BH(C2H5)3]−. Borohydrides find wide use as reducing agents in organic synthesis. The most important borohydrides are lithium borohydride and sodium borohydride, but other salts are well known (see Table). Tetrahydroborates are also of academic and industrial interest in inorganic chemistry.
History
Alkali metal borohydrides were first described in 1940 by Hermann Irving Schlesinger and Herbert C. Brown. They synthesized lithium borohydride from diborane:
2 MH + B2H6 → 2 MBH4, where M = Li, Na, K, Rb, Cs, etc.
Current methods involve reduction of trimethyl borate with sodium hydride.
Structure
In the borohydride anion and most of its modifications, boron has a tetrahedral structure. The reactivity of the B−H bonds depends on the other ligands. Electron-releasing ethyl groups as in triethylborohydride render the B−H center highly nucleophilic. In contrast, cyanoborohydride is a weaker reductant owing to the electron-withdrawing cyano substituent. The countercation also influences the reducing power of the reagent.
Uses
Sodium borohydride is the borohydride produced on the largest scale industrially, estimated at 5,000 tons/year in 2002. Its main use is the reduction of sulfur dioxide to give sodium dithionite:
NaBH4 + 8 NaOH + 8 SO2 → 4 Na2S2O4 + NaBO2 + 6 H2O
Dithionite is used to bleach wood pulp. Sodium borohydride is also used to reduce aldehydes and ketones in the production of pharmaceuticals including chloramphenicol, thiophenicol, vitamin A, atropine, and scopolamine, as well as many flavorings and aromas.
Potential applications
Because of their high hydrogen content, borohydride complexes and salts have been of interest in the context of hydrogen storage. Reminiscent of related work on ammonia borane, challenges are associated with slow kinetics and low yields of hydrogen as well as problems with regeneration of the parent borohydrides.
Coordination complexes
In its coordination complexes, the borohydride ion is bound to the metal by means of one to three bridging hydrogen atoms. In most such compounds, the ligand is bidentate. Some homoleptic borohydride complexes are volatile. One example is uranium borohydride.
Metal borohydride complexes can often be prepared by a simple salt elimination reaction, for example:
MCln + n NaBH4 → M(BH4)n + n NaCl
Beryllium borohydride is dimeric.
Decomposition
Some metal tetrahydroborates transform on heating to give metal borides. When the borohydride complex is volatile, this decomposition pathway is the basis of chemical vapor deposition (CVD), a way of depositing thin films of metal borides. For example, zirconium diboride and hafnium diboride can be prepared through CVD of zirconium(IV) tetrahydroborate, Zr(BH4)4, and hafnium(IV) tetrahydroborate, Hf(BH4)4:
M(BH4)4 → MB2 + B2H6 + 5 H2 (M = Zr, Hf)
Metal diborides find uses as coatings because of their hardness, high melting point, strength, resistance to wear and corrosion, and good electrical conductivity.
| Physical sciences | Borohydride salts | Chemistry |
13673345 | https://en.wikipedia.org/wiki/Car | Car | A car, or an automobile, is a motor vehicle with wheels. Most definitions of cars state that they run primarily on roads, seat one to eight people, have four wheels, and mainly transport people rather than cargo. There are around one billion cars in use worldwide.
The French inventor Nicolas-Joseph Cugnot built the first steam-powered road vehicle in 1769, while the Swiss inventor François Isaac de Rivaz designed and constructed the first internal combustion-powered automobile in 1808. The modern car—a practical, marketable automobile for everyday use—was invented in 1886, when the German inventor Carl Benz patented his Benz Patent-Motorwagen. Commercial cars became widely available during the 20th century. The 1901 Oldsmobile Curved Dash and the 1908 Ford Model T, both American cars, are widely considered the first mass-produced and mass-affordable cars, respectively. Cars were rapidly adopted in the US, where they replaced horse-drawn carriages. In Europe and other parts of the world, demand for automobiles did not increase until after World War II. In the 21st century, car usage is still increasing rapidly, especially in China, India, and other newly industrialised countries.
Cars have controls for driving, parking, passenger comfort, and a variety of lamps. Over the decades, additional features and controls have been added to vehicles, making them progressively more complex. These include rear-reversing cameras, air conditioning, navigation systems, and in-car entertainment. Most cars in use in the early 2020s are propelled by an internal combustion engine, fueled by the combustion of fossil fuels. Electric cars, which were invented early in the history of the car, became commercially available in the 2000s and are predicted to cost less to buy than petrol-driven cars before 2025. The transition from fossil fuel-powered cars to electric cars features prominently in most climate change mitigation scenarios, such as Project Drawdown's 100 actionable solutions for climate change.
There are costs and benefits to car use. The costs to the individual include acquiring the vehicle, interest payments (if the car is financed), repairs and maintenance, fuel, depreciation, driving time, parking fees, taxes, and insurance. The costs to society include maintaining roads, land-use, road congestion, air pollution, noise pollution, public health, and disposing of the vehicle at the end of its life. Traffic collisions are the largest cause of injury-related deaths worldwide. Personal benefits include on-demand transportation, mobility, independence, and convenience. Societal benefits include economic benefits, such as job and wealth creation from the automotive industry, transportation provision, societal well-being from leisure and travel opportunities. People's ability to move flexibly from place to place has far-reaching implications for the nature of societies.
Etymology
The English word car is believed to originate from Latin carrus/carrum "wheeled vehicle" or (via Old North French) Middle English carre "two-wheeled cart", both of which in turn derive from Gaulish karros "chariot". It originally referred to any wheeled horse-drawn vehicle, such as a cart, carriage, or wagon. The word also occurs in other Celtic languages.
"Motor car", attested from 1895, is the usual formal term in British English. "Autocar", a variant likewise attested from 1895 and literally meaning "self-propelled car", is now considered archaic. "Horseless carriage" is attested from 1895.
"Automobile", a classical compound derived from Ancient Greek () "self" and Latin "movable", entered English from French and was first adopted by the Automobile Club of Great Britain in 1897. It fell out of favour in Britain and is now used chiefly in North America, where the abbreviated form "auto" commonly appears as an adjective in compound formations like "auto industry" and "auto mechanic".
History
In 1649, Hans Hautsch of Nuremberg built a clockwork-driven carriage. The first steam-powered vehicle was designed by Ferdinand Verbiest, a Flemish member of a Jesuit mission in China around 1672. It was a scale-model toy for the Kangxi Emperor that was unable to carry a driver or a passenger. It is not known with certainty if Verbiest's model was successfully built or run.
Nicolas-Joseph Cugnot is widely credited with building the first full-scale, self-propelled mechanical vehicle in about 1769; he created a steam-powered tricycle. He also constructed two steam tractors for the French Army, one of which is preserved in the French National Conservatory of Arts and Crafts. His inventions were limited by problems with water supply and maintaining steam pressure. In 1801, Richard Trevithick built and demonstrated his Puffing Devil road locomotive, believed by many to be the first demonstration of a steam-powered road vehicle. It was unable to maintain sufficient steam pressure for long periods and was of little practical use.
The development of external combustion (steam) engines is detailed as part of the history of the car but often treated separately from the development of true cars. A variety of steam-powered road vehicles were used during the first part of the 19th century, including steam cars, steam buses, phaetons, and steam rollers. In the United Kingdom, sentiment against them led to the Locomotive Acts of 1865.
In 1807, Nicéphore Niépce and his brother Claude created what was probably the world's first internal combustion engine (which they called a Pyréolophore), but installed it in a boat on the river Saone in France. Coincidentally, in 1807, the Swiss inventor François Isaac de Rivaz designed his own "de Rivaz internal combustion engine", and used it to develop the world's first vehicle to be powered by such an engine. The Niépces' Pyréolophore was fuelled by a mixture of Lycopodium powder (dried spores of the Lycopodium plant), finely crushed coal dust and resin that were mixed with oil, whereas de Rivaz used a mixture of hydrogen and oxygen. Neither design was successful, as was the case with others, such as Samuel Brown, Samuel Morey, and Etienne Lenoir, who each built vehicles (usually adapted carriages or carts) powered by internal combustion engines.
In November 1881, French inventor Gustave Trouvé demonstrated a three-wheeled car powered by electricity at the International Exposition of Electricity. Although several other German engineers (including Gottlieb Daimler, Wilhelm Maybach, and Siegfried Marcus) were working on cars at about the same time, the year 1886 is regarded as the birth year of the modern car—a practical, marketable automobile for everyday use—when the German Carl Benz patented his Benz Patent-Motorwagen; he is generally acknowledged as the inventor of the car.
In 1879, Benz was granted a patent for his first engine, which had been designed in 1878. Many of his other inventions made the use of the internal combustion engine feasible for powering a vehicle. His first Motorwagen was built in 1885 in Mannheim, Germany. He was awarded the patent for its invention as of his application on 29 January 1886 (under the auspices of his major company, Benz & Cie., which was founded in 1883). Benz began promotion of the vehicle on 3 July 1886, and about 25 Benz vehicles were sold between 1888 and 1893, when his first four-wheeler was introduced along with a cheaper model. They also were powered with four-stroke engines of his own design. Emile Roger of France, already producing Benz engines under license, now added the Benz car to his line of products. Because France was more open to the early cars, initially more were built and sold in France through Roger than Benz sold in Germany. In August 1888, Bertha Benz, the wife and business partner of Carl Benz, undertook the first road trip by car, to prove the road-worthiness of her husband's invention.
In 1896, Benz designed and patented the first internal-combustion flat engine, called boxermotor. During the last years of the 19th century, Benz was the largest car company in the world with 572 units produced in 1899 and, because of its size, Benz & Cie., became a joint-stock company. The first motor car in central Europe and one of the first factory-made cars in the world, was produced by Czech company Nesselsdorfer Wagenbau (later renamed to Tatra) in 1897, the Präsident automobil.
Daimler and Maybach founded Daimler Motoren Gesellschaft (DMG) in Cannstatt in 1890, and sold their first car in 1892 under the brand name Daimler. It was a horse-drawn stagecoach built by another manufacturer, which they retrofitted with an engine of their design. By 1895, about 30 vehicles had been built by Daimler and Maybach, either at the Daimler works or in the Hotel Hermann, where they set up shop after disputes with their backers. Benz, Maybach, and the Daimler team seem to have been unaware of each other's early work. They never worked together; by the time of the merger of the two companies, Daimler and Maybach were no longer part of DMG. Daimler died in 1900 and later that year, Maybach designed an engine named Daimler-Mercedes that was placed in a specially ordered model built to specifications set by Emil Jellinek. This was a production of a small number of vehicles for Jellinek to race and market in his country. Two years later, in 1902, a new model DMG car was produced and the model was named Mercedes after the Maybach engine, which generated 35 hp. Maybach quit DMG shortly thereafter and opened a business of his own. Rights to the Daimler brand name were sold to other manufacturers.
In 1890, Émile Levassor and Armand Peugeot of France began producing vehicles with Daimler engines, and so laid the foundation of the automotive industry in France. In 1891, Auguste Doriot and his Peugeot colleague Louis Rigoulot completed the longest trip yet made by a petrol-driven vehicle when their self-designed and self-built, Daimler-powered Peugeot Type 3 travelled from Valentigney to Paris and Brest and back again. They were attached to the first Paris–Brest–Paris bicycle race, but finished six days after the winning cyclist, Charles Terront.
The first design for an American car with a petrol internal combustion engine was made in 1877 by George Selden of Rochester, New York. Selden applied for a patent for a car in 1879, but the patent application expired because the vehicle was never built. After a delay of 16 years and a series of attachments to his application, on 5 November 1895, Selden was granted a US patent for a two-stroke car engine, which hindered, more than encouraged, development of cars in the United States. His patent was challenged by Henry Ford and others, and overturned in 1911.
In 1893, the first running, petrol-driven American car was built and road-tested by the Duryea brothers of Springfield, Massachusetts. The first public run of the Duryea Motor Wagon took place on 21 September 1893, on Taylor Street in Metro Center Springfield. Studebaker, subsidiary of a long-established wagon and coach manufacturer, started to build cars in 1897 and commenced sales of electric vehicles in 1902 and petrol vehicles in 1904.
In Britain, there had been several attempts to build steam cars with varying degrees of success, with Thomas Rickett even attempting a production run in 1860. Santler from Malvern is recognised by the Veteran Car Club of Great Britain as having made the first petrol-driven car in the country in 1894, followed by Frederick William Lanchester in 1895, but these were both one-offs. The first production vehicles in Great Britain came from the Daimler Company, a company founded by Harry J. Lawson in 1896, after purchasing the right to use the name of the engines. Lawson's company made its first car in 1897, and they bore the name Daimler.
In 1892, German engineer Rudolf Diesel was granted a patent for a "New Rational Combustion Engine". In 1897, he built the first diesel engine. Steam-, electric-, and petrol-driven vehicles competed for a few decades, with petrol internal combustion engines achieving dominance in the 1910s. Although various pistonless rotary engine designs have attempted to compete with the conventional piston and crankshaft design, only Mazda's version of the Wankel engine has had more than very limited success. All in all, it is estimated that over 100,000 patents created the modern automobile and motorcycle.
Mass production
Large-scale, production-line manufacturing of affordable cars was started by Ransom Olds in 1901 at his Oldsmobile factory in Lansing, Michigan, and based upon stationary assembly line techniques pioneered by Marc Isambard Brunel at the Portsmouth Block Mills, England, in 1802. The assembly line style of mass production and interchangeable parts had been pioneered in the US by Thomas Blanchard in 1821, at the Springfield Armory in Springfield, Massachusetts. This concept was greatly expanded by Henry Ford, beginning in 1913 with the world's first moving assembly line for cars at the Highland Park Ford Plant.
As a result, Ford's cars came off the line at 15-minute intervals, much faster than with previous methods, increasing productivity eightfold while using less manpower (from 12.5 man-hours to 1 hour 33 minutes per car). It was so successful that paint became a bottleneck. Only Japan black would dry fast enough, forcing the company to drop the variety of colours available before 1913, until fast-drying Duco lacquer was developed in 1926. This is the source of Ford's apocryphal remark, "any color as long as it's black". In 1914, an assembly line worker could buy a Model T with four months' pay.
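A quick sanity check on the quoted figures (a minimal sketch; the 12.5 man-hours and 1 hour 33 minutes come from the paragraph above, the rest is plain arithmetic):

```python
# Check the "eightfold" productivity claim from the quoted assembly times.
before_hours = 12.5           # man-hours per car before the moving assembly line (from the text)
after_hours = 1 + 33 / 60     # 1 hour 33 minutes per car afterwards (from the text)

speedup = before_hours / after_hours
print(f"Productivity increase: {speedup:.1f}x")  # ~8.1x, i.e. roughly eightfold
```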
Ford's complex safety procedures—especially assigning each worker to a specific location instead of allowing them to roam about—dramatically reduced the rate of injury. The combination of high wages and high efficiency is called "Fordism" and was copied by most major industries. The efficiency gains from the assembly line also coincided with the economic rise of the US. The assembly line forced workers to work at a certain pace with very repetitive motions which led to more output per worker while other countries were using less productive methods.
In the automotive industry, its success was dominant, and it quickly spread worldwide, with the founding of Ford France and Ford Britain in 1911, Ford Denmark in 1923, and Ford Germany in 1925; in 1921, Citroën was the first native European manufacturer to adopt the production method. Soon, companies had to have assembly lines or risk going bankrupt; by 1930, 250 companies which did not had disappeared.
Development of automotive technology was rapid, due in part to the hundreds of small manufacturers competing to gain the world's attention. Key developments included electric ignition and the electric self-starter (both by Charles Kettering, for the Cadillac Motor Company in 1910–1911), independent suspension, and four-wheel brakes.
Since the 1920s, nearly all cars have been mass-produced to meet market needs, so marketing plans often have heavily influenced car design. It was Alfred P. Sloan who established the idea of different makes of cars produced by one company, called the General Motors Companion Make Program, so that buyers could "move up" as their fortunes improved.
Reflecting the rapid pace of change, makes shared parts with one another so larger production volume resulted in lower costs for each price range. For example, in the 1930s, LaSalles, sold by Cadillac, used cheaper mechanical parts made by Oldsmobile; in the 1950s, Chevrolet shared bonnet, doors, roof, and windows with Pontiac; by the 1990s, corporate powertrains and shared platforms (with interchangeable brakes, suspension, and other parts) were common. Even so, only major makers could afford high costs, and even companies with decades of production, such as Apperson, Cole, Dorris, Haynes, or Premier, could not manage: of some two hundred American car makers in existence in 1920, only 43 survived in 1930, and with the Great Depression, by 1940, only 17 of those were left.
In Europe, much the same would happen. Morris set up its production line at Cowley in 1924, and soon outsold Ford, while beginning in 1923 to follow Ford's practice of vertical integration, buying Hotchkiss' British subsidiary (engines), Wrigley (gearboxes), and Osberton (radiators), for instance, as well as competitors, such as Wolseley: in 1925, Morris had 41 per cent of total British car production. Most British small-car assemblers, from Abbey to Xtra, had gone under. Citroën did the same in France, coming to cars in 1919; between them and other cheap cars in reply such as Renault's 10CV and Peugeot's 5CV, they produced 550,000 cars in 1925, and Mors, Hurtu, and others could not compete. Germany's first mass-manufactured car, the Opel 4PS Laubfrosch (Tree Frog), came off the line at Rüsselsheim in 1924, soon making Opel the top car builder in Germany, with 37.5 per cent of the market.
In Japan, car production was very limited before World War II. Only a handful of companies were producing vehicles in limited numbers, and these were small three-wheeled vehicles for commercial use, like those of Daihatsu, or were the result of partnering with European companies, like Isuzu building the Wolseley A-9 in 1922. Mitsubishi also partnered with Fiat and built the Mitsubishi Model A based on a Fiat vehicle. Toyota, Nissan, Suzuki, Mazda, and Honda began as companies producing non-automotive products before the war, switching to car production during the 1950s. Kiichiro Toyoda's decision to take Toyoda Loom Works into automobile manufacturing created what would eventually become Toyota Motor Corporation, the largest automobile manufacturer in the world. Subaru, meanwhile, was formed from a conglomerate of six companies that banded together as Fuji Heavy Industries, as a result of having been broken up under keiretsu legislation.
Components and design
Propulsion and fuels
Fossil fuels
Most cars in use in the early 2020s run on petrol burnt in an internal combustion engine (ICE). Some cities ban older, more polluting petrol-driven cars, and some countries plan to ban sales in future. However, some environmental groups say this phase-out of fossil fuel vehicles must be brought forward to limit climate change. Production of petrol-fuelled cars peaked in 2017.
Other hydrocarbon fossil fuels also burnt by deflagration (rather than detonation) in ICE cars include diesel, autogas, and CNG. Removal of fossil fuel subsidies, concerns about oil dependence, tightening environmental laws and restrictions on greenhouse gas emissions are propelling work on alternative power systems for cars. This includes hybrid vehicles, plug-in electric vehicles and hydrogen vehicles. Out of all cars sold in 2021, nine per cent were electric, and by the end of that year there were more than 16 million electric cars on the world's roads. Despite rapid growth, less than two per cent of cars on the world's roads were fully electric and plug-in hybrid cars by the end of 2021. Cars for racing or speed records have sometimes employed jet or rocket engines, but these are impractical for common use. Oil consumption has increased rapidly in the 20th and 21st centuries because there are more cars; the 1980s oil glut even fuelled the sales of low-economy vehicles in OECD countries. The BRIC countries are adding to this consumption.
Batteries
In almost all hybrid (even mild hybrid) and pure electric cars regenerative braking recovers and returns to a battery some energy which would otherwise be wasted by friction brakes getting hot. Although all cars must have friction brakes (front disc brakes and either disc or drum rear brakes) for emergency stops, regenerative braking improves efficiency, particularly in city driving.
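To give a rough sense of the energy involved, here is a minimal back-of-the-envelope sketch; the vehicle mass, speed, and recovery efficiency below are illustrative assumptions, not figures from this article:

```python
# Illustrative estimate of the energy a regenerative braking system can recover in one stop.
mass_kg = 1500        # assumed vehicle mass
speed_kmh = 50        # assumed speed before braking (typical city driving)
recovery_eff = 0.6    # assumed round-trip recovery efficiency (motor/generator plus battery)

speed_ms = speed_kmh / 3.6
kinetic_energy_j = 0.5 * mass_kg * speed_ms ** 2       # energy friction brakes would dissipate as heat
recovered_wh = kinetic_energy_j * recovery_eff / 3600  # portion returned to the battery, in watt-hours

print(f"Kinetic energy at {speed_kmh} km/h: {kinetic_energy_j / 1000:.0f} kJ")
print(f"Energy recovered per stop: {recovered_wh:.0f} Wh")
```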
User interface
Cars are equipped with controls used for driving, passenger comfort, and safety, normally operated by a combination of the use of feet and hands, and occasionally by voice on 21st-century cars. These controls include a steering wheel, pedals for operating the brakes and controlling the car's speed (and, in a manual transmission car, a clutch pedal), a shift lever or stick for changing gears, and a number of buttons and dials for turning on lights, ventilation, and other functions. Modern cars' controls are now standardised, such as the location for the accelerator and brake, but this was not always the case. Controls are evolving in response to new technologies, for example, the electric car and the integration of mobile communications.
Some of the original controls are no longer required. For example, all cars once had controls for the choke valve, clutch, ignition timing, and a crank instead of an electric starter. However, new controls have also been added to vehicles, making them more complex. These include air conditioning, navigation systems, and in-car entertainment. Another trend is the replacement of physical knobs and switches for secondary functions with touchscreen controls such as BMW's iDrive and Ford's MyFord Touch. Another change is that while early cars' pedals were physically linked to the brake mechanism and throttle, in the early 2020s, cars have increasingly replaced these physical linkages with electronic controls.
Electronics and interior
Cars are typically equipped with interior lighting which can be toggled manually or set to light up automatically when doors open, an entertainment system which originated from car radios, side windows which can be lowered or raised electrically (manually on earlier cars), and one or more auxiliary power outlets for supplying portable appliances such as mobile phones, portable fridges, power inverters, and electrical air pumps from the on-board electrical system. More costly upper-class and luxury cars are equipped with features such as massage seats and collision avoidance systems earlier than mass-market models.
Dedicated automotive fuses and circuit breakers prevent damage from electrical overload.
Lighting
Cars are typically fitted with multiple types of lights. These include headlights, which are used to illuminate the way ahead and make the car visible to other users, so that the vehicle can be used at night; in some jurisdictions, daytime running lights; red brake lights to indicate when the brakes are applied; amber turn signal lights to indicate the turn intentions of the driver; white-coloured reverse lights to illuminate the area behind the car (and indicate that the driver will be or is reversing); and on some vehicles, additional lights (e.g., side marker lights) to increase the visibility of the car. Interior lights on the ceiling of the car are usually fitted for the driver and passengers. Some vehicles also have a boot light and, more rarely, an engine compartment light.
Weight and size
During the late 20th and early 21st century, cars increased in weight due to batteries, modern steel safety cages, anti-lock brakes, airbags, and "more-powerful—if more efficient—engines". Heavier cars are safer for the driver from a crash perspective, but more dangerous for other vehicles and road users. The weight of a car influences fuel consumption and performance, with more weight resulting in increased fuel consumption and decreased performance. The Wuling Hongguang Mini EV, a typical city car, is comparatively light; heavier cars include SUVs and extended-length SUVs like the Suburban. Cars have also become wider.
Some places tax heavier cars more: as well as improving pedestrian safety this can encourage manufacturers to use materials such as recycled aluminium instead of steel. It has been suggested that one benefit of subsidising charging infrastructure is that cars can use lighter batteries.
Seating and body style
Most cars are designed to carry multiple occupants, often with four or five seats. Cars with five seats typically seat two passengers in the front and three in the rear. Full-size cars and large sport utility vehicles can often carry six, seven, or more occupants depending on the arrangement of the seats. On the other hand, sports cars are most often designed with only two seats. Utility vehicles like pickup trucks, combine seating with extra cargo or utility functionality. The differing needs for passenger capacity and their luggage or cargo space has resulted in the availability of a large variety of body styles to meet individual consumer requirements that include, among others, the sedan/saloon, hatchback, station wagon/estate, coupe, and minivan.
Safety
Traffic collisions are the largest cause of injury-related deaths worldwide. Mary Ward became one of the first documented car fatalities in 1869 in Parsonstown, Ireland, and Henry Bliss one of the US's first pedestrian car casualties in 1899 in New York City. There are now standard tests for safety in new cars, such as the Euro and US NCAP tests, and insurance-industry-backed tests by the Insurance Institute for Highway Safety (IIHS). However, not all such tests consider the safety of people outside the car, such as drivers of other cars, pedestrians and cyclists.
Costs and benefits
The costs of car usage, which may include the cost of: acquiring the vehicle, repairs and auto maintenance, fuel, depreciation, driving time, parking fees, taxes, and insurance, are weighed against the cost of the alternatives, and the value of the benefits—perceived and real—of vehicle usage. The benefits may include on-demand transportation, mobility, independence, and convenience, and emergency power. During the 1920s, cars had another benefit: "[c]ouples finally had a way to head off on unchaperoned dates, plus they had a private space to snuggle up close at the end of the night."
Similarly, the costs to society of car use may include maintaining roads, land use, air pollution, noise pollution, road congestion, public health, health care, and disposing of the vehicle at the end of its life; these can be balanced against the value of the benefits to society that car use generates. Societal benefits may include economic benefits, such as job and wealth creation from car production and maintenance, transportation provision, societal well-being derived from leisure and travel opportunities, and revenue generation from taxation. The ability of humans to move flexibly from place to place has far-reaching implications for the nature of societies.
Environmental effects
Car production and use has a large number of environmental impacts: it causes local air pollution and plastic pollution and contributes to greenhouse gas emissions and climate change. Cars and vans caused 10% of energy-related carbon dioxide emissions in 2022. Over their lifetime, electric cars produce about half the emissions of diesel and petrol cars, and this is set to improve as countries produce more of their electricity from low-carbon sources. Cars consumed almost a quarter of world oil production as of 2019. Cities planned around cars are often less dense, which leads to further emissions, as they are less walkable, for instance. A growing demand for large SUVs is driving up emissions from cars.
Cars are a major cause of air pollution, which stems from exhaust gas in diesel and petrol cars and from dust from brakes, tyres, and road wear. Electric cars do not produce tailpipe emissions, but are generally heavier and therefore produce slightly more particulate matter. Heavy metals and microplastics (from tyres) are also released into the environment during production, use, and at the end of life. Mining related to car manufacturing and oil spills both cause water pollution.
Animals and plants are often negatively affected by cars via habitat destruction and fragmentation from the road network and via pollution. Animals are also killed every year on roads by cars, referred to as roadkill. More recent road developments increasingly include significant environmental mitigation in their designs, such as green bridges (designed to allow wildlife crossings) and the creation of wildlife corridors.
Governments use fiscal policies, such as road tax, to discourage the purchase and use of more polluting cars, and vehicle emission standards ban the sale of new highly polluting cars. Many countries plan to stop selling fossil fuel cars altogether between 2025 and 2050. Various cities have implemented low-emission zones banning old fossil fuel cars, and Amsterdam is planning to ban fossil fuel cars completely. Some cities make it easier for people to choose other forms of transport, such as cycling. Many Chinese cities limit licensing of fossil fuel cars.
Social issues
Mass production of personal motor vehicles in the United States and other developed countries with extensive territories such as Australia, Argentina, and France vastly increased individual and group mobility and greatly increased and expanded economic development in urban, suburban, exurban and rural areas. Growth in the popularity of cars and commuting has led to traffic congestion. Moscow, Istanbul, Bogotá, Mexico City and São Paulo were the world's most congested cities in 2018 according to INRIX, a data analytics company.
Access to cars
In the United States, the transport divide and car dependency resulting from domination of car-based transport systems presents barriers to employment in low-income neighbourhoods, with many low-income individuals and families forced to run cars they cannot afford in order to maintain their income. Dependency on automobiles by African Americans may result in exposure to the hazards of driving while black and other types of racial discrimination related to buying, financing and insuring them.
Health impact
Air pollution from cars increases the risk of lung cancer and heart disease. It can also harm pregnancies: more children are born too early or with lower birth weight. Children are particularly vulnerable to air pollution, as their bodies are still developing, and exposure in children is linked to the development of asthma, childhood cancer, and neurocognitive issues such as autism. The growth in popularity of the car allowed cities to sprawl, thereby encouraging more travel by car, resulting in inactivity and obesity, which in turn can lead to increased risk of a variety of diseases. When places are designed around cars, children have fewer opportunities to go places by themselves and lose opportunities to become more independent.
Emerging car technologies
Although intensive development of conventional battery electric vehicles is continuing into the 2020s, other car propulsion technologies that are under development include wireless charging, hydrogen cars, and hydrogen/electric hybrids. Research into alternative forms of power includes using ammonia instead of hydrogen in fuel cells.
New materials which may replace steel car bodies include aluminium, fiberglass, carbon fiber, biocomposites, and carbon nanotubes. Telematics technology is allowing more and more people to share cars, on a pay-as-you-go basis, through car share and carpool schemes. Communication is also evolving due to connected car systems. Open-source cars are not widespread.
Autonomous car
Fully autonomous vehicles, also known as driverless cars, already exist as robotaxis but have a long way to go before they are in general use.
Car sharing
Car-share arrangements and carpooling are increasingly popular in the US and Europe. For example, in the US, some car-sharing services experienced double-digit growth in revenue and membership between 2006 and 2007. Such services allow residents to "share" a vehicle rather than own a car in already congested neighbourhoods.
Industry
The automotive industry designs, develops, manufactures, markets, and sells the world's motor vehicles, more than three-quarters of which are cars. In 2020, there were 56 million cars manufactured worldwide, down from 67 million the previous year. The automotive industry in China produces by far the most (20 million in 2020), followed by Japan (seven million), then Germany, South Korea and India. The largest market is China, followed by the US.
Around the world, there are about a billion cars on the road; together they burn very large quantities of petrol and diesel fuel yearly. The numbers of cars are increasing rapidly in China and India. In the opinion of some, urban transport systems based around the car have proved unsustainable, consuming excessive energy, affecting the health of populations, and delivering a declining level of service despite increasing investment. Many of these negative effects fall disproportionately on those social groups who are also least likely to own and drive cars. The sustainable transport movement focuses on solutions to these problems. The car industry is also facing increasing competition from the public transport sector, as some people re-evaluate their private vehicle usage. In July 2021, the European Commission introduced the "Fit for 55" legislation package, outlining crucial directives for the automotive sector's future. According to this package, by 2035, all newly sold cars in the European market must be zero-emissions vehicles.
Alternatives
Established alternatives for some aspects of car use include public transport such as buses, trolleybuses, trains, subways, tramways, and light rail, as well as cycling and walking. Bicycle sharing systems have been established in China and in many European cities, including Copenhagen and Amsterdam. Similar programmes have been developed in large US cities. Additional individual modes of transport, such as personal rapid transit, could serve as an alternative to cars if they prove to be socially accepted. A study of the costs and benefits of introducing Low Traffic Neighbourhoods in London found that the benefits exceed the costs by roughly 100 times over the first 20 years, with the difference growing over time.
| Technology | Transportation | null |
13677688 | https://en.wikipedia.org/wiki/Potassium-40 | Potassium-40 | Potassium-40 (K) is a radioactive isotope of potassium which has a long half-life of 1.25 billion years. It makes up about 0.012% (120 ppm) of natural potassium.
Potassium-40 undergoes three types of radioactive decay. In about 89.28% of events, it decays to calcium-40 (Ca) with emission of a beta particle (β, an electron) with a maximum energy of 1.31 MeV and an antineutrino. In about 10.72% of events, it decays to argon-40 (Ar) by electron capture (EC), with the emission of a neutrino and then a 1.460 MeV photon. The decay of K explains the large abundance of argon (nearly 1%) in the Earth's atmosphere, as well as the prevalence of argon-40 over other argon isotopes. Very rarely (0.001% of events), it decays to argon-40 by emitting a positron (β) and a neutrino.
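The branching fractions above can be turned into partial decay constants and partial half-lives for each mode; a minimal sketch using only the half-life and percentages quoted in this article:

```python
import math

# Partial decay constants of potassium-40 from the half-life and branching fractions above.
half_life_yr = 1.25e9                      # total half-life (from the text)
branches = {
    "beta- to Ca-40": 0.8928,              # branching fractions (from the text)
    "EC to Ar-40": 0.1072,
    "beta+ to Ar-40": 0.00001,             # 0.001% of events
}

lam_total = math.log(2) / half_life_yr     # total decay constant, per year
for name, fraction in branches.items():
    lam = fraction * lam_total
    partial_half_life = math.log(2) / lam
    print(f"{name}: lambda = {lam:.3e} /yr, partial half-life = {partial_half_life:.2e} yr")
```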
Potassium–argon dating
Potassium-40 is especially important in potassium–argon (K–Ar) dating. Argon is a gas that does not ordinarily combine with other elements. So, when a mineral forms – whether from molten rock, or from substances dissolved in water – it will be initially argon-free, even if there is some argon in the liquid. However, if the mineral contains any potassium, then decay of the potassium-40 present will create fresh argon-40 that will remain locked up in the mineral. Since the rate at which this conversion occurs is known, it is possible to determine the elapsed time since the mineral formed by measuring the ratio of potassium-40 to argon-40 atoms contained in it.
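A minimal sketch of the age calculation that paragraph describes, assuming the standard K–Ar age equation and reusing the half-life and electron-capture branching fraction quoted earlier in this article (the measured Ar/K ratio below is a made-up illustrative value):

```python
import math

def k_ar_age(ar40_per_k40, half_life_yr=1.25e9, ec_fraction=0.1072):
    """Age in years from the ratio of radiogenic Ar-40 to remaining K-40 in a mineral.

    Standard K-Ar age equation: t = (1/lam) * ln(1 + (lam/lam_ec) * Ar/K),
    where lam is the total decay constant and lam_ec the electron-capture branch
    (only that branch produces argon that stays locked in the mineral).
    """
    lam = math.log(2) / half_life_yr   # total decay constant, per year
    lam_ec = ec_fraction * lam         # partial constant for the branch that yields Ar-40
    return math.log(1 + (lam / lam_ec) * ar40_per_k40) / lam

# Illustrative (made-up) measurement: 0.01 atoms of radiogenic Ar-40 per atom of K-40.
print(f"Apparent age: {k_ar_age(0.01):.2e} years")   # about 1.6e8 years
```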
The argon found in Earth's atmosphere is 99.6% argon-40, whereas the argon in the Sun – and presumably in the primordial material that condensed into the planets – is mostly argon-36, with argon-40 making up less than 15%. It follows that most of Earth's argon derives from potassium-40 that decayed into argon-40, which eventually escaped to the atmosphere.
Contribution to natural radioactivity
The decay of K in Earth's mantle ranks third, after Th and U, as the source of radiogenic heat. The core also likely contains radiogenic sources, though how much is uncertain. It has been proposed that significant core radioactivity (1–2 TW) may be caused by high levels of U, Th, and K.
Potassium-40 is the largest source of natural radioactivity in animals, including humans. A 70 kg human body contains about 140 g of potassium, hence about 0.017 g of potassium-40 (0.012% of 140 g); its decay produces about 3,850 to 4,300 disintegrations per second (becquerels) continuously throughout the life of the person.
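The activity figure follows from the amount of potassium-40 and its half-life; a back-of-the-envelope sketch (Avogadro's number and the approximate 40 g/mol molar mass are the only inputs not taken directly from this article):

```python
import math

# Back-of-the-envelope activity of the potassium-40 in a 70 kg human body.
k40_grams = 140 * 0.00012        # 0.012% of 140 g of potassium (figures from the text)
molar_mass_g_mol = 40.0          # approximate molar mass of K-40
avogadro = 6.022e23

atoms = k40_grams / molar_mass_g_mol * avogadro
half_life_s = 1.25e9 * 365.25 * 24 * 3600
activity_bq = math.log(2) / half_life_s * atoms

print(f"K-40 atoms in the body: {atoms:.2e}")
print(f"Activity: {activity_bq:.0f} Bq")
# ~4,400 Bq, the same order as the quoted 3,850-4,300 range; the small excess
# comes from rounding the isotopic abundance to 0.012%.
```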
Banana equivalent dose
Potassium-40 is well known for its use in the banana equivalent dose, an informal unit of measure, primarily used in general educational settings, to compare radiation doses to the amount received by consuming one banana. The radiation dose from consuming one banana is generally agreed to be 10⁻⁷ sievert (0.1 microsievert), which is about 1% of the average American's daily exposure to radiation.
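The 1%-of-daily-exposure figure is simple arithmetic; a small sketch (the implied daily exposure is derived from the sentence above rather than taken as an independent figure):

```python
# Banana equivalent dose arithmetic from the figures in the paragraph above.
banana_dose_sv = 0.1e-6      # 0.1 microsievert per banana (from the text)
fraction_of_daily = 0.01     # one banana is about 1% of average daily exposure (from the text)

daily_exposure_sv = banana_dose_sv / fraction_of_daily
print(f"Implied average daily exposure: {daily_exposure_sv * 1e6:.0f} microsievert")   # 10 uSv/day
print(f"Bananas per day of background dose: {daily_exposure_sv / banana_dose_sv:.0f}")  # 100
```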
| Physical sciences | Group 1 | Chemistry |
1056536 | https://en.wikipedia.org/wiki/Ant%20colony | Ant colony | An ant colony is a population of ants, typically from a single species, capable of maintaining their complete lifecycle. Ant colonies are eusocial, communal, and efficiently organized and are very much like those found in other social Hymenoptera, though the various groups of these developed sociality independently through convergent evolution. The typical colony consists of one or more egg-laying queens, numerous sterile females (workers, soldiers) and, seasonally, many winged sexual males and females. In order to establish new colonies, ants undertake flights that occur at species-characteristic times of the day. Swarms of the winged sexuals (known as alates) depart the nest in search of other nests. The males die shortly thereafter, along with most of the females. A small percentage of the females survive to initiate new nests.
Names
The term "ant colony" refers to a population of workers, reproductive individuals, and brood that live together, cooperate, and treat one another non-aggressively. Often this comprises the genetically related progeny from a single queen, although this is not universal across ants. The name "ant farm" is commonly given to ant nests that are kept in formicaria, isolated from their natural habitat. These formicaria are formed so scientists can study by rearing or temporarily maintaining them. Another name is "formicary", which derives from the Medieval Latin word formīcārium. The word also derives from formica. "Ant nests" are the physical spaces in which the ants live. These can be underground, in trees, under rocks, or even inside a single acorn. The name "anthill" (or "ant hill") applies to aboveground nests where the workers pile sand or soil outside the entrance, forming a large mound.
Colony size
Colony size (the number of individuals that make up the colony) is very important to ants: it can affect how they forage, how they defend their nests, how they mate, and even their physical appearances. Body size is often seen as the most important factor in shaping the natural history of non-colonial organisms; similarly, colony size is key in influencing how colonial organisms are collectively organized. Colonies have a significant range of sizes: some are just several ants living in a twig, while others are super-colonies with many millions of workers. Within a single ant colony, seasonal variation may be huge. For example, in the ant Dolichoderus mariae, one colony can shift from around 300 workers in the summer to over 2,000 workers per queen in the winter. Genetics and environmental factors can cause the variation among different colonies of a single species to be even bigger. Different ant species, even those in the same genus, may have enormous colony size disparities: Formica yessensis has colony sizes that are reported to be 306 million workers while Formica fusca colonies sometimes comprise only 500 workers.
Supercolonies
A supercolony occurs when many ant colonies over a large area unite. They still continue to recognize genetic differences in order to mate, but the different colonies within the supercolony avoid aggression. Until 2000, the largest known ant supercolony was on the Ishikari coast of Hokkaidō, Japan. The colony was estimated to contain 306 million worker ants and one million queen ants living in 45,000 nests interconnected by underground passages. In 2000, an enormous supercolony of Argentine ants was found in Southern Europe (the report was published in 2002). Of 33 ant populations tested along a stretch of the Mediterranean and Atlantic coasts in Southern Europe, 30 belonged to one supercolony with an estimated total of millions of nests and billions of workers, interspersed with three populations of another supercolony. The researchers claim that this case of unicoloniality cannot be explained by loss of genetic diversity due to the genetic bottleneck of the imported ants. In 2009, it was demonstrated that the largest Japanese, Californian and European Argentine ant supercolonies were in fact part of a single global "megacolony". This intercontinental megacolony represents the most populous recorded animal society on earth, other than humans.
Another extensive supercolony was found beneath Melbourne, Australia, in 2004.
Organizational terminology
The following terminology is commonly used among myrmecologists to describe the behaviors demonstrated by ants when founding and organizing colonies:
Monogyny: Establishment of an ant colony under a single egg-laying queen.
Polygyny: Establishment of an ant colony under multiple egg-laying queens.
Oligogyny: Establishment of a polygynous colony in which the multiple egg-laying queens remain far apart from one another in the nest.
Haplometrosis: Establishment of a colony by a single queen.
Pleometrosis: Establishment of a colony by multiple queens.
Monodomy: Establishment of a colony at a single nest site.
Polydomy: Establishment of a colony across multiple nest sites.
Colony structure
Ant colonies have a complex social structure. Ants' jobs are determined, and can change, with age. As ants grow older their jobs move them farther from the queen, or center of the colony. Younger ants work within the nest, protecting the queen and young. Sometimes, a queen is not present and is replaced by egg-laying workers. These worker ants can lay only unfertilised, haploid eggs, which develop into males. Despite her title, the queen does not delegate tasks to the worker ants; rather, the ants choose their tasks based on individual preference.
An ant colony can also work as a collective "super mind": ants can compare areas and solve complex problems by pooling information gained by each member of the colony to find the best nesting site or food source. Some social-parasitic species of ants, known as slave-making ants, raid and steal larvae from neighboring colonies.
Excavation
Ant hill art is a growing collecting hobby. It involves pouring molten metal (typically non-toxic zinc or aluminum), plaster, or cement down an ant colony mound, which acts as a mold; once the material hardens, the resulting structure is excavated. In some cases, this involves a great deal of digging.
The casts are often used for research and education purposes, but many are simply given or sold to natural history museums, or sold as folk art or as souvenirs. Walter R. Tschinkel notes in Ant Architecture: The Wonder, Beauty, and Science of Underground Nests that many commercial operations seem to use a casting procedure he developed and published based on the work of Brazilian myrmecologists Meinhard Jacoby and Luiz Forti. Usually, the hills are chosen after the ants have abandoned them, so as not to kill any ants; however, in the southeastern United States, pouring into an active colony of invasive fire ants is a novel way to eliminate them.
Ant-beds
An ant-bed, in its simplest form, is a pile of soil, sand, pine needles, manure, urine, or clay, or a composite of these and other materials, that builds up at the entrances of the subterranean dwellings of ant colonies as they are excavated. A colony is built and maintained by legions of worker ants, who carry tiny bits of dirt and pebbles in their mandibles and deposit them near the exit of the colony. They normally deposit the dirt or vegetation at the top of the hill to prevent it from sliding back into the colony, but in some species, they actively sculpt the materials into specific shapes and may create nest chambers within the mound.
| Biology and health sciences | Shelters and structures | Animals |
1057083 | https://en.wikipedia.org/wiki/Microbial%20ecology | Microbial ecology | Microbial ecology (or environmental microbiology) is the ecology of microorganisms: their relationships with one another and with their environment. It concerns the three major domains of life—Eukaryota, Archaea, and Bacteria—as well as viruses. These relationships are often mediated by secondary metabolites produced by microorganisms. Such secondary metabolites, also known as specialized metabolites, comprise both volatile and non-volatile compounds, including terpenoids, sulfur compounds, indole compounds, and many more.
The study of microorganisms and their interactions with the environment was pioneered by scientists such as Sergei Winogradsky, Louis Pasteur, Martinus Beijerinck, Robert Koch, and Lorenz Hiltner, among others.
Microorganisms are ubiquitous and play various roles that impact the entire biosphere and any environment in which they find themselves, both positively and negatively. Microbial life plays a primary role in regulating biogeochemical systems in virtually all environments, including some of the most extreme, from frozen environments and acidic lakes to hydrothermal vents at the bottom of the deepest oceans, and some of the most familiar, such as the human small intestine, nose, and mouth. Soil microorganisms drive biogeochemical cycles that fix nutrients such as nitrogen, phosphorus and sulphur in the soil. As a consequence of the sheer quantity of microbial life (calculated as cells), microbes, by virtue of their biomass alone, constitute a significant carbon sink. Microbial interactions with the environment also have industrial applications such as wastewater treatment and bioremediation.
Microorganisms also form symbiotic relationships with other organisms in their environment, in which one or both of the partners involved benefit, or one partner benefits while the other is harmed. Such symbiotic relationships include mutualism and commensalism.
Certain substances in the environment can kill microorganisms, thus preventing them from interacting with their environment. These substances are called antimicrobial substances. These can be antibiotic, antifungal, or even antiviral.
History
While microbes have been studied since the seventeenth century, this research was conducted primarily from a physiological perspective rather than an ecological one. For instance, Louis Pasteur and his disciples were interested in the problem of microbial distribution both on land and in the ocean. Louis Pasteur was the scientist who invented the pasteurization process. Martinus Beijerinck invented the enrichment culture, a fundamental method of studying microbes from the environment. He is often incorrectly credited with framing the microbial biogeographic idea that "everything is everywhere, but, the environment selects", which was stated by Lourens Baas Becking. Sergei Winogradsky was one of the first researchers to attempt to understand microorganisms outside of the medical context—making him among the first students of microbial ecology and environmental microbiology—discovering chemosynthesis, and developing the Winogradsky column in the process.
Beijerinck and Winogradsky, however, were focused on the physiology of microorganisms, not the microbial habitat or their ecological interactions. Modern microbial ecology was launched by Robert Hungate and coworkers, who investigated the rumen ecosystem. The study of the rumen required Hungate to develop techniques for culturing anaerobic microbes, and he also pioneered a quantitative approach to the study of microbes and their ecological activities that differentiated the relative contributions of species and catabolic pathways.
Progress in microbial ecology has been tied to the development of new technologies. The measurement of biogeochemical process rates in nature was driven by the availability of radioisotopes beginning in the 1950s. For example, 14CO2 allowed analysis of rates of photosynthesis in the ocean. Another significant breakthrough came in the 1980s, when microelectrodes sensitive to chemical species like O2 were developed. These electrodes have a spatial resolution of 50–100 μm, and have allowed analysis of spatial and temporal biogeochemical dynamics in microbial mats and sediments.
Although measurements of biogeochemical process rates could reveal which processes were occurring, they were incomplete because they provided no information on which specific microbes were responsible. It was long known that 'classical' cultivation techniques recovered fewer than 1% of the microbes from a natural habitat. However, beginning in the 1990s, a set of cultivation-independent techniques evolved to determine the relative abundance of microbes in a habitat. Carl Woese first demonstrated that the sequence of the 16S ribosomal RNA molecule could be used to analyse phylogenetic relationships. Norm Pace took this seminal idea and applied it to analyse 'who's there' in natural environments. The procedure involves (a) isolation of nucleic acids directly from a natural environment, (b) PCR amplification of small subunit rRNA gene sequences, (c) sequencing the amplicons, and (d) comparison of those sequences to a database of sequences from pure cultures and environmental DNA. This has provided tremendous insights into the diversity present within microbial habitats. However, it does not resolve how to link specific microbes to their biogeochemical roles. Metagenomics, the sequencing of total DNA recovered from an environment, can provide insights into biogeochemical potential, whereas metatranscriptomics and metaproteomics can measure the actual expression of genetic potential, although both remain more technically difficult.
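To make step (d) concrete, the toy sketch below assigns a hypothetical environmental amplicon to its closest match in a tiny, made-up reference set using simple pairwise identity. The sequences, taxon names, and the 80% threshold are illustrative assumptions only; real analyses use dedicated tools (for example BLAST or QIIME) and curated reference databases.

```python
# Toy illustration of amplicon classification; not a production pipeline.

def pairwise_identity(a: str, b: str) -> float:
    """Fraction of matching positions over the shorter sequence (no gap handling)."""
    n = min(len(a), len(b))
    if n == 0:
        return 0.0
    matches = sum(1 for x, y in zip(a[:n], b[:n]) if x == y)
    return matches / n

# Hypothetical reference database: taxon name -> short 16S rRNA gene fragment.
reference_db = {
    "Escherichia coli":      "AGAGTTTGATCCTGGCTCAGATTGAACGCTGGCGGCAGGCCTAACACAT",
    "Bacillus subtilis":     "AGAGTTTGATCCTGGCTCAGGACGAACGCTGGCGGCGTGCCTAATACAT",
    "Nitrosomonas europaea": "AGAGTTTGATCATGGCTCAGATTGAACGCTGGCGGCATGCTTTACACAT",
}

def classify(amplicon: str, min_identity: float = 0.8) -> str:
    """Return the best-matching reference taxon, or 'unclassified' below the threshold."""
    best_taxon, best_score = "unclassified", min_identity
    for taxon, ref_seq in reference_db.items():
        score = pairwise_identity(amplicon, ref_seq)
        if score > best_score:
            best_taxon, best_score = taxon, score
    return best_taxon

if __name__ == "__main__":
    environmental_read = "AGAGTTTGATCCTGGCTCAGATTGAACGCTGGCGGCAGGCCTAACACAT"
    print(classify(environmental_read))  # -> Escherichia coli (identical to that reference)
```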
Roles
Microorganisms are the backbone of all ecosystems, but even more so in areas where photosynthesis cannot take place due to a lack of light. In such zones, chemosynthetic microbes provide energy and carbon to the other organisms. Chemosynthetic microorganisms gain energy by oxidizing inorganic compounds such as hydrogen, nitrite, ammonia, elemental sulfur and iron(II). These organisms can be found in both aerobic and anaerobic environments. Chemosynthetic microorganisms are primary producers in extreme environments such as high-temperature geothermal settings. These chemotrophic organisms can also function in anoxic environments by using other electron acceptors for their respiration.
Other microbes are decomposers, with the ability to recycle nutrients from other organisms' waste products. These microbes play a critical role in biogeochemical cycles. The nitrogen cycle, the phosphorus cycle, the sulphur cycle, and the carbon cycle all depend on microorganisms in one way or another, and these cycles are interconnected, with microorganisms mediating key steps in each. For example, the nitrogen gas which makes up 78% of the Earth's atmosphere is unavailable to most organisms until it is converted to a biologically available form by the microbial process of nitrogen fixation. Through these biogeochemical cycles, microorganisms make nutrients such as nitrogen, phosphorus and potassium available in the soil. Unlike the nitrogen and carbon cycles, the phosphorus cycle does not produce stable gaseous species in the environment. Microorganisms play a role in solubilizing phosphate, improving soil health, and promoting plant growth.
Microbial interactions are also exploited in bioremediation, a technology employed to remove heavy metal contaminants from soil and wastewater using microorganisms. Microorganisms such as bacteria and fungi remove organic and inorganic pollutants by oxidizing or reducing them. Examples of microorganisms that play a role in the bioremediation of heavy metals include Pseudomonas, Bacillus, Arthrobacter, Corynebacterium, Methylosinus, Rhodococcus, Stereum hirsutum, methanogens, Aspergillus niger, Pleurotus ostreatus, Rhizopus arrhizus, Azotobacter, Alcaligenes, Phormidium valderium, and Ganoderma applanatum.
Symbiosis
Symbiosis is a close, long-term relationship between organisms of different species. Symbiosis can be ectosymbiosis (one organism lives on the surface of the other) or endosymbiosis (one organism lives inside the other). Symbiotic relationships can also exist between microorganisms that live closely together in a given environment. Symbiotic relationships are found at every level of the ecosystem and have contributed to shaping life. Microorganisms produce, change, and utilize nutrients and natural products in numerous ways, and this enables them to be ubiquitous. Microbes, especially bacteria, often engage in symbiotic relationships (either positive or negative) with other microorganisms or larger organisms. Plants and animals serve as habitats for microorganisms involved in mutualistic relationships. Such relationships are vital for the development of the microbes, and in return the microbes can protect their hosts against unfavorable changes in the environment or against predators by producing bioactive compounds. Although the organisms involved are physically small, symbiotic relationships among microbes are significant in eukaryotic processes and their evolution. The types of symbiotic relationship that microbes participate in include mutualism, commensalism, parasitism, and amensalism, which affect the ecosystem in many ways.
Mutualism
Mutualism is a close relationship between two different species in which each has a positive effect on the other. In mutualism, one partner provides a service to the other partner and also receives a service in return. Mutualism in microbial ecology is a relationship between microbial species and other species (for example, humans) that allows both sides to benefit. Microorganisms form mutualistic relationships with other microorganisms, plants, or animals. One example of microbe-microbe interaction would be syntrophy, also known as cross-feeding, of which Methanobacterium omelianskii is a classical example. This consortium is formed by an ethanol-fermenting organism and a methanogen. The ethanol-fermenting organism provides the archaeal partner with the H2 that this methanogen needs in order to grow and produce methane. Syntrophy has been hypothesized to play a significant role in energy- and nutrient-limited environments, such as the deep subsurface, where it can help microbial communities with diverse functional properties to survive, grow and produce the maximum amount of energy. Anaerobic oxidation of methane (AOM) is carried out by a mutualistic consortium of a sulfate-reducing bacterium and an anaerobic methane-oxidizing archaeon. The reaction used by the bacterial partner for the production of H2 is endergonic (and so thermodynamically unfavored); however, when coupled to the reaction used by the archaeal partner, the overall reaction becomes exergonic. Thus the two organisms are in a mutualistic relationship which allows them to grow and thrive in an environment that is deadly for either species alone. Lichens are an example of symbiotic organisms.
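The energetics of this kind of coupling can be sketched with rough, textbook-style numbers. The free-energy values below are illustrative assumptions, not measurements from this article; the point is only that a reaction with positive ΔG (unfavourable on its own) can proceed when coupled to a partner reaction so that the summed ΔG is negative.

```python
# Illustrative arithmetic only: both ΔG values are assumed, textbook-style placeholders.

dG_ethanol_oxidation = +9.6   # kJ per mol ethanol oxidised to acetate + 2 H2 (assumed value)
dG_methanogenesis = -131.0    # kJ per mol CH4 formed from 4 H2 + CO2 (assumed value)

# Two ethanol oxidations supply the four H2 consumed by one methanogenesis step.
dG_coupled = 2 * dG_ethanol_oxidation + dG_methanogenesis
print(f"Coupled reaction: {dG_coupled:.1f} kJ per mol CH4")  # negative, so thermodynamically favourable
```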
Microorganisms also engage in mutualistic relationships with plants, a typical example being the arbuscular mycorrhizal (AM) relationship, a symbiosis between plants and fungi. This relationship begins when chemical signals are exchanged between the plant and the fungus, leading to the metabolic stimulation of the fungus. The fungus then penetrates the epidermis of the plant's root and extends its highly branched hyphae into the cortical cells of the plant. In this relationship, the fungus gives the plant phosphate and nitrogen obtained from the soil, with the plant in return providing the fungus with carbohydrates and lipids obtained from photosynthesis. Microorganisms are also involved in mutualistic relationships with mammals such as humans. The host provides shelter and nutrients to the microorganisms, which in turn provide benefits such as helping the host's gastrointestinal tract develop and protecting the host from other detrimental microorganisms.
Commensalism
Commensalism, literally meaning "eating from the same table", is very common in the microbial world. It is a relationship between two species where one species benefits with no harm or benefit for the other species. Metabolic products of one microbial population are used by another microbial population without either gain or harm for the first population. There are many pairs of microbial species in which one carries out the oxidation and the other the reduction of the same chemical compound. For example, methanogens produce methane by reducing CO2 to CH4, while methanotrophs oxidise methane back to CO2.
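Written as balanced equations (standard textbook stoichiometry, not figures taken from this article), the two halves of this cross-feeding pair are:

```latex
\mathrm{CO_2 + 4\,H_2 \longrightarrow CH_4 + 2\,H_2O} \quad \text{(methanogenesis, a reduction)} \\
\mathrm{CH_4 + 2\,O_2 \longrightarrow CO_2 + 2\,H_2O} \quad \text{(aerobic methanotrophy, an oxidation)}
```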
Amensalism
Amensalism (also commonly known as antagonism) is a type of symbiotic relationship where one species/organism is harmed while the other remains unaffected. One example of such a relationship that takes place in microbial ecology is between the microbial species Lactobacillus casei and Pseudomonas taetrolens. When co-existing in an environment, Pseudomonas taetrolens shows inhibited growth and decreased production of lactobionic acid (its main product) most likely due to the byproducts created by Lactobacillus casei during its production of lactic acid. However, Lactobacillus casei shows no difference in its behaviour.
Microbial resource management
Biotechnology may be used alongside microbial ecology to address a number of environmental and economic challenges. For example, molecular techniques such as community fingerprinting or metagenomics can be used to track changes in microbial communities over time or assess their biodiversity. Managing the carbon cycle to sequester carbon dioxide and prevent excess methanogenesis is important in mitigating global warming, and the prospects of bioenergy are being expanded by the development of microbial fuel cells. Microbial resource management advocates a more progressive attitude towards disease, whereby biological control agents are favoured over attempts at eradication. Fluxes in microbial communities have to be better characterized for this field's potential to be realised. In addition, there are also clinical implications, as marine microbial symbioses are a valuable source of existing and novel antimicrobial agents, and thus offer another line of inquiry in the evolutionary arms race of antibiotic resistance, a pressing concern for researchers.
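As one concrete example of the biodiversity assessment mentioned above, community-profiling data are often summarized with the Shannon diversity index. The sketch below uses made-up taxon counts purely for illustration; the counts and the two "time points" are assumptions, not data from any real survey.

```python
import math

def shannon_diversity(counts):
    """Shannon index H' = -sum(p_i * ln p_i) over taxa with nonzero counts."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Hypothetical taxon abundance counts from two time points of the same habitat.
community_t1 = [120, 80, 40, 10, 5]  # made-up, relatively even community
community_t2 = [200, 20, 10, 3, 1]   # made-up, more uneven community

print(round(shannon_diversity(community_t1), 3))  # higher value = more diverse/even
print(round(shannon_diversity(community_t2), 3))
```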
In the built environment and human interaction
Microbes exist in all areas, including homes, offices, commercial centers, and hospitals. In 2016, the journal Microbiome published a collection of various works studying the microbial ecology of the built environment.
A 2006 study of pathogenic bacteria in hospitals found that their ability to survive varied by species, with some surviving for only a few days while others survived for months.
The lifespan of microbes in the home varies similarly. Generally, bacteria and viruses require a wet environment with a humidity of over 10 percent. E. coli can survive for a few hours to a day. Some bacteria can survive longer: Staphylococcus aureus potentially for weeks and, in the case of the spore-forming Bacillus anthracis, for years.
In the home, pets can be carriers of bacteria; for example, reptiles are commonly carriers of salmonella.
S. aureus is particularly common, and asymptomatically colonizes about 30% of the human population; attempts to decolonize carriers have met with limited success and generally involve mupirocin nasally and chlorhexidine washing, potentially along with vancomycin and cotrimoxazole to address intestinal and urinary tract infections.
Antimicrobials
Antimicrobials are substances capable of killing microorganisms. An antimicrobial can be an antibacterial (antibiotic), antifungal, or antiviral substance, and most of these substances are natural products or have been obtained from natural products. Natural products are therefore vital in the discovery of pharmaceutical agents. Most naturally obtained antibiotics are produced by organisms in the phylum Actinobacteria, with the genus Streptomyces responsible for most of the antibiotic substances produced by Actinobacteria. These natural products with antimicrobial properties belong to the terpenoid, spirotetronate, tetracenedione, lactam, and other groups of compounds. Examples include napyradiomycin, nomimicin, formicamycin, and isoikarugamycin. Some metals, particularly copper, silver, and gold, also have antimicrobial properties. Using antimicrobial copper-alloy touch surfaces is a technique that has begun to be used in the 21st century to prevent the transmission of bacteria. Silver nanoparticles have also begun to be incorporated into building surfaces and fabrics, although concerns have been raised about the potential side-effects of the tiny particles on human health. Due to the antimicrobial properties certain metals possess, products such as medical devices are made using those metals.
Evolution
Due to the high level of horizontal gene transfer among microbial communities, microbial ecology is also of importance to studies of evolution, and it contributes to the study of evolution in many different parts of the world. For example, the CRISPR dynamics and functions that have evolved in different microbial species allow a better understanding of human health.
| Biology and health sciences | Ecology | Biology |
1057147 | https://en.wikipedia.org/wiki/Myrmecia%20%28ant%29 | Myrmecia (ant) | Myrmecia is a genus of ants first established by Danish zoologist Johan Christian Fabricius in 1804. The genus is a member of the subfamily Myrmeciinae of the family Formicidae. Myrmecia is a large genus of ants, comprising at least 93 species that are found throughout Australia and its coastal islands, while a single species is only known from New Caledonia. One species has been introduced out of its natural distribution and was found in New Zealand in 1940, but the ant was last seen in 1981. These ants are commonly known as bull ants, bulldog ants or jack jumper ants, and are also associated with many other common names. They are characterized by their extreme aggressiveness, ferocity, and painful stings. Some species are known for the jumping behavior they exhibit when agitated.
Species of this genus are also characterized by their elongated mandibles and large compound eyes that provide excellent vision. They vary in colour and size, ranging from . While workers and queens are hard to distinguish from each other due to their similar appearance, males are identifiable by their perceptibly smaller mandibles. Almost all Myrmecia species are monomorphic, with little variation among workers of a given species. Some queens are ergatoid and have no wings, while others have either stubby or completely developed wings. Nests are mostly found in soil, but they can be found in rotten wood and under rocks. One species does not nest in the ground at all; its colonies can only be found in trees.
A queen will mate with one or more males, and during colony foundation she will hunt for food until the brood have fully developed. The life cycle of the ant from egg to adult takes several months. Myrmecia workers exhibit greater longevity in comparison to other ants, and workers are also able to reproduce with male ants. Myrmecia is one of the most primitive groups of ants on Earth, exhibiting behaviors that differ from those of other ants. Workers are solitary hunters and do not lead other workers to food. Adults are omnivores that feed on sweet substances, but the larvae are carnivores that feed on captured prey. Very few predators eat these ants due to their sting, but their larvae are often consumed by blindsnakes and echidnas, and a number of parasites infect both adults and brood. Some species are also effective pollinators.
Myrmecia stings are very potent, and the venom from these ants is among the most toxic in the insect world. In Tasmania, 3% of the human population are allergic to the venom of M. pilosula and can suffer life-threatening anaphylactic reactions if stung. People prone to severe allergic reactions can be treated with allergen immunotherapy (desensitisation).
Etymology and common names
The generic name Myrmecia derives from the Greek word Myrmec- (+ -ia), meaning "ant". In Western Australia, the Indigenous Australians called these ants kallili or killal.
Ants of this genus are popularly known as bulldog ants, bull ants, or jack jumper ants due to their ferocity and the way they hang off their victims using their mandibles, and also due to the jumping behaviour displayed by some species. Other common names include "inch ants", "sergeant ants", and "soldier ants". The jack jumper ant and other members of the Myrmecia pilosula species group are commonly known as "black jumpers", "hopper ants", "jumper ants", "jumping ants", "jumping jacks", and "skipper ants".
Taxonomy and evolution
Genetic evidence suggests that Myrmecia diverged from related groups about 100 million years ago (Mya). The subfamily Myrmeciinae, to which Myrmecia belongs, is believed to appear in the fossil record around 110 Mya. However, one study suggests that the age of the most recent common ancestor for Myrmecia and Nothomyrmecia is 74 Mya, and the subfamily is possibly younger than previously thought. Ants of the extinct genus Archimyrmex may possibly be the ancestor of Myrmecia. In Evans' vespoid scala, Myrmecia and other primitive ant genera such as Amblyopone and Nothomyrmecia exhibit behavior which is similar to that of a clade of soil-dwelling families of vespoid wasps. Four species groups form a paraphyletic assemblage while five species groups form a monophyletic assemblage.
Classification
Myrmecia was first established by Danish zoologist Johan Christian Fabricius in his 1804 publication Systema Piezatorum, in which seven species from the genus Formica were placed into the genus along with the description of four new species.
Myrmecia has been classified into numerous families and subfamilies; in 1858, British entomologist Frederick Smith placed it in the family Poneridae, subfamily Myrmicidae. It was placed in the subfamily Ponerinae by Austrian entomologist Gustav Mayr in 1862. This classification was short-lived as Mayr reclassified the genus into the subfamily Myrmicinae three years later. In 1877, Italian entomologist Carlo Emery classified the genus into the newly established subfamily Myrmeciidae, family Myrmicidae. Smith, who had originally established the Myrmicidae as a family in 1851, reclassified them as a subfamily in 1858. He again treated them as a family in 1871. Swiss myrmecologist Auguste Forel initially treated the Poneridae as a subfamily and classified Myrmecia as one of its constituent genera but later placed it in the Ponerinae. William H. Ashmead placed the genus in the subfamily Myrmeciinae in 1905, but it was later placed back in the Ponerinae in 1910 by American entomologist William Morton Wheeler. In 1954, Myrmecia was placed into the Myrmeciinae; this was the last time the genus was placed into a different ant subfamily.
In 1911, Emery classified the subgenera Myrmecia, Pristomyrmecia, and Promyrmecia, based on the shape of their mandibles. Wheeler established the subgenus Halmamyrmecia, and the ants placed in it were characterized by their jumping behavior. The taxon Wheeler described was not referred to in his later publications, and the genera Halmamyrmecia and Pristomyrmecia were synonymised by John Clark. At the same time, Clark reclassified the subgenus Promyrmecia as a full genus. He revised the whole subfamily Myrmeciinae in 1951, recognizing 118 species and subspecies in Myrmecia and Promyrmecia; five species groups were assigned to Myrmecia and eight species groups to Promyrmecia. This revision was rejected by entomologist William Brown due to the lack of morphological evidence that would make the two genera distinct from each other. Due to this, Brown classified Promyrmecia as a synonym of Myrmecia in 1953. Clark's revision was the last major taxonomic study on the genus before 1991, and only a single species was described in the intervening years. In 2015, four new Myrmecia ants were described by Robert Taylor, all exclusive to Australia. Currently, 94 species are described in the genus, but as many as 130 species may exist.
Under the present classification, Myrmecia is the only extant genus in the tribe Myrmeciini, subfamily Myrmeciinae. It is a member of the family Formicidae in the order Hymenoptera. The type species for the genus is M. gulosa, discovered by Joseph Banks in 1770 during his expedition with James Cook on HMS Endeavour. M. gulosa is among the earliest Australian insects to be described, and the specimen Banks collected is housed in the Joseph Banks Collection in the Natural History Museum, London. M. gulosa was described by Fabricius in 1775 under the name Formica gulosa and later designated as the type species of Myrmecia in 1840.
Genetics
The number of chromosomes per individual varies from one to over 70 among the species in the genus. The genome of M. pilosula is contained on a single pair of chromosomes (males have just one chromosome, as they are haploid). This is the lowest number possible for any animal, and workers of this species are homologous. Like M. pilosula, M. croslandi also has this minimal chromosome complement. While these ants have only a single pair of chromosomes, M. pyriformis has 41 chromosomes and M. brevinoda 42. The chromosome count for M. piliventris and M. fulvipes is two and 12, respectively. The genus Myrmecia retains many traits that are considered basal for all ants (i.e. workers foraging alone and relying on visual cues).
Species groups
Myrmecia contains a total of nine species groups. Originally, seven species groups were established in 1911, but this was raised to 13 in 1951; Promyrmecia had a total of eight, while Myrmecia only had five. M. maxima does not appear to be in a species group, as no type specimen is available.
Description
Myrmecia ants are easily noticeable due to their large mandibles, large compound eyes that provide excellent vision and a powerful sting that they use to kill prey. Each of their eyes contains 3,000 facets, making them the second largest in the ant world. Size varies widely, ranging from in length. The largest Myrmecia species is M. brevinoda, with workers measuring ; M. brevinoda workers are also the largest in the world. Almost all species are monomorphic, but M. brevinoda is the only known species where polymorphism exists. It is well known that two worker subcastes exist, but this does not distinguish them as two different polymorphic forms. This may be due to a lack of food during winter, or because the colonies examined were incipient. The division of labour is based on the size of the ant, rather than its age, with the larger workers foraging for food or keeping guard outside the nest, while the smaller workers tend to the brood.
Their colouration is variable; black combined with red and yellow is a common pattern, and many species have golden-coloured pubescence (hair). Many other species are brightly coloured, which warns predators to avoid them. The formicine ant Camponotus bendigensis is similar in appearance to M. fulvipes, and data suggest C. bendigensis is a Batesian mimic of M. fulvipes. The number of Malpighian tubules differs between castes; in M. dispar, males have 16 tubules, queens range from 23 to 26, and workers have 21 to 29.
Worker ants are usually the same size as each other, although this is not true for some species; worker ants of M. brevinoda, for example, vary in length from . The mandibles of the workers are long with a number of teeth, and the clypeus is short. The antennae consist of 12 segments and the eyes are large and convex. A study of the antennal sensilla of M. pyriformis identified eight distinct types. Large ocelli are always present.
Queens are usually larger than the workers, but are similar in colour and body shape. The head, node, and postpetiole are broader in the queen, and the mandibles are shorter and also broad. Myrmecia queens are unique in that particular species either have fully winged queens, queens with poorly developed wings, or queens without any wings. For example, M. aberrans and M. esuriens queens are ergatoid, meaning that they are wingless. Completely excavated nests showed no evidence of any winged queen residing within them. Some species have queens which are subapterous, meaning they are either wingless or only have rudiments of wings; the queens can be well developed with or without these wing buds. M. nigrocincta and M. tarsata are "brachypterous", where queens have small and rudimentary wings which render the queen flightless. Dealated queens with developed wings and thoraces are considered rare. In some species, such as M. brevinoda and M. pilosula, three forms of queens exist, with the dealated queens being the most recognisable.
Males are easy to identify due to their perceptibly broad and smaller mandibles. Their antennae consist of 13 segments, and are almost the same length as the ants' bodies. Ergatandromorph (an ant that exhibits both male and worker characteristics) males are known; in 1985, a male M. gulosa was collected before it hatched from its cocoon, and it had a long but excessively curved left mandible while the other mandible was small. On the right side of its body, it was structurally male, but the left side appeared female. The head was also longer on the female side, its colour was darker, and the legs and prothorax were smaller on the male side. Male genitalia are retracted into a genital cavity that is located in the posterior end of the gaster. The sperm is structurally similar to that of other animals, with an oval head and a long tail.
Among the largest larvae examined were those of M. simillima, reaching lengths of . The pupae are enclosed in dark cocoons.
Distribution and habitat
Almost all species in the genus Myrmecia are found in Australia and its coastal islands. M. apicalis is the only species not native to Australia and is only found in the Isle of Pines, New Caledonia. Only one ant has ever established nests outside its native range; M. brevinoda was first discovered in New Zealand in 1940 and the ant was recorded in Devonport in Auckland in 1948, 1965 and 1981 where a single nest was destroyed. Sources suggest the ant was introduced to New Zealand through human activity; they were found inside a wooden crate brought from Australia. While no eradication attempt was made by the New Zealand government, the ant has not been found in the country since 1981 and is presumed to have been eradicated.
Ants of this genus prefer to inhabit grasslands, forests, heath, urban areas and woodland. Nests are found in Callitris forest, dry marri forest, Eucalyptus woodland and forests, mallee scrub, in paddocks, riparian woodland, and wet and dry sclerophyll forests. They also live in dry sandplains and coastal plains. When a queen establishes a new colony, the nest is at first quite simple structurally. The nest gradually expands as the colony grows larger. Nests can be found in debris, decaying tree stumps, rotten logs, rocks, sand, and soil, and under stones. While most species nest underground, M. mjobergi is an arboreal nesting species found on epiphytic ferns of the genus Platycerium. Two types of nests have been described for this genus: a simple nest with a noticeable shaft inside, and a complex structure surrounded by a mound. Some species construct dome-shaped mounds containing a single entrance, but some nests have numerous holes that are constantly used and can extend several metres underground. Sometimes, these mounds can be 0.5 m (20 in) high. Workers decorate these nests with a variety of items, including charcoal, leaves, plant fragments, pebbles, and twigs. Some species exploit warmth by decorating their nests with dry materials that heat quickly, providing the nest with solar energy traps.
Behaviour and ecology
Foraging
The genus Myrmecia is among the most primitive of all known living ants, and ants of the genus are considered specialist predators. Unlike most ants, workers are solitary hunters, and do not lay pheromone trails; nor do they recruit others to food. Tandem running does not occur, and workers carrying other workers as a method of transportation is rare or awkwardly executed. Although Myrmecia is not known to lay pheromone trails to food, M. gulosa is capable of inducing territorial alarm using pheromones while M. pilosula can attack en masse, suggesting these ants can also induce alarm pheromones. M. gulosa induces territorial alarm behaviour using pheromones from three sources; an alerting substance from the rectal sac, a pheromone found in the Dufour's gland, and an attack pheromone from the mandibular gland. Despite Myrmecia ants being among the most primitive ants, they exhibit some behaviours considered "advanced"; adults will sometimes groom each other and the brood, and distinct nest odors exist for each colony.
Most species are diurnal, and forage on the ground or onto low vegetation in search of food, but a few are nocturnal and only forage at night. Most Myrmecia ants are active during the warmer months, and are dormant during winter. However, M. pyriformis is a nocturnal species that is active throughout the whole year. M. pyriformis also has a unique foraging schedule; 65% of individuals who went out to forage left the nest in 40–60 minutes, while 60% of workers would return to the nest in the same duration of time at dusk. Foraging workers rely on landmarks for navigation back home. If displaced a short distance, they will scan their surroundings, and then rapidly move in the direction of the nest. M. vindex ants carry dead nest-mates out of their nests and place them on refuse piles, a behaviour known as necrophoresis.
Pollination
While pollination by ants is somewhat rare, several Myrmecia species have been observed pollinating flowers. For example, the orchid Leporella fimbriata is a myrmecophyte which can only be pollinated by the winged male ant M. urens. Pollination of this orchid usually occurs between April and June during warm afternoons, and may take several days until the short-lived males all die. The flower mimics M. urens queens, so the males move from flower to flower in an attempt to copulate with it. M. nigrocincta workers have been recorded visiting flowers of Eucalyptus regnans and Senna acclinis, and are considered a potential pollination vector for E. regnans trees. Although Senna acclinis is self-compatible, the inability of M. nigrocincta to appropriately release pollen would restrict its capacity to effect pollination. Foraging M. pilosula workers are regularly observed on the inflorescences of Prasophyllum alpinum (mostly pollinated by wasps of the family Ichneumonidae). Although pollinia are often seen in the ants' jaw, they have a habit of cleaning their mandibles on the leaves and stems of nectar-rich plants before moving on, preventing pollen exchange. Whether M. pilosula contributes to pollination is unknown.
Diet
Despite their ferocity, adults are nectarivores, consuming honeydew (a sweet, sticky liquid found on leaves, deposited from various insects), nectar, and other sweet substances. The larvae, however, are carnivorous. After they reach a certain size, they are fed insects that foragers capture and kill. The workers also regurgitate food for other ants to consume. Young ants are rarely fed food regurgitated by adults. Adult workers prey on a variety of insects and arthropods, such as beetles, caterpillars, earwigs, Ithone fusca, Perga sawflies, and spiders. Other prey include invertebrates such as bees, cockroaches, crickets, wasps and other ants; in particular, workers prey on Orthocrema ants (a subgenus of Crematogaster) and Camponotus, although this is risky since these ants are able to call for help through chemical signals. Slaters, earthworms, scale insects, frogs, lizards, grass seeds, possum feces and kangaroo feces are also collected as food. Flies such as the housefly and blowfly are consumed. Some species, such as M. pilosula, will only attack small fly species and ignore larger ones. Nests of the social spider Delena cancerides are often invaded by M. pyriformis ants, and nests once housing these spiders are filled with debris such as twigs and leaves by the workers, rendering them useless. These "scorched earth" tactics prevent the spiders competing with the ants. M. gulosa attacks Christmas beetles, but workers later bury them.
Myrmecia is one of the very few genera where the workers lay trophic eggs, or infertile eggs laid as food for viable offspring. Workers laying trophic eggs have only been reported in two species; these species are M. forceps and M. gulosa. Depending on the species, colonies specialise in trophallaxis; queens and larvae eat eggs that are laid by worker individuals, but the workers do not feed on eggs. Neither adults nor larvae consume food during winter, but cannibalism among larvae is known to occur throughout the year. The larvae only cannibalise each other; this is most likely to happen when no dead insects are available.
Predators, parasites and associations
Myrmecia ants deter many potential predators due to their sting. The blindsnake Ramphotyphlops nigrescens consumes the larvae and pupae of Myrmecia, while avoiding the potent sting of the adults, to which it is vulnerable. The short-beaked echidna (Tachyglossus aculeatus) also eats the eggs and larvae. Nymphs of the assassin bug Ptilocnemus lemur lure these ants to themselves by waving their hind legs, enticing the ant to approach and attempt to sting them. Body remains of Myrmecia have been found in the stomach contents of the eastern yellow robin (Eopsaltria australis). The Australian magpie (Gymnorhina tibicen), the black currawong (Strepera versicolor), and the white-winged chough (Corcorax melanorhamphos) prey on these ants, but few are successfully taken.
The host association between Myrmecia and eucharitid wasps began several million years ago; M. forficata larvae are the host to Austeucharis myrmeciae, the first recorded eucharitid parasitoid of an ant, and Austeucharis fasciiventris is a parasitoid of M. gulosa pupae. M. pilosula is affected by a gregarine parasite that changes an ant's colour from its typical black appearance to brown. This was discovered when brown workers were dissected and found to have gregarinasina spores, while black workers showed no spores. Another unidentified gregarine parasite is known to infect the larvae of M. pilosula and other Myrmecia species. This gregarine parasite also softens the ant's cuticle. Other parasites include Beauveria bassiana, Paecilomyces lilacinus, Chalcura affinis, Tricoryna wasps, and various mermithid nematodes.
M. hirsuta and M. inquilina are the only known species in this genus that are inquilines and live in other Myrmecia colonies. An M. inquilina queen has been found in an M. vindex colony. Myrmecia is a larval attendant to the butterfly Theclinesthes serpentata (saltbush blue), while some species, particularly M. nigrocincta, enslave other ant species, notably those in the genus Leptomyrmex. M. nigriceps ants are able to enter another colony of the same species without being attacked, as they may be unable to recognise alien conspecifics, nor do they try to distinguish nestmates from ants of another colony. Formicoxenus provancheri and M. brevinoda share a form of symbiotic relationship known as xenobiosis, where one species of ant will live with another and raise their young separately, with M. brevinoda being the host. Solenopsis may sometimes nest in Myrmecia colonies, as a single colony was found to have three or four Solenopsis nests inside. Lagria beetles and rove beetles in the genus Heterothops dwell inside colonies and skinks and frogs have also been found living unmolested within Myrmecia nests. Metacrinia nichollsi, for example, has been reported living inside M. regularis colonies.
Life cycle
Like other ants, Myrmecia ants begin as an egg. If the egg is fertilised, the ant becomes a diploid female; if not, it becomes a haploid male. They develop through complete metamorphosis, meaning that they pass through larval and pupal stages before emerging as adults.
During the process of founding a colony, as many as four queens cooperate with each other to find a suitable nesting ground, but after the first generation of workers is born, they fight each other until one queen is left alive. However, occasional colonies are known to have as many as six queens coexisting peacefully in the presence of workers. A queen searches for a suitable nest site to establish her colony, and excavates a small chamber in the soil or under logs and rocks, where she takes care of her young. A queen also hunts for prey instead of staying in her nest, a behaviour known as semi-claustral colony founding. Although queens do provide sufficient amounts of food to feed their larvae, the first workers are "nanitics" (or minims), smaller than the smallest workers encountered in older developed colonies. Several species do not have any worker caste, and solitary queens will raid a colony, kill the residing queen, and take over the colony. The first generation of workers may take a while to fully develop into adults; for example, M. forficata eggs take around 100 days to fully develop, while other species may take up to eight months.
Queens lay around eight eggs, but fewer than half of these eggs develop. Some species, such as M. simillima and M. gulosa, lay their eggs singly on the colony floor, while M. pilosula ants may lay eggs in a clump. These clumps have two to 30 eggs each with no larvae present. Certain other Myrmecia species likewise do not lay their eggs singly, forming clumps of eggs instead. The larvae are capable of crawling short distances without the assistance of adult workers, and workers will cover the larvae in dirt to help them spin into a cocoon. If cocoons are isolated from a colony, they are capable of shedding their skins before hatching, allowing themselves to advance to full pigmentation. Sometimes, a newborn can emerge from its pupa without the assistance of other ants. Once these ants are born, they are able to identify distinct tasks, a well-known primitive trait. Myrmecia lifespans vary in each species, but their longevity is greater than that of many ant genera: M. nigrocincta and M. pilosula have a lifespan of one year, while M. nigriceps workers can live up to 2.2 years. The oldest recorded worker was a M. vindex, living up to 2.6 years. If a colony is deprived of workers, queens are able to revert to colony-founding behaviours until a sustainable workforce emerges. A colony may also emigrate to a new nesting spot altogether.
Reproduction
Winged, virgin queens and males, known as alates, appear in colonies during January, before their nuptial flight. Twenty females or fewer are found in a single colony, while males are much more common. The nuptial flight begins at different times for each species; they have been recorded in mid-summer to autumn (January to early April), but there is one case of a nuptial flight occurring from May to July. Ideal conditions for nuptial flight are hot stormy days with windspeeds of 30 km/h (18 mi/h) and temperatures reaching 30 °C (86 °F), and elevations of 91 metres (300 ft). Nuptial flights are rarely recorded due to queens leaving their nest singly, although as many as four queens may leave the nest at the same time. Species are both polygynous and polyandrous, with queens mating with one to ten males. Polygynous and polyandrous societies can occur in a single nest, but particular species are either primarily polygynous or primarily polyandrous. For example, nearly 80% of tested M. pilosula colonies are polygynous while M. pyriformis colonies are mostly polyandrous. Nuptial flight takes place during the morning and can last until late afternoon. When the alates leave the nest, most species launch themselves into the air from trees and shrubs, although others launch themselves off the ground. Queens discharge a glandular secretion from the tergal gland, which males are strongly attracted to. As many as 1,000 alates will gather to mate. A queen was once found to have five or six males attempting to copulate with her. The queen is unable to bear the weight of the large number of males trying to mate with her, and will drop to the ground, with the ants dispersing later on. M. pulchra queens are ergatoid and cannot fly; the males meet the queen out in an open area away from the nest and mate, and these queens do not return to their nest after mating.
Both independent and dependent colony foundation can occur after mating. Isolation by distance (IBD) patterns have been recorded with M. pilosula queens, where nests located closer together were more genetically related to each other than nests further away. Independent colony foundation is closely associated with queens which engage in nuptial flight in areas far from their home colony, showing that dependent colony foundation mostly occurs if they mate near their nest. In some cases, queens could seek adoption into alien colonies if there are no suitable areas to find a nest or independent colony foundation cannot be carried out. Other queens could try to return to their home nest after nuptial flight, but they may end up in another nest near the nest they originally came from. In multiple-queen societies, the egg-laying queens are generally unrelated to one another, but one study showed that it is possible for multiple queens in the same colony to be genetically related to each other. Depending on the species, the number of individuals present in a colony can range from 50 to over 2,200. A colony with fewer than 100 workers is not considered a mature colony. M. dispar colonies have around 15 to 329 ants, M. nigrocincta have over 1,000, M. pyriformis have from 200 to over 1,400 and M. gulosa have nearly 1,600. A colony can last for a number of years. Foraging behaviour among smaller workers which never usually leave the nest can be a sign of a colony's impending demise.
Workers are known to produce their own eggs, but these eggs are unfertilised and hatch into male ants. There is a chance of workers attacking a particular individual who has successfully produced male offspring due to a change in the worker's cuticular hydrocarbons; cuticular hydrocarbons are believed to play a vital role in the regulation of reproduction. However, this is not always the case. Myrmecia is one of several ant genera which possess gamergate workers, where a female worker is able to reproduce with mature males when the colony is lacking a queen. Myrmecia workers are highly fertile and can successfully mate with males. A colony of M. pyriformis without a queen was collected in 1998 and kept in captivity, during which time the gamergates produced viable workers for three years. Ovarian dissections showed that three workers of this colony mated with males and produced female workers. Queens have bigger ovaries than the workers, with 44 ovarioles while workers have 8 to 14. A spermatheca is present in M. gulosa workers; eight dissected individuals showed spermathecae structurally similar to those found in queens, although these contained no sperm. Why the queen was not replaced is still unknown.
Vision
While most ants have poor eyesight, Myrmecia ants have excellent vision. This trait is important to them, since Myrmecia primarily relies on visual cues for navigation. These ants are capable of discriminating the distance and size of objects moving nearly a metre away. Winged alates are only active during the day, as they can see better. Members of a colony have different eye structures due to each individual fulfilling different tasks, and nocturnal species have larger ommatidia in comparison to those that are active during the day. Facet lenses also vary in size; for example, the diurnal species M. croslandi has a smaller lens in comparison to M. nigriceps and M. pyriformis which have larger lenses. Myrmecia ants have three photoreceptors that can see UV light, meaning they are capable of seeing colours that humans cannot. Their vision is said to be better than that of some mammals, such as cats, dogs or wallabies. Despite their excellent vision, worker ants of this genus find it difficult to find their nests at night, due to the difficulty of finding the landmarks they use to navigate. They are thus more likely to return to their nests the following morning, walking slowly with long pauses.
Sting
Myrmecia workers and queens possess a sting described as "sharp in pain with no burning." The pain may last for several minutes. On the Starr sting pain scale, which compares the overall pain of hymenopteran stings on a four-point scale, Myrmecia stings were ranked from 2–3 in pain, described as "painful" or "sharply and seriously painful". Unlike in honeybees, the sting lacks barbs, and so the stinger is not left in the area the ant has stung, allowing the ants to sting repeatedly without any harm to themselves. The retractable sting is located in their abdomen, attached to a single venom gland via the venom sac, where the venom accumulates. Exocrine glands known in some species produce the venom compounds that are later injected into victims. Examined workers of larger species have long and very potent stingers, with some stings measuring .
Interaction with humans
Myrmecia is one of the best-known genera of ants. Myrmecia ants usually display defensive behaviour only around their nests, and are more timid while foraging. However, most species are extremely aggressive towards intruders; a few, such as M. tarsata, are timid, and the workers retreat into their nest instead of pursuing the intruder. If a nest is disturbed, a large force of workers rapidly swarms out of their nest to attack and kill the intruder. Some species, particularly those of the M. nigrocincta and M. pilosula species groups, are capable of jumping several inches when they are agitated after their nest has been disturbed; jumper ants propel their jumps by a sudden extension of their middle and hind legs. M. pyriformis is considered the most dangerous ant in the world by the Guinness World Records. M. inquilina is the only species of this genus that is considered vulnerable by the IUCN, although the conservation status needs updating.
Fatalities associated with Myrmecia stings are well known, and have been attested to by multiple sources. In 1931 two adults and an infant girl from New South Wales died from ant stings, possibly from M. pilosula or M. pyriformis. Another fatality was reported in 1963 in Tasmania. Between 1980 and 2000, there were six recorded deaths, five in Tasmania and one in New South Wales. Four of these deaths were due to M. pilosula, while the remaining two died from a M. pyriformis sting. Half of the victims had known ant-sting allergies, but only one of the victims was carrying adrenaline before being stung. Most victims died within 20 minutes of being stung, but one of the victims died in just five minutes from a M. pyriformis sting. No death has been officially recorded since 2003, but M. pilosula may have been responsible for the death of a man from Bunbury in 2011. Prior to the establishment of a desensitisation program, Myrmecia stings caused one fatality every four years.
Venom
Each Myrmecia species has different venom components, so people who are allergic to ants are advised to stay away from Myrmecia, especially from species they have never encountered before. Based on five species, the median lethal dose (LD50) is 0.18–0.35 mg/kg, making it among the most toxic venoms in the insect world. The toxicity of the venom may have evolved due to the intense predation by animals and birds during the day, since Myrmecia is primarily diurnal. In Tasmania, 2–3% of the human population is allergic to M. pilosula venom. In comparison, only 1.6% of people are allergic to the venom of the western honeybee (Apis mellifera), and 0.6% to the venom of the European wasp (Vespula germanica). In a 2011 Australian ant-venom allergy study, the objective of which was to determine what native Australian ants were associated with ant sting anaphylaxis, 265 of the 376 participants in the study reacted to the sting of several Myrmecia species. Of these, the majority of patients (176) reacted to M. pilosula venom and to those of several other species. In Perth, M. gratiosa was responsible for most cases of anaphylaxis due to ant stings, while M. nigriscapa and M. ludlowi were responsible for two cases. The green-head ant (Rhytidoponera metallica) was the only ant other than Myrmecia species to cause anaphylaxis in patients. Dogs are also at risk of death from Myrmecia ants; renal failure has been recorded in dogs experiencing mass envenomation, and one dog was euthanised due to its deteriorating health despite treatment. Sensitivity is persistent for many years. Pilosulin 3 has been identified as a major allergen in M. pilosula venom, while pilosulin 1 and pilosulin 4 are minor allergens.
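To make the quoted per-kilogram figure concrete, the sketch below does the unit arithmetic only: it converts the LD50 range given above into an absolute dose for an assumed 20 g test animal. The animal mass is an illustrative assumption, not a value from this article.

```python
# Simple unit conversion: LD50 expressed in mg per kg of body mass -> absolute dose.

ld50_range_mg_per_kg = (0.18, 0.35)  # range quoted above for Myrmecia venom
animal_mass_kg = 0.020               # assumed 20 g test animal (illustrative only)

for ld50 in ld50_range_mg_per_kg:
    dose_micrograms = ld50 * animal_mass_kg * 1000
    print(f"LD50 {ld50} mg/kg -> about {dose_micrograms:.1f} micrograms of venom")
```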
Sting treatment
The nature of treatment for a Myrmecia sting depends on the severity of the sting; remedies such as the commercial treatment Stingose and antihistamine tablets are used to reduce the pain. Indigenous Australians use bush remedies to treat Myrmecia stings, such as rubbing the tips of bracken ferns onto the stung area. Carpobrotus glaucescens is also used to treat stung areas, using juices that are squeezed and rubbed onto the area, which quickly relieves the pain from the sting.
Emergency treatment is only needed if a person is showing signs of a severe allergic reaction. Prior to calling for help, stung persons should be laid down, and their legs elevated. An EpiPen or an Anapen is given to people at risk of anaphylaxis, to use in case they are stung. If someone experiences anaphylactic shock, adrenaline and intravenous infusions are required, and those who suffer cardiac arrest require resuscitation. Desensitisation (also called allergy immunotherapy) is offered to those who are susceptible to M. pilosula stings, and the program has shown effectiveness in preventing anaphylaxis. However, the standardisation of M. pilosula venom is not validated, and the program is poorly funded. The Royal Hobart Hospital and the Royal Adelaide Hospital are the only known hospitals to run desensitisation programs. During immunotherapy, patients are given an injection of venom under the skin. The first dose is small, but dosage gradually increases. This sort of immunotherapy is designed to change how the immune system reacts to increased doses of venom entering the body.
Before venom immunotherapy, whole-body extract immunotherapy was widely used due to its apparent effectiveness, and it was the only immunotherapy used for ant stings. However, fatal failures were reported, and this led scientists to research alternative methods of desensitisation. Before 1986 allergic reactions were not recorded and there was no study on Myrmecia sting venom; whole-body extracts were later used on patients during the 1990s, but this was found to be ineffective and was subsequently withdrawn. In 2003, ant venom immunotherapy was shown to be safe and effective against Myrmecia venom.
Prevention
Myrmecia ants are frequently encountered by humans, and avoiding them is difficult. They defend their nests aggressively, so visible nests should be avoided. Wearing closed footwear such as boots and shoes can reduce the risk of getting stung, although these ants are capable of stinging through fabric. Most stings occur while gardening, when people are unaware of the ants' presence. Eliminating nearby nests or moving to areas with low Myrmecia populations significantly decreases the chances of getting stung.
Human uses
Due to their large mandibles, Myrmecia ants have been used as surgical sutures to close wounds.
Cultural representations
The ant is featured on a postage stamp and on an uncirculated coin which are part of the Things That Sting issue by Australia Post, and M. gulosa is the emblem for the Australian Entomological Society. Myrmecia famously appears in the philosopher Arthur Schopenhauer's major work, The World as Will and Representation, as a paradigmatic example of strife and constant destruction endemic to the "will to live".
Notable Australian poet Diane Fahey wrote a poem about Myrmecia, which is based on Schopenhauer's description, and a music piece written by German composer Karola Obermüller was named after the ant.
| Biology and health sciences | Hymenoptera | Animals |
1059613 | https://en.wikipedia.org/wiki/Space%20capsule | Space capsule | A space capsule is a spacecraft designed to transport cargo, scientific experiments, and/or astronauts to and from space. Capsules are distinguished from other spacecraft by the ability to survive reentry and return a payload to the Earth's surface from orbit or sub-orbit, and are distinguished from other types of recoverable spacecraft by their blunt shape, not having wings and often containing little fuel other than what is necessary for a safe return. Capsule-based crewed spacecraft such as Soyuz or Orion are often supported by a service or adapter module, and sometimes augmented with an extra module for extended space operations. Capsules make up the majority of crewed spacecraft designs, although one crewed spaceplane, the Space Shuttle, has flown in orbit.
Current examples of crewed space capsules include Soyuz, Shenzhou, and Dragon 2. Examples of new crew capsules currently in development include NASA's Orion, Boeing's Starliner, Russia's Orel, India's Gaganyaan, and China's Mengzhou. Historic examples of crewed capsules include Vostok, Mercury, Voskhod, Gemini, and Apollo, and active programs include the New Shepard launches. A crewed space capsule must be able to sustain life in an often demanding thermal and radiation environment in the vacuum of space. It may be expendable (used once, like Soyuz) or reusable (like Crew Dragon).
History
Vostok
The Vostok was the Soviet Union's first crewed space capsule. The first human spaceflight was Vostok 1, accomplished on April 12, 1961 by cosmonaut Yuri Gagarin.
The capsule was originally designed for use both as a camera platform for the Soviet Union's first spy satellite program, Zenit, and as a crewed spacecraft. This dual-use design was crucial in gaining Communist Party support for the program. The design used a spherical reentry module, with a biconic descent module containing attitude control thrusters, on-orbit consumables, and the retro rocket for orbit termination. The basic design has remained in use for some 40 years, gradually adapted for a range of other uncrewed satellites.
It was a single-seat capsule that was 4.4 meters long and 2.4 meters in diameter, weighing 4.73 tonnes at launch. The reentry module was completely covered in ablative heat shield material, in diameter, weighing . The capsule was covered with a nose cone to maintain a low-drag profile for launch, with a cylindrical interior cabin approximately in diameter nearly perpendicular to the capsule's longitudinal axis. The cosmonaut sat in an ejection seat with a separate parachute for escape during a launch emergency and landing during a normal flight. The capsule had its own parachute for landing on the ground. Although official sources stated that Gagarin had landed inside his capsule, a requirement for qualifying as a first crewed spaceflight under International Aeronautical Federation (IAF) rules, it was later revealed that all Vostok cosmonauts ejected and landed separately from the capsule. The capsule was serviced by an aft-facing conical equipment module long by , weighing containing nitrogen and oxygen breathing gasses, batteries, fuel, attitude control thrusters, and the retrorocket. It could support flights as long as ten days. Six Vostok launches were successfully conducted, the last two pairs in concurrent flights. The longest flight was just short of five days, on Vostok 5 on June 14–19, 1963.
Since the attitude control thrusters were located in the instrument module, which was discarded immediately prior to reentry, the reentry module's path and orientation could not be actively controlled. This meant that the capsule had to be protected from reentry heat on all sides, determining the spherical design (as opposed to Project Mercury's conical design, which allowed for maximum volume while minimizing the heat shield diameter). During reentry, the heat of atmospheric friction is so great that air molecules around the capsule are ionized, creating a layer of plasma around the capsule which blocks radio communication with the ground. However, ionized gases in the plasma layer can also be used to create an artificial radio window, allowing communication signals to be transmitted and received despite the interference. Some control of the capsule's reentry orientation was possible by offsetting its center of gravity. Proper orientation, with the cosmonaut's back to the direction of flight, was necessary in order to best withstand the reentry deceleration, which reached 8 to 9 g.
Voskhod
The Vostok design was modified to permit carrying multi-cosmonaut crews, and flown as two flights of the Voskhod programme. The cylindrical interior cabin was replaced with a wider, rectangular cabin which could hold either three cosmonauts seated abreast (Voskhod 1), or two cosmonauts with an inflatable airlock in between them, to permit extravehicular activity (Voskhod 2). A backup solid-fuel retro rocket was added to the top of the descent module. Vostok's ejection seat was removed to save space (thus there was no provision for crew escape in the event of a launch or landing emergency). The complete Voskhod spacecraft weighed .
Lack of space meant that the crew members of Voskhod 1 did not wear space suits. Both Voskhod 2 crew members wore spacesuits, as it involved an EVA by cosmonaut Alexei Leonov. An airlock was needed because the vehicle's electrical and environmental systems were air-cooled, and complete capsule depressurization would lead to overheating. The airlock weighed , was in diameter, high when collapsed for launch. When extended in orbit, it was long, had an internal diameter of and an external diameter of . The second crew member wore a spacesuit as a precaution against accidental descent module depressurization. The airlock was jettisoned after use.
The lack of ejection seats meant that the Voskhod crew would return to Earth inside their spacecraft unlike the Vostok cosmonauts who ejected and parachuted down separately. Because of this, a new landing system was developed, which added a small solid-fuel rocket to the parachute lines. It fired as the descent module neared touchdown, providing a softer landing.
Mercury
The Mercury program was the United States' first crewed space program. It ran from 1958 through 1963, with the goal of putting a human in orbit around the Earth and returning him safely. The program used a small capsule attached to a booster rocket to achieve orbit. The development of the Mercury capsule began in earnest after NASA selected the McDonnell Aircraft Corporation as its contractor in 1959. The Mercury spacecraft's principal designer was Maxime Faget, who started research for human spaceflight during the time of the NACA. It was long and wide; with the launch escape system added, the overall length was . With of habitable volume, the capsule was just large enough for a single crew member. Inside were 120 controls: 55 electrical switches, 30 fuses and 35 mechanical levers. The heaviest spacecraft, Mercury-Atlas 9, weighed fully loaded. Its outer skin was made of René 41, a nickel alloy able to withstand high temperatures.
The spacecraft was cone shaped, with a neck at the narrow end. It had a convex base, which carried a heat shield (Item 2 in the diagram below) consisting of an aluminum honeycomb covered with multiple layers of fiberglass. Strapped to it was a retropack (1) consisting of three rockets deployed to brake the spacecraft during reentry. Between these were three minor rockets for separating the spacecraft from the launch vehicle at orbital insertion. The straps that held the package could be severed when it was no longer needed. Next to the heat shield was the pressurized crew compartment (3). Inside, an astronaut would be strapped to a form-fitting seat with instruments in front of him and with his back to the heat shield. Underneath the seat was the environmental control system supplying oxygen and heat, scrubbing the air of CO2, vapor and odors, and (on orbital flights) collecting urine. The recovery compartment (4) at the narrow end of the spacecraft contained three parachutes: a drogue to stabilize free fall and two main chutes, a primary and reserve. Between the heat shield and inner wall of the crew compartment was a landing skirt, deployed by letting down the heat shield before landing. On top of the recovery compartment was the antenna section (5) containing both antennas for communication and scanners for guiding spacecraft orientation. Attached was a flap used to ensure the spacecraft was faced heat shield first during reentry. A launch escape system (6) was mounted to the narrow end of the spacecraft containing three small solid-fueled rockets which could be fired briefly in a launch failure to separate the capsule safely from its booster. It would deploy the capsule's parachute for a landing nearby at sea. ( | Technology | Basics_6 | null |
1059742 | https://en.wikipedia.org/wiki/Fire%20brick | Fire brick | A fire brick, firebrick, fireclay brick, or refractory brick is a block of ceramic material used in lining furnaces, kilns, fireboxes, and fireplaces. A refractory brick is built primarily to withstand high temperature, but will also usually have a low thermal conductivity for greater energy efficiency. Usually dense fire bricks are used in applications with extreme mechanical, chemical, or thermal stresses, such as the inside of a wood-fired kiln or a furnace, which is subject to abrasion from wood, fluxing from ash or slag, and high temperatures. In other, less harsh situations, such as in an electric or natural gas fired kiln, more porous bricks, commonly known as "kiln bricks", are a better choice. They are weaker, but they are much lighter and easier to form and insulate far better than dense bricks. In any case, firebricks should not spall, and their strength should hold up well during rapid temperature changes.
Manufacture
In the making of firebrick, fire clay is fired in the kiln until it is partly vitrified. For special purposes, the brick may also be glazed. There are two standard sizes of fire brick: and . Also available are firebrick "splits" which are half the thickness and are often used to line wood stoves and fireplace inserts. The dimensions of a split are usually . Fire brick was first invented in 1822 by William Weston Young in the Neath Valley of Wales.
High temperature applications
The silica fire bricks that line steel-making furnaces are used at temperatures up to , which would melt many other types of ceramic, and in fact part of the silica firebrick liquefies. High-temperature Reusable Surface Insulation (HRSI), a material with the same composition, was used in the insulating tiles of the Space Shuttle.
Non-ferrous metallurgical processes use basic refractory bricks because the slags used in these processes readily dissolve the "acidic" silica bricks. The most common basic refractory bricks used in smelting non-ferrous metal concentrates are "chrome-magnesite" or "magnesite-chrome" bricks (depending on the relative ratios of magnesite and chromite ores used in their manufacture).
Lower temperature applications
A range of other materials find use as firebricks for lower temperature applications. Magnesium oxide is often used as a lining for furnaces. Silica bricks are the most common type of bricks used for the inner lining of furnaces and incinerators. As the inner lining is usually of a sacrificial nature, fire bricks of higher alumina content may be employed to lengthen the duration between re-linings. Very often cracks can be seen in this sacrificial inner lining shortly after it is put into operation. They reveal that more expansion joints should have been included in the first place, but they now act as expansion joints themselves and are of no concern as long as structural integrity is not affected. Silicon carbide, with high abrasive strength, is a popular material for hearths of incinerators and cremators. Common red clay brick may be used for chimneys and wood-fired ovens.
Potential use to store energy
Firebricks, with their ability to withstand high temperatures and store heat, offer a promising solution for storing energy. These refractory bricks can be used to store industrial process heat, leveraging excess renewable electricity to create a low-cost, continuous heat source for industry. Due to their construction from common materials, firebrick storage systems are much more cost-effective than battery systems for thermal energy storage. Research across 149 countries indicates that using firebricks for heat storage can significantly reduce the need for electricity generation, battery storage, hydrogen production, and low-temperature heat storage. This approach could lower overall energy costs by about 1.8%, making firebricks a valuable tool in reducing the costs of transitioning to 100% clean, renewable energy.
| Technology | Building materials | null |
1060624 | https://en.wikipedia.org/wiki/Newton%27s%20rings | Newton's rings | Newton's rings is a phenomenon in which an interference pattern is created by the reflection of light between two surfaces, typically a spherical surface and an adjacent touching flat surface. It is named after Isaac Newton, who investigated the effect in 1666. When viewed with monochromatic light, Newton's rings appear as a series of concentric, alternating bright and dark rings centered at the point of contact between the two surfaces. When viewed with white light, it forms a concentric ring pattern of rainbow colors because the different wavelengths of light interfere at different thicknesses of the air layer between the surfaces.
History
The phenomenon was first described by Robert Hooke in his 1665 book Micrographia. Its name derives from the mathematician and physicist Sir Isaac Newton, who studied the phenomenon in 1666 while sequestered at home in Lincolnshire in the time of the Great Plague that had shut down Trinity College, Cambridge. He recorded his observations in an essay entitled "Of Colours". The phenomenon became a source of dispute between Newton, who favored a corpuscular nature of light, and Hooke, who favored a wave-like nature of light. Newton did not publish his analysis until after Hooke's death, as part of his treatise "Opticks" published in 1704.
Theory
The pattern is created by placing a very slightly convex curved glass on an optical flat glass. The two pieces of glass make contact only at the center. At other points there is a slight air gap between the two surfaces, increasing with radial distance from the center.
Consider monochromatic (single color) light incident from the top that reflects from both the bottom surface of the top lens and the top surface of the optical flat below it. The light passes through the glass lens until it comes to the glass-to-air boundary, where the transmitted light goes from a higher refractive index (n) value to a lower n value. The transmitted light passes through this boundary with no phase change. The reflected light undergoing internal reflection (about 4% of the total) also has no phase change. The light that is transmitted into the air travels a distance, t, before it is reflected at the flat surface below. Reflection at this air-to-glass boundary causes a half-cycle (180°) phase shift because the air has a lower refractive index than the glass. The reflected light at the lower surface returns a distance of (again) t and passes back into the lens. The additional path length is equal to twice the gap between the surfaces. The two reflected rays will interfere according to the total phase change caused by the extra path length 2t and by the half-cycle phase change induced in reflection at the flat surface. When the distance 2t is zero (lens touching optical flat) the waves interfere destructively, hence the central region of the pattern is dark.
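Stated compactly (this summary is an addition, following directly from the half-cycle phase shift described above), with gap thickness t and integer m the interference conditions are:

```latex
2t = m\lambda \;\Rightarrow\; \text{destructive interference (dark ring)}, \qquad
2t = \left(m + \tfrac{1}{2}\right)\lambda \;\Rightarrow\; \text{constructive interference (bright ring)}, \qquad m = 0, 1, 2, \ldots
```

Setting t = 0 in the first condition recovers the dark central spot at the point of contact.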
A similar analysis for illumination of the device from below instead of from above shows that in this case the central portion of the pattern is bright, not dark. When the light is not monochromatic, the radial position of the fringe pattern has a "rainbow" appearance.
Interference
In areas where the path length difference between the two rays is equal to an odd multiple of half a wavelength (λ/2) of the light waves, the reflected waves will be in phase, so the "troughs" and "peaks" of the waves coincide. Therefore, the waves will reinforce (add) through constructive interference and the resulting reflected light intensity will be greater. As a result, a bright area will be observed there.
At other locations, where the path length difference is equal to an even multiple of a half-wavelength, the reflected waves will be 180° out of phase, so a "trough" of one wave coincides with a "peak" of the other wave. This is destructive interference: the waves will cancel (subtract) and the resulting light intensity will be weaker or zero. As a result, a dark area will be observed there. Because of the 180° phase reversal due to reflection of the bottom ray, the center where the two pieces touch is dark.
This interference results in a pattern of bright and dark lines or bands called "interference fringes" being observed on the surface. These are similar to contour lines on maps, revealing differences in the thickness of the air gap. The gap between the surfaces is constant along a fringe. The path length difference between two adjacent bright or dark fringes is one wavelength λ of the light, so the difference in the gap between the surfaces is one-half wavelength. Since the wavelength of light is so small, this technique can measure very small departures from flatness. For example, the wavelength of red light is about 700 nm, so using red light the difference in height between two fringes is half that, or 350 nm, about 1/100 of the diameter of a human hair. Since the gap between the glasses increases radially from the center, the interference fringes form concentric rings. For glass surfaces that are not axially symmetric, the fringes will not be rings but will have other shapes.
Quantitative Relationships
For illumination from above, with a dark center, the radius of the Nth bright ring is given by
r_N = \sqrt{\left(N - \tfrac{1}{2}\right)\lambda R},
where
N is the bright-ring number, R is the radius of curvature of the glass lens the light is passing through, and λ is the wavelength of the light. The above formula is also applicable for dark rings for the ring pattern obtained by transmitted light.
Given the radial distance of a bright ring, r, and a radius of curvature of the lens, R, the air gap between the glass surfaces, t, is given to a good approximation by
t = \frac{r^2}{2R},
where the effect of viewing the pattern at an angle oblique to the incident rays is ignored.
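As a numerical illustration (the lens and ring number here are assumed values, not taken from the article), a short Python sketch evaluates the two relations above:

```python
import math

# Illustrative values only: a plano-convex lens with radius of curvature
# R = 1.0 m, illuminated from above with red light of wavelength 700 nm.
wavelength = 700e-9   # wavelength of the light, in metres
R = 1.0               # radius of curvature of the lens, in metres
N = 5                 # bright-ring number

# Radius of the Nth bright ring: r_N = sqrt((N - 1/2) * lambda * R)
r_N = math.sqrt((N - 0.5) * wavelength * R)

# Air gap between the surfaces at that radius: t = r^2 / (2R)
t = r_N ** 2 / (2 * R)

print(f"r_5 = {r_N * 1e3:.2f} mm, air gap t = {t * 1e9:.0f} nm")
# prints: r_5 = 1.77 mm, air gap t = 1575 nm
```

Note that the computed gap equals (N − 1/2)λ/2, consistent with the bright-ring condition above.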
Thin-film interference
The phenomenon of Newton's rings is explained on the same basis as thin-film interference, including effects such as "rainbows" seen in thin films of oil on water or in soap bubbles. The difference is that here the "thin film" is a thin layer of air.
| Physical sciences | Optics | Physics |
3141503 | https://en.wikipedia.org/wiki/Chemical%20nomenclature | Chemical nomenclature | Chemical nomenclature is a set of rules to generate systematic names for chemical compounds. The nomenclature used most frequently worldwide is the one created and developed by the International Union of Pure and Applied Chemistry (IUPAC).
IUPAC nomenclature ensures that each compound (and its various isomers) has only one formally accepted name, known as the systematic IUPAC name. However, some compounds may also have an alternative accepted name, known as the preferred IUPAC name, which is generally taken from the common name of that compound. Preferably, the name should also represent the structure or chemistry of a compound.
For example, the main constituent of white vinegar is , which is commonly called acetic acid and is also its recommended IUPAC name, but its formal, systematic IUPAC name is ethanoic acid.
The IUPAC's rules for naming organic and inorganic compounds are contained in two publications, known as the Blue Book and the Red Book, respectively. A third publication, known as the Green Book, recommends the use of symbols for physical quantities (in association with the IUPAP), while a fourth, the Gold Book, defines many technical terms used in chemistry. Similar compendia exist for biochemistry (the White Book, in association with the IUBMB), analytical chemistry (the Orange Book), macromolecular chemistry (the Purple Book), and clinical chemistry (the Silver Book). These "color books" are supplemented by specific recommendations published periodically in the journal Pure and Applied Chemistry.
Purpose of chemical nomenclature
The main purpose of chemical nomenclature is to disambiguate the spoken or written names of chemical compounds: each name should refer to one compound. Secondarily, each compound should have only one name, although in some cases some alternative names are accepted.
Preferably, the name should also represent the structure or chemistry of a compound. This is achieved by the International Chemical Identifier (InChI) nomenclature. However, the American Chemical Society's CAS numbers nomenclature does not represent a compound's structure.
The nomenclature used depends on the needs of the user, so no single correct nomenclature exists. Rather, different nomenclatures are appropriate for different circumstances.
A common name will successfully identify a chemical compound, given context. Without context, the name should indicate at least the chemical composition. To be more specific, the name may need to represent the three-dimensional arrangement of the atoms. This requires adding more rules to the standard IUPAC system (the Chemical Abstracts Service system (CAS system) is the one used most commonly in this context), at the expense of having names which are longer and less familiar.
The IUPAC system is often criticized for failing to distinguish relevant compounds (for example, for differing reactivity of sulfur allotropes, which IUPAC does not distinguish). While IUPAC has a human-readable advantage over CAS numbering, IUPAC names for some larger, relevant molecules (such as rapamycin) are barely human-readable, so common names are used instead.
Differing needs of chemical nomenclature and lexicography
It is generally understood that the purposes of lexicography versus chemical nomenclature vary and are to an extent at odds. Dictionaries of words, whether in traditional print or on the internet, collect and report the meanings of words as their uses appear and change over time. For internet dictionaries with limited or no formal editorial process, definitions —in this case, definitions of chemical names and terms— can change rapidly without concern for the formal or historical meanings. Chemical nomenclature however (with IUPAC nomenclature as the best example) is necessarily more restrictive: Its purpose is to standardize communication and practice so that, when a chemical term is used it has a fixed meaning relating to chemical structure, thereby giving insights into chemical properties and derived molecular functions. These differing purposes can affect understanding, especially with regard to chemical classes that have achieved popular attention. Examples of the effect of these are as follows:
resveratrol, a single compound defined clearly by this common name, but that can be confused, popularly, with its cis-isomer,
omega-3 fatty acids, a reasonably well-defined class of chemical structures that is nevertheless broad as a result of its formal definition, and
polyphenols, a fairly broad structural class with a formal definition, but where mistranslations and general misuse of the term relative to the formal definition has resulted in serious errors of usage, and so ambiguity in the relationship between structure and activity (SAR).
The rapid pace at which meanings can change on the internet, in particular for chemical compounds with perceived health benefits, ascribed rightly or wrongly, complicate the monosemy of nomenclature (and so access to SAR understanding). Specific examples appear in the Polyphenol article, where varying internet and common-use definitions conflict with any accepted chemical nomenclature connecting polyphenol structure and bioactivity.
History
Alchemical names
The nomenclature of alchemy is descriptive, but does not effectively represent the functions mentioned above. Opinions differ about whether this was deliberate on the part of the early practitioners of alchemy or whether it was a consequence of the particular (and often esoteric) theories according to which they worked. While both explanations are probably valid to some extent, it is remarkable that the first "modern" system of chemical nomenclature appeared at the same time as the distinction (by French chemist Antoine Lavoisier) between elements and compounds, during the late eighteenth century.
Méthode de nomenclature chimique
The French chemist Louis-Bernard Guyton de Morveau published his recommendations in 1782, hoping that his "constant method of denomination" would "help the intelligence and relieve the memory". The system was refined in Méthode de nomenclature chimique, published in 1787 in collaboration with Lavoisier, Claude Louis Berthollet, and Antoine-François de Fourcroy, and translated into English as Method of Chymical Nomenclature by James St. John in 1788. Méthode de nomenclature chimique contained handy dictionaries in which older chemical names were listed with their new counterparts and vice versa. New names were provided in both French and Latin for the benefit of an international readership. For a modern reader these dictionaries are still useful, but now to discover and understand older names, rather than the new. In the English version, the new names had been adapted to English, though they did not always align with current conventions. St. John used "acetat" instead of "acetate", for example. For gases, the word "gas" ("gaz") was being popularized by its consistent use in the new names, whereas the old names used the affix "air".
Traité élémentaire de chimie
The new system was presented to a wider audience in Lavoisier's 1789 textbook Traité élémentaire de chimie, translated into English as Elements of Chemistry by Robert Kerr in 1790, and it would be of great influence long after his death at the guillotine in 1794. The project was also endorsed by Swedish chemist Jöns Jakob Berzelius, who adapted the ideas for the German-speaking world.
Traité élémentaire de chimie included the first modern list of elements ("simple substances"). Also here were older names provided to explain their new counterparts. Some element names were new and received English versions similar to the French names. For the new "element" caloric, both the new and some of the "old" names (igneous fluid and matter of fire and of heat) were coined by Lavoisier, their discoverer. Most element names, however, were not new, so they retained their existing English versions. But their status as elements was new—a product of the chemical revolution.
Geneva Rules
The recommendations of Guyton were only for what would later be known as inorganic compounds. With the massive expansion of organic chemistry during the mid-nineteenth century and the greater understanding of the structure of organic compounds, the need for a less ad hoc system of nomenclature was felt just as the theoretical basis became available to make this possible. An international conference was convened in Geneva in 1892 by the national chemical societies, from which the first widely accepted proposals for standardization developed.
IUPAC
A commission was established in 1913 by the Council of the International Association of Chemical Societies, but its work was interrupted by World War I. After the war, the task passed to the newly formed International Union of Pure and Applied Chemistry, which first appointed commissions for organic, inorganic, and biochemical nomenclature in 1921 and continues to do so to this day.
Types of nomenclature
Nomenclature has been developed for both organic and inorganic chemistry. There are also designations having to do with structure; see Descriptor (chemistry).
Organic chemistry
Additive name
Conjunctive name
Functional class name, also known as a radicofunctional name
Fusion name
Hantzsch–Widman nomenclature
Multiplicative name
Replacement name
Substitutive name
Subtractive name
Inorganic chemistry
Compositional nomenclature
Type-I ionic binary compounds
For type-I ionic binary compounds, the cation (a metal in most cases) is named first, and the anion (usually a nonmetal) is named second. The cation retains its elemental name (e.g., iron or zinc), but the suffix of the nonmetal changes to -ide. For example, the compound LiBr is made of Li+ cations and Br− anions; thus, it is called lithium bromide. The compound BaO, which is composed of Ba2+ cations and O2− anions, is referred to as barium oxide.
The oxidation state of each element is unambiguous. When these ions combine into a type-I binary compound, their equal-but-opposite charges are neutralized, so the compound's net charge is zero.
Type-II ionic binary compounds
Type-II ionic binary compounds are those in which the cation does not have just one oxidation state. This is common among transition metals. To name these compounds, one must determine the charge of the cation and then render the name as would be done with Type-I ionic compounds, except that a Roman numeral (indicating the charge of the cation) is written in parentheses next to the cation name (this is sometimes referred to as Stock nomenclature). For example, for the compound FeCl3, the cation, iron, can occur as Fe2+ and Fe3+. In order for the compound to have a net charge of zero, the cation must be Fe3+ so that the three Cl− anions can be balanced (3+ and 3− balance to 0). Thus, this compound is termed iron(III) chloride. Another example could be the compound PbS2. Because the sulfide anion has a subscript of 2 in the formula (giving a 4− charge), the compound must be balanced with a 4+ charge on the cation (lead can form cations with a 4+ or a 2+ charge). Thus, the compound is made of one Pb4+ cation to every two S2− anions, the compound is balanced, and its name is written as lead(IV) sulfide.
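The charge-balancing reasoning used above can be sketched in a few lines of Python; the function name and inputs are illustrative only and are not part of any nomenclature standard:

```python
# Hypothetical helper: derive the Stock (Roman numeral) name of a simple
# type-II binary compound by balancing the total negative charge of the
# anions against the metal cations.
ROMAN = {1: "I", 2: "II", 3: "III", 4: "IV", 5: "V", 6: "VI", 7: "VII"}

def stock_name(metal, anion_name, anion_charge, metal_count, anion_count):
    total_negative = -anion_charge * anion_count        # e.g. three Cl- give 3
    cation_charge, remainder = divmod(total_negative, metal_count)
    if remainder:
        raise ValueError("cannot balance charges with a whole-number oxidation state")
    return f"{metal}({ROMAN[cation_charge]}) {anion_name}"

print(stock_name("iron", "chloride", -1, 1, 3))  # iron(III) chloride (FeCl3)
print(stock_name("lead", "sulfide", -2, 1, 2))   # lead(IV) sulfide (PbS2)
```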
An older system – relying on Latin names for the elements – is also sometimes used to name Type-II ionic binary compounds. In this system, the metal (instead of a Roman numeral next to it) has a suffix "-ic" or "-ous" added to it to indicate its oxidation state ("-ous" for lower, "-ic" for higher). For example, the compound FeO contains the Fe2+ cation (which balances out with the O2− anion). Since this oxidation state is lower than the other possibility (Fe3+), this compound is sometimes called ferrous oxide. For the compound SnO2, the tin ion is Sn4+ (balancing out the 4− charge on the two O2− anions), and because this is a higher oxidation state than the alternative (Sn2+), this compound is termed stannic oxide.
Some ionic compounds contain polyatomic ions, which are charged entities containing two or more covalently bonded types of atoms. It is important to know the names of common polyatomic ions; these include:
ammonium (NH4+)
nitrite (NO2−)
nitrate (NO3−)
sulfite (SO3^2−)
sulfate (SO4^2−)
hydrogen sulfate (bisulfate) (HSO4−)
hydroxide (OH−)
cyanide (CN−)
phosphate (PO4^3−)
hydrogen phosphate (HPO4^2−)
dihydrogen phosphate (H2PO4−)
carbonate (CO3^2−)
hydrogen carbonate (bicarbonate) (HCO3−)
hypochlorite (ClO−)
chlorite (ClO2−)
chlorate (ClO3−)
perchlorate (ClO4−)
acetate (CH3COO−)
permanganate (MnO4−)
dichromate (Cr2O7^2−)
chromate (CrO4^2−)
peroxide (O2^2−)
superoxide (O2−)
oxalate (C2O4^2−)
hydrogen oxalate (HC2O4−)
The formula Na2SO3 denotes that the cation is sodium, or Na+, and that the anion is the sulfite ion (SO3^2−). Therefore, this compound is named sodium sulfite. If the given formula is Ca(OH)2, it can be seen that OH− is the hydroxide ion. Since the charge on the calcium ion is 2+, it makes sense there must be two OH− ions to balance the charge. Therefore, the name of the compound is calcium hydroxide. If one is asked to write the formula for copper(I) chromate, the Roman numeral indicates that the copper ion is Cu+ and one can identify that the compound contains the chromate ion (CrO4^2−). Two of the 1+ copper ions are needed to balance the charge of one 2− chromate ion, so the formula is Cu2CrO4.
Type-III binary compounds
Type-III binary compounds are bonded covalently. Covalent bonding occurs between nonmetal elements. Compounds bonded covalently are also known as molecules. For the compound, the first element is named first, using its full elemental name. The second element is named as if it were an anion (base name of the element + -ide suffix). Then, prefixes are used to indicate the numbers of each atom present: these prefixes are mono- (one), di- (two), tri- (three), tetra- (four), penta- (five), hexa- (six), hepta- (seven), octa- (eight), nona- (nine), and deca- (ten). The prefix mono- is never used with the first element. Thus, NCl3 is termed nitrogen trichloride, BF3 is termed boron trifluoride, and P2O5 is termed diphosphorus pentoxide (although the a of the prefix penta- should actually not be omitted before a vowel: the IUPAC Red Book 2005 page 69 states, "The final vowels of multiplicative prefixes should not be elided (although "monoxide", rather than "monooxide", is an allowed exception because of general usage).").
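A minimal sketch of the prefix rules described above, assuming the caller supplies the element name, the "-ide" form of the second element, and the atom counts (vowel elision such as "pentoxide" for "pentaoxide" is deliberately not handled):

```python
# Illustrative only; not an implementation of the full IUPAC rules.
PREFIXES = {1: "mono", 2: "di", 3: "tri", 4: "tetra", 5: "penta",
            6: "hexa", 7: "hepta", 8: "octa", 9: "nona", 10: "deca"}

def covalent_name(first_element, first_count, second_ide, second_count):
    # The prefix mono- is never used with the first element.
    head = first_element if first_count == 1 else PREFIXES[first_count] + first_element
    return f"{head} {PREFIXES[second_count]}{second_ide}"

print(covalent_name("nitrogen", 1, "chloride", 3))   # nitrogen trichloride (NCl3)
print(covalent_name("phosphorus", 2, "oxide", 5))    # diphosphorus pentaoxide (P2O5)
```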
Carbon dioxide is written CO2; sulfur tetrafluoride is written SF4. A few compounds, however, have common names that prevail. H2O, for example, is usually termed water rather than dihydrogen monoxide, and NH3 is preferentially termed ammonia rather than nitrogen trihydride.
Substitutive nomenclature
This naming method generally follows established IUPAC organic nomenclature. Hydrides of the main group elements (groups 13–17) are given the base name ending with -ane, e.g. borane (BH3), oxidane (H2O), phosphane (PH3) (although the name phosphine is also in common use, it is not recommended by IUPAC). The compound PCl3 would thus be named substitutively as trichlorophosphane (with chlorine "substituting"). However, not all such names (or stems) are derived from the element name. For example, NH3 is termed "azane".
Additive nomenclature
This method of naming has been developed principally for coordination compounds, although it can be applied more widely. An example of its application is [Co(NH3)5Cl]Cl2, pentaamminechloridocobalt(III) chloride.
Ligands, too, have a special naming convention. Whereas chloride becomes the prefix chloro- in substitutive naming, for a ligand it becomes chlorido-.
| Physical sciences | Nomenclature | Chemistry |
3142619 | https://en.wikipedia.org/wiki/Transjakarta | Transjakarta | Transjakarta (stylised in all-lowercase, often erroneously called Busway, sometimes shortened as TJ and branded as TiJe) or Jakarta BRT is a bus rapid transit (BRT) system in Jakarta, Indonesia. The first BRT system in Southeast Asia, it commenced operations on 15 January 2004 to provide a fast public transport system to help reduce rush hour traffic. The system is considered as the first revolutionary public transit mode in the capital city of Indonesia. The buses run in dedicated lanes (busways), and ticket prices are subsidised by the regional government. Transjakarta has the world's longest BRT system (251.2 km in length), which operates about 4,300 buses. Transjakarta aims to have 50 percent of its fleet be electric buses by 2027. By 2030, the aim is for the entire Transjakarta ecosystem to use electric buses. As of November 2023, it serves an average of 1.134 million passengers daily.
The Transjakarta system is operated by the municipally owned company PT Transportasi Jakarta. However, most of its fleet is run by various partner operators rather than by the company itself.
History
Transjakarta was conceived to provide a fast, comfortable, and affordable mass transportation system. The proposal for a BRT system in Jakarta emerged in 2001. The Governor of Jakarta at the time, Sutiyoso, proposed four mass public transportation modes for Jakarta:
Mass-rapid transit (MRT) — with its first line construction on phase 1 began in late 2013 and opened in March 2019.
Monorail — construction began in 2004 but was halted shortly thereafter. Construction was expected to resume in 2013, but the project was permanently cancelled two years later.
Bus rapid transit (BRT)
Water transport (waterway).
The MRT had a larger passenger capacity and shorter travel times than the other proposals, but it required large foreign investments. At the time, Indonesia had lost investor confidence due to concerns over its unstable domestic situation in the early 2000s, so MRT construction could not yet be realized. Among the four proposals, bus rapid transit was considered the most likely to be realized in a short time because it did not require foreign investment.
The Institute for Transportation and Development Policy (ITDP) was an important party accompanying the BRT planning process. The initial concept was created by PT Pamintori Cipta, a transportation consultant which has frequently worked with the Jakarta Office of Transportation (Dinas Perhubungan Provinsi DKI Jakarta). Apart from the private sector, several other parties also supported the project, including the United States Agency for International Development (USAID) and the University of Indonesia's Center for Transportation Studies (UI–CTS).
The buses were given lanes restricted to other traffic and separated by concrete blocks on the streets that became part of the busway routes. The first Transjakarta line opened to the public on 15 January 2004. It was free for the first two weeks, after which commercial operations started on 1 February 2004.
At present, Transjakarta has 13 primary routes and ten cross-corridor routes. In addition, there are about 200 "feeder" routes that run beyond the exclusive busway corridors to serve satellite cities in Greater Jakarta. The number of Transjakarta buses has also increased dramatically, from 605 buses in 2015 to 4,300 in 2020. The fare has remained Rp 3,500 (27 US cents) per passenger since operations began. The service set a record in 2018 when it carried 730,000 passengers per day, a significant jump from 331,000 per day in 2015. About 189.8 million passengers used Transjakarta in 2018, and the operator targeted serving one million passengers daily. In November 2020, Transjakarta won the 2021 Sustainable Transport Award.
As of September 2019, Transjakarta is testing electric buses, with Bundaran Senayan–Monas as its first route. Transjakarta has undertaken an ambitious plan to expand its electric bus (e-bus) fleet to 10,000 units over the decade and to have all of its buses electric-powered by 2030.
Operations
Characteristics
The characteristics of Transjakarta listed in an Asian Development Bank study are:
Closed Trunk System without a Feeder System
Elevated Platform for Rapid Boarding and Alighting
Public Sector Bus Procurement and Private Sector Bus Operation
Operating at 450,000 passengers/day (2016)
Routes
15 corridors were initially planned, 14 of which are currently operational. Corridors 1 to 12 and Corridor 14 operate at ground level, mostly separated from mixed traffic by roadblocks. Corridor 13 is the first and only corridor to feature a dedicated elevated track exclusively available for Transjakarta buses. The track is also shared with Corridor 13's branches, consisting of 13B and L13E, alongside Route 6V.
Other than the 14 main BRT corridors, Transjakarta operates 17 direct cross-corridor BRT routes, 57 feeder (Non-BRT) routes split into two categories: partially integrated (stops at a mix of BRT shelters and pedestrian bus stops) and fully disintegrated (stops at just pedestrian bus stops with no BRT integration), alongside 11 suburban routes to satellite cities, 14 routes serving low-cost apartments, 96 micro bus routes branded as Mikrotrans, 4 Bus Wisata (city travel) routes, and 11 Royaltrans routes. Non-BRT and suburban routes fully disintegrated from BRT system usually run Metrotrans-branded buses, while those partially integrated into BRT carries Minitrans branding for smaller buses or generic Transjakarta branding for standard BRT buses.
Timeline of routes
15 January 2004: Corridor 1, (Blok M to Kota) (soft launch)
1 February 2004: Corridor 1, (Blok M to Kota) (commercial service)
15 January 2006: Corridor 2, (Pulo Gadung to Harmoni) and Corridor 3, (Kalideres to Pasar Baru) became operational.
27 January 2007: Corridor 4, (Pulo Gadung to Dukuh Atas 2 [now Galunggung]), Corridor 5, (Kampung Melayu to Ancol), Corridor 6, (Dukuh Atas 2 [now Galunggung] to Ragunan) and Corridor 7, (Kampung Rambutan to Kampung Melayu) became operational.
21 February 2009: Corridor 8, (Lebak Bulus to Harmoni) became operational.
31 December 2010: Corridor 9, (Pluit to Pinang Ranti) and Corridor 10, (PGC 1 [now Cililitan] to Tanjung Priok) became operational.
18 March 2011: Corridor 9 became the first corridor to serve until 11.00 pm, followed by Corridor 1, with a transit point with Corridor 9 at the Semanggi shelter. The night service, however, stops only at certain shelters.
20 May 2011: Corridor 2 and Corridor 3 began serving until 11.00 pm, but only nine of the 22 shelters on Corridor 2 and 9 of the 13 shelters on Corridor 3 remain open during the extended hours.
1 July 2011: Corridors 4 to 7 began their late-night service, leaving only Corridor 8 without a late-night service.
28 September 2011: Three feeder bus routes were launched: Route 1 from the West Jakarta Municipal Office to Daan Mogot, Route 2 from Tanah Abang to Medan Merdeka Selatan, and Route 3 from SCBD to Senayan. The fare was Rp 6,500 ($0.72), which covered tickets for both the feeder service and Transjakarta buses. However, the feeder routes were eventually shut down because of the low number of riders.
13 December 2011: Transjakarta implemented a policy of segregating male and female passengers, following the example set by the commuter rail network. The designated women-only areas have been established between the middle door and driver cabins.
28 December 2011: Corridor 11 (Kampung Melayu to Pulo Gebang) became operational.
14 February 2013: Corridor 12 (Pluit to Tanjung Priok) became operational.
19 May 2014: The extension of Corridor 2 (Pulo Gadung to Harapan Indah) became operational.
1 June 2014: Transjakarta introduced two new services called AMARI (Angkutan Malam Hari) and ANDINI (Angkutan Dini Hari), which together serve from 10.00 pm to 5.00 am the next day, making Transjakarta effectively operational 24 hours a day. The late-night service served only Corridor 1, Corridor 3, and Corridor 9.
May 2015: The AMARI service was expanded to serve four additional corridors: Corridor 2, Corridor 5, Corridor 7, and Corridor 10.
16 August 2017: Corridor 13 (Ciledug to Tendean [now Tegal Mampang]) became operational.
March 2020: Due to the Covid-19 pandemic, the late-night AMARI service was cut short to serve only until 12.00 am.
12 September 2022: Approaching the end of the Covid-19 pandemic, the late-night AMARI service was re-extended to serve until 05:00 am and began serving all the main BRT corridors (excluding cross-corridor BRT routes and all Non-BRT services). The late-night AMARI service has an "M" prefix before the corridor number, so the late-night Corridor 1 service is coded M1 and so on. As of April 2024, most of the AMARI corridors serve the same route and stations as their respective daytime main corridors. Exceptions apply to M12, which only serves from Penjaringan to Sunter Kelapa Gading, and M13, which terminates at Puri Beta 2 (Ciledug station closes at night).
3 March 2023: Due to the replacement of a number of Corridor 1 stations by temporary shelters which affects Harmoni as an interchange station, Corridor 3 was modified and interlined with Corridor 1 from Kebon Sirih to Bundaran HI stations (serving Kalideres to Bundaran HI), thus no longer serving Pecenongan, Juanda, and Pasar Baru stations. Corridor 8 was extended to serve the three stations (serving Lebak Bulus to Pasar Baru) and divided into two variants: the main "via Tomang" route, which is a mix of original Corridor 8 route and now-defunct cross-corridor Route 8A; and the alternative "via Cideng" route which mimics the original Corridor 8 by interlining with Corridor 3 from Petojo to Damai.
29 May 2023: The adjusted Corridor 3 was cut short again, now terminating at Monumen Nasional station and no longer serving Kebon Sirih, MH Thamrin, and Bundaran HI stations.
11 November 2023: Corridor 14 (Jakarta International Stadium to Senen) became operational as a BRT corridor. It was previously operated temporarily as a Non-BRT feeder route from 1 March 2022 until 10 November 2023. As a main BRT corridor, it became a 24-hour route with late-night AMARI service.
1 January 2025: AMARI services were rebranded with "M" prefix removed.
Fleets
Current fleets
Each bus is constructed with passenger safety in mind. For example, the body frame is constructed using Galvanyl (Zn–Fe Alloy), a strong and rust-resistant metal. There are also eight or ten glass-shattering hammers mounted on some of the window frames, and three emergency doors for fast evacuation during an emergency. There are also two fire extinguishers at the front and back of the buses.
A typical Transjakarta bus is painted with a blue and white livery with the Transjakarta logo. Transjakarta buses were previously mandated to use compressed natural gas (CNG) and prohibited from using diesel fuel, but regulations have since been revised to permit diesel-powered buses once again due to efficiency issues and a shortage of CNG refueling stations. To facilitate passenger ingress and egress, buses are outfitted with two doors on either side, while a partition segregates the driver from passengers to enable the former to focus more intently on operating the vehicle.
The capacity of each bus is 85, 100, or 120 passengers. Single Mercedes-Benz and Hino buses can carry about 85 passengers. Scania, Mercedes-Benz and Volvo Maxi buses can carry 100 passengers, and 120 can be carried by a standard articulated bus. Transjakarta operates some Chinese-made Zhongtong and Swedish-made Scania articulated buses on long corridors and on those passing mostly straight roads, in a mix with non-articulated buses. Articulated buses may also be used for some high-demand cross-corridor BRT routes.
Passengers can only board Transjakarta's BRT buses from designated shelters because of the high passenger doors (about a meter and a half from the ground) equipped with automated swing and slide mechanisms, which are controlled by the driver. The slide mechanism has been replaced by swing doors on all new buses, and full-height acrylic glass barriers are installed near the sliding doors. Low street-level doors are used for fully-disintegrated Non-BRT routes (that only stop at pedestrian bus stops), with a driver's door on the front-left side of the bus for big buses and a pair of hydraulic folding doors for medium buses.
Transjakarta buses have electronic boards and speakers that announce the name of shelters in Indonesian and English, bi-directional radio transceivers for communication between drivers and control centers, at least four mandatory CCTV cameras per bus, and automatic air freshener dispensers to keep the air fresh during rush hours. The announcer system, officially mentioned as On-Board-Unit (OBU), is synced to the bus position on GPS and is automatically triggered by checkpoints along the bus route.
Transjakarta offers Royaltrans as a premium service, which provides passengers with premium seating, extra comfort, free Wi-Fi, USB charging ports, an onboard entertainment TV, and no standing allowed (the bus does not take new passengers when all the seats are occupied). Payment is made through electronic tapping equipment on board the buses, and the service is not integrated into the main BRT system. Royaltrans is not subsidised by Jakarta Municipal Government as it primarily serves connecting satellite cities. Transjakarta also operates Metrotrans, which uses low entry buses, serves Non-BRT routes without integration to the BRT service, and stops at pedestrian bus stops. Some Non-BRT routes, especially the ones partially integrated into the BRT system, are also served by standard BRT buses and a smaller version called Minitrans, although passengers can only board or alight at pedestrian bus stops through the front door near the driver, with elevated BRT doors for passengers at BRT shelters. Transjakarta operates two free-of-charge services called Mikrotrans, consisting of microbuses operated by various microbus cooperations, and Bus Wisata (intercity travel bus) consisting of double-decker buses circling around significant roads in Jakarta. Both services are not integrated into the BRT system. Although free of charge, Mikrotrans still requires its passengers to tap in when boarding and tap out when alighting, but Bus Wisata does not.
In order to promote gender equality, Transjakarta is aiming to recruit more female drivers, targeting 30% of the total. As of 21 April 2016, Transjakarta introduced female-only buses for Corridor 1, which are operated by female drivers and onboard officers and painted pink to differentiate them from regular buses. Some routes also offer disabled-friendly buses, with plans to acquire an additional 300 such buses by 2017 to serve 15-20 routes.
Future fleet
PT. Mayasari Bakti Scania K250IB 4x2 Euro III
Kopami Jaya (Koperasi Pengemudi Angkutan Mikrobus DKI Jakarta) Isuzu NQR71 as Mini Trans
PT. Metro Mini Isuzu NQR71 as Mini Trans
Kopaja (Koperasi Angkutan Jakarta) Mitsubishi FE 84G BC as Mini Trans
Electric Vehicle (EV) MAB MD12E NF Electric bus, BYD K9 Electric bus, BYD C6, Higer Azure KLQ6125GEV, and INKA E-Inobus
Note: Transjakarta stated that it will not buy any electric buses. Instead, electric buses will be operated by operators under the rupiah-per-kilometer scheme. Currently, all electric bus models listed are either under trial or are due to commence trials in the near future.
Retired fleet
The Mercedes-Benz OH and Hino RG air-conditioned buses operated in Corridor 1 are painted red and yellow, with a picture of a young brahminy kite, which looks similar to a bald eagle grasping a tree branch with three salaks on it. The buses use a special fuel that is a mix of diesel and biodiesel. For Corridors 2 (bus colours: blue and white) and 3 (bus colours: yellow and red), the buses are CNG-fueled Daewoo buses imported from South Korea. Corridors 4, 5 and 6 used grey Daewoo and Hyundai CNG buses, with Komodo and Huanghai articulated buses dedicated to Corridor 5. Grey Hino CNG buses are used for Corridors 7 and 8. Corridors 9 and 10 used red-coloured Hyundai and Komodo articulated buses, whilst Corridor 11 used red Inobus articulated buses. Corridor 12 used to use red-coloured Ankai and Inobus buses as well. Due to various coach builders being involved and design tweaks applied over time, the exterior and interior appearance, quality, and comfort vary between buses operating in the same corridor. Seats in old buses face the aisle to optimise passengers' movement during rush hours. Older buses were equipped with folding or hydraulic sliding doors, while newer units were equipped with swing doors.
In August 2011, Transjakarta operator installed cameras on one bus for a trial period. The plan is to install four cameras on each bus gradually in efforts to improve services such as to inform passengers waiting for buses about how crowded approaching buses are, and to prevent sexual harassment.
Shelters
Transjakarta shelters (officially mentioned as a BRT station or BRT shelter) are distinguished from typical bus stops as they are often located in the middle of the road and require passengers to access them via elevated bridges, although some shelters lack this and are only accessed by pelican road crossing. Some of the shelters are equipped with escalators or lifts, and are designed to be seamlessly integrated into nearby buildings or integrated train stations. For instance, the Tosari ICBC stop used to be directly connected to the UOB Plaza, but has since been replaced with a road crossing. Similarly, the Blok M stop provides stair access to the nearby Blok M Mall. Accessing the shelter requires passengers to tap an electronic payment card (known as tap in), which they have to do again to exit the arrival shelter (known as tap out).
Older Transjakarta shelters are primarily constructed from aluminium, steel, glass, and concrete. The walls are made of aluminium and glass covers, with tread plates forming the floors. To ensure proper air ventilation, fins are installed on the aluminium parts of the shelters. Concrete makes up the supporting pillars of the shelters, which are usually painted blue. However, newer shelters built since the revitalisation project in 2022 dispense with the glass and aluminium and instead have concrete walls about half the height of a typical person, with the space above left open so that air can move and circulate freely. The floors are also made of concrete, and all the pillars and covers are coloured creamy white instead of blue. Exceptions to this design apply to some shelters which are part of a larger building (such as the CSW shelter, part of its TOD building, and the Pulo Gebang shelter, part of the Pulo Gebang Bus Terminal building); as such, their design resembles the building they are part of. Newer shelters may also feature platform screen doors to ensure passenger safety, although their opening and closing are not synchronised with the bus doors; instead, a door opens whenever a bus is detected in front of it.
Some of the elevated bridge ramps connecting the shelters have gentle slopes to accommodate disabled passengers, although some require passengers to walk a relatively long way up the ramps before doubling back to reach the boarding shelters. The floors of the bridges are typically made of tread plates, although some newer ones use concrete. However, noise is a problem for tread plates due to the movement of passengers, and some tread plates may become slippery during the rainy season. Older shelters usually lack sanitary facilities, although newer ones include large, disabled-friendly restrooms and prayer rooms.
Other facilities in a Transjakarta shelter include fans, top-up vending machines (although some older and smaller shelters lack these), and wayfinding boards showing which bus stops at which gate. All stations are also equipped with passenger information system (PIS) displays for each platform direction showing the estimated time of arrival and the number of upcoming and arriving buses, although their accuracy is questionable as they do not account for traffic jams. Some shelters have two stories, with the upper story serving another corridor going through a flyover (such as Flyover Jatinegara, a transit point of Corridors 10 and 11) or acting as a commercial area of food chains and minimarkets (such as Bundaran HI ASTRA, MH Thamrin, Tosari, and Dukuh Atas).
Based on the routes they serve, there are three types of Transjakarta shelters. All shelters serve at least one BRT corridor, and may also serve some cross-corridor BRT routes and partially-integrated Non-BRT routes.
Standard BRT station. These stations serve one main BRT corridor. Most stations fall into this category.
Standard interchange BRT station. These stations serve two or more main BRT corridors in one station. Examples include Monumen Nasional, Pulo Gadung, Kampung Melayu, Cawang Sentral, and some others.
Pair-interchange BRT stations. Where two or more stations are located near each other, a skybridge inside the paid area connects them. The paired stations serve as an interchange, allowing passengers to transfer between corridors by crossing the bridge without exiting the paid area. Examples include the pairs Grogol and Grogol Reformasi, Dukuh Atas and Galunggung, and Bendungan Hilir and Semanggi, among others.
Besides BRT shelters, Non-BRT routes also stop at regular kerbside bus stops. All routes serving these stops, whether partially-integrated Non-BRT, fully-disintegrated Non-BRT, Mikrotrans, or Bus Wisata, pick up and drop off passengers only at the designated stops rather than anywhere along the road. Designated bus stops vary greatly in form, from a sheltered structure with a canopy or roof to a simple "STOP" sign with a bus icon. An exception is the CSW 2 bus stop, part of the CSW-ASEAN TOD, which is designed to resemble a BRT station but with street-level platform screen doors, allowing easy transfer to BRT or other Non-BRT routes. Buses serving these stops call at every stop on their line, regardless of whether passengers are waiting.
Initially, shelters were open from 05:00 to 22:00, although opening hours could be extended if passengers were still waiting at closing time. Since midnight bus services were launched, a number of shelters have operated 24 hours a day, and currently all shelters (except Corridor 13's Ciledug, which still closes at 22:00) operate round the clock. Shelters often become extremely overcrowded because of long and sometimes unpredictable intervals between buses. According to a 2011 report by the Indonesian Consumers Protection Foundation, the most common passenger complaint about Transjakarta was the long wait for buses at some of the main shelters. The issue resurfaced during the revitalisation project and the closure of Harmoni in 2022, with customers complaining that Monumen Nasional, as a transit point, did not have enough doors to serve so many routes and that the shelters doubling as alternatives to the ones under revitalisation were too small.
The large Harmoni Central Busway (HCB) shelter on Jalan Gadjah Mada, Central Jakarta, is built over the Ciliwung River. It is a transit point between Corridors 1, 2, 3, 5C, 5H, 7F, 8, 8A, 9B and 10H. This 500-person shelter has 18 bus bays. Although many trees had to be chopped down during its construction, an old banyan tree was an exception because it was considered rich in historical value. However, in October 2006 this tree was vandalised by people from the Pemuda Persatuan Islam religious group. Their motive was to show that the tree does not possess supernatural qualities.
On 15 April 2022, revitalisation of 11 bus shelters began, aiming to improve passenger service, expand public spaces for tourism, and accelerate integration with other public transport services. The Tosari and Bundaran HI ASTRA shelters were rebuilt as an iconic pair of "twin cruise ships anchored at the Selamat Datang Monument", with the upper floor serving as a commercial area and photo balcony. The project is expected to rebuild 45 stations across the city and is due to be finished by mid-2024. The revitalisation involves full reconstruction of the shelters in the new style, with half-height cream-coloured concrete walls, concrete floors, and the inclusion of sanitary facilities.
Starting in March 2023, multiple stations along Corridor 1 were temporarily closed and replaced by temporary stations to make room for Phase 2 of the Jakarta MRT project, including the network's largest transit point, Harmoni Central Busway. These temporary stations are small, and some consist of two separate buildings for opposing directions that require passengers to tap and pay again to cross between them, making them unsuitable as transit points; as such they serve only Corridor 1. This brought notable changes to routes that previously stopped or terminated at Harmoni: the affected corridors and routes, notably Corridors 2, 3, and 8, were rerouted, and Monumen Nasional became the new temporary transit point. Some cross-corridor routes deemed no longer necessary, such as 8A and 12M, were scrapped or had their operations curtailed.
In December 2023, Transjakarta announced that it would rename many of its BRT shelters. The changes were revealed and took effect in January 2024, affecting every main corridor and 121 shelters. The company cited two reasons: to strip shelter names of commercial and copyrighted names owned by third parties, and to allow shelter names to be commercialised through naming-rights deals, similar to those of MRT Jakarta and its stations. One example is the Bundaran HI shelter, which carries ASTRA branding (referred to in service as Bundaran HI ASTRA) as part of naming rights granted to Astra International. Another reason was to rename some shelters to match the surrounding areas or integrated railway stations, such as those in Kuningan and Cawang with their integrated LRT Jabodebek stations. In 2024, the second naming right was granted to the municipally owned Bank DKI for the Gelora Bung Karno shelter, rebranded as "Senayan BANK DKI".
Ticketing and fares
Since the system opened, a Transjakarta ticket has cost a flat rate of Rp 2,000 per trip at concessional times (05:00 to 07:00) and Rp 3,500 (about 24 US cents) per trip at all other times. The fare applies to all BRT and Non-BRT services except Royaltrans, Mikrotrans, and the city tour (Bus Wisata) services. Royaltrans costs Rp 20,000 per trip (Rp 35,000 on some routes), while Mikrotrans and Bus Wisata are free to ride, although Mikrotrans passengers must still tap in and out. One trip is defined as the period from one tap-in to one tap-out.
Passengers who wish to change direction or transfer to other corridors do not need to pay again, provided they do not exit the paid area and complete the whole journey in one trip. Based on the definition of "one trip", this rule is subject to the following conditions (a simplified sketch of the fare logic follows the list):
Transfer must be done at the BRT station, either between two BRT routes, two partially-integrated Non-BRT routes, or a pair of both, to be considered in one trip. Transfers that are done in bus stops or require exiting BRT station will require tapping out and paying again. All transfers to, from, or between fully-disintegrated Non-BRT routes require paying again.
For terminus shelters, the departure and arrival platforms must be connected in one paid area, thus passengers do not need to tap out and pay again and can continue their journey in one trip. Some terminus shelters, such as Kalideres, require tapping again to cross from arrival to departure platforms for passengers wishing to transfer.
For stations that consist of a separate building, with a separate paid area, for each direction (most of them on Corridor 9), only transfers between routes stopping at the same building count as one trip. Changing direction, or transferring to a route that stops at the opposite building, requires exiting the paid area and paying again to cross over.
For stations that are paired with another station by a skybridge located inside paid area, transfers between the two paired stations must be done by crossing the skybridge so that passengers do not exit the paid area and can continue their journey in one trip.
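The tap-in/tap-out rule above can be summarised in a few lines of code. The sketch below is illustrative only: the fare values, function name and the single boolean flag are simplifying assumptions, not Transjakarta's actual fare engine.

```python
# Illustrative sketch only: a simplified model of the flat-fare, tap-in/tap-out
# rule described above. Values and names are hypothetical.

FLAT_FARE = 3_500          # rupiah, regular hours
CONCESSION_FARE = 2_000    # rupiah, 05:00-07:00

def trip_fare(tap_in_hour: int, exited_paid_area_mid_journey: bool) -> int:
    """Charge one flat fare per trip; leaving the paid area ends the trip."""
    fare = CONCESSION_FARE if 5 <= tap_in_hour < 7 else FLAT_FARE
    # Transfers inside the paid area (BRT shelter or skybridge) are free;
    # exiting and re-entering counts as a new trip and is charged again.
    return fare * (2 if exited_paid_area_mid_journey else 1)

# A passenger transferring via a paid-area skybridge at 08:00 pays once:
print(trip_fare(8, exited_paid_area_mid_journey=False))   # 3500
# The same journey broken by exiting the paid area is charged twice:
print(trip_fare(8, exited_paid_area_mid_journey=True))    # 7000
```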
Up to 2015, passengers could purchase a single-journey paper ticket at the ticket booth in the shelter. In 2013, Transjakarta introduced prepaid cards (e-tickets) from BRI BRizzi, BCA Flazz, BNI TapCash, Mandiri e-Money, Bank DKI JakCard, and Bank Mega MegaCash. The prepaid cards can be purchased and topped up at any ticket booth throughout the system or at the issuing bank's ATMs. An e-ticket is priced at Rp 40,000: Rp 20,000 for the card itself and a balance of Rp 20,000. As of June 2014, the prepaid cards, except Bank DKI JakCard and Bank Mega MegaCash, are also valid as tickets on the Jabodetabek commuter train system, easing the integration plan between the BRT and the commuter trains. In April and May 2014, Transjakarta management made e-tickets compulsory at several termini in the system, following news that the BCA Flazz card could also be used on the commuter trains. By mid-October 2014, 56% of passengers were using e-tickets, and since 21 February 2015 e-tickets have been compulsory on all Transjakarta corridors and shelters. A trial of the tap-out system started on 17 August 2016 in Corridor 1 (Blok M – Kota), and a similar trial started on 9 September 2016 in Corridor 2. The system is intended to control the flow of people entering and leaving the shelters, discourage illegal entry and exit, and encourage sales and usage of e-tickets. By October 2016, the system had been implemented in all Transjakarta corridors.
Starting on 24 August 2015, students holding the Jakarta Smart Card (Kartu Jakarta Pintar, KJP) can use it as an e-ticket for free bus rides. The TJ Card, introduced in January 2018, provides free fares for its holders and is available to seniors over 60, residents of the Thousand Islands Regency, disabled persons, low-income households, teachers, mosquito controllers and mosque caretakers, as well as members of the Indonesian National Armed Forces and the police.
In the early days of the feeder (Non-BRT) routes, passengers could pay cash to the bus conductor or use a prepaid card issued by a specific bank. The accepted card varied by route (for example, Route 6H mainly accepted BCA Flazz cards, while Route 3E mainly accepted BNI TapCash cards), and the arrangement was criticised as highly unreliable: the card-reading device was sometimes unavailable, or differed from the bank device usually issued on that route. This method of payment was gradually phased out in favour of the Tap-On-Bus (TOB) system, which works much like the e-ticket payment system at shelters and accepts all prepaid bank-issued cards valid at bus shelters; the only difference is that payment is made on board the Non-BRT bus instead of at a shelter. By 2019, all buses assigned to Routes 1H, 1N, 1R, 4F and 5F had TOB installed and used it for all payments.
As of 2024, all Non-BRT routes use the TOB system on all buses. Passengers boarding Non-BRT buses are required to tap in when boarding and tap out when alighting, both on board the bus. For Non-BRT routes that are partially integrated into the BRT system, if the passenger boards from or alights at a BRT shelter, the corresponding tap is made at the shelter while the other tap is made on the bus; if both boarding and alighting take place at BRT shelters, both taps are made at the shelters. Passengers can therefore transfer easily between BRT and integrated Non-BRT routes at a BRT shelter without tapping again. The most common criticism concerns the inconsistent fare-deduction mechanism: some TOB machines deduct the fare at tap-in while others, and all BRT shelters, deduct it at tap-out, which has occasionally caused double-deduction errors. The problem has been largely mitigated and is now rare, but it still occurs from time to time.
On 13 October 2021, KAI Commuter started trialling its Multi Trip Card as a payment card for MRT Jakarta, Transjakarta and LRT Jakarta, as part of efforts to integrate ticketing across Jakarta's public transport. However, the Multi Trip Card only works at BRT shelters and cannot be used with the TOB machines on Non-BRT buses.
Bus tracking
In 2017, Transjakarta started allowing its buses to be tracked in Trafi app. Passengers could see the location of the bus in real time in the app, thus minimizing wait time and allowing them to know when the bus was going to arrive.
On 2 October 2020, Transjakarta launched Tije, an app that allowed passengers to buy tickets using QR codes. It was launched to reduce COVID-19 transmission by reducing interaction between passengers and ticket offices. The QR-based tickets, however, could only be used in BRT shelters for BRT buses and could only be paid for with AstraPay, which Transjakarta had a contract with. The app also allowed the users to see bus arrival times through live tracking similar to that in Trafi, although the function only worked in BRT shelters and only tracked BRT buses.
In July 2022, Trafi announced its decision to cease operations in Indonesia, and the bus tracking feature went out of service. In response, Transjakarta began trialling a bus tracking feature in Moovit in February 2023, allowing passengers to track its buses in that app. The agreement was short-lived, however: Transjakarta terminated the contract in January 2024, leaving the Tije app as the sole platform for bus tracking. The Tije app was heavily criticised because many of its functions, including bus tracking, did not work reliably, with most buses failing to appear even when the app was used at a BRT shelter.
In May 2024, Transjakarta began trialling bus tracking in Google Maps; this time all buses, whether BRT, Non-BRT, or Mikrotrans, were made trackable. The trial lasted a month before the tracking feature disappeared in June. On 18 July, Transjakarta launched a new app called TJ: Transjakarta to replace the Tije app, which was to be retired in August. The new app provides live tracking of all BRT and Non-BRT buses, including Royaltrans, Mikrotrans, and Bus Wisata services, along with the same features as the outgoing Tije app in a new design. Bus tracking also returned to, and remains available in, Google Maps.
Passengers
During rush hours, upper- and middle-class commuters (one of Transjakarta's main target groups) usually prefer private cars or taxis to avoid the inconvenience of overcrowded Transjakarta buses, even though this means putting up with traffic jams instead. Many passengers are therefore lower-middle-class people who previously used less comfortable or more expensive commercial buses. This situation is at odds with one of Transjakarta's objectives, which was to reduce rush-hour congestion by persuading private car owners to switch to comfortable public transport.
There is a special programme for student groups called Transjakarta Goes to School, in which participants are assigned a dedicated bus. The aim is to teach students to queue, behave courteously, and prefer public transport over private vehicles. The municipal government has been trying to encourage residents to shift from private vehicles to public transport, especially Transjakarta, and several regulations have been put in place to restrict private cars on the streets. By August 2018, the odd–even traffic policy had increased Transjakarta ridership by 30,000 passengers.
Issues and accidents
Several design and operational problems have been identified. Despite the exclusive bus lanes, unauthorised vehicles commonly use them illegally in an attempt to bypass traffic jams. Depot maintenance shops and the special gas stations (most buses run on compressed natural gas, CNG) often have long queues of buses, restricting the number available for service. The CNG-powered buses have also suffered from higher fuel consumption than expected (one litre per 1.3 km compared with the specified 2.1 km) and from high oil and moisture content requiring extra maintenance. Other problems identified were a lack of feeder bus services, inadequate transfer information and facilities, and a shortage of articulated buses. A 2010 survey showed that 75% of passengers had transferred from medium or micro-buses to Transjakarta, and it was estimated that if direct service operations were implemented (i.e., multiple stopping points at some stations with bypass lanes and some services continuing beyond the trunk corridors) patronage would increase by 50%. A feeder bus service called APTB was introduced in 2012; these feeder routes stop at kerbside bus stops like a regular city bus service. As of 2024, there are 57 feeder routes, officially referred to as Non-BRT routes, some of which are partially integrated into the BRT system (stopping at a limited number of BRT shelters) while others are fully disintegrated (stopping only at kerbside bus stops, some located near BRT shelters but still requiring passengers to tap again to transfer).
In May 2013, it was reported that the system was losing passengers due to unpredictable service frequency, worsening travel times, and poor maintenance of the infrastructure and vehicles. The problem of excluding private vehicles from busways was still ongoing. By November 2013, after a campaign to "sterilise" the lanes improved travel times, reports indicate patronage had increased by 20,000 per day up to between 330,000 and 355,000.
From January to July 2010, there were 237 accidents involving Transjakarta buses, resulting in 57 injuries and eight deaths. Accidents occurred due to pedestrians crossing the busway and cars making U-turns. In 2011, in an effort to stop non-Transjakarta vehicles using the bus lanes, the Jakarta Police Chief suggested that Transjakarta buses should run against the direction of traffic flow. Usually, non-Transjakarta vehicles used the busway lanes during rush hours.
On 12 January 2012, a policeman from the Indonesian Police Headquarters, who had been hired by Securicor, fired his gun near the ear of a Transjakarta officer after threatening to kill him. The policeman was angry that the officer had stopped the Securicor car from entering the busway lane, which only Transjakarta buses, ambulances, and fire vehicles may use. A police spokesman said the policeman would face criminal charges or disciplinary sanctions.
Hijacking
On 12 March 2012, four Transjakarta buses were hijacked by alleged university students at the Medan Merdeka Selatan street. The buses were driven to the front of the Universitas Kristen Indonesia (Christian University of Indonesia) campus. Three drivers were able to escape from their buses, but one driver was prevented from leaving and forced to drive the hijackers to their destination. Fire extinguishers, glass-breaking hammers and drivers' jackets were also stolen from the buses.
2013 corruption case
In 2014, a corruption investigation began into a series of accidents linked to the poor condition of new vehicles acquired through fraudulent procurement worth more than one trillion rupiah. The probe named the head of the Jakarta Transport Department, Udar Pristono, as a suspect. Pristono argued that he had merely been working under the supervision of then-governor Joko Widodo on the procurement project, and contended that Widodo should be held liable because he bore responsibility for the budget abuses within his administration.
Bombing
On 24 May 2017, a twin bomb attack struck the Kampung Melayu Transjakarta bus terminal. The first explosion occurred at nine o'clock sharp, near the terminal's toilets, and the second occurred five minutes later at the bus stop. In total, five people were killed, including the two suspects.
Sexual harassment
A number of sexual harassment cases have been reported on board crammed Transjakarta buses and their overcrowded stations over the past few years, as the number of passengers has continued to rise. Transjakarta responded by providing a women-only area at the front of its buses and launching women-only buses.
Burning incident
Demonstrations opposing a draft bill in October 2020 turned violent, and multiple BRT shelters became targets. The Bundaran HI and Sarinah stations were the first two shelters to be burned down, out of a total of 20 shelters that were entirely or partially burned, looted, damaged or vandalised by rioters. Transjakarta claimed losses of up to 55 billion rupiah.
Rear-end collision
There have been cases of Transjakarta buses colliding with one another in the dedicated lanes. In 2016, Kopaja AC buses (operating under a pilot integration programme with Transjakarta) collided with a Corridor 1 bus at Monas station, triggering a chain collision in which two people were fatally injured. In October 2021, two buses serving Corridor 9 were involved in a similar accident that left three casualties.
Transit Oriented Development
Seventeen transit-oriented development (TOD) areas are being built to integrate multiple transport systems and facilitate easy, convenient transfers between different modes of public transport. At Tebet, the TOD integrates Transjakarta and the Commuter Line. Meanwhile, the Dukuh Atas TOD (Indonesian: Kawasan Integrasi Dukuh Atas, or KIDA) aims to prioritise walking and the use of public transport for commuting rather than private vehicles. KIDA will integrate seven transport systems in total: the Jakarta MRT, Jabodebek LRT, Jakarta LRT, Soekarno-Hatta Airport Rail Link, Commuter Line, Transjakarta, and other bus services.
As of July 2019, about 1,170 Angkot micro-buses had been integrated with various Transjakarta routes, a number expected to rise to 1,500 by the end of that year.
Logos
| Technology | Asia_2 | null |
3143089 | https://en.wikipedia.org/wiki/Repeated%20game | Repeated game | In game theory, a repeated game (or iterated game) is an extensive form game that consists of a number of repetitions of some base game (called a stage game). The stage game is usually one of the well-studied 2-person games. Repeated games capture the idea that a player will have to take into account the impact of their current action on the future actions of other players; this impact is sometimes called their reputation. Single stage game or single shot game are names for non-repeated games.
For an example of a repeated game, consider two gas stations that are adjacent to one another. They compete by publicly posting prices, and have the same constant marginal cost c (the wholesale price of gasoline). Assume that when they both charge p = 10, their joint profit is maximized, resulting in a high profit for each. Despite this being the best outcome for them, each is tempted to deviate: by lowering its price slightly, either station can steal all of its competitor's customers and nearly double its revenue. The only price at which no such profitable deviation exists is p = c, where profit is zero. In other words, in the pricing competition game, the only Nash equilibrium is the inefficient one (for the gas stations) in which both charge p = c. This is the rule rather than the exception: in a stage game, the Nash equilibrium is the only outcome that an agent can consistently obtain in an interaction, and it is usually inefficient for the agents, because each cares only about its own payoff and ignores the benefits or costs its actions impose on competitors. On the other hand, gas stations do make a profit even when another gas station is adjacent. One crucial reason is that their interaction is not one-off. This situation is captured by repeated games, in which the two gas stations compete over prices (stage games) across an indefinite time horizon t = 0, 1, 2, ....
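A few lines of code make the undercutting logic concrete. The demand size, cost and prices below are made-up illustrative numbers, not figures from any source.

```python
# A minimal sketch of the undercutting logic in the gas-station (Bertrand)
# stage game. Demand and cost figures are invented for illustration.

MARGINAL_COST = 4.0
MARKET_DEMAND = 100        # customers buy from whichever station is cheaper

def profit(own_price: float, rival_price: float) -> float:
    """Profit of one station: the cheaper station serves the whole market,
    equal prices split it evenly."""
    if own_price < rival_price:
        share = 1.0
    elif own_price == rival_price:
        share = 0.5
    else:
        share = 0.0
    return (own_price - MARGINAL_COST) * MARKET_DEMAND * share

# At the collusive price p = 10 each station earns 300, but a one-cent
# undercut nearly doubles profit -- so p = 10 is not an equilibrium:
print(profit(10.0, 10.0))   # 300.0
print(profit(9.99, 10.0))   # 599.0
# At p = c neither station can gain by deviating (any lower price loses money):
print(profit(4.0, 4.0))     # 0.0
```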
Finitely vs infinitely repeated games
Repeated games may be broadly divided into two classes, finite and infinite, depending on how long the game is being played for.
Finite games are those in which both players know that the game is being played a specific (and finite) number of rounds, and that the game ends for certain after that many rounds have been played. In general, finite games can be solved by backwards induction.
Infinite games are those in which the game is being played an infinite number of times. A game with an infinite number of rounds is also equivalent (in terms of strategies to play) to a game in which the players in the game do not know for how many rounds the game is being played. Infinite games (or games that are being repeated an unknown number of times) cannot be solved by backwards induction as there is no "last round" to start the backwards induction from.
Even if the game being played in each round is identical, repeating that game a finite or an infinite number of times can, in general, lead to very different outcomes (equilibria), as well as very different optimal strategies.
Infinitely repeated games
The most widely studied repeated games are games that are repeated an infinite number of times. In iterated prisoner's dilemma games, it is found that the preferred strategy is not to play a Nash strategy of the stage game, but to cooperate and play a socially optimum strategy. An essential part of strategies in infinitely repeated game is punishing players who deviate from this cooperative strategy. The punishment may be playing a strategy which leads to reduced payoff to both players for the rest of the game (called a trigger strategy). A player may normally choose to act selfishly to increase their own reward rather than play the socially optimum strategy. However, if it is known that the other player is following a trigger strategy, then the player expects to receive reduced payoffs in the future if they deviate at this stage. An effective trigger strategy ensures that cooperating has more utility to the player than acting selfishly now and facing the other player's punishment in the future.
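The trade-off between deviating today and being punished forever can be checked directly. The sketch below uses standard prisoner's-dilemma payoff labels (R, T, P) with made-up values; the threshold it illustrates, δ ≥ (T − R)/(T − P), is the usual grim-trigger condition under discounting.

```python
# Sketch: when does a grim trigger sustain cooperation in an infinitely
# repeated prisoner's dilemma? Payoff values below are illustrative only.
R = 3.0   # reward for mutual cooperation
T = 5.0   # temptation payoff from defecting against a cooperator
P = 1.0   # punishment payoff for mutual defection

def cooperation_is_stable(delta: float) -> bool:
    """Compare cooperating forever with defecting once and being punished forever.

    Cooperation value:  R / (1 - delta)
    Deviation value:    T + delta * P / (1 - delta)
    """
    cooperate = R / (1 - delta)
    deviate = T + delta * P / (1 - delta)
    return cooperate >= deviate

# The threshold works out to delta >= (T - R) / (T - P) = 0.5 here:
for delta in (0.3, 0.5, 0.9):
    print(delta, cooperation_is_stable(delta))
# 0.3 False / 0.5 True / 0.9 True
```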
Many theorems deal with how to achieve and maintain a socially optimal equilibrium in repeated games; these results are collectively called "folk theorems". An important feature of a repeated game is the way in which a player's preferences may be modelled.
There are many different ways in which a preference relation may be modelled in an infinitely repeated game, but two key ones are:
Limit of means - If the game results in a path of outcomes x_t and player i has the basic-game utility function u_i, player i's utility is:
$$U_i = \liminf_{T \to \infty} \frac{1}{T} \sum_{t=1}^{T} u_i(x_t).$$
Discounting - If player i's valuation of the game diminishes with time according to a discount factor δ, then player i's utility is:
$$U_i = (1-\delta) \sum_{t=1}^{\infty} \delta^{\,t-1} u_i(x_t).$$
For sufficiently patient players (e.g. those with a high enough value of δ), it can be proved that every feasible payoff profile that gives each player more than their minmax payoff can be sustained as a Nash equilibrium outcome - a very large set of outcomes.
Finitely repeated games
Repeated games allow for the study of the interaction between immediate gains and long-term incentives. A finitely repeated game is a game in which the same one-shot stage game is played repeatedly over a number of discrete time periods, or rounds. Each time period is indexed by 0 < t ≤ T where T is the total number of periods. A player's final payoff is the sum of their payoffs from each round.
For those repeated games with a fixed and known number of time periods, if the stage game has a unique Nash equilibrium, then the repeated game has a unique subgame perfect Nash equilibrium strategy profile of playing the stage game equilibrium in each round. This can be deduced through backward induction. The unique stage game Nash equilibrium must be played in the last round regardless of what happened in earlier rounds. Knowing this, players have no incentive to deviate from the unique stage game Nash equilibrium in the second-to-last round, and so on; this logic applies all the way back to the first round of the game. This 'unravelling' of a game from its endpoint can be observed in the Chainstore paradox.
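The unravelling argument can be illustrated with a brute-force computation on a small stage game. The payoff matrix below is an assumed prisoner's dilemma chosen for illustration; it is not a matrix taken from this article.

```python
# Sketch of the backward-induction argument: brute-force the pure Nash
# equilibria of a stage game, and note that with a unique equilibrium the
# subgame perfect play of the finitely repeated game repeats it every round.

from itertools import product

payoffs = {  # (row action, col action) -> (row payoff, col payoff)
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}
actions = ["C", "D"]

def is_nash(row, col):
    u_row, u_col = payoffs[(row, col)]
    no_row_dev = all(payoffs[(r, col)][0] <= u_row for r in actions)
    no_col_dev = all(payoffs[(row, c)][1] <= u_col for c in actions)
    return no_row_dev and no_col_dev

stage_equilibria = [rc for rc in product(actions, actions) if is_nash(*rc)]
print(stage_equilibria)          # [('D', 'D')] -- unique stage equilibrium

T = 3  # number of rounds
# Backward induction: the unique equilibrium must be played in round T,
# so no earlier-round behaviour can be rewarded or punished, and so on.
spne_path = stage_equilibria * T if len(stage_equilibria) == 1 else None
print(spne_path)                 # [('D', 'D'), ('D', 'D'), ('D', 'D')]
```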
If the stage game has more than one Nash equilibrium, the repeated game may have multiple subgame perfect Nash equilibria. While a Nash equilibrium must be played in the last round, the presence of multiple equilibria introduces the possibility of reward and punishment strategies that can be used to support deviation from stage game Nash equilibria in earlier rounds.
Finitely repeated games with an unknown or indeterminate number of time periods, on the other hand, are regarded as if they were an infinitely repeated game. It is not possible to apply backward induction to these games.
Examples of cooperation in finitely repeated games
Example 1: Two-stage repeated game with multiple Nash equilibria
Example 1 shows a two-stage repeated game with multiple pure strategy Nash equilibria. Because these equilibria differ markedly in terms of payoffs for Player 2, Player 1 can propose a strategy over multiple stages of the game that incorporates the possibility for punishment or reward for Player 2. For example, Player 1 might propose that they play (A, X) in the first round. If Player 2 complies in round one, Player 1 will reward them by playing the equilibrium (A, Z) in round two, yielding a total payoff over two rounds of (7, 9).
If Player 2 deviates to (A, Z) in round one instead of playing the agreed-upon (A, X), Player 1 can threaten to punish them by playing the (B, Y) equilibrium in round two. This latter situation yields payoff (5, 7), leaving both players worse off.
In this way, the threat of punishment in a future round incentivizes a collaborative, non-equilibrium strategy in the first round. Because the final round of any finitely repeated game, by its very nature, removes the threat of future punishment, the optimal strategy in the last round will always be one of the game's equilibria. It is the payoff differential between equilibria in the game represented in Example 1 that makes a punishment/reward strategy viable (for more on the influence of punishment and reward on game strategy, see 'Public Goods Game with Punishment and for Reward').
Example 2: Two-stage repeated game with unique Nash equilibrium
Example 2 shows a two-stage repeated game with a unique Nash equilibrium. Because there is only one equilibrium here, there is no mechanism for either player to threaten punishment or promise reward in the game's second round. As such, the only strategy that can be supported as a subgame perfect Nash equilibrium is that of playing the game's unique Nash equilibrium strategy (D, N) every round. In this case, that means playing (D, N) each stage for two stages (n = 2), but it would be true for any finite number of stages n. To interpret: this result means that the very presence of a known, finite time horizon sabotages cooperation in every single round of the game. Cooperation in iterated games is only possible when the number of rounds is infinite or unknown.
Solving repeated games
In general, repeated games are easily solved using strategies provided by folk theorems. Complex repeated games can be solved using various techniques most of which rely heavily on linear algebra and the concepts expressed in fictitious play.
It may be deduced that the set of equilibrium payoffs in infinitely repeated games can be characterized. Through alternation between two payoff profiles, say a and f, the average payoff profile may be any weighted average between a and f.
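For instance, under the limit-of-means criterion, a path that spends a fraction λ of the periods at payoff profile a and the remaining periods at f yields the average

$$\lambda a + (1 - \lambda) f,$$

so simple alternation between a and f (λ = 1/2) yields the midpoint (a + f)/2.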
Incomplete information
Repeated games can include some incomplete information. Repeated games with incomplete information were pioneered by Aumann and Maschler. While it is easier to treat a situation where one player is informed and the other not, and when information received by each player is independent, it is possible to deal with zero-sum games with incomplete information on both sides and signals that are not independent.
| Mathematics | Game theory | null |
3143285 | https://en.wikipedia.org/wiki/Chemosterilant | Chemosterilant | A chemosterilant is a chemical compound that causes reproductive sterility in an organism. Chemosterilants are particularly useful in controlling the population of species that are known to cause disease, such as insects, or species that are, in general, economically damaging. The sterility induced by chemosterilants can have temporary or permanent effects. Chemosterilants can be used to target one or both sexes, and they prevent the organism from becoming sexually functional. They may be used to control pest populations by sterilizing males. The need for chemosterilants is a direct consequence of the limitations of insecticides. Insecticides are most effective in regions in which there is high vector density in conjunction with endemic transmission, and this may not always be the case. Additionally, the insects themselves can develop resistance to the insecticide, either at the target-protein level or by avoiding the insecticide, in what is called behavioral resistance. If an insect that has been treated with a chemosterilant mates with a fertile insect, no offspring will be produced. The intention is to keep the percentage of sterile insects within a population constant, so that with each generation there will be fewer offspring.
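As a rough illustration of that last point, the toy model below assumes a fixed sterile fraction among mates and a fixed number of offspring per fertile mating; both numbers are invented for the example and are not taken from the text.

```python
# Toy model (assumed numbers): if a constant fraction of matings in each
# generation involves a sterilized insect, the share of fertile matings drops
# and the population shrinks generation by generation.

def next_generation(population: float, sterile_fraction: float,
                    offspring_per_fertile_mating: float = 1.5) -> float:
    fertile_matings = population * (1.0 - sterile_fraction)
    return fertile_matings * offspring_per_fertile_mating

pop = 10_000.0
for generation in range(5):
    print(generation, round(pop))
    pop = next_generation(pop, sterile_fraction=0.5)
# With half of the matings sterile and 1.5 offspring per fertile mating,
# each generation is 0.75x the previous one: 10000, 7500, 5625, 4219, 3164.
```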
Early research and concerns
Research on chemosterilants began in the 1960s and 1970s, but the effort was abandoned because of concerns about toxicity. With the great advances since made in genetics and vector analysis, the search for safer chemosterilants has resumed in the 21st century. Initially there were many concerns about using chemosterilants on an operational scale because of the difficulty of finding the ideal small molecule. A molecule used as a chemosterilant must satisfy certain criteria. It must be available at low cost, and it must produce permanent sterility upon exposure, whether through topical application or by immersing larvae in treated water. Additionally, the survivability of the sterile males must not be affected, and the chemosterilant should not be toxic to humans or the environment. The two most promising early agents were the aziridines thiotepa and bisazir, but they failed the criterion of minimal toxicity to humans and to the vector's predators. Pyriproxyfen was another compound of interest, since it is not toxic to humans, but it cannot be used to induce sterility in larvae because it acts as a larvicide: exposing larvae to pyriproxyfen simply kills them.
Examples of chemosterilants
Use of chemosterilants for non-surgical castration (dogs and cats)
There are many regions in which populations of cats and dogs roam freely on the streets. The most conventional approach to controlling reproduction in companion animals is surgical, but surgical intervention poses ethical concerns. With a non-surgical castration technique, animals would not have to undergo anesthesia and would not experience post-surgical bleeding or infection of the operated area. Examples of such chemosterilants include CaCl2 and zinc gluconate. These are known as necrosis-inducing agents: they cause degeneration of cells in the testes, resulting in infertility. They are generally injected into the male reproductive organs, such as the testes, vas deferens, or epididymis. When injected, they induce azoospermia, the absence of sperm cells in the semen; if no sperm cells are present, reproduction can no longer occur. There is, however, one complication with necrosis-inducing agents: many animals exhibit an inflammatory response immediately after the injection. To avoid the pain and discomfort associated with necrosis-inducing agents, another form of sterilization, using apoptosis-inducing agents, has been studied. If cells are signalled to undergo apoptosis rather than being destroyed by a foreign substance, no inflammation results in the area. Experiments on mice, in vitro and ex vivo, have demonstrated this: when an apoptosis-inducing agent, doxorubicin encapsulated in a nanoemulsion, was injected into mice, testicular cell death was observed and inflammation was not. However, more research still needs to be conducted with these materials, as the long-term impacts are unknown.
Effect of chemosterilants on the behavior of wandering male dogs in Puerto Natales, Chile
Chemosterilants can be useful in developing countries, which have fewer resources and funds to allocate to the castration of free-roaming animals; in addition, the local culture opposes the removal of testes. This 2015 study was unable to draw conclusions about the effect of chemical sterilization on dog aggression, as too little is known about aggression in free-roaming dogs for researchers to assess it objectively. Using GPS technology to track the movement of free-roaming male dogs, the study found that chemical sterilization, compared with surgical sterilization, did not significantly affect how far the dogs roamed around the city. Much more detailed studies are needed, since this study was the first of its kind, had small sample sizes, and observed behavior over a relatively short period.
Use of CaCl2 and zinc gluconate in cattle
The method of administration of CaCl2 and zinc gluconate is through a transvaginal injection of the chemical into the ovaries, and visualization is achieved through the use of an ultrasound. One group of cattle was only treated with CaCl2, one group was only treated with zinc gluconate, and one group was treated with both CaCl2 and zinc gluconate. Treatment with CaCl2 seems to be most promising, as the ovarian mass of the female cattle upon slaughter was less than cattle treated with zinc gluconate or the combination. The goal of treatment with CaCl2 is to cause ovarian atrophy with a minimal amount of pain.
Ornitrol in controlling the sparrow population
Another chemosterilant found to be effective is ornitrol. It was administered to sparrows by impregnating canary seed, which was used as a food source for one group of sparrows, while a control group was fed untreated canary seed. The control birds laid almost twice as many eggs as the group given ornitrol, and the chemical was deemed an effective chemosterilant in the study; however, after it was removed from the diet, the birds were able to lay viable eggs as soon as one to two weeks later.
Commonly used chemosterilants
Two types of chemosterilants are commonly used:
Antimetabolites resemble substances that a cell or tissue needs; the organism's body mistakes them for true metabolites and tries to incorporate them into its normal building processes. Because the chemical does not fit exactly, the metabolic process comes to a halt.
Alkylating agents are a group of chemicals that act on chromosomes. These chemicals are extremely reactive, capable of intense cell destruction, damage to chromosomes and production of mutations.
| Technology | Pest and disease control | null |
3143591 | https://en.wikipedia.org/wiki/Euclid%27s%20theorem | Euclid's theorem | Euclid's theorem is a fundamental statement in number theory that asserts that there are infinitely many prime numbers. It was first proven by Euclid in his work Elements. There are several proofs of the theorem.
Euclid's proof
Euclid offered a proof published in his work Elements (Book IX, Proposition 20), which is paraphrased here.
Consider any finite list of prime numbers p1, p2, ..., pn. It will be shown that there exists at least one additional prime number not included in this list. Let P be the product of all the prime numbers in the list: P = p1p2...pn. Let q = P + 1. Then q is either prime or not:
If q is prime, then there is at least one more prime that is not in the list, namely, q itself.
If q is not prime, then some prime factor p divides q. If this factor p were in our list, then it would also divide P (since P is the product of every number in the list). If p divides P and q, then p must also divide the difference of the two numbers, which is (P + 1) − P or just 1. Since no prime number divides 1, p cannot be in the list. This means that at least one more prime number exists that is not in the list.
This proves that for every finite list of prime numbers there is a prime number not in the list. In the original work, Euclid denoted the arbitrary finite set of prime numbers as A, B, Γ.
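The construction in the proof is easy to run mechanically. The short sketch below (a simple trial-division helper plus the product-plus-one step) is only an illustration of the argument, not part of Euclid's text.

```python
# Sketch of Euclid's construction: given any finite list of primes, P + 1 has
# a prime factor that cannot be on the list.

def smallest_prime_factor(n: int) -> int:
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

def new_prime(primes: list[int]) -> int:
    product = 1
    for p in primes:
        product *= p
    q = product + 1
    return smallest_prime_factor(q)   # prime, divides q, so it is not in the list

print(new_prime([2, 3, 5, 7]))          # 211 (here q = 211 is itself prime)
print(new_prime([2, 3, 5, 7, 11, 13]))  # 59, a factor of 30031 = 59 * 509
```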
Euclid is often erroneously reported to have proved this result by contradiction beginning with the assumption that the finite set initially considered contains all prime numbers, though it is actually a proof by cases, a direct proof method. The philosopher Torkel Franzén, in a book on logic, states, "Euclid's proof that there are infinitely many primes is not an indirect proof [...] The argument is sometimes formulated as an indirect proof by replacing it with the assumption 'Suppose are all the primes'. However, since this assumption isn't even used in the proof, the reformulation is pointless."
Variations
Several variations on Euclid's proof exist, including the following:
The factorial n! of a positive integer n is divisible by every integer from 2 to n, as it is the product of all of them. Hence, n! + 1 is not divisible by any of the integers from 2 to n, inclusive (it gives a remainder of 1 when divided by each). Hence n! + 1 is either prime or divisible by a prime larger than n. In either case, for every positive integer n, there is at least one prime bigger than n. The conclusion is that the number of primes is infinite.
Euler's proof
Another proof, by the Swiss mathematician Leonhard Euler, relies on the fundamental theorem of arithmetic: that every integer has a unique prime factorization. What Euler wrote (not with this modern notation and, unlike modern standards, not restricting the arguments in sums and products to any finite sets of integers) is equivalent to the statement that we have
$$\prod_{p \in P_k} \frac{1}{1 - \frac{1}{p}} = \sum_{n \in N_k} \frac{1}{n},$$
where $P_k$ denotes the set of the first $k$ prime numbers, and $N_k$ is the set of the positive integers whose prime factors are all in $P_k$.
To show this, one expands each factor in the product as a geometric series, and distributes the product over the sum (this is a special case of the Euler product formula for the Riemann zeta function).
In the resulting sum, the reciprocal of every product of prime powers with primes in $P_k$ appears exactly once, so the equality holds by the fundamental theorem of arithmetic. In his first corollary to this result Euler denotes by a symbol similar to $\infty$ the "absolute infinity" and writes that the infinite sum in the statement equals the "value" $\log \infty$, to which the infinite product is thus also equal (in modern terminology this is equivalent to saying that the partial sum up to $x$ of the harmonic series diverges asymptotically like $\log x$). Then in his second corollary, Euler notes that the product
converges to the finite value 2, and there are consequently more primes than squares. This proves Euclid's Theorem.
In the same paper (Theorem 19) Euler in fact used the above equality to prove a much stronger theorem that was unknown before him, namely that the series
$$\sum_{p \in P} \frac{1}{p}$$
is divergent, where $P$ denotes the set of all prime numbers (Euler writes that the infinite sum equals $\log \log \infty$, which in modern terminology is equivalent to saying that the partial sum up to $x$ of this series behaves asymptotically like $\log \log x$).
Erdős's proof
Paul Erdős gave a proof that also relies on the fundamental theorem of arithmetic. Every positive integer has a unique factorization into a square-free number r and a square number s^2. For example, 75,600 = 2^4 · 3^3 · 5^2 · 7 = 21 · 60^2.
Let N be a positive integer, and let k be the number of primes less than or equal to N. Call those primes p1, ..., pk. Any positive integer n which is less than or equal to N can then be written in the form
$$n = \left(p_1^{e_1} p_2^{e_2} \cdots p_k^{e_k}\right) s^2,$$
where each e_i is either 0 or 1. There are 2^k ways of forming the square-free part of n. And s^2 can be at most N, so s ≤ √N. Thus, at most 2^k √N numbers can be written in this form. In other words,
$$N \leq 2^k \sqrt{N}.$$
Or, rearranging, k, the number of primes less than or equal to N, is greater than or equal to (1/2) log₂ N. Since N was arbitrary, k can be as large as desired by choosing N appropriately.
Furstenberg's proof
In the 1950s, Hillel Furstenberg introduced a proof by contradiction using point-set topology.
Define a topology on the integers Z, called the evenly spaced integer topology, by declaring a subset to be an open set if and only if it is either the empty set, ∅, or it is a union of arithmetic sequences S(a, b) (for a ≠ 0), where
$$S(a, b) = \{ a n + b : n \in \mathbb{Z} \} = a\mathbb{Z} + b.$$
Then a contradiction follows from the property that a finite nonempty set of integers cannot be open and the property that the basis sets S(a, b) are both open and closed, since
$$\mathbb{Z} \setminus \{-1, +1\} = \bigcup_{p \text{ prime}} S(p, 0)$$
cannot be closed because its complement, {−1, +1}, is finite, but would be closed, being a finite union of closed sets, if there were only finitely many primes.
Recent proofs
Proof using the inclusion-exclusion principle
Juan Pablo Pinasco has written the following proof.
Let p1, ..., pN be the smallest N primes. Then by the inclusion–exclusion principle, the number of positive integers less than or equal to x that are divisible by one of those primes is
$$\sum_{i} \left\lfloor \frac{x}{p_i} \right\rfloor - \sum_{i < j} \left\lfloor \frac{x}{p_i p_j} \right\rfloor + \sum_{i < j < k} \left\lfloor \frac{x}{p_i p_j p_k} \right\rfloor - \cdots \qquad (1)$$
Dividing by x and letting x → ∞ gives
$$\sum_{i} \frac{1}{p_i} - \sum_{i < j} \frac{1}{p_i p_j} + \sum_{i < j < k} \frac{1}{p_i p_j p_k} - \cdots \qquad (2)$$
This can be written as
$$1 - \prod_{i=1}^{N} \left( 1 - \frac{1}{p_i} \right) \qquad (3)$$
If no other primes than p1, ..., pN exist, then the expression in (1) is equal to ⌊x⌋ − 1 and the expression in (2) is equal to 1, but clearly the expression in (3) is not equal to 1. Therefore, there must be more primes than p1, ..., pN.
Proof using Legendre's formula
In 2010, Junho Peter Whang published the following proof by contradiction. Let k be any positive integer. Then according to Legendre's formula (sometimes attributed to de Polignac)
$$k! = \prod_{p \text{ prime}} p^{f(p, k)},$$
where
$$f(p, k) = \left\lfloor \frac{k}{p} \right\rfloor + \left\lfloor \frac{k}{p^2} \right\rfloor + \cdots < \frac{k}{p - 1}.$$
But if only finitely many primes exist, then
$$\lim_{k \to \infty} \frac{\left( \prod_{p} p^{1/(p-1)} \right)^{k}}{k!} = 0$$
(the numerator of the fraction would grow singly exponentially while, by Stirling's approximation, the denominator grows more quickly than singly exponentially), contradicting the fact that for each k the numerator is greater than or equal to the denominator.
Proof by construction
Filip Saidak gave the following proof by construction, which does not use reductio ad absurdum or Euclid's lemma (that if a prime p divides ab then it must divide a or b).
Since each natural number greater than 1 has at least one prime factor, and two successive numbers n and (n + 1) have no factor in common, the product n(n + 1) has more different prime factors than the number n itself. So the chain of pronic numbers:
1×2 = 2 {2}, 2×3 = 6 {2, 3}, 6×7 = 42 {2, 3, 7}, 42×43 = 1806 {2, 3, 7, 43}, 1806×1807 = 3263442 {2, 3, 7, 43, 13, 139}, · · ·
provides a sequence of unlimited growing sets of primes.
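The construction can be replayed directly; the short sketch below just iterates n ↦ n(n + 1) and factors each term by trial division.

```python
# Sketch of Saidak's construction: repeatedly multiplying n by n + 1 yields
# numbers with strictly more distinct prime factors at every step.

def prime_factors(n: int) -> set[int]:
    factors, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            factors.add(d)
            n //= d
        d += 1
    if n > 1:
        factors.add(n)
    return factors

n = 2
for _ in range(5):
    print(n, sorted(prime_factors(n)))
    n = n * (n + 1)
# 2 {2}; 6 {2,3}; 42 {2,3,7}; 1806 {2,3,7,43}; 3263442 {2,3,7,13,43,139}
```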
Proof using the incompressibility method
Suppose there were only k primes (p1, ..., pk). By the fundamental theorem of arithmetic, any positive integer n could then be represented as
$$n = p_1^{e_1} p_2^{e_2} \cdots p_k^{e_k},$$
where the non-negative integer exponents ei together with the finite-sized list of primes are enough to reconstruct the number. Since pi ≥ 2 for all i, it follows that ei ≤ lg n for all i (where lg denotes the base-2 logarithm). This yields an encoding for n of the following size (using big O notation):
$$O(k \lg \lg n) \text{ bits.}$$
This is a much more efficient encoding than representing n directly in binary, which takes N = lg n bits. An established result in lossless data compression states that one cannot generally compress N bits of information into fewer than N bits. The representation above violates this by far when n is large enough, since k lg lg n = o(lg n). Therefore, the number of primes must not be finite.
Proof using an even-odd argument
Romeo Meštrović used an even-odd argument to show that if the number of primes is not infinite then 3 is the largest prime, a contradiction.
Suppose that p1 = 2 < p2 = 3 < ... < pk are all the prime numbers. Consider the product P = p2 p3 ⋯ pk of all the odd primes, and note that by assumption all positive integers relatively prime to it are in the set {1, 2, 4, 8, ...} of powers of 2. In particular, P − 2 is relatively prime to P and so must lie in that set. However, P − 2 is an odd number, so P − 2 = 1, or P = 3. This means that 3 must be the largest prime number, which is a contradiction.
The above proof continues to work if is replaced by any prime with , the product becomes and even vs. odd argument is replaced with a divisible vs. not divisible by argument. The resulting contradiction is that must, simultaneously, equal and be greater than , which is impossible.
Stronger results
The theorems in this section simultaneously imply Euclid's theorem and other results.
Dirichlet's theorem on arithmetic progressions
Dirichlet's theorem states that for any two positive coprime integers a and d, there are infinitely many primes of the form a + nd, where n is also a positive integer. In other words, there are infinitely many primes that are congruent to a modulo d.
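As a concrete illustration of the statement, the sketch below lists the primes of the form a + nd for one particular choice of coprime parameters (a = 3, d = 4); the choice is arbitrary and only meant as an example.

```python
# Sketch illustrating Dirichlet's theorem for one choice of parameters
# (a = 3, d = 4, i.e. primes congruent to 3 modulo 4).

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

a, d = 3, 4
progression_primes = [a + n * d for n in range(1, 30) if is_prime(a + n * d)]
print(progression_primes)
# [7, 11, 19, 23, 31, 43, 47, 59, 67, 71, 79, 83, 103, 107]
```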
Prime number theorem
Let π(x) be the prime-counting function that gives the number of primes less than or equal to x, for any real number x. The prime number theorem then states that x / log x is a good approximation to π(x), in the sense that the limit of the quotient of the two functions π(x) and x / log x as x increases without bound is 1:
$$\lim_{x \to \infty} \frac{\pi(x)}{x / \log x} = 1.$$
Using asymptotic notation this result can be restated as
$$\pi(x) \sim \frac{x}{\log x}.$$
This yields Euclid's theorem, since
$$\lim_{x \to \infty} \frac{x}{\log x} = \infty.$$
Bertrand–Chebyshev theorem
In number theory, Bertrand's postulate is a theorem stating that for any integer n > 1, there always exists at least one prime number p such that
$$n < p < 2n.$$
Equivalently, writing π(x) for the prime-counting function (the number of primes less than or equal to x), the theorem asserts that π(2n) − π(n) ≥ 1 for all n > 1.
This statement was first conjectured in 1845 by Joseph Bertrand (1822–1900). Bertrand himself verified his statement for all numbers in the interval [2, 3 × 10^6].
His conjecture was completely proved by Chebyshev (1821–1894) in 1852 and so the postulate is also called the Bertrand–Chebyshev theorem or Chebyshev's theorem.
| Mathematics | Other | null |
1625082 | https://en.wikipedia.org/wiki/Weak%20isospin | Weak isospin | In particle physics, weak isospin is a quantum number relating to the electrically charged part of the weak interaction: Particles with half-integer weak isospin can interact with the W± bosons; particles with zero weak isospin do not.
Weak isospin is a construct parallel to the idea of isospin under the strong interaction. Weak isospin is usually given the symbol T or I, with the third component written as T3 or I3. The third component T3 is more important than T; typically "weak isospin" is used as a short form of the proper term "third component of weak isospin". It can be understood as the eigenvalue of a charge operator.
Notation
This article uses T and T3 for weak isospin and its projection.
Regarding ambiguous notation, T is also used to represent the "normal" (strong force) isospin, and the same holds for its third component, a.k.a. T3 or Tz. Aggravating the confusion, T is also used as the symbol for the topness quantum number.
Conservation law
The weak isospin conservation law relates to the conservation of T3: weak interactions conserve T3. It is also conserved by the electromagnetic and strong interactions. However, interaction with the Higgs field does not conserve T3, as directly seen in propagating fermions, which mix their chiralities by the mass terms that result from their Higgs couplings. Since the Higgs field vacuum expectation value is nonzero, particles interact with this field all the time, even in vacuum. Interaction with the Higgs field changes particles' weak isospin (and weak hypercharge). Only a specific combination of them, the electric charge, is conserved.
The electric charge, Q, is related to the weak isospin, T3, and the weak hypercharge, YW, by
$$Q = T_3 + \frac{Y_\mathrm{W}}{2}.$$
In 1961 Sheldon Glashow proposed this relation by analogy to the Gell-Mann–Nishijima formula for charge to isospin.
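As a consistency check of this relation, consider two standard left-handed fermion assignments (the hypercharge values quoted here, YW = +1/3 for left-handed quarks and YW = −1 for left-handed leptons, are the conventional ones):

$$u_\mathrm{L}: \quad Q = +\tfrac{1}{2} + \tfrac{1}{2}\left(+\tfrac{1}{3}\right) = +\tfrac{2}{3}, \qquad e^-_\mathrm{L}: \quad Q = -\tfrac{1}{2} + \tfrac{1}{2}(-1) = -1.$$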
Relation with chirality
Fermions with negative chirality (also called "left-handed" fermions) have T = 1/2 and can be grouped into doublets with T3 = ±1/2 that behave the same way under the weak interaction. By convention, electrically charged fermions are assigned a T3 with the same sign as their electric charge.
For example, up-type quarks (u, c, t) have T3 = +1/2 and always transform into down-type quarks (d, s, b), which have T3 = −1/2, and vice versa. On the other hand, a quark never decays weakly into a quark of the same T3. Something similar happens with left-handed leptons, which exist as doublets containing a charged lepton (e−, μ−, τ−) with T3 = −1/2 and a neutrino (νe, νμ, ντ) with T3 = +1/2. In all cases, the corresponding anti-fermion has reversed chirality ("right-handed" antifermion) and reversed sign of T3.
Fermions with positive chirality ("right-handed" fermions) and anti-fermions with negative chirality ("left-handed" anti-fermions) have T = T3 = 0 and form singlets that do not undergo charged weak interactions.
Particles with T3 = 0 do not interact with the W± bosons; however, they do all interact with the Z0 boson.
Neutrinos
Lacking any distinguishing electric charge, neutrinos and antineutrinos are assigned the T3 opposite to that of their corresponding charged lepton; hence, all left-handed neutrinos are paired with negatively charged left-handed leptons with T3 = −1/2, so those neutrinos have T3 = +1/2. Since right-handed antineutrinos are paired with positively charged right-handed anti-leptons with T3 = +1/2, those antineutrinos are assigned T3 = −1/2. The same result follows from particle-antiparticle charge and parity reversal, between left-handed neutrinos (T3 = +1/2) and right-handed antineutrinos (T3 = −1/2).
Weak isospin and the W bosons
The symmetry associated with weak isospin is SU(2) and requires gauge bosons with T = 1 (W+, W0, and W−) to mediate transformations between fermions with half-integer weak isospin charges. T = 1 implies that the W bosons have three different values of T3:
The W+ boson (T3 = +1) is emitted in transitions T3 = −1/2 → T3 = +1/2.
The W0 boson (T3 = 0) would be emitted in weak interactions where T3 does not change, such as neutrino scattering.
The W− boson (T3 = −1) is emitted in transitions T3 = +1/2 → T3 = −1/2.
Under electroweak unification, the W0 boson mixes with the weak hypercharge gauge boson B; both have T3 = 0. This results in the observed Z0 boson and the photon of quantum electrodynamics; the resulting Z0 and photon likewise have zero weak isospin.
| Physical sciences | Quantum numbers | Physics |
1625625 | https://en.wikipedia.org/wiki/Car%20wash | Car wash | A car wash, or auto wash, is a facility used to clean the exterior, and in some cases the interior, of cars. Car washes can be self-service, full-service (with attendants who wash the vehicle), or fully automated (possibly connected to a filling station). Car washes may also be events where people pay to have their cars washed by volunteers, often using less specialized equipment, as a fundraiser.
History
The first U.S. patent for a mechanized car wash was filed in 1900 and soon followed by "auto laundries". The Automobile Laundry in Detroit, Michigan, opened in 1914 by Frank McCormick and J.W. Hinkle, is considered the first business in the U.S. to adopt the name "car wash" for their services. Manual car wash operations, which used manpower to push or move the cars through stages, peaked at 32 drive-through facilities in the United States. The first semi-automatic car wash in the United States debuted in 1946 at a facility in Detroit, which used automatic pulley systems and manual brushing.
Dan Hanna, encouraged by car washers in Detroit, founded a car wash called the Rub-a-Dub in Oregon in 1955. He later formed Hanna Enterprises and grew to about 31 locations. Hanna operated his wash rack until adopting a mechanized car washing system in 1959, and the company became one of the leading manufacturers of car washing equipment and materials, including brushes, conveyor belts, tire washes, and recirculating water systems. In the late 1960s, some car washes began to adopt "flex-serve" models, in which facilities such as vacuuming and hand detailing are built near the exit as an optional service, to accommodate customers who did not want a full interior and exterior cleaning.
The car wash industry in the U.S. remained primarily led by small businesses that distinguished themselves through playful signage or building architecture. At the turn of the 21st century, the "express exterior" business model—first developed by a chain in Baton Rouge, Louisiana—began to emerge, in which computerized point of sale and queueing systems are used to manage customer throughput via automation, reducing the amount of staff required. In the 2010s, this model began to be combined with subscription-based car wash services, which offer convenience and potentially lower costs for car owners compared to traditional pay-per-wash models.
Due to their turnkey nature and lower staffing requirements, express exterior washes became a ubiquitous business model for the industry, with many operators and private equity firms investing in opening larger chains of locations. As of 2024, the United States is estimated to have approximately 60,000 car washes constituting a $14 billion industry. It has experienced steady growth, with an average annual expansion of 5% in recent years; some market analysts project the industry to double in size by 2030, a projection partly attributed to the growth of subscription-based services. Additionally, the market share of professional car washes has grown significantly, from 50% in 1996 to an estimated 79% in 2021, which suggests a decline in the number of individuals washing their cars themselves. The average revenue per car wash location is reported to be around $1.5 million.
Some municipalities in the United States have enacted saturation bans due to the number of new car wash locations being constructed in clusters.
Categories
The following are forms of car washing.
Hand car wash facilities, where employees wash the vehicle.
Self-service facilities, generally coin-operated, where the customer manually washes the car with a water-dispersing wand and low-pressure brushes, including pressurized "jet washing".
In-bay automatics involve the customer parking and an automatic wash machine rolling back and forth over the stationary vehicle. They are housed at filling stations and stand-alone wash sites.
Conveyor or tunnel washes involve the car moving on a conveyor belt through a series of fixed cleaning mechanisms while the customer waits outside. Friction systems (brushes or curtains) or frictionless systems (high-pressure nozzles, i.e. a touchless wash) are used.
Mobile car washes often also serve as mobile detailing systems, carry plastic water tanks, and use pressure washers. Systems are often mounted on trailers, trucks, or vans. Generally, operators also have a generator to run a shop vac., buffers, and other tools.
Car wash lift, where cars are placed on a lift platform that can be used to wash under the car.
Touch-free (or touchless) car washing technology is a modern car wash system that reduces water consumption, chemical solutions, and time. Washing machinery uses high-pressure jets, with sensors that measure the length and width of the vehicle.
Use of chemicals
In modern car wash facilities, whether tunnel, in-bay automatic, or self-serve, detergents and other cleaning solutions are designed to loosen and eliminate dirt and grime. This is in contrast to earlier times, when hydrofluoric acid, a hazardous chemical, was commonly used as a cleaning agent in the industry by some operators. There has been a move in the industry to shift to safer cleaning solutions. Most car wash facilities are legally required to treat and/or reuse their water and may be required to maintain wastewater discharge permits. This is in contrast to unregulated facilities or even driveway washing (at one's home), where wastewater can end up in the storm drain and, eventually, in streams, rivers, and lakes.
A chemical car wash, or waterless car wash, uses chemicals to wash and polish car surfaces. This method is claimed to be eco-friendly, but is recommended only for cars with light dirt accumulation to avoid paint damage.
Mechanized car washes, especially those with brushes, may risk damaging the exterior finish. Paint finishes and car washing processes have improved. More facilities utilize "brushless" (cloth) and "touch-free" (high-pressure water) equipment, as well as modern "foam" washing wheels made of closed-cell foam.
Self-serve car wash
A self-serve car wash is a simple, automated type of car wash that typically uses a coin-operated or token-operated self-service system. Newer self-service car washes offer the ability to pay with credit cards or loyalty cards. The vehicle is parked inside a large, sometimes covered, bay equipped with a trigger gun and wand (a high-pressure sprayer) and a foam brush for scrubbing. When a customer inserts coins or tokens into the coin box, they can choose options such as soap, tire cleaner, wax, or clear water rinse, all dispensed from the sprayer, or scrub the vehicle with the foam brush. The number of coins or tokens inserted determines how long customers can operate the equipment; in most instances, a minimum number of coins is necessary to start the equipment. These facilities are often equipped with separate vacuum stations that allow customers to clean the upholstery and rugs inside their cars. Some self-service car washes offer hand-held dryers.
Automatic car wash
Conveyor-driven/tunnel car wash
The first conveyor-driven automatic car wash appeared in Hollywood, California, in 1940. Conveyor-driven automatic car washes consist of tunnel-like buildings into which customers (or attendants) drive.
Before entering the automated section of the wash tunnel, attendants may prewash customers' cars.
The car wash typically starts cleaning with chemicals called presoaks applied through special arches. CTAs, or "chemical tire applicators", apply specialized formulations, which remove brake dust and build-up from the surface of the wheels and tires.
A high-pressure arch may direct water at the vehicle's surface at the end of a car wash's presoak.
Mitters are ribbon-like components that suspend cloth strips or sheets over the tunnel.
The car is rinsed with fresh water immediately, followed by extra services if required. In many car washes, the first of these services is a polish wax. The wax is typically applied by a retractable mitter or top brush and, in some cases, side brushes or wrap-around brushes. Next is a protectant, which creates a thin protective film over a vehicle's surface. Protectants generally repel water, which assists in drying the car and aids the driver's ability to see through the windshield during rain. A low-end wax or clear coat protectant follows the primary protectant. A drying agent is typically applied at the end of the tunnel to remove water from the vehicle's surface before forced air drying. After the drying agent, there may be a "spot-free" rinse of soft water that has been filtered of the salts usually present and sent through semi-permeable membranes to produce highly purified water that will not leave spots.
Dryers may be present in various forms, such as stationary gantries with a contouring roof jet or small circular assemblies with nozzles of different shapes and sizes mounted on arches. Mitters, side brushes, top brushes, and/or wraps outfitted with chamois- or microfiber-based material may follow the dryers.
At "full-service" car washes, the car's exterior is washed mechanically, by hand, or using a combination of both, with attendants available to dry the vehicle manually and clean the interior. Many full-service car washes also provide "detailing" services, which may include polishing and waxing the car's exterior by hand or machine, shampooing, and steaming interiors as well as other services to provide thorough cleaning and protection to the car.
Touchless wash
Like soft-touch car washes, touchless car washes are automated, with the vehicle passing through a tunnel where it is cleaned. However, touchless car washes do not use the foam or cloth applicators that soft-touch washes use; instead, they rely on high-pressure washers to clean and rinse the vehicle. Sensors utilized by these washes allow for a more precise clean that follows the vehicle's exact shape. To compensate for not physically contacting the vehicle, touchless washes use higher pressures and more caustic detergents than ordinary car washes. Because the vehicle is not physically touched during a touchless wash, the vehicle is at a lower risk of being damaged. However, touchless washes have a harder time cleaning off tougher materials or reaching difficult-to-reach locations on vehicles, and their usage of stronger chemicals can potentially damage a vehicle's paint finish.
Environmental factors
The primary environmental considerations for car washing are:
Use of water and energy resources;
Contamination of surface waters;
Contamination of soil and groundwater.
The use of water supplies and energy is self-evident since car washes are users of such resources. The professional car wash industry has made strides in reducing its environmental footprint, a trend that will continue accelerating due to regulation and consumer demand. Many car washes use water reclamation systems to significantly reduce water usage and a variety of energy usage reduction technologies. These systems may be mandatory where water restrictions are in place.
Contamination of surface waters may arise from the rinse discharging to storm drains, which eventually drain to rivers and lakes. Chief pollutants in such wash-water include phosphates; oil and grease; and lead. This is almost exclusively an issue for home/driveway washing and parking lot-style charity washes. Professional carwashing is a point source of discharge that can capture these contaminants, generally in interceptor drains, so the contaminants can be removed before the water enters sanitary systems. (Water and contaminants that enter stormwater drains are not treated and released directly into rivers, lakes, and streams.)
Soil contamination is sometimes related to such surface runoff, but is more commonly associated with underground fuel tanks or auto servicing operations, which are frequently ancillary uses of car wash sites and not an issue for car washing itself.
For these reasons, countries like Switzerland and Germany have banned citizens from washing their cars at home. In the US, some state and local environmental groups (the most notable being the New Jersey Department of Environmental Protection) have begun campaigns to encourage consumers to use professional car washes as opposed to driveway washing, including moving charity car wash fundraisers from parking lots to professional car washes. Poland, Portugal, Italy, and many other countries have no regulations regarding wastewater from car washing.
| Technology | Concepts of ground transport | null |
1627000 | https://en.wikipedia.org/wiki/Australopithecus%20africanus | Australopithecus africanus | Australopithecus africanus is an extinct species of australopithecine which lived between about 3.3 and 2.1 million years ago in the Late Pliocene to Early Pleistocene of South Africa. The species has been recovered from Taung, Sterkfontein, Makapansgat, and Gladysvale. The first specimen, the Taung child, was described by anatomist Raymond Dart in 1924, and was the first early hominin found. However, its closer relations to humans than to other apes would not become widely accepted until the middle of the century because most had believed humans evolved outside of Africa. It is unclear how A. africanus relates to other hominins, being variously placed as ancestral to Homo and Paranthropus, to just Paranthropus, or to just P. robustus. The specimen "Little Foot" is the most completely preserved early hominin, with 90% of the skeleton intact, and the oldest South African australopith. However, it is controversially suggested that it and similar specimens be split off into "A. prometheus".
A. africanus brain volume was about . Like other early hominins, the cheek teeth were enlarged and had thick enamel. Male skulls may have been more robust than female skulls. Males may have been on average in height and in weight, and females and . A. africanus was a competent biped, albeit less efficient at walking than humans. A. africanus also had several upper body traits in common with arboreal non-human apes. This is variously interpreted as either evidence of a partially or fully arboreal lifestyle, or as a non-functional vestige from a more apelike ancestor. The upper body of A. africanus is more apelike than that of the East African A. afarensis.
A. africanus, unlike most other primates, seems to have exploited C4 foods such as grasses, seeds, rhizomes, underground storage organs, or potentially creatures higher up on the food chain. Nonetheless, the species had a highly variable diet, making it a generalist. It may have eaten lower quality, harder foods, such as nuts, in leaner times. To survive, children may have needed nursing during such periods until reaching perhaps 4 to 5 years of age. The species appears to have been patrifocal, with females more likely to leave the group than males. A. africanus lived in a gallery forest surrounded by more open grasslands or bushlands. South African australopithecine remains probably accumulated in caves due to predation by large carnivores (namely big cats), and the Taung child appears to have been killed by a bird of prey. A. africanus probably went extinct due to major climatic variability and volatility and possibly competition with Homo and P. robustus.
Research history
Discovery
In 1924, Australian anatomist Professor Raymond Dart, who had been working in South Africa since 1923, was informed by one of his students, Josephine Salmons, that monkey fossils (of Papio izodi) had been discovered by shotfirer M.G. de Bruyn in a limestone quarry in Taung, South Africa, operated by the Northern Lime Company. Knowing that Scottish geologist Professor Robert Burns Young was at the time carrying out excavations in the area in search of archaic human remains like Homo rhodesiensis from Kabwe, Zambia (at the time Broken Hill, Northern Rhodesia), discovered in 1921, he asked his colleague to send him some primate remains from the quarry.
On 24 November 1924, Dart received two boxes with fossils collected by De Bruyn. In them, he noticed a natural brain endocast and the face of a juvenile skull, now known to be 2.8 million years old, the Taung child, which he immediately recognised as a transitional fossil between apes and humans. Most notably, it had a small brain size yet was, as shown by the position of the foramen magnum, bipedal. After hastily freeing the fossil from its matrix, Dart named the specimen as a new genus and species in January 1925: Australopithecus africanus.
Classification
At the time of discovery, great apes were classified into the family Pongidae encompassing all non-human fossil apes, and Hominidae encompassing humans and ancestors. Dart felt the Taung child fit into neither, and erected the family "Homo-simiadæ" ("man-apes"). This family name was soon abandoned, and Dart proposed "Australopithecidae" in 1929. In 1933, South African palaeoanthropologist Robert Broom suggested moving A. africanus into Hominidae, which at the time contained only humans and their ancestors.
A. africanus was the first evidence that humans evolved in Africa, as Charles Darwin had postulated in his 1871 The Descent of Man. However, Dart's claim of the Taung child as the transitional stage between apes and humans was at odds with the then-popular model of human evolution, which held that large brain size and humanlike characteristics had developed rather early on, and that large brain size evolved before bipedalism. As a result, A. africanus was generally cast aside as a member of the gorilla or chimpanzee lineages, most notably by Sir Arthur Keith.
This view was perpetuated by Charles Dawson's 1912 hoax Piltdown Man hailing from Britain. Further, the discovery of the humanlike Peking Man (Homo erectus pekinensis) in China also seemed to place the origins of humankind outside of Africa. Humanlike characteristics of the Taung child were attributed to the specimen's juvenile status, meaning they would disappear with maturity. Nonetheless, Dart and Broom continued to argue that Australopithecus was far removed from chimpanzees, pointing to several physical and claimed behavioural similarities with humans. To this extent, Dart made note of the amalgamations of large mammal bone fragments in australopithecine-bearing caves which are now attributed to hyena activity. However, Dart proposed that the bones were instead evidence of what he named the "osteodontokeratic culture", produced by australopithecine hunters who manufactured weapons using the long bones, teeth, and horns of large hoofed prey.
Broom was one of the few scientists defending the close human affinities of Australopithecus africanus. In 1936, he was informed by two of Dart's students, Trevor R. Jones and G. Schepers, that human-like remains had been discovered in the Sterkfontein Cave quarries. On 9 August 1936, he asked G.W. Barlow to provide him with any finds. On 17 August 1936 he received an adult skull including a natural endocast, specimen Sts 60. However, Broom classified it as a new species, "A. transvaalensis", and in 1938 moved it into a new genus as "Plesianthropus transvaalensis". He also discovered the robust australopithecine Paranthropus robustus, showing evidence of a wide diversity of Early Pleistocene "man-apes". Before World War II, several more sites bore A. africanus fossils. A detailed monograph by Broom and palaeoanthropologist Gerrit Willem Hendrik Schepers in 1946 regarding these australopithecines from South Africa, as well as several papers by British palaeoanthropologist Sir Wilfrid Le Gros Clark, had turned around scientific opinion, garnering wide support for A. africanus classification as a human ancestor. In 1947, the most complete skull was discovered, STS 5 ("Mrs. Ples"). Wider acceptance of A. africanus prompted re-evaluation of Piltdown Man in 1953, revealing its falsehood.
In 1949, Dart recommended splitting a presumed-female facial fragment from Makapansgat, South Africa, (MLD 2) into a new species as "A. prometheus". In 1954, he referred another presumed-female specimen from Makapansgat (a jawbone fragment). However, in 1953, South African palaeontologist John Talbot Robinson argued that splitting species and genera on such fine hairs was unjustified, and that australopithecine remains from East Africa recovered over the previous couple of decades were indistinguishable from "Plesianthropus"/A. africanus. Based on this, in 1955, Dart agreed to synonymise "A. prometheus" with A. africanus, reasoning that the two were already quite similar, and that if speciation did not occur across a continent, it was quite unlikely to have occurred over a couple of tens of kilometres. The East African remains would be split off into A. afarensis in 1978. In 2008, palaeoanthropologist Ronald J. Clarke recommended reviving "A. prometheus" to house the StW 573 nearly-complete skeleton ("Little Foot"), StS 71 cranium, StW 505 cranium, StW 183 maxilla, StW 498 maxilla and jawbone, StW 384 jawbone, StS 1 palate, and MLD 2. In 2018, palaeoanthropologists Lee Rogers Berger and John D. Hawks considered "A. prometheus" a nomen nudum ("naked name"), arguing that it had not been properly described with diagnostic characteristics separating it from A. africanus. At the time, these remains were dated to 3.3 million years ago in the Late Pliocene. In 2019, Clarke and South African palaeoanthropologist Kathleen Kuman redated StW 573 to 3.67 million years ago, making it the oldest Australopithecus specimen from South Africa. They considered its antiquity further evidence of species distinction, drawing parallels with A. anamensis and A. afarensis from Middle Pliocene East Africa. Little Foot is the most complete early hominin skeleton ever recovered, with about 90% preserved.
In addition to Taung, Sterkfontein, and Makapansgat, A. africanus was in 1992 discovered in Gladysvale Cave. The latter three are in the Cradle of Humankind. Many hominin specimens traditionally assigned to A. africanus have been recovered from Sterkfontein Member 4 (including Mrs. Ples and 2 partial skeletons), previously dated to 2.8 to 2.15 million years ago. But in 2022 a team including Clarke and Kuman used cosmogenic nuclide techniques to date Member 4 at 3.4 million years, which, they argue, discredits the assumption that A. africanus descended from A. afarensis. However, given the wide range of variation exhibited by these specimens, it is debated if all these elements can be confidently assigned to only A. africanus.
At present, the classification of australopithecines is in disarray. Australopithecus is considered a grade taxon, whose members are united by their similar physiology rather than close relations with each other over other hominin genera. It is unclear how A. africanus relates to other hominins. The discovery of Early Pleistocene Homo in Africa during the latter half of the 20th century placed humanity's origins on the continent and A. africanus as ancestral to Homo. The discovery of A. afarensis in 1978, at the time the oldest known hominin, prompted a hypothesis that A. africanus was ancestral to P. robustus, and A. afarensis was the last common ancestor between Homo and A. africanus/P. robustus. It is also suggested that A. africanus is closely related to P. robustus but not to the other Paranthropus species in East Africa, or that A. africanus is ancestral to all Paranthropus. A. africanus has also been postulated to have been ancestral to A. sediba which also inhabited the Cradle of Humankind, perhaps contemporaneously. A. sediba is also postulated to have been ancestral to Homo, which if correct would indeed put A. africanus in an ancestral position to Homo.
Anatomy
Skull
Based on 4 specimens, the A. africanus brain volume averaged about . Based on this, neonatal brain size was estimated to have been using trends seen in adult and neonate brain size in modern primates. If correct, this would indicate that A. africanus was born with about 38% of its total brain size, which is more similar to non-human great apes at 40% than humans at 30%. The inner ear has wide semicircular canals like non-human apes, as well as loose turns at the terminal end of the cochlea like humans. Such a mix may reflect habitual locomotion both in the trees and walking while upright because inner ear anatomy affects the vestibular system (sense of balance).
A. africanus had a prognathic jaw (it jutted out), a somewhat dished face (the cheeks were inflated, causing the nose to be at the bottom of a dip), and a defined brow ridge. The temporal lines running across either side of the braincase are raised as small crests. The canines are reduced in size compared to non-human apes, though still notably bigger than those of modern humans. Like other early hominins, the cheek teeth are large and feature thick enamel. In the upper jaw the third molar is the largest molar, and in the lower jaw it is the second molar. A. africanus had a fast, apelike dental development rate. According to Clarke, the older "A. prometheus" is distinguished by larger and more bulbous cheek teeth, larger incisors and canines, more projecting cheeks, more widely spaced eye sockets, and a sagittal crest. A. africanus has a wide range of variation for skull features, which is typically attributed to moderate to high levels of sexual dimorphism in that males were more robust than females.
Build
In 1992, American anthropologist Henry McHenry estimated an average weight (when assuming humanlike or apelike body proportions, respectively) of for males based on five partial leg specimens, and for females based on seven specimens. In 2015, American anthropologist William L. Jungers and colleagues similarly reported an average weight (without attempting to distinguish males from females) of with a range of for weight based on 19 specimens. Based on seven specimens, McHenry estimated that males, on average, grew to tall and females . In 2017, based on 24 specimens, anthropologist Manuel Will and colleagues estimated a height of with a range of . The elderly, probably female StW 573 was estimated to have stood about .
Based on the A. afarensis skeleton DIK-1-1, australopiths are thought to have had a humanlike spine, with 7 neck vertebrae, 12 thoracic vertebrae, and (based on other early australopith skeletons) 5 flexible lumbar vertebrae. In StW 573, the atlas bone in the neck, important for swiveling and stabilising the head, is more similar to non-human apes and indicates greater mobility to swivel up and down than in humans. Such motion is important for arboreal species to locate and focus on climbable surfaces. The StW 573 atlas shows similar mechanical advantages for the muscles which move the shoulder girdle as chimps and gorillas, which may indicate less lordosis (normal curvature of the spine) in A. africanus neck vertebrae. However, the later StW 679 has some similarities to human atlases, which could potentially indicate gradual evolution away from the ape condition. StW 573 has a narrow thoracic inlet unlike A. afarensis and humans. The clavicle is proportionally quite long, with a similar absolute length to that of modern humans.
Like in modern women, L3–L5 curve outwards in specimen StS 14, whereas these are straighter in StW 431 as in modern men. This probably reflects reinforcement of the female spine to aid in walking upright while pregnant. The StS 14 partial skeleton preserves a rather complete pelvis. Like in the restored pelvis of the Lucy specimen (A. afarensis), the sacrum was relatively flat and orientated more towards the back than in humans, and the pelvic cavity had an overall platypelloid shape. This could indicate a broad birth canal compared to neonate head size, and thus a non-rotational birth (unlike humans), though this is debated. When standing, the angle between the sacrum and the lumbar vertebrae was reconstructed to have been about 148.7°, which is much more similar to that of chimps (154.6°) than humans (118.3°). This would indicate A. africanus standing posture was not as erect as in humans.
Limbs
The A. africanus hand and arm exhibit a mosaic anatomy, with some aspects more similar to humans and others to non-human apes. It is unclear if this means australopiths were still arboreal to a degree, or if these traits were simply inherited from the human–chimpanzee last common ancestor. Nonetheless, A. africanus exhibits a more ape-like upper limb anatomy than A. afarensis, and is typically interpreted as having been, to some extent, arboreal. Like in arboreal primates, the fingers are curved, the arms relatively long and the shoulders are in a shrugging position. The A. africanus shoulder is most like that of orangutans, and well suited for maintaining stability and bearing weight while raised and placed overhead. However, the right clavicle of StW 573 has a distinctly S-shaped (sigmoid) curve like humans, which indicates a humanlike moment arm for stabilising the shoulder girdle against the humerus. The A. africanus arm bones are consistent with powerful muscles useful in climbing. Nonetheless, the brachial index (the forearm to humerus ratio) is 82.8–86.2 (midway between chimpanzees and humans), which indicates a reduction in forearm length from the more ancient hominin Ardipithecus ramidus. The thumb and wrist indicate humanlike functionality with a precision grip and forceful opposition between the thumb and fingers. The adoption of such a grip is typically interpreted as an adaptation for tool making at the expense of efficient climbing and arboreal habitation.
The leg bones clearly show that A. africanus habitually engaged in bipedal locomotion, though some aspects of the tibiae are apelike, which could indicate that the leg musculature had not been fully reorganised into the human condition. If correct, its functional implications are unclear. The trabecular bone at the hip joint is distinctly humanlike, which would be inconsistent with the great degrees of hip loading required in prolonged arboreal activity. The tibia met the foot at a similar angle as it does in humans, which is necessary for habitual bipedalism. Consequently, the ankle was not as adept for climbing activities as it is in non-human apes. However, the modern Congo Twa hunter–gatherers can achieve a chimp-like angle with the ankle while climbing trees due to the longer fibres in the gastrocnemius (calf) muscle instead of specific skeletal adaptations. Some aspects of the ankle bone were apelike which may have affected walking efficiency. The foot elements of A. africanus are largely known from remains from Sterkfontein Member 4. The foot is humanlike with a stiff midfoot and lack of a midtarsal break (which allows non-human apes to lift the heel independently from the rest of the foot). Though A. africanus had an adducted big toe (it was not dextrous) like humans, A. africanus likely did not push off with the big toe, using the side of the foot instead. StW 573 is the oldest hominin specimen with an adducted big toe. The specimen StW 355 is the most curved proximal foot phalanx bone of any known hominin, more similar to that of orangutans and siamangs.
The arms of StW 573 were about , and her legs . This means the arm was 86.9% the length of the leg. She is the first and only early hominin specimen to definitively show that the arms were almost as long as the legs. Nonetheless, these proportions are more similar to humans than non-human apes, with humans at 64.5–78%, chimpanzees about 100%, gorillas 100–125%, and orangutans 135–150.9%.
Palaeobiology
Diet
In 1954, Robinson proposed that A. africanus was a generalist omnivore whereas P. robustus was a specialised herbivore; and in 1981, American palaeoanthropologist Frederick E. Grine suggested that P. robustus specialised on hard foods such as nuts whereas A. africanus on softer foods such as fruits and leaves. Based on carbon isotope analyses, A. africanus had a highly variable diet which included a notable amount of C4 savanna plants such as grasses, seeds, rhizomes, underground storage organs, or perhaps grass-eating invertebrates (such as locusts or termites), grazing mammals, or insectivores or carnivores. Most primates do not eat C4 plants. A. africanus facial anatomy seems to suggest adaptations for producing high stress on the premolars, useful for eating small, hard objects such as seeds and nuts that need to be cracked open by the teeth, or for processing a large quantity of food at one time. However, like for P. robustus, microwear analysis on the cheek teeth indicate small, hard foods were infrequently eaten, probably as fall back foods during leaner times. Still, A. africanus, like chimps, may have required hammerstones to crack open nuts (such as marula nuts), though A. africanus is not associated with any tools.
A. africanus conspicuously lacks evidence of dental cavities, whereas P. robustus seems to have had a modern humanlike cavity rate; this could possibly indicate that A. africanus either did not often consume high-sugar cavity-causing foods—such as fruit, honey, and some nuts and seeds—or frequently consumed gritty foods which decrease cavity incidence rate. However, the 2nd right permanent incisor (STW 270) and right canine (STW 213) from the same individual show lesions consistent with acid erosion, which indicates this individual was regularly biting into acidic foods such as citrus. Tubers could have caused the same damage if some chewing was done by the front teeth.
Barium continually deposits onto A. africanus teeth until about 6–9 months of development, and then decreases until about 12 months. Because the barium was most likely sourced from breast milk, this probably reflects the weaning age. This is comparable to the human weaning age. Following this initial period, barium deposits stall and then restart cyclically every year for several years. In the first molar specimen StS 28 (from Sterkfontein), this occurred every 6–9 months, and in the lower canine specimen StS 51 every 4–6 months, and this carried on until 4–5 years of development. Lithium and strontium also deposit cyclically. Cyclical barium, lithium, and strontium bands occur in modern primates—for example, wild orangutans up to 9 years of age—which is caused by seasonal famine when a child has to rely on nursing to sustain themselves and less desirable fallback foods. However, it is unclear if this can be extended to A. africanus.
Society
The group dynamics of australopithecines is difficult to predict with any degree of accuracy. A 2011 strontium isotope study of A. africanus teeth from the dolomite Sterkfontein Valley found that, assuming that especially small teeth represented female specimens and especially large teeth males, females were more likely to leave their place of birth (patrilocal). This is similar to the dispersal patterns of modern-day hominins which have a multi-male kinship-based society, as opposed to the harem society of gorillas and other primates. However, the small canines of males compared to those of females would seem to suggest a much lower degree of male–male aggression than non-human hominins. Males did not seem to have ventured very far from the valley, which could either indicate small home ranges, or that they preferred dolomitic landscapes due to perhaps cave abundance or factors related to vegetation growth.
Pathology
In a sample of ten A. africanus specimens, seven exhibited mild to moderate alveolar bone loss resulting from periodontal disease (the wearing away of the bone which supports the teeth due to gum disease). The juvenile specimen STS 24a was diagnosed with an extreme case of periodontal disease on the right side of the mouth, which caused pathological bone growth around the affected site, and movement of the first two right molars during cyclical periods of bacterial infection and resultant inflammation. Similarly, the individual appears to have preferred to chew using the left side of the jaw. The periodontal disease would have severely hindered chewing, particularly in the last year of life, and the individual potentially may have relied on group members to survive for as long as it did.
In 1992, anthropologists Geoffrey Raymond Fisk and Gabriele Macho interpreted the left ankle bone Stw 363 as bearing evidence of a healed calcaneal fracture on the heel bone (which was not preserved), which they believed resulted from a fall from a tree. If correct, then the individual was able to survive for a long time despite losing a great deal of function in the left leg. However, they also noted that similar damage could potentially have also been inflicted by calcite deposition and crystallisation during the fossilisation process. Calcaneal fractures have been recorded in humans, and are present quite often in arboreal primates.
Palaeoecology
South African australopithecines appear to have lived in an area with a wide range of habitats. At Sterkfontein, fossil wood belonging to the liana Dichapetalum cf. mombuttense was recovered. The only living member of this tree genus in South Africa is Dichapetalum cymosum, which grows in dense, humid gallery forests. In modern day, D. mombuttense only grows in the Congolian rainforests, so its presence could potentially mean the area was an extension of this rainforest. The wildlife assemblages indicate a mix of habitats such as bush savanna, open woodland, or grassland. The shrub Anastrabe integerrima was also found, which today only grows on the wetter South African coastline. This could indicate the Cradle of Humankind received more rainfall in the Plio-Pleistocene. In total, the Cradle of Humankind may have featured gallery forests surrounded by grasslands. Taung also appears to have featured a wet, closed environment.
Australopithecines and early Homo likely preferred cooler conditions than later Homo, as there are no australopithecine sites that were below in elevation at the time of deposition. This would mean that, like chimps, they often inhabited areas with an average diurnal temperature of , dropping to at night.
In 1983, studying P. robustus remains, South African palaeontologist Charles Kimberlin Brain hypothesised that australopithecine bones accumulated in caves due to large carnivore activity, dragging in carcasses. He was unsure if these predators actively sought them out and brought them back to the cave den to eat, or inhabited deeper recesses of caves and ambushed them when they entered. Baboons in this region in the modern day often shelter in sinkholes, especially on cold winter nights, though Brain proposed that australopithecines seasonally migrated out of the Highveld and into the warmer Bushveld, only taking up cave shelters in spring and autumn. The A. africanus fossils from Sterkfontein Member 4 were likely accumulated by big cats, though hunting hyenas and jackals may have also played a role. Scratches, gouges, and puncture marks on the Taung child similar to those inflicted by modern crowned eagles indicate this individual was killed by a bird of prey.
Around 2.07 million years ago, just before the arrival of P. robustus and H. erectus, A. africanus became extinct in the Cradle of Humankind. It is possible that South Africa was a refuge for Australopithecus until the beginning of major climatic variability and volatility, and, perhaps, competition with Homo and Paranthropus.
| Biology and health sciences | Australopithecines | Biology |
1627129 | https://en.wikipedia.org/wiki/Synchronous%20circuit | Synchronous circuit | In digital electronics, a synchronous circuit is a digital circuit in which the changes in the state of memory elements are synchronized by a clock signal. In a sequential digital logic circuit, data is stored in memory devices called flip-flops or latches. The output of a flip-flop is constant until a pulse is applied to its "clock" input, upon which the input of the flip-flop is latched into its output. In a synchronous logic circuit, an electronic oscillator called the clock generates a string (sequence) of pulses, the "clock signal". This clock signal is applied to every storage element, so in an ideal synchronous circuit, every change in the logical levels of its storage components is simultaneous. Ideally, the input to each storage element has reached its final value before the next clock pulse occurs, so the behaviour of the whole circuit can be predicted exactly. Practically, some delay is required for each logical operation, resulting in a maximum speed at which each synchronous system can run.
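As an illustration of this clocking idea, the following Python sketch (an assumption-level model, not taken from any particular device) mimics an ideal synchronous circuit: combinational logic computes the next state continuously, but the stored state only changes when a clock pulse arrives.

```python
# Minimal sketch of an ideal synchronous circuit: a 2-bit counter.
# Combinational logic computes the next state at all times, but the
# stored state only changes when a clock pulse arrives.

def next_state(state: int) -> int:
    """Combinational logic: increment modulo 4."""
    return (state + 1) % 4

class ClockedRegister:
    """Holds the circuit state; updates only when clock() is called."""
    def __init__(self, initial: int = 0):
        self.q = initial              # flip-flop outputs (the stored state)

    def clock(self, d: int) -> None:
        self.q = d                    # the input is latched into the output

reg = ClockedRegister()
for tick in range(6):                 # six clock pulses
    d = next_state(reg.q)             # logic has settled before the edge
    reg.clock(d)                      # all state changes happen here, together
    print(f"after pulse {tick + 1}: state = {reg.q}")
```

Because every state change happens inside the single clock() call, the behaviour after each pulse is fully predictable, which is the property described above.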
To make these circuits work correctly, a great deal of care is needed in the design of the clock distribution networks. Static timing analysis is often used to determine the maximum safe operating speed.
Nearly all digital circuits, and in particular nearly all CPUs, are fully synchronous circuits with a global clock.
Exceptions are often compared to fully synchronous circuits. Exceptions include self-synchronous circuits, globally asynchronous locally synchronous circuits, and fully asynchronous circuits.
| Technology | Digital logic | null |
1627162 | https://en.wikipedia.org/wiki/Asynchronous%20circuit | Asynchronous circuit | An asynchronous circuit (clockless or self-timed circuit) is a sequential digital logic circuit that does not use a global clock circuit or signal generator to synchronize its components. Instead, the components are driven by a handshaking circuit which indicates the completion of a set of instructions. Handshaking works by simple data transfer protocols. Many synchronous circuits were developed in the early 1950s as part of bigger asynchronous systems (e.g. ORDVAC). Asynchronous circuits and the theory surrounding them are part of several steps in integrated circuit design, a field of digital electronics engineering.
Asynchronous circuits are contrasted with synchronous circuits, in which changes to the signal values in the circuit are triggered by repetitive pulses called a clock signal. Most digital devices today use synchronous circuits. However, asynchronous circuits have the potential to be much faster, and can offer lower power consumption, lower electromagnetic interference, and better modularity in large systems. Asynchronous circuits are an active area of research in digital logic design.
It was not until the 1990s that the viability of asynchronous circuits was demonstrated by real-life commercial products.
Overview
All digital logic circuits can be divided into combinational logic, in which the output signals depend only on the current input signals, and sequential logic, in which the output depends both on current input and on past inputs. In other words, sequential logic is combinational logic with memory. Virtually all practical digital devices require sequential logic. Sequential logic can be divided into two types, synchronous logic and asynchronous logic.
Synchronous circuits
In synchronous logic circuits, an electronic oscillator generates a repetitive series of equally spaced pulses called the clock signal. The clock signal is supplied to all the components of the IC. Flip-flops only flip when triggered by the edge of the clock pulse, so changes to the logic signals throughout the circuit begin at the same time and at regular intervals. The output of all memory elements in a circuit is called the state of the circuit. The state of a synchronous circuit changes only on the clock pulse. The changes in signal require a certain amount of time to propagate through the combinational logic gates of the circuit. This time is called a propagation delay.
Timing of modern synchronous ICs takes significant engineering effort and sophisticated design automation tools. Designers have to ensure that clock arrival is not faulty. With the ever-growing size and complexity of ICs (e.g. ASICs), this is a challenging task. In huge circuits, signals sent over the clock distribution network often end up at different times at different parts. This problem is widely known as "clock skew".
The maximum possible clock rate is capped by the logic path with the longest propagation delay, called the critical path. Because of that, the paths that may operate quickly are idle most of the time. A widely distributed clock network dissipates a lot of power and must run whether the circuit is receiving inputs or not. Because of this level of complexity, testing and debugging take over half of the development time for synchronous circuits.
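As a rough illustration of the critical-path limit (all delay values below are invented for the example, not taken from any real design), the minimum clock period of a synchronous design can be estimated from its slowest register-to-register path:

```python
# Illustrative only: delay values are invented.  The clock period must cover
# the slowest register-to-register path (the critical path), so faster paths
# sit idle for part of every cycle.

path_delays_ns = {
    "alu_add": 3.2,
    "alu_shift": 1.1,
    "branch_compare": 2.0,
    "address_decode": 0.7,
}

critical_path = max(path_delays_ns, key=path_delays_ns.get)
t_min_ns = path_delays_ns[critical_path]      # minimum usable clock period
f_max_mhz = 1_000 / t_min_ns                  # period in ns -> frequency in MHz

print(f"critical path: {critical_path} ({t_min_ns} ns)")
print(f"maximum clock rate: {f_max_mhz:.0f} MHz")
for name, delay in path_delays_ns.items():
    idle_pct = 100 * (1 - delay / t_min_ns)
    print(f"{name}: idle for {idle_pct:.0f}% of each cycle")
```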
Asynchronous circuits
Asynchronous circuits do not need a global clock, and the state of the circuit changes as soon as the inputs change. Local functional blocks may still be employed, but the clock skew problem between them can be tolerated.
Since asynchronous circuits do not have to wait for a clock pulse to begin processing inputs, they can operate faster. Their speed is theoretically limited only by the propagation delays of the logic gates and other elements.
However, asynchronous circuits are more difficult to design and subject to problems not found in synchronous circuits. This is because the resulting state of an asynchronous circuit can be sensitive to the relative arrival times of inputs at gates. If transitions on two inputs arrive at almost the same time, the circuit can go into the wrong state depending on slight differences in the propagation delays of the gates.
This is called a race condition. In synchronous circuits this problem is less severe because race conditions can only occur due to inputs from outside the synchronous system, called asynchronous inputs.
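The following toy Python model (a simplification; real hazards involve continuous signal propagation rather than discrete events) illustrates how such a race can flip the resulting state: an asynchronous latch samples its data input the instant its enable input falls, so the outcome depends on which of two nearly simultaneous transitions arrives first.

```python
# Toy race-condition model: an asynchronous latch samples `data` the instant
# `enable` goes low.  If the data transition and the enable transition arrive
# at almost the same time, the latched state depends on tiny delay differences.

def run(events):
    """events: (time_ns, signal, value) tuples, applied in time order."""
    data, enable, latched = 0, 1, 0
    for _, signal, value in sorted(events):
        if signal == "data":
            data = value
        elif signal == "enable":
            if enable == 1 and value == 0:    # falling edge: sample data now
                latched = data
            enable = value
    return latched

base = [(0.0, "enable", 1), (5.0, "enable", 0)]
print(run(base + [(4.9, "data", 1)]))   # data wins the race   -> latches 1
print(run(base + [(5.1, "data", 1)]))   # enable wins the race -> latches 0
```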
Although some fully asynchronous digital systems have been built (see below), today asynchronous circuits are typically used in a few critical parts of otherwise synchronous systems where speed is at a premium, such as signal processing circuits.
Theoretical foundation
The original theory of asynchronous circuits was created by David E. Muller in the mid-1950s. This theory was presented later in the well-known book "Switching Theory" by Raymond Miller.
The term "asynchronous logic" is used to describe a variety of design styles, which use different assumptions about circuit properties. These vary from the bundled delay model – which uses "conventional" data processing elements with completion indicated by a locally generated delay model – to delay-insensitive design – where arbitrary delays through circuit elements can be accommodated. The latter style tends to yield circuits which are larger than bundled data implementations, but which are insensitive to layout and parametric variations and are thus "correct by design".
Asynchronous logic
Asynchronous logic is the logic required for the design of asynchronous digital systems. These function without a clock signal and so individual logic elements cannot be relied upon to have a discrete true/false state at any given time. Boolean (two valued) logic is inadequate for this and so extensions are required.
Since 1984, Vadim O. Vasyukevich developed an approach based upon new logical operations which he called venjunction (with asynchronous operator "x∠y" standing for "switching x on the background y" or "if x when y then") and sequention (with priority signs "xi≻xj" and "xi≺xj"). This takes into account not only the current value of an element, but also its history.
Karl M. Fant developed a different theoretical treatment of asynchronous logic in his work Logically determined design in 2005 which used four-valued logic with null and intermediate being the additional values. This architecture is important because it is quasi-delay-insensitive. Scott C. Smith and Jia Di developed an ultra-low-power variation of Fant's Null Convention Logic that incorporates multi-threshold CMOS. This variation is termed Multi-threshold Null Convention Logic (MTNCL), or alternatively Sleep Convention Logic (SCL).
Petri nets
Petri nets are an attractive and powerful model for reasoning about asynchronous circuits (see Subsequent models of concurrency). A particularly useful type of interpreted Petri nets, called Signal Transition Graphs (STGs), was proposed independently in 1985 by Leonid Rosenblum and Alex Yakovlev and Tam-Anh Chu. Since then, STGs have been studied extensively in theory and practice, which has led to the development of popular software tools for analysis and synthesis of asynchronous control circuits, such as Petrify and Workcraft.
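As a minimal illustration of why Petri nets suit this purpose, the sketch below implements the generic token-firing rule in Python and uses it to step through a tiny request/acknowledge cycle; the net itself is an invented example and is not tied to Petrify, Workcraft, or the STG formalism.

```python
# Generic Petri net firing rule: a transition may fire when every input place
# holds enough tokens; firing consumes them and marks the output places.
# The net below is an invented model of one request/acknowledge cycle.

places = {"idle": 1, "req_sent": 0, "acked": 0}          # initial marking
transitions = {
    "send_req": ({"idle": 1},     {"req_sent": 1}),      # (consume, produce)
    "send_ack": ({"req_sent": 1}, {"acked": 1}),
    "reset":    ({"acked": 1},    {"idle": 1}),
}

def enabled(name: str) -> bool:
    needed, _ = transitions[name]
    return all(places[p] >= n for p, n in needed.items())

def fire(name: str) -> None:
    needed, produced = transitions[name]
    for p, n in needed.items():
        places[p] -= n
    for p, n in produced.items():
        places[p] += n

for t in ("send_req", "send_ack", "reset"):
    assert enabled(t)
    fire(t)
    print(f"fired {t}: marking = {places}")
```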
Subsequent to Petri nets other models of concurrency have been developed that can model asynchronous circuits including the Actor model and process calculi.
Benefits
A variety of advantages have been demonstrated by asynchronous circuits. Both quasi-delay-insensitive (QDI) circuits (generally agreed to be the most "pure" form of asynchronous logic that retains computational universality) and less pure forms of asynchronous circuitry which use timing constraints for higher performance and lower area and power present several advantages.
Robust and cheap handling of metastability of arbiters.
Average-case performance: an average-case time (delay) of operation is not limited to the worst-case completion time of a component (gate, wire, block etc.) as it is in synchronous circuits, as illustrated in the sketch after this list. This results in better latency and throughput performance. Examples include speculative completion, which has been applied to design parallel prefix adders faster than synchronous ones, and a high-performance double-precision floating point adder which outperforms leading synchronous designs.
Early completion: the output may be generated ahead of time, when result of input processing is predictable or irrelevant.
Inherent elasticity: a variable number of data items may appear in pipeline inputs at any time (a pipeline means a cascade of linked functional blocks). This contributes to high performance while gracefully handling variable input and output rates due to unclocked pipeline stage (functional block) delays (congestion may still be possible, however, and input-output gate delays should also be taken into account).
No need for timing-matching between functional blocks either, though, given different delay models (predictions of gate/wire delay times), this depends on the actual approach to asynchronous circuit implementation.
Freedom from the ever-worsening difficulties of distributing a high-fan-out, timing-sensitive clock signal.
Circuit speed adapts to changing temperature and voltage conditions rather than being locked at the speed mandated by worst-case assumptions.
Lower, on-demand power consumption; zero standby power consumption. In 2005, Epson reported 70% lower power consumption compared to a synchronous design. Also, clock drivers can be removed, which can significantly reduce power consumption. However, when using certain encodings, asynchronous circuits may require more area, adding a similar power overhead if the underlying process has poor leakage properties (for example, deep submicrometer processes used prior to the introduction of high-κ dielectrics).
No need for power-matching between local asynchronous functional domains of circuitry. Synchronous circuits tend to draw a large amount of current right at the clock edge and shortly thereafter. The number of nodes switching (and hence, the amount of current drawn) drops off rapidly after the clock edge, reaching zero just before the next clock edge. In an asynchronous circuit, the switching times of the nodes are not correlated in this manner, so the current draw tends to be more uniform and less bursty.
Robustness toward transistor-to-transistor variability in the manufacturing process (which is one of the most serious problems facing the semiconductor industry as dies shrink), variations of voltage supply, temperature, and fabrication process parameters.
Less severe electromagnetic interference (EMI). Synchronous circuits create a great deal of EMI in the frequency band at (or very near) their clock frequency and its harmonics; asynchronous circuits generate EMI patterns which are much more evenly spread across the spectrum.
Design modularity (reuse), improved noise immunity and electromagnetic compatibility. Asynchronous circuits are more tolerant to process variations and external voltage fluctuations.
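The sketch below illustrates the average-case performance point from the list above, using a self-timed ripple-carry adder as an example: completion time is modelled, as a simplifying assumption, by the longest run of carry-propagating bit positions, which on random inputs is far shorter than the 32-stage worst case a clocked design must budget for.

```python
# Self-timed ripple-carry adder, modelled crudely: completion time is taken
# to be the longest run of carry-propagating bit positions.  A clocked adder
# must always budget for the 32-stage worst case; a self-timed one can signal
# completion as soon as its carries actually settle.

import random

WIDTH = 32

def longest_carry_chain(a: int, b: int) -> int:
    """Length of the longest run of positions that propagate a carry."""
    longest = run = 0
    for i in range(WIDTH):
        if ((a >> i) & 1) ^ ((b >> i) & 1):    # propagate: the chain grows
            run += 1
            longest = max(longest, run)
        else:                                   # generate or kill: chain ends
            run = 0
    return longest

samples = [
    longest_carry_chain(random.getrandbits(WIDTH), random.getrandbits(WIDTH))
    for _ in range(10_000)
]
print("worst case  :", WIDTH, "stages (what a clocked design must assume)")
print("average case: %.1f stages" % (sum(samples) / len(samples)))
```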
Disadvantages
Area overhead caused by the additional logic implementing handshaking. In some cases an asynchronous design may require up to double the resources (area, circuit speed, power consumption) of a synchronous design, due to the addition of completion detection and design-for-test circuits.
As of the 1990s and early 2000s, not many people were trained or experienced in the design of asynchronous circuits, compared to synchronous design.
Synchronous designs are inherently easier to test and debug than asynchronous designs. However, this position is disputed by Fant, who claims that the apparent simplicity of synchronous logic is an artifact of the mathematical models used by the common design approaches.
Clock gating in more conventional synchronous designs is an approximation of the asynchronous ideal, and in some cases, its simplicity may outweigh the advantages of a fully asynchronous design.
Performance (speed) of asynchronous circuits may be reduced in architectures that require input-completeness (more complex data path).
Lack of dedicated, asynchronous design-focused commercial EDA tools. As of 2006 the situation was slowly improving, however.
Communication
There are several ways to create asynchronous communication channels that can be classified by their protocol and data encoding.
Protocols
There are two widely used protocol families which differ in the way communications are encoded:
two-phase handshake (also known as two-phase protocol, Non-Return-to-Zero (NRZ) encoding, or transition signaling): Communications are represented by any wire transition; transitions from 0 to 1 and from 1 to 0 both count as communications.
four-phase handshake (also known as four-phase protocol, or Return-to-Zero (RZ) encoding): Communications are represented by a wire transition followed by a reset; a transition sequence from 0 to 1 and back to 0 counts as single communication.
Despite involving more transitions per communication, circuits implementing four-phase protocols are usually faster and simpler than two-phase protocols because the signal lines return to their original state by the end of each communication. In two-phase protocols, the circuit implementations would have to store the state of the signal line internally.
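A wire-level sketch of the two protocol families is given below; it only records the successive (req, ack) values and ignores timing and the data wires, so it is an illustration of the signalling discipline rather than a circuit.

```python
# Wire-level view of the two protocol families (timing and data wires are
# ignored): the successive (req, ack) values for two communications each.

def two_phase(n_comms: int, req: int = 0, ack: int = 0):
    """NRZ / transition signalling: every toggle of req is a communication."""
    trace = []
    for _ in range(n_comms):
        req ^= 1; trace.append((req, ack))    # request: any transition counts
        ack ^= 1; trace.append((req, ack))    # acknowledge: any transition
    return trace

def four_phase(n_comms: int):
    """RZ signalling: req and ack return to zero after every communication."""
    trace = []
    for _ in range(n_comms):
        trace += [(1, 0), (1, 1), (0, 1), (0, 0)]   # req, ack, release, reset
    return trace

print("two-phase :", two_phase(2))    # 4 transitions for 2 communications
print("four-phase:", four_phase(2))   # 8 transitions for 2 communications
```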
Note that these basic distinctions do not account for the wide variety of protocols. These protocols may encode only requests and acknowledgements or also encode the data, which leads to the popular multi-wire data encoding. Many other, less common protocols have been proposed including using a single wire for request and acknowledgment, using several significant voltages, using only pulses or balancing timings in order to remove the latches.
Data encoding
There are two widely used data encodings in asynchronous circuits: bundled-data encoding and multi-rail encoding.
Another common way to encode the data is to use multiple wires to encode a single digit: the value is determined by the wire on which the event occurs. This avoids some of the delay assumptions necessary with bundled-data encoding, since the request and the data are not separated anymore.
Bundled-data encoding
Bundled-data encoding uses one wire per bit of data with a request and an acknowledge signal; this is the same encoding used in synchronous circuits without the restriction that transitions occur on a clock edge. The request and the acknowledge are sent on separate wires with one of the above protocols. These circuits usually assume a bounded delay model with the completion signals delayed long enough for the calculations to take place.
In operation, the sender signals the availability and validity of data with a request. The receiver then indicates completion with an acknowledgement, indicating that it is able to process new requests. That is, the request is bundled with the data, hence the name "bundled-data".
Bundled-data circuits are often referred to as micropipelines, whether they use a two-phase or four-phase protocol, even if the term was initially introduced for two-phase bundled-data.
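The following Python sketch models one bundled-data transfer under an assumed four-phase discipline; the class and signal names are illustrative rather than drawn from any standard.

```python
# One bundled-data transfer under an assumed four-phase discipline: the data
# wires carry the value, and a separate request (delayed long enough for the
# data to be valid) tells the receiver when to look at them.

class BundledDataChannel:
    def __init__(self, width: int):
        self.data = [0] * width       # one wire per data bit
        self.req = 0
        self.ack = 0

    def send(self, value: int, receiver):
        # Drive the data wires first; the request follows after a matched delay.
        self.data = [(value >> i) & 1 for i in range(len(self.data))]
        self.req = 1                          # "the data is valid"
        received = receiver(self.data)        # receiver samples the bundle
        self.ack = 1                          # "I have taken the data"
        self.req = 0                          # four-phase return to zero
        self.ack = 0
        return received

def receiver(bits):
    return sum(bit << i for i, bit in enumerate(bits))

channel = BundledDataChannel(width=8)
print(channel.send(0b10110010, receiver))     # prints 178
```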
Multi-rail encoding
Multi-rail encoding uses multiple wires without a one-to-one relationship between bits and wires and a separate acknowledge signal. Data availability is indicated by the transitions themselves on one or more of the data wires (depending on the type of multi-rail encoding) instead of with a request signal as in the bundled-data encoding. This provides the advantage that the data communication is delay-insensitive. Two common multi-rail encodings are one-hot and dual rail. The one-hot (also known as 1-of-n) encoding represents a number in base n with a communication on one of the n wires. The dual-rail encoding uses pairs of wires to represent each bit of the data, hence the name "dual-rail"; one wire in the pair represents the bit value of 0 and the other represents the bit value of 1. For example, a dual-rail encoded two bit number will be represented with two pairs of wires for four wires in total. During a data communication, communications occur on one of each pair of wires to indicate the data's bits. In the general case, an m × n encoding represents data as m words of base n.
Dual-rail encoding
Dual-rail encoding with a four-phase protocol is the most common and is also called three-state encoding, since it has two valid states (10 and 01, after a transition) and a reset state (00). Another common encoding, which leads to a simpler implementation than one-hot, two-phase dual-rail is four-state encoding, or level-encoded dual-rail, and uses a data bit and a parity bit to achieve a two-phase protocol.
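A small Python sketch of dual-rail, four-phase encoding follows; the wire-pair conventions shown (10 for 0, 01 for 1, 00 as the spacer) match the three-state scheme described above, while everything else is an illustrative assumption.

```python
# Dual-rail, four-phase ("three-state") encoding: each bit uses a pair of
# wires, with (1, 0) and (0, 1) as the two valid values and (0, 0) as the
# reset (spacer) state between communications.

SPACER = (0, 0)

def encode_bit(bit: int):
    return (0, 1) if bit else (1, 0)          # tuple is (rail0, rail1)

def decode_bit(rails):
    if rails == (1, 0):
        return 0
    if rails == (0, 1):
        return 1
    if rails == SPACER:
        return None                            # no data yet / reset phase
    raise ValueError("illegal code (1, 1)")

def send_word(bits):
    """Yield the wire states for one four-phase communication per bit."""
    for bit in bits:
        yield encode_bit(bit)                  # data phase
        yield SPACER                           # return-to-zero phase

for rails in send_word([1, 0, 1]):
    print(rails, "->", decode_bit(rails))
```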
Asynchronous CPU
Asynchronous CPUs are one of several ideas for radically changing CPU design.
Unlike a conventional processor, a clockless processor (asynchronous CPU) has no central clock to coordinate the progress of data through the pipeline.
Instead, stages of the CPU are coordinated using logic devices called "pipeline controls" or "FIFO sequencers". Basically, the pipeline controller clocks the next stage of logic when the existing stage is complete (a sketch of this idea follows the list below). In this way, a central clock is unnecessary. It may actually be even easier to implement high performance devices in asynchronous, as opposed to clocked, logic:
components can run at different speeds on an asynchronous CPU; all major components of a clocked CPU must remain synchronized with the central clock;
a traditional CPU cannot "go faster" than the expected worst-case performance of the slowest stage/instruction/component. When an asynchronous CPU completes an operation more quickly than anticipated, the next stage can immediately begin processing the results, rather than waiting for synchronization with a central clock. An operation might finish faster than normal because of attributes of the data being processed (e.g., multiplication can be very fast when multiplying by 0 or 1, even when running code produced by a naive compiler), or because of the presence of a higher voltage or bus speed setting, or a lower ambient temperature, than 'normal' or expected.
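The sketch referred to above models the pipeline-control idea in Python: each stage hands its result to the next as soon as it finishes, so the latency of an instruction tracks the actual work done rather than a fixed worst-case clock period. Stage names and delays are invented for the example, not taken from any real CPU.

```python
# Pipeline-control sketch: each stage hands off to the next as soon as it
# finishes, so total latency tracks the actual work rather than a worst-case
# clock.  Stage names and delays are invented for the example.

import random

def stage(name, work_ns):
    """Return a callable stage with a (possibly variable) completion time."""
    def run(value, t_now):
        t_done = t_now + work_ns()                # data-dependent delay
        print(f"{name:8s} done at {t_done:6.1f} ns")
        return value + 1, t_done                  # result and handoff time
    return run

pipeline = [
    stage("fetch",   lambda: 1.0),
    stage("decode",  lambda: random.uniform(0.5, 1.5)),
    stage("execute", lambda: 0.2 if random.random() < 0.5 else 3.0),
    stage("write",   lambda: 0.8),
]

value, t = 0, 0.0
for run in pipeline:                              # no global clock: each stage
    value, t = run(value, t)                      # starts when the previous ends
print(f"instruction finished after {t:.1f} ns")
```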
Asynchronous logic proponents believe these capabilities would have these benefits:
lower power dissipation for a given performance level, and
highest possible execution speeds.
The biggest disadvantage of the clockless CPU is that most CPU design tools assume a clocked CPU (i.e., a synchronous circuit). Many tools "enforce synchronous design practices". Making a clockless CPU (designing an asynchronous circuit) involves modifying the design tools to handle clockless logic and doing extra testing to ensure the design avoids metastable problems. The group that designed the AMULET, for example, developed a tool called LARD to cope with the complex design of AMULET3.
Examples
Despite all the difficulties numerous asynchronous CPUs have been built.
The ORDVAC of 1951 was a successor to the ENIAC and the first asynchronous computer ever built.
The ILLIAC II was the first completely asynchronous, speed independent processor design ever built; it was the most powerful computer at the time.
DEC PDP-16 Register Transfer Modules (ca. 1973) allowed the experimenter to construct asynchronous, 16-bit processing elements. Delays for each module were fixed and based on the module's worst-case timing.
Caltech
Since the mid-1980s, Caltech has designed four non-commercial CPUs in an effort to evaluate the performance and energy efficiency of asynchronous circuits.
Caltech Asynchronous Microprocessor (CAM)
In 1988 the Caltech Asynchronous Microprocessor (CAM) was the first asynchronous, quasi delay-insensitive (QDI) microprocessor made by Caltech. The processor had a 16-bit-wide RISC ISA and separate instruction and data memories. It was manufactured by MOSIS and funded by DARPA. The project was supervised by the Office of Naval Research, the Army Research Office, and the Air Force Office of Scientific Research.
During demonstrations, the researchers loaded a simple program which ran in a tight loop, pulsing one of the output lines after each instruction. This output line was connected to an oscilloscope. When a cup of hot coffee was placed on the chip, the pulse rate (the effective "clock rate") naturally slowed down to adapt to the worsening performance of the heated transistors. When liquid nitrogen was poured on the chip, the instruction rate shot up with no additional intervention. Additionally, at lower temperatures, the voltage supplied to the chip could be safely increased, which also improved the instruction rate – again, with no additional configuration.
When implemented in gallium arsenide (GaAs) it was claimed to achieve 100 MIPS. Overall, the research paper interpreted the resultant performance of CAM as superior to the commercial alternatives available at the time.
MiniMIPS
In 1998 the MiniMIPS, an experimental asynchronous microcontroller based on MIPS I, was made. Even though its SPICE-predicted performance was around 280 MIPS at 3.3 V, the implementation suffered from several layout mistakes (human error) and the measured results turned out to be lower by about 40%.
The Lutonium 8051
Made in 2003, the Lutonium 8051 was a quasi delay-insensitive asynchronous microcontroller designed for energy efficiency. Its implementation followed the Harvard architecture.
Epson
In 2004, Epson manufactured the world's first bendable microprocessor called ACT11, an 8-bit asynchronous chip. Synchronous flexible processors are slower, since bending the material on which a chip is fabricated causes wild and unpredictable variations in the delays of various transistors, for which worst-case scenarios must be assumed everywhere and everything must be clocked at worst-case speed. The processor is intended for use in smart cards, whose chips are currently limited in size to those small enough that they can remain perfectly rigid.
IBM
In 2014, IBM announced a SyNAPSE-developed chip that runs in an asynchronous manner, with one of the highest transistor counts of any chip ever produced. IBM's chip consumes orders of magnitude less power than traditional computing systems on pattern recognition benchmarks.
Timeline
ORDVAC and the (identical) ILLIAC I (1951)
Johnniac (1953)
WEIZAC (1955)
Kiev (1958), a Soviet machine that used a programming language with pointers well before they appeared in the PL/I language
ILLIAC II (1962)
Victoria University of Manchester built Atlas (1964)
ICL 1906A and 1906S mainframe computers, part of the 1900 series and sold from 1964 for over a decade by ICL
Polish computers KAR-65 and K-202 (1965 and 1970 respectively)
Honeywell CPUs 6180 (1972) and Series 60 Level 68 (1981) upon which Multics ran asynchronously
Soviet bit-slice microprocessor modules (late 1970s) produced as К587, К588 and К1883 (U83x in East Germany)
Caltech Asynchronous Microprocessor, the world's first asynchronous microprocessor (1988)
ARM-implementing AMULET (1993 and 2000)
Asynchronous implementation of MIPS R3000, dubbed MiniMIPS (1998)
Several versions of the XAP processor experimented with different asynchronous design styles: a bundled data XAP, a 1-of-4 XAP, and a 1-of-2 (dual-rail) XAP (2003?)
ARM-compatible processor (2003?) designed by Z. C. Yu, S. B. Furber, and L. A. Plana; "designed specifically to explore the benefits of asynchronous design for security sensitive applications"
SAMIPS (2003), a synthesisable asynchronous implementation of the MIPS R3000 processor
"Network-based Asynchronous Architecture" processor (2005) that executes a subset of the MIPS architecture instruction set
ARM996HS processor (2006) from Handshake Solutions
HT80C51 processor (2007?) from Handshake Solutions.
Vortex, a superscalar general-purpose CPU with a load/store architecture, from Intel (2007); it was developed as Fulcrum Microsystems Test Chip 2 and was not commercialized, except for some of its components; the chip included DDR SDRAM and a 10 Gb Ethernet interface linked to the CPU via a Nexus system-on-chip network
SEAforth multi-core processor (2008) from Charles H. Moore
GA144 multi-core processor (2010) from Charles H. Moore
TAM16: 16-bit asynchronous microcontroller IP core (Tiempo)
Aspida asynchronous DLX core; the asynchronous open-source DLX processor (ASPIDA) has been successfully implemented both in ASIC and FPGA versions
| Technology | Digital logic | null |
1627853 | https://en.wikipedia.org/wiki/Newman%20projection | Newman projection | A Newman projection is a drawing that helps visualize the 3-dimensional structure of a molecule. This projection most commonly sights down a carbon-carbon bond, making it a very useful way to visualize the stereochemistry of alkanes. A Newman projection visualizes the conformation of a chemical bond from front to back, with the front atom represented by the intersection of three lines (a dot) and the back atom as a circle. The front atom is called proximal, while the back atom is called distal. This type of representation clearly illustrates the specific dihedral angle between the proximal and distal atoms.
This projection is named after American chemist Melvin Spencer Newman, who introduced it in 1952 as a partial replacement for Fischer projections, which are unable to represent conformations and thus conformers properly. This diagram style is an alternative to a sawhorse projection, which views a carbon–carbon bond from an oblique angle, or a wedge-and-dash style, such as a Natta projection. These other styles can indicate the bonding and stereochemistry, but not as much conformational detail.
A Newman projection can also be used to study cyclic molecules, such as the chair conformation of cyclohexane:
Because of the free rotation around single bonds, there are various conformations for a single molecule. Up to six unique conformations may be drawn for any given chemical bond. Each conformation is drawn by rotation of either the proximal or distal atom 60 degrees. Of these six conformations, three will be in a staggered conformation, while the other three will be in an eclipsed conformation. These six conformations can be represented in a relative energy diagram.
A staggered projection appears to have the surrounding species equidistant from each other. This kind of conformation tends to experience both anti and gauche interactions. Anti interactions refer to substituents (usually of the same type) sitting exactly opposite each other, at 180°, on the Newman projection. Gauche interactions refer to substituents (also usually of the same type) that are 60° from each other on a Newman projection. Anti interactions experience less steric strain than gauche interactions, but both experience less steric strain than the eclipsed conformation.
An eclipsed projection appears to have the surrounding species almost on top of each other. In reality, these species are in line with each other, but are drawn slightly staggered to help format the projection onto paper. These types of conformations are generally higher in energy due to increased bond strain. However, this strain can be somewhat lower if a hydrogen is eclipsed over a larger species, as opposed to two large species eclipsed over each other.
| Physical sciences | Stereochemistry | Chemistry |
1627862 | https://en.wikipedia.org/wiki/Eclipsed%20conformation | Eclipsed conformation | In chemistry an eclipsed conformation is a conformation in which two substituents X and Y on adjacent atoms A, B are in closest proximity, implying that the torsion angle X–A–B–Y is 0°. Such a conformation can exist in any open chain, single chemical bond connecting two sp3-hybridised atoms, and it is normally a conformational energy maximum. This maximum is often explained by steric hindrance, but its origins sometimes actually lie in hyperconjugation (as when the eclipsing interaction is of two hydrogen atoms).
In the example of ethane, two methyl groups are connected with a carbon-carbon sigma bond, just as one might connect two Lego pieces through a single "stud" and "tube". With this image in mind, if the methyl groups are rotated around the bond, they will remain connected; however, the shape will change. This leads to multiple possible three-dimensional arrangements, known as conformations, conformational isomers (conformers), or sometimes rotational isomers (rotamers).
Organic chemistry
Conformations can be described by dihedral angles, which are used to determine the placements of atoms and their distance from one another and can be visualized by Newman projections. A dihedral angle can indicate staggered and eclipsed orientation, but is specifically used to determine the angle between two specific atoms on opposing carbons. Different conformations have unequal energies, creating an energy barrier to bond rotation which is known as torsional strain. In particular, eclipsed conformations tend to have raised energies due to the repulsion of the electron clouds of the eclipsed substituents. The relative energies of different conformations can be visualized using graphs. In the example of ethane, such a graph shows that rotation around the carbon-carbon bond is not entirely free but that an energy barrier exists. The ethane molecule in the eclipsed conformation is said to suffer from torsional strain, and by rotating around the carbon-carbon bond to the staggered conformation, around 12.5 kJ/mol of torsional energy is released. In the case of butane and its four-carbon chain, three carbon-carbon bonds are available to rotate. The example below looks down the C2-C3 bond. Below is the sawhorse and Newman representation of butane in an eclipsed conformation with the two CH3 groups (C1 and C4) at a 0-degree angle from one another (left).
If the front is rotated 60° clockwise, the butane molecule is now in a staggered conformation (right). This conformation is more specifically referred to as the gauche conformation of butane. This is because the methyl groups are staggered, but only 60° from one another. This conformation is more energetically favored than the eclipsed conformation, but it is not the most energetically favorable conformation. Another 60° rotation gives a second eclipsed conformation in which both methyl groups are aligned with hydrogen atoms. One more 60° rotation produces another staggered conformation referred to as the anti conformation. This occurs when the methyl groups are positioned opposite (180°) one another. This is the most energetically favorable conformation.
The minima can be seen on the graph at 60, 180, and 300 degrees, while the maxima can be seen at 0, 120, 240, and 360 degrees. The maxima correspond to the eclipsed conformations, the highest-energy of which occurs at a dihedral angle of zero degrees, where the two methyl groups eclipse each other.
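The overall shape of such a torsional energy curve can be reproduced with an idealized two-term cosine potential. The Python sketch below is purely illustrative: the functional form is a common textbook-style approximation, and the coefficients A and B are invented so that the anti, gauche, and eclipsed energies come out in roughly the right order; they are not fitted experimental values.

```python
# Idealized torsional potential for the butane C2-C3 bond:
#   V(phi) = A*(1 + cos(3*phi)) + B*(1 + cos(phi))
# A sets the three-fold eclipsing barrier, B the extra methyl-methyl repulsion.
# Coefficients are illustrative only, not experimental data.
import math

A, B = 7.0, 2.5   # kJ/mol, chosen for illustration

def V(phi_deg):
    phi = math.radians(phi_deg)
    return A * (1 + math.cos(3 * phi)) + B * (1 + math.cos(phi))

if __name__ == "__main__":
    for phi in (0, 60, 120, 180, 240, 300):
        kind = "eclipsed (maximum)" if phi % 120 == 0 else "staggered (minimum)"
        print(f"{phi:3d} deg: {V(phi):5.1f} kJ/mol  {kind}")
```

Printed values show the staggered angles (60°, 180°, 300°) as minima, with the anti conformation at 180° lowest, and the eclipsed angles (0°, 120°, 240°) as maxima, matching the diagram described above.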
Structural applications
As established by X-ray crystallography, octachlorodimolybdate(II) anion ([Mo2Cl8]4-) has an eclipsed conformation. This sterically unfavorable geometry is given as evidence for a quadruple bond between the Mo centers.
Experiments such as X-ray and electron diffraction analyses, nuclear magnetic resonance, microwave spectroscopies, and more have allowed researchers to determine which cycloalkane structures are the most stable based on the different possible conformations. Another method that was shown successful is molecular mechanics, a computational method that allows the total strain energies of different conformations to be found and analyzed. It was found that the most stable conformations had lower energies based on values of energy due to bond distances and bond angles.
In many cases, isomers of alkanes with branched chains have lower boiling points than those that are unbranched, which has been shown through experimentation with isomers of C8H18. This is because of a combination of intermolecular forces and molecular shape. The more branched an alkane is, the more compact its shape; a less branched alkane, by contrast, presents more surface area over which intermolecular attractive forces act, and breaking these forces is what raises the boiling point of unbranched alkanes. In another case, 2,2,3,3-tetramethylbutane is shaped more like an ellipsoid, allowing it to pack into a crystal lattice; this raises the melting point of the molecule because more energy is needed to transition from the solid to the liquid state.
| Physical sciences | Stereochemistry | Chemistry |
1628107 | https://en.wikipedia.org/wiki/Ulmus%20rubra | Ulmus rubra | Ulmus rubra, the slippery elm, is a species of elm native to eastern North America.
Other common names include red elm, gray elm, soft elm, moose elm, and Indian elm.
Description
Ulmus rubra is a medium-sized deciduous tree with a spreading head of branches, commonly growing to , very occasionally over in height. Its heartwood is reddish-brown. The broad oblong to obovate leaves are long, rough above but velvety below, with coarse double-serrate margins, acuminate apices and oblique bases; the petioles are long. The leaves are often tinged red on emergence, turning dark green by summer and a dull yellow in autumn. The perfect, apetalous, wind-pollinated flowers are produced before the leaves in early spring, usually in tight, short-stalked, clusters of 10–20. The reddish-brown fruit is an oval winged samara, orbicular to obovate, slightly notched at the top, long, the single, central seed coated with red-brown hairs, naked elsewhere.
Similar species
The species superficially resembles American elm (Ulmus americana), but is more closely related to the European wych elm (U. glabra), which has a very similar flower structure, though lacks the pubescence over the seed. U. rubra is chiefly distinguished from American elm by its downy twigs, chestnut brown or reddish hairy buds, and slimy red inner bark.
Taxonomy
The tree was first named as part of Ulmus americana in 1753, but identified as a separate species, U. rubra, in 1793 by Pennsylvania botanist Gotthilf Muhlenberg. The slightly later name U. fulva, published by French botanist André Michaux in 1803, is still widely used in information related to dietary supplements and alternative medicine.
Etymology
The specific epithet rubra (red) alludes to the tree's reddish wood, whilst the common name 'slippery elm' alludes to the mucilaginous inner bark.
The reddish-brown heartwood lends the tree the common name 'red elm'.
Distribution and habitat
The species is native to eastern North America, ranging from southeast North Dakota, east to Maine and southern Quebec, south to northernmost Florida, and west to eastern Texas, where it thrives in moist uplands, although it will also grow in dry, intermediate soils.
Ecology
Pests and diseases
The tree is reputedly less susceptible to Dutch elm disease than other species of American elms, but is severely damaged by the elm leaf beetle (Xanthogaleruca luteola).
Hybrids
In the central United States, native U. rubra hybridizes in the wild with the Siberian elm (U. pumila), which was introduced in the early 20th century and has spread widely since, prompting conservation concerns for the genetic integrity of the former species.
Cultivation
The species has seldom been planted for ornament in its native country. It occasionally appeared in early 20th-century US nursery catalogues. Introduced to Europe and Australasia, it has never thrived in the UK; Elwes & Henry knew of not one good specimen, and the last tree planted at Kew attained a height of only in 60 years. Specimens supplied by the Späth nursery to the Royal Botanic Garden Edinburgh in 1902 as U. fulva may survive in Edinburgh as it was the practice of the Garden to distribute trees about the city (vide Wentworth Elm). A specimen at RBGE was felled c.1990. The current list of Living Accessions held in the Garden per se does not list the plant. Several mature trees survive in Brighton (see Accessions). The tree was propagated and marketed in the UK by the Hillier & Sons nursery, Winchester, Hampshire, from 1945, with 20 sold in the period 1970 to 1976, when production ceased.
U. rubra was introduced to Europe in 1830.
There are no known cultivars, though Meehan misnamed Ulmus americana 'Beebe's Weeping' as U. fulva pendula (1889) and Späth misnamed Ulmus americana 'Pendula' U. fulva (Michx.) pendula Hort. (1890). The hybrid U. rubra × U. pumila cultivar 'Lincoln' is sometimes erroneously listed as U. rubra 'Lincoln'.
Hybrid cultivars
U. rubra had limited success as a hybrid parent in the 1960s, resulting in the cultivars 'Coolshade', 'Fremont', 'Improved Coolshade', 'Lincoln', 'Rosehill', and probably 'Willis'. In later years, it was also used in the Wisconsin elm breeding program to produce 'Repura' and 'Revera' although neither is known to have been released to commerce. In Germany, the tree formed part of a complex hybrid raised by the Eisele nursery in Darmstadt, provisionally named 'Eisele H1'; patent pending (2020).
Uses
Food
The mucilaginous inner bark of the tree is edible raw or boiled, and was eaten by Native Americans. The bark can also be used to make tea.
Medicinal
The species has various traditional medicinal uses. The inner bark has long been used as a demulcent, and is still produced commercially for this purpose in the United States with approval for sale as an over-the-counter demulcent by the US Food and Drug Administration. Sometimes the leaves are dried and ground into a powder, then made into a tea.
Timber
The timber is not of much importance commercially, and is not found anywhere in great quantity. Macoun considered it more durable than that of the other elms, and better suited for railway ties, fence-posts, and rails, while Pinchot recommended planting it in the Mississippi valley, as it grows fast in youth, and could be utilized for fence-posts when quite young, since the sapwood, if thoroughly dried, is quite as durable as the heartwood. The wood is also used for the hubs of wagon wheels, as it is very shock resistant owing to the interlocking grain. The wood, as 'red elm', is sometimes used to make bows for archery. The yoke of the Liberty Bell, a symbol of the independence of the United States, was made from slippery elm.
Baseball
Though now outmoded, slippery elm tablets were chewed by spitball pitchers to enhance the effectiveness of the saliva applied to make the pitched baseball curve. Gaylord Perry wrote about how he used slippery elm tablets in his 1974 autobiography, Me and the Spitter.
Miscellaneous
The tree's fibrous inner bark produces a strong and durable fiber that can be spun into thread, twine, or rope useful for bowstrings, ropes, jewellery, clothing, snowshoe bindings, woven mats, and even some musical instruments. Once cured, the wood is also excellent for starting fires with the bow-drill method, as it grinds into a very fine flammable powder under friction.
Culture
Notable trees
A tree in Westmount, Quebec, Canada, measured in girth in 2011. The US national champion, measuring in circumference and tall, with an average crown spread of wide, grows in Kentucky. Another tall specimen grows in the Bronx, New York City, at 710 West 246th Street, measuring high in 2002. In the UK, there is no designated Tree Register champion.
Accessions
North America
Arnold Arboretum, US. Acc. nos. 737–88 (unrecorded provenance), 172-2017 (Massachusetts), 344-2017 (Missouri).
Bernheim Arboretum and Research Forest, Clermont, Kentucky, US. No details available.
Brenton Arboretum, Dallas Center, Iowa, US. No details available.
Chicago Botanic Garden, Glencoe, Illinois, US. 1 tree, no other details available.
Dominion Arboretum, Ottawa, Ontario, Canada. No acc. details available.
Longwood Gardens, US. Acc. no. L–3002, of unrecorded provenance.
Nebraska Statewide Arboretum, US. No details available.
Smith College, US. Acc. no. 8119PA.
US National Arboretum, Washington, D.C., US. Acc. no. 77501.
Europe
Brighton & Hove City Council, UK. NCCPG Elm Collection. Carden Park, Hollingdean (1 tree); Malthouse Car Park, Kemp Town (1 tree).
Grange Farm Arboretum, Sutton St James, Spalding, Lincolnshire, UK. Acc. no. 522
Hortus Botanicus Nationalis, Salaspils, Latvia. Acc. nos. 18168, 18169, 18170.
Linnaean Gardens of Uppsala, Sweden. Acc. no. 2009–0223. Wild collected in US.
Royal Botanic Gardens Wakehurst Place, UK. Acc. no. 1973–21050.
Thenford House arboretum, Northamptonshire, UK. No details available.
University of Copenhagen Botanic Garden, Denmark. No details available.
Wijdemeren city council, The Netherlands. One tree planted gardens Rading 1, Loosdrecht.
Australasia
Eastwoodhill Arboretum, Gisborne, New Zealand. 1 tree, no details available.
| Biology and health sciences | Rosales | Plants |
1629198 | https://en.wikipedia.org/wiki/Saint%20Petersburg%20Metro | Saint Petersburg Metro | The Saint Petersburg Metro () is a rapid transit system in Saint Petersburg, Russia. Construction began in early 1941, but was put on hold due to World War II and the subsequent Siege of Leningrad, during which the constructed stations were used as bomb shelters. It was finally opened on 15 November 1955.
Formerly known as the Order of Lenin Leningrad Metro named after V. I. Lenin (), the system exhibits many typical Soviet designs and features exquisite decorations and artwork making it one of the most attractive and elegant metros in the world. Due to the city's unique geology, the Saint Petersburg Metro is also one of the deepest metro systems in the world and the deepest by the average depth of all the stations. The system's deepest station, Admiralteyskaya, is below ground.
The network consists of 5 lines with a total length of . It has 73 stations including 7 transfer points. Serving about 2 million passengers daily, it is the 26th busiest metro system in the world.
History
Metro projects for the imperial capital
The question of building an underground road in Saint Petersburg arose in 1820. A resident of the city, a self-taught man by the name of Torgovanov, submitted a bold project to Tsar Alexander I — involving the digging of a tunnel from the center of the city to Vasilyevsky Island. The Russian ruler rejected the project and ordered the inventor to sign a pledge "not to engage in hare-brained schemes in the future, but to exercise his efforts in matters appropriate to his estate."
Other, more developed projects subsequently emerged, but they, too, received no recognition.
Many arguments were advanced against the construction of an underground road. The "city fathers" stated that the excavation works would "violate the amenities and respectability of the city"; the landlords affirmed that underground traffic would undermine the foundations of the buildings; the merchants feared that "the open excavations would interfere with normal trade"; but the most violent adversaries of the novelty, the clergy, insisted that "the underground passages running near church buildings would detract from their dignity". Thus all the projects for the construction of an underground passage in Saint Petersburg, and later in Petrograd, remained on paper.
By the end of the 19th century, certain interested parties began discussing the possibility of opening the Russian Empire's first metropolitan railway system. The press of the time praised the initial plans, while engineers privately worried about the serious lack of experience in the sort of projects required to build a metro; at the time, Saint Petersburg did not even have electrified tramways. However, due to the wish of the municipal authorities of the time to take ownership of the metro after its eventual entry into service, none of the aforementioned projects ever came to fruition.
In 1901 the engineer Vladimir Pechkovsky presented his project to build an elevated station in the middle of Nevsky Prospect, opposite the Kazan Cathedral, and to link it, via elevated and underground sections of track (above the Ekaterinsky and Obvodny canals and beneath the Zabalkansky prospect) with the Baltiysky and Varshavsky Rail Terminals. In the same year, Reshevsky, also an engineer, working at the behest of the Emperor's minister for transport, came up with two possible projects, which aimed primarily to unite all of Saint Petersburg's main railway stations with one urban interchange. An interesting development, the work upon which had been carried out for many years by railway engineer P.I. Balinsky (one of the first Russian metro engineers) involved plans to build a dedicated network of six urban lines, two of which would be radial lines with a total length of . The construction work (including the filling of low-lying areas of the city in order to avoid flooding, construction of 11 major bridges, embankments and viaducts at a height of , and the actual laying of track etc.) was projected to cost around 190 million rubles. However, in 1903 Emperor Nicholas II rejected the scheme before any work ever started.
Almost all pre-revolutionary designs featured the concept of an elevated metro system, similar to the Paris or Vienna metros. However, as was later discovered through the experience of operating uncovered ground-level metro sections in St. Petersburg (which were eventually closed for the same reason), such systems would have been very difficult to maintain. At the time, Russian engineers had neither the equipment nor the technical skills to build deep-lying tunnels through the challenging ground beneath St. Petersburg.
In 1918 Moscow became the country's capital after the October Revolution of 1917 and the Russian Civil War (1917–1922) followed; for more than a decade plans to build a metro in Petrograd languished.
First phase construction, abandonment, and opening
In 1938 the question of building a metro for St Petersburg (by then renamed Leningrad) resurfaced at the initiative of Alexei Kosygin, Chairman of the Executive Committee of the Leningrad City Soviets of Working People's Deputies. Ivan Zubkov, an engineer who for his work was later to become a Hero of Socialist Labour, was appointed the first director of the metro construction. The initial project was designed by the Moscow institute 'Metrogiprotrans', but on 21 January 1941 'Construction Directorate № 5 of the People's Commissariat' was founded as a body to specifically oversee the design and construction of the Leningrad Metro. By April 1941, 34 shafts for the initial phase of construction had been finished.
During the Second World War construction work was frozen due to severe lack of funding, manpower and equipment. At this time, many of the metro construction workers were employed in the construction and repair of railheads and other objects vital to the besieged city. Zubkov died in 1944, having never seen the opening of the metro.
The initial post-war era
In 1946 Lenmetroproyekt was created, under the leadership of M A Samodurov, to finish the construction of the metro first phase. A new version of the metro project, devised by specialists, identified two new solutions to the problems to be encountered during the metro construction. Firstly, stations were to be built at a level slightly raised above that of normal track so as to prevent drainage directly into them, whilst the average tunnel width was to be reduced from the standard of the Moscow Metro to .
On 3 September 1947 construction began again in the Leningrad subway, and in December 1954, the Council of Ministers of the USSR ordered the establishment of the state transport organization Leningradsky Metropoliten, to be headed by Ivan Novikov. The organisation set up its offices in the building directly above Tekhnologichesky Institut station. On 7 October 1955 the electricity was turned on in the metro, and on 5 November 1955, the act by which the first stage of the metro was put into operation, was signed. Ten years after the end of the war, at the beginning of the post-Stalin Khrushchev Thaw, the city finally got an underground transport network. The subway grand opening was held on 15 November 1955, with the first seven stations (the eighth one, Pushkinskaya opened a few months later) being put into public use. These stations later became part of the Kirovsko-Vyborgskaya Line, connecting the Moscow Rail Terminal in the city centre with the Kirovsky industrial zone in the southwest. Subsequent development included lines under the Neva River in 1958, and the construction of the Vyborgsky Radius in the mid-1970s to reach the new housing developments in the north. In 1978, the line was extended past the city limits into the Leningrad Oblast. 1,023 governmental awards were made to participants of the construction of the metro first stage.
Further development
The first expansion of the metro took place in 1958, when the first line (later to become the Kirovsko-Vyborgskaya Line) was extended beneath the Neva river to the Finlyandsky Rail Terminal. Later this same line was extended when the Vyborgsky radius, constructed in the 1970s, brought the metro to new residential areas constructed in the north-east of the city, and by 1978, those further out, in the nearby Leningrad Oblast. The metro was expanded to the south-west, with the construction of the Kirovsky radius, in 1977.
Construction of the second, Moskovsko-Petrogradskaya line began almost immediately after the initial opening of the metro. Just six years later, in 1961, the section from Tekhnologichesky Institut to Park Pobedy, along Moskovsky Prospect to the southern areas of the city, was opened. In 1963 the line was extended north to Petrogradskaya station, in the process making Tekhnologichesky Institut the USSR's first cross-platform interchange station. Further extension of the line was undertaken to the south in the early 1970s, and in the 1980s to the north, with the final station, Parnas, being opened, following numerous delays, in 2006.
The third Nevsko-Vasileostrovskaya Line was first opened in 1967 and eventually linked Vasilievsky Island, the city centre, and the industrial zones on the southeastern bank of the Neva in a series of extensions (1970, 1979, 1981 and 1984). The fourth line, Pravoberezhnaya, was opened in 1985 to serve the new residential districts on the right bank of the Neva before reaching the city centre in 1991 and continuing to the northwest in the late 1990s. It was in this period that the opening of the metro's fifth (Frunzensko-Primorskaya) line was planned, however, it was only in 2008, with the opening of Volkovskaya and Zvenigorodskaya stations, that this took place. On 7 March 2009, when the fourth line was expanded with the addition of Spasskaya station, the fifth line finally (as dictated in earlier projects) began to directly serve both the Primorsky and Frunzensky districts of Saint Petersburg.
By the time of the USSR's collapse, the Leningrad Metro comprised 54 stations and of track. Up until this period, it was officially known as the 'V.I. Lenin Leningrad Metro of the Order of Lenin' (Ленинградский Метрополитен Ордена Ленина имени В.И.Ленина).
Modern period
At the beginning of 1992 construction work was being carried out at 14 stations, or objects relating to them. These were six stations of the Primorsky radius (Admiralteyskaya, Sportivnaya, Chkalovskaya, Krestovsky Ostrov, Staraya Derevnya, and Komendantsky Prospekt), two stations on the fourth line (Spasskaya and transfer tunnels to Sadovaya station), Parnas and the 'Vyborg' depot on line 2, and five stations of the Frunzensky radius (Zvenigorodskaya, Obvodny Kanal, Volkovskaya, Bukharestskaya, and Mezhdunarodnaya). It was therefore believed that, given the average construction time of 5.6 years per station in Saint Petersburg, all the works mentioned above would, with sufficient funding, be completed no later than 1997, which would have been a record in the history of the construction of the St. Petersburg metro. This, however, was not achieved, and the plans were only completed in late 2012.
In 1994 it was planned, over 10 years, to massively extend the metro and almost "double" its size, building three new lines and 61 new stations. However, in reality, over this period until 2004, just 6 stations were opened. At this point the metro considered funding construction through a system of individual stage and station sponsorship. Saint Petersburg's unforgiving geology has frequently hampered attempts by Metro builders. The most notable case took place on the Kirovsko-Vyborgskaya Line. While constructing the line in the 1970s, the tunnelers entered an underground cavity of the Neva River. They managed to complete the tunnel, but in 1995 the tunnel had to be closed and a section of it between Lesnaya and Ploschad Muzhestva flooded. For more than nine years, the northern segment of the line was physically cut off from the rest of the system. A new set of tunnels was built and in June 2004 normal service was restored.
Lines
Line 1 (Kirovsko-Vyborgskaya)
Kirovsko-Vyborgskaya ("Kirovsky [district]–Vyborgsky [district]") Line is the oldest line of the metro, opened in 1955. The original stations are very beautiful and elaborately decorated, especially Avtovo and Narvskaya. The line connects four out of five Saint Petersburg's main railway stations. In 1995, a flooding occurred in a tunnel between Lesnaya and Ploschad Muzhestva stations and, for nine years, the line was separated into two independent segments (the gap was connected by a shuttle bus route). The line contains three of the six shallow stations that are present in the metro.
The line cuts Saint Petersburg centre on a northeast–southwest axis. In the south its alignment follows the shore of the Gulf of Finland. In the north it extends outside the city limits into the Leningrad oblast (it is the only line to stretch beyond the city boundary). The Kirovsko-Vyborgskaya Line is generally coloured red on Metro maps.
Line 2 (Moskovsko-Petrogradskaya)
Moskovsko-Petrogradskaya ("Moskovsky [district]–Petrogradsky [district]") Line is the second oldest line of the metro, opened in 1961. It featured the first cross-platform transfer in the USSR. It was also the first metro line in Saint Petersburg to feature a unique platform type that soon became dubbed as "Horizontal Lift".
The line cuts Saint Petersburg on a north-south axis and is generally coloured blue on Metro maps. In 2006, as an extension was opened, it became the longest line on the system.
Line 3 (Nevsko-Vasileostrovskaya)
Nevsko-Vasileostrovskaya ("Nevsky [district]–Vasileostrovsky [district]") Line is a line of the metro, opened in 1967. Since 1994, it has been officially designated as Line 3. It stands out among St. Petersburg metro lines for two reasons: its stations are almost exclusively of "Horizontal Lift" type and it has the longest inter-station tunnels in the entire system. Metro officials originally intended to add stations in-between the existing ones, but those plans were later abandoned.
The line cuts Saint Petersburg centre on an east-west axis and then turns southeast following the left bank of the Neva River. It is generally coloured green on Metro maps.
Line 4 (Lakhtinsko-Pravoberezhnaya)
Lakhtinsko-Pravoberezhnaya ("Lakhta–Right Bank") Line, initially known just as Pravoberezhnaya ("Right Bank") line, was opened in 1985. It is the shortest line in the system with the stations featuring a modern design.
The line originally opened to provide access from the centre for the new residential areas in the eastern part of city, along the right bank of the Neva River. However, delays in the construction of the future Line 5, compelled to temporarily link the already completed northern part of the Line 5 (starting from Sadovaya) to Pravoberezhnaya Line, as they felt that it was better to have a single connected line rather than two unconnected ones. From that point on, the line expanded northward, as per original plans of Line 5 expansion.
On 7 March 2009, Spasskaya station was completed, creating the city's first three-way transfer, and it officially became the new terminus of Line 4. As per the original plan, all Line 4 stations north of Dostoyevskaya were absorbed into the recently opened Line 5. On 27 December 2024, a westward extension from Spasskaya to Gornyi Institut opened with one new station, and one intermediate station (Teatralnaya) remains under construction.
Line 5 (Frunzensko-Primorskaya)
Frunzensko-Primorskaya ("Frunzensky [district]–Primorsky [district]") Line connects the city's historical centre to the northwestern and southern districts. The line is planned to be expanded to the north.
The line originally opened in December 2008. It contained only two stations until 7 March 2009, when the Line 4 (Pravoberezhnaya Line) segment between Komendantsky Prospekt and Sadovaya stations became a part of the new line.
Line 6 (Krasnosel'sko-Kalininskaya)
Krasnosel'sko-Kalininskaya ("Krasnosel'sky [district]–Kalininsky [district]") Line will go from the southwest of Saint Petersburg, through the city centre, to the northeast of the city. The first stage, which consists of two stations, is under construction and is scheduled to open on 30 April 2025. These stations are Yugo-Zapadnaya (Kazakovskaya) and Putilovskaya.
Stations
Some of the features of the Saint Petersburg Metro make it stand out amongst others, even those in the former USSR. It is customary for stations in the centre of a city to be built very deep, not only to minimise surface disruption but also, because of the Cold War threat, so that they could double as bomb shelters; many old stations do feature provisions such as blast doors and air filters. In most cities, the lines become shallow or even begin to run above ground as they reach the city's outer residential districts. However, this is not the case in Saint Petersburg. The difficult geology means that all but 9 stations are at a deep level. The design and architecture went through numerous phases. The original stations were predominantly of the pylon type, of which there are 15 stations. Also popular was the column layout, and there are 16 such stations in the system.
The first stage is exquisitely decorated in the Stalinist Architecture style, but from 1958, Nikita Khrushchev's struggle with decorative extras restricted the vivid decorations to simple aesthetic themes. During this time a new design called "horizontal lift" became widespread, and 10 stations were built with this layout. The horizontal lift design is a variation of a station with platform screen doors, and has not been found outside Saint Petersburg. However, because the design became unpopular with passengers, and for technical reasons, no stations featuring this design were built between 1972 and 2018. From the mid-1970s, a new open "single-vault" design was developed by local engineers and became very popular, not only in Saint Petersburg but in some other cities as well. Known technically as Leningradsky Odnosvod, it remains the most popular of all and there are 16 such stations in the city.
The remaining stations are located virtually on the edge of the city, and one, Devyatkino, is territorially in Leningrad Oblast, far away from the harsh underground geology that forms the Neva Delta. The six shallow column stations are located in the southern and northwestern sections of the city, and the first three are found on the Kirovsko-Vyborgskaya Line. The first one, Avtovo, is considered to be one of the most beautiful stations in the world and was opened as part of the first stage in 1955, while the other two were built in the late 1970s as typical Moscow-style pillar trispan stations. There are two shallow-column stations on the Nevsko-Vasileostrovskaya Line: Zenit and Begovaya. Both of these stations, which use a modified version of the horizontal lift design, were opened in May 2018 as part of the line's extension to the northwestern section of the city. A sixth shallow-column station, Dunayskaya, opened in October 2019 as part of the Frunzensko-Primorskaya Line's southern extension. In addition, there are four termini stations that are on the surface and are located near the lines' connection with the train depots. The city's northern climate means that even here all of the station space is inside an enclosed structure.
Network map
Expansion Plans
The Metro has a very large expansion plan for the next half century. The Pravoberezhnaya Line was split in early 2009, and the new fifth line (Frunzensko-Primorskaya) took the northern (Primorsky) radius away from Pravoberezhnaya and opened with a new section (Frunzensky) to the south. The Pravoberezhnaya line will extend to the west, and then north to Lakhta, and then to Yuntolovo. The two stations, Bukharestskaya and Mezhdunarodnaya of the Frunzensko-Primorskaya line, opened in December 2012. In 2018 the expansion of Line 3 from Primorskaya to Begovaya opened adding 2 new stations (Zenit (Novokrestovskaya when opened) and Begovaya) and the newest extension of the system adding 3 new stations on Line 5 (Prospekt Slavy, Dunayskaya and Shushary) opened in 2019. Three new lines, Krasnoselsko-Kalininskaya, Admiralteysko-Okhtinskaya and Koltsevaya, are to be constructed in the future. The first six stations of the Krasnoselsko-Kalininskaya line are already under construction, and should be opened in 2 stages by 2024. The Admiralteysko-Okhtinskaya and Koltsevaya line should appear after the 2030s. A two-station expansion of Line 4 from Spasskaya to Gorny Institut is under construction and is set to open by 2024 and a two-station expansion of Line 3 from Begovaya to Kamenka is under construction with the opening set around 2028-2030.
Back in 2012, the official website of the Saint Petersburg metro claimed the opening of 54 new stations, 5 new depots and of new lines. Delays due to the difficult geology of the city's underground and to the insufficient funding have cut down these plans, as of 2014 (2 new stations later), to 17 new stations and one new depot until 2025.
At the same time, there are several short and mid-term projects on station upgrades, including escalator replacements and lighting upgrades.
Operation
The Metro is managed by the state municipal company Sankt-Peterburgsky Metropoliten (Saint Petersburg Metropolitan, ) that was privatised from the Ministry of Rail Services. The Metro was renamed to coincide with the city's name change in the early 1990s. The company employs several thousand men and women in station and track management as well as rolling stock operation and maintenance.
The Metro is financed by the city of Saint Petersburg, from passenger fares, and from advertisement space at the stations and on the trains. Metro construction is undertaken by the subsidiary Lenmetrostroy () that is financed by the Metro as well as directly by the Ministry of Transportation.
Rolling stock
The rolling stock of the metro is provided by five depots with a total of 1403 cars forming 188 trains. Most of the models are the Metrowagonmash 81-717/714 that are very common in all ex-Soviet cities. In addition there are older E and Em type trains on the Kirovsko-Vyborgskaya Line and newer 81-540/541 (built by Škoda Transportation unit Vagonmash) on the Pravoberezhnaya and Frunzensko-Primorskaya Lines. The Metro has also received 81-722 and 81-724 cars from Metrowagonmash, which are custom models specifically for Saint Petersburg. Both these and the Škoda cars are equipped with sliding doors that slide into pockets rather than the plug doors now being used elsewhere. This is because several stations on the system have platform doors that do not permit sufficient clearance for plug doors. Transmashholding unveiled a new prototype train for the St Petersburg Metro in June 2019. In August 2022 an order was placed for 950 cars; the first was delivered in September 2022.
Security
The Metro was originally built as a system that could offer shelter in case of a nuclear attack. Every station is equipped with CCTV surveillance following recent terrorist threats. Until the summer of 2009, all photography and video filming in the Metro required a written permit. However, because of a legal challenge by an amateur photographer, after 24 August 2009, photography without a flash can be done without a permit.
Incidents
Terrorist bombings
Approximately fourteen people died and over 50 sustained injuries from an explosion on 3 April 2017 on a train between Sennaya Ploshchad and Tekhnologichesky Institut stations, on Line 2. The explosion occurred at 2:20 pm local time. Seven individuals were confirmed DOA, while others were rushed to hospital. Russian President Vladimir Putin's hometown is Saint Petersburg, and he was visiting the city when the attack occurred. He issued condolence statements to the victims' families immediately after the blast.
Also on 3 April, Russia's National Anti-Terrorism Committee defused an improvised explosive device at Ploshchad Vosstaniya station, on Line 1.
| Technology | Russia | null |
1629332 | https://en.wikipedia.org/wiki/Schinus | Schinus | Schinus is a genus of flowering trees and tall shrubs in the sumac family, Anacardiaceae. Members of the genus are commonly known as pepper trees. The Peruvian pepper tree (Schinus molle) is the source of the spice known as pink peppercorn.
The species of Schinus are native to South America, ranging from Peru and northeastern Brazil to southern South America. Some species (e.g. Schinus terebinthifolia) have become an invasive species outside their natural habitats. Schinus polygama, although less well known, is also potentially weedy in mesic areas.
Etymology
The generic name is derived from the Greek word for Pistacia lentiscus, Σχίνος (schinos), which it resembles. Considerable historic confusion has existed as to the correct gender of the genus name; as of 2015, this has been resolved with the determination that the correct gender of Schinus is feminine (rather than masculine), and adjectival names within the genus must be spelled accordingly.
Species
34 species are currently accepted:
Schinus areira – Peru, Bolivia, northern Chile
Schinus bumelioides – northern Argentina
Schinus engleri F.A.Barkley – Argentina, Brazil, and Uruguay
Schinus fasciculata (Griseb.) I.M.Johnst. – Bolivia, Paraguay, and northern Argentina
Schinus ferox – southern Brazil, Paraguay, Uruguay, northern Argentina (Misiones Province)
Schinus gracilipes – northwestern Argentina
Schinus johnstonii – Argentina and Uruguay
Schinus kauselii – central Chile
Schinus latifolia (Gillies ex Lindl.) Engl. – central Chile
Schinus lentiscifolia – southern Brazil, Paraguay, Uruguay, northern Argentina (Misiones Province)
Schinus longifolia – west-central and southern Brazil, Paraguay, Uruguay, and northern Argentina
Schinus marchandii – southern Chile and southern Argentina
Schinus meyeri – Bolivia and northwestern Argentina (Salta Province)
Schinus microphylla – Peru and Bolivia
Schinus molle L. Peruvian pepper tree
Schinus molle var. molle (=S. bituminosa, S. occidentalis) – Peru and northern Chile; southern Brazil, Paraguay, Uruguay, and northeastern Argentina
Schinus molle var. rusbyi (L.) DC. – southern Peru and northern Chile
Schinus montana – central Chile
Schinus myrtifolia – Bolivia and northwestern Argentina
Schinus odonellii – southern Chile and western Argentina
Schinus pampeana – southern Brazil (Rio Grande do Sul)
Schinus patagonica – central and southern Chile and western Argentina
Schinus pearcei Engler – Bolivia, northern Chile, and Peru
Schinus pilifera – Bolivia and northern Argentina
Schinus polygama (Cav.) Cabrera (=S. dentata, S. dependens) – Chile and northwestern Argentina (Mendoza)
Schinus praecox – north-central Argentina
Schinus ramboi – southern and southeastern Brazil
Schinus roigii – western Argentina
Schinus sinuata – northeastern Argentina
Schinus spinosa – Brazil (Paraná state)
Schinus talampaya – northwestern Argentina (San Juan and La Rioja)
Schinus terebinthifolia Raddi Brazilian pepper tree – northeastern to southeastern Brazil, northeastern Argentina, and Paraguay
Schinus terebinthifolia var. acutifolia Engl.
Schinus terebinthifolia var. terebinthifolia (=S. aroiera, S. chichita, S. mellisii, S. mucronulata, S. rhoifolia)
Schinus uruguayensis – southern Brazil, Uruguay, and northeastern Argentina
Schinus velutina – central Chile
Schinus venturii F.A.Barkley – southern Bolivia to northwestern Argentina (Salta)
Schinus weinmanniifolia Mart. ex Engl. – south-central to eastern and southern Brazil, Paraguay, Uruguay, and northeastern Argentina
Formerly placed here
Cuscuta myricoides (L.) Druce (as S. myricoides L.)
Limonia acidissima L. (as S. limonia L.)
Lithraea molleoides (Vell.) Engl. (as S. molleoides Vell.)
Zanthoxylum fagara (L.) Sarg. (as S. fagara L.)
| Biology and health sciences | Sapindales | Plants |
8955397 | https://en.wikipedia.org/wiki/Internal%20structure%20of%20the%20Moon | Internal structure of the Moon | Having a mean density of 3,346.4 kg/m3, the Moon is a differentiated body, being composed of a geochemically distinct crust, mantle, and planetary core. This structure is believed to have resulted from the fractional crystallization of a magma ocean shortly after its formation about 4.5 billion years ago. The energy required to melt the outer portion of the Moon is commonly attributed to a giant impact event that is postulated to have formed the Earth-Moon system, and the subsequent reaccretion of material in Earth orbit. Crystallization of this magma ocean would have given rise to a mafic mantle and a plagioclase-rich crust.
Geochemical mapping from orbit implies that the crust of the Moon is largely anorthositic in composition, consistent with the magma ocean hypothesis. In terms of elements, the lunar crust is composed primarily of oxygen, silicon, magnesium, iron, calcium, and aluminium, but important minor and trace elements such as titanium, uranium, thorium, potassium, sulphur, manganese, chromium and hydrogen are present as well. Based on geophysical techniques, the crust is estimated to be on average about 50 km thick.
Partial melting within the mantle of the Moon gave rise to the eruption of mare basalts on the lunar surface. Analyses of these basalts indicate that the mantle is composed predominantly of the minerals olivine, orthopyroxene and clinopyroxene, and that the lunar mantle is more iron-rich than that of the Earth. Some lunar basalts contain high abundances of titanium (present in the mineral ilmenite), suggesting that the mantle is highly heterogeneous in composition. Moonquakes have been found to occur deep within the mantle of the Moon about 1,000 km below the surface. These occur with monthly periodicities and are related to tidal stresses caused by the eccentric orbit of the Moon about the Earth. A few shallow moonquakes with hypocenters located about 100 km below the surface have also been detected, but these occur more infrequently and appear to be unrelated to the lunar tides.
Core
Several lines of evidence imply that the lunar core is small, with a radius of about 350 km or less. The diameter of the lunar core is only about 20% of the diameter of the Moon itself, in contrast to about 50% as is the case for most other terrestrial bodies. The composition of the lunar core is not well constrained, but most believe that it is composed of a metallic iron alloy with a small amount of sulfur and nickel. Analyses of the Moon's time-variable rotation indicate that the core is at least partly molten. Within the giant-impact formation scenario, the core formation of the Moon could have occurred within the first 100–1000 years from the commencement of its accretion from moonlets.
In 2010, a reanalysis of the old Apollo seismic data on the deep moonquakes using modern processing methods confirmed that the Moon has an iron rich core with a radius of . The same reanalysis established that the solid inner core made of pure iron has a radius of . The core is surrounded by the partially (10 to 30%) melted layer of the lower mantle with a radius of (thickness ~150 km). These results imply that 40% of the core by volume has solidified. The density of the liquid outer core is about 5 g/cm3 and it could contain as much as 6% sulfur by weight. The temperature in the core is probably about 1600–1700 K (1330–1430 °C).
In 2019, a reanalysis of nearly 50 years of data collected from the Lunar Laser Ranging experiment with lunar gravity field data from the GRAIL mission, shows that for a relaxed lunar fluid core with non-hydrostatic lithospheres, the core flattening is determined as with the radii of its core-mantle boundary as .
| Physical sciences | Solar System | Astronomy |
8955468 | https://en.wikipedia.org/wiki/Selenography | Selenography | Selenography is the study of the surface and physical features of the Moon (also known as geography of the Moon, or selenodesy). Like geography and areography, selenography is a subdiscipline within the field of planetary science. Historically, the principal concern of selenographists was the mapping and naming of the lunar terrane identifying maria, craters, mountain ranges, and other various features. This task was largely finished when high resolution images of the near and far sides of the Moon were obtained by orbiting spacecraft during the early space era. Nevertheless, some regions of the Moon remain poorly imaged (especially near the poles) and the exact locations of many features (like crater depths) are uncertain by several kilometers. Today, selenography is considered to be a subdiscipline of selenology, which itself is most often referred to as simply "lunar science." The word selenography is derived from the Greek word Σελήνη (Selene, meaning Moon) and γράφω graphō, meaning to write.
History
The idea that the Moon is not perfectly smooth dates back to at least , when Democritus asserted that the Moon's "lofty mountains and hollow valleys" were the cause of its markings. However, not until the end of the 15th century AD did serious selenography begin. Around AD 1603, William Gilbert made the first lunar drawing based on naked-eye observation. Others soon followed, and when the telescope was invented, initial drawings of poor accuracy were made, but soon thereafter improved in tandem with optics. In the early 18th century, the librations of the Moon were measured, which revealed that more than half of the lunar surface was visible to observers on Earth. In 1750, Johann Meyer produced the first reliable set of lunar coordinates that permitted astronomers to locate lunar features.
Lunar mapping became systematic in 1779 when Johann Schröter began meticulous observation and measurement of lunar topography. In 1834 Johann Heinrich von Mädler published the first large cartograph (map) of the Moon, comprising 4 sheets, and he subsequently published The Universal Selenography. All lunar measurement was based on direct observation until March 1840, when J.W. Draper, using a 5-inch reflector, produced a daguerreotype of the Moon and thus introduced photography to astronomy. At first, the images were of very poor quality, but as with the telescope 200 years earlier, their quality rapidly improved. By 1890 lunar photography had become a recognized subdiscipline of astronomy.
Lunar photography
The 20th century witnessed more advances in selenography. In 1959, the Soviet spacecraft Luna 3 transmitted the first photographs of the far side of the Moon, giving the first view of it in history. The United States launched the Ranger spacecraft between 1961 and 1965 to photograph the lunar surface until the instant they impacted it, the Lunar Orbiters between 1966 and 1967 to photograph the Moon from orbit, and the Surveyors between 1966 and 1968 to photograph and softly land on the lunar surface. The Soviet Lunokhods 1 (1970) and 2 (1973) traversed almost 50 km of the lunar surface, making detailed photographs of the lunar surface. The Clementine spacecraft obtained the first nearly global cartograph (map) of the lunar topography, and also multispectral images. Successive missions transmitted photographs of increasing resolution.
Lunar topography
The topography of the Moon has been measured by laser altimetry and stereo image analysis, including data obtained during several missions. The most visible topographical feature is the giant far-side South Pole-Aitken basin, which possesses the lowest elevations of the Moon. The highest elevations are found just to the northeast of this basin, and it has been suggested that this area might represent thick ejecta deposits that were emplaced during an oblique South Pole-Aitken basin impact event. Other large impact basins, such as the maria Imbrium, Serenitatis, Crisium, Smythii, and Orientale, also possess regionally low elevations and elevated rims.
Another distinguishing feature of the Moon's shape is that the elevations are on average about 1.9 km higher on the far side than the near side. If it is assumed that the crust is in isostatic equilibrium, and that the density of the crust is everywhere the same, then the higher elevations would be associated with a thicker crust. Using gravity, topography and seismic data, the crust is thought to be on average about thick, with the far-side crust being on average thicker than the near side by about 15 km.
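The isostasy argument can be illustrated with a simple Airy-type balance. The sketch below is illustrative only: the crust and mantle densities are typical assumed values, not figures given in this article, and the function name is ours.

```python
# Illustrative Airy-isostasy sketch (assumed densities; not values from this article).
RHO_CRUST = 2900.0   # kg/m^3, assumed lunar crustal density
RHO_MANTLE = 3300.0  # kg/m^3, assumed lunar mantle density

def elevation_offset_km(extra_crust_km: float) -> float:
    """Extra surface elevation supported by extra crustal thickness, for columns
    in Airy isostatic balance with a uniform crustal density."""
    return extra_crust_km * (RHO_MANTLE - RHO_CRUST) / RHO_MANTLE

print(round(elevation_offset_km(15.0), 2))  # ≈ 1.82 km
```

Under these assumed densities, a crust thicker by about 15 km stands roughly 1.8 km higher, consistent with the roughly 1.9 km far-side elevation offset quoted above.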
Lunar cartography and toponymy
The oldest known illustration of the Moon was found in a passage grave in Knowth, County Meath, Ireland. The tomb was carbon-dated to 3330–2790 BC. Leonardo da Vinci made and annotated some sketches of the Moon in c. 1500. In the late 16th century, William Gilbert made a drawing of the Moon in which he denominated a dozen surface features; it was published posthumously in De Mundo Nostro Sublunari Philosophia Nova. After the invention of the telescope, Thomas Harriot (1609), Galileo Galilei (1609), and Christoph Scheiner (1614) also made drawings.
Denominations of the surface features of the Moon, based on telescopic observation, were made by Michael van Langren in 1645. Many of his denominations were distinctly Catholic, denominating craters in honor of Catholic royalty and capes and promontories in honor of Catholic saints. The lunar maria were denominated in Latin for terrestrial seas and oceans. Minor craters were denominated in honor of astronomers, mathematicians, and other famous scholars.
In 1647, Johannes Hevelius produced the rival work Selenographia, which was the first lunar atlas. Hevelius ignored the nomenclature of Van Langren and instead denominated the lunar topography according to terrestrial features, such that the names of lunar features corresponded to the toponyms of their geographical terrestrial counterparts, especially as the latter were denominated by the ancient Roman and Greek civilizations. This work of Hevelius influenced his contemporary European astronomers, and the Selenographia was the standard reference on selenography for over a century.
Giambattista Riccioli, SJ, a Catholic priest and scholar who lived in northern Italy, authored the present scheme of Latin lunar nomenclature. His Almagestum novum was published in 1651 as a summary of then-current astronomical thinking and recent developments. In particular, he outlined the arguments in favor of and against various cosmological models, both heliocentric and geocentric. Almagestum Novum contained scientific reference matter based on contemporary knowledge, and contemporary educators across Europe widely used it. Although this handbook of astronomy has long since been superseded, its system of lunar nomenclature is used even today.
The lunar illustrations in the Almagestum novum were drawn by a fellow Jesuit educator named Francesco Grimaldi, SJ. The nomenclature was based on a subdivision of the visible lunar surface into octants that were numbered in Roman style from I to VIII. Octant I referenced the northwest section and subsequent octants proceeded clockwise in alignment with compass directions. Thus Octant VI was to the south and included Clavius and Tycho Craters.
The Latin nomenclature had two components: the first denominated the broad features of terrae (lands) and maria (seas) and the second denominated the craters. Riccioli authored lunar toponyms derived from the names of various conditions, including climatic ones, whose causes were historically attributed to the Moon. Thus there were the seas of crises ("Mare Crisium"), serenity ("Mare Serenitatis"), and fertility ("Mare Fecunditatis"). There were also the seas of rain ("Mare Imbrium"), clouds ("Mare Nubium"), and cold ("Mare Frigoris"). The topographical features between the maria were comparably denominated, but were given toponyms opposite in meaning to those of the maria. Thus there were the lands of sterility ("Terra Sterilitatis"), heat ("Terra Caloris"), and life ("Terra Vitae"). However, these names for the highland regions were supplanted on later cartographs (maps). See List of features on the Moon for a complete list.
Many of the craters were denominated topically pursuant to the octant in which they were located. Craters in Octants I, II, and III were primarily denominated based on names from ancient Greece, such as Plato, Atlas, and Archimedes. Toward the middle in Octants IV, V, and VI craters were denominated based on names from the ancient Roman Empire, such as Julius Caesar, Tacitus, and Taruntius. Toward the southern half of the lunar cartograph (map) craters were denominated in honor of scholars, writers, and philosophers of medieval Europe and Arabic regions. The outer extremes of Octants V, VI, and VII, and all of Octant VIII were denominated in honor of contemporaries of Giambattista Riccioli. Features of Octant VIII were also denominated in honor of Copernicus, Kepler, and Galileo. These persons were "banished" to it far from the "ancients," as a gesture to the Catholic Church. Many craters around the Mare Nectaris were denominated in honor of Catholic saints pursuant to the nomenclature of Van Langren. All of them were, however, connected in some mode with astronomy. Later cartographs (maps) removed the "St." from their toponyms.
The lunar nomenclature of Giambattista Riccioli was widely used after the publication of his Almagestum Novum, and many of its toponyms are presently used. The system was scientifically inclusive and was considered eloquent and poetic in style, and therefore it appealed widely to his contemporaries. It was also readily extensible with new toponyms for additional features. Thus it replaced the nomenclature of Van Langren and Hevelius.
Later astronomers and lunar cartographers augmented the nomenclature with additional toponyms. The most notable among these contributors was Johann H. Schröter, who published a very detailed cartograph (map) of the Moon in 1791 titled the Selenotopografisches Fragmenten. Schröter's adoption of Riccioli's nomenclature perpetuated it as the universally standard lunar nomenclature. A vote of the International Astronomical Union (IAU) in 1935 established the lunar nomenclature of Riccioli, which included 600 lunar toponyms, as universally official and doctrinal.
The IAU later expanded and updated the lunar nomenclature in the 1960s, but new toponyms were limited to toponyms honoring deceased scientists. After Soviet spacecraft photographed the far side of the Moon, many of the newly discovered features were denominated in honor of Soviet scientists and engineers. The IAU assigned all subsequent new lunar toponyms. Some craters were denominated in honor of space explorers.
Satellite craters
Johann H. Mädler authored the nomenclature for satellite craters. The subsidiary craters surrounding a major crater were identified by a letter. These subsidiary craters were usually smaller than the crater with which they were associated, with some exceptions. The craters could be assigned letters "A" through "Z," with "I" omitted. Because the great majority of the toponyms of craters were masculine, the major craters were generically denominated "patronymic" craters.
The assignment of the letters to satellite craters was originally somewhat haphazard. Letters were typically assigned to craters in order of significance rather than location. Precedence depended on the angle of illumination from the Sun at the time of the telescopic observation, which could change during the lunar day. In many cases the assignments were seemingly random. In a number of cases the satellite crater was located closer to a major crater with which it was not associated. To identify the patronymic crater, Mädler placed the identifying letter to the side of the midpoint of the feature that was closest to the associated major crater. This also had the advantage of permitting omission of the toponyms of the major craters from the cartographs (maps) when their subsidiary features were labelled.
Over time, lunar observers assigned many of the satellite craters an eponym. The International Astronomical Union (IAU) assumed authority to denominate lunar features in 1919. The commission for denominating these features formally adopted the convention of using capital Roman letters to identify craters and valleys.
When suitable maps of the far side of the Moon became available by 1966, Ewen Whitaker denominated satellite features based on the angle of their location relative to the major crater with which they were associated. A satellite crater located due north of the major crater was identified as "Z". The full 360° circle around the major crater was then subdivided evenly into 24 parts, like a 24-hour clock. Each "hour" angle, running clockwise, was assigned a letter, beginning with "A" at 1 o'clock. The letters "I" and "O" were omitted, resulting in only 24 letters. Thus a crater due south of its major crater was identified as "M".
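Whitaker's convention can be read as a simple mapping from the satellite crater's bearing (measured clockwise from due north of the parent crater) to one of 24 letters. The following sketch is only an illustration of the scheme as described above; the function name and the rounding to the nearest "hour" are ours, not part of the formal convention.

```python
# Sketch of the 1966 lettering convention as described above (illustrative only).
LETTERS = "ABCDEFGHJKLMNPQRSTUVWXYZ"  # 24 letters; "I" and "O" omitted

def satellite_letter(bearing_deg: float) -> str:
    """Letter for a satellite crater at a given bearing, measured clockwise from
    due north of the parent crater, with "A" at the 1 o'clock (15°) position."""
    hour = round((bearing_deg % 360) / 15.0)  # nearest of the 24 "hour" angles
    hour = 24 if hour == 0 else hour          # due north is hour 24, i.e. "Z"
    return LETTERS[hour - 1]

print(satellite_letter(180))  # "M" (due south), matching the example above
print(satellite_letter(0))    # "Z" (due north)
```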
Reference elevation
The Moon lacks any mean sea level that could be used as a vertical datum.
The Lunar Orbiter Laser Altimeter (LOLA), an instrument on NASA's Lunar Reconnaissance Orbiter (LRO), provides the data for a USGS digital elevation model (DEM) that uses the nominal lunar radius of .
The selenoid (the geoid for the Moon) has been measured gravimetrically by the GRAIL twin satellites.
Historical lunar maps
The following historically notable lunar maps and atlases are arranged in chronological order by publication date.
Michael van Langren, engraved map, 1645.
Johannes Hevelius, Selenographia, 1647.
Giovanni Battista Riccioli and Francesco Maria Grimaldi, Almagestum novum, 1651.
Giovanni Domenico Cassini, engraved map, 1679 (reprinted in 1787).
Tobias Mayer, engraved map, 1749, published in 1775.
Johann Hieronymus Schröter, Selenotopografisches Fragmenten, 1st volume 1791, 2nd volume 1802.
John Russell, engraved images, 1805.
Wilhelm Lohrmann, Topographie der sichtbaren Mondoberfläche, Leipzig, 1824.
Wilhelm Beer and Johann Heinrich Mädler, Mappa Selenographica totam Lunae hemisphaeram visibilem complectens, Berlin, 1834-36.
Edmund Neison, The Moon, London, 1876.
Julius Schmidt, Charte der Gebirge des Mondes, Berlin, 1878.
Thomas Gwyn Elger, The Moon, London, 1895.
Johann Krieger, Mond-Atlas, 1898. Two additional volumes were published posthumously in 1912 by the Vienna Academy of Sciences.
Walter Goodacre, Map of the Moon, London, 1910.
Mary A. Blagg and Karl Müller, Named Lunar Formations, 2 volumes, London, 1935.
Philipp Fauth, Unser Mond, Bremen, 1936.
Hugh P. Wilkins, 300-inch Moon map, 1951.
Gerard Kuiper et al., Photographic Lunar Atlas, Chicago, 1960.
Ewen A. Whitaker et al., Rectified Lunar Atlas, Tucson, 1963.
Hermann Fauth and Philipp Fauth (posthumously), Mondatlas, 1964.
Gerard Kuiper et al., System of Lunar Craters, 1966.
Yu I. Efremov et al., Atlas Obratnoi Storony Luny, Moscow, 1967–1975.
NASA, Lunar Topographic Orthophotomaps, 1978.
Antonín Rükl, Atlas of the Moon, 2004.
| Physical sciences | Solar System | Astronomy |
8960524 | https://en.wikipedia.org/wiki/Double%20Cluster | Double Cluster | The Double Cluster (also known as Caldwell 14) consists of the open clusters NGC 869 and NGC 884 (often designated h Persei and χ (chi) Persei, respectively), which are close together in the constellation Perseus. Both visible with the naked eye, NGC 869 and NGC 884 lie at a distance of about 7,500 light years in the Perseus Arm of the Milky Way galaxy.
Membership
NGC 869 has a mass of 4,700 solar masses and NGC 884 weighs in at 3,700 solar masses; both clusters are surrounded with a very extensive halo of stars, with a total mass for the complex of at least 20,000 solar masses. They form the core of the Perseus OB1 association of young hot stars.
Based on their individual stars, the clusters are relatively young, both 14 million years old. In comparison, the Pleiades have an estimated age ranging from 75 million years to 150 million years.
There are more than 300 blue-white super-giant stars in each of the clusters. The clusters are also blueshifted, with NGC 869 approaching Earth at a speed of and NGC 884 approaching at a similar speed of . Their hottest main sequence stars are of spectral type B0. NGC 884 includes five prominent red supergiant stars, all variable and all around 8th magnitude: RS Persei, AD Persei, FZ Persei, V403 Persei, and V439 Persei.
History
Greek astronomer Hipparchus cataloged the object (a patch of light in Perseus) as early as 130 BCE. To Bedouin Arabs the cluster marked the tail of the smaller of two fish they visualized in this area, and it was shown on illustrations in Abd al-Rahman al-Sufi's Book of Fixed Stars. However, the true nature of the Double Cluster was not discovered until the invention of the telescope, many centuries later. In the early 19th century William Herschel was the first to recognize the object as two separate clusters. The Double Cluster is not included in Messier's catalog, but is included in the Caldwell catalogue of popular deep-sky objects.
The clusters were designated h Persei and χ Persei by Johann Bayer in his Uranometria (1603). It is sometimes claimed that Bayer did not resolve the pair into two patches of nebulosity, and that χ refers to the Double Cluster and h to a nearby star. Bayer's Uranometria chart for Perseus does not show them as nebulous objects, but his chart for Cassiopeia does, and they are described as Nebulosa Duplex in Schiller's Coelum Stellatum Christianum, which was assembled with Bayer's help.
Location
The Double Cluster is circumpolar (continuously above the horizon) from most northern temperate latitudes. It is in proximity to the constellation Cassiopeia. This northern location renders this object invisible from locations south of about 30° south latitude, such as New Zealand, most of Australia and South Africa. The Double Cluster lies approximately at the radiant of the Perseid meteor shower, which peaks annually around August 12 or 13. Although easy to locate in the northern sky, observing the Double Cluster in its two parts requires optical aid. They are described as being an "awe-inspiring" and "breathtaking" sight, and are often cited as a target in astronomy observer's guides.
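The quoted southern limit follows from the standard visibility rule that an object with declination δ never rises for observers at latitudes below −(90° − δ). As a rough check, the sketch below uses a declination of about +57°, a commonly cited value for the Double Cluster that is not stated in this article.

```python
# Rough visibility check; the ~+57° declination is an assumed value, not from this text.
def southern_visibility_limit_deg(declination_deg: float) -> float:
    """Latitude south of which an object at this declination never rises
    (ignoring refraction and the object's angular extent)."""
    return -(90.0 - declination_deg)

print(southern_visibility_limit_deg(57.0))  # ≈ -33, i.e. invisible from roughly 30°+ south
```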
Mythology
Perseus was a famous hero of Greek mythology, a son of the Greek god Zeus. Along with beheading Medusa, Perseus performed other heroic deeds such as saving princess Andromeda who was chained to a rock as a sacrifice to a sea monster, Cetus. The gods commemorated Perseus by placing him among the stars, with the head of Medusa in one hand and the jeweled sword in the other. The Double Cluster represents the jeweled handle of his sword.
| Physical sciences | Notable star clusters | Astronomy |
62332 | https://en.wikipedia.org/wiki/Deep-sea%20fish | Deep-sea fish | Deep-sea fish are fish that live in the darkness below the sunlit surface waters, that is below the epipelagic or photic zone of the sea. The lanternfish is, by far, the most common deep-sea fish. Other deep-sea fishes include the flashlight fish, cookiecutter shark, bristlemouths, anglerfish, viperfish, and some species of eelpout.
Only about 2% of known marine species inhabit the pelagic environment. This means that they live in the water column as opposed to the benthic organisms that live in or on the sea floor. Deep-sea organisms generally inhabit bathypelagic ( deep) and abyssopelagic ( deep) zones. However, characteristics of deep-sea organisms, such as bioluminescence can be seen in the mesopelagic ( deep) zone as well. The mesopelagic zone is the disphotic zone, meaning light there is minimal but still measurable. The oxygen minimum layer exists somewhere between a depth of and deep depending on the place in the ocean. This area is also where nutrients are most abundant. The bathypelagic and abyssopelagic zones are aphotic, meaning that no light penetrates this area of the ocean. These zones make up about 75% of the inhabitable ocean space.
The epipelagic zone ( deep) is the area where light penetrates the water and photosynthesis occurs. This is also known as the photic zone. Because this typically extends only a few hundred meters below the water, the deep sea, about 90% of the ocean volume, is in darkness. The deep sea is also an extremely hostile environment, with temperatures that rarely exceed and fall as low as (with the exception of hydrothermal vent ecosystems that can exceed 350 °C, or 662 °F), low oxygen levels, and pressures between 20 and 1000 atm (between 2 and 100 MPa).
Evolution
It has been speculated that deep-sea ecosystems may have been inhospitable to vertebrate life prior to an increased influx of nutrients into the ocean during the Late Jurassic and Early Cretaceous following the rise of angiosperms on land, which led to an increase in abyssal invertebrate life, allowing fish to in turn colonize these ecosystems. However, some modern deep-sea fish, such as holocephalians, are descendants of much older lineages, indicating that much earlier colonizations of the deep-sea by vertebrates may have occurred, although no fossil evidence of this is known.
The earliest known records of deep-sea fish are trace fossils of feeding and swimming behavior attributed to unidentified neoteleosts (referable to the ichnogenera Piscichnus and Undichna), from the Early Cretaceous (130 million-year-old) Palombini Shale of Italy, which is thought to have been deposited in the abyssal plain of the former Piemont-Liguria Ocean. Prior to the discovery of these fossils, there was no evidence for deep-sea bony fish older than 50 million years in the Paleogene. The Cretaceous origin for most modern deep-sea fish has been further affirmed with phylogenetic studies such as those of aulopiform fish, which indicate that many deep-sea lineages of these groups originated around this time.
Although the records from the Palombini Shale represent the earliest records of deep-sea bony fish, formations that preserve deepwater shark fossils are also known from later in the Cretaceous. These include the Northumberland Formation of Canada and similarly aged deposits in Angola, both of which preserve fossils of taxa such as hexanchids, chlamydoselachids, and catsharks, which are known from deepwater habitats today but rare in other formations of the time. Paleogene formations with fossil deep-sea shark teeth are known from New Zealand for the middle Paleocene, and formations in Denmark, France, Austria, and Morocco during the Eocene. The Paratethys Sea still supported deepwater sharks and rays into the Miocene, which are preserved in formations in Hungary.
During the Paleogene, some prominent formations that preserve well-articulated specimens of deep-sea bony fish are known. These include the Monte Solane lagerstatte of early Eocene Italy, which preserves a bathypelagic habitat likely deposited 300–600 meters under the sea, as well as the late Eocene Pabdeh Formation of Iran. The deep-sea environments preserved by both formations are apparent through their abundance of fossil stomiiform fish. Notable Neogene formations that preserve fossils of deep-sea bony fish are known from the Miocene of Italy, Japan, and California.
Environment
In the deep ocean, the waters extend far below the epipelagic zone, and support very different types of pelagic fishes adapted to living in these deeper zones. In deep water, marine snow is a continuous shower of mostly organic detritus falling from the upper layers of the water column. Its origin lies in activities within the productive photic zone. Marine snow includes dead or dying plankton, protists (diatoms), fecal matter, sand, soot and other inorganic dust. The "snowflakes" grow over time and may reach several centimetres in diameter, travelling for weeks before reaching the ocean floor. However, most organic components of marine snow are consumed by microbes, zooplankton and other filter-feeding animals within the first of their journey, that is, within the epipelagic zone. In this way marine snow may be considered the foundation of deep-sea mesopelagic and benthic ecosystems: As sunlight cannot reach them, deep-sea organisms rely heavily on marine snow as an energy source. Since there is no light in the deep sea (aphotic), there is a lack of primary producers. Therefore, most organisms in the bathypelagic rely on the marine snow from regions higher in the vertical column.
Some deep-sea pelagic groups, such as the lanternfish, ridgehead, marine hatchetfish, and lightfish families, are sometimes termed pseudoceanic because, rather than having an even distribution in open water, they occur in significantly higher abundances around structural oases, notably seamounts and over continental slopes. The phenomenon is explained by the likewise abundance of prey species which are also attracted to the structures.
Hydrostatic pressure increases by 1 atm (0.1 MPa) for every 10 m (32.8 ft) in depth. Deep-sea organisms have the same pressure within their bodies as is exerted on them from the outside, so they are not crushed by the extreme pressure. Their high internal pressure, however, results in the reduced fluidity of their membranes because molecules are squeezed together. Fluidity in cell membranes increases efficiency of biological functions, most importantly the production of proteins, so organisms have adapted to this circumstance by increasing the proportion of unsaturated fatty acids in the lipids of the cell membranes. In addition to differences in internal pressure, these organisms have developed a different balance between their metabolic reactions from those organisms that live in the epipelagic zone. David Wharton, author of Life at the Limits: Organisms in Extreme Environments, notes "Biochemical reactions are accompanied by changes in volume. If a reaction results in an increase in volume, it will be inhibited by pressure, whereas, if it is associated with a decrease in volume, it will be enhanced". This means that their metabolic processes must ultimately decrease the volume of the organism to some degree.
Most fish that have evolved in this harsh environment are not capable of surviving in laboratory conditions, and attempts to keep them in captivity have led to their deaths. Deep-sea organisms contain gas-filled spaces (vacuoles). Gas is compressed under high pressure and expands under low pressure. Because of this, these organisms have been known to blow up if they come to the surface.
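The expansion of gas-filled spaces on ascent follows Boyle's law: at roughly constant temperature, gas volume varies inversely with pressure. The sketch below is illustrative only; the depths are example values, and the 1 atm per 10 m figure is the rule of thumb used later in this article.

```python
# Illustration of Boyle's law for an ascending gas space (constant temperature assumed).
def relative_volume(depth_from_m: float, depth_to_m: float) -> float:
    """Factor by which a gas volume changes when moved between depths, taking
    ~1 atm of air pressure at the surface plus ~1 atm per 10 m of water."""
    pressure_from = 1.0 + depth_from_m / 10.0
    pressure_to = 1.0 + depth_to_m / 10.0
    return pressure_from / pressure_to

print(round(relative_volume(1000.0, 0.0)))  # gas from 1,000 m expands ~101-fold at the surface
```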
Characteristics
The fish of the deep-sea have evolved various adaptations to survive in this region. Since many of these fish live in regions where there is no natural illumination, they cannot rely solely on their eyesight for locating prey and mates and avoiding predators; deep-sea fish have evolved appropriately to the extreme sub-photic region in which they live. Many of these organisms are blind and rely on their other senses, such as sensitivities to changes in local pressure and smell, to catch their food and avoid being caught. Those that aren't blind have large and sensitive eyes that can use bioluminescent light. These eyes can be as much as 100 times more sensitive to light than human eyes. Rhodopsin (Rh1) is a protein found in the eye’s rod cells that helps animals see in dim light. While most vertebrates usually have one Rh1 opsin gene, some deep-sea fish have several Rh1 genes, and one species, the silver spinyfin (Diretmus argenteus), has 38. This proliferation of Rh1 genes may help deep-sea fish to see in the depths of the ocean. Also, to avoid predation, many species are dark to blend in with their environment.
Many deep-sea fish are bioluminescent, with extremely large eyes adapted to the dark. Bioluminescent organisms are capable of producing light biologically through the agitation of molecules of luciferin, which then produce light. This process must be done in the presence of oxygen. These organisms are common in the mesopelagic region and below ( and below). More than 50% of deep-sea fish, as well as some species of shrimp and squid, are capable of bioluminescence. About 80% of these organisms have photophores – light producing glandular cells that contain luminous bacteria bordered by dark colourings. Some of these photophores contain lenses, much like those in the eyes of humans, which can intensify or lessen the emanation of light. The ability to produce light only requires 1% of the organism's energy and has many purposes: It is used to search for food and attract prey, like the anglerfish; claim territory through patrol; communicate and find a mate, and distract or temporarily blind predators to escape. Also, in the mesopelagic where some light still penetrates, some organisms camouflage themselves from predators below them by illuminating their bellies to match the colour and intensity of light from above so that no shadow is cast. This tactic is known as counter-illumination.
The lifecycle of deep-sea fish can be exclusively deep-water, although some species are born in shallower water and sink upon maturation. Regardless of the depth where eggs and larvae reside, they are typically pelagic. This planktonic — drifting — lifestyle requires neutral buoyancy. In order to maintain this, the eggs and larvae often contain oil droplets in their plasma. When these organisms are in their fully matured state, they need other adaptations to maintain their positions in the water column. In general, water's density causes upthrust — the aspect of buoyancy that makes organisms float. To counteract this, the density of an organism must be greater than that of the surrounding water. Most animal tissues are denser than water, so they must find an equilibrium to make them float. Many organisms develop swim bladders (gas cavities) to stay afloat, but because of the high pressure of their environment, deep-sea fishes usually do not have this organ. Instead they exhibit structures similar to hydrofoils in order to provide hydrodynamic lift. It has also been found that the deeper a fish lives, the more jelly-like its flesh and the more minimal its bone structure. They reduce their tissue density through high fat content, reduction of skeletal weight — accomplished through reductions of size, thickness and mineral content — and water accumulation, which makes them slower and less agile than surface fish. The body shapes of deep-sea fish are generally better adapted for periodic bursts of swimming rather than constant swimming.
Due to the poor level of photosynthetic light reaching deep-sea environments, most fish need to rely on organic matter sinking from higher levels, or, in rare cases, hydrothermal vents for nutrients. This makes the deep-sea much poorer in productivity than shallower regions. Also, animals in the pelagic environment are sparse and food doesn't come along frequently. Because of this, organisms need adaptations that allow them to survive. Some have long feelers to help them locate prey or attract mates in the pitch black of the deep ocean. The deep-sea angler fish in particular has a long fishing-rod-like adaptation protruding from its face, on the end of which is a bioluminescent piece of skin that wriggles like a worm to lure its prey. Some must consume other fish that are the same size or larger than them and they need adaptations to help digest them efficiently. Great sharp teeth, hinged jaws, disproportionately large mouths, and expandable bodies are a few of the characteristics that deep-sea fishes have for this purpose. The gulper eel is one example of an organism that displays these characteristics.
Fish in the different pelagic and deep-water benthic zones are physically structured, and behave in ways, that differ markedly from each other. Groups of coexisting species within each zone all seem to operate in similar ways, such as the small mesopelagic vertically migrating plankton-feeders, the bathypelagic anglerfishes, and the deep-water benthic rattails.
Ray-finned species with spiny fins are rare among deep-sea fishes, which suggests that deep-sea fish are ancient and so well adapted to their environment that invasions by more modern fishes have been unsuccessful. The few ray fins that do exist are mainly in the Beryciformes and Lampriformes, which are also ancient forms. Most deep-sea pelagic fishes belong to their own orders, suggesting a long evolution in deep-sea environments. In contrast, deep-water benthic species are in orders that include many related shallow-water fishes.
Mesopelagic fish
Below the epipelagic zone, conditions change rapidly. Between 200 m and about 1000 m, light continues to fade until there is almost none. Temperatures fall through a thermocline to temperatures between 3.9 °C (39 °F) and 7.8 °C (46 °F). This is the twilight or mesopelagic zone. Pressure continues to increase, at the rate of one atm (0.1 MPa) every 10 m (32.8 ft), while nutrient concentrations fall, along with dissolved oxygen and the rate at which the water circulates.
Sonar operators, using the newly developed sonar technology during , were puzzled by what appeared to be a false sea floor deep by day, and less deep at night. This turned out to be due to millions of marine organisms, most particularly small mesopelagic fish, with swim bladders that reflected the sonar. These organisms migrate up into shallower water at dusk to feed on plankton. The layer is deeper when the moon is out, and can become shallower when clouds pass over the moon. This phenomenon has come to be known as the deep scattering layer.
Most mesopelagic fish make daily vertical migrations, moving at night into the epipelagic zone, often following similar migrations of zooplankton, and returning to the depths for safety during the day. These vertical migrations often occur over large vertical distances, and are undertaken with the assistance of a swim bladder. The swim bladder is inflated when the fish wants to move up, and, given the high pressures in the mesopelagic zone, this requires significant energy. As the fish ascends, the pressure in the swim bladder must adjust to prevent it from bursting. When the fish wants to return to the depths, the swim bladder is deflated. Some mesopelagic fishes make daily migrations through the thermocline, where the temperature changes between 50 °F (10 °C) and 69 °F (20 °C), thus displaying considerable tolerances for temperature change.
These fish have muscular bodies, ossified bones, scales, well developed gills and central nervous systems, and large hearts and kidneys. Mesopelagic plankton feeders have small mouths with fine gill rakers, while the piscivores have larger mouths and coarser gill rakers.
Mesopelagic fish are adapted for an active life under low light conditions. Most of them are visual predators with large eyes. Some of the deeper water fish have tubular eyes with big lenses and only rod cells that look upwards. These give binocular vision and great sensitivity to small light signals. This adaptation gives improved terminal vision at the expense of lateral vision, and allows the predator to pick out squid, cuttlefish, and smaller fish that are silhouetted against the gloom above them.
Mesopelagic fish usually lack defensive spines, and use colour to camouflage themselves from other fish. Ambush predators are dark, black or red. Since the longer, red, wavelengths of light do not reach the deep sea, red effectively functions the same as black. Migratory forms use countershaded silvery colours. On their bellies, they often display photophores producing low grade light. For a predator from below, looking upwards, this bioluminescence camouflages the silhouette of the fish. However, some of these predators have yellow lenses that filter the (red deficient) ambient light, leaving the bioluminescence visible.
The brownsnout spookfish, a species of barreleye, is the only vertebrate known to employ a mirror, as opposed to a lens, to focus an image in its eyes.
Sampling by deep trawling indicates that lanternfish account for as much as 65% of all deep-sea fish biomass. Indeed, lanternfish are among the most widely distributed, populous, and diverse of all vertebrates, playing an important ecological role as prey for larger organisms. The estimated global biomass of lanternfish is 550–660 million tonnes, several times the entire world fisheries catch. Lanternfish also account for much of the biomass responsible for the deep scattering layer of the world's oceans.
Bigeye tuna are an epipelagic/mesopelagic species that eats other fish. Satellite tagging has shown that bigeye tuna often spend prolonged periods cruising deep below the surface during the daytime, sometimes making dives as deep as . These movements are thought to be in response to the vertical migrations of prey organisms in the deep scattering layer.
Bathypelagic fish
Below the mesopelagic zone it is pitch dark. This is the midnight (or bathypelagic) zone, extending from to the deep-water benthic zone at the bottom. If the water is exceptionally deep, the pelagic zone below is sometimes called the lower midnight (or abyssopelagic) zone. Temperatures in this zone range from , and it is completely aphotic.
Conditions are somewhat uniform throughout these zones; the darkness is complete, the pressure is crushing, and temperatures, nutrients and dissolved oxygen levels are all low.
Bathypelagic fish have special adaptations to cope with these conditions – they have slow metabolisms and unspecialized diets, being willing to eat anything that comes along. They prefer to sit and wait for food rather than waste energy searching for it. The behaviour of bathypelagic fish can be contrasted with the behaviour of mesopelagic fish. Mesopelagic fish are often highly mobile, whereas bathypelagic fish are almost all lie-in-wait predators, normally expending little energy in movement.
The dominant bathypelagic fishes are small bristlemouth and anglerfish; fangtooth, viperfish, daggertooth and barracudina are also common. These fishes are small, many about long, and not many longer than . They spend most of their time waiting patiently in the water column for prey to appear or to be lured by their phosphors. What little energy is available in the bathypelagic zone filters from above in the form of detritus, faecal material, and the occasional invertebrate or mesopelagic fish. About 20 per cent of the food that has its origins in the epipelagic zone falls down to the mesopelagic zone, but only about 5 per cent filters down to the bathypelagic zone.
Bathypelagic fish are sedentary, adapted to outputting minimum energy in a habitat with very little food or available energy, not even sunlight, only bioluminescence. Their bodies are elongated with weak, watery muscles and skeletal structures. Since so much of the fish is water, they are not compressed by the great pressures at these depths. They often have extensible, hinged jaws with recurved teeth. They are slimy, without scales. The central nervous system is confined to the lateral line and olfactory systems, the eyes are small and may not function, and gills, kidneys and hearts, and swim bladders are small or missing. These are the same features found in fish larvae, which suggests that during their evolution, bathypelagic fish have acquired these features through neoteny. As with larvae, these features allow the fish to remain suspended in the water with little expenditure of energy.
Despite their ferocious appearance, these forms are mostly miniature fish with weak muscles, and are too small to represent any threat to humans. An exception to the generally small size of bathypelagic fish is the Yokozuna slickhead (Narcetes shonanmaruae), described in 2021, which is the largest known entirely bathypelagic bony fish at over in length.
The swim bladders of deep-sea fish are either absent or scarcely operational, and bathypelagic fish do not normally undertake vertical migrations. Filling bladders at such great pressures incurs huge energy costs. Some deep-sea fishes have swim bladders which function while they are young and inhabit the upper epipelagic zone, but they wither or fill with fat when the fish move down to their adult habitat.
A couple of known exceptions are the cusk-eel (Holcomycteronus profundissimus), retrieved from 7160 meters deep, and the rough abyssal grenadier (Coryphaenoides yaquinae), found at 7259 meters depth. These species still have functional swim bladders, which allows them to maintain high bone density and strong jaws.
The most important sensory systems are usually the inner ear, which responds to sound, and the lateral line, which responds to changes in water pressure. The olfactory system can also be important for males who find females by smell.
Bathypelagic fish are black, or sometimes red, with few photophores. When photophores are used, it is usually to entice prey or attract a mate. Because food is so scarce, bathypelagic predators are not selective in their feeding habits, but grab whatever comes close enough. They accomplish this by having a large mouth with sharp teeth for grabbing large prey and overlapping gill rakers which prevent small prey that have been swallowed from escaping.
It is not easy finding a mate in this zone. Some species depend on bioluminescence, where bioluminescent patterns are unique to specific species. Others are hermaphrodites, which doubles their chances of producing both eggs and sperm when an encounter occurs. The female anglerfish releases pheromones to attract tiny males. When a male finds her, he bites on to her and never lets go. When a male of the anglerfish species Haplophryne mollis bites into the skin of a female, he releases an enzyme that digests the skin of his mouth and her body, fusing the pair to the point where the two circulatory systems join up. The male then atrophies into nothing more than a pair of gonads. This extreme sexual dimorphism ensures that, when the female is ready to spawn, she has a mate immediately available.
Many forms other than fish live in the bathypelagic zone, such as squid, large whales, octopuses, sponges, brachiopods, sea stars, and echinoids, but this zone is difficult for fish to live in.
Adaptation to high pressure
As a fish moves deeper into the sea, the weight of the water overhead exerts increasing hydrostatic pressure on the fish. This increased pressure amounts to about one atm (0.1 MPa) for every 10 m (32.8 ft) in depth. For a fish at the bottom of the bathypelagic zone, this pressure amounts to about 400 atm (40 MPa, or nearly 6000 pounds per square inch).
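The figures above can be reproduced directly from the 1 atm per 10 m rule of thumb. In the sketch below, the 4,000 m depth is an assumed round value for the base of the bathypelagic zone, and the unit conversions are standard constants rather than values from this article.

```python
# Reproducing the quoted pressure figures from the ~1 atm per 10 m rule of thumb.
MPA_PER_ATM = 0.101325   # standard conversion
PSI_PER_ATM = 14.696     # standard conversion

def hydrostatic_pressure(depth_m: float) -> tuple:
    """Approximate water-column pressure at a given depth, as (atm, MPa, psi)."""
    atm = depth_m / 10.0
    return atm, atm * MPA_PER_ATM, atm * PSI_PER_ATM

atm, mpa, psi = hydrostatic_pressure(4000.0)
print(f"{atm:.0f} atm ≈ {mpa:.0f} MPa ≈ {psi:.0f} psi")  # 400 atm ≈ 41 MPa ≈ 5878 psi
```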
Deep-sea organisms possess adaptations at cellular and physiological levels that allow them to survive in environments of great pressure. Not having these adaptations limits the depths at which shallow-water species can operate. High levels of external pressure affect how metabolic processes and biochemical reactions proceed. The equilibrium of many chemical reactions is disturbed by pressure, and pressure can inhibit processes that result in an increase in volume. Water, a key component in many biological processes, is very susceptible to volume changes, mainly because constituents of cellular fluid have an effect on water structure. Thus, enzymatic reactions that induce changes in water organization effectively change the system's volume. Proteins responsible for catalyzing reactions are typically held together by weak bonds and the reactions usually involve volume increases.
Species that can tolerate these depths have evolved changes in their protein structure and reaction criteria to withstand pressure, in order to perform reactions in these conditions. In high pressure environments, bilayer cellular membranes experience a loss of fluidity. Deep-sea cellular membranes favor phospholipid bilayers with a higher proportion of unsaturated fatty acids, which induce a higher fluidity than their sea-level counterparts.
Ten orders, thirteen families and about 200 known species of deep-sea fish have evolved a gelatinous layer below the skin or around the spine, which is used for buoyancy, low-cost growth and to increase swimming efficiency by reducing drag.
Deep-sea species exhibit lower changes of entropy and enthalpy compared to surface-level organisms, since a high-pressure, low-temperature environment favors negative enthalpy changes and reduced dependence on entropy-driven reactions. From a structural standpoint, the globular proteins of deep-sea fish, owing to the tertiary structure of G-actin, are relatively rigid compared to those of surface-level fish. The fact that proteins in deep-sea fish are structurally different from those of surface fish is apparent from the observation that actin from the muscle fibers of deep-sea fish is extremely heat-resistant, an observation similar to what is found in lizards. These proteins are structurally strengthened to resist pressure by modification of the bonds in their tertiary structure, which also induces high levels of thermal stability. Therefore, high levels of hydrostatic pressure, like the high body temperatures of thermophilic desert reptiles, favor rigid protein structures.
Na+/K+ -ATPase is a lipoprotein enzyme that plays a prominent role in osmoregulation and is heavily influenced by hydrostatic pressure. The inhibition of Na+/K+ -ATPase is due to increased compression due to pressure. The rate-limiting step of the Na+/K+ -ATPase reaction induces an expansion in the bilayer surrounding the protein, and therefore an increase in volume. An increase in volume makes Na+/K+ -ATPase reactivity susceptible to higher pressures. Even though the Na+/K+ -ATPase activity per gram of gill tissue is lower for deep-sea fishes, the Na+/K+ -ATPases of deep-sea fishes exhibit a much higher tolerance of hydrostatic pressure compared to their shallow-water counterparts. This is exemplified between the species C. acrolepis (around 2000 m deep) and its hadalpelagic counterpart C. armatus (around deep), where the Na+/K+ -ATPases of C. armatus are much less sensitive to pressure. This resistance to pressure can be explained by adaptations in the protein and lipid moieties of Na+/K+ -ATPase.
Lanternfish
Sampling via deep trawling indicates that lanternfish account for as much as 65% of all deep-sea fish biomass. Indeed, lanternfish are among the most widely distributed, populous, and diverse of all vertebrates, playing an important ecological role as prey for larger organisms. With an estimated global biomass of 550–660 million metric tons, several times the entire world fisheries catch, lanternfish also account for much of the biomass responsible for the deep scattering layer of the world's oceans. In the Southern Ocean, Myctophids provide an alternative food resource to krill for predators such as squid and the king penguin. Although these fish are plentiful and prolific, currently only a few commercial lanternfish fisheries exist: these include limited operations off South Africa, in the sub-Antarctic, and in the Gulf of Oman.
Endangered species
A 2006 study by Canadian scientists found five species of deep-sea fish – blue hake, spiny eel – to be on the verge of extinction due to the shift of commercial fishing from continental shelves to the slopes of the continental shelves, down to depths of . The slow reproduction of these fish – they reach sexual maturity at about the same age as human beings – is one of the main reasons that they cannot recover from excessive fishing.
| Biology and health sciences | Fishes | null |
62366 | https://en.wikipedia.org/wiki/Moa | Moa | Moa (order Dinornithiformes) are an extinct group of flightless birds formerly endemic to New Zealand. During the Late Pleistocene–Holocene, there were nine species (in six genera). The two largest species, Dinornis robustus and Dinornis novaezealandiae, reached about in height with neck outstretched and weighed about , while the smallest, the bush moa (Anomalopteryx didiformis), was around the size of a turkey. Estimates of the moa population when Polynesians settled New Zealand circa 1300 vary between 58,000 and approximately 2.5 million.
Moa are traditionally placed in the ratite group. However, genetic studies have found that their closest relatives are the flighted South American tinamous, once considered a sister group to ratites. The nine species of moa were the only wingless birds, lacking even the vestigial wings that all other ratites have. They were the largest terrestrial animals and dominant herbivores in New Zealand's forest, shrubland, and subalpine ecosystems until the arrival of the Māori, and were hunted only by Haast's eagle. Moa extinction occurred within 100 years of human settlement of New Zealand, primarily due to overhunting.
Etymology
The word moa is a Polynesian term for domestic fowl. The name was not in common use among the Māori by the time of European contact, likely because the bird it described had been extinct for some time, and traditional stories about it were rare. The earliest record of the name was by missionaries William Williams and William Colenso in January 1838; Colenso speculated that the birds may have resembled gigantic fowl. In 1912, Māori chief Urupeni Pūhara claimed that the moa's traditional name was "te kura" (the red bird).
Description
Moa skeletons were traditionally reconstructed in an upright position to create impressive height, but analysis of their vertebral articulations indicates that they probably carried their heads forward, in the manner of a kiwi. The spine was attached to the rear of the head rather than the base, indicating the horizontal alignment. This would have let them graze on low vegetation, while being able to lift their heads and browse trees when necessary. This has resulted in a reconsideration of the height of larger moa. However, Māori rock art depicts moa or moa-like birds (likely geese or adzebills) with necks upright, indicating that moa were more than capable of assuming both neck postures.
No records survive of what sounds moa made, though some idea of their calls can be gained from fossil evidence. The trachea of moa were supported by many small rings of bone known as tracheal rings. Excavation of these rings from articulated skeletons has shown that at least two moa genera (Euryapteryx and Emeus) exhibited tracheal elongation, that is, their trachea were up to 1 m (3 ft) long and formed a large loop within the body cavity. They are the only ratites known to exhibit this feature, which is also present in several other bird groups, including swans, cranes, and guinea fowl. The feature is associated with deep resonant vocalisations that can travel long distances.
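The connection between an elongated trachea and deep calls can be illustrated with the standard quarter-wave resonance of a tube open at one end. This acoustic sketch is a generic illustration, not a result from the moa literature; the 1 m length is the figure quoted above, and the speed of sound is an assumed value.

```python
# Generic acoustic illustration (not from the moa literature): a longer tube has
# lower resonant frequencies, consistent with deep, far-carrying calls.
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 °C (assumed)

def fundamental_resonance_hz(tube_length_m: float) -> float:
    """Lowest resonance of a tube closed at one end and open at the other."""
    return SPEED_OF_SOUND / (4.0 * tube_length_m)

for length_m in (0.3, 1.0):  # a short trachea versus the ~1 m loop described above
    print(f"{length_m} m -> {fundamental_resonance_hz(length_m):.0f} Hz")
# 0.3 m -> 286 Hz; 1.0 m -> 86 Hz
```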
Evolutionary relationships
The moa's closest relatives are small terrestrial South American birds called the tinamous, which can fly. Previously, the kiwi, the Australian emu, and cassowary were thought to be most closely related to moa.
Although dozens of species were described in the late 19th and early 20th centuries, many were based on partial skeletons and turned out to be synonyms. Currently, 11 species are formally recognised, although recent studies using ancient DNA recovered from bones in museum collections suggest that distinct lineages exist within some of these. One factor that has caused much confusion in moa taxonomy is the intraspecific variation of bone sizes, between glacial and interglacial periods (see Bergmann’s rule and Allen’s rule), as well as sexual dimorphism being evident in several species. Dinornis seems to have had the most pronounced sexual dimorphism, with females being up to 150% as tall and 280% as heavy as males—so much bigger that they were classified as separate species until 2003. A 2009 study showed that Euryapteryx curtus and E. gravis were synonyms. A 2010 study explained size differences among them as sexual dimorphism. A 2012 morphological study interpreted them as subspecies, instead.
Analyses of ancient DNA have determined that a number of cryptic evolutionary lineages occurred in several moa genera. These may eventually be classified as species or subspecies; Megalapteryx benhami (Archey) is synonymised with M. didinus (Owen) because the bones of both share all essential characters. Size differences can be explained by a north–south cline combined with temporal variation such that specimens were larger during the Otiran glacial period (the last ice age in New Zealand). Similar temporal size variation is known for the North Island's Pachyornis mappini. Some of the other size variation for moa species can probably be explained by similar geographic and temporal factors.
The earliest moa remains come from the Miocene Saint Bathans Fauna. Known from multiple eggshells and hind limb elements, these represent at least two already fairly large-sized species.
Classification
Taxonomy
The currently recognised genera and species are:
Order †Dinornithiformes (Gadow 1893) Ridgway 1901 [Dinornithes Gadow 1893; Immanes Newton 1884] (moa)
Family Dinornithidae Owen 1843 [Palapteryginae Bonaparte 1854; Palapterygidae Haast 1874; Dinornithnideae Stejneger 1884] (giant moa)
Genus Dinornis
North Island giant moa, Dinornis novaezealandiae (North Island, New Zealand)
South Island giant moa, Dinornis robustus (South Island, New Zealand)
Family Emeidae (Bonaparte 1854) [Emeinae Bonaparte 1854; Anomalopterygidae Oliver 1930; Anomalapteryginae Archey 1941] (lesser moa)
Genus Anomalopteryx
Bush moa, Anomalopteryx didiformis (North and South Island, New Zealand)
Genus Emeus
Eastern moa, Emeus crassus (South Island, New Zealand)
Genus Euryapteryx
Broad-billed moa, Euryapteryx curtus (North and South Island, New Zealand)
Genus Pachyornis
Heavy-footed moa, Pachyornis elephantopus (South Island, New Zealand)
Mantell's moa, Pachyornis geranoides (North Island, New Zealand)
Crested moa, Pachyornis australis (South Island, New Zealand)
Family Megalapterygidae
Genus Megalapteryx
Upland moa, Megalapteryx didinus (South Island, New Zealand)
Two unnamed species are also known from the Saint Bathans Fauna.
Phylogeny
Because moa are a group of flightless birds with no vestiges of wing bones, questions have been raised about how they arrived in New Zealand, and from where. Many theories exist about the moa's arrival and radiation in New Zealand, but the most recent theory suggests that they arrived in New Zealand about 60 million years ago (Mya) and split from the "basal" (see below) moa species, Megalapteryx, about 5.8 Mya instead of the 18.5 Mya split suggested by Baker et al. (2005). This does not necessarily mean there was no speciation between the arrival 60 Mya and the basal split 5.8 Mya, but the fossil record is lacking and most likely the early moa lineages existed, but became extinct before the basal split 5.8 Mya. The presence of Miocene-aged species certainly suggests that moa diversification began before the split between Megalapteryx and the other taxa.
The Oligocene Drowning Maximum event, which occurred about 22 Mya, when only 18% of present-day New Zealand was above sea level, is very important in the moa radiation. Because the basal moa split occurred so recently (5.8 Mya), it was argued that ancestors of the Quaternary moa lineages could not have been present on both the South and North Island remnants during the Oligocene drowning. This does not imply that moa were previously absent from the North Island, but that only those from the South Island survived, because only the South Island was above sea level. Bunce et al. (2009) argued that moa ancestors survived on the South Island and then recolonised the North Island about 2 Myr later, when the two islands rejoined after 30 Myr of separation. The presence of Miocene moa in the Saint Bathans fauna seems to suggest that these birds increased in size soon after the Oligocene drowning event, if they were affected by it at all.
Bunce et al. also concluded that the highly complex structure of the moa lineage was caused by the formation of the Southern Alps about 6 Mya, and the habitat fragmentation on both islands resulting from Pleistocene glacial cycles, volcanism, and landscape changes. The cladogram below is a phylogeny of Palaeognathae generated by Mitchell (2014) with some clade names after Yuri et al. (2013). It provides the position of the moa (Dinornithiformes) within the larger context of the "ancient jawed" (Palaeognathae) birds:
The cladogram below gives a more detailed, species-level phylogeny, of the moa branch (Dinornithiformes) of the "ancient jawed" birds (Palaeognathae) shown above:
Distribution and habitat
Analyses of fossil moa bone assemblages have provided detailed data on the habitat preferences of individual moa species, and revealed distinctive regional moa faunas:
South Island
The two main faunas identified in the South Island include:
The fauna of the high-rainfall west coast beech (Nothofagus) forests that included Anomalopteryx didiformis (bush moa) and Dinornis robustus (South Island giant moa), and
The fauna of the dry rainshadow forest and shrublands east of the Southern Alps that included Pachyornis elephantopus (heavy-footed moa), Euryapteryx gravis, Emeus crassus, and Dinornis robustus.
A 'subalpine fauna' might include the widespread D. robustus, and the two other moa species that existed in the South Island:
Pachyornis australis, the rarest moa species, the only moa species not yet found in Māori middens. Its bones have been found in caves in the northwest Nelson and Karamea districts (such as Honeycomb Hill Cave), and some sites around the Wānaka district.
Megalapteryx didinus, more widespread, named "upland moa" because its bones are commonly found in the subalpine zone. However, it also occurred down to sea level where suitable steep and rocky terrain (such as Punakaiki on the west coast and Central Otago) existed. Its distribution in coastal areas has been rather unclear, but it was present in at least several locations, such as Kaikōura, the Otago Peninsula, and Karitane.
North Island
Significantly less is known about North Island paleofaunas, due to the scarcity of fossil sites compared to the South Island, but the basic pattern of moa-habitat relationships was the same. The South Island and the North Island shared some moa species (Euryapteryx gravis, Anomalopteryx didiformis), but most were exclusive to one island, reflecting divergence over several thousand years since lower sea level in the Ice Age had made a land bridge across the Cook Strait.
In the North Island, Dinornis novaezealandiae and Anomalopteryx didiformis dominated in high-rainfall forest habitat, a similar pattern to the South Island. The other moa species present in the North Island (Euryapteryx gravis, E. curtus, and Pachyornis geranoides) tended to inhabit drier forest and shrubland habitats. P. geranoides occurred throughout the North Island. The distributions of E. gravis and E. curtus were almost mutually exclusive, the former having only been found in coastal sites around the southern half of the North Island.
Behaviour and ecology
About eight moa trackways, with fossilised moa footprint impressions in fluvial silts, have been found in the North Island, including Waikanae Creek (1872), Napier (1887), Manawatū River (1895), Marton (1896), Palmerston North (1911) (see photograph to left), Rangitīkei River (1939), and under water in Lake Taupō (1973). Analysis of the spacing of these tracks indicates walking speeds between 3 and 5 km/h (1.75–3 mph).
Diet
Their diet has been deduced from fossilised contents of their gizzards and coprolites, as well as indirectly through morphological analysis of skull and beak, and stable isotope analysis of their bones. Moa fed on a range of plant species and plant parts, including fibrous twigs and leaves taken from low trees and shrubs. The beak of Pachyornis elephantopus was analogous to a pair of secateurs, and could clip the fibrous leaves of New Zealand flax (Phormium tenax) and twigs up to at least 8 mm in diameter.
Moa filled the ecological niche occupied in other countries by large browsing mammals such as antelope and llamas. Some biologists contend that a number of plant species evolved to avoid moa browsing. Divaricating plants such as Pennantia corymbosa (the kaikōmako), which have small leaves and a dense mesh of branches, and Pseudopanax crassifolius (the horoeka or lancewood), which has tough juvenile leaves, are possible examples of plants that evolved in such a way. Likewise, it has been suggested that heteroblasty might be a response to moa browsing.
Like many other birds, moa swallowed gizzard stones (gastroliths), which were retained in their muscular gizzards, providing a grinding action that allowed them to eat coarse plant material. This grinding action suggests that moa were not effective seed dispersers, with only the smallest seeds passing through their gut intact. These stones were commonly smooth rounded quartz pebbles, but stones over long have been found among preserved moa gizzard contents. Dinornis gizzards could often contain several kilograms of stones. Moa likely exercised a certain selectivity in the choice of gizzard stones and chose the hardest pebbles.
Reproduction
The pairs of species of moa described as Euryapteryx curtus / E. exilis, Emeus huttonii / E. crassus, and Pachyornis septentrionalis / P. mappini have long been suggested to constitute males and females, respectively. This has been confirmed by analysis for sex-specific genetic markers of DNA extracted from bone material.
For example, before 2003, three species of Dinornis were recognised: South Island giant moa (D. robustus), North Island giant moa (D. novaezealandiae), and slender moa (D. struthioides). However, DNA showed that all D. struthioides were males, and all D. robustus were females. Therefore, the three species of Dinornis were reclassified as two species, one each formerly occurring on New Zealand's North Island (D. novaezealandiae) and South Island (D. robustus); D. robustus however, comprises three distinct genetic lineages and may eventually be classified as many species, as discussed above.
Examination of growth rings in moa cortical bone has revealed that these birds were K-selected, as are many other large endemic New Zealand birds. They are characterised by having a low fecundity and a long maturation period, taking about 10 years to reach adult size. The large Dinornis species took as long to reach adult size as small moa species, and as a result, had fast skeletal growth during their juvenile years.
No evidence has been found to suggest that moa were colonial nesters. Moa nesting is often inferred from accumulations of eggshell fragments in caves and rock shelters, but little evidence exists of the nests themselves. Excavations of rock shelters in the eastern North Island during the 1940s found moa nests, which were described as "small depressions obviously scratched out in the soft dry pumice". Moa nesting material has also been recovered from rock shelters in the Central Otago region of the South Island, where the dry climate has preserved plant material used to build the nesting platform (including twigs clipped by moa bills). Seeds and pollen within moa coprolites found among the nesting material provide evidence that the nesting season was late spring to summer.
Fragments of moa eggshell are often found in archaeological sites and sand dunes around the New Zealand coast. Thirty-six whole moa eggs exist in museum collections and vary greatly in size (from in length and wide). The outer surface of moa eggshell is characterised by small, slit-shaped pores. The eggs of most moa species were white, although those of the upland moa (Megalapteryx didinus) were blue-green.
A 2010 study by Huynen et al. found that the eggs of certain species were fragile, only around a millimetre in shell thickness: "Unexpectedly, several thin-shelled eggs were also shown to belong to the heaviest moa of the genera Dinornis, Euryapteryx, and Emeus, making these, to our knowledge, the most fragile of all avian eggs measured to date. Moreover, sex-specific DNA recovered from the outer surfaces of eggshells belonging to species of Dinornis and Euryapteryx suggest that these very thin eggs were likely to have been incubated by the lighter males. The thin nature of the eggshells of these larger species of moa, even if incubated by the male, suggests that egg breakage in these species would have been common if the typical contact method of avian egg incubation was used." Despite the bird's extinction, the high yield of DNA available from recovered fossilised eggs has allowed the moa's genome to be sequenced.
Pre-human forests
Studies of accumulated dried vegetation from the pre-human mid-to-late Holocene suggest a low Sophora microphylla (kōwhai) forest ecosystem in Central Otago that moa used, and perhaps maintained, for both nesting material and food. Neither the forests nor the moa remained when European settlers came to the area in the 1850s.
Relationship with humans
Extinction
Before the arrival of humans, the moa's only predator was the massive Haast's eagle. New Zealand had been isolated for 80 million years and had few predators before human arrival, meaning that not only were its ecosystems extremely vulnerable to perturbation by outside species, but also the native species were ill-equipped to cope with human predators. Polynesians arrived sometime before 1300, and all moa genera were soon driven to extinction by hunting and, to a lesser extent, by habitat reduction due to forest clearance. By 1445, all moa had become extinct, along with Haast's eagle, which had relied on them for food. Recent research using carbon-14 dating of middens strongly suggests that the events leading to extinction took less than a hundred years, rather than a period of exploitation lasting several hundred years as previously hypothesised.
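The carbon-14 dating of midden material mentioned above rests on radioactive decay: a conventional radiocarbon age is computed from the fraction of carbon-14 remaining in a sample relative to a modern standard. The sketch below shows that calculation in outline; the measured fractions are hypothetical, and calibration to calendar years is omitted.

```python
import math

LIBBY_MEAN_LIFE = 8033  # years; the 5568-year Libby half-life divided by ln(2), used by convention

def radiocarbon_age(fraction_modern: float) -> float:
    """Conventional radiocarbon age (radiocarbon years BP) from the measured
    fraction of carbon-14 remaining relative to the modern standard."""
    return -LIBBY_MEAN_LIFE * math.log(fraction_modern)

# Hypothetical measurements bracketing the moa-hunting period (roughly 600-750 years ago):
for fraction in (0.93, 0.92, 0.91):
    print(f"fraction remaining {fraction:.2f} -> {radiocarbon_age(fraction):.0f} radiocarbon years BP")
```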
An expedition in the 1850s under Lieutenant A. Impey reported two emu-like birds on a hillside in the South Island; an 1861 story in the Nelson Examiner told of three-toed footprints found by a surveying party between Tākaka and Riwaka; and finally, in 1878, the Otago Witness published an additional account from a farmer and his shepherd. In 1959, 86-year-old Alice McKenzie claimed that she had seen a moa in Fiordland bush in 1887, and again on a Fiordland beach when she was 17 years old. She claimed that her brother had also seen a moa on another occasion. As a child, McKenzie had believed the large bird she saw was a takahē, but after the takahē's rediscovery in the 1940s she saw a picture of one and concluded that the bird she had seen was something else.
Some authors have speculated that a few Megalapteryx didinus may have persisted in remote corners of New Zealand until the 18th and even 19th centuries, but this view is not widely accepted. Some Māori hunters claimed to be in pursuit of the moa as late as the 1770s; however, these accounts possibly did not refer to the hunting of actual birds as much as a now-lost ritual among South Islanders. Whalers and sealers recalled seeing monstrous birds along the coast of the South Island, and in the 1820s, a man named George Pauley made an unverified claim of seeing a moa in the Otago region of New Zealand. Occasional speculation since at least the late 19th century, and as recently as 2008, has suggested that some moa may still exist, particularly in the wilderness of South Westland and Fiordland. A 1993 report initially interested the Department of Conservation, but the animal in a blurry photograph was identified as a red deer. Cryptozoologists continue to search for them, but their claims and supporting evidence (such as purported footprints) have earned little attention from experts and are regarded as pseudoscientific.
The rediscovery of the takahē in 1948 after none had been seen since 1898 showed that rare birds can exist undiscovered for a long time. However, the takahē is a much smaller bird than the moa, and was rediscovered after its tracks were identified—yet no reliable evidence of moa tracks has ever been found, and experts still contend that moa survival is extremely unlikely, since they would have to be living unnoticed for over 500 years in a region visited often by hunters and hikers.
Surviving remains
Joel Polack, a trader who lived on the East Coast of the North Island from 1834 to 1837, recorded in 1838 that he had been shown "several large fossil ossifications" found near Mt Hikurangi. He was certain that these were the bones of a species of emu or ostrich, noting that "the Natives add that in times long past they received the traditions that very large birds had existed, but the scarcity of animal food, as well as the easy method of entrapping them, has caused their extermination". Polack further noted that he had received reports from Māori that a "species of Struthio" still existed in remote parts of the South Island.
Dieffenbach also refers to a fossil from the area near Mt Hikurangi, and surmises that it belongs to "a bird, now extinct, called Moa (or Movie) by the natives". 'Movie' is the first transcribed name for the bird. In 1839, John W. Harris, a Poverty Bay flax trader who was a natural-history enthusiast, was given a piece of unusual bone by a Māori who had found it in a river bank. He showed the fragment of bone to his uncle, John Rule, a Sydney surgeon, who sent it to Richard Owen, who at that time was working at the Hunterian Museum at the Royal College of Surgeons in London.
Owen puzzled over the fragment for almost four years. He established it was part of the femur of a big animal, but it was uncharacteristically light and honeycombed. Owen announced to a skeptical scientific community and the world that it was from a giant extinct bird like an ostrich, and named it Dinornis. His deduction was ridiculed in some quarters, but was proved correct with the subsequent discoveries of considerable quantities of moa bones throughout the country, sufficient to reconstruct skeletons of the birds.
In July 2004, the Natural History Museum in London placed on display the moa bone fragment Owen had first examined, to celebrate 200 years since his birth, and in memory of Owen as founder of the museum.
Since the discovery of the first moa bones in the late 1830s, thousands more have been found. They occur in a range of late Quaternary and Holocene sedimentary deposits, but are most common in three main types of site: caves, dunes, and swamps.
Bones are commonly found in caves or tomo (the Māori word for doline or sinkhole, often used to refer to pitfalls or vertical cave shafts). Moa bones were deposited in such sites in two main ways: birds that entered a cave to nest or to escape bad weather and subsequently died there, and birds that fell into a vertical shaft and could not escape. Moa bones (and the bones of other extinct birds) have been found in caves throughout New Zealand, especially in the limestone/marble areas of northwest Nelson, Karamea, Waitomo, and Te Anau.
Moa bones and eggshell fragments sometimes occur in active coastal sand dunes, where they may erode from paleosols and concentrate in 'blowouts' between dune ridges. Many such moa bones antedate human settlement, although some originate from Māori midden sites, which frequently occur in dunes near harbours and river mouths (for example the large moa hunter sites at Shag River, Otago, and Wairau Bar, Marlborough).
Densely intermingled moa bones have been encountered in swamps throughout New Zealand. The most well-known example is at Pyramid Valley in north Canterbury, where bones from at least 183 individual moa have been excavated, mostly by Roger Duff of Canterbury Museum. Many explanations have been proposed to account for how these deposits formed, ranging from poisonous spring waters to floods and wildfires. However, the currently accepted explanation is that the bones accumulated slowly over thousands of years, from birds that entered the swamps to feed and became trapped in the soft sediment.
Many New Zealand and international museums hold moa bone collections. Auckland War Memorial Museum – Tāmaki Paenga Hira has a significant collection, and in 2018 several moa skeletons were imaged and 3D scanned to make the collections more accessible. There is also a major collection in Otago Museum in Dunedin.
Feathers and soft tissues
Several examples of moa remains have been found with soft tissues (muscle, skin, feathers) preserved through desiccation after the bird died at a dry site (for example, a cave with a constant dry breeze blowing through it). Most were found in the semiarid Central Otago region, the driest part of New Zealand. These include:
Dried muscle on bones of a female Dinornis robustus found at Tiger Hill in the Manuherikia River Valley by gold miners in 1864 (currently held by Yorkshire Museum)
Several bones of Emeus crassus with muscle attached, and a row of neck vertebrae with muscle, skin, and feathers collected from Earnscleugh Cave near the town of Alexandra in 1870 (currently held by the Otago Museum)
An articulated foot of a male D. giganteus with skin and foot pads preserved, found in a crevice on the Knobby Range in 1874 (currently held by the Otago Museum)
The type specimen of Megalapteryx didinus found near Queenstown in 1878 (currently held by Natural History Museum, London; see photograph of foot on this page)
The lower leg of Pachyornis elephantopus, with skin and muscle, from the Hector Range in 1884 (currently held by the Zoology Department, Cambridge University)
The complete feathered leg of a M. didinus from Old Man Range in 1894 (currently held by the Otago Museum)
The head of a M. didinus found near Cromwell sometime before 1949 (currently held by Te Papa Tongarewa).
Two specimens are known from outside the Central Otago region:
A complete foot of M. didinus found in a cave on Mount Owen near Nelson in the 1980s (currently held by Te Papa Tongarewa)
A skeleton of Anomalopteryx didiformis with muscle, skin, and feather bases collected from a cave near Te Anau in 1980.
In addition to these specimens, loose moa feathers have been collected from caves and rock shelters in the southern South Island, and these remains give some idea of moa plumage. The preserved leg of M. didinus from the Old Man Range shows that this species was feathered right down to the foot. This was likely an adaptation to living in high-altitude, snowy environments, and is also seen in Darwin's rhea, which lives in a similarly seasonally snowy habitat.
Moa feathers are up to long, and a range of colours has been reported, including reddish-brown, white, yellowish, and purplish. Dark feathers with white or creamy tips have also been found, and indicate that some moa species may have had plumage with a speckled appearance.
Potential revival
The creature has frequently been mentioned as a potential candidate for revival by cloning. Its iconic status, coupled with the facts that it only became extinct a few hundred years ago and that substantial quantities of moa remains exist, mean that it is often listed alongside such creatures as the dodo as leading candidates for de-extinction. Preliminary work involving the extraction of DNA has been undertaken by Japanese geneticist Ankoh Yasuyuki Shirota.
Interest in the moa's potential for revival was further stirred in mid-2014 when New Zealand Member of Parliament Trevor Mallard suggested that bringing back some smaller species of moa within 50 years was a viable idea. The idea was ridiculed by many, but gained support from some natural history experts.
In literature and culture
Heinrich Harder portrayed moa being hunted by Māori in the classic German collecting cards about extinct and prehistoric animals, "Tiere der Urwelt", in the early 1900s.
The moa was the most commonly used animal as a symbol of New Zealand before it was replaced by the kiwi in the early 20th century.
Allen Curnow's poem, "The Skeleton of the Great Moa in the Canterbury Museum, Christchurch" was published in 1943.
| Biology and health sciences | Ratites | null |
62400 | https://en.wikipedia.org/wiki/Secretariat%20%28horse%29 | Secretariat (horse) | Secretariat (March 30, 1970 – October 4, 1989), also known as Big Red, was a champion American thoroughbred racehorse who was the ninth winner of the American Triple Crown, setting and still holding the fastest time record in all three of its constituent races. He is widely considered to be the greatest racehorse of all time. He became the first Triple Crown winner in 25 years and his record-breaking victory in the Belmont Stakes, which he won by 31 lengths, is often considered the greatest race ever run by a thoroughbred racehorse. During his racing career, he won five Eclipse Awards, including Horse of the Year honors at ages two and three. He was inducted into the National Museum of Racing and Hall of Fame in 1974. In the Blood-Horse magazine List of the Top 100 U.S. Racehorses of the 20th Century, Secretariat was second to Man o' War.
At age two, Secretariat finished fourth in his 1972 debut in a maiden race, but then won seven of his remaining eight starts, including five stakes victories. His only loss during this period was in the Champagne Stakes, where he finished first but was disqualified to second for interference. He received the Eclipse Award for champion two-year-old colt, and also was the 1972 Horse of the Year, a rare honor for a horse so young.
At age three, Secretariat not only won the Triple Crown, but also set speed records in all three races. His time in the Kentucky Derby still stands as the Churchill Downs track record for miles, and his time in the Belmont Stakes stands as the American record for miles on the dirt. In 2012, his actual time of 1:53 in the Preakness Stakes was recognized as a stakes record after an official review.
Secretariat's win in the Gotham Stakes tied the track record for 1 mile, he set a world record in the Marlboro Cup at miles and further proved his versatility by winning two major stakes races on turf. He lost three times that year: in the Wood Memorial, Whitney, and Woodward Stakes, but the brilliance of his nine wins made him an American icon. He won his second Horse of the Year title, plus Eclipse Awards for champion three-year-old colt and champion turf horse.
At the beginning of his three-year-old season, Secretariat was syndicated for a record-breaking $6.08 million, on the condition that he be retired from racing by the end of the year. Although he sired several successful racehorses, he ultimately was most influential through his daughters' offspring, becoming the leading broodmare sire in North America in 1992. His daughters produced several notable sires, including Storm Cat, A.P. Indy, Gone West, Dehere, Summer Squall, and Chief's Crown, and through them Secretariat appears in the pedigree of many modern champions. Secretariat died in 1989 as a result of laminitis at age 19.
Background
Secretariat was officially bred by Christopher Chenery's Meadow Stud, but the breeding was actually arranged by Penny Chenery (then known as Penny Tweedy), who had taken over the running of the stable in 1968 when her father became ill. Secretariat was sired by Bold Ruler and his dam was Somethingroyal, a daughter of Princequillo. Bold Ruler was the leading sire in North America from 1963 to 1969 and again in 1973. Owned by the Phipps family, Bold Ruler possessed both speed and stamina, having won the Preakness Stakes and Horse of the Year honors in 1957, and American Champion Sprint Horse honors in 1958. Bold Ruler was retired to stud at Claiborne Farm, but the Phipps family owned most of the mares to which Bold Ruler was bred, and few of his offspring were sold at public auction.
To bring new blood into their breeding program, the Phipps family sometimes negotiated a foal-sharing agreement with other mare owners: Instead of charging a stud fee for Bold Ruler, they would arrange for multiple matings with Bold Ruler, either with two mares in one year or one mare over a two-year period. Assuming two foals were produced, the Phipps family would keep one and the mare's owner would keep the other, with a coin toss determining who received first pick.
Under such an arrangement, Chenery sent two mares to be bred to Bold Ruler in 1968, Hasty Matelda and Somethingroyal. She then sent Cicada and Somethingroyal in 1969. The foal-sharing agreement stated that the winner of the coin toss would get first pick of the foals produced in 1969, while the loser of the toss would get first pick of the foals due in 1970. In the spring of 1969, a colt and filly were produced. In the 1969 breeding season, Cicada did not conceive, leaving only one foal due in the spring of 1970. Thus, the winner of the coin toss would get only one foal (the first pick from 1969), and the loser would get two (the second pick from 1969 and the only foal from 1970). Chenery later said that both owners hoped they would lose the coin toss, which was held in the fall of 1969 in the office of New York Racing Association Chairman Alfred G. Vanderbilt II, with Arthur "Bull" Hancock of Claiborne Farm as witness. Ogden Phipps won the toss and took the 1969 weanling filly out of Somethingroyal. The filly was named The Bride and never won a race, though she did later become a stakes producer. Chenery received the Hasty Matelda colt in 1969 and the as-yet-unborn 1970 foal of Somethingroyal, which turned out to be Secretariat.
Early years
On March 30, 1970, at 12:10 a.m. at the Meadow Stud in Caroline County, Virginia, Somethingroyal foaled a bright-red chestnut colt with three white socks and a star with a narrow stripe. The foal stood when he was 45 minutes old and nursed 30 minutes later. Howard Gentry, the manager of Meadow Stud, was at the foaling and later said, "He was a very well-made foal. He was as perfect a foal that I ever delivered." The colt soon distinguished himself from the others. "He was always the leader in the crowd," said Gentry's nephew, Robert, who also worked at the farm. "To us, he was Big Red, and he had a personality. He was a clown and was always cutting up, always into some devilment." Some time later, Chenery got her first look at the foal and made a one-word entry in her notebook: "Wow!"
That fall, Chenery and Elizabeth Ham, the Meadow's longtime secretary, worked together to name the newly weaned colt. The first set of names submitted to the Jockey Club (Sceptre, Royal Line, and Something Special) played on the names of his sire and dam, but were rejected. The second set, submitted in January 1971, were Games of Chance, Deo Volente ("God Willing"), and Secretariat, the last suggested by Ham based on her previous job associated with the secretariat of the League of Nations (the predecessor of the United Nations).
Appearance and conformation
Secretariat grew into a massive, powerful horse said to resemble his sire's damsire, Discovery. He stood when fully grown. He was noted for being exceptionally well-balanced, described as having "nearly perfect" conformation and stride biomechanics. His chest was so large that he required a custom-made girth, and he was noted for his large, powerful, well-muscled hindquarters. An Australian trainer said of him, "He is incredible, an absolutely perfect horse. I never saw anything like him."
Secretariat's absence of major conformation flaws was important, as horses with well made limbs and feet are less likely to be injured. Secretariat's hindquarters were the main source of his power, with a sloped croup that extended the length of his femur. When in full stride, his hind legs were able to reach far under himself, increasing his drive. His ample girth, long back and well-made neck all contributed to his heart-lung efficiency.
The manner in which Secretariat's body parts fit together determined the efficiency of his stride, which affected his acceleration and endurance. Even very small differences in the length and angles of bones can have a major effect on performance. Secretariat was well put together even as a two-year-old, and by the time he was three, he had further matured in body and smoothed out his gait. The New York Racing Association's Dr. M. A. Gilman, a veterinarian who routinely measured leading thoroughbreds with a goal of applying science to create better ways to breed and evaluate racehorses, measured Secretariat's development from two to three as follows:
Secretariat's length of stride was considered large even after taking into account his large frame and strong build. While training for the Preakness Stakes, his stride was measured as 24 feet, 11 inches. His powerful hindquarters allowed him to unleash "devastating" speed and because he was so well-muscled and had significant cardiac capacity, he could simply out-gallop competitors at nearly any point in a race.
His weight before the Gotham Stakes in April 1973 was . After completing the gruelling Triple Crown, his weight on June 15 had dropped only 24 pounds, to . Secretariat was known for his appetite — during his three-year-old campaign, he ate 15 quarts of oats a day — and to keep his muscles in good condition, he needed fast workouts that could have won many a stakes race.
Seth Hancock of Claiborne Farm once said,
Racing career
Secretariat raced in Meadow Stables' blue-and-white-checkered colors. He never raced in track bandages, but typically wore a blinker hood, mostly to help him focus, but also because he had a tendency to run in toward the rail during races. In January 1972, he joined trainer Lucien Laurin's winter stable at Hialeah. Secretariat gained a reputation as a kind horse, likeable and unruffled in crowds or by the bumping that occurs between young horses. He had the physique of a runner but at first was awkward and clumsy. He was frequently outpaced by more precocious stable mates, running a quarter-mile in 26 seconds compared to 23 seconds by his peers. His regular exercise riders were Jim Gaffney and Charlie Davis. Davis was not initially impressed. "He was a big fat sucker", Davis said. "I mean, he was big. He wasn't in a hurry to do nothin'. He took his time. The quality was there, but he didn't show it until he wanted to." Gaffney though recalled his first ride on Secretariat in early 1972 as "having this big red machine under me, and from that very first day I knew he had a power of strength that I have never felt before ..."
Groom Eddie Sweat was another important member of the Secretariat team, providing most of the daily hands-on care. Sweat once told a reporter, "I guess a groom gets closer to a horse than anyone. The owner, the trainer, they maybe see him once a day. But I lived with him, worked with him."
Laurin sent Chenery regular updates on Secretariat's progress, saying that the colt was still learning to run, or that he still needed to lose his baby fat. Chenery recalled that when Secretariat was in training, Lucien once said: "Your big Bold Ruler colt don't show me nothin'. He can't outrun a fat man." But Secretariat made steady progress over the spring. On June 6, he wore blinkers for the first time to keep his attention focused and responded with a half-mile workout in a solid 47 seconds. On June 24, he ran a "bullet", the fastest workout of the day, at 6 furlongs in 1:12 on a sloppy track. Laurin called Chenery at her Colorado home and advised her that Secretariat was ready to race.
1972: Two-year-old season
For his first start on July 4, 1972, at Aqueduct Racetrack, Secretariat was made the lukewarm favorite at 3–1. At the start, a horse named Quebec cut in front of the field, causing a chain reaction that resulted in Secretariat being bumped hard. According to jockey Paul Feliciano, he would have fallen if he hadn't been so strong. Secretariat recovered, only to run into traffic on the backstretch. In tenth position at the top of the stretch, he closed ground rapidly and finished fourth, beaten by only lengths. In many of his subsequent races, Secretariat hung back at the start, which Laurin later attributed to the bumping he received in his debut.
With Feliciano again up, Secretariat returned to the track on July 15 as the 6–5 favorite. He broke poorly, but then rushed past the field on the turn to win by six lengths. On July 31 in an allowance race at Saratoga, Feliciano was replaced by Ron Turcotte, the regular jockey for Meadow Stables. Turcotte had ridden the colt in several morning workouts but had missed his first two starts while recovering from a fall. Secretariat's commanding win as the 2–5 favorite caught the attention of veteran sportswriter, Charles Hatton. He later reported, "You carry an ideal around in your head, and boy, I thought, 'This is it.' I never saw perfection before. I absolutely could not fault him in any way. And neither could the rest of them and that was the amazing thing about it. The body and the head and the eye and the general attitude. It was just incredible. I couldn't believe my eyes, frankly."
In August, Secretariat entered the Sanford Stakes, facing off with highly regarded Linda's Chief, the only horse ever to be favored against Secretariat in any of his races. Entering the stretch, Secretariat was blocked by the horses in front of him but then made his way through "like a hawk scattering a barnyard of chickens" on his way to a three-length win. Sportswriter Andrew Beyer covered the race for the Washington Star and later wrote, "Never have I watched a lightly raced 2-year-old stamp himself so definitively as a potential great."
Ten days later in the Hopeful Stakes, Secretariat made a "dazzling" move, passing eight horses within mile to take the lead then drawing off to win by five lengths. His time of 1:16 for furlongs was only of a second off the track record. Returning to Belmont Park on September 16, he won the Belmont Futurity by a length and a half after starting his move on the turn. He then ran in the Champagne Stakes at Belmont on October 14 as the 7–10 favorite. As had become his custom, he started slowly and then made a big move around the turn, blowing past his rivals to win by two lengths. However, following an inquiry by the racecourse stewards, Secretariat was disqualified and placed second for bearing in and interfering with Stop the Music, who was declared the winner.
Secretariat then took the Laurel Futurity on October 28, winning by eight lengths over Stop the Music. His time on a sloppy track was just of a second off the track record. He completed his season in the Garden State Futurity on November 18, dropping back early and making a powerful move around the turn to win by lengths at 1–10 odds. Laurin said, "In all his races, he has taken the worst of it by coming from behind, usually circling his field. A colt has to be a real runner to do this consistently and get away with it."
Secretariat won the Eclipse Award for American Champion Two-Year-Old Male Horse and, in a rare occurrence, two two-year-olds topped the balloting for 1972 American Horse of the Year honors, with Secretariat edging out the undefeated filly, La Prevoyante. Secretariat received the votes of the Thoroughbred Racing Associations of North America and the Daily Racing Form, while La Prevoyante was chosen by the National Turf Writers Association. Only one horse since then, Favorite Trick in 1997, has won that award as a two-year-old.
1973: Three-year-old season
In January 1973, Christopher Chenery, the founder of Meadow Stables, died and the taxes on his estate forced his daughter Penny to consider selling Secretariat. Together with Seth Hancock of Claiborne Farm, she instead managed to syndicate the horse, selling 32 shares worth $190,000 each for a total of $6.08 million, a world syndication record at the time, surpassing the previous record set by Nijinsky, who was syndicated for $5.44 million in 1970. Hancock said the sale was easy, citing Secretariat's two-year-old performance, breeding, and appearance. "He's, well, he's a hell of a horse." Chenery retained four shares in the horse and would have complete control over his three-year-old racing campaign, but agreed that he would be retired at the end of the year.
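The syndication total quoted above is straightforward arithmetic on the figures in the text; the snippet below simply checks it. Valuing Chenery's four retained shares at the same per-share price is an assumption made here for illustration.

```python
shares_sold = 32
price_per_share = 190_000              # dollars per share, as stated in the text

total = shares_sold * price_per_share
retained_value = 4 * price_per_share   # Chenery's four retained shares at the same price

print(f"Syndication total: ${total:,}")        # Syndication total: $6,080,000
print(f"Retained shares:   ${retained_value:,}")  # Retained shares:   $760,000
```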
Secretariat wintered in Florida but did not race until March 17, 1973, in the Bay Shore Stakes at Aqueduct, where he went off as the heavy favorite. As the trainer of one of his opponents put it, "The only chance we have is if he falls down." Racing boxed in by horses on each side, Turcotte decided to go through a narrow gap between horses rather than try to circle the field. Secretariat broke free and won easily, but one of the other jockeys claimed that Secretariat had committed a foul going through the hole. The stewards reviewed photos from the race and determined that Secretariat was actually on the receiving end of a bump, so let the result stand. The Bay Shore established that Secretariat had improved over the winter and that he could also handle adversity.
In the Gotham Stakes on April 7, Laurin decided to experiment with Secretariat's running style. With no speed horses entered in the race, Secretariat would be allowed to set his own pace. Accordingly, Turcotte hustled Secretariat from the starting gate and they led easily. Down the stretch though, Champagne Charlie came running and at the eighth pole was almost even. Turcotte tapped Secretariat once on each side with the whip and Secretariat drew away to win by three lengths. He ran the first 3/4 mile in 1:08 and finished the one-mile race in 1:33, matching the track record.
His final preparatory race for the Kentucky Derby was the Wood Memorial, where he finished a surprising third to Angle Light and Santa Anita Derby winner Sham. Laurin was crushed, even though he had trained the winner, Angle Light, who set a slow pace and "stole" the race. Secretariat's loss was later attributed to a large abscess in his mouth, which made him sensitive to the bit. Before and after the race, there was some ill feeling between Laurin and the trainer of Sham, Pancho Martin, fanned by comments in the press. The dispute concerned the use of coupled entries as Martin had entered two horses in addition to Sham, all with the same owner. There was fear that an entry could be used tactically to gang up on another horse. Stung by such insinuations, Martin wound up scratching the two horses that he had originally entered with Sham, and asked Laurin to do the same, but Laurin could not follow suit as Secretariat and Angle Light had different owners.
Because of the Wood Memorial results, Secretariat's chances in the Kentucky Derby became the subject of much speculation in the media. Some questioned his stamina: in part because of his "blocky" build, more typical of a sprinter, and in part because of Bold Ruler's reputation as a sire of precocious sprinters. Rumors circulated that Secretariat was unsound.
Kentucky Derby
The 1973 Kentucky Derby on May 5 attracted a crowd of 134,476 to Churchill Downs, then the largest crowd in North American racing history. The bettors made the entry of Secretariat and Angle Light the 3–2 favorite, with Sham the second choice at 5–2. The start was marred when Twice a Prince reared in his stall, hitting Our Native, positioned next to him, and causing Sham to bang his head against the gate, loosening two teeth. Sham then broke poorly and cut himself, also bumping into Navajo. Secretariat avoided problems by breaking last from post position 10, then cut over to the rail. Early leader Shecky Greene set a reasonable pace, then gave way to Sham around the far turn. Secretariat came charging as they entered the stretch and battled with Sham down the stretch, finally pulling away to win by lengths. Our Native finished eight lengths farther back in third.
On his way to a still-standing track record of 1:59, Secretariat ran each quarter-mile segment faster than the one before it. The successive quarter-mile times were :25, :24, :23, :23, and :23. In other words, he was still accelerating through the final quarter-mile of the race. No other horse had won the Derby in less than 2 minutes before, and it would not be accomplished again until Monarchos ran the race in 1:59.97 in 2001.
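A sub-two-minute Derby implies a remarkable average speed over the full distance. The sketch below makes that conversion; the 1.25-mile Derby distance is the standard distance of the race, and the 120-second figure is simply the "under two minutes" bound stated above, so the result is a lower bound on the average speed.

```python
def average_speed_mph(distance_miles: float, time_seconds: float) -> float:
    """Average speed over the race distance, in miles per hour."""
    return distance_miles / (time_seconds / 3600.0)

# The Kentucky Derby is run over 1.25 miles; a time under two minutes implies at least:
print(f"{average_speed_mph(1.25, 120.0):.1f} mph")  # 37.5 mph average over the whole race
```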
Sportswriter Mike Sullivan later said:
Preakness Stakes
In the 1973 Preakness Stakes on May 19, Secretariat broke last, but then made a huge, last-to-first move on the first turn. Raymond Woolfe, a photographer for the Daily Racing Form, captured Secretariat launching the move with a leaping stride in the air. This was later used as the basis for the statue by John Skeaping that stands in the Belmont Park paddock. Turcotte later said that he was proudest of this win because of the split-second decision he made going into the turn: "I let my horse drop back, when I went to drop in, they started backing up into me. I said, 'I don't want to get trapped here.' So I just breezed by them." Secretariat completed the second quarter mile of the race in under 22 seconds. After reaching the lead with furlongs to go, Secretariat was never challenged, and won by lengths, with Sham again finishing second and Our Native in third, a further eight lengths back. It was the first time in history that the top three finishers in the Derby and Preakness were the same; the distance between each of the horses was also the same.
The time of the race was disputed. The infield teletimer displayed a time of 1:55 but it had malfunctioned because of damage caused by people crossing the track to reach the infield. The Pimlico Race Course clocker E.T. McLean Jr. announced a hand time of 1:54, but two Daily Racing Form clockers claimed the time was 1:53, which would have broken the track record of 1:54 set by Cañonero II. Tapes of Secretariat and Cañonero II were played side by side by CBS, and Secretariat got to the finish line first on tape, though this was not a reliable method of timing a horse race at the time. The Maryland Jockey Club, which managed the Pimlico racetrack and is responsible for maintaining Preakness records, discarded both the electronic and Daily Racing Form times and recognized the clocker's 1:54 as the official time; however, the Daily Racing Form, for the first time in history, printed its own clocking of 1:53 underneath the official time in the chart of the race.
On June 19, 2012, a special meeting of the Maryland Racing Commission was convened at Laurel Park at the request of Penny Chenery, who hired companies to conduct a forensic review of the videotapes of the race. After over two hours of testimony, the commission unanimously voted to change the time of Secretariat's win from 1:54 to 1:53, establishing a new stakes record. The Daily Racing Form announced that it would honor the commission's ruling with regard to the running time. With the revised time, Sham also would have broken the old stakes record.
As Secretariat prepared for the Belmont Stakes, he appeared on the covers of three national magazines: Time, Newsweek, and Sports Illustrated. He had become a national celebrity. William Nack wrote: "Secretariat suddenly transcended horse racing and became a cultural phenomenon, a sort of undeclared national holiday from the tortures of Watergate and the Vietnam War." Chenery needed a secretary to handle all the fan mail and hired the William Morris Agency to manage public engagements. Secretariat responded to his fame by learning to pose for the camera.
Belmont Stakes
Only four horses ran against Secretariat for the June 9 Belmont Stakes, including Sham and three other horses thought to have little chance by the bettors: Twice A Prince, My Gallant, and Private Smiles. With so few horses in the race, and Secretariat expected to win, no "show" bets were taken. Secretariat was sent off as a 1–10 favorite before a crowd of 69,138, then the second largest attendance in Belmont history. The race was televised by CBS and was watched by over 15 million households, an audience share of 52%.
On race day, the track was fast, and the weather was warm and sunny. Secretariat broke well on the rail and Sham rushed up beside him. The two ran the first quarter in a quick :23 and the next quarter in a swift :22, completing the fastest opening half-mile in the history of the race and opening up ten lengths on the rest of the field. After the six-furlong mark, Sham began to tire, ultimately finishing last. Secretariat continued the fast pace and opened up a larger and larger margin on the field. His time for the mile was 1:34, over a second faster than the next fastest Belmont mile fraction in history, set by his sire Bold Ruler, who had eventually tired and finished third. Secretariat, however, did not falter. Turcotte said, "This horse really paced himself. He is smart: I think he knew he was going miles, I never pushed him." In the stretch, Secretariat opened a lead of almost of a mile on the rest of the field. At the finish, he won by 31 lengths, breaking the 25-length margin-of-victory record set by Triple Crown winner Count Fleet in 1943.
CBS Television announcer Chic Anderson described the horse's pace in a famous commentary:
The time for the race was not only a stakes record but the fastest ever run at the distance on dirt: 2:24 flat, breaking by more than two seconds the track and stakes record of 2:26 set 16 years earlier by Gallant Man. Secretariat's record still stands as an American record on the dirt. Andrew Beyer later calculated that, had his Beyer Speed Figure system existed at the time, Secretariat's performance would have earned a figure of 139, the highest he has ever assigned.
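A speed figure of the kind referenced above is, in essence, a comparison of a raw final time against a par time for the distance, adjusted by a daily track variant and scaled into points. The sketch below is a toy illustration of that idea only, not Beyer's actual method; the par figure, points-per-second scale, and example times are all hypothetical.

```python
def toy_speed_figure(final_time_s: float, par_time_s: float,
                     track_variant_s: float = 0.0,
                     par_figure: float = 100.0,
                     points_per_second: float = 12.0) -> float:
    """Toy speed figure: start from a par figure and add points for every
    second the (variant-adjusted) final time beats the par time.
    All constants here are hypothetical, not the Beyer scale."""
    adjusted_time = final_time_s - track_variant_s
    return par_figure + points_per_second * (par_time_s - adjusted_time)

# Hypothetical example: beating a 146-second par time by 3 seconds on a neutral track
print(toy_speed_figure(final_time_s=143.0, par_time_s=146.0))  # 136.0
```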
A large crowd had started gathering around the paddock hours before the Belmont, many missing the races run earlier in the day for a chance to see the horses up close. Secretariat and Chenery were greeted with an enthusiasm that Chenery responded to with a wave or smile; Secretariat was imperturbable. A large cheer went up at the break, but as the race went on, the two most commonly reported reactions were disbelief and fear that Secretariat had gone too fast. When it was clear that Secretariat would win, the sound reportedly made the grandstand shake. The Blood-Horse magazine editor Kent Hollingsworth described the impact: "Two twenty-four flat! I don't believe it. Impossible. But I saw it. I can't breathe. He won by a sixteenth of a mile! I saw it. I have to believe it."
The race is widely considered the greatest performance by a North American racehorse. Secretariat became the ninth Triple Crown winner in history, and the first since Citation in 1948, a gap of 25 years. Bettors holding 5,427 winning parimutuel tickets on Secretariat never redeemed them, presumably keeping them as souvenirs (and because the tickets would have paid only $2.20 on a $2 bet).
Arlington Invitational
Three weeks after his win at Belmont, Secretariat was shipped to Arlington Park for the June 30th Arlington Invitational. Laurin explained: "Even before the Belmont, you remember, I said I really didn't know how I could give this horse a rest. He's so strong and full of energy. Well, this is only a week and a half after the Belmont, and believe me when I tell you, if I don't run this horse he's going to hurt himself in his stall. So we decided it would be nice to race him in Chicago to let the people in the Midwest have a chance to see him run." The race was run at miles with a purse of $125,000. The challengers were grouped as a single betting entry at 6–1: Secretariat was 1–20 (the legal minimum) and created a minus pool of $17,941.
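The 1–20 "legal minimum" and the minus pool mentioned above follow from parimutuel mechanics: winning bettors share the pool after the track's takeout, but the track must pay at least a mandated minimum price (1–20 odds correspond to $2.10 on a $2 bet), and when the calculated payout falls below that floor the track covers the shortfall, known as a minus pool. The sketch below illustrates the idea with hypothetical pool sizes and takeout; it is not a reconstruction of the actual Arlington figures, and breakage is ignored.

```python
def parimutuel_payout(win_pool: float, amount_on_winner: float,
                      takeout: float = 0.15, min_payout_per_2: float = 2.10):
    """Return (payout per $2 winning bet, track shortfall) for a win pool.
    Simplified model: if the computed payout falls below the mandated minimum,
    the track makes up the difference -- a 'minus pool'."""
    net_pool = win_pool * (1.0 - takeout)           # pool remaining after takeout
    raw_payout = net_pool / amount_on_winner * 2.0  # per $2 wagered, stake included
    payout = max(raw_payout, min_payout_per_2)
    shortfall = max(0.0, (payout - raw_payout) * amount_on_winner / 2.0)
    return payout, shortfall

# Hypothetical pool: $500,000 bet to win, $480,000 of it on the odds-on favorite.
payout, minus_pool = parimutuel_payout(500_000, 480_000)
print(f"${payout:.2f} per $2 bet, track shortfall ${minus_pool:,.0f}")
```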
Mayor Richard Daley of Chicago declared that the Saturday of the race was Secretariat Day. A crowd of 41,223 (the largest at Arlington in three decades) greeted his arrival on the track with sustained applause. Secretariat broke poorly but soon went to the lead, setting slow early fractions. He gathered momentum on the final turn and eventually won by nine lengths in 1:47 flat, just off the track record set by Damascus. George Plimpton commented, "With a better start, a horse to press him and less bow to his turns, Secretariat might have posted a time that would have stood a century."
The New York Times of July 10, 1973, reported that a number of Chicago fans in attendance had done as their New York counterparts had at the Belmont Stakes: $11,170 worth of winning tickets on Secretariat went uncashed.
Whitney Stakes
Secretariat next went to Saratoga, popularly nicknamed "the graveyard of champions", in preparation for the Whitney Stakes on August 4, where he would face older horses for the first time. On July 27, he put in a stunning workout of 1:34 for a mile on a sloppy track, a time that would have broken Saratoga's track record. On race day though, he was beaten by the Allen Jerkens-trained Onion, a four-year-old gelding that had set a track record at furlongs in his previous start. The track condition for the Whitney was labelled fast but was running slow, especially along the inside rail. Secretariat broke poorly and Onion led from the start, setting a slow pace running well off the rail. Down the backstretch, Turcotte chose to make his move along the rail rather than sweeping wide. Secretariat responded more sluggishly than usual and Turcotte went to the whip. Secretariat closed to within a head on the final turn before Onion pulled ahead in the straight to win by a length. A record crowd of more than 30,000 witnessed what was described as an "astonishing" upset.
Despite Jerkens's reputation as the "Giant Killer," Secretariat's stunning loss can possibly be attributed to a viral infection, which caused a low-grade fever and diarrhea. "I was learning then that anything could happen in horse racing," said Chenery. "We knew he had a low-grade infection. But we decided he was strong enough to win anyway, and we were wrong."
Secretariat lost his appetite and acted sluggishly for several days. Charles Hatton wrote: "He seemed distressingly ill walking off, and he missed the Travers. Returned to Belmont to point for the $250,000 Marlboro, the sport's pin-up horse looked bloody awful, rather like one of those sick paintings which betoken an inner theatre of the macabre. It required supernatural recuperative powers to recover as he did. He was subjected to four severe preps in two weeks. Astonishingly, he gained weight and blossomed with every trial."
Marlboro Cup
On September 15, Secretariat returned to Belmont Park in the inaugural Marlboro Cup, which was originally intended to be a match race with stablemate Riva Ridge, the 1972 Derby and Belmont Stakes winner. After Secretariat's loss in the Whitney, the field was expanded to invite top horses from across the country. Entries included 1972 turf champion and top California stakes winner Cougar II, Canadian champion Kennedy Road, 1972 American champion three-year-old colt Key to the Mint, Travers winner Annihilate 'Em (the only other three-year-old in the race), and Onion. Riva Ridge was assigned top weight of 127 pounds (one pound over the weight-for-age scale), Key to the Mint and Cougar II were at 126 pounds, scale weight, while Secretariat was at 124, three pounds over scale for his age. The field included five champions, and the seven starters had won 63 stakes races between them.
It rained the night before, but the track dried out by race time. Secretariat stalked a fast pace in fifth, while Riva Ridge rated just behind Onion and Kennedy Road. Around the turn, Secretariat raced wide and started to make up ground. Coming into the stretch, Secretariat overtook Riva Ridge, while the other early leaders dropped back. Secretariat drew away to win, completing miles in 1:45 , then a world record on the dirt for the distance. Riva Ridge ran second with Cougar II in third and Onion in fourth. Turcotte said, "Today he was the old Secretariat and he did it on his own." The purse for the Marlboro Cup was $250,000, then the highest prize money offered: the win made Secretariat the 13th thoroughbred millionaire in history.
Woodward Stakes
After the Marlboro Cup, the original plan was to enter Riva Ridge in the mile Woodward Stakes, just two weeks later, while Secretariat put in some slow workouts on the turf in preparation for the Man o' War Stakes in October. It rained before the Woodward and the track was sloppy, which Riva Ridge could not handle, so Secretariat was entered in his place. Secretariat led into the straight but was overtaken by the Allen Jerkens-trained four-year-old Prove Out, who pulled clear to win by lengths despite carrying seven more pounds than Secretariat under the weight-for-age conditions of the race. Prove Out ran the race of his life that day: his time was the second-fastest mile-and-a-half on the dirt in Belmont Park's history despite the sloppy conditions. Prove Out went on to beat Riva Ridge in that year's Jockey Club Gold Cup.
Man o' War Stakes
On October 8, just nine days after the Woodward, Secretariat was moved to turf for the Man O' War Stakes at a distance of miles. He faced Tentam, who had set a world record for miles on the turf earlier that summer, and five others. Secretariat went to the lead early, followed by Tentam, who gradually closed the gap down the backstretch. Tentam got to within a half-length before Secretariat responded, pulling away by three lengths. Tentam made another run around the far turn, but Secretariat again drew away, eventually winning by five lengths over Tentam, with Big Spruce seven and a half lengths further back in third. Secretariat set a course record time of 2:24. After the race, Turcotte explained that "when Tentam came up to him in the backstretch I just chirped to him and he pulled away."
Canadian International Stakes
The syndication deal for Secretariat precluded the horse racing past age three. Accordingly, Secretariat's last race was against older horses in the Canadian International Stakes over one and five-eighths miles on the turf at Woodbine Racetrack in Toronto, Ontario, Canada on October 28, 1973. The race was chosen in part because of long-time ties between E.P. Taylor and the Chenery family, and partly to honor Secretariat's Canadian connections, Laurin and Turcotte. Turcotte missed the race with a five-day suspension: Eddie Maple got the mount.
The day of the race was cold, windy and wet, but the Marshall turf course was firm. Despite the weather, some 35,000 people turned out to greet Secretariat in a "virtual hysteria." His biggest opponents were Kennedy Road, whom he had beaten in the Marlboro Cup, and Big Spruce, who had finished third in the Man o' War. Kennedy Road went to the early lead, while Secretariat moved to second after breaking from an outside post. On the backstretch, Secretariat made his move and forged to the lead. "Snorting steam in the raw twilight", he rounded the far turn with a 12-length lead before gearing down in the final furlong, ultimately winning by lengths. Once again, many winning tickets went uncashed by souvenir hunters.
After the race, Secretariat was brought to Aqueduct Racetrack where he was paraded with Turcotte dressed in the Meadow silks before a crowd of 32,990 in his final public appearance. "It's a sad day, and yet it's a great day," said Laurin. "I certainly wish he could run as a 4-year-old. He's a great horse and he loves to run."
Altogether, Secretariat won 16 of his 21 career races, with three seconds and one third, and total earnings of $1,316,808.
For 1973, Secretariat was again named Horse of the Year and also won Eclipse Awards as the American Champion Three-Year-Old Male Horse and the American Champion Male Turf Horse.
Retirement
Stud career
When Secretariat first retired to Claiborne Farm, his sperm showed some signs of immaturity, so he was bred to three non-thoroughbred mares in December 1973 to test his fertility. One of these, an Appaloosa named Leola, produced Secretariat's first foal in November 1974. Named First Secretary, the foal was a chestnut like his sire, but spotted like his dam.
Secretariat's first official foal crop, arriving in 1975, consisted of 28 foals, the best of which was Dactylographer, who won the William Hill Futurity in October 1977. The first crop also included Canadian Bound, who at the 1976 Keeneland July sale was the first yearling to break the $1 million barrier, selling for $1.5 million. Canadian Bound, however, was a complete failure in racing, and for several years, the value of Secretariat's offspring declined considerably, especially given the rising popularity of Northern Dancer's offspring in the sales ring.
Secretariat eventually sired a number of major stakes winners, including:
General Assembly, winner of the 1979 Travers Stakes, setting a track record of 2:00 flat that stood for 37 years.
Lady's Secret, 1986 Horse of the Year.
Risen Star, 1988 Preakness and Belmont Stakes winner.
Kingston Rule, 1990 Melbourne Cup winner, breaking the course record.
Tinners Way, born in 1990 to Secretariat's last crop, winner of the 1994 and 1995 Pacific Classic.
Ultimately, Secretariat officially sired 663 named foals, including 341 winners (51.4%) and 54 stakes winners (8.1%). There has been some criticism of Secretariat as a stallion, mainly because he did not produce male offspring of his own ability and did not leave a leading sire son behind, but his legacy is assured through the quality of his daughters, several of whom were excellent racers and even more of whom were excellent producers. In 1992, Secretariat was the leading broodmare sire in North America. Overall, Secretariat's daughters produced 24 Grade/Group 1 winners. As a broodmare sire, Secretariat's most notable progeny were:
Weekend Surprise, a stakes winner and the 1992 Kentucky Broodmare of the Year. Her sons include 1990 Preakness winner Summer Squall and 1992 Horse of the Year A.P. Indy.
Terlingua, a stakes winner and dam of leading sire Storm Cat.
Secrettame, a stakes winner and dam of important sire Gone West, whose descendants include Kentucky Derby and Preakness Stakes winner Smarty Jones.
Six Crowns, dam of champion two-year-old and sire Chief's Crown.
Sister Dot, dam of champion two-year-old and sire Dehere.
Celtic Assembly, dam of Volksraad, leading sire in New Zealand.
Betty's Secret, dam of Secreto, winner of The Derby, and Istabraq, three-time winner of the Champion Hurdle.
Through Weekend Surprise and Terlingua alone, Secretariat appears in the pedigree of numerous champions. Weekend Surprise's son A.P. Indy was the leading sire in North America in 2003 and 2006, and is the sire of 2003 Horse of the Year Mineshaft and 2007 Belmont Stakes winner Rags to Riches. He has also established a successful sire-line that leads to Kentucky Derby winners Orb and California Chrome. A.P. Indy's leading sire-line descendant is Tapit, who led the sire list in 2014–2015 and is the sire of Belmont Stakes winners Tonalist and Creator. Terlingua's son Storm Cat is also a two time leading sire, whose offspring include Giant's Causeway, three-time leading sire in North America. Storm Cat also sired Yankee Gentleman, who is the broodmare sire of 2015 Triple Crown winner American Pharoah. Both Storm Cat and A.P. Indy appear in the pedigree of 2018 Triple Crown winner Justify.
Inbreeding to Secretariat has also proven successful, as exemplified by numerous graded stakes winners, including two-time Horse of the Year Wise Dan, as well as sprint champion Speightstown.
Secretariat's paddock at Claiborne Farm bordered three other stallions: Drone, Sir Ivor, and Hall of Fame inductee Spectacular Bid. Secretariat did not pay much attention to Drone or Sir Ivor, but he and Spectacular Bid became friendly and occasionally raced each other along the fence line between their paddocks.
Death
In the fall of 1989, Secretariat became afflicted with laminitis—a painful and debilitating hoof condition. When his condition failed to improve after a month of treatment, he was euthanized on October 4 at the age of 19. Secretariat was buried at Claiborne Farm, given the rare honor of being buried whole (traditionally only the head, heart, and hooves of a winning race horse are buried).
At the time of Secretariat's death, the veterinarian who performed the necropsy, Thomas Swerczek, head pathologist at the University of Kentucky, did not weigh Secretariat's heart, but stated, "We just stood there in stunned silence. We couldn't believe it. The heart was perfect. There were no problems with it. It was just this huge engine." Later, Swerczek also performed a necropsy on Sham, who died in 1993. Swerczek did weigh Sham's heart, and it was . Based on Sham's measurement, and having necropsied both horses, he estimated Secretariat's heart probably weighed , or about 2.5 times that of the average horse ().
An extremely large heart is a trait that occasionally occurs in thoroughbreds, hypothesized to be linked to a genetic condition, called the "x-factor", passed down in specific inheritance patterns. The x-factor can be traced to the historic racehorse Eclipse, who was necropsied after his death in 1789. Because Eclipse's heart appeared to be much larger than the hearts of other horses, it was weighed, and found to be , almost twice the normal weight. Eclipse is believed to have passed the trait on via his daughters, and pedigree research verified that Secretariat traces his dam line to a daughter of Eclipse. Secretariat's success as a broodmare sire has been linked by some to this large heart theory. However, it has not been proven whether the x-factor exists, let alone if it contributes to athletic ability.
Honors and recognition
Secretariat was inducted into the National Museum of Racing and Hall of Fame in 1974, the year following his Triple Crown victory. In 1994, Sports Illustrated ranked Secretariat #17 in their list of the 40 greatest sports figures of the past 40 years. In 1999, ESPN listed him 35th of the 100 greatest North American athletes of the 20th century, the highest of three non-humans on the list (the other two were also racehorses: Man o' War at 84th and Citation at 97th). Secretariat ranked second behind Man o' War in The Blood-Horse's List of the Top 100 U.S. Racehorses of the 20th Century. He was also ranked second behind Man o' War by both a six-member panel of experts assembled by the Associated Press, and a Sports Illustrated panel of seven experts.
On October 16, 1999, in a ceremony conducted in the winner's circle at Keeneland Race Course in Lexington, the U.S. Postal Service honored Secretariat with a 33-cent postage stamp bearing his image. In 2005, Secretariat was featured in ESPN Classic's show Who's No. 1? in the episode "Greatest Sports Performances". He was the only nonhuman on the list, with his run at Belmont ranking second behind Wilt Chamberlain's 100-point game. On May 2, 2007, Secretariat was inducted into the Kentucky Athletic Hall of Fame, marking the first time an animal received this honor. In 2013, Secretariat was inducted into the Canadian Horse Racing Hall of Fame in honor of his victory in the Canadian International 40 years earlier. Secretariat was also the focus of a 2013 segment of 60 Minutes Sports. In March 2016, Secretariat's Triple Crown victory was rated #13 in the Sports Illustrated listing of the 100 Greatest Moments in Sports History.
Due to Secretariat's enduring popularity, Chenery remained a prominent figure in racing and a powerful advocate for thoroughbred aftercare and veterinary research until her death in 2017. In 2004, the Maker's Mark Secretariat Center, dedicated to reschooling former racehorses and matching them to new homes, opened at the Kentucky Horse Park. In 2010, Chenery developed the Secretariat Vox Populi ("voice of the people") Award, which is voted for by racing fans. It is intended to acknowledge "the horse whose popularity and racing excellence best resounded with the American public and gained recognition for thoroughbred racing." The consideration of the racing fan's engagement is what distinguishes the Vox Populi award from others. The first honoree in 2010 was Zenyatta, that year's Horse of the Year, while the second award went to Rapid Redux, a former claimer who went on to win 22 consecutive races at smaller racetracks. Paynter received the 2012 award for his battle with laminitis, the same condition that led to Secretariat's death. "Paynter's popularity stems from his ability to battle and exceed expectations, making him the perfect choice as the recipient of this year's Vox Populi Award", said Chenery. "After seeing firsthand the devastating effects of this disease, I am even more convinced that the industry must continue to diligently fight laminitis. The progress we have made to date clearly benefited Paynter—a beautiful colt with a tremendous spirit."
Various states and localities have also honored Secretariat. According to ESPN, 263 roads in the United States are named after him, more than any other athlete. Secretariat Drive is the most common option. In Illinois, the Secretariat Stakes was created in 1974 to honor his appearance at Arlington Park in 1973. In honor of Secretariat and Kentucky's horse racing history, the University of Kentucky football uniforms incorporate blue-and-white checkers reminiscent of the silks of Meadow Stables.
In Virginia, The Meadow, the farm at which he was foaled, was listed on the National Register of Historic Places and is now known as The Meadow Historic District. In 2023, the Virginia ABC honored the 50th anniversary of Secretariat's Triple Crown win by producing Ragged Branch Secretariat Reserve bourbon. Secretariat has also been honored multiple times by Virginia's General Assembly with Triple Crown anniversary proclamations; in 2023, Caroline County received a proclamation from the General Assembly marking the 50th anniversary of his record-setting Triple Crown win.
Statues
In 1974, Paul Mellon commissioned a bronze statue, sometimes known as Secretariat in Full Stride, from John Skeaping. The life-size statue remained in the center of the walking ring at Belmont Park until 1988 when it was replaced by a replica. The original is now located at the National Museum of Racing and Hall of Fame.
The Kentucky Horse Park has two other life-sized statues of Secretariat. The first, created by Jim Reno in 1992, shows Secretariat as an older sire, while the second, completed by Edwin Bogucki in 2004, shows him being led into the winner's circle after the Kentucky Derby.
In 2015, a statue of Secretariat and Ron Turcotte crossing the finish line at the Belmont Stakes was unveiled in Grand Falls, New Brunswick, Turcotte's hometown.
On October 12, 2019, a new monument was unveiled during the Secretariat Festival at Keeneland in Lexington. The two and a half times life-size bronze statue by Jocelyn Russell shows Secretariat and Turcotte winning the Kentucky Derby. After the Festival, it was permanently relocated to the center of the traffic circle at Old Frankfort Pike and Alexandria Drive. A duplicate statue by Russell began a tour in Ashland, Virginia in March 2023.
Media
Louisville’s Churchill Downs was a set location for several racing scenes in the 2010 film, Secretariat. The film, starring Diane Lane as Penny Chenery, John Malkovich as Lucien Laurin, and Otto Thorwarth as Ron Turcotte, was written by Mike Rich, directed by Randall Wallace, and produced by Walt Disney Pictures.
A fictionalized version of Secretariat appears in the animated series BoJack Horseman, wherein he is depicted as an anthropomorphic horse voiced by John Krasinski.
Racing statistics
Secretariat's earnings in 1973 were, at the time, a single-season record.
Pedigree
Secretariat was sired by Bold Ruler, who led the North American sire list eight times, more than any other stallion in the 20th century. He also led the juvenile (two-year-old) sire list a record six times. Before Secretariat's Triple Crown run, Bold Ruler was often categorized as a sire of precocious juveniles that lacked stamina or did not train on past age two. However, even before Secretariat, Bold Ruler actually had sired 11 stakes winners of races at 10 furlongs or more. Ultimately, seven of the ten Kentucky Derby winners in the 1970s can be traced directly to Bold Ruler in their lines, including Secretariat and fellow Triple Crown winner Seattle Slew.
Secretariat's dam was Somethingroyal, the 1973 Kentucky Broodmare of the Year. Although Somethingroyal was unplaced in her only start, she had an excellent pedigree. Her sire Princequillo was the leading broodmare sire from 1966 to 1970 and was noted as a source of stamina and soundness. Her dam Imperatrice was a stakes winner who was purchased by Christopher Chenery at a dispersal sale in 1947 for $30,000. Imperatrice produced several stakes winners and stakes producers for the Meadow. Prior to foaling Secretariat at age 18, Somethingroyal had already produced three stakes winners: Sir Gaylord, First Family and Syrian Sea, the latter a full sister to Secretariat. Sir Gaylord became an important sire, whose offspring included Epsom Derby winner Sir Ivor.
Breeders speak of a "nick" occurring when a sire or grandsire produces significantly better offspring from the daughters of one particular sire than with mares from other bloodlines. The breeding of Bold Ruler with Somethingroyal is an example of a famous nick between Bold Ruler's sire Nasrullah and daughters of Princequillo. The goal was to balance the speed, precocity, and fiery temperament provided by the Nasrullah side of the pedigree with Princequillo's stamina, soundness, and sensible temperament.
| Biology and health sciences | Individual animals | Animals |
62404 | https://en.wikipedia.org/wiki/Lovebird | Lovebird | Lovebird is the common name for the genus Agapornis, a small group of parrots in the Old World parrot family Psittaculidae. Eight of the nine species in the genus are native to the African continent, while the grey-headed lovebird is native to the island of Madagascar. These parrots are social and affectionate; the name comes from their strong, monogamous pair bonding and the long periods which paired birds spend sitting together. Lovebirds live in small flocks and eat fruit, vegetables, grasses, and seeds. Some species are kept as pets, and several coloured mutations have been selectively bred in aviculture. The average lifespan is 10 to 12 years.
Description
Lovebirds are in length, up to 24 cm in wingspan with 9 cm for a single wing, and in weight. They are among the smallest parrots, characterised by a stocky build, a short blunt tail, and a relatively large, sharp beak. Wildtype lovebirds are mostly green with a variety of colours on their upper body, depending on the species. The Fischer's lovebird, black-cheeked lovebird, and the masked lovebird have a prominent white ring around their eyes. Many colour mutant varieties have been produced by selective breeding of the species that are popular in aviculture. There are 30 known plumage colour variations among lovebirds, which are caused by pigments called psittacofulvins.
Taxonomy
The genus Agapornis was described by the English naturalist Prideaux John Selby in 1836. The name combines the Ancient Greek agape meaning "love" and ornis meaning "bird". The type species is the black-collared lovebird (Agapornis swindernianus), which was originally placed into the genus Psittacus within a section called Psittacula by naturalist Heinrich Kuhl. Selby contended that this placement rather than a separate genus was "artificial" and done "without regard to the structure, habits, or distribution of the species."
The genus contains nine species of which five are monotypic and four are divided into subspecies. They are native to mainland Africa and the island of Madagascar. In the wild, the different species are separated geographically.
Traditionally, lovebirds are divided into three groups:
the sexually dimorphic species: Madagascar, Abyssinian, and red-headed lovebird
the intermediate species: peach-faced lovebird
the white-eye-ringed species: masked, Fischer's, Lilian's, and black-cheeked lovebirds
However, this division is not fully supported by phylogenetic studies, as the species of the dimorphic group are not grouped together in a single clade.
Species
Species and subspecies:
Nesting
Depending on the species of lovebird, the female will carry nesting material into the nest in various ways. The peach-faced lovebird, for example, tucks nesting material in the feathers of its rump.
Feral populations
Feral populations of Fischer's lovebirds and masked lovebirds live in cities of East Africa. Interspecific hybrids exist between these two species. The hybrid has a reddish-brown head and orange on the upper chest, but otherwise resembles the masked lovebird.
Two feral colonies are present in the Pretoria region (Silver Lakes, Faerie Glen and Centurion) in South Africa; they probably originated from birds that escaped from aviaries. They consist mostly of masked, black-cheeked, Fischer's and hybrid birds and vary in colour: white (not albino), yellow and blue birds occur in many cases, and the white eye-rings are very prominent.
Diet and health
Parrot species (including cockatiels) are biologically vegetarian.
Wild lovebirds may harbor diseases such as avian polyomavirus.
| Biology and health sciences | Psittaciformes | Animals |
62405 | https://en.wikipedia.org/wiki/Tarsier | Tarsier | Tarsiers ( ) are haplorhine primates of the family Tarsiidae, which is, itself, the lone extant family within the infraorder Tarsiiformes. Although the group was, prehistorically, more globally widespread, all of the species living today are restricted to Maritime Southeast Asia, predominantly in Brunei, Indonesia, Malaysia and the Philippines. They are found primarily in forested habitats, especially forests that have liana, since the vine gives tarsiers vertical support when climbing trees.
Evolutionary history
Fossil record
Fossils of tarsiiform primates have been found in Asia, Europe, and North America (with disputed fossils from Northern Africa), but extant tarsiers are restricted to several Southeast Asian islands. The fossil record indicates that their dentition has not changed much, except in size, over the past 45 million years.
Within the family Tarsiidae, there are two extinct genera—Xanthorhysis and Afrotarsius; however, the placement of Afrotarsius is not certain, and it is sometimes listed in its own family, Afrotarsiidae, within the infraorder Tarsiiformes, or considered an anthropoid primate.
So far, four fossil species of tarsiers are known from the fossil record:
Tarsius eocaenus is known from the Middle Eocene in China.
Hesperotarsius thailandicus lived during the Early Miocene in northwestern Thailand.
Hesperotarsius sindhensis lived during the Miocene in Pakistan.
Tarsius sirindhornae lived during the Middle Miocene in northern Thailand.
The genus Tarsius has a longer fossil record than any other primate genus, but the assignment of the Eocene and Miocene fossils to the genus is dubious.
Classification
The phylogenetic position of extant tarsiers within the order Primates has been debated for much of the 20th century, and tarsiers have alternately been classified with strepsirrhine primates in the suborder Prosimii, or as the sister group to the simians (Anthropoidea) in the infraorder Haplorhini. Analysis of SINE insertions, a type of macromutation to the DNA, is argued to offer very persuasive evidence for the monophyly of Haplorhini, where other lines of evidence, such as DNA sequence data, remain ambiguous. Thus, some systematists argue the debate is conclusively settled in favor of a monophyletic Haplorrhini. In common with simians, tarsiers have a mutation in the L-gulonolactone oxidase (GULO) gene, which prevents their bodies from synthesizing vitamin C so they must find it in the diet. Since the strepsirrhines do not have this mutation and have retained the ability to make vitamin C, the genetic trait that confers the need for it in the diet would tend to place tarsiers with haplorhines.
At a lower phylogenetic level, the tarsiers have, until recently, all been placed in the genus Tarsius, while it was debated whether the species should be placed in two (a Sulawesi and a Philippine-western group) or three separate genera (Sulawesi, Philippine and western groups). Species level taxonomy is complex, with morphology often being of limited use compared to vocalizations. Further confusion existed over the validity of certain names. Among others, the widely used T. dianae has been shown to be a junior synonym of T. dentatus, and comparably, T. spectrum is now considered a junior synonym of T. tarsier.
In 2010, Colin Groves and Myron Shekelle suggested splitting the genus Tarsius into three genera: the Philippine tarsiers (genus Carlito), the western tarsiers (genus Cephalopachus), and the eastern tarsiers (genus Tarsius). This was based on differences in dentition, eye size, limb and hand length, tail tufts, tail sitting pads, the number of mammae, chromosome count, socioecology, vocalizations, and distribution. The senior taxon of the species, T. tarsier, was restricted to the population of Selayar island, which then required the resurrection of the defunct taxon T. fuscus.
In 2014, scientists published the results of a genetic study from across the range of the Philippine tarsier, revealing previously unrecognised genetic diversity. Three subspecies are recognised in the established taxonomy: Carlito syrichta syrichta from Leyte and Samar, C. syrichta fraterculus from Bohol, and C. syrichta carbonarius from Mindanao. Their analysis of mitochondrial and nuclear DNA sequences suggested that ssp. syrichta and fraterculus may represent a single lineage, whereas ssp. carbonarius may represent two lineages – one occupies the majority of Mindanao while the other is in northeastern Mindanao and the nearby Dinagat Island, which the authors termed the 'Dinagat-Caraga tarsier'. More detailed studies that integrate morphological data will be needed to review the taxonomy of tarsiers in the Philippines.
Infraorder Tarsiiformes
Family Tarsiidae: tarsiers
Genus Carlito
Philippine tarsier, Carlito syrichta
C. s. syrichta
C. s. fraterculus (to be combined into C. s. syrichta?)
C. s. carbonarius
Genus Cephalopachus
Horsfield's tarsier, Cephalopachus bancanus
C. b. bancanus
C. b. natunensis
C. b. boreanus
C. b. saltator
Genus Tarsius
Dian's tarsier, T. dentatus
Makassar tarsier T. fuscus
Lariang tarsier, T. lariang
Niemitz's tarsier, T. niemitzi
Peleng tarsier, T. pelengensis
Sangihe tarsier, T. sangirensis
Gursky's spectral tarsier, T. spectrumgurskyae
Jatna's tarsier, T. supriatnai
Spectral tarsier, T. tarsier
Siau Island tarsier, T. tumpara
Pygmy tarsier, T. pumilus
Wallace's tarsier, T. wallacei
Anatomy and physiology
Tarsiers are small animals with enormous eyes; each eyeball is approximately in diameter and is as large as, or in some cases larger than, its entire brain. The unique cranial anatomy of the tarsier results from the need to balance their large eyes and heavy head so they are able to wait silently for nutritious prey. Tarsiers have a strong auditory sense, and their auditory cortex is distinct. Tarsiers also have long hind limbs, owing mostly to the elongated tarsus bones of the feet, from which the animals get their name. The combination of their elongated tarsi and fused tibiofibulae makes them morphologically specialized for vertical clinging and leaping. The head and body range from 10 to 15 cm in length, but the hind limbs are about twice this long (including the feet), and they also have a slender tail from 20 to 25 cm long. Their fingers are also elongated, with the third finger being about the same length as the upper arm. Most of the digits have nails, but the second and third toes of the hind feet bear claws instead, which are used for grooming. Tarsiers have soft, velvety fur, which is generally buff, beige, or ochre in color.
Tarsiers' morphology allows them to move their heads 180 degrees in either direction, enabling them to see 360 degrees around them. Their dental formula is also unique. Unlike many nocturnal vertebrates, tarsiers lack a light-reflecting layer (tapetum lucidum) of the retina and have a fovea.
The tarsier's brain is different from that of other primates in terms of the arrangement of the connections between the two eyes and the lateral geniculate nucleus, which is the main region of the thalamus that receives visual information. The sequence of cellular layers receiving information from the ipsilateral (same side of the head) and contralateral (opposite side of the head) eyes in the lateral geniculate nucleus distinguishes tarsiers from lemurs, lorises, and monkeys, which are all similar in this respect. Some neuroscientists suggested that "this apparent difference distinguishes tarsiers from all other primates, reinforcing the view that they arose in an early, independent line of primate evolution."
Philippine tarsiers are capable of hearing frequencies as high as 91 kHz. They are also capable of vocalizations with a dominant frequency of 70 kHz.
Unlike most primates, male tarsiers do not have bacula.
Behavior
Pygmy tarsiers differ from other species in terms of their morphology, communication, and behavior. The differences in morphology that distinguish pygmy tarsiers from other species are likely based on their high altitude environment.
All tarsier species are nocturnal in their habits, but like many nocturnal organisms, some individuals may show more or less activity during the daytime. Based on the anatomy of all tarsiers, they are all adapted for leaping even though they all vary based on their species.
Ecological variation is responsible for differences in morphology and behavior in tarsiers because different species become adapted to local conditions based on the level of altitude. For example, the colder climate at higher elevations can influence cranial morphology.
Tarsiers tend to be extremely shy animals and are sensitive to bright lights, loud noises, and physical contact. They have been reported to behave suicidally when stressed or kept in captivity.
Predators
Due to their small size, tarsiers are prey to various other animals. Tarsiers primarily inhabit the lower vegetation layers as they face threats from both terrestrial predators such as cats, lizards, and snakes, and aerial predators such as owls and birds. By residing in these lower layers, they can minimize their chances of being preyed upon by staying off the ground and yet still low enough to avoid birds of prey.
Tarsiers, though known as being shy and reclusive, are known to mob predators. In nature, mobbing is the act of harassing predators to reduce the chance of being attacked. When predators are near, tarsiers will make a warning vocalization. Other tarsiers will respond to the call, and within a short period of time, 2-10 tarsiers will show up to mob the predator. The majority of the group consists of adult males, but there will occasionally be a female or two. While tarsier groups only contain one adult male, males from other territories will join in the mob event, meaning there are multiple alpha male tarsiers attacking the predator.
Diet
Tarsiers are the only entirely carnivorous extant primates, albeit mainly insectivorous, catching invertebrates by jumping at them. The tarsiers also opportunistically prey on a variety of arboreal and small forest animals, including orthopterans, scarab beetles, small flying frogs, lizards and, occasionally, amphibious crabs that climb into the lower sections of trees. However, it has been found that their favorite prey are arthropods, beetles, arachnids, cockroaches, grasshoppers, katydids, cicadas, and walking sticks. Tarsiers are, rarely, also known to prey on baby birds, small tree snakes and even baby bats.
Reproduction
Gestation takes about six months, and tarsiers give birth to single offspring. Young tarsiers are born furred, and with open eyes, and are able to climb within a day of birth. They reach sexual maturity by the end of their second year. Sociality and mating system varies, with tarsiers from Sulawesi living in small family groups, while Philippine and western tarsiers are reported to sleep and forage alone.
Conservation
Tarsiers have never formed successful breeding colonies in captivity. This may be due in part to their special feeding requirements.
A sanctuary near the town of Corella, on the Philippine island of Bohol, is having some success restoring tarsier populations. The Philippines Tarsier Foundation (PTFI) has developed a large, semi-wild enclosure known as the Tarsier Research and Development Center. Carlito Pizarras, also known as the "Tarsier man", founded this sanctuary where visitors can observe tarsiers in the wild. As of 2011, the sanctuary was maintained by him and his brother. The trees in the sanctuary are populated with nocturnal insects that make up the tarsier's diet.
All tarsier species have a conservation status of vulnerable to extinction. Tarsiers are conservation dependent, meaning that they require more and better-managed protected habitats or they will become extinct in the future.
The first quantitative study on the activity patterns of the captive Philippine tarsier (Tarsius syrichta) was conducted at the Subayon Conservation Centre for the Philippine Tarsier in Bilar, Bohol, Philippines. From December 2014 to January 2016, female and male T. syrichta were observed to record how they apportioned their time among normal activities during the non-mating versus the mating season. During the non-mating season, a significant amount of their waking hours was spent scanning, followed by resting, foraging, and traveling. Feeding, scent-marking, self-grooming, social activities, and other activities were minimal. Scanning was still a common activity among the paired sexes during the mating season; however, resting markedly decreased while travel and foraging increased. These findings are being used to inform the continued captive housing of T. syrichta, which is considered necessary because of anthropogenic threats.
The 2008-described Siau Island tarsier in Indonesia is regarded as Critically Endangered and was listed among The World's 25 Most Endangered Primates by Conservation International and the IUCN/SCC Primate Specialist Group in 2008.
The Malaysian government protects tarsiers by listing them in the Totally Protected Animals of Sarawak, the Malaysian state in Borneo where they are commonly found.
A new scheme to conserve the tarsiers of Mount Matutum near Tupi in South Cotabato on the island of Mindanao is being organised by the Tupi civil government and the charity Endangered Species International (ESI). Tarsier UK are also involved on the margins helping the Tupi Government to educate the children of Tupi about the importance of the animal. ESI is hoping to build a visitor centre on the slopes of Mount Matutum and help the local indigenous peoples to farm more environmentally and look after the tarsiers. The first stage in this is educating the local peoples on the importance of keeping the animal safe and secure. A number of native tarsier-friendly trees have been replanted on land which had been cleared previously for fruit tree and coconut tree planting.
| Biology and health sciences | Primates | null |
62413 | https://en.wikipedia.org/wiki/Schooner | Schooner | A schooner ( ) is a type of sailing vessel defined by its rig: fore-and-aft rigged on all of two or more masts and, in the case of a two-masted schooner, the foremast generally being shorter than the mainmast. A common variant, the topsail schooner also has a square topsail on the foremast, to which may be added a topgallant. Differing definitions leave uncertain whether the addition of a fore course would make such a vessel a brigantine. Many schooners are gaff-rigged, but other examples include Bermuda rig and the staysail schooner.
Etymology
The name "schooner" first appeared in eastern North America in the early 1700s. The name may be related to a Scots word meaning to skip over water, or to skip stones.
History
The origins of schooner rigged vessels is obscure, but there is good evidence of them from the early 17th century in paintings by Dutch marine artists. The earliest known illustration of a schooner depicts a yacht owned by the mayors (Dutch: burgemeesters) of Amsterdam, drawn by the Dutch artist Rool and dated 1600. Later examples show schooners (Dutch: schoeners) in Amsterdam in 1638 and New Amsterdam in 1627. Paintings by Van de Velde (1633–1707) and an engraving by Jan Kip of the Thames at Lambeth, dated 1697, suggest that schooner rig was common in England and Holland by the end of the 17th century. The Royal Transport was an example of a large British-built schooner, launched in 1695 at Chatham.
The schooner rig was used in vessels with a wide range of purposes. On a fast hull, good ability to windward was useful for privateers, blockade runners, slave ships, smaller naval craft and opium clippers. Packet boats (built for the fast conveyance of passengers and goods) were often schooners. Fruit schooners were noted for their quick passages, taking their perishable cargoes on routes such as the Azores to Britain. Some pilot boats adopted the rig. The fishing vessels that worked the Grand Banks of Newfoundland were schooners, and held in high regard as an outstanding development of the type. In merchant use, the ease of handling in confined waters and smaller crew requirements made schooners a common rig, especially in the 19th century. Some schooners worked on deep sea routes. In British home waters, schooners usually had cargo-carrying hulls that were designed to take the ground in drying harbours (or, even, to unload dried out on an open beach). The last of these once-common craft had ceased trading by the middle of the 20th century. Some very large schooners with five or more masts were built in the United States from circa 1880–1920. They mostly carried bulk cargoes such as coal and timber. In yachting, schooners predominated in the early years of the America's Cup. In more recent times, schooners have been used as sail training ships.
The type was further developed in British North America starting around 1713. In the 1700s and 1800s in what is now New England and Atlantic Canada schooners became popular for coastal trade, requiring a smaller crew for their size compared to then traditional ocean crossing square rig ships, and being fast and versatile. Three-masted schooners were introduced around 1800.
Schooners were popular on both sides of the Atlantic in the late 1800s and early 1900s. By 1910, 45 five-masted and 10 six-masted schooners had been built in Bath, Maine and in towns on Penobscot Bay, including Wyoming, which is considered the largest wooden ship ever built. The Thomas W. Lawson was the only seven-masted schooner built.
Rig types
The rig is rarely found on a hull of less than 50 feet LOA, and small schooners are generally two-masted. In the two decades around 1900, larger multi-masted schooners were built in New England and on the Great Lakes with four, five, six, or even, seven masts. Schooners were traditionally gaff-rigged, and some schooners sailing today are reproductions of famous schooners of old, but modern vessels tend to be Bermuda rigged (or occasionally junk-rigged). While a sloop rig is simpler and cheaper, the schooner rig may be chosen on a larger boat so as to reduce the overall mast height and to keep each sail to a more manageable size, giving a mainsail that is easier to handle and to reef. An issue when planning a two-masted schooner's rig is how best to fill the space between the masts: for instance, one may adopt (i) a gaff sail on the foremast (even with a Bermuda mainsail), or (ii) a main staysail, often with a fisherman topsail to fill the gap at the top in light airs.
Various types of schooners are defined by their rig configuration. Most have a bowsprit although some were built without one for crew safety, such as Adventure.
The following varieties were built:
Grand Banks fishing schooner: includes a gaff topsail on the main mast and a fisherman's staysail; in winter topmasts and their upper sails are taken down. Bluenose was one such example.
Topsail schooner/Square topsail schooner: includes square topsails. A version with raked masts and known for its great speed, called the Baltimore Clipper was popular in the early 1800s.
Four- to seven-masted schooners: these designs spread the sail area over many smaller sails, at a time when sails were hoisted by hand, though mechanical assistance was used as the ships, sails, and gaffs became too large and heavy to raise manually. These were used for coastal trade on the Atlantic coast of North America, the West Indies, South America, and some trans-Atlantic voyages.
Tern schooner: a three-masted schooner very popular between 1880 and 1920. Wawona, the largest of this type built, sailed on the West Coast of the United States from 1897 to 1947.
Uses
Schooners were built primarily for cargo, passengers, and fishing.
The Norwegian polar schooner Fram was used by both Fridtjof Nansen and Roald Amundsen in their explorations of the poles.
Bluenose was both a successful fishing boat and a racer. America, eponym of America's Cup, was one of the few schooners ever designed for racing. This race was long dominated by schooners. Three-masted schooner Atlantic set the transatlantic sailing record for a monohull in the 1905 Kaiser's Cup race. The record remained unbroken for nearly 100 years.
Gallery
Sail plans
Examples
| Technology | Naval transport | null |
62462 | https://en.wikipedia.org/wiki/Umami | Umami | Umami ( from ), or savoriness, is one of the five basic tastes. It is characteristic of broths and cooked meats.
People taste umami through taste receptors that typically respond to glutamates and nucleotides, which are widely present in meat broths and fermented products. Glutamates are commonly added to some foods in the form of monosodium glutamate (MSG), and nucleotides are commonly added in the form of disodium guanylate, inosine monophosphate (IMP) or guanosine monophosphate (GMP). Since umami has its own receptors rather than arising out of a combination of the traditionally recognized taste receptors, scientists now consider umami to be a distinct taste.
Foods that have a strong umami flavor include meats, shellfish, fish (including fish sauce and preserved fish such as Maldives fish, katsuobushi, sardines, and anchovies), dashi, tomatoes, mushrooms, hydrolyzed vegetable protein, meat extract, yeast extract, kimchi, cheeses, and soy sauce.
Etymology
A loanword from Japanese , umami can be translated as "pleasant savory taste". This neologism was coined in 1908 by Japanese chemist Kikunae Ikeda from a nominalization of umai () "delicious". The compound (with mi () "taste") is used for a more general sense of a food as delicious. There is no English equivalent to umami; however, some close descriptions are "meaty", "savory", and "broth-like".
Background
Scientists have debated whether umami was a basic taste since Kikunae Ikeda first proposed its existence in 1908. In 1985, the term umami was recognized as the scientific term to describe the taste of glutamates and nucleotides at the first Umami International Symposium in Hawaii. Umami represents the taste of the amino acid L-glutamate and 5'-ribonucleotides such as guanosine monophosphate (GMP) and inosine monophosphate (IMP). It can be described as a pleasant "brothy" or "meaty" taste with a long-lasting, mouthwatering and coating sensation over the tongue.
The sensation of umami is due to the detection of the carboxylate anion of glutamate in specialized receptor cells present on human and other animal tongues. Some 52 peptides may be responsible for detecting umami taste. Its effect is to balance taste and round out the overall flavor of a dish. Umami enhances the palatability of a wide variety of foods.
Glutamate in acid form (glutamic acid) imparts little umami taste, whereas the salts of glutamic acid, known as glutamates, give the characteristic umami taste due to their ionized state. GMP and IMP amplify the taste intensity of glutamate. Adding salt to the free acids also enhances the umami taste. It is disputed whether umami is truly an independent taste because standalone glutamate without table salt ions (Na+) is perceived as sour; sweet and umami tastes share a taste receptor subunit, with salty taste blockers reducing discrimination between monosodium glutamate and sucrose; and some people cannot distinguish umami from a salty taste.
Monosodium L-aspartate has an umami taste about four times less intense than MSG, whereas ibotenic acid and tricholomic acid (likely as their salts or with salt) are claimed to be many times more intense.
Discovery
Glutamate has a long history in cooking. Fermented fish sauces (garum), which are rich in glutamate, were used widely in ancient Rome, fermented barley sauces (murri) rich in glutamate were used in medieval Byzantine and Arab cuisine, and fermented fish sauces and soy sauces have histories going back to the third century in China. Cheese varieties are rich in glutamate and umami flavor. In the late 1800s, chef Auguste Escoffier, who opened restaurants in Paris and London, created meals that combined umami with salty, sour, sweet, and bitter tastes. However, he did not know the chemical source of this unique quality.
Umami was first scientifically identified in 1908 by Kikunae Ikeda, a professor of the Tokyo Imperial University. He found that glutamate was responsible for the palatability of the broth from kombu seaweed. He noticed that the taste of kombu dashi was distinct from sweet, sour, bitter, and salty and named it umami.
Professor Shintaro Kodama, a disciple of Ikeda, discovered in 1913 that dried bonito flakes (a type of tuna) contained another umami substance. This was the ribonucleotide IMP. In 1957, Akira Kuninaka realized that the ribonucleotide GMP present in shiitake mushrooms also conferred the umami taste. One of Kuninaka's most important discoveries was the synergistic effect between ribonucleotides and glutamate. When foods rich in glutamate are combined with ingredients that have ribonucleotides, the resulting taste intensity is higher than would be expected from merely adding the intensity of the individual ingredients.
This synergy of umami may help explain various classical food pairings: the Japanese make dashi with kombu seaweed and dried bonito flakes; the Chinese add Chinese leek and Chinese cabbage to chicken soup, as do Scots in the similar Scottish dish of cock-a-leekie soup; and Italians grate the Parmigiano-Reggiano cheese on a variety of different dishes.
Properties
Umami has a mild but lasting aftertaste associated with salivation and a sensation of furriness on the tongue, stimulating the throat, the roof and the back of the mouth. By itself, umami is not palatable, but it makes a great variety of foods pleasant, especially in the presence of a matching aroma. Like other basic tastes, umami is pleasant only within a relatively narrow concentration range.
The optimum umami taste depends also on the amount of salt, and at the same time, low-salt foods can maintain a satisfactory taste with the appropriate amount of umami. One study showed that ratings of pleasantness, taste intensity, and ideal saltiness of low-salt soups were greater when the soup contained umami, whereas low-salt soups without umami were less pleasant. Another study demonstrated that using fish sauce as a source of umami could reduce the need for salt by 10–25% to flavor such foods as chicken broth, tomato sauce, or coconut curry while maintaining overall taste intensity.
Some population groups, such as the elderly, may benefit from umami taste because their taste and smell sensitivity may be impaired by age and medication. The loss of taste and smell can contribute to poor nutrition, increasing their risk of disease. Some evidence exists to show umami not only stimulates appetite, but also may contribute to satiety.
Foods rich in umami components
Many foods are rich in the amino acids and nucleotides imparting umami. Naturally occurring glutamate can be found in meats and vegetables. Inosine (IMP) comes primarily from meats and guanosine (GMP) from vegetables. Mushrooms, especially dried shiitake, are rich sources of umami flavor from guanylate. Smoked or fermented fish are high in inosinate, and shellfish in adenylate. Protein in food is tasteless; however, processes such as fermentation, curing, or heat treatment release glutamate and other amino acids.
Generally, umami taste is common to foods that contain high levels of L-glutamate, IMP and GMP, most notably in fish, shellfish, cured meats, meat extracts, mushrooms, vegetables (e.g., ripe tomatoes, Chinese cabbage, spinach, celery, etc.), green tea, hydrolyzed vegetable protein, and fermented and aged products involving bacterial or yeast cultures, such as cheeses, shrimp pastes, fish sauce, soy sauce, natto, nutritional yeast, and yeast extracts such as Vegemite and Marmite.
Studies have shown that the amino acids in breast milk are often the first encounter humans have with umami. Glutamic acid makes up half of the free amino acids in breast milk.
Perceptual independence from salty and sweet taste
Since all umami taste compounds are sodium salts, the perceptual differentiation of salty and umami tastes has been difficult in taste tests and studies have found as much as 27% of certain populations may be umami "hypotasters".
Furthermore, glutamate on its own (glutamic acid) with no table salt ions (Na+) elicits a sour taste, and in psychophysical tests sodium or potassium salt cations seem to be required to produce a perceptible umami taste.
Sweet and umami tastes both utilize the taste receptor subunit TAS1R3, with salt taste blockers reducing discrimination between monosodium glutamate and sucrose in rodents.
If umami does not have perceptual independence, it could be classified with other tastes such as fat, carbohydrate, metallic, and calcium, which can be perceived at high concentrations but may not offer a prominent taste experience.
Taste receptors
Most taste buds on the tongue and other regions of the mouth can detect umami taste, irrespective of their location. (The tongue map in which different tastes are distributed in different regions of the tongue is a common misconception.) Biochemical studies have identified the taste receptors responsible for the sense of umami as modified forms of mGluR4, mGluR1, and taste receptor type 1 (TAS1R1 + TAS1R3), all of which have been found in all regions of the tongue bearing taste buds. These receptors are also found in some regions of the duodenum. A 2009 review corroborated the acceptance of these receptors, stating, "Recent molecular biological studies have now identified strong candidates for umami receptors, including the heterodimer TAS1R1/TAS1R3, and truncated type 1 and 4 metabotropic glutamate receptors missing most of the N-terminal extracellular domain (taste-mGluR4 and truncated-mGluR1) and brain-mGluR4."
Receptors mGluR1 and mGluR4 are specific to glutamate whereas TAS1R1 + TAS1R3 are responsible for the synergism already described by Akira Kuninaka in 1957. However, the specific role of each type of receptor in taste bud cells remains unclear. They are G protein-coupled receptors (GPCRs) with similar signaling molecules that include G proteins beta-gamma, PLCB2 and PI3-mediated release of calcium (Ca2+) from intracellular stores. Calcium activates a so-called transient-receptor-potential cation channel TRPM5 that leads to membrane depolarization and the consequent release of ATP and secretion of neurotransmitters including serotonin.
Cells responding to umami taste stimuli do not possess typical synapses, but ATP conveys taste signals to gustatory nerves and in turn to the brain that interprets and identifies the taste quality via the gut-brain axis.
Consumers and safety
Umami has become popular as a flavor with food manufacturers trying to improve the taste of low sodium offerings. Chefs create "umami bombs", which are dishes made of several umami ingredients like fish sauce. Umami may account for the long-term formulation and popularity of ketchup. The United States Food and Drug Administration has designated the umami enhancer monosodium glutamate (MSG) as a safe ingredient. While some people identify themselves as sensitive to MSG, a study commissioned by the FDA was only able to identify transient, mild symptoms in a few of the subjects, and only when the MSG was consumed in unrealistically large quantities. There is also no apparent difference in sensitivity to umami when comparing Japanese and Americans.
Background of other taste categories
The five basic tastes (saltiness, sweetness, bitterness, sourness, and savoriness) are detected by specialized taste receptors on the tongue and palate epithelium. The number of taste categories in humans remains under research, with a sixth taste possibly including spicy or pungent.
| Biology and health sciences | Sensory nervous system | Biology |
62529 | https://en.wikipedia.org/wiki/Sea%20level | Sea level | Mean sea level (MSL, often shortened to sea level) is an average surface level of one or more among Earth's coastal bodies of water from which heights such as elevation may be measured. The global MSL is a type of vertical datum (a standardised geodetic datum) that is used, for example, as a chart datum in cartography and marine navigation, or, in aviation, as the standard sea level at which atmospheric pressure is measured to calibrate altitude and, consequently, aircraft flight levels. A common and relatively straightforward mean sea-level standard is instead a long-term average of tide gauge readings at a particular reference location.
The term above sea level generally refers to the height above mean sea level (AMSL). The term APSL means above present sea level, comparing sea levels in the past with the level today.
Earth's radius at sea level is 6,378.137 km (3,963.191 mi) at the equator. It is 6,356.752 km (3,949.903 mi) at the poles and 6,371.001 km (3,958.756 mi) on average. This flattened spheroid, combined with local gravity anomalies, defines the geoid of the Earth, which approximates the local mean sea level for locations in the open ocean. The geoid includes a significant depression in the Indian Ocean, whose surface dips as much as below the global mean sea level (excluding minor effects such as tides and currents).
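As a quick consistency check (a back-of-the-envelope calculation using only the radii quoted above, not a figure stated in this article), the flattening of this spheroid is

\[
f = \frac{a - b}{a} = \frac{6378.137 - 6356.752}{6378.137} \approx 0.00335 \approx \frac{1}{298.3},
\]

where a is the equatorial radius and b the polar radius; this agrees with the WGS84 reference ellipsoid's flattening of about 1/298.257.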
Measurement
Precise determination of a "mean sea level" is difficult because of the many factors that affect sea level. Instantaneous sea level varies substantially on several scales of time and space. This is because the sea is in constant motion, affected by the tides, wind, atmospheric pressure, local gravitational differences, temperature, salinity, and so forth. The mean sea level at a particular location may be calculated over an extended time period and used as a datum. For example, hourly measurements may be averaged over a full Metonic 19-year lunar cycle to determine the mean sea level at an official tide gauge.
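As a rough illustration of that averaging, the sketch below computes a local mean sea level from hourly tide-gauge readings. It is a minimal example: the function name, the requirement that gaps be handled by the caller, and the toy numbers are assumptions made for illustration, not part of any official tide-gauge procedure.

```python
from statistics import mean

def local_mean_sea_level(hourly_readings):
    """Average hourly tide-gauge heights (metres, relative to the gauge's
    benchmark) over the full record supplied, e.g. a 19-year Metonic cycle.

    hourly_readings is assumed to be an iterable of floats; missing hours
    should already have been removed or interpolated by the caller."""
    readings = list(hourly_readings)
    if not readings:
        raise ValueError("no tide-gauge readings supplied")
    return mean(readings)

# Toy data only; a real 19-year record would hold roughly 19 * 365.25 * 24 hourly values.
example = [2.31, 2.02, 1.87, 2.10, 2.45]
print(f"MSL relative to gauge benchmark: {local_mean_sea_level(example):.3f} m")
```

A real computation over a full Metonic cycle would involve on the order of 166,000 hourly values, together with careful handling of data gaps and any shifts in the gauge's benchmark.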
Still-water level or still-water sea level (SWL) is the level of the sea with motions such as wind waves averaged out.
Then MSL implies the SWL further averaged over a period of time such that changes due to, e.g., the tides, also have zero mean.
Global MSL refers to a spatial average over the entire ocean area, typically using large sets of tide gauges and/or satellite measurements.
One often measures the values of MSL with respect to the land; hence a change in relative MSL (or relative sea level) can result from a real change in sea level, or from a change in the height of the land on which the tide gauge operates, or both.
In the UK, the ordnance datum (the 0 metres height on UK maps) is the mean sea level measured at Newlyn in Cornwall between 1915 and 1921. Before 1921, the vertical datum was MSL at the Victoria Dock, Liverpool.
Since the times of the Russian Empire, in Russia and its other former parts, now independent states, the sea level is measured from the zero level of Kronstadt Sea-Gauge.
In Hong Kong, "mPD" is a surveying term meaning "metres above Principal Datum" and refers to height of above chart datum and below the average sea level.
In France, the Marégraphe in Marseilles has measured the sea level continuously since 1883 and offers the longest collated sea-level record. It is used as the official sea level for part of continental Europe and most of Africa. Spain uses the reference at Alicante to measure heights below or above sea level, while the European Vertical Reference System is calibrated to the Amsterdam Peil elevation, which dates back to the 1690s.
Satellite altimeters have been making precise measurements of sea level since the launch of TOPEX/Poseidon in 1992. A joint mission of NASA and CNES, TOPEX/Poseidon was followed by Jason-1 in 2001 and the Ocean Surface Topography Mission on the Jason-2 satellite in 2008.
Height above mean sea level
Height above mean sea level (AMSL) is the elevation (on the ground) or altitude (in the air) of an object, relative to a reference datum for mean sea level (MSL). It is also used in aviation, where some heights are recorded and reported with respect to mean sea level (contrast with flight level), and in the atmospheric sciences, and in land surveying. An alternative is to base height measurements on a reference ellipsoid approximating the entire Earth, which is what systems such as GPS do. In aviation, the reference ellipsoid known as WGS84 is increasingly used to define heights; however, differences up to exist between this ellipsoid height and local mean sea level. Another alternative is to use a geoid-based vertical datum such as NAVD88 and the global EGM96 (part of WGS84). Details vary in different countries.
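The height systems mentioned here are commonly related by the standard geodesy identity below; the symbols are chosen for this sketch rather than quoted from the article:

\[
h = H + N ,
\]

where h is the ellipsoidal height (for example, a GPS height relative to WGS84), H is the orthometric height above the geoid (approximately the height above mean sea level), and N is the geoid undulation, the local separation between the geoid and the ellipsoid.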
When referring to geographic features such as mountains, on a topographic map variations in elevation are shown by contour lines. A mountain's highest point or summit is typically illustrated with the AMSL height in metres, feet or both. In unusual cases where a land location is below sea level, such as Death Valley, California, the elevation AMSL is negative.
Difficulties in use
It is often necessary to compare the local height of the mean sea surface with a "level" reference surface, or geodetic datum, called the geoid. In the absence of external forces, the local mean sea level would coincide with this geoid surface, being an equipotential surface of the Earth's gravitational field which, in itself, does not conform to a simple sphere or ellipsoid and exhibits gravity anomalies such as those measured by NASA's GRACE satellites. In reality, the geoid surface is not directly observed, even as a long-term average, due to ocean currents, air pressure variations, temperature and salinity variations, etc. The location-dependent but time-persistent separation between local mean sea level and the geoid is referred to as (mean) ocean surface topography. It varies globally in a typical range of ±.
Dry land
Several terms are used to describe the changing relationships between sea level and dry land.
"relative" means change relative to a fixed point in the sediment pile.
"eustatic" refers to global changes in sea level relative to a fixed point, such as the centre of the earth, for example as a result of melting ice-caps.
"steric" refers to global changes in sea level due to thermal expansion and salinity variations.
"isostatic" refers to changes in the level of the land relative to a fixed point in the earth, possibly due to thermal buoyancy or tectonic effects, disregarding changes in the volume of water in the oceans.
The melting of glaciers at the end of ice ages results in isostatic post-glacial rebound, when land rises after the weight of ice is removed. Conversely, older volcanic islands experience relative sea level rise, due to isostatic subsidence from the weight of cooling volcanos. The subsidence of land due to the withdrawal of groundwater is another isostatic cause of relative sea level rise.
On planets that lack a liquid ocean, planetologists can calculate a "mean altitude" by averaging the heights of all points on the surface. This altitude, sometimes referred to as a "sea level" or zero-level elevation, serves equivalently as a reference for the height of planetary features.
Change
Local and eustatic
Local mean sea level (LMSL) is defined as the height of the sea with respect to a land benchmark, averaged over a period of time long enough that fluctuations caused by waves and tides are smoothed out, typically a year or more. One must adjust perceived changes in LMSL to account for vertical movements of the land, which can occur at rates similar to sea level changes (millimetres per year).
Some land movements occur because of isostatic adjustment to the melting of ice sheets at the end of the last ice age. The weight of the ice sheet depresses the underlying land, and when the ice melts away the land slowly rebounds. Changes in ground-based ice volume also affect local and regional sea levels by the readjustment of the geoid and true polar wander. Atmospheric pressure, ocean currents and local ocean temperature changes can affect LMSL as well.
Eustatic sea level change (global as opposed to local change) is due to change in either the volume of water in the world's oceans or the volume of the oceanic basins. Two major mechanisms are currently causing eustatic sea level rise. First, shrinking land ice, such as mountain glaciers and polar ice sheets, is releasing water into the oceans. Second, as ocean temperatures rise, the warmer water expands.
Short-term and periodic changes
Many factors can produce short-term changes in sea level, typically within a few metres, in timeframes ranging from minutes to months:
Recent changes
Aviation
Pilots can estimate height above sea level with an altimeter set to a defined barometric pressure. Generally, the pressure used to set the altimeter is the barometric pressure that would exist at MSL in the region being flown over. This pressure is referred to as either QNH or "altimeter" and is transmitted to the pilot by radio from air traffic control (ATC) or an automatic terminal information service (ATIS). Since the terrain elevation is also referenced to MSL, the pilot can estimate height above ground by subtracting the terrain altitude from the altimeter reading. Aviation charts are divided into boxes and the maximum terrain altitude from MSL in each box is clearly indicated. Once above the transition altitude, the altimeter is set to the international standard atmosphere (ISA) pressure at MSL which is 1013.25 hPa or 29.92 inHg.
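A minimal sketch of the subtraction described above, assuming the altimeter has the local QNH set so that it reads altitude above mean sea level; the function name and the sample figures are invented for illustration and have no operational use.

```python
def height_above_ground(indicated_altitude_ft, terrain_elevation_ft):
    """Estimate height above ground level (AGL).

    indicated_altitude_ft: altimeter reading with the local QNH set,
        i.e. an altitude above mean sea level.
    terrain_elevation_ft: charted terrain elevation, also above mean sea level.
    """
    return indicated_altitude_ft - terrain_elevation_ft

# Hypothetical numbers: indicating 4,500 ft AMSL over terrain charted at 1,200 ft AMSL.
print(height_above_ground(4500, 1200))  # 3300 ft above the ground
```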
| Physical sciences | Oceanography | null |
62603 | https://en.wikipedia.org/wiki/Domain%20%28biology%29 | Domain (biology) | In biological taxonomy, a domain ( or ) (Latin: regio), also dominion, superkingdom, realm, or empire, is the highest taxonomic rank of all organisms taken together. It was introduced in the three-domain system of taxonomy devised by Carl Woese, Otto Kandler and Mark Wheelis in 1990.
According to the domain system, the tree of life consists of either three domains, Archaea, Bacteria, and Eukarya, or two domains, Archaea and Bacteria, with Eukarya included in Archaea. In the three-domain model, the first two are prokaryotes, single-celled microorganisms without a membrane-bound nucleus. All organisms that have a cell nucleus and other membrane-bound organelles are included in Eukarya and called eukaryotes.
Non-cellular life, most notably the viruses, is not included in this system. Alternatives to the three-domain system include the earlier two-empire system (with the empires Prokaryota and Eukaryota), and the eocyte hypothesis (with two domains of Bacteria and Archaea, with Eukarya included as a branch of Archaea).
Terminology
The term domain was proposed by Carl Woese, Otto Kandler, and Mark Wheelis (1990) in a three-domain system. This term represents a synonym for the category of dominion (Lat. dominium), introduced by Moore in 1974.
Development of the domain system
Carl Linnaeus made the classification "domain" popular in the famous taxonomy system he created in the middle of the eighteenth century. This system was further improved by the studies of Charles Darwin later on but could not classify bacteria easily, as they have very few observable features to compare to the other domains.
Carl Woese made a revolutionary breakthrough when, in 1977, he compared the nucleotide sequences of the 16S ribosomal RNA and discovered that the rank "domain" contained three branches, not two as scientists had previously thought. Initially, due to their physical similarities, Archaea and Bacteria were classified together and called "archaebacteria". However, scientists now know that these two domains are far from similar and are internally distinct from each other.
Characteristics of the three domains
Each of these three domains contains unique ribosomal RNA. This forms the basis of the three-domain system. While the presence of a nuclear membrane differentiates the Eukarya from the Archaea and Bacteria, both of which lack a nuclear envelope, the Archaea and Bacteria are distinct from each other due to differences in the biochemistry of their cell membranes and RNA markers.
Archaea
Archaea are prokaryotic cells, typically characterized by membrane lipids that are branched hydrocarbon chains attached to glycerol by ether linkages. The presence of these ether linkages in Archaea adds to their ability to withstand extreme temperatures and highly acidic conditions, but many archaea live in mild environments. Halophiles (organisms that thrive in highly salty environments) and hyperthermophiles (organisms that thrive in extremely hot environments) are examples of Archaea.
Archaea are relatively small. They range from 0.1 μm to 15 μm in diameter and up to 200 μm long, about the size of bacteria and the mitochondria found in eukaryotic cells. Members of the genus Thermoplasma are the smallest Archaea.
Bacteria
Cyanobacteria and mycoplasmas are two examples of bacteria.
Even though bacteria are prokaryotic cells like Archaea, their cell membranes are instead made of phospholipid bilayers, with none of the ether linkages that Archaea have. Internally, bacteria have different RNA structures in their ribosomes, hence they are grouped into a different category. In the two- and three-domain systems, this puts them into a separate domain.
There is a great deal of diversity in the domain Bacteria. That diversity is further confounded by the exchange of genes between different bacterial lineages. The occurrence of duplicate genes between otherwise distantly-related bacteria makes it nearly impossible to distinguish bacterial species, count the bacterial species on the Earth, or organize them into a tree-like structure (unless the structure includes cross-connections between branches, making it a "network" instead of a "tree").
Eukarya
Members of the domain Eukarya – called eukaryotes – have membrane-bound organelles (including a nucleus containing genetic material) and are represented by five kingdoms: Plantae, Protozoa, Animalia, Chromista, and Fungi.
Exclusion of viruses and prions
The three-domain system includes no form of non-cellular life. Stefan Luketa proposed a five-dominion system in 2012, adding Prionobiota (acellular and without nucleic acid) and Virusobiota (acellular but with nucleic acid) to the traditional three domains.
Alternative classifications
Alternative classifications of life include:
The two-empire system or superdomain system, proposed by Mayr (1998), with top-level groupings of Prokaryota (or Monera) and Eukaryota.
The eocyte hypothesis, proposed by Lake et al. (1984), which posits two domains, Bacteria and Archaea, with Eukaryota included as a subordinate clade branching from Archaea.
| Biology and health sciences | Genetics and taxonomy | null |
62641 | https://en.wikipedia.org/wiki/Vector%20field | Vector field | In vector calculus and physics, a vector field is an assignment of a vector to each point in a space, most commonly Euclidean space R^n. A vector field on a plane can be visualized as a collection of arrows with given magnitudes and directions, each attached to a point on the plane. Vector fields are often used to model, for example, the speed and direction of a moving fluid throughout three dimensional space, such as the wind, or the strength and direction of some force, such as the magnetic or gravitational force, as it changes from one point to another point.
The elements of differential and integral calculus extend naturally to vector fields. When a vector field represents force, the line integral of a vector field represents the work done by a force moving along a path, and under this interpretation conservation of energy is exhibited as a special case of the fundamental theorem of calculus. Vector fields can usefully be thought of as representing the velocity of a moving flow in space, and this physical intuition leads to notions such as the divergence (which represents the rate of change of volume of a flow) and curl (which represents the rotation of a flow).
A vector field is a special case of a vector-valued function, whose domain's dimension has no relation to the dimension of its range; for example, the position vector of a space curve is defined only for a smaller subset of the ambient space.
Likewise, given n coordinates, a vector field on a domain in n-dimensional Euclidean space can be represented as a vector-valued function that associates an n-tuple of real numbers to each point of the domain. This representation of a vector field depends on the coordinate system, and there is a well-defined transformation law (covariance and contravariance of vectors) in passing from one coordinate system to the other.
Vector fields are often discussed on open subsets of Euclidean space, but also make sense on other subsets such as surfaces, where they associate an arrow tangent to the surface at each point (a tangent vector).
More generally, vector fields are defined on differentiable manifolds, which are spaces that look like Euclidean space on small scales, but may have more complicated structure on larger scales. In this setting, a vector field gives a tangent vector at each point of the manifold (that is, a section of the tangent bundle to the manifold). Vector fields are one kind of tensor field.
Definition
Vector fields on subsets of Euclidean space
Given a subset S of R^n, a vector field is represented by a vector-valued function V: S → R^n in standard Cartesian coordinates (x_1, ..., x_n). If each component of V is continuous, then V is a continuous vector field. It is common to focus on smooth vector fields, meaning that each component is a smooth function (differentiable any number of times). A vector field can be visualized as assigning a vector to individual points within an n-dimensional space.
One standard notation is to write ∂/∂x_1, ..., ∂/∂x_n for the unit vectors in the coordinate directions. In these terms, every smooth vector field V on an open subset S of R^n can be written, for some smooth functions V_1, ..., V_n on S, in the form shown below. The reason for this notation is that a vector field determines a linear map from the space of smooth functions to itself, V: C^∞(S) → C^∞(S), given by differentiating in the direction of the vector field.
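With this notation (V_1, ..., V_n as above, and f an arbitrary smooth function on S), the expansion of the field and its action as a differential operator are:
\[
V = \sum_{i=1}^{n} V_i \,\frac{\partial}{\partial x_i},
\qquad
V(f) = \sum_{i=1}^{n} V_i \,\frac{\partial f}{\partial x_i}.
\]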
Example: The vector field −x_2 ∂/∂x_1 + x_1 ∂/∂x_2 describes a counterclockwise rotation around the origin in R^2. To show that the function x_1^2 + x_2^2 is rotationally invariant, compute its derivative along the field, as below.
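The check is one line (the field and the function are those named just above):
\[
\Bigl(-x_2\,\frac{\partial}{\partial x_1} + x_1\,\frac{\partial}{\partial x_2}\Bigr)\bigl(x_1^2 + x_2^2\bigr)
= -x_2\,(2x_1) + x_1\,(2x_2) = 0,
\]
so x_1^2 + x_2^2 is constant along the rotational flow.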
Given vector fields V, W defined on S and a smooth function f defined on S, the operations of scalar multiplication and vector addition, (fV)(p) := f(p)V(p) and (V + W)(p) := V(p) + W(p),
make the smooth vector fields into a module over the ring of smooth functions, where multiplication of functions is defined pointwise.
Coordinate transformation law
In physics, a vector is additionally distinguished by how its coordinates change when one measures the same vector with respect to a different background coordinate system. The transformation properties of vectors distinguish a vector as a geometrically distinct entity from a simple list of scalars, or from a covector.
Thus, suppose that (x_1, ..., x_n) is a choice of Cartesian coordinates, in terms of which the components of the vector V are (V_{1,x}, ..., V_{n,x}),
and suppose that (y_1, ..., y_n) are n functions of the x_i defining a different coordinate system. Then the components of the vector V in the new coordinates are required to satisfy the transformation law given below.
Such a transformation law is called contravariant. A similar transformation law characterizes vector fields in physics: specifically, a vector field is a specification of n functions in each coordinate system subject to the same transformation law relating the different coordinate systems.
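Written out with the component notation introduced above, the contravariant transformation law is:
\[
V_{i,y} \;=\; \sum_{j=1}^{n} \frac{\partial y_i}{\partial x_j}\, V_{j,x}, \qquad i = 1, \dots, n.
\]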
Vector fields are thus contrasted with scalar fields, which associate a number or scalar to every point in space, and are also contrasted with simple lists of scalar fields, which do not transform under coordinate changes.
Vector fields on manifolds
Given a differentiable manifold M, a vector field on M is an assignment of a tangent vector to each point in M. More precisely, a vector field F is a mapping from M into the tangent bundle TM so that p ∘ F is the identity mapping,
where p denotes the projection from TM to M. In other words, a vector field is a section of the tangent bundle.
An alternative definition: A smooth vector field X on a manifold M is a linear map X: C^∞(M) → C^∞(M) such that X is a derivation: X(fg) = fX(g) + X(f)g for all f, g ∈ C^∞(M).
If the manifold M is smooth or analytic—that is, the change of coordinates is smooth (analytic)—then one can make sense of the notion of smooth (analytic) vector fields. The collection of all smooth vector fields on a smooth manifold M is often denoted by Γ(TM) or C^∞(M, TM) (especially when thinking of vector fields as sections); the collection of all smooth vector fields is also denoted by 𝔛(M) (a fraktur "X").
Examples
A vector field for the movement of air on Earth will associate for every point on the surface of the Earth a vector with the wind speed and direction for that point. This can be drawn using arrows to represent the wind; the length (magnitude) of the arrow will be an indication of the wind speed. A "high" on the usual barometric pressure map would then act as a source (arrows pointing away), and a "low" would be a sink (arrows pointing towards), since air tends to move from high pressure areas to low pressure areas.
Velocity field of a moving fluid. In this case, a velocity vector is associated to each point in the fluid.
Streamlines, streaklines and pathlines are 3 types of lines that can be made from (time-dependent) vector fields. They are:
streaklines: the line produced by particles passing through a specific fixed point over various times
pathlines: showing the path that a given particle (of zero mass) would follow.
streamlines (or fieldlines): the path of a particle influenced by the instantaneous field (i.e., the path of a particle if the field is held fixed).
Magnetic fields. The fieldlines can be revealed using small iron filings.
Maxwell's equations allow us to use a given set of initial and boundary conditions to deduce, for every point in Euclidean space, a magnitude and direction for the force experienced by a charged test particle at that point; the resulting vector field is the electric field.
A gravitational field generated by any massive object is also a vector field. For example, the gravitational field vectors for a spherically symmetric body would all point towards the sphere's center with the magnitude of the vectors reducing as radial distance from the body increases.
Gradient field in Euclidean spaces
Vector fields can be constructed out of scalar fields using the gradient operator (denoted by the del: ∇).
A vector field V defined on an open set S is called a gradient field or a conservative field if there exists a real-valued function (a scalar field) f on S such that V = ∇f.
The associated flow is called the gradient flow, and is used in the method of gradient descent.
The path integral along any closed curve γ (γ(0) = γ(1)) in a conservative field is zero, as the identity below shows.
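Using V = ∇f as above, the vanishing of the closed-path integral follows from the gradient theorem (the fundamental theorem of calculus for line integrals):
\[
\oint_{\gamma} V(\mathbf{x}) \cdot \mathrm{d}\mathbf{x}
= \oint_{\gamma} \nabla f(\mathbf{x}) \cdot \mathrm{d}\mathbf{x}
= f(\gamma(1)) - f(\gamma(0)) = 0.
\]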
Central field in Euclidean spaces
A C^∞-vector field V over R^n \ {0} is called a central field if V(T(p)) = T(V(p)) for every orthogonal transformation T,
where T ranges over O(n, R), the orthogonal group. We say central fields are invariant under orthogonal transformations around 0.
The point 0 is called the center of the field.
Since orthogonal transformations are actually rotations and reflections, the invariance conditions mean that vectors of a central field are always directed towards, or away from, 0; this is an alternate (and simpler) definition. A central field is always a gradient field, since defining it on one semiaxis and integrating gives an antigradient.
Operations on vector fields
Line integral
A common technique in physics is to integrate a vector field along a curve, also called determining its line integral. Intuitively this is summing up all vector components in line with the tangents to the curve, expressed as their scalar products. For example, given a particle in a force field (e.g. gravitation), where each vector at some point in space represents the force acting there on the particle, the line integral along a certain path is the work done on the particle, when it travels along this path. Intuitively, it is the sum of the scalar products of the force vector and the small tangent vector in each point along the curve.
The line integral is constructed analogously to the Riemann integral and it exists if the curve is rectifiable (has finite length) and the vector field is continuous.
Given a vector field V and a curve γ, parametrized by t in [a, b] (where a and b are real numbers), the line integral is defined as follows.
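With γ′ denoting the derivative of the parametrization (notation introduced here), the line integral is:
\[
\int_{\gamma} V(\mathbf{x}) \cdot \mathrm{d}\mathbf{x}
= \int_a^b V\bigl(\gamma(t)\bigr) \cdot \gamma'(t)\,\mathrm{d}t.
\]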
To show vector field topology one can use line integral convolution.
Divergence
The divergence of a vector field on Euclidean space is a function (or scalar field). In three dimensions, the divergence is defined by the formula below.
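In Cartesian coordinates, writing the field as F = (F_1, F_2, F_3) (component names chosen here for concreteness), it is:
\[
\operatorname{div}\mathbf{F} = \nabla\cdot\mathbf{F}
= \frac{\partial F_1}{\partial x} + \frac{\partial F_2}{\partial y} + \frac{\partial F_3}{\partial z},
\]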
with the obvious generalization to arbitrary dimensions. The divergence at a point represents the degree to which a small volume around the point is a source or a sink for the vector flow, a result which is made precise by the divergence theorem.
The divergence can also be defined on a Riemannian manifold, that is, a manifold with a Riemannian metric that measures the length of vectors.
Curl in three dimensions
The curl is an operation which takes a vector field and produces another vector field. The curl is defined only in three dimensions, but some properties of the curl can be captured in higher dimensions with the exterior derivative. In three dimensions, it is defined by the formula below.
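Again writing F = (F_1, F_2, F_3) in Cartesian coordinates (names chosen here), the curl is:
\[
\operatorname{curl}\mathbf{F} = \nabla\times\mathbf{F}
= \left(\frac{\partial F_3}{\partial y} - \frac{\partial F_2}{\partial z},\;
\frac{\partial F_1}{\partial z} - \frac{\partial F_3}{\partial x},\;
\frac{\partial F_2}{\partial x} - \frac{\partial F_1}{\partial y}\right).
\]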
The curl measures the density of the angular momentum of the vector flow at a point, that is, the amount to which the flow circulates around a fixed axis. This intuitive description is made precise by Stokes' theorem.
Index of a vector field
The index of a vector field is an integer that helps describe its behaviour around an isolated zero (i.e., an isolated singularity of the field). In the plane, the index takes the value −1 at a saddle singularity but +1 at a source or sink singularity.
Let n be the dimension of the manifold on which the vector field is defined. Take a closed surface (homeomorphic to the (n−1)-sphere) S around the zero, so that no other zeros lie in the interior of S. A map from this sphere to a unit sphere of dimension n − 1 can be constructed by dividing each vector on this sphere by its length to form a unit length vector, which is a point on the unit sphere S^(n−1). This defines a continuous map from S to S^(n−1). The index of the vector field at the point is the degree of this map. It can be shown that this integer does not depend on the choice of S, and therefore depends only on the vector field itself.
The index is not defined at any non-singular point (i.e., a point where the vector is non-zero). It is equal to +1 around a source, and more generally equal to (−1)^k around a saddle that has k contracting dimensions and n−k expanding dimensions.
The index of the vector field as a whole is defined when it has just finitely many zeroes. In this case, all zeroes are isolated, and the index of the vector field is defined to be the sum of the indices at all zeroes.
For an ordinary (2-dimensional) sphere in three-dimensional space, it can be shown that the index of any vector field on the sphere must be 2. This shows that every such vector field must have a zero. This implies the hairy ball theorem.
For a vector field on a compact manifold with finitely many zeroes, the Poincaré-Hopf theorem states that the vector field’s index is the manifold’s Euler characteristic.
Physical intuition
Michael Faraday, in his concept of lines of force, emphasized that the field itself should be an object of study, which it has become throughout physics in the form of field theory.
In addition to the magnetic field, other phenomena that were modeled by Faraday include the electrical field and light field.
In recent decades many phenomenological formulations of irreversible dynamics and evolution equations in physics, from the mechanics of complex fluids and solids to chemical kinetics and quantum thermodynamics, have converged towards the geometric idea of "steepest entropy ascent" or "gradient flow" as a consistent universal modeling framework that guarantees compatibility with the second law of thermodynamics and extends well-known near-equilibrium results such as Onsager reciprocity to the far-nonequilibrium realm.
Flow curves
Consider the flow of a fluid through a region of space. At any given time, any point of the fluid has a particular velocity associated with it; thus there is a vector field associated to any flow. The converse is also true: it is possible to associate a flow to a vector field having that vector field as its velocity.
Given a vector field V defined on S, one defines curves γ(t) on S such that for each t in an interval I, γ′(t) = V(γ(t)).
By the Picard–Lindelöf theorem, if V is Lipschitz continuous there is a unique C^1-curve γ_x for each point x in S so that, for some ε > 0, γ_x(0) = x and γ′_x(t) = V(γ_x(t)) for t ∈ (−ε, +ε).
The curves γ_x are called integral curves or trajectories (or less commonly, flow lines) of the vector field V and partition S into equivalence classes. It is not always possible to extend the interval (−ε, +ε) to the whole real number line. The flow may for example reach the edge of S in a finite time.
In two or three dimensions one can visualize the vector field as giving rise to a flow on S. If we drop a particle into this flow at a point p it will move along the curve γ_p in the flow depending on the initial point p. If p is a stationary point of V (i.e., the vector field is equal to the zero vector at the point p), then the particle will remain at p.
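A minimal numerical sketch (not from the article; the field, the starting point, and the solver settings are illustrative choices) uses SciPy's general-purpose ODE solver to trace an integral curve of the rotation field −y ∂/∂x + x ∂/∂y, whose flow lines are circles about the origin:

    import numpy as np
    from scipy.integrate import solve_ivp

    # Rotation field V(x, y) = (-y, x); its integral curves are circles about the origin.
    def V(t, p):
        x, y = p
        return [-y, x]

    # Trace the flow starting at (1, 0) for one full revolution.
    t_eval = np.linspace(0.0, 2.0 * np.pi, 200)
    sol = solve_ivp(V, (0.0, 2.0 * np.pi), [1.0, 0.0], t_eval=t_eval, rtol=1e-9)

    # The radius should stay (numerically) constant along the flow, consistent with
    # the earlier observation that x^2 + y^2 is invariant under this field.
    radii = np.hypot(sol.y[0], sol.y[1])
    print(radii.min(), radii.max())  # both close to 1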
Typical applications are pathlines in fluids, geodesic flow, and one-parameter subgroups and the exponential map in Lie groups.
Complete vector fields
By definition, a vector field on M is called complete if each of its flow curves exists for all time. In particular, compactly supported vector fields on a manifold are complete. If X is a complete vector field on M, then the one-parameter group of diffeomorphisms generated by the flow along X exists for all time; it is described by a smooth mapping R × M → M.
On a compact manifold without boundary, every smooth vector field is complete. An example of an incomplete vector field on the real line is given by V(x) = x^2 ∂/∂x. For, the differential equation x′(t) = x^2, with initial condition x(0) = x_0, has as its unique solution x(t) = x_0/(1 − t x_0) if x_0 ≠ 0 (and x(t) = 0 for all t if x_0 = 0). Hence for x_0 ≠ 0, x(t) is undefined at t = 1/x_0, so it cannot be defined for all values of t.
The Lie bracket
The flows associated to two vector fields need not commute with each other. Their failure to commute is described by the Lie bracket of two vector fields, which is again a vector field. The Lie bracket has a simple definition in terms of the action of vector fields on smooth functions f, given below.
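Writing the two fields as X and Y (the customary names, chosen here), the bracket acts on a smooth function f by:
\[
[X, Y](f) = X\bigl(Y(f)\bigr) - Y\bigl(X(f)\bigr).
\]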
f-relatedness
Given a smooth function between manifolds, f: M → N, the derivative is an induced map on tangent bundles, df: TM → TN. Given vector fields V on M and W on N, we say that W is f-related to V if the equation W ∘ f = df ∘ V holds.
If V_i is f-related to W_i, i = 1, 2, then the Lie bracket [V_1, V_2] is f-related to [W_1, W_2].
Generalizations
Replacing vectors by p-vectors (pth exterior power of vectors) yields p-vector fields; taking the dual space and exterior powers yields differential k-forms, and combining these yields general tensor fields.
Algebraically, vector fields can be characterized as derivations of the algebra of smooth functions on the manifold, which leads to defining a vector field on a commutative algebra as a derivation on the algebra, which is developed in the theory of differential calculus over commutative algebras.
| Mathematics | Multivariable and vector calculus | null |
62692 | https://en.wikipedia.org/wiki/Archean | Archean | The Archean Eon (also spelled Archaean or Archæan), in older sources sometimes called the Archaeozoic, is the second of the four geologic eons of Earth's history, preceded by the Hadean Eon and followed by the Proterozoic. The Archean represents the time period from about 4,031 to 2,500 million years ago (Ma). The Late Heavy Bombardment is hypothesized to overlap with the beginning of the Archean. The Huronian glaciation occurred at the end of the eon.
The Earth during the Archean was mostly a water world: there was continental crust, but much of it was under an ocean deeper than today's oceans. Except for some rare relict crystals, today's oldest continental crust dates back to the Archean. Much of the geological detail of the Archean has been destroyed by subsequent activity. The Earth's atmosphere was also vastly different in composition from today's: the prebiotic atmosphere was a reducing atmosphere rich in methane and lacking free oxygen.
The earliest known life, mostly represented by shallow-water microbial mats called stromatolites, started in the Archean and remained simple prokaryotes (archaea and bacteria) throughout the eon. The earliest photosynthetic processes, especially those by early cyanobacteria, appeared in the mid/late Archean and led to a permanent chemical change in the ocean and the atmosphere after the Archean.
Etymology and changes in classification
The word Archean is derived from the Greek word arkhē (ἀρχή), meaning 'beginning, origin'. The Pre-Cambrian had been believed to be without life (azoic); however, fossils were later found in deposits that had been judged to belong to the Azoic age. Before the Hadean Eon was recognized, the Archean spanned Earth's early history from its formation about 4,540 million years ago until 2,500 million years ago.
Instead of being based on stratigraphy, the beginning and end of the Archean Eon are defined chronometrically. The eon's lower boundary or starting point of 4,031±3 million years ago, which is the age of the oldest known intact rock formations on Earth, is officially recognized by the International Commission on Stratigraphy. Evidence of rocks from the preceding Hadean Eon is therefore restricted by definition to non-rock and non-terrestrial sources such as individual mineral grains and lunar samples.
Geology
When the Archean began, the Earth's heat flow was nearly three times as high as it is today, and it was still twice the current level at the transition from the Archean to the Proterozoic (2,500 Ma). The extra heat was partly remnant heat from planetary accretion and from the formation of the metallic core, and partly heat produced by the decay of radioactive elements. As a result, the Earth's mantle was significantly hotter than today.
Although a few mineral grains have survived from the Hadean, the oldest rock formations exposed on the surface of the Earth are Archean. Archean rocks are found in Greenland, Siberia, the Canadian Shield, Montana, Wyoming (exposed parts of the Wyoming Craton), Minnesota (Minnesota River Valley), the Baltic Shield, the Rhodope Massif, Scotland, India, Brazil, western Australia, and southern Africa. Granitic rocks predominate throughout the crystalline remnants of the surviving Archean crust. These include great melt sheets and voluminous plutonic masses of granite, diorite, layered intrusions, anorthosites and monzonites known as sanukitoids. Archean rocks are often heavily metamorphosed deep-water sediments, such as graywackes, mudstones, volcanic sediments, and banded iron formations. Volcanic activity was considerably higher than today, with numerous lava eruptions, including unusual types such as komatiite. Carbonate rocks are rare, indicating that the oceans were more acidic, due to dissolved carbon dioxide, than during the Proterozoic. Greenstone belts are typical Archean formations, consisting of alternating units of metamorphosed mafic igneous and sedimentary rocks, including Archean felsic volcanic rocks. The metamorphosed igneous rocks were derived from volcanic island arcs, while the metamorphosed sediments represent deep-sea sediments eroded from the neighboring island arcs and deposited in a forearc basin. Greenstone belts, which include both types of metamorphosed rock, represent sutures between the protocontinents.
Plate tectonics likely started vigorously in the Hadean, but slowed down in the Archean. The slowing of plate tectonics was probably due to an increase in the viscosity of the mantle due to outgassing of its water. Plate tectonics likely produced large amounts of continental crust, but the deep oceans of the Archean probably covered the continents entirely. Only at the end of the Archean did the continents likely emerge from the ocean. The emergence of continents towards the end of the Archaean initiated continental weathering that left its mark on the oxygen isotope record by enriching seawater with isotopically light oxygen.
Due to recycling and metamorphosis of the Archean crust, there is a lack of extensive geological evidence for specific continents. One hypothesis is that rocks that are now in India, western Australia, and southern Africa formed a continent called Ur as of 3,100 Ma. Another hypothesis, which conflicts with the first, is that rocks from western Australia and southern Africa were assembled in a continent called Vaalbara as far back as 3,600 Ma. Archean rock makes up only about 8% of Earth's present-day continental crust; the rest of the Archean continents have been recycled.
By the Neoarchean, plate tectonic activity may have been similar to that of the modern Earth, although there was a significantly greater occurrence of slab detachment resulting from a hotter mantle, rheologically weaker plates, and increased tensile stresses on subducting plates due to their crustal material metamorphosing from basalt into eclogite as they sank. There are well-preserved sedimentary basins, and evidence of volcanic arcs, intracontinental rifts, continent-continent collisions and widespread globe-spanning orogenic events suggesting the assembly and destruction of one and perhaps several supercontinents. Evidence from banded iron formations, chert beds, chemical sediments and pillow basalts demonstrates that liquid water was prevalent and deep oceanic basins already existed.
Asteroid impacts were frequent in the early Archean. Evidence from spherule layers suggests that impacts continued into the later Archean, at an average rate of about one impactor roughly the size of the Chicxulub impactor every 15 million years. These impacts would have been an important oxygen sink and would have caused drastic fluctuations of atmospheric oxygen levels.
Environment
The Archean atmosphere is thought to have almost completely lacked free oxygen; oxygen levels were less than 0.001% of their present atmospheric level, with some analyses suggesting they were as low as 0.00001% of modern levels. However, transient episodes of heightened oxygen concentrations are known from this eon around 2,980–2,960 Ma, 2,700 Ma, and 2,501 Ma. The pulses of increased oxygenation at 2,700 and 2,501 Ma have both been considered by some as potential start points of the Great Oxygenation Event, which most scholars consider to have begun in the Palaeoproterozoic. Furthermore, oases of relatively high oxygen levels existed in some nearshore shallow marine settings by the Mesoarchean. The ocean was broadly reducing and lacked any persistent redoxcline, a water layer between oxygenated and anoxic layers with a strong redox gradient, which would become a feature in later, more oxic oceans. Despite the lack of free oxygen, the rate of organic carbon burial appears to have been roughly the same as in the present. Due to extremely low oxygen levels, sulphate was rare in the Archean ocean, and sulphides were produced primarily through reduction of organically sourced sulphite or through mineralisation of compounds containing reduced sulphur. The Archean ocean was enriched in heavier oxygen isotopes relative to the modern ocean, though δ18O values decreased to levels comparable to those of modern oceans over the course of the later part of the eon as a result of increased continental weathering.
Astronomers think that the Sun had about 75–80 percent of its present luminosity, yet temperatures on Earth appear to have been near modern levels only 500 million years after Earth's formation (the faint young Sun paradox). The presence of liquid water is evidenced by certain highly deformed gneisses produced by metamorphism of sedimentary protoliths. The moderate temperatures may reflect the presence of greater amounts of greenhouse gases than later in the Earth's history. Extensive abiotic denitrification took place on the Archean Earth, pumping the greenhouse gas nitrous oxide into the atmosphere. Alternatively, Earth's albedo may have been lower at the time, due to less land area and cloud cover.
Early life
The processes that gave rise to life on Earth are not completely understood, but there is substantial evidence that life came into existence either near the end of the Hadean Eon or early in the Archean Eon.
The earliest evidence for life on Earth is graphite of biogenic origin found in 3.7 billion–year-old metasedimentary rocks discovered in Western Greenland.
The earliest identifiable fossils consist of stromatolites, which are microbial mats formed in shallow water by cyanobacteria. The earliest stromatolites are found in 3.48 billion-year-old sandstone discovered in Western Australia. Stromatolites are found throughout the Archean and become common late in the Archean. Cyanobacteria were instrumental in creating free oxygen in the atmosphere.
Further evidence for early life is found in 3.47 billion-year-old baryte, in the Warrawoona Group of Western Australia. This mineral shows sulfur fractionation of as much as 21.1%, which is evidence of sulfate-reducing bacteria that metabolize sulfur-32 more readily than sulfur-34.
Evidence of life in the Late Hadean is more controversial. In 2015, biogenic carbon was detected in zircons dated to 4.1 billion years ago, but this evidence is preliminary and needs validation.
Earth was very hostile to life before 4,300 to 4,200 Ma, so life as we know it would have been challenged by the environmental conditions that prevailed before the Archean Eon. While life could have arisen before the Archean, the conditions necessary to sustain it likely did not exist until the Archean Eon.
Life in the Archean was limited to simple single-celled organisms (lacking nuclei), called prokaryotes. In addition to the domain Bacteria, microfossils of the domain Archaea have also been identified. There are no known eukaryotic fossils from the earliest Archean, though they might have evolved during the Archean without leaving any. Fossil steranes, indicative of eukaryotes, have been reported from Archean strata but were shown to derive from contamination with younger organic matter. No fossil evidence has been discovered for ultramicroscopic intracellular replicators such as viruses.
Fossilized microbes from terrestrial microbial mats show that life was already established on land 3.22 billion years ago.
| Physical sciences | Geological timescale | Earth science |
62694 | https://en.wikipedia.org/wiki/Stromatolite | Stromatolite | Stromatolites or stromatoliths are layered sedimentary formations (microbialite) that are created mainly by photosynthetic microorganisms such as cyanobacteria, sulfate-reducing bacteria, and Pseudomonadota (formerly proteobacteria). These microorganisms produce adhesive compounds that cement sand and other rocky materials to form mineral "microbial mats". In turn, these mats build up layer by layer, growing gradually over time.
This process generates the characteristic lamination of stromatolites, a feature that is hard to interpret in terms of its temporal and environmental significance. Different styles of stromatolite lamination have been described, which can be studied through microscopic and mathematical methods. A stromatolite may grow to a meter or more. Fossilized stromatolites provide important records of some of the most ancient life. As of the Holocene, living forms are rare.
Definition
Stromatolites are layered, biochemical, accretionary structures formed in shallow water by the trapping, binding and cementation of sedimentary grains in biofilms (specifically microbial mats), through the action of certain microbial lifeforms, especially cyanobacteria.
Ancient stromatolites
Morphology
Fossilized stromatolites exhibit a variety of forms and structures, or morphologies, including conical, stratiform, domal, columnar, and branching types. Stromatolites occur widely in the fossil record of the Precambrian but are rare today. Very few Archean stromatolites contain fossilized microbes, but fossilized microbes are sometimes abundant in Proterozoic stromatolites.
While features of some ancient apparent stromatolites are suggestive of biological activity, others possess features that are more consistent with abiotic (non-biological) precipitation. Finding reliable ways to distinguish between biologically formed and abiotic stromatolites is an active area of research in geology. Multiple morphologies of stromatolites may exist in a single local or geological stratum, depending on specific conditions at the time of their formation, such as water depth.
Most stromatolites are spongiostromate in texture, having no recognisable microstructure or cellular remains. A minority are porostromate, having recognisable microstructure; these are mostly unknown from the Precambrian but persist throughout the Palaeozoic and Mesozoic. Since the Eocene, porostromate stromatolites are known only from freshwater settings.
Fossil record
Some Archean rock formations show macroscopic similarity to modern microbial structures, leading to the inference that these structures represent evidence of ancient life, namely stromatolites. However, others regard these patterns as being the result of natural material deposition or some other abiogenic mechanism. Scientists have argued for a biological origin of stromatolites due to the presence of organic globule clusters within the thin layers of the stromatolites, of aragonite nanocrystals (both features of current stromatolites), and of other microstructures in older stromatolites that parallel those in younger stromatolites that show strong indications of biological origin.
Stromatolites are a major constituent of the fossil record of the first forms of life on Earth. They peaked about 1.25 billion years ago (Ga) and subsequently declined in abundance and diversity, so that by the start of the Cambrian they had fallen to 20% of their peak. The most widely supported explanation is that stromatolite builders fell victim to grazing creatures (the Cambrian substrate revolution); this theory implies that sufficiently complex organisms were common around 1 Ga. Another hypothesis is that protozoa such as foraminifera were responsible for the decline, favoring formation of thrombolites over stromatolites through microscopic bioturbation.
Proterozoic stromatolite microfossils (preserved by permineralization in silica) include cyanobacteria and possibly some forms of the eukaryote chlorophytes (that is, green algae). One genus of stromatolite very common in the geologic record is Collenia.
The connection between grazer and stromatolite abundance is well documented in the younger Ordovician evolutionary radiation; stromatolite abundance also increased after the Late Ordovician mass extinction and Permian–Triassic extinction event decimated marine animals, falling back to earlier levels as marine animals recovered. Fluctuations in metazoan population and diversity may not have been the only factor in the reduction in stromatolite abundance. Factors such as the chemistry of the environment may have been responsible for changes.
While prokaryotic cyanobacteria reproduce asexually through cell division, they were instrumental in priming the environment for the evolutionary development of more complex eukaryotic organisms. They are thought to be largely responsible for increasing the amount of oxygen in the primeval Earth's atmosphere through their continuing photosynthesis (see Great Oxygenation Event). They use water, carbon dioxide, and sunlight to create their food. A layer of polysaccharides often forms over mats of cyanobacterial cells. In modern microbial mats, debris from the surrounding habitat can become trapped within the polysaccharide layer, which can be cemented together by the calcium carbonate to grow thin laminations of limestone. These laminations can accrete over time, resulting in the banded pattern common to stromatolites. The domal morphology of biological stromatolites is the result of the vertical growth necessary for the continued infiltration of sunlight to the organisms for photosynthesis. Layered spherical growth structures termed oncolites are similar to stromatolites and are also known from the fossil record. Thrombolites are poorly laminated or non-laminated clotted structures formed by cyanobacteria, common in the fossil record and in modern sediments. There is evidence that thrombolites form in preference to stromatolites when foraminifera are part of the biological community.
The Zebra River Canyon area of the Kubis platform in the deeply dissected Zaris Mountains of southwestern Namibia provides a well-exposed example of the thrombolite-stromatolite-metazoan reefs that developed during the Proterozoic period, the stromatolites here being better developed in updip locations under conditions of higher current velocities and greater sediment influx.
Modern occurrence
Formation
Time lapse photography of modern microbial mat formation in a laboratory setting gives some revealing clues to the behavior of cyanobacteria in stromatolites. Biddanda et al. (2015) found that cyanobacteria exposed to localized beams of light moved towards the light, or expressed phototaxis, and increased their photosynthetic yield, which is necessary for survival. In a novel experiment, the scientists projected a school logo onto a petri dish containing the organisms, which accreted beneath the lighted region, forming the logo in bacteria. The authors speculate that such motility allows the cyanobacteria to seek light sources to support the colony.
In both light and dark conditions, the cyanobacteria form clumps that then expand outwards, with individual members remaining connected to the colony via long tendrils. In harsh environments where mechanical forces may tear apart the microbial mats, these substructures may provide evolutionary benefit to the colony, affording it at least some measure of shelter and protection.
Lichen stromatolites are a proposed mechanism of formation of some kinds of layered rock structure that are formed above water, where rock meets air, by repeated colonization of the rock by endolithic lichens.
Saline locations
Modern stromatolites are mostly found in hypersaline lakes and marine lagoons where high saline levels prevent animal grazing. One such location where excellent modern specimens can be observed is Hamelin Pool Marine Nature Reserve, Shark Bay in Western Australia. In 2010, a fifth type of chlorophyll, namely chlorophyll f, was discovered by Min Chen from stromatolites in Shark Bay. Halococcus hamelinensis, a halophilic archaeon, occurs in living stromatolites in Shark Bay where it is exposed to extreme conditions of UV radiation, salinity and desiccation. H. hamelinesis possesses genes that encode enzymes employed in the repair of UV induced damages in DNA by the processes of nucleotide excision repair and photoreactivation.
Other locations include Pampa del Tamarugal National Reserve in Chile; Lagoa Salgada, Rio Grande do Norte, Brazil, where modern stromatolites can be observed as both bioherms (domal type) and beds; and in the Puna de Atacama of the Andes.
Inland stromatolites can be found in saline waters in Cuatro Ciénegas Basin, a unique ecosystem in the Mexican desert. Alchichica Lake in Puebla, Mexico, has two distinct morphologic generations of stromatolites: columnar, dome-like structures rich in aragonite, forming near the shoreline and dated to 1,100 years before present (ybp), and spongy, cauliflower-like thrombolitic structures that dominate the lake from top to bottom, mainly composed of hydromagnesite, huntite, and calcite and dated to 2,800 ybp. The only open marine environment where modern stromatolites are known to prosper is the Exuma Cays in the Bahamas.
Freshwater locations
Laguna de Bacalar in Mexico's southern Yucatán Peninsula has an extensive formation of living giant microbialites (that is, stromatolites or thrombolites). The microbialite bed stretches along the lagoon, with a vertical rise of several meters in some areas. These may be the largest living freshwater microbialites on Earth.
A 1.5 km stretch of reef-forming stromatolites (primarily of the genus Scytonema) occurs in Chetumal Bay in Belize, just south of the mouth of the Rio Hondo and the Mexican border. Large microbialite towers up to 40 m high were discovered in Lake Van in eastern Turkey, the largest soda lake on Earth. They are composed of aragonite and grow by precipitation of calcite from sub-lacustrine karst water. Freshwater stromatolites are found in Lake Salda in southern Turkey. The waters are rich in magnesium, and the stromatolite structures are made of hydromagnesite.
Two instances of freshwater stromatolites are found in Canada, at Pavilion Lake and Kelly Lake in British Columbia. Pavilion Lake has the largest known freshwater stromatolites, and NASA has conducted xenobiology research there, called the "Pavilion Lake Research Project." The goal of the project is to better understand what conditions would likely harbor life on other planets.
Microbialites have been discovered in an open pit pond at an abandoned asbestos mine near Clinton Creek, Yukon, Canada. These microbialites are extremely young and presumably began forming soon after the mine closed in 1978. The combination of a low sedimentation rate, a high calcification rate, and a low microbial growth rate appears to result in the formation of these microbialites. The presence of microbialites at a historic mine site demonstrates that an anthropogenically constructed environment can foster microbial carbonate formation. This has implications for creating artificial environments for building modern microbialites, including stromatolites.
A very rare type of non-lake dwelling stromatolite lives in the Nettle Cave at Jenolan Caves, NSW, Australia. The cyanobacteria live on the surface of the limestone and are sustained by the calcium-rich dripping water, which allows them to grow toward the two open ends of the cave which provide light.
Stromatolites composed of calcite have been found in both the Blue Lake in the dormant volcano, Mount Gambier and at least eight cenote lakes including the Little Blue Lake in the Lower South-East of South Australia.
| Biology and health sciences | Gram-negative bacteria | Plants |
62696 | https://en.wikipedia.org/wiki/Guinea%20pig | Guinea pig | The guinea pig or domestic guinea pig (Cavia porcellus), also known as the cavy or domestic cavy, is a species of rodent belonging to the genus Cavia, family Caviidae. Breeders tend to use the name "cavy" for the animal, but "guinea pig" is more commonly used in scientific and laboratory contexts. Despite their name, guinea pigs are not native to Guinea, nor are they closely related to pigs. Instead, they originated in the Andes region of South America, where wild guinea pigs can still be found today. Studies based on biochemistry and DNA hybridization suggest they are domesticated animals that do not exist naturally in the wild, but are descendants of a closely related cavy species such as C. tschudii. Originally, they were domesticated as livestock (a source of meat) in the Andean region and are still consumed in some parts of the world.
In Western society, the guinea pig has enjoyed widespread popularity as a pet since its introduction to Europe and North America by European traders in the 16th century. Their docile nature, friendly responsiveness to handling and feeding, and the relative ease of caring for them have continued to make guinea pigs a popular choice of household pets. Consequently, organizations devoted to the competitive breeding of guinea pigs have been formed worldwide. Through artificial selection, many specialized breeds with varying coat colors and textures have been selected by breeders.
Livestock breeds of guinea pig play an important role in folk culture for many indigenous Andean peoples, especially as a food source. They are not only used in folk medicine and in community religious ceremonies but also raised for their meat. Guinea pigs are an important culinary staple in the Andes Mountains, where they are known as cuy. More recently, marketers have tried to increase their consumption outside South America.
Biological experimentation on domestic guinea pigs has been carried out since the 17th century. The animals were used so frequently as model organisms in the 19th and 20th centuries that the epithet guinea pig came into use to describe a human test subject. Since that time, they have mainly been replaced by other rodents, such as mice and rats. However, they are still used in research, primarily as models to study such human medical conditions as juvenile diabetes, tuberculosis, scurvy (like humans, they require dietary intake of vitamin C), and pregnancy complications.
History
Cavia porcellus is not found naturally in the wild; it is likely descended from closely related species of cavies, such as C. aperea, C. fulgida, and C. tschudii. These closely related species are still commonly found in various regions of South America. Studies from 2007 to 2010 applying molecular markers, and morphometric studies on the skull and skeletal morphology of current and mummified animals revealed the ancestor to be most likely C. tschudii. Some species of cavy, identified in the 20th century as C. anolaimae and C. guianae, may be domestic guinea pigs that have become feral by reintroduction into the wild.
Regionally known as cuy (a Spanish word derived from the Quechua quwi), the guinea pig was first domesticated as early as 5000 BC for food by tribes in the Andean region of South America (the present-day southern part of Colombia, Ecuador, Peru, and Bolivia), some thousands of years after the domestication of the South American camelids. The Moche people of ancient Peru worshipped animals and often depicted the guinea pig in their art.
Early accounts from Spanish settlers state that guinea pigs were the preferred sacrificial animal of the Inca people native to Peru. These claims are supported by archaeological digs and transcribed Quechua mythology, providing evidence that sacrificial rituals involving guinea pigs served many purposes in society such as appeasing the gods, accompanying the dead, or reading the future.
From about 1200 to the Spanish conquest in 1532, the indigenous people used selective breeding to develop many varieties of domestic guinea pigs, forming the basis for some modern domestic breeds. They continue to be a food source in the region; many households in the Andean highlands raise the animal.
In the early 1500s, Spanish, Dutch, and English traders took guinea pigs to Europe, where they quickly became popular as exotic pets among the upper classes and royalty, including Queen Elizabeth I. The earliest known written account of the guinea pig dates from 1547, in a description of the animal from Santo Domingo. Because cavies are not native to Hispaniola, the animal was believed to have been earlier introduced there by Spanish travelers. However, based on more recent excavations on West Indian islands, the animal may have been introduced to the Caribbean around 500 BC by ceramic-making horticulturalists from South America. It was present in the Ostionoid period on Puerto Rico, for example, long before the advent of the Spaniards.
The guinea pig was first described in the West in 1554 by the Swiss naturalist Conrad Gessner. Its binomial scientific name was first used by Erxleben in 1777; it is an amalgam of Pallas' generic designation (1766) and Linnaeus' specific conferral (1758).
The earliest-known European illustration of a domestic guinea pig is a painting (artist unknown) in the collection of the National Portrait Gallery in London, dated to 1580, which shows a girl in a typical Elizabethan dress holding a tortoise-shell guinea pig in her hands. She is flanked by her two brothers, one of whom holds a pet bird. The picture dates from the same period as the oldest recorded guinea pig remains in England, which are a partial cavy skeleton found at Hill Hall, an Elizabethan manor house in Essex, and dated to around 1575.
Nomenclature
Latin name
The scientific name of the common species is Cavia porcellus, with porcellus being Latin for "little pig". Cavia is Neo-Latin; it is derived from cabiai, the animal's name in the language of the Galibi tribes once native to French Guiana. Cabiai may be an adaptation of the Portuguese çavia (now savia), which is itself derived from the Tupi word saujá, meaning rat.
Guinea pig
The origin of "guinea" in "guinea pig" is hard to explain. One proposed explanation is that the animals were brought to Europe by way of Guinea, leading people to think they had originated there. "Guinea" was also frequently used in English to refer generally to any far-off, unknown country, so the name may be a colorful reference to the animal's exotic origins.
Another hypothesis suggests the "guinea" in the name is a corruption of "Guiana", an area in South America. A common misconception is that they were so named because they were sold for the price of a guinea coin. This hypothesis is untenable because the guinea was first struck in England in 1663, and William Harvey used the term "Ginny-pig" as early as 1653. Others believe "guinea" may be an alteration of the word coney (rabbit); guinea pigs were referred to as "pig coneys" in Edward Topsell's 1607 treatise on quadrupeds.
How the animals came to be called "pigs" is not clear. They are built somewhat like pigs, with large heads relative to their bodies, stout necks, and rounded rumps with no tail of any consequence; some of the sounds they emit are very similar to those made by pigs, and they spend a large amount of time eating. They can survive for long periods in small quarters, like a "pig pen," and were easily transported by ship to Europe.
Other languages
Guinea pigs are called quwi or jaca in Quechua and cuy or cuyo (plural cuyes, cuyos) in the Spanish of Ecuador, Peru, and Bolivia.
The animal's name alludes to pigs in many European languages. The German word for them is Meerschweinchen, literally "little sea pig"; in Polish they are called świnka morska ("little sea pig"), in Hungarian tengerimalac ("sea piglet"), and several other languages use similar names. The German word derives from the Middle High German name Merswin. This word originally meant "dolphin" and was used because of the animals' grunting sounds (which were thought to be similar).
Many other, possibly less scientifically based, explanations of the German name exist. For example, sailing ships stopping to reprovision in the New World would pick up guinea pig stores, providing an easily transportable source of fresh meat. The French term is cochon d'Inde (Indian pig), or cobaye; the Dutch called it Guinees biggetje (Guinean piglet), or cavia (in some Dutch dialects it is called Spaanse rat); and in Portuguese, the guinea pig is variously referred to as cobaia, from the Tupi word via its Latinization, or as porquinho da Índia (little Indian pig). This association with pigs is not universal among European terms; for example, the common word in Spanish is conejillo de Indias (little rabbit of the Indies).
The Chinese refer to the animal as 豚鼠 (túnshǔ, "pig mouse"), and sometimes as 荷蘭豬 (hélánzhū, "Netherlands pig") or 天竺鼠 (tiānzhúshǔ, "Indian mouse"). The Japanese word for guinea pig is モルモット (morumotto), which derives from the name of another mountain-dwelling rodent, the marmot. This word is how the guinea pigs were called by Dutch traders, who first brought them to Nagasaki in 1843. The other, and less common, Japanese word for guinea pig, using kanji, is 天竺鼠 (てんじくねずみ or tenjiku-nezumi), which translates as "India rat".
Biology
Guinea pigs are relatively large for rodents. In pet breeds, adults typically weigh roughly 0.7–1.2 kg and measure about 20–25 cm in length. Some livestock breeds are considerably heavier when full grown. Pet breeds live an average of four to five years but may live as long as eight years. According to Guinness World Records, the longest-lived guinea pig reached 14 years, 10 months, and 2 weeks of age. Most guinea pigs have fur, but one laboratory breed adopted by some pet owners, the skinny pig, is mostly furless. In contrast, several breeds have long fur, such as the Peruvian, the Silkie, and the Texel. They have four front teeth and small back teeth. Their front teeth grow continuously, so guinea pigs chew on materials such as wood to wear them down and prevent them from becoming too long. In the 1990s, a minority scientific opinion emerged proposing that caviomorphs such as guinea pigs, chinchillas, and degus are not actually rodents, and should be reclassified as a separate order of mammals (similar to the rodent-like lagomorphs, which include rabbits and hares). Subsequent research using wider sampling restored the consensus among mammalian biologists regarding the current classification of rodents, including guinea pigs, as monophyletic.
Wild cavies are found on grassy plains and occupy an ecological niche similar to that of cattle. They are social animals, living in the wild in small groups ("herds") that consist of several females ("sows"), a male ("boar"), and their young ("pups" not "piglets," a break with the preceding porcine nomenclature). Herds of animals move together, eating grass or other vegetation, yet do not store food. While they do not burrow themselves or build nests, they frequently seek shelter in the burrows of other animals, as well as in crevices and tunnels formed by vegetation. They are crepuscular and tend to be most active during dawn and dusk when it is harder for predators to spot them.
Male and female guinea pigs do not significantly differ in appearance apart from general size. The position of the anus is very close to the genitals in both sexes. Sexing animals at a young age must be done by someone trained in the differences. Female genitals are distinguished by a Y-shaped configuration formed from a vulvar flap. While male genitals may look similar, with the penis and anus forming a similar shape, the penis will protrude if pressure is applied to the surrounding hair anterior to the genital region. The male's testes may also be visible externally from scrotal swelling.
Behavior
Guinea pigs can learn complex paths to food and can accurately remember a learned path for months. Their most robust problem-solving strategy is motion. While guinea pigs can jump small obstacles, they cannot jump very high. Most of them are poor climbers and are not particularly agile. They startle easily, and when they sense danger, they either freeze in place for long periods or run for cover with rapid, darting motions. Larger groups of startled guinea pigs "stampede," running in haphazard directions as a means of confusing predators. When happily excited, guinea pigs may (often repeatedly) perform little hops in the air (a movement known as "popcorning"), analogous to the ferret's war dance or rabbit happy hops (binkies). Guinea pigs are also good swimmers, although they do not like being wet and infrequently need bathing.
Like many rodents, guinea pigs sometimes participate in social grooming and regularly self-groom. A milky-white substance is secreted from their eyes and rubbed into the hair during the grooming process. Groups of boars often chew each other's hair, but this is a method of establishing hierarchy within a group, rather than a social gesture. Dominance is also established through biting (especially of the ears), piloerection, aggressive noises, head thrusts, and leaping attacks. Non-sexual simulated mounting for dominance is also common among same-sex groups.
Guinea pig eyesight is not as good as that of a human in terms of distance and color, but they have a wider angle of vision (about 340°) and see in partial color (dichromacy). They have well-developed senses of hearing, smell, and touch.
Guinea pigs have developed a different biological rhythm from their wild counterparts and have longer periods of activity followed by short sleep in between. Activity is scattered randomly throughout the day; aside from an avoidance of intense light, no regular circadian patterns are apparent.
Guinea pigs do not generally thrive when housed with other species. Larger animals may regard guinea pigs as prey, though some dogs and cats can be trained to accept them. Opinion is divided over the cohousing of guinea pigs and rabbits. Some published sources say that guinea pigs and rabbits complement each other well when sharing a cage. However, rabbits have different nutritional requirements; as lagomorphs, they synthesize their own Vitamin C, so the two species will not thrive if fed the same food when housed together. Rabbits may also harbor diseases (such as respiratory infections from Bordetella and Pasteurella), to which guinea pigs are susceptible. Housing guinea pigs with other rodents such as gerbils and hamsters may increase instances of respiratory and other infections, and such rodents may act aggressively toward guinea pigs.
Vocalization
Vocalization is the primary means of communication between members of the species. These are the most common sounds made by the guinea pig:
A "wheek" is a loud noise, the name of which is onomatopoeic, also known as a whistle. An expression of general excitement may occur in response to the presence of its owner or feeding. It is sometimes used to find other guinea pigs if they are running. If a guinea pig is lost, it may wheek for assistance.
A bubbling or purring sound is made when the guinea pig enjoys itself, such as when petting and holding. It may also make this sound when grooming, crawling around to investigate a new place, or when given food.
A rumbling sound is normally related to dominance within a group, though it can also come as a response to being scared or angry. In the case of being scared, the rumble often sounds higher, and the body vibrates shortly. While courting, a male usually purrs deeply, swaying and circling the female in a behavior called rumblestrutting. A low rumble while walking away reluctantly shows passive resistance.
Chutting and whining are sounds made in pursuit situations by the pursuer and pursuee, respectively.
A chattering sound is made by rapidly gnashing the teeth, and is generally a sign of warning. Guinea pigs tend to raise their heads when making this sound.
Squealing or shrieking is a high-pitched sound of discontent in response to pain or danger.
Chirping, a less common sound likened to bird song, seems to be related to stress or discomfort or when a baby guinea pig wants to be fed. Very rarely, the chirping will last for several minutes.
Reproduction
Males (boars) reach sexual maturity in 3–5 weeks. Females (sows) can be fertile as early as four weeks old and can carry litters before becoming fully grown adults. A sow can breed year-round (with spring being the peak) and can have as many as five litters in a year, though six is theoretically possible. Unlike the offspring of most rodents, which are altricial at birth, newborn cavy pups are precocial, and are well-developed with hair, teeth, claws, and partial eyesight. The pups are mobile from birth and begin eating solid food immediately, though they continue to suckle. Sows can once again become pregnant 6–48 hours after giving birth, but it is not healthy for a female to be constantly pregnant.
The gestation period lasts from about 59 to 72 days, with an average of roughly 63–68 days. Because of the long gestation period and the large size of the pups, pregnant sows may become large and eggplant-shaped, although the change in size and shape varies depending upon the size of the litter. Litter size ranges from one to six, with three being the average; the largest recorded litter size is 9. The guinea pig mother only has two nipples, but she can readily raise the more average-sized litters of 2 to 4 pups. In smaller litters, difficulties may occur during labour due to oversized pups. Large litters result in higher incidences of stillbirth, but because the pups are delivered at an advanced stage of development, lack of access to the mother's milk has little effect on the mortality rate of newborns.
Cohabitating females assist in mothering duties if lactating; guinea pigs practice alloparental care, in which a sow may adopt the pups of another. This might take place if the original parents die or are, for some reason, separated from them. This behavior is common and is seen in many other animal species, such as the elephant.
Toxemia of pregnancy (hypertension) is a common problem and kills many pregnant females. Signs of toxemia include anorexia (loss of appetite), lack of energy, excessive salivation, a sweet or fruity breath odor due to ketones, and seizures in advanced cases. Pregnancy toxemia appears to be most common in hot climates. Other serious complications during pregnancy can include a prolapsed uterus, hypocalcaemia, and mastitis.
Females that do not give birth may develop an irreversible fusing or calcified cartilage of the pubic symphysis, a joint in the pelvis, which may occur after six months of age. If they become pregnant after this has happened, the birth canal may not widen sufficiently, which may lead to dystocia and death as they attempt to give birth.
Husbandry
Living environment
Domestic guinea pigs generally live in cages, although some owners of large numbers of cavies dedicate entire rooms to their pets. Wire mesh floors can cause injury and may be associated with an infection commonly known as bumblefoot (ulcerative pododermatitis), so cages with solid bottoms, where the animal walks directly on the bedding, are typically used. Large cages allow for adequate running space and can be constructed from wire grid panels and plastic sheeting, a style known as C&C, or "cubes and coroplast."
Red cedar (Eastern or Western) and pine, both softwoods, were commonly used as bedding. Still, these materials are believed to contain harmful phenols (aromatic hydrocarbons) and oils. Bedding materials made from hardwoods (such as aspen), paper products, and corn cobs are alternatives. Guinea pigs tend to be messy; they often jump into their food bowls or kick bedding and feces into them, and their urine sometimes crystallizes on cage surfaces, making it difficult to remove. After its cage has been cleaned, a guinea pig typically urinates and drags its lower body across the floor of the cage to mark its territory. Male guinea pigs may mark their territory in this way when they are put back into their cages after being taken out.
Guinea pigs thrive in groups of two or more; groups of sows or groups of one or more sows and a neutered boar are common combinations, but boars can sometimes live together. Guinea pigs learn to recognize and bond with other individual guinea pigs, and tests show that a boar's neuroendocrine stress response to a strange environment is significantly lowered in the presence of a bonded female but not with unfamiliar females. Groups of boars may also get along, provided their cage has enough space, they are introduced at an early age, and no females are present. In Switzerland, keeping a single guinea pig is considered harmful to its well-being and is illegal; a rental service exists to provide a temporary companion when a cage-mate dies. Sweden has similar laws against keeping a guinea pig by itself.
Diet
The guinea pig's natural diet is grass; their molars are particularly suited for grinding plant matter and grow continuously throughout their life. Most mammals that graze are large and have a long digestive tract. Guinea pigs have much longer colons than most rodents.
Easily digestible food is processed in the gastrointestinal tract and expelled as regular feces. But to get nutrients out of hard-to-digest fiber, guinea pigs ferment fiber in the cecum (in the GI tract) and then expel the contents as cecotropes, which are reingested (cecotrophy). The cecotropes are then absorbed in the small intestine to utilize the nutrients. The cecotropes are eaten directly from the anus unless the guinea pig is pregnant or obese. They share this behavior with lagomorphs (rabbits, hares, pikas) and some other animals.
In geriatric boars or sows (rarely in young ones), the muscles which allow the cecotropes to be expelled from the anus can become weak. This creates a condition known as fecal impaction, which prevents the animal from redigesting cecotropes even though harder pellets may pass through the impacted mass. The condition may be temporarily alleviated by a human carefully removing the impacted feces from the anus.
Guinea pigs benefit from a diet of fresh grass hay, such as timothy hay, in addition to food pellets, which are often based on timothy hay. Alfalfa hay is also a popular food choice, and most guinea pigs will eat large amounts of alfalfa when offered it, though some controversy exists over offering alfalfa to adult guinea pigs. Some pet owners and veterinary organizations have advised that, as a legume rather than a grass hay, alfalfa consumed in large amounts may lead to obesity and, because of its excess calcium, to bladder stones in all but pregnant and very young guinea pigs. However, published scientific sources mention alfalfa as a food source that can replenish protein, amino acids, and fiber.
Like humans, but unlike most other mammals, guinea pigs cannot synthesize vitamin C and must obtain this vital nutrient from food. If guinea pigs do not ingest enough vitamin C, they can suffer from potentially fatal scurvy. They require about 10 mg of vitamin C daily (20 mg if pregnant), which can be obtained through fresh, raw fruits and vegetables (such as broccoli, apple, cabbage, carrot, celery, and spinach) or dietary supplements or by eating fresh pellets designed for guinea pigs, if they have been handled properly. Healthy diets for guinea pigs require a complex balance of calcium, magnesium, phosphorus, potassium, and hydrogen ions; but adequate amounts of vitamins A, D, and E are also necessary.
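As a rough illustration of the dosage figures above, the following Python sketch (purely illustrative; the function and its name are not taken from any veterinary or husbandry software) returns the approximate daily vitamin C target described in the text:

```python
def daily_vitamin_c_mg(is_pregnant: bool) -> int:
    """Approximate daily vitamin C target for a guinea pig, in milligrams.

    Figures follow the text above: about 10 mg per day for a typical adult,
    roughly doubled (20 mg) during pregnancy.
    """
    return 20 if is_pregnant else 10


print(daily_vitamin_c_mg(is_pregnant=False))  # 10
print(daily_vitamin_c_mg(is_pregnant=True))   # 20
```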
Poor diets for guinea pigs have been associated with muscular dystrophy, metastatic calcification, difficulties with pregnancy, vitamin deficiencies, and teeth problems. Guinea pigs tend to be fickle eaters when it comes to fresh fruits and vegetables after having learned early in life what is and is not appropriate to consume. Their eating habits may be difficult to change after maturity. They do not respond well to sudden changes in their diet, and they may stop eating and starve rather than accept new food types. A constant supply of hay is generally recommended, as guinea pigs feed continuously and may develop bad habits if food is not present, such as chewing on their hair. As rodents, their teeth grow constantly (as do their nails, as in humans), so they routinely gnaw on things lest their teeth become too large for their jaw, a common problem in rodents. Guinea pigs chew on cloth, paper, plastic, and rubber if available. Owners may therefore "guinea pig proof" their household, especially if the animals are free to roam, to prevent damage and to protect the guinea pigs themselves.
Some plants are poisonous to guinea pigs, including bracken, bryony, buttercup, charlock, deadly nightshade, foxglove, hellebore, hemlock, lily of the valley, mayweed, monkshood, privet, ragwort, rhubarb, speedwell, toadflax (both Linaria vulgaris and Linaria dalmatica), and wild celery. Additionally, any plant which grows from a bulb (e.g., tulip or onion) is normally considered poisonous, as well as ivy and oak tree leaves.
Health problems
Common ailments in domestic guinea pigs include respiratory tract infections, diarrhea, scurvy (vitamin C deficiency, typically characterized by sluggishness), abscesses due to infection (often in the neck, due to hay embedded in the throat, or from external scratches), and infections by lice, mites, or fungus.
Mange mites (Trixacarus caviae) are a common cause of hair loss, and other symptoms may also include excessive scratching, unusually aggressive behavior when touched (due to pain), and, in some instances, seizures. Guinea pigs may also suffer from "running lice" (Gliricola porcelli), a small, white insect that can be seen moving through the hair; their eggs, which appear as black or white specks attached to the hair, are sometimes referred to as "static lice." Other causes of hair loss can be hormonal upsets caused by underlying medical conditions such as ovarian cysts.
Foreign bodies, especially tiny pieces of hay or straw, can become lodged in the eyes of guinea pigs, resulting in excessive blinking, tearing, and, in some cases, an opaque film over the eye due to corneal ulcer. Hay or straw dust can also cause sneezing. While it is normal for guinea pigs to sneeze periodically, frequent sneezing may be a symptom of pneumonia, especially in response to atmospheric changes. Pneumonia may also be accompanied by torticollis and can be fatal.
Because the guinea pig has a stout, compact body, it more easily tolerates excessive cold than excessive heat. Its normal body temperature is about 38–40 °C, so its ideal ambient air temperature range is similar to a human's, about 18–24 °C. Consistent ambient temperatures in excess of about 32 °C have been linked to hyperthermia and death, especially among pregnant sows. Guinea pigs are not well suited to environments that feature wind or frequent drafts, and respond poorly to extremes of humidity outside of the range of 30–70%.
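To make the housing ranges above concrete, here is a minimal Python sketch (an illustration only, assuming the approximate ranges given in this section; the function name and exact cut-offs are not from any published husbandry standard):

```python
def cage_environment_suitable(temp_c: float, humidity_pct: float) -> bool:
    """Check ambient conditions against the approximate ranges above:
    roughly 18-24 degrees Celsius and 30-70% relative humidity.
    Anything outside these bands is flagged as unsuitable."""
    return 18.0 <= temp_c <= 24.0 and 30.0 <= humidity_pct <= 70.0


print(cage_environment_suitable(21.0, 50.0))  # True: a comfortable room
print(cage_environment_suitable(33.0, 80.0))  # False: hot and humid, hyperthermia risk
```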
Guinea pigs are prey animals whose survival instinct is to mask pain and signs of illness, and many times, health problems may not be apparent until a condition is severe or in its advanced stages. Treatment of disease is made more difficult by the extreme sensitivity guinea pigs have to most antibiotics, including penicillin, which kill off the intestinal flora and quickly bring on episodes of diarrhea and in some cases, death.
Similar to the inherited genetic diseases of other breeds of animals (such as hip dysplasia in canines), some genetic abnormalities of guinea pigs have been reported. Most commonly, the roan coloration of Abyssinian guinea pigs is associated with congenital eye disorders and problems with the digestive system. Other genetic disorders include "waltzing disease" (deafness coupled with a tendency to run in circles), palsy, and tremor conditions.
Importance
As pets
Social behaviors
If handled correctly early in life, guinea pigs become amenable to being picked up and carried and seldom bite or scratch. They are timid explorers who often hesitate to escape their cage even when an opportunity presents itself. Still, they show considerable curiosity when allowed to walk freely, especially in familiar and safe terrain. Guinea pigs that become familiar with their owner will whistle on the owner's approach; they will also learn to whistle in response to the rustling of plastic bags or the opening of refrigerator doors, where their food is most commonly stored.
Coats and grooming
Domesticated guinea pigs occur in many breeds that have developed since their introduction to Europe and North America. These varieties vary in hair and color composition. The most common variety found in pet stores is the English shorthair (also known as the American), which has a short, smooth coat, and the Abyssinian, whose coat is ruffled with cowlicks, or rosettes. Also popular among breeders are the Peruvian and the Sheltie (or Silkie), both straight longhair breeds, and the Texel, a curly longhair. Grooming of guinea pigs is primarily accomplished using combs or brushes. Shorthair breeds are typically brushed weekly, while longhair breeds may require daily grooming.
Clubs and associations
Cavy clubs and associations dedicated to the showing and breeding of guinea pigs have been established worldwide. The American Cavy Breeders Association, an adjunct to the American Rabbit Breeders' Association, is the governing body in the United States and Canada. The British Cavy Council governs cavy clubs in the United Kingdom. Similar organizations exist in Australia (Australian National Cavy Council) and New Zealand (New Zealand Cavy Council). Each club publishes its standard of perfection and determines which breeds are eligible for showing.
Human allergies
Allergic symptoms, including rhinitis, conjunctivitis, and asthma, have been documented in laboratory animal workers who come into contact with guinea pigs. Allergic reactions following direct exposure to guinea pigs in domestic settings have also been reported. Two major guinea pig allergens, Cav p I and Cav p II, have been identified in guinea pig fluids (urine and saliva) and guinea pig dander. People who are allergic to guinea pigs are usually allergic to hamsters and gerbils, as well. Allergy shots can successfully treat an allergy to guinea pigs. However, treatment can take up to 18 months.
Traditional uses in Andean populations
Folklore traditions involving guinea pigs are numerous; they are exchanged as gifts, used in customary social and religious ceremonies, and frequently referred to in spoken metaphors. They also are used in traditional healing rituals by folk doctors, or curanderos, who use the animals to diagnose diseases such as jaundice, rheumatism, arthritis, and typhus. They are rubbed against the bodies of the sick and are seen as a supernatural medium. Black guinea pigs are considered especially useful for diagnoses. The animal may be cut open and its entrails examined to determine whether the cure was effective. These methods are widely accepted in many parts of the Andes, where Western medicine is unavailable or distrusted.
Peruvians consume an estimated 65 million guinea pigs each year. The animal is so entrenched in the culture that one famous painting of the Last Supper in the main cathedral in Cusco shows Christ and his disciples dining on guinea pig. The animal remains an important aspect of certain religious events in both rural and urban areas of Peru. A religious celebration known as "collecting the cuys" is a major festival in many villages in the Antonio Raimondi province of eastern Peru and is celebrated in smaller ceremonies in Lima. It is a syncretistic event, combining elements of Catholicism and pre-Columbian religious practices, and revolves around the celebration of local patron saints. The exact form the celebration takes differs from town to town; in some localities, a sirvinti (servant) is appointed to go from door to door, collecting donations of guinea pigs, while in others, guinea pigs may be brought to a communal area to be released in a mock bullfight. Meals such as cuy chactado are always served as part of these festivities, and the killing and serving of the animal are framed by some communities as a symbolic satire of local politicians or important figures. In the Tungurahua and Cotopaxi provinces of central Ecuador, guinea pigs are employed in the celebrations surrounding the feast of Corpus Christi as part of the Ensayo, which is a community meal, and the Octava, where castillos (greased poles) are erected with prizes tied to the crossbars, from which several guinea pigs may be hung. The Peruvian town of Churin has an annual festival that involves dressing guinea pigs in elaborate costumes for competition. There are also guinea pig festivals held in Huancayo, Cusco, Lima, and Huacho, featuring costumes and guinea pig dishes. Most guinea pig celebrations occur on National Guinea Pig Day (Día Nacional del Cuy) across Peru on the second Friday of October.
In popular culture and media
As a result of their widespread popularity, especially in households with children, guinea pigs have shown a presence in culture and media. Some noted appearances of the animal in literature include the short story "Pigs Is Pigs" by Ellis Parker Butler, which is a tale of bureaucratic incompetence. Two guinea pigs held at a railway station breed unchecked while humans argue whether they are "pigs" or "pets" to determine freight charges. Butler's story, in turn, inspired the Star Trek: The Original Series episode "The Trouble with Tribbles", written by David Gerrold.
In children's literature
The Fairy Caravan, a novel by Beatrix Potter, and Michael Bond's Olga da Polga series for children both feature a guinea pig as the protagonist. Another appearance is in The Magician's Nephew by C. S. Lewis: in the first (chronologically) of his The Chronicles of Narnia series, a guinea pig is the first creature to travel to the Wood between the Worlds. In Ursula Dubosarsky's Maisie and the Pinny Gig, a little girl has a recurrent dream about a giant guinea pig, while guinea pigs feature significantly in several of Dubosarsky's other books, including the young adult novel The White Guinea Pig and The Game of the Goose.
In film and television
Guinea pigs have also been featured in film and television. In the TV movie Shredderman Rules, the main character and the main character's crush both have guinea pigs, which play a minor part in the plot. A guinea pig named Rodney, voiced by Chris Rock, was a prominent character in the 1998 film Dr. Dolittle, and Linny the Guinea Pig is a co-star on Nick Jr.'s Wonder Pets. Guinea pigs were used in some major advertising campaigns in the 1990s and 2000s, notably for Egg Banking plc, Snapple, and Blockbuster Video. In the South Park season 12 episode "Pandemic 2: The Startling", giant guinea pigs dressed in costumes rampage over the Earth. The 2009 Walt Disney Pictures movie G-Force features a group of highly intelligent guinea pigs trained as operatives of the U.S. government.
As livestock
In South America
Guinea pigs (called cuy, cuye, or curí) were originally domesticated for their meat in the Andes. Traditionally, the animal was reserved for ceremonial meals and as a delicacy by indigenous people in the Andean highlands. Still, since the 1960s, it has become more socially acceptable for consumption by all people. It continues to be a significant part of the diet in Peru and Bolivia, particularly in the Andes Mountains highlands; it is also eaten in some areas of Ecuador (mainly in the Sierra) and in Colombia, mainly in the southwestern part of the country (Cauca and Nariño departments). Because guinea pigs require much less room than traditional livestock and reproduce extremely quickly, they are a more profitable source of food and income than many traditional stock animals, such as pigs and cattle; moreover, they can be raised in an urban environment. Both rural and urban families raise guinea pigs for supplementary income, and the animals are commonly bought and sold at local markets and large-scale municipal fairs.
Guinea pig meat is high in protein and low in fat and cholesterol, and is described as being similar to rabbit and the dark meat of chicken. The animal may be served fried (chactado or frito), broiled (asado), or roasted (al horno), and in urban restaurants may also be served in a casserole or a fricassee. Ecuadorians commonly consume sopa or locro de cuy, a soup dish. Pachamanca or huatia, an earth oven cooking method, is also popular, and cuy cooked this way is usually served with chicha (corn beer) in traditional settings.
In the United States, Europe, and Japan
Andean immigrants in New York City raise and sell guinea pigs for meat, and some South American restaurants in major cities in the United States serve cuy as a delicacy. In the 1990s and 2000s, La Molina University began exporting large-breed guinea pigs to Europe, Japan, and the United States in the hope of increasing human consumption outside of countries in northern South America.
Sub-Saharan Africa
Efforts have been made to promote guinea pig husbandry in developing countries of West Africa, where they occur more widely than generally known because they are usually not covered by livestock statistics. However, it is not known when or where the animals were introduced to Africa. In Cameroon, they are widely distributed. In the Democratic Republic of the Congo, they can be found both in peri-urban environments as well as in rural regions, for example, in South Kivu. They are also frequently held in rural households in Iringa Region of southwestern Tanzania.
Peruvian breeding program
Peruvian research universities, especially La Molina National Agrarian University, began experimental programs in the 1960s intending to breed larger-sized guinea pigs. Subsequent university efforts have sought to change breeding and husbandry procedures in South America to make the raising of guinea pigs as livestock more economically sustainable. The variety of guinea pig produced by La Molina is fast-growing and substantially heavier than pet breeds. All the large breeds of guinea pig are known as cuy mejorados and the pet breeds are known as cuy criollos. The three original lines out of Peru were the Perú (selected for rapid early weight gain), the Andina, and the Inti.
In scientific research
The use of guinea pigs in scientific experimentation dates back at least to the 17th century, when the Italian biologists Marcello Malpighi and Carlo Fracassati conducted vivisections of guinea pigs in their examinations of anatomic structures. In 1780, Antoine Lavoisier used a guinea pig in his experiments with the calorimeter, a device used to measure heat production. Guinea pigs played a major role in the establishment of germ theory in the late 19th century, through the experiments of Louis Pasteur, Émile Roux, and Robert Koch. Guinea pigs have been launched into orbital space flight several times, first by the USSR on the Sputnik 9 biosatellite of March 9, 1961 – with a successful recovery. China also launched and recovered a biosatellite in 1990 which included guinea pigs as passengers.
Guinea pigs remained popular laboratory animals until the later 20th century: about 2.5 million guinea pigs were used annually in the U.S. for research in the 1960s, but that total decreased to about 375,000 by the mid-1990s. As of 2007, they constitute about 2% of the current total of laboratory animals. In the past, they were widely used to standardize vaccines and antiviral agents; they were also often employed in studies on the production of antibodies in response to extreme allergic reactions, or anaphylaxis. Less common uses included research in pharmacology and irradiation. Since the middle 20th century, they have been replaced in laboratory contexts primarily by mice and rats. This is in part because research into the genetics of guinea pigs has lagged behind that of other rodents, although geneticists W. E. Castle and Sewall Wright made some contributions to this area of study, especially regarding coat color. The guinea pig genome was sequenced in 2008 as part of the Mammalian Genome Project, but the guinea pig sequence scaffolds have not been assigned to chromosomes.
The guinea pig was most extensively used in research and diagnosis of infectious diseases. Common uses included identification of brucellosis, Chagas disease, cholera, diphtheria, foot-and-mouth disease, glanders, Q fever, Rocky Mountain spotted fever, and various strains of typhus. They are still frequently used to diagnose tuberculosis since they are easily infected by human tuberculosis bacteria. Because guinea pigs are one of the few animals which, like humans and other primates, cannot synthesize vitamin C but must obtain it from their diet, they are ideal for researching scurvy. From the accidental discovery in 1907 that scurvy could be induced in guinea pigs to their use to prove the chemical structure of the "scorbutic factor" in 1932, the guinea pig model proved a crucial part of vitamin C research.
Complement, an important component for serology, was first isolated from the blood of the guinea pig. Guinea pigs have an unusual insulin mutation, and are a suitable species for the generation of anti-insulin antibodies. Present at a level 10 times that found in other mammals, the insulin in guinea pigs may be important in growth regulation, a role usually played by growth hormone. Additionally, guinea pigs have been identified as model organisms for the study of juvenile diabetes and, because of the frequency of pregnancy toxemia, of pre-eclampsia in human females. Their placental structure is similar to that of humans, and their gestation period can be divided into trimesters that resemble the stages of fetal development in humans.
Guinea pig strains used in scientific research are primarily outbred strains. Aside from the typical American or English stock, the two main outbred strains in laboratory use are the Hartley and Dunkin-Hartley; these English strains are albino, although pigmented strains are also available. Inbred strains are less common and are usually used for very specific research, such as immune system molecular biology. Of the inbred strains that have been created, the two still used with any frequency are, following Sewall Wright's designations, "Strain 2" and "Strain 13".
Hairless breeds of guinea pigs have been used in scientific research since the 1980s, particularly for dermatological studies. A hairless and immunodeficient breed was the result of a spontaneous genetic mutation in inbred laboratory strains from the Hartley stock at the Eastman Kodak Company in 1979. An immunocompetent hairless breed was also identified by the Institute Armand Frappier in 1978, and Charles River Laboratories has reproduced this breed for research since 1982. Cavy fanciers then began acquiring hairless breeds, and the pet hairless varieties are referred to as "skinny pigs."
Metaphorical usage
In English, the term "guinea pig" is commonly used as a metaphor for a subject of scientific experimentation, or in modern times a subject of any experiment or test. This usage dates back to the early 20th century: the earliest examples cited by the Oxford English Dictionary date from 1913 and 1920. In 1933, Consumers Research founders F. J. Schlink and Arthur Kallet wrote a book entitled 100,000,000 Guinea Pigs, extending the metaphor to consumer society. The book became a national bestseller in the United States, thus further popularizing the term, and spurred the growth of the consumer protection movement. During World War II, the Guinea Pig Club was established at Queen Victoria Hospital, East Grinstead, Sussex, England, as a social club and mutual support network for the patients of plastic surgeon Archibald McIndoe, who were undergoing previously untested reconstruction procedures. The negative connotation of the term was later employed in the novel The Guinea Pigs (1970) by Czech author Ludvík Vaculík as an allegory for Soviet totalitarianism.
| Biology and health sciences | Rodents | null |
62784 | https://en.wikipedia.org/wiki/Soybean | Soybean | The soybean, soy bean, or soya bean (Glycine max) is a species of legume native to East Asia, widely grown for its edible bean, which has numerous uses.
Traditional unfermented food uses of soybeans include soy milk, from which tofu and tofu skin are made. Fermented soy foods include soy sauce, fermented bean paste, nattō, and tempeh. Fat-free (defatted) soybean meal is a significant and cheap source of protein for animal feeds and many packaged meals. For example, soybean products, such as textured vegetable protein (TVP), are ingredients in many meat and dairy substitutes.
Soybeans contain significant amounts of phytic acid, dietary minerals and B vitamins. Soy vegetable oil, used in food and industrial applications, is another product of processing the soybean crop. Soybean is a common protein source in feed for farm animals that in turn yield animal protein for human consumption.
Etymology
The word "soy" derives from the Japanese soi, a regional variant of shōyu, meaning "soy sauce".
The name of the genus, Glycine, comes from Linnaeus. When naming the genus, Linnaeus observed that one of the species within the genus had a sweet root. Based on the sweetness, the Greek word for sweet, glykós, was Latinized. The genus name is not related to the amino acid glycine.
Description
Like most plants, soybeans grow in distinct morphological stages as they develop from seeds into fully mature plants.
Germination
The first stage of growth is germination, a process which first becomes apparent as a seed's radicle emerges. This is the first stage of root growth and occurs within the first 48 hours under ideal growing conditions. The first photosynthetic structures, the cotyledons, develop from the hypocotyl, the first plant structure to emerge from the soil. These cotyledons both act as leaves and as a source of nutrients for the immature plant, providing the seedling nutrition for its first 7 to 10 days.
Maturation
The first true leaves develop as a pair of single blades. Subsequent to this first pair, mature nodes form compound leaves with three blades. Mature trifoliolate leaves, having three to four leaflets per leaf, are often between long and broad. Under ideal conditions, stem growth continues, producing new nodes every four days. Before flowering, roots can grow per day. If rhizobia are present, root nodulation begins by the time the third node appears. Nodulation typically continues for 8 weeks before the symbiotic infection process stabilizes. The final characteristics of a soybean plant are variable, with factors such as genetics, soil quality, and climate affecting its form; however, fully mature soybean plants are generally between in height and have rooting depths between .
Flowering
Flowering is triggered by day length, often beginning once days become shorter than 12.8 hours. This trait is highly variable however, with different varieties reacting differently to changing day length. Soybeans form inconspicuous, self-fertile flowers which are borne in the axil of the leaf and are white, pink or purple. Though they do not require pollination, they are attractive to bees, because they produce nectar that is high in sugar content. Depending on the soybean variety, node growth may cease once flowering begins. Strains that continue nodal development after flowering are termed "indeterminates" and are best suited to climates with longer growing seasons. Often soybeans drop their leaves before the seeds are fully mature.
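As a simplified illustration of the photoperiod trigger described above, the short Python sketch below (a deliberate simplification; as the text notes, real varieties respond differently to changing day length) flags when day length drops below the cited 12.8-hour threshold:

```python
CRITICAL_DAY_LENGTH_HOURS = 12.8  # threshold cited above; varies by variety


def flowering_triggered(day_length_hours: float) -> bool:
    """Return True once day length falls below the critical photoperiod."""
    return day_length_hours < CRITICAL_DAY_LENGTH_HOURS


print(flowering_triggered(13.5))  # False: days still longer than the threshold
print(flowering_triggered(12.5))  # True: short enough to trigger flowering
```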
The fruit is a hairy pod that grows in clusters of three to five, each pod is long and usually contains two to four (rarely more) seeds 5–11 mm in diameter. Soybean seeds come in a wide variety of sizes and hull colors such as black, brown, yellow, and green. Variegated and bicolored seed coats are also common.
Seed resilience
The hull of the mature bean is hard, water-resistant, and protects the cotyledon and hypocotyl (or "germ") from damage. If the seed coat is cracked, the seed will not germinate. The scar, visible on the seed coat, is called the hilum (colors include black, brown, buff, gray and yellow) and at one end of the hilum is the micropyle, or small opening in the seed coat which can allow the absorption of water for sprouting.
Some seeds such as soybeans containing very high levels of protein can undergo desiccation, yet survive and revive after water absorption. A. Carl Leopold began studying this capability at the Boyce Thompson Institute for Plant Research at Cornell University in the mid-1980s. He found soybeans and corn to have a range of soluble carbohydrates protecting the seed's cell viability. Patents were awarded to him in the early 1990s on techniques for protecting biological membranes and proteins in the dry state.
Chemistry
Dry soybeans contain 36% protein and 20% fat (in the form of soybean oil) by weight. The remainder consists of 30% carbohydrates, 9% water and 5% ash.
Soybeans comprise approximately 8% seed coat or hull, 90% cotyledons and 2% hypocotyl axis or germ.
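As a worked example of the composition figures above, this short Python sketch (purely illustrative) converts the approximate fractions into gram amounts for a given mass of dry beans:

```python
# Approximate dry-soybean composition by weight, taken from the text above.
COMPOSITION = {
    "protein": 0.36,
    "fat": 0.20,
    "carbohydrate": 0.30,
    "water": 0.09,
    "ash": 0.05,
}


def component_masses(grams_of_beans: float) -> dict:
    """Split a mass of dry soybeans into its approximate components, in grams."""
    return {name: round(fraction * grams_of_beans, 1)
            for name, fraction in COMPOSITION.items()}


# 250 g of dry beans -> roughly 90 g protein, 50 g fat, 75 g carbohydrate, etc.
print(component_masses(250))
```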
Taxonomy
The genus Glycine may be divided into two subgenera, Glycine and Soja. The subgenus Soja includes the cultivated soybean, G. max, and the wild soybean, treated either as a separate species G. soja, or as the subspecies G. max subsp. soja. The cultivated and wild soybeans are annuals. The wild soybean is native to China, Japan, Korea and Russia. The subgenus Glycine consists of at least 25 wild perennial species: for example, G. canescens and G. tomentella, both found in Australia and Papua New Guinea. Perennial soybean (Neonotonia wightii) belongs to a different genus. It originated in Africa and is now a widespread pasture crop in the tropics.
Like some other crops of long domestication, the relationship of the modern soybean to wild-growing species can no longer be traced with any degree of certainty. It is a cultigen with a very large number of cultivars.
Ecology
Like many legumes, soybeans can fix atmospheric nitrogen, due to the presence of symbiotic bacteria from the Rhizobia group.
Cultivation
Conditions
Cultivation is successful in climates with hot summers, with optimum growing conditions in mean temperatures of about 20 to 30 °C; temperatures below 20 °C and above 40 °C stunt growth significantly. They can grow in a wide range of soils, with optimum growth in moist alluvial soils with good organic content. Soybeans, like most legumes, perform nitrogen fixation by establishing a symbiotic relationship with the bacterium Bradyrhizobium japonicum (syn. Rhizobium japonicum; Jordan 1982). This ability to fix nitrogen allows farmers to reduce nitrogen fertilizer use and increase yields when growing other crops in rotation with soy. There may be some trade-offs, however, in the long-term abundance of organic material in soils where soy and other crops (for example, corn) are grown in rotation. For best results, though, an inoculum of the correct strain of bacteria should be mixed with the soybean (or any legume) seed before planting. Modern crop cultivars generally reach a height of around one metre, and take 80–120 days from sowing to harvesting.
Soils
Soil scientists Edson Lobato (Brazil), Andrew McClung (U.S.), and Alysson Paolinelli (Brazil) were awarded the 2006 World Food Prize for transforming the ecologically biodiverse savannah of the Cerrado region of Brazil into highly productive cropland that could grow profitable soybeans.
Contamination concerns
Human sewage sludge can be used as fertilizer to grow soybeans. Soybeans grown in sewage sludge likely contain elevated concentrations of metals.
Pests
Soybean plants are vulnerable to a wide range of bacterial diseases, fungal diseases, viral diseases, and parasites.
The primary bacterial diseases include bacterial blight, bacterial pustule and downy mildew affecting the soybean plant.
The Japanese beetle (Popillia japonica) poses a significant threat to agricultural crops, including soybeans, due to its voracious feeding habits. Found commonly in both urban and suburban areas, these beetles are frequently observed in agricultural landscapes where they can cause considerable damage to crops like corn, soybeans, and various fruits.
Soybean cyst nematode (SCN) is the worst pest of soybean in the US. Losses of 30% to 40% are common even when plants show no visible symptoms.
The corn earworm moth, or bollworm (Helicoverpa zea), is a common and destructive pest of soybeans grown in Virginia.
Soybeans are consumed by whitetail deer which may damage soybean plants through feeding, trampling and bedding, reducing crop yields by as much as 15%. Groundhogs are also a common pest in soybean fields, living in burrows underground and feeding nearby. One den of groundhogs can consume a tenth to a quarter of an acre of soybeans. Chemical repellents or firearms are effective for controlling pests in soybean fields.
Soybeans suffer from the fungus Pythium spinosum in Arkansas and Indiana (United States), and China.
In Japan and the United States, the Soybean dwarf virus (SbDV) causes a disease in soybeans and is transmitted by aphids.
Cultivars
Disease resistant cultivars
Resistant varieties are available. Nataraj et al. (2020) found that, in Indian cultivars, anthracnose caused by Colletotrichum truncatum is resisted by a combination of two major genes.
PI 88788
The vast majority of cultivars in the US have soybean cyst nematode (SCN) resistance, but rely on only one breeding line (PI 88788) as their sole source of resistance. (The resistance genes provided by PI 88788 were characterized in 1997.) As a result, in 2012 only 18 of 807 cultivars recommended by the Iowa State University Extension had any ancestry outside of PI 88788. By 2020 the situation was much the same: of 849 cultivars, 810 had some ancestry from PI 88788, 35 from Peking, and only 2 from PI 89772. (The number with exclusively PI 88788 ancestry was not available for 2020.) By 2012 this reliance was suspected, and by 2020 clearly shown, to be producing SCN populations that are virulent on PI 88788.
Production
In 2020, world production of soybeans was over 353 million tonnes, led by Brazil and the United States combined with 66% of the total (table). Production has dramatically increased across the globe since the 1960s, but particularly in South America after a cultivar that grew well in low latitudes was developed in the 1980s. The rapid growth of the industry has been primarily fueled by large increases in worldwide demand for meat products, particularly in developing countries like China, which alone accounts for more than 60% of imports.
Environmental issues
In spite of the Amazon "Soy Moratorium", soy production continues to play a significant role in deforestation when its indirect impacts are taken into account, as land used to grow soy continues to increase. This land either comes from pasture land (which increasingly supplants forested areas), or areas outside the Amazon not covered by the moratorium, such as the Cerrado region. Roughly one-fifth of deforestation can be attributed to expanding land use to produce oilseeds, primarily for soy and palm oil, whereas the expansion of beef production accounts for 41%. The main driver of deforestation is the global demand for meat, which in turn requires huge tracts of land to grow feed crops for livestock. Around 80% of the global soybean crop is used to feed livestock.
History
Soybeans were a crucial crop in East Asia long before written records began. The origin of soy bean cultivation remains scientifically debated. The closest living relative of the soybean is Glycine soja (previously called G. ussuriensis), a legume native to central China. There is evidence for soybean domestication between 7000 and 6600 BC in China, between 5000 and 3000 BC in Japan and 1000 BC in Korea.
The first unambiguously domesticated, cultigen-sized soybean was discovered in Korea at the Mumun-period Daundong site. Before the development of fermented products such as fermented black soybeans (douchi), jiang (Chinese miso), soy sauce, tempeh, nattō, and miso, soy was considered sacred for its beneficial effects in crop rotation, and it was eaten by itself, as bean curd, and as soy milk.
Soybeans were introduced to Java, in the Malay Archipelago, circa the 13th century or probably earlier. By the 17th century, European traders (Portuguese, Spanish, and Dutch) were trading soybeans and soybean products in Asia through their trade with the Far East, and the crop reached the Indian subcontinent in this period. By the 18th century, soybeans were introduced to the Americas and Europe from China. Soy was introduced to Africa from China in the late 19th century and is now widespread across the continent.
East Asia
The cultivation of soybeans began in the eastern half of northern China by 2000 BC, but is almost certainly much older. The earliest documented evidence for the use of Glycine of any kind comes from charred plant remains of wild soybean recovered from Jiahu in Henan province China, a Neolithic site occupied between 9000 and 7800 calendar years ago (cal bp). An abundance of archeological charred soybean specimens have been found centered around this region.
According to ancient Chinese myth, in 2853 BC the legendary Emperor Shennong of China proclaimed that five plants were sacred: soybeans, rice, wheat, barley, and millet. Early Chinese records mention that soybeans were a gift from the region of the Yangtze River delta and Southeast China. The Great Soviet Encyclopedia claims soybean cultivation originated in China about 5000 years ago. Some scholars suggest that soybean originated in China and was domesticated about 3500 BC. Recent research, however, indicates that seeding of wild forms started early (before 5000 BC) in multiple locations throughout East Asia.
Soybeans became an important crop by the Zhou dynasty (c. 1046–256 BC) in China. However, the details of where, when, and under what circumstances soybean developed a close relationship with people are poorly understood. Soybean was unknown in South China before the Han period. From about the first century AD to the Age of Discovery (15–16th centuries), soybeans were introduced across South and Southeast Asia. This spread was due to the establishment of sea and land trade routes. The earliest Japanese textual reference to the soybean is in the classic Kojiki (Records of Ancient Matters), which was completed in AD 712.
The oldest preserved soybeans resembling modern varieties in size and shape were found in archaeological sites in Korea dated about 1000 BC. Radiocarbon dating of soybean samples recovered through flotation during excavations at the Early Mumun period Okbang site in Korea indicated soybean was cultivated as a food crop in around 1000–900 BC. Soybeans from the Jōmon period in Japan from 3000 BC are also significantly larger than wild varieties.
Southeast Asia
Soybeans were mentioned as kadêlê (modern Indonesian: kedelai) in an old Javanese manuscript, Serat Sri Tanjung, which dates to 12th- to 13th-century Java. By the 13th century, the soybean had arrived and was being cultivated in Indonesia; it probably arrived much earlier, however, carried by traders or merchants from southern China.
The earliest known reference to it as "tempeh" appeared in 1815 in the Serat Centhini manuscript. The development of tempeh, a fermented soybean cake, probably took place earlier, circa the 17th century in Java.
Indian subcontinent
By the 1600s, soy sauce spread from southern Japan across the region through the Dutch East India Company (VOC).
While the origins and history of soybean cultivation in the Eastern Himalayas are debated, the crop was potentially introduced from southern China, more specifically Yunnan province. Alternatively, it could have reached the region through traders from Indonesia via Myanmar. Northeast India is viewed as a passive micro-centre within the soybean secondary gene centre. Central India is considered a tertiary gene centre, particularly the area encompassing Madhya Pradesh, which is also the country's largest soybean producer.
Iberia
In 1603, "Vocabvlario da Lingoa de Iapam", a famous Japanese-Portuguese dictionary, was compiled and published by Jesuit priests in Nagasaki. It contains short but clear definitions for about 20 words related to soyfoods—the first in any European language.
Luso-Hispanic traders had been familiar with soybeans and soybean products through their trade with the Far East since at least the 17th century. However, it was not until the late 19th century that the first attempt to cultivate soybeans in the Iberian Peninsula was undertaken. In 1880, the soybean was first cultivated in Portugal in the Botanical Gardens at Coimbra (Crespi 1935).
In about 1910, the first attempts at soybean cultivation in Spain were made by the Count of San Bernardo, who grew soybeans on his estates at Almillo (in southwest Spain), about 48 miles east-northeast of Seville.
North America
Soybeans were first introduced to North America from China in 1765, by Samuel Bowen, a former East India Company sailor who had visited China in conjunction with James Flint, the first Englishman legally permitted by the Chinese authorities to learn Chinese. The first "New World" soybean crop was grown on Skidaway Island, Georgia, in 1765 by Henry Yonge from seeds given him by Samuel Bowen. Bowen grew soy near Savannah, Georgia, possibly using funds from Flint, and made soy sauce for sale to England. Although soybean was introduced into North America in 1765, for the next 155 years, the crop was grown primarily for forage.
In 1831, the first soy product "a few dozen India Soy" [sauce] arrived in Canada. Soybeans were probably first cultivated in Canada by 1855, and definitely in 1895 at Ontario Agricultural College.
It was not until Lafayette Mendel and Thomas Burr Osborne showed that the nutritional value of soybean seeds could be increased by cooking, moisture or heat, that soy went from a farm animal feed to a human food.
William Morse is considered the "father" of modern soybean agriculture in America. In 1910, he and Charles Piper began to popularize what was regarded as a relatively unknown Oriental peasant crop in America into a "golden bean", with the soybean becoming one of America's largest and most nutritious farm crops.
Prior to the 1920s in the US, the soybean was mainly a forage crop, a source of oil, meal (for feed) and industrial products, with very little used as food. However, it took on an important role after World War I. During the Great Depression, the drought-stricken (Dust Bowl) regions of the United States were able to use soy to regenerate their soil because of its nitrogen-fixing properties. Farms were increasing production to meet with government demands, and Henry Ford became a promoter of soybeans. In 1931, Ford hired chemists Robert Boyer and Frank Calvert to produce artificial silk. They succeeded in making a textile fiber of spun soy protein fibers, hardened or tanned in a formaldehyde bath, which was given the name Azlon. It never reached the commercial market. Soybean oil was used by Ford in paint for the automobiles, as well as a fluid for shock absorbers.
During World War II, soybeans became important in both North America and Europe chiefly as substitutes for other protein foods and as a source of edible oil. During the war, the United States Department of Agriculture also recognized the soybean's value as a fertilizer crop because of its nitrogen fixation.
Prior to the 1970s, Asian-Americans and Seventh-Day Adventists were essentially the only users of soy foods in the United States. "The soy foods movement began in small pockets of the counterculture, notably the Tennessee commune named simply The Farm, but by the mid-1970s a vegetarian revival helped it gain momentum and even popular awareness through books such as The Book of Tofu."
Although practically unseen in 1900, by 2000 soybean plantings covered more than 70 million acres, second only to corn, and it became America's largest cash crop. In 2021, 87,195,000 acres were planted, with the largest acreage in the states of Illinois, Iowa, and Minnesota.
Caribbean and West Indies
The soybean arrived in the Caribbean in the form of soy sauce made by Samuel Bowen in Savannah, Georgia, in 1767. It remains only a minor crop there, but its uses for human food are growing steadily.
Mediterranean area
The soybean was first cultivated in Italy by 1760 in the Botanical Garden of Turin. During the 1780s, it was grown in at least three other botanical gardens in Italy. The first soybean product, soy oil, arrived in Anatolia in 1909 under the Ottoman Empire. The first clear cultivation there occurred in 1931, which was also the first time that soybeans were cultivated in the Middle East. By 1939, soybeans were cultivated in Greece.
Australia
Wild soybeans were discovered in northeastern Australia in 1770 by explorers Banks and Solander. In 1804, the first soyfood product ("Fine India Soy" [sauce]) was sold in Sydney. In 1879, the first domesticated soybeans arrived in Australia, a gift of the Minister of the Interior Department, Japan.
France
The soybean was first cultivated in France by 1779 (and perhaps as early as 1740). The two key early people and organizations introducing the soybean to France were the Society of Acclimatization (starting in 1855) and Li Yu-ying (from 1910). Li started a large tofu factory, where the first commercial soyfoods in France were made.
Africa
The soybean first arrived in Africa via Egypt in 1857. Soya Meme (Baked Soya) is produced in the village called Bame Awudome near Ho, the capital of the Volta Region of Ghana, by the Ewe people of Southeastern Ghana and southern Togo.
Central Europe
In 1873, Professor Friedrich J. Haberlandt first became interested in soybeans when he obtained the seeds of 19 soybean varieties at the Vienna World Exposition (Wiener Weltausstellung). He cultivated these seeds in Vienna, and soon began to distribute them throughout Central and Western Europe. In 1875, he first grew the soybeans in Vienna, then in early 1876 he sent samples of seeds to seven cooperators in central Europe, who planted and tested the seeds in the spring of 1876, with good or fairly good results in each case. Most of the farmers who received seeds from him cultivated them, then reported their results. Starting in February 1876, he published these results first in various journal articles, and finally in his magnum opus, Die Sojabohne (The Soybean) in 1878. In northern Europe, lupin (lupine) is known as the "soybean of the north".
Central Asia
The soybean was first cultivated in Central Asia, in Transcaucasia, in 1876 by the Dungans. This region has never been important for soybean production.
Central America
The first reliable reference to the soybean in this region dates from Mexico in 1877.
South America
The soybean first arrived in South America in Argentina in 1882.
Andrew McClung showed in the early 1950s that with soil amendments the Cerrado region of Brazil would grow soybeans. In June 1973, when soybean futures markets mistakenly portended a major shortage, the Nixon administration imposed an embargo on soybean exports. It lasted only a week, but Japanese buyers felt that they could not rely on U.S. supplies, and the rival Brazilian soybean industry came into existence. This led Brazil to become the world's largest producer of soybeans in 2020, with 131 million tons.
Industrial soy production in South America is characterized by wealthy management who live far away from the production site, which they manage remotely. In Brazil, these managers depend heavily on advanced technology and machinery, and agronomic practices such as zero tillage, high pesticide use, and intense fertilization. One contributing factor is the increased attention on the Brazilian Cerrado in Bahia, Brazil, by US farmers in the early 2000s. This was due to rising values of scarce farmland and high production costs in the US Midwest. There were many promotions of the Brazilian Cerrado by US farm producer magazines and market consultants, who portrayed it as having cheap land with ideal production conditions, with infrastructure being the only thing it lacked. These same magazines also presented Brazilian soy as inevitably out-competing American soy. Another draw to investing was insider information about the climate and market in Brazil. A few dozen American farmers purchased varying amounts of land by a variety of means, including finding investors and selling off land holdings. Many followed the ethanol company model and formed an LLC with investments from neighboring farmers, friends, and family, while some turned to investment companies. Some soy farmers either liquidated their Brazilian assets or switched to remote management from the US in order to return to farming in the United States and implement new farming and business practices to make their US farms more productive. Others planned to sell their now expensive Bahia land to buy cheaper land in the frontier regions of Piauí or Tocantins to create more soybean farms.
Genetics
Chinese landraces were found to have a slightly higher genetic diversity than inbred lines by Li et al., 2010. Specific locus amplified fragment sequencing (SLAF-seq) has been used by Han et al., 2015 to study the genetic history of the domestication process, perform genome-wide association studies (GWAS) of agronomically relevant traits, and produce high-density linkage maps. An SNP array was developed by Song et al., 2013 and has been used for research and breeding; the same team applied their array in Song et al., 2015 against the USDA Soybean Germplasm Collection and obtained mapping data that are expected to yield association mapping data for such traits.
Rpp1-R1 is an R gene (NB-LRR) providing resistance against soybean rust, caused by the pathogen Phakopsora pachyrhizi. Its synthesis product includes a ULP1 protease.
Qijian et al., 2017 provides the gene array.
Genetic modification
Soybeans are one of the "biotech food" crops that have been genetically modified, and genetically modified soybeans are being used in an increasing number of products. In 1995, Monsanto company introduced glyphosate-tolerant soybeans that have been genetically modified to be resistant to Monsanto's glyphosate herbicides through substitution of the Agrobacterium sp. (strain CP4) gene EPSP (5-enolpyruvyl shikimic acid-3-phosphate) synthase. The substituted version is not sensitive to glyphosate.
In 1997, about 8% of all soybeans cultivated for the commercial market in the United States were genetically modified. In 2010, the figure was 93%. As with other glyphosate-tolerant crops, concern is expressed over damage to biodiversity. A 2003 study concluded the "Roundup Ready" (RR) gene had been bred into so many different soybean cultivars, there had been little decline in genetic diversity, but "diversity was limited among elite lines from some companies".
The widespread use of such types of GM soybeans in the Americas has caused problems with exports to some regions. GM crops require extensive certification before they can be legally imported into the European Union, where there is considerable supplier and consumer reluctance to use GM products for consumer or animal use. Difficulties with coexistence and subsequent traces of cross-contamination of non-GM stocks have caused shipments to be rejected and have put a premium on non-GM soy.
A 2006 United States Department of Agriculture report found the adoption of genetically engineered (GE) soy, corn and cotton reduced the amount of pesticides used overall, but did result in a slightly greater amount of herbicides used for soy specifically. The use of GE soy was also associated with greater conservation tillage, indirectly leading to better soil conservation, as well as increased income from off-farming sources due to the greater ease with which the crops can be managed. Though the overall estimated benefits of the adoption of GE soybeans in the United States was $310 million, the majority of this benefit was experienced by the companies selling the seeds (40%), followed by biotechnology firms (28%) and farmers (20%). The patent on glyphosate-tolerant soybeans expired in 2014, so benefits can be expected to shift.
Adverse effects
Soy allergy
Allergy to soy is common, and the food is listed with other foods that commonly cause allergy, such as milk, eggs, peanuts, tree nuts, shellfish. The problem has been reported among younger children, and the diagnosis of soy allergy is often based on symptoms reported by parents and results of skin tests or blood tests for allergy. Only a few reported studies have attempted to confirm allergy to soy by direct challenge with the food under controlled conditions. It is very difficult to give a reliable estimate of the true prevalence of soy allergy in the general population. To the extent that it does exist, soy allergy may cause cases of urticaria and angioedema, usually within minutes to hours of ingestion. In rare cases, true anaphylaxis may also occur. The reason for the discrepancy is likely that soy proteins, the causative factor in allergy, are far less potent at triggering allergy symptoms than the proteins of peanut and shellfish. An allergy test that is positive demonstrates that the immune system has formed IgE antibodies to soy proteins. However, this is only a factor when soy proteins reach the blood without being digested, in sufficient quantities to reach a threshold to provoke actual symptoms.
Soy can also trigger symptoms via food intolerance, a situation where no allergic mechanism can be proven. One scenario is seen in very young infants who have vomiting and diarrhoea when fed soy-based formula, which resolves when the formula is withdrawn. Older infants can suffer a more severe disorder with vomiting, diarrhoea that may be bloody, anemia, weight loss and failure to thrive. The most common cause of this unusual disorder is a sensitivity to cow's milk, but soy formulas can also be the trigger. The precise mechanism is unclear and it could be immunologic, although not through the IgE-type antibodies that have the leading role in urticaria and anaphylaxis. However, it is also self-limiting and will often disappear in the toddler years.
In the European Union, identifying the presence of soy either as an ingredient or unintended contaminant in packaged food is compulsory. Regulation (EC) 1169/2011 on food labelling lists 14 allergens, including soy, whose presence in packaged food must be clearly indicated on the label as part of the list of ingredients, using a distinctive typography (such as bold type or capital letters).
Thyroid function
One review noted that soy-based foods may inhibit absorption of thyroid hormone medications required for treatment of hypothyroidism. A 2015 scientific review by the European Food Safety Authority concluded that intake of isoflavones from supplements did not affect thyroid hormone levels in postmenopausal women.
Uses
Among the legumes, the soybean is valued for its high (38–45%) protein content as well as its high (approximately 20%) oil content. Soybeans are the most valuable agricultural export of the United States. Approximately 85% of the world's soybean crop is processed into soybean meal and soybean oil, the remainder processed in other ways or eaten whole.
Soybeans can be broadly classified as "vegetable" (garden) or field (oil) types. Vegetable types cook more easily, have a mild, nutty flavor, and better texture, are larger in size, higher in protein, and are lower in oil than field types. Tofu, soy milk, and soy sauce are among the top edible commodities made using soybeans. Producers prefer the higher protein cultivars bred from vegetable soybeans originally brought to the United States in the late 1930s. The "garden" cultivars are generally not suitable for mechanical combine harvesting because there is a tendency for the pods to shatter upon reaching maturity.
Nutrition
A 100-gram reference quantity of raw soybeans is 9% water, 30% carbohydrates, 20% total fat and 36% protein.
Peanuts are the only legumes with a higher fat content (48%) and energy value (2,385 kJ); they are lower in carbohydrates (21%), protein (25%) and dietary fiber (9%).
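As a rough, illustrative cross-check of these figures (not part of the source data), the energy content can be approximated from the macronutrient amounts using the general Atwater factors of 4 kcal/g for protein and carbohydrate and 9 kcal/g for fat; published food-composition values use food-specific factors, so the result is only approximate.

```python
# Illustrative sketch: approximate the energy of raw soybeans from the
# macronutrient figures quoted above, using general Atwater factors.
# These are rounded, general-purpose factors, not the food-specific
# factors used in official food-composition tables.

ATWATER_KCAL_PER_G = {"protein": 4, "carbohydrate": 4, "fat": 9}

def approx_energy_kcal(grams_per_100g: dict) -> float:
    """Return an approximate energy value in kcal per 100 g."""
    return sum(ATWATER_KCAL_PER_G[k] * g for k, g in grams_per_100g.items())

# Per 100 g of raw soybeans (figures from the text): 36 g protein,
# 30 g carbohydrate, 20 g fat.
soybean = {"protein": 36, "carbohydrate": 30, "fat": 20}
kcal = approx_energy_kcal(soybean)
print(f"~{kcal:.0f} kcal (~{kcal * 4.184:.0f} kJ) per 100 g")  # ~444 kcal, ~1,860 kJ
```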
Soybeans are a rich source of essential nutrients, providing in a 100-gram serving (raw, for reference) high contents of the Daily Value (DV) especially for protein (36% DV), dietary fiber (37%), iron (121%), manganese (120%), phosphorus (101%) and several B vitamins, including folate (94%) (table). High contents also exist for vitamin K, magnesium, zinc and potassium.
For human consumption, soybeans must be processed prior to consumption–either by cooking, roasting, or fermenting–to destroy the trypsin inhibitors (serine protease inhibitors). Raw soybeans, including the immature green form, are toxic to all monogastric animals.
Protein
Most soy protein is a relatively heat-stable storage protein. This heat stability enables soy food products requiring high temperature cooking, such as tofu, soy milk and textured vegetable protein (soy flour) to be made. Soy protein is essentially identical to the protein of other legume seeds and pulses.
Soy is a good source of protein for vegetarians and vegans or for people who want to reduce the amount of meat they eat, according to the US Food and Drug Administration.
Although soybeans have high protein content, soybeans also contain high levels of protease inhibitors, which can prevent digestion. Protease inhibitors are reduced by cooking soybeans, and are present in low levels in soy products such as tofu and soy milk.
The Protein Digestibility Corrected Amino Acid Score (PDCAAS) of soy protein is the nutritional equivalent of meat, eggs, and casein for human growth and health. Soybean protein isolate has a biological value of 74, whole soybeans 96, soybean milk 91, and eggs 97.
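The PDCAAS itself is computed by scoring a protein's most limiting essential amino acid against a reference amino acid pattern and multiplying by true digestibility, truncating the result at 1.0. The sketch below illustrates that calculation; the reference pattern uses the commonly cited FAO/WHO 1991 figures, while the soy amino acid profile and digestibility are hypothetical round numbers chosen only for illustration, not measured values from this article.

```python
# Minimal sketch of a PDCAAS calculation.
# PDCAAS = (mg of limiting essential amino acid per g of test protein /
#           mg of the same amino acid per g of the reference pattern)
#          x true fecal digestibility, truncated at 1.0.

REFERENCE_PATTERN_MG_PER_G = {  # FAO/WHO 1991 pattern (preschool children), mg/g protein
    "histidine": 19, "isoleucine": 28, "leucine": 66, "lysine": 58,
    "methionine+cysteine": 25, "phenylalanine+tyrosine": 63,
    "threonine": 34, "tryptophan": 11, "valine": 35,
}

def pdcaas(test_profile_mg_per_g: dict, true_digestibility: float) -> float:
    """Return the truncated PDCAAS for a protein given its amino acid profile."""
    # The amino acid score is set by the most limiting essential amino acid.
    score = min(test_profile_mg_per_g[aa] / REFERENCE_PATTERN_MG_PER_G[aa]
                for aa in REFERENCE_PATTERN_MG_PER_G)
    return min(1.0, score * true_digestibility)

# Hypothetical profile, roughly in the range reported for soy protein isolate.
soy_profile = {
    "histidine": 26, "isoleucine": 49, "leucine": 82, "lysine": 63,
    "methionine+cysteine": 26, "phenylalanine+tyrosine": 90,
    "threonine": 38, "tryptophan": 13, "valine": 50,
}
print(pdcaas(soy_profile, true_digestibility=0.95))  # close to 1.0, as expected for soy isolate
```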
All spermatophytes, except for the family of grasses and cereals (Poaceae), contain 7S (vicilin) and 11S (legumin) soy protein-like globulin storage proteins; or only one of these globulin proteins. S denotes Svedberg, sedimentation coefficients. Oats and rice are anomalous in that they also contain a majority of soybean-like protein. Cocoa, for example, contains the 7S globulin, which contributes to cocoa/chocolate taste and aroma, whereas coffee beans (coffee grounds) contain the 11S globulin responsible for coffee's aroma and flavor.
Vicilin and legumin proteins belong to the cupin superfamily, a large family of functionally diverse proteins that have a common origin and whose evolution can be followed from bacteria to eukaryotes including animals and higher plants.
2S albumins form a major group of homologous storage proteins in many dicot species and in some monocots but not in grasses (cereals). Soybeans contain a small but significant 2S storage protein. 2S albumin are grouped in the prolamin superfamily. Other allergenic proteins included in this 'superfamily' are the non-specific plant lipid transfer proteins, alpha amylase inhibitor, trypsin inhibitors, and prolamin storage proteins of cereals and grasses.
Peanuts, for instance, contain 20% 2S albumin but only 6% 7S globulin and 74% 11S. It is the high 2S albumin and low 7S globulin that is responsible for the relatively low lysine content of peanut protein compared to soy protein.
Carbohydrates
The principal soluble carbohydrates of mature soybeans are the disaccharide sucrose (range 2.5–8.2%), the trisaccharide raffinose (0.1–1.0%) composed of one sucrose molecule connected to one molecule of galactose, and the tetrasaccharide stachyose (1.4 to 4.1%) composed of one sucrose connected to two molecules of galactose. While the oligosaccharides raffinose and stachyose protect the viability of the soybean seed from desiccation (see above section on physical characteristics) they are not digestible sugars, so contribute to flatulence and abdominal discomfort in humans and other monogastric animals, comparable to the disaccharide trehalose. Undigested oligosaccharides are broken down in the intestine by native microbes, producing gases such as carbon dioxide, hydrogen, and methane.
Since soluble soy carbohydrates are found in the whey and are broken down during fermentation, soy concentrate, soy protein isolates, tofu, soy sauce, and sprouted soybeans are without flatus activity. On the other hand, there may be some beneficial effects to ingesting oligosaccharides such as raffinose and stachyose, namely, encouraging indigenous bifidobacteria in the colon against putrefactive bacteria.
The insoluble carbohydrates in soybeans consist of the complex polysaccharides cellulose, hemicellulose, and pectin. The majority of soybean carbohydrates can be classed as belonging to dietary fiber.
Fats
Raw soybeans are 20% fat, including saturated fat (3%), monounsaturated fat (4%) and polyunsaturated fat, mainly as linoleic acid (table).
Within soybean oil or the lipid portion of the seed is contained four phytosterols: stigmasterol, sitosterol, campesterol, and brassicasterol accounting for about 2.5% of the lipid fraction; and which can be converted into steroid hormones. Additionally soybeans are a rich source of sphingolipids.
Other constituents
Soy contains isoflavones—polyphenolic compounds, produced by legumes including peanuts and chickpeas. Isoflavones are closely related to flavonoids found in other plants, vegetables and flowers.
Soy contains coumestans, phytoestrogens that are also found in beans and split peas, with the best sources being alfalfa, clover, and soybean sprouts. Coumestrol, an isoflavone coumarin derivative, is the only coumestan in foods.
Saponins, a class of natural surfactants (soaps), are sterols that are present in small amounts in various plant foods, including soybeans, other legumes, and cereals, such as oats.
Comparison to other major staple foods
The following table shows the nutrient content of green soybean and other major staple foods, each in respective raw form on a dry weight basis to account for their different water contents. Raw soybeans, however, are not edible and cannot be digested; they must be sprouted, or prepared and cooked, for human consumption. In sprouted and cooked form, the relative nutritional and anti-nutritional contents of each of these grains differ markedly from those of the raw forms reported in this table. The nutritional value of soybean and each cooked staple depends on the processing and the method of cooking: boiling, frying, roasting, baking, etc.
Soybean oil
Soybean seed contains 18–19% oil. To extract soybean oil from seed, the soybeans are cracked, adjusted for moisture content, rolled into flakes, and solvent-extracted with commercial hexane. The oil is then refined, blended for different applications, and sometimes hydrogenated. Soybean oils, both liquid and partially hydrogenated, are exported abroad, sold as "vegetable oil," or end up in a wide variety of processed foods.
Soybean meal
Soybean meal, or soymeal, is the material remaining after solvent extraction of oil from soybean flakes, with a 50% soy protein content. The meal is 'toasted' (a misnomer because the heat treatment is with moist steam) and ground in a hammer mill. Ninety-seven percent of soybean meal production globally is used as livestock feed. Soybean meal is also used in some dog foods.
Livestock feed
One of the major uses of soybeans globally is as livestock feed, predominantly in the form of soybean meal. In the European Union, for example, though it does not make up most of the weight of livestock feed, soybean meal provides around 60% of the protein fed to livestock. In the United States, 70 percent of soybean production is used for animal feed, with poultry being the number one livestock sector of soybean consumption. Spring grasses are rich in omega-3 fatty acids, whereas soy is predominantly omega-6. The soybean hulls, which mainly consist of the outer coats of the beans removed before oil extraction, can also be fed to livestock, as can whole soybean seeds after processing.
Food for human consumption
In addition to their use in livestock feed, soybean products are widely used for human consumption. Common soybean products include soy sauce, soy milk, tofu, soy meal, soy flour, textured vegetable protein (TVP), soy curls, tempeh, soy lecithin and soybean oil. Soybeans may also be eaten with minimal processing, for example in the Japanese food edamame, in which immature soybeans are boiled whole in their pods and served with salt.
In China, Japan, Vietnam and Korea, soybean and soybean products are a standard part of the diet. Tofu (豆腐 dòufu) is thought to have originated in China, along with soy sauce and several varieties of soybean paste used as seasonings. Japanese foods made from soya include miso, nattō, kinako and edamame, as well as products made with tofu such as atsuage and aburaage. In China, whole dried soybeans are sold in supermarkets and used to cook various dishes, usually after rehydration by soaking in water; they are used in soups or as savory dishes. In Korean cuisine, soybean sprouts (콩나물 kongnamul) are used in a variety of dishes, and soybeans are the base ingredient in doenjang, cheonggukjang and ganjang. In Vietnam, soybeans are used to make soybean paste (tương) in the North, the most popular products being tương Bần, tương Nam Đàn and tương Cự Đà, used as a garnish for phở and gỏi cuốn dishes, as well as tofu, soy sauce, soy milk, and a sweet tofu soup.
Flour
Soy flour refers to soybeans ground finely enough to pass through a 100-mesh or smaller screen, with special care taken during desolventizing (not toasting) to minimize denaturation of the protein and retain a high protein dispersibility index, for uses such as the extrusion of textured vegetable protein. It is the starting material for the production of soy concentrate and soy protein isolate.
Soy flour can also be made by roasting the soybean, removing the coat (hull), and grinding it into flour. Soy flour is manufactured with different fat levels. Alternatively, raw soy flour omits the roasting step.
Defatted soy flour is obtained from solvent extracted flakes and contains less than 1% oil.
"Natural or full-fat soy flour is made from unextracted, dehulled beans and contains about 18% to 20% oil." Its high oil content requires the use of a specialized Alpine Fine Impact Mill to grind rather than the usual hammer mill. Full-fat soy flour has a lower protein concentration than defatted flour. Extruded full-fat soy flour, ground in an Alpine mill, can replace/extend eggs in baking and cooking. Full-fat soy flour is a component of the famous Cornell bread recipe.
Low-fat soy flour is made by adding some oil back into defatted soy flour. Fat levels range from 4.5% to 9%.
High-fat soy flour can also be produced by adding back soybean oil to defatted flour, usually at 15%.
Soy lecithin can be added (up to 15%) to soy flour to make lecithinated soy flour. It increases dispersibility and gives it emulsifying properties.
Soy flour has 50% protein and 5% fiber. It has higher levels of protein, thiamine, riboflavin, phosphorus, calcium, and iron than wheat flour. It does not contain gluten. As a result, yeast-raised breads made with soy flour are dense in texture. Among many uses, soy flour thickens sauces, prevents staling in baked food, and reduces oil absorption during frying. Baking food with soy flour gives it tenderness, moistness, a rich color, and a fine texture.
Soy grits are similar to soy flour, except the soybeans have been toasted and cracked into coarse pieces.
Kinako is a soy flour used in Japanese cuisine.
Soy-based infant formula
Soy-based infant formula (SBIF) is sometimes given to infants who are not being strictly breastfed; it can be useful for infants who are either allergic to pasteurized cow milk proteins or who are being fed a vegan diet. It is sold in powdered, ready-to-feed, and concentrated liquid forms.
Some reviews have expressed the opinion that more research is needed to determine what effect the phytoestrogens in soybeans may have on infants. Diverse studies have concluded there are no adverse effects in human growth, development, or reproduction as a result of the consumption of soy-based infant formula. One of these studies, published in the Journal of Nutrition, concludes that there are:
... no clinical concerns with respect to nutritional adequacy, sexual development, neurobehavioral development, immune development, or thyroid disease. SBIFs provide complete nutrition that adequately supports normal infant growth and development. FDA has accepted SBIFs as safe for use as the sole source of nutrition.
Meat and dairy alternatives and extenders
Soybeans can be processed to produce a texture and appearance similar to many other foods. For example, soybeans are the primary ingredient in many dairy product substitutes (e.g., soy milk, margarine, soy ice cream, soy yogurt, soy cheese, and soy cream cheese) and meat alternatives (e.g. veggie burgers). These substitutes are readily available in most supermarkets. Soy milk does not naturally contain significant amounts of digestible calcium. Many manufacturers of soy milk sell calcium-enriched products, as well.
Soy products also are used as a low-cost substitute for meat and poultry products. Food service, retail and institutional (primarily school lunch and correctional) facilities regularly use such "extended" products. The extension may result in diminished flavor, but fat and cholesterol are reduced. Vitamin and mineral fortification can be used to make soy products nutritionally equivalent to animal protein; the protein quality is already roughly equivalent. The soy-based meat substitute textured vegetable protein has been used for more than 50 years as a way of inexpensively extending ground beef without reducing its nutritional value.
Soy nut butter
The soybean is used to make a product called soy nut butter which is similar in texture to peanut butter.
Sweetened soybean
Sweet-boiled beans are popular in Japan and Korea; the sweet-boiled soybeans are called "Daizu no " in Japan and Kongjorim in Korea. Sweet-boiled beans are even used in sweetened buns.
The boiled and mashed edamame paste, called zunda, is used as one of the sweet bean pastes in Japanese confections.
Coffee substitute
Roasted and ground soybeans can be a caffeine-free substitute for coffee. After the soybeans are roasted and ground, they look similar to regular coffee beans or can be used as a powder similar to instant coffee, with the aroma and flavor of roasted soybeans.
Other products
Soybeans with black hulls are used in Chinese fermented black beans, douchi, not to be confused with black turtle beans.
Soybeans are also used in industrial products, including oils, soap, cosmetics, resins, plastics, inks, crayons, solvents, and clothing. Soybean oil is the primary source of biodiesel in the United States, accounting for 80% of domestic biodiesel production. Soybeans have also been used since 2001 as fermenting stock in the manufacture of a brand of vodka. In 1936, Ford Motor Company developed a method in which soybeans and fibers were rolled together, producing a soup that was then pressed into various parts for its cars, from the distributor cap to knobs on the dashboard. Ford also stated in public relations releases that in 1935 over five million acres (20,000 km2) were dedicated to growing soybeans in the United States.
Potential health benefits
Reducing risk of cancer
According to the American Cancer Society, "There is growing evidence that eating traditional soy foods such as tofu may lower the risk of cancers of the breast, prostate, or endometrium (lining of the uterus), and there is some evidence it may lower the risk of certain other cancers." There is insufficient research to indicate whether taking soy dietary supplements (e.g., as a pill or capsule) has any effect on health or cancer risk.
As of 2018, rigorous dietary clinical research in people with cancer has proved inconclusive.
Breast cancer
Although considerable research has examined the potential for soy consumption to lower the risk of breast cancer in women, as of 2016 there is insufficient evidence to reach a conclusion about a relationship between soy consumption and any effects on breast cancer. A 2011 meta-analysis stated: "Our study suggests soy isoflavones intake is associated with a significant reduced risk of breast cancer incidence in Asian populations, but not in Western populations."
Gastrointestinal and colorectal cancer
Reviews of preliminary clinical trials on people with colorectal or gastrointestinal cancer suggest that soy isoflavones may have a slight protective effect against such cancers.
Prostate cancer
A 2016 review concluded that "current evidence from observational studies and small clinical trials is not robust enough to understand whether soy protein or isoflavone supplements may help prevent or inhibit the progression of prostate cancer." A 2010 review showed that neither soy foods nor isoflavone supplements alter measures of bioavailable testosterone or estrogen concentrations in men. Soy consumption has been shown to have no effect on the levels and quality of sperm. Meta-analyses on the association between soy consumption and prostate cancer risk in men concluded that dietary soy may lower the risk of prostate cancer.
Cardiovascular health
The Food and Drug Administration (FDA) granted the following health claim for soy: "25 grams of soy protein a day, as part of a diet low in saturated fat and cholesterol, may reduce the risk of heart disease." One serving (1 cup or 240 mL) of soy milk, for instance, contains 6 or 7 grams of soy protein.
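Taking the two figures in this claim together gives a simple back-of-the-envelope estimate of how much soy milk would be needed to reach the 25-gram threshold; the snippet below is only an illustration and assumes the 6–7 g per 240 mL range quoted above (actual products vary).

```python
# Illustrative arithmetic only: servings of soy milk needed to reach the
# 25 g/day of soy protein cited in the FDA health claim, assuming the
# 6-7 g of protein per 240 mL serving quoted above.
TARGET_G = 25.0
for protein_per_serving in (6.0, 7.0):
    servings = TARGET_G / protein_per_serving
    print(f"{protein_per_serving:.0f} g/serving -> {servings:.1f} servings "
          f"(about {servings * 240:.0f} mL of soy milk per day)")
# Output: roughly 3.6-4.2 servings, i.e. about 860-1,000 mL per day.
```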
An American Heart Association (AHA) review of a decade-long study of soy protein benefits did not recommend isoflavone supplementation. The review panel also found that soy isoflavones have not been shown to reduce post-menopausal "hot flashes", and that the efficacy and safety of isoflavones in helping to prevent cancers of the breast, uterus or prostate is in question. The AHA concluded that "many soy products should be beneficial to cardiovascular and overall health because of their high content of polyunsaturated fats, fiber, vitamins, and minerals and low content of saturated fat". Other studies found that soy protein consumption could lower the concentration of low-density lipoprotein (LDL), which transports fats through the extracellular fluid to cells.
Research by constituent
Lignans
Plant lignans, which are associated with high-fiber foods such as cereal brans and beans, are the principal precursors of mammalian lignans, which have an ability to bind to human estrogen receptor sites. Soybeans are a significant source of the mammalian lignan precursor secoisolariciresinol, containing 13–273 μg/100 g dry weight.
Phytochemicals
Soybeans and processed soy foods are among the richest foods in total phytoestrogens (wet basis per 100 g), which are present primarily in the form of the isoflavones, daidzein and genistein. Because most naturally occurring phytoestrogens act as selective estrogen receptor modulators, or SERMs, which do not necessarily act as direct agonists of estrogen receptors, normal consumption of foods that contain these phytoestrogens should not provide sufficient amounts to elicit a physiological response in humans. The major product of daidzein microbial metabolism is equol. Only 33% of Western Europeans have a microbiome that produces equol, compared to 50–55% of Asians.
Soy isoflavones—polyphenolic compounds that are also produced by other legumes like peanuts and chickpeas—are under preliminary research. As of 2016, no cause-and-effect relationship has been shown in clinical research to indicate that soy isoflavones lower the risk of cardiovascular diseases.
Phytic acid
Soybeans contain phytic acid, which may act as a chelating agent and inhibit mineral absorption, especially for diets already low in minerals.
In culture
Although observations of soy consumption inducing gynecomastia in men are not conclusive, the pejorative term "soy boy" has emerged to describe young men who are perceived as emasculated or as having feminine traits.
Futures
Soybean futures are traded on the Chicago Board of Trade and have delivery dates in January (F), March (H), May (K), July (N), August (Q), September (U), and November (X).
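The letters in parentheses are the standard futures month codes (F = January through Z = December). As an illustration of how those codes are used in practice, the sketch below builds a conventional contract symbol; the "ZS" product root is the commonly used CME Globex root for soybean futures and is an assumption here, not something stated in this article.

```python
# Standard futures month codes and the CBOT soybean delivery months listed above.
MONTH_CODES = {"F": 1, "G": 2, "H": 3, "J": 4, "K": 5, "M": 6,
               "N": 7, "Q": 8, "U": 9, "V": 10, "X": 11, "Z": 12}
SOYBEAN_MONTHS = ["F", "H", "K", "N", "Q", "U", "X"]  # Jan, Mar, May, Jul, Aug, Sep, Nov

def contract_symbol(month_code: str, year: int, root: str = "ZS") -> str:
    """Build a conventional symbol such as 'ZSX25' for the November 2025 contract.

    The 'ZS' root is an assumed example, not part of the source text.
    """
    if month_code not in SOYBEAN_MONTHS:
        raise ValueError(f"{month_code} is not a listed soybean delivery month")
    return f"{root}{month_code}{year % 100:02d}"

print(contract_symbol("X", 2025))  # ZSX25
```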
They are also traded on other commodity futures exchanges under different contract specifications:
SAFEX: The South African Futures Exchange
DC: Dalian Commodity Exchange
ODE: Osaka Dojima Commodity Exchange (formerly Kansai Commodities Exchange, KEX) in Japan
NCDEX: National Commodity and Derivatives Exchange, India.
ROFEX: Rosario Grain Exchange in Argentina
| Biology and health sciences | Food and drink | null |
62798 | https://en.wikipedia.org/wiki/Fabaceae | Fabaceae | The Fabaceae () or Leguminosae, commonly known as the legume, pea, or bean family, are a large and agriculturally important family of flowering plants. It includes trees, shrubs, and perennial or annual herbaceous plants, which are easily recognized by their fruit (legume) and their compound, stipulate leaves. The family is widely distributed, and is the third-largest land plant family in number of species, behind only the Orchidaceae and Asteraceae, with about 765 genera and nearly 20,000 known species.
The five largest genera of the family are Astragalus (over 3,000 species), Acacia (over 1,000 species), Indigofera (around 700 species), Crotalaria (around 700 species), and Mimosa (around 400 species), which constitute about a quarter of all legume species. The c. 19,000 known legume species amount to about 7% of flowering plant species. Fabaceae is the most common family found in tropical rainforests and dry forests of the Americas and Africa.
Recent molecular and morphological evidence supports the fact that the Fabaceae is a single monophyletic family. This conclusion has been supported not only by the degree of interrelation shown by different groups within the family compared with that found among the Leguminosae and their closest relations, but also by all the recent phylogenetic studies based on DNA sequences. These studies confirm that the Fabaceae are a monophyletic group that is closely related to the families Polygalaceae, Surianaceae and Quillajaceae and that they belong to the order Fabales.
Along with the cereals, some fruits and tropical roots, a number of Leguminosae have been a staple human food for millennia and their use is closely related to human evolution.
The family Fabaceae includes a number of plants that are common in agriculture including Glycine max (soybean), Phaseolus (beans), Pisum sativum (pea), Cicer arietinum (chickpeas), Vicia faba (broad bean), Medicago sativa (alfalfa), Arachis hypogaea (peanut), Ceratonia siliqua (carob), Trigonella foenum-graecum (fenugreek), and Glycyrrhiza glabra (liquorice). A number of species are also weedy pests in different parts of the world, including Cytisus scoparius (broom), Robinia pseudoacacia (black locust), Ulex europaeus (gorse), Pueraria montana (kudzu), and a number of Lupinus species.
Etymology
The name 'Fabaceae' comes from the defunct genus Faba, now included in Vicia. The term "faba" comes from Latin, and appears to simply mean "bean". Leguminosae is an older name still considered valid, and refers to the fruit of these plants, which are called legumes.
Description
Fabaceae range in habit from giant trees (like Koompassia excelsa) to small annual herbs, with the majority being herbaceous perennials. Plants have indeterminate inflorescences, which are sometimes reduced to a single flower. The flowers have a short hypanthium and a single carpel with a short gynophore, and after fertilization produce fruits that are legumes.
Growth habit
The Fabaceae have a wide variety of growth forms, including trees, shrubs, herbaceous plants, and even vines or lianas. The herbaceous plants can be annuals, biennials, or perennials, without basal or terminal leaf aggregations. Many Legumes have tendrils. They are upright plants, epiphytes, or vines. The latter support themselves by means of shoots that twist around a support or through cauline or foliar tendrils. Plants can be heliophytes, mesophytes, or xerophytes.
Leaves
The leaves are usually alternate and compound. Most often they are even- or odd-pinnately compound (e.g. Caragana and Robinia respectively), often trifoliate (e.g. Trifolium, Medicago) and rarely palmately compound (e.g. Lupinus), in the Mimosoideae and the Caesalpinioideae commonly bipinnate (e.g. Acacia, Mimosa). They always have stipules, which can be leaf-like (e.g. Pisum), thornlike (e.g. Robinia) or be rather inconspicuous. Leaf margins are entire or, occasionally, serrate. Both the leaves and the leaflets often have wrinkled pulvini to permit nastic movements. In some species, leaflets have evolved into tendrils (e.g. Vicia).
Many species have leaves with structures that attract ants which protect the plant from herbivore insects (a form of mutualism). Extrafloral nectaries are common among the Mimosoideae and the Caesalpinioideae, and are also found in some Faboideae (e.g. Vicia sativa). In some Acacia, the modified hollow stipules are inhabited by ants and are known as domatia.
Roots
Many Fabaceae host bacteria in their roots within structures called root nodules. These bacteria, known as rhizobia, have the ability to take nitrogen gas (N2) out of the air and convert it to a form of nitrogen that is usable to the host plant (NO3− or NH3). This process is called nitrogen fixation. The legume, acting as a host, and rhizobia, acting as a provider of usable nitrate, form a symbiotic relationship. Members of the Phaseoleae genus Apios form tubers, which can be edible.
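For reference (this equation is not stated in the text above), the overall reaction catalysed by the bacterial nitrogenase enzyme complex is conventionally summarised by the textbook stoichiometry below; the exact ATP requirement varies with conditions.

```latex
\mathrm{N_2} + 8\,\mathrm{H^+} + 8\,e^- + 16\,\mathrm{ATP} \;\longrightarrow\; 2\,\mathrm{NH_3} + \mathrm{H_2} + 16\,\mathrm{ADP} + 16\,\mathrm{P_i}
```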
Flowers
The flowers often have five generally fused sepals and five free petals. They are generally hermaphroditic and have a short hypanthium, usually cup-shaped. There are normally ten stamens and one elongated superior ovary, with a curved style. They are usually arranged in indeterminate inflorescences. Fabaceae are typically entomophilous plants (i.e. they are pollinated by insects), and the flowers are usually showy to attract pollinators.
In the Caesalpinioideae, the flowers are often zygomorphic, as in Cercis, or nearly symmetrical with five equal petals, as in Bauhinia. The upper petal is the innermost one, unlike in the Faboideae. Some species, like some in the genus Senna, have asymmetric flowers, with one of the lower petals larger than the opposing one, and the style bent to one side. The calyx, corolla, or stamens can be showy in this group.
In the Mimosoideae, the flowers are actinomorphic and arranged in globose inflorescences. The petals are small and the stamens, which can be more than just 10, have long, coloured filaments, which are the showiest part of the flower. All of the flowers in an inflorescence open at once.
In the Faboideae, the flowers are zygomorphic, and have a specialized structure. The upper petal, called the banner or standard, is large and envelops the rest of the petals in bud, often reflexing when the flower blooms. The two adjacent petals, the wings, surround the two bottom petals. The two bottom petals are fused together at the apex (remaining free at the base), forming a boat-like structure called the keel. The stamens are always ten in number, and their filaments can be fused in various configurations, often in a group of nine stamens plus one separate stamen. Various genes in the CYCLOIDEA (CYC)/DICHOTOMA (DICH) family are expressed in the upper (also called dorsal or adaxial) petal; in some species, such as Cadia, these genes are expressed throughout the flower, producing a radially symmetrical flower.
Fruit
The ovary most typically develops into a legume. A legume is a simple dry fruit that usually dehisces (opens along a seam) on two sides. A common name for this type of fruit is a "pod", although that can also be applied to a few other fruit types. A few species have evolved samarae, loments, follicles, indehiscent legumes, achenes, drupes, and berries from the basic legume fruit.
Physiology and biochemistry
The Fabaceae are rarely cyanogenic. Where they are, the cyanogenic compounds are derived from tyrosine, phenylalanine or leucine. They frequently contain alkaloids. Proanthocyanidins can be present as either cyanidin or delphinidin, or both at the same time. Flavonoids such as kaempferol, quercetin and myricetin are often present. Ellagic acid has never been found in any of the genera or species analysed. Sugars are transported within the plants in the form of sucrose. C3 photosynthesis has been found in a wide variety of genera. The family has also evolved a unique chemistry. Many legumes contain toxic and indigestible substances, antinutrients, which may be removed through various processing methods. Pterocarpans are a class of molecules (derivatives of isoflavonoids) found only in the Fabaceae. Forisome proteins are found in the sieve tubes of Fabaceae; uniquely, they are not dependent on ATP.
Evolution, phylogeny and taxonomy
Evolution
The order Fabales contains around 7.3% of eudicot species and the greatest part of this diversity is contained in just one of the four families that the order contains: Fabaceae. This clade also includes the families Polygalaceae, Surianaceae and Quillajaceae and its origins date back 94 to 89 million years, although it started its diversification 79 to 74 million years ago. The Fabaceae diversified during the Paleogene to become a ubiquitous part of the modern earth's biota, along with many other families belonging to the flowering plants.
The Fabaceae have an abundant and diverse fossil record, especially for the Tertiary period. Fossils of flowers, fruit, leaves, wood and pollen from this period have been found in numerous locations. The earliest fossils that can be definitively assigned to the Fabaceae appeared in the early Palaeocene (approximately 65 million years ago). Representatives of the three sub-families traditionally recognised as members of the Fabaceae – Caesalpinioideae, Papilionoideae and Mimosoideae – as well as members of the large clades within these sub-families, such as the genistoids, have been found in later periods, starting between 55 and 50 million years ago. In fact, a wide variety of taxa representing the main lineages in the Fabaceae have been found in the fossil record dating from the middle to the late Eocene, suggesting that the majority of the modern Fabaceae groups were already present and that a broad diversification occurred during this period. Therefore, the Fabaceae started their diversification approximately 60 million years ago and the most important clades separated 50 million years ago. The age of the main Caesalpinioideae clades has been estimated at between 56 and 34 million years, and the basal group of the Mimosoideae at 44 ± 2.6 million years. The division between Mimosoideae and Faboideae is dated as occurring between 59 and 34 million years ago, and the basal group of the Faboideae at 58.6 ± 0.2 million years ago. It has been possible to date the divergence of some of the groups within the Faboideae, even though diversification within each genus was relatively recent. For instance, Astragalus separated from Oxytropis 16 to 12 million years ago. In addition, the separation of the aneuploid species of Neoastragalus started 4 million years ago. Inga, another legume genus with approximately 350 species, seems to have diverged in the last 2 million years.
It has been suggested, based on fossil and phylogenetic evidence, that legumes originally evolved in arid and/or semi-arid regions along the Tethys seaway during the Palaeogene Period. However, others contend that Africa (or even the Americas) cannot yet be ruled out as the origin of the family.
The current hypothesis about the evolution of the genes needed for nodulation is that they were recruited from other pathways after a polyploidy event. Several different pathways have been implicated as donating duplicated genes to the pathways needed for nodulation. The main donors to the pathway were the genes associated with arbuscular mycorrhiza symbiosis, the genes for pollen tube formation and the haemoglobin genes. One of the main genes shown to be shared between the arbuscular mycorrhiza pathway and the nodulation pathway is SYMRK, which is involved in plant-bacterial recognition. Pollen tube growth is similar to infection thread development in that infection threads grow in a polar manner, much as a pollen tube grows towards the ovules. Both pathways include the same type of enzymes, pectin-degrading cell wall enzymes. The enzymes needed to reduce nitrogen, nitrogenases, require a substantial input of ATP but at the same time are sensitive to free oxygen. To meet the requirements of this paradoxical situation, the plants express a type of haemoglobin called leghaemoglobin that is believed to have been recruited after a duplication event. These three genetic pathways are believed to have contributed genes, through duplication events, that were then recruited to work in nodulation.
Phylogeny and taxonomy
Phylogeny
The phylogeny of the legumes has been the object of many studies by research groups from around the world. These studies have used morphology, DNA data (the chloroplast intron trnL, the chloroplast genes rbcL and matK, or the ribosomal spacers ITS) and cladistic analysis in order to investigate the relationships between the family's different lineages. Fabaceae is consistently recovered as monophyletic. The studies further confirmed that the traditional subfamilies Mimosoideae and Papilionoideae were each monophyletic but both were nested within the paraphyletic subfamily Caesalpinioideae. All the different approaches yielded similar results regarding the relationships between the family's main clades. Following extensive discussion in the legume phylogenetics community, the Legume Phylogeny Working Group reclassified Fabaceae into six subfamilies, which necessitated the segregation of four new subfamilies from Caesalpinioideae and merging Caesapinioideae sensu stricto with the former subfamily Mimosoideae. The exact branching order of the different subfamilies is still unresolved.
Taxonomy
The Fabaceae are placed in the order Fabales according to most taxonomic systems, including the APG III system. The family now includes six subfamilies:
Cercidoideae: 12 genera and ~335 species. Mainly tropical. Bauhinia, Cercis.
Detarioideae: 84 genera and ~760 species. Mainly tropical. Amherstia, Detarium, Tamarindus.
Duparquetioideae: 1 genus and 1 species. West and Central Africa. Duparquetia.
Dialioideae: 17 genera and ~85 species. Widespread throughout the tropics. Dialium.
Caesalpinioideae: 148 genera and ~4400 species. Pantropical. Caesalpinia, Senna, Mimosa, Acacia. Includes the former subfamily Mimosoideae (80 genera and ~3200 species; mostly tropical and warm temperate Asia and America).
Faboideae (Papilionoideae): 503 genera and ~14,000 species. Cosmopolitan. Astragalus, Lupinus, Pisum.
Ecology
Distribution and habitat
The Fabaceae have an essentially worldwide distribution, being found everywhere except Antarctica and the high Arctic. The trees are often found in tropical regions, while the herbaceous plants and shrubs are predominant outside the tropics.
Biological nitrogen fixation
Biological nitrogen fixation (BNF, performed by the organisms called diazotrophs) is a very old process that probably originated in the Archean eon when the primitive atmosphere lacked oxygen. It is only carried out by Euryarchaeota and just 6 of the more than 50 phyla of bacteria. Some of these lineages co-evolved together with the flowering plants establishing the molecular basis of a mutually beneficial symbiotic relationship. BNF is carried out in nodules that are mainly located in the root cortex, although they are occasionally located in the stem as in Sesbania rostrata. The spermatophytes that co-evolved with actinorhizal diazotrophs (Frankia) or with rhizobia to establish their symbiotic relationship belong to 11 families contained within the Rosidae clade (as established by the gene molecular phylogeny of rbcL, a gene coding for part of the RuBisCO enzyme in the chloroplast). This grouping indicates that the predisposition for forming nodules probably only arose once in flowering plants and that it can be considered as an ancestral characteristic that has been conserved or lost in certain lineages. However, such a wide distribution of families and genera within this lineage indicates that nodulation had multiple origins. Of the 10 families within the Rosidae, 8 have nodules formed by actinomyces (Betulaceae, Casuarinaceae, Coriariaceae, Datiscaceae, Elaeagnaceae, Myricaceae, Rhamnaceae and Rosaceae), and the two remaining families, Ulmaceae and Fabaceae have nodules formed by rhizobia.
The rhizobia and their hosts must be able to recognize each other for nodule formation to commence. Rhizobia are specific to particular host species although a rhizobia species may often infect more than one host species. This means that one plant species may be infected by more than one species of bacteria. For example, nodules in Acacia senegal can contain seven species of rhizobia belonging to three different genera. The most distinctive characteristics that allow rhizobia to be distinguished apart are the rapidity of their growth and the type of root nodule that they form with their host. Root nodules can be classified as being either indeterminate, cylindrical and often branched, and determinate, spherical with prominent lenticels. Indeterminate nodules are characteristic of legumes from temperate climates, while determinate nodules are commonly found in species from tropical or subtropical climates.
Nodule formation is common throughout the Fabaceae. It is found in the majority of its members that only form an association with rhizobia, which in turn form an exclusive symbiosis with the Fabaceae (with the exception of Parasponia, the only genus of the 18 Ulmaceae genera that is capable of forming nodules). Nodule formation is present in all the Fabaceae sub-families, although it is less common in the Caesalpinioideae. All types of nodule formation are present in the subfamily Papilionoideae: indeterminate (with the meristem retained), determinate (without meristem) and the type included in Aeschynomene. The latter two are thought to be the most modern and specialised type of nodule as they are only present in some lines of the subfamily Papilionoideae. Even though nodule formation is common in the two monophyletic subfamilies Papilionoideae and Mimosoideae they also contain species that do not form nodules. The presence or absence of nodule-forming species within the three sub-families indicates that nodule formation has arisen several times during the evolution of the Fabaceae and that this ability has been lost in some lineages. For example, within the genus Acacia, a member of the Mimosoideae, A. pentagona does not form nodules, while other species of the same genus readily form nodules, as is the case for Acacia senegal, which forms both rapidly and slow growing rhizobial nodules.
Chemical ecology
A large number of species within many genera of leguminous plants, e.g. Astragalus, Coronilla, Hippocrepis, Indigofera, Lotus, Securigera and Scorpiurus, produce chemicals that derive from the compound 3-nitropropanoic acid (3-NPA, beta-nitropropionic acid). The free acid 3-NPA is an irreversible inhibitor of mitochondrial respiration, and thus the compound inhibits the tricarboxylic acid cycle. This inhibition caused by 3-NPA is especially toxic to nerve cells and represents a very general toxic mechanism, suggesting a profound ecological importance given the large number of species producing this compound and its derivatives. A second and closely related class of secondary metabolites that occur in many species of leguminous plants is defined by isoxazolin-5-one derivatives. These compounds occur in particular together with 3-NPA and related derivatives in the same species, as found in Astragalus canadensis and Astragalus collinus. 3-NPA and isoxazolin-5-one derivatives also occur in many species of leaf beetles (see defense in insects).
Economic and cultural importance
Legumes are economically and culturally important plants due to their extraordinary diversity and abundance, the wide variety of edible vegetables they represent and due to the variety of uses they can be put to: in horticulture and agriculture, as a food, for the compounds they contain that have medicinal uses and for the oil and fats they contain that have a variety of uses.
Food and forage
The history of legumes is tied in closely with that of human civilization, appearing early in Asia, the Americas (the common bean, several varieties) and Europe (broad beans) by 6,000 BCE, where they became a staple, essential as a source of protein.
Their ability to fix atmospheric nitrogen reduces fertilizer costs for farmers and gardeners who grow legumes, and means that legumes can be used in a crop rotation to replenish soil that has been depleted of nitrogen. Legume seeds and foliage have a comparatively higher protein content than non-legume materials, due to the additional nitrogen that legumes receive through the process. Legumes are commonly used as natural fertilizers. Some legume species perform hydraulic lift, which makes them ideal for intercropping.
Farmed legumes can belong to numerous classes, including forage, grain, blooms, pharmaceutical/industrial, fallow/green manure and timber species, with most commercially farmed species filling two or more roles simultaneously.
There are two broad types of forage legumes. Some, like alfalfa, clover, vetch, and Arachis, are sown in pasture and grazed by livestock. Other forage legumes, such as Leucaena or Albizia, are woody shrub or tree species that are either broken down by livestock or regularly cut by humans to provide fodder.
Grain legumes are cultivated for their seeds, and are also called pulses. The seeds are used for human and animal consumption or for the production of oils for industrial uses. Grain legumes include both herbaceous plants like beans, lentils, lupins, peas and peanuts, and trees such as carob, mesquite and tamarind.
Lathyrus tuberosus, once extensively cultivated in Europe, forms tubers used for human consumption.
Bloom legume species include species such as lupin, which are farmed commercially for their blooms, and thus are popular in gardens worldwide. Laburnum, Robinia, Gleditsia (honey locust), Acacia, Mimosa, and Delonix are ornamental trees and shrubs.
Industrial farmed legumes include Indigofera, cultivated for the production of indigo, Acacia, for gum arabic, and Derris, for the insecticide action of rotenone, a compound it produces.
Fallow or green manure legume species are cultivated to be tilled back into the soil to exploit the high nitrogen levels found in most legumes. Numerous legumes are farmed for this purpose, including Leucaena, Cyamopsis and Sesbania.
Various legume species are farmed for timber production worldwide, including numerous Acacia species, Dalbergia species, and Castanospermum australe.
Melliferous plants offer nectar to bees and other insects to encourage them to carry pollen from the flowers of one plant to others thereby ensuring pollination. Many Fabaceae species are important sources of pollen and nectar for bees, including for honey production in the beekeeping industry. Example Fabaceae such as alfalfa, and various clovers including white clover and sweet clover, are important sources of nectar and honey for the Western honey bee.
Industrial uses
Natural gums
Natural gums are vegetable exudates that are released as the result of damage to the plant such as that resulting from the attack of an insect or a natural or artificial cut. These exudates contain heterogeneous polysaccharides formed of different sugars and usually containing uronic acids. They form viscous colloidal solutions. There are different species that produce gums. The most important of these species belong to the Fabaceae. They are widely used in the pharmaceutical, cosmetic, food, and textile sectors. They also have interesting therapeutic properties; for example gum arabic is antitussive and anti-inflammatory. The most well known gums are tragacanth (Astragalus gummifer), gum arabic (Acacia senegal) and guar gum (Cyamopsis tetragonoloba).
Dyes
Several species of Fabaceae are used to produce dyes. The heartwood of logwood, Haematoxylon campechianum, is used to produce red and purple dyes. The histological stain called haematoxylin is produced from this species. The wood of the Brazilwood tree (Caesalpinia echinata) is also used to produce a red or purple dye. The Madras thorn (Pithecellobium dulce) has reddish fruit that are used to produce a yellow dye. Indigo dye is extracted from the indigo plant Indigofera tinctoria that is native to Asia. In Central and South America dyes are produced from two species in the same genus: indigo and Maya blue from Indigofera suffruticosa and Natal indigo from Indigofera arrecta. Yellow dyes are extracted from Butea monosperma, commonly called flame of the forest and from dyer's greenweed, (Genista tinctoria).
Ornamentals
Legumes have been used as ornamental plants throughout the world for many centuries. Their vast diversity of heights, shapes, foliage and flower colour means that this family is commonly used in the design and planting of everything from small gardens to large parks. The following is a list of the main ornamental legume species, listed by subfamily.
Subfamily Caesalpinioideae: Bauhinia forficata, Caesalpinia gilliesii, Caesalpinia spinosa, Ceratonia siliqua, Cercis siliquastrum, Gleditsia triacanthos, Gymnocladus dioica, Parkinsonia aculeata, Senna multiglandulosa.
Subfamily Mimosoideae: Acacia caven, Acacia cultriformis, Acacia dealbata, Acacia karroo, Acacia longifolia, Acacia melanoxylon, Acacia paradoxa, Acacia retinodes, Acacia saligna, Acacia verticillata, Acacia visco, Albizzia julibrissin, Calliandra tweediei, Paraserianthes lophantha, Prosopis chilensis.
Subfamily Faboideae: Clianthus puniceus, Cytisus scoparius, Erythrina crista-galli, Erythrina falcata, Laburnum anagyroides, Lotus peliorhynchus, Lupinus arboreus, Lupinus polyphyllus, Otholobium glandulosum, Retama monosperma, Robinia hispida, Robinia luxurians, Robinia pseudoacacia, Sophora japonica, Sophora macnabiana, Sophora macrocarpa, Spartium junceum, Teline monspessulana, Tipuana tipu, Wisteria sinensis.
Emblematic Fabaceae
The cockspur coral tree (Erythrina crista-galli), is the national flower of Argentina and Uruguay.
The elephant ear tree (Enterolobium cyclocarpum) is the national tree of Costa Rica, by Executive Order of 31 August 1959.
The brazilwood tree (Caesalpinia echinata) has been the national tree of Brazil since 1978.
The golden wattle Acacia pycnantha is Australia's national flower.
The Hong Kong orchid tree Bauhinia blakeana is the national flower of Hong Kong.
Image gallery
| Biology and health sciences | Fabales | null |
62889 | https://en.wikipedia.org/wiki/Night%20monkey | Night monkey | Night monkeys, also known as owl monkeys or douroucoulis (), are nocturnal New World monkeys of the genus Aotus, the only member of the family Aotidae (). The genus comprises eleven species which are found across Panama and much of South America in primary and secondary forests, tropical rainforests and cloud forests up to . Night monkeys have large eyes which improve their vision at night, while their ears are mostly hidden, giving them their name Aotus, meaning "earless".
Night monkeys are the only truly nocturnal monkeys with the exception of some cathemeral populations of Azara's night monkey, who have irregular bursts of activity during day and night. They have a varied repertoire of vocalisations and live in small family groups of a mated pair and their immature offspring. Night monkeys have monochromatic vision which improves their ability to detect visual cues at night.
Night monkeys are threatened by habitat loss, the pet trade, hunting for bushmeat, and by biomedical research. They constitute one of the few monkey species affected by the often deadly human malaria protozoan Plasmodium falciparum and are therefore used as experimental subjects in malaria research. The Peruvian night monkey is classified by the International Union for Conservation of Nature (IUCN) as an Endangered species, while four are Vulnerable species, four are Least-concern species, and two are data deficient.
Taxonomy
Until 1983, all night monkeys were placed into only one (A. lemurinus) or two species (A. lemurinus and A. azarae). Chromosome variability showed that there was more than one species in the genus, and Hershkovitz (1983) used morphological and karyological evidence to propose nine species, one of which is now recognised as a junior synonym. He split Aotus into two groups: a northern, gray-necked group (A. lemurinus, A. hershkovitzi, A. trivirgatus and A. vociferans) and a southern, red-necked group (A. miconax, A. nancymaae, A. nigriceps and A. azarae). Arguably, the taxa otherwise considered subspecies of A. lemurinus – brumbacki, griseimembra and zonalis – should be considered separate species, whereas A. hershkovitzi arguably is a junior synonym of A. lemurinus. A new species from the gray-necked group was recently described as A. jorgehernandezi. As is the case with some other splits in this genus, an essential part of the argument for recognizing this new species was differences in the chromosomes. Chromosome evidence has also been used as an argument for merging "species", as was the case for considering infulatus a subspecies of A. azarae rather than a separate species. One extinct species is known from the fossil record.
Classification
Family Aotidae
Genus Aotus
Aotus lemurinus (gray-necked) group:
Gray-bellied night monkey, Aotus lemurinus
Panamanian night monkey, Aotus zonalis
Gray-handed night monkey, Aotus griseimembra
Hernández-Camacho's night monkey, Aotus jorgehernandezi
Brumback's night monkey, Aotus brumbacki
Three-striped night monkey, Aotus trivirgatus
Spix's night monkey, Aotus vociferans
Aotus azarae (red-necked) group:
Azara's night monkey, Aotus azarae
Peruvian night monkey, Aotus miconax
Nancy Ma's night monkey, Aotus nancymaae
Black-headed night monkey, Aotus nigriceps
Group Incertae sedis
Aotus dindensis
Physical characteristics
Night monkeys have large brown eyes; the size improves their nocturnal vision increasing their ability to be active at night. They are sometimes said to lack a tapetum lucidum, the reflective layer behind the retina possessed by many nocturnal animals. Other sources say they have a tapetum lucidum composed of collagen fibrils. At any rate, night monkeys lack the tapetum lucidum composed of riboflavin crystals possessed by lemurs and other strepsirrhines, which is an indication that their nocturnality is a secondary adaption evolved from ancestrally diurnal primates.
Their ears are rather difficult to see; this is why their genus name, Aotus (meaning "earless") was chosen. There is little data on the weights of wild night monkeys. From the figures that have been collected, it appears that males and females are similar in weight; the heaviest species is Azara's night monkey at around , and the lightest is Brumback's night monkey, which weighs between . The male is slightly taller than the female, measuring , respectively.
Ecology
Night monkeys can be found in Panama, Colombia, Ecuador, Peru, Brazil, Paraguay, Argentina, Bolivia, and Venezuela. The species that live at higher elevations tend to have thicker fur than the monkeys at sea level. Night monkeys can live in forests undisturbed by humans (primary forest) as well as in forests that are recovering from human logging efforts (secondary forest).
Distribution
A primary distinction between red-necked and gray-necked night monkeys is spatial distribution. Gray-necked night monkeys (the Aotus lemurinus group) are found north of the Amazon River, while the red-necked group (the Aotus azarae group) is localized south of the Amazon River. Red-necked night monkeys are found throughout various regions of the Amazon rainforest of South America, with some variation occurring between the four species. Nancy Ma's night monkey occurs in both flooded and unflooded tropical rainforest regions of Peru, preferring moist swamp and mountainous areas. This species has been observed nesting in regions of the Andes and has recently been introduced to Colombia, likely as a result of post-research release into the community. The black-headed night monkey is also found mainly in the Peruvian Amazon (central and upper Amazon), but its range extends throughout Brazil and Bolivia to the base of the Andes mountain chain. Night monkeys such as the black-headed night monkey generally inhabit cloud forests: areas with a consistent presence of low clouds and high mist and moisture content, which allows lush and rich vegetation to grow year round, providing excellent food and lodging sources. The Peruvian night monkey, like Nancy Ma's night monkey, is endemic to the Peruvian Andes, but it is found at a higher elevation above sea level and therefore exploits different niches of this habitat. The distribution of A. azarae extends further towards the Atlantic Ocean, spanning Argentina, Bolivia and the drier, south-western regions of Paraguay; however, unlike the other red-necked night monkey species, it is not endemic to Brazil.
Sleep sites
During the daylight hours, night monkeys rest in shaded tree areas. They have been observed using four different types of tree nests: holes formed in the trunks of trees; concave sections of branches surrounded by creepers and epiphytes; dense areas of epiphyte, climber and vine growth; and areas of dense foliage. These sleeping sites provide protection from environmental stressors such as heavy rain, sunlight and heat. Sleeping sites are therefore carefully chosen based upon tree age, density of trees, availability of space for the group, ability of the site to provide protection, ease of access to the site and availability of the site with respect to daily routines. While night monkeys are an arboreal species, nests have not been observed in the higher strata of the rainforest ecosystem; rather, a higher density of nests was recorded at low to mid vegetation levels. Night monkeys are a territorial species; territories are defended against conspecifics through the use of threatening and agonistic behaviours. Ranges of different night monkey species often overlap, resulting in interspecific aggression such as vocalizing and chasing, which may last up to an hour.
Diet
Night monkeys are primarily frugivorous, as fruits are easily distinguished through the use of olfactory cues, but leaf and insect consumption has also been observed in the cathemeral night monkey species A. azarae. A study conducted by Wolovich et al. indicated that juveniles and females were much better at catching both crawling and flying insects than adult males. In general, the technique used by night monkeys to capture insects is to flatten the prey insect against a tree branch with the palm of the hand and then consume it. During the winter months, or when food sources are reduced, night monkeys have also been observed foraging on flowers such as Tabebuia heptaphylla; however, these do not represent a primary food source.
Reproduction
In night monkeys, mating occurs infrequently; however, females are fertile year-round, with reproductive cycles ranging from 13 to 25 days. The gestation period for night monkeys is approximately 117–159 days but varies from species to species. The birthing season extends from September to March and is species-dependent, with one offspring being produced per year; however, in studies conducted in captivity, twins were observed. Night monkeys reach puberty at a relatively young age, between 7 and 11 months, and most species attain full sexual maturity by the time they reach 2 years of age. A. azarae is an exception, reaching sexual maturity at around 4 years of age.
Behavior
The name "night monkey" comes from the fact that all species are active at night and are, in fact, the only truly nocturnal monkeys (an exception is the subspecies of Azara's night monkey, Aotus azarae azarae, which is cathemeral). Night monkeys make a notably wide variety of vocal sounds, with up to eight categories of distinct calls (gruff grunts, resonant grunts, sneeze grunts, screams, low trills, moans, gulps, and hoots), and a frequency range of 190–1,950 Hz. Unusual among the New World monkeys, they are monochromats, that is, they have no colour vision, presumably because it is of no advantage given their nocturnal habits. They have a better spatial resolution at low light levels than other primates, which contributes to their ability to capture insects and move at night. Night monkeys live in family groups consisting of a mated pair and their immature offspring. Family groups defend territories by vocal calls and scent marking.
The night monkey is socially monogamous, and all night monkeys form pair bonds. Only one infant is born each year. The male is the primary caregiver, and the mother carries the infant for only the first week or so of its life. This is believed to have developed because it increases the survival of the infant and reduces the metabolic costs on the female. Adults will occasionally be evicted from the group by same-sex individuals, either kin or outsiders.
Nocturnality
The family Aotidae is the only family of nocturnal species within the suborder Anthropoidea. While the order Primates includes the prosimians, many of which are nocturnal, the anthropoids possess very few nocturnal species; it is therefore highly likely that the ancestors of the family Aotidae did not exhibit nocturnality and were instead diurnal. The presence of nocturnal behavior in Aotidae therefore exemplifies a derived trait: an evolutionary adaptation that conferred greater fitness advantages on the night monkey. Night monkeys share some similarities with nocturnal prosimians, including a low basal metabolic rate, small body size and a good ability to detect visual cues at low light levels. Their responses to olfactory stimuli are intermediate between those of prosimians and diurnal primate species; however, their ability to use auditory cues remains more similar to that of diurnal primate species than to that of nocturnal primate species. This provides further evidence to support the hypothesis that nocturnality is a derived trait in the family Aotidae.
As the ancestor of Aotidae was likely diurnal, selective and environmental pressures must have been exerted on the members of this family that subsequently altered their circadian rhythm, allowing them to fill empty niches. Being active at night rather than during the day gave Aotus access to better food sources, provided protection from predators, reduced interspecific competition and provided an escape from the harsh environmental conditions of their habitat. First, resting during the day allows for decreased interaction with diurnal predators. Members of the family Aotidae apply predation-avoidance strategies, choosing very strategic, covered nest sites in trees. These primates carefully choose areas with sufficient foliage and vines to provide cover from the sun and camouflage from predators, but which simultaneously allow for visibility of ground predators and permit effective routes of escape should a predator approach too quickly. Activity at night also permits night monkeys to avoid aggressive interactions with other species, such as competing for food and territorial disputes, as they are active when most other species are inactive and resting.
Night monkeys also benefit from a nocturnal lifestyle, as activity at night provides a degree of protection from the heat of the day and the associated thermoregulation difficulties. Although night monkeys, like all primates, are endothermic, meaning they are able to produce their own heat, night monkeys undergo behavioural thermoregulation in order to minimize energy expenditure. During the hottest points of the day, night monkeys are resting and therefore expending less energy in the form of heat. As they carefully construct their nests, night monkeys also benefit from the shade provided by the forest canopy, which enables them to cool their bodies by moving into a shady area. Additionally, finding food is energetically costly, and completing this process during the day usually involves the use of energy in the form of calories and lipid reserves to cool the body down. Foraging during the night, when it is cooler and when there is less competition, is consistent with optimal foraging theory: maximizing energy input while minimizing energy output.
While protection from predators, interspecific interactions, and the harsh environment represent ultimate causes for nocturnal behavior, as they increase the species' fitness, the proximate causes of nocturnality are linked to the environmental effects on circadian rhythm. While diurnal species are stimulated by the appearance of the sun, in nocturnal species activity is highly influenced by the degree of moonlight available. The presence of a new moon correlates with inhibition of activity in night monkeys, which exhibit lower levels of activity with decreasing levels of moonlight. Therefore, the lunar cycle has a significant influence on the foraging and other nocturnal behaviors of night monkey species.
Pair-bonded social animals (social monogamy)
Night monkeys are socially monogamous: they form a bond and mate with one partner. They live in small groups consisting of a pair of reproductive adults, one infant and one to two juveniles. These species exhibit mate guarding, a practice in which the male protects the female he is bonded to and prevents other conspecifics from attempting to mate with her. Mate guarding likely evolved as a means of reducing the energy expended on mating. As night monkey territories generally have some edge overlap, a large number of individuals can coexist in one area, which may make it difficult for a male to defend many females at once because of high levels of intraspecific competition for mates; by forming a bonded pair, the energy expenditure of protecting a mate is reduced. Pair bonding may also be exhibited as a result of food distribution. In the forest, pockets of food can be dense, or very patchy and scarce. Females, as they need energy stores to support reproduction, are generally distributed in areas with sufficient food sources. Males will therefore also have to distribute themselves within proximity to females. This pattern of food distribution lends itself to social monogamy, as finding mates may become difficult if males have to constantly search for females that may be widely distributed depending on food availability that year.
However, while this does explain social monogamy, it does not explain the high degree of paternal care exhibited by these primates. After the birth of an infant, the male is the primary carrier, carrying the offspring up to 90% of the time. In addition to aiding in infant care, males support females during lactation by sharing their foraged food with them. Generally, food sharing is not observed in nature, as the search for food requires a great degree of energy expenditure, but in the case of night monkey males, food sharing confers survival advantages on the offspring. As lactating females may be too weak to forage themselves, they may lose the ability to nurse their young; food sharing therefore ensures that offspring will be well fed. The act of food sharing is only observed among species where there is a high degree of fidelity in paternity. Giving up valuable food sources would not confer an evolutionary advantage unless it increased an individual's fitness; in this case, paternal care ensures the success of offspring and therefore increases the father's fitness.
Olfactory communication and foraging
Recent studies have proposed that night monkeys rely on olfaction and olfactory cues for foraging and communication significantly more than other, diurnal primate species. This trend is reflected in the species' physiology; members of Aotidae possess larger scent-perception organs than their diurnal counterparts. The olfactory bulb, accessory olfactory bulb and volume of the lateral olfactory tract are all larger in Aotus than in any of the other New World monkey species. It is therefore likely that increased olfactory capacities improved the fitness of these nocturnal primates; they produced more offspring and passed on these survival-enhancing traits. The benefits of increased olfaction in night monkeys are twofold: an increased ability to use scent cues has facilitated night-time foraging and is also an important factor in mate selection and sexual attractiveness.
As a substantial portion of the night monkey's activity occurs during the dark hours of the night, there is a much lower reliance on visual and tactile cues. When foraging at night, members of the family Aotidae will smell fruits and leaves before ingesting them to determine the quality and safety of the food source. As they are highly frugivorous and cannot perceive colour well, smell becomes the primary determinant of the ripeness of fruits and is therefore an important component in the optimal foraging methods of these primates. Upon finding a rich food source, night monkeys have been observed scent marking not only the food source but also the route from their sleeping site to the food source. Scent can therefore be used as an effective method of navigation and reduce energy expenditure during subsequent foraging expeditions. Night monkeys possess several scent glands covered by greasy hair patches, which secrete pheromones that can be transferred onto vegetation or other conspecifics. Scent glands are often located subcaudally, but also occur near the muzzle and the sternum. Scent marking is accomplished by rubbing the hairs covering the scent glands onto the desired "marked" item.
Olfactory cues are also of significant importance in the process of mating and mate guarding. Male night monkeys will rub their subcaudal glands onto their female partner in a process called "partner marking" in order to signal to coexisting males that the female is not available for mating. Night monkeys also send chemical signals through urine to communicate reproductive receptivity. In many cases, male night monkeys have been observed drinking the urine of their female mate; it is proposed that the pheromones in the urine can indicate the reproductive state of a female and signal ovulation. This is especially important in night monkeys as they cannot rely on visual cues, such as the presence of a tumescence, to determine female reproductive state. Olfactory communication in night monkeys is therefore a result of sexual selection: a sexually dimorphic trait conferring increased reproductive success. Males have larger subcaudal scent glands than females, and sex differences have been recorded in the glandular secretions of each sex. There is a preference for scents of a particular type, namely those which indicate reproductive receptivity, which increases species fitness by facilitating the production of offspring.
Conservation
According to the IUCN (the International Union for Conservation of Nature), the Peruvian night monkey is classified as an Endangered species, four species are Vulnerable, four are Least-concern species, and two are data deficient. Most night monkey species are threatened by varying levels of habitat loss throughout their range, caused by agricultural expansion, cattle ranching, logging, armed conflict, and mining operations. To date, it is estimated that more than 62% of the habitat of the Peruvian night monkey has been destroyed or degraded by human activities. However, some night monkey species have proved capable of adapting exceptionally well to anthropogenic influences in their environment. Populations of the Peruvian night monkey have been observed thriving in small forest fragments and plantation or farmland areas; however, this is likely possible only because of their small body size and may not be an appropriate alternative habitat option for other, larger night monkey species. Studies have already been conducted into the feasibility of agroforestry: plantations which simultaneously support local species biodiversity. In the case of A. miconax, coffee plantations with introduced shade trees provided quality habitat. While the coffee plantations benefited from the increased shade, which reduced weed growth and desiccation, night monkeys used the space as a habitat and as a connection corridor or stepping stone between habitats that provided a rich food source. However, some researchers question the agroforestry concept, maintaining that monkeys are more susceptible to hunting, predators and pathogens in plantation fields, indicating the need for further research before the solution is implemented.
Night monkeys are additionally threatened by both national and international trade for bushmeat and domestic pets. Since 1975, the pet trade of night monkeys has been regulated by CITES (the Convention on International Trade in Endangered Species). In the last forty years, nearly 6,000 live night monkeys and more than 7,000 specimens have been traded from the nine countries which they call home. While the restrictive laws put into place by CITES are aiding in the reduction of these numbers, four of the nine countries show deficiencies in maintaining the standards outlined by CITES. Increased attention to and enforcement of these laws will be imperative for the sustainability of night monkey populations.
Use in biomedical research poses another threat to night monkey biodiversity. Species such as Nancy Ma's night monkey, like human beings, are susceptible to infection by the Plasmodium falciparum parasite responsible for malaria. This trait caused them to be recommended by the World Health Organization as test subjects in the development of malaria vaccines. Up to 2008, more than 76 night monkeys died as a result of vaccine testing; some died from malaria, while others perished due to medical complications from the testing.
Increased research and knowledge of night monkey ecology is an invaluable tool for determining conservation strategies for these species and for raising awareness of the anthropogenic threats facing these primates. Radio-collaring of free-ranging primates offers a method of obtaining more accurate and complete data on primate behavior patterns. This in turn can aid in understanding what measures need to be taken to promote the conservation of these species. Radio-collaring not only allows for the identification of individuals within a species, increased sample sizes, and more detailed dispersal and range data, but also facilitates educational programs which raise awareness of the current biodiversity crisis. While potentially extremely valuable, radio-collaring has been shown to interfere with social group interactions; the development of better collaring techniques and technology will therefore be imperative for the successful use of radio collars on night monkeys.
| Biology and health sciences | New World monkeys | Animals |
62893 | https://en.wikipedia.org/wiki/Dingo | Dingo | The dingo (either included in the species Canis familiaris, or considered one of the following independent taxa: Canis familiaris dingo, Canis dingo, or Canis lupus dingo) is an ancient (basal) lineage of dog found in Australia. Its taxonomic classification is debated as indicated by the variety of scientific names presently applied in different publications. It is variously considered a form of domestic dog not warranting recognition as a subspecies, a subspecies of dog or wolf, or a full species in its own right.
The dingo is a medium-sized canine that possesses a lean, hardy body adapted for speed, agility, and stamina. The dingo's three main coat colourations are light ginger or tan, black and tan, or creamy white. The skull is wedge-shaped and appears large in proportion to the body. The dingo is closely related to the New Guinea singing dog: their lineage split early from the lineage that led to today's domestic dogs, and can be traced back through Maritime Southeast Asia to Asia. The oldest remains of dingoes in Australia are around 3,500 years old.
A dingo pack usually consists of a mated pair, their offspring from the current year, and sometimes offspring from the previous year.
Etymology
The name "dingo" comes from the Dharug language used by the Indigenous Australians of the Sydney area.
The first British colonists to arrive in Australia in 1788 established a settlement at Port Jackson and noted "dingoes" living with Indigenous Australians. The name was first recorded in 1789 by Watkin Tench in his Narrative of the Expedition to Botany Bay.
Related Dharug words include "ting-ko" meaning "bitch", and "tun-go-wo-re-gal" meaning "large dog". The dingo has different names in different indigenous Australian languages, such as boolomo, dwer-da, joogoong, kal, kurpany, maliki, mirigung, noggum, papa-inura, and wantibirri. Some authors propose that a difference existed between camp dingoes and wild dingoes as they had different names among indigenous tribes. The people of the Yarralin, Northern Territory, region frequently call those dingoes that live with them walaku, and those that live in the wilderness ngurakin. They also use the name walaku to refer to both dingoes and dogs. The colonial settlers of New South Wales wrote using the name dingo only for camp dogs. It is proposed that in New South Wales the camp dingoes only became wild after the collapse of Aboriginal society.
Taxonomy
Dogs associated with indigenous people were first recorded by Jan Carstenszoon in the Cape York Peninsula area in 1623. In 1699, Captain William Dampier visited the coast of what is now Western Australia and recorded that "my men saw two or three beasts like hungry wolves, lean like so many skeletons, being nothing but skin and bones". In 1788, the First Fleet arrived in Botany Bay under the command of Australia's first colonial governor, Arthur Phillip, who took ownership of a dingo and in his journal made a brief description with an illustration of the "Dog of New South Wales". In 1793, based on Phillip's brief description and illustration, the "Dog of New South Wales" was classified by Friedrich Meyer as Canis dingo.
In 1999, a study of the maternal lineage through the use of mitochondrial DNA (mDNA) as a genetic marker indicates that the dingo and New Guinea singing dog developed at a time when human populations were more isolated from each other. In the third edition of Mammal Species of the World published in 2005, the mammalogist W. Christopher Wozencraft listed under the wolf Canis lupus its wild subspecies, and proposed two additional subspecies: "familiaris Linnaeus, 1758 [domestic dog]" and "dingo Meyer, 1793 [domestic dog]". Wozencraft included hallstromi—the New Guinea singing dog—as a taxonomic synonym for the dingo. He referred to the mDNA study as one of the guides in forming his decision. The inclusion of familiaris and dingo under a "domestic dog" clade has been noted by other mammalogists, and their classification under the wolf debated.
In 2019, a workshop hosted by the IUCN/SSC Canid Specialist Group considered the New Guinea singing dog and the dingo to be feral dogs (Canis familiaris), which therefore should not be assessed for the IUCN Red List.
In 2020, the American Society of Mammalogists considered the dingo a synonym of the domestic dog. However, recent DNA sequencing of a 'pure' wild dingo from South Australia suggests that the dingo has a different DNA methylation pattern to the German Shepherd. In 2024, a study found that the dingo and New Guinea singing dog show 5.5% genome introgression from the ancestor of the recently extinct Japanese wolf, with Japanese dogs showing 4% genome introgression. This introgression occurred before the ancestor of the Japanese wolf arrived in Japan.
Domestic status
The dingo is regarded as a feral dog because it descended from domesticated ancestors. The dingo's relationship with Indigenous Australians is one of commensalism, in which two organisms live in close association, but do not depend on each other for survival. They both hunt and sleep together. The dingo is, therefore, comfortable enough around humans to associate with them, but is still capable of living independently. Any free-ranging, unowned dog can be socialised to become an owned dog, as some dingoes do when they join human families. Although the dingo exists in the wild, it associates with humans, but has not been selectively bred unlike other domesticated animals. Therefore, its status as a domestic animal is not clear. Whether the dingo was a wild or domesticated species was not clarified from Meyer's original description, which translated from the German language reads:
It is not known if it is the only dog species in New South Wales, and if it can also still be found in the wild state; however, so far it appears to have lost little of its wild condition; moreover, no divergent varieties have been discovered.
History
The earliest known dingo remains, found in Western Australia, date to 3,450 years ago. Based on a comparison of modern dingoes with these early remains, dingo morphology has not changed over these thousands of years. This suggests that no artificial selection has been applied over this period and that the dingo represents an early form of dog. They have lived, bred, and undergone natural selection in the wild, isolated from other dogs until the arrival of European settlers, resulting in a unique breed.
In 2020, an mDNA study of ancient dog remains from the Yellow River and Yangtze River basins of southern China showed that most of the ancient dogs fell within haplogroup A1b, as do the Australian dingoes and the pre-colonial dogs of the Pacific, although this haplogroup occurs at low frequency in China today. The specimen from the Tianluoshan archaeological site, Zhejiang province, dates to 7,000 YBP (years before present) and is basal to the entire haplogroup A1b lineage. The dogs belonging to this haplogroup were once widely distributed in southern China, then dispersed through Southeast Asia into New Guinea and Oceania, but were replaced in China by dogs of other lineages 2,000 YBP.
The oldest reliable date for dog remains found in mainland Southeast Asia is from Vietnam at 4,000 YBP, and in Island Southeast Asia from Timor-Leste at 3,000 YBP. The earliest dingo remains in the Torres Straits date to 2,100 YBP. In New Guinea, the earliest dog remains date to 2,500–2,300 YBP from Caution Bay near Port Moresby, but no ancient New Guinea singing dog remains have been found.
The earliest dingo skeletal remains in Australia are estimated at 3,450 YBP from the Madura Caves on the Nullarbor Plain, south-eastern Western Australia; 3,320 YBP from Woombah Midden near Woombah, New South Wales; and 3,170 YBP from Fromme's Landing on the Murray River near Mannum, South Australia. Dingo bone fragments were found in a rock shelter located at Mount Burr, South Australia, in a layer that was originally dated 7,000–8,500 YBP. Excavations later indicated that the levels had been disturbed, and the dingo remains had "probably moved to an earlier level". The dating of these early Australian dingo fossils led to the widely held belief that dingoes first arrived in Australia 4,000 YBP and then took 500 years to disperse around the continent. However, the timing of these skeletal remains was based on the dating of the sediments in which they were discovered, and not the specimens themselves.
In 2018, the oldest skeletal bones from the Madura Caves were directly carbon dated between 3,348 and 3,081 YBP, providing firm evidence of the earliest dingo and that dingoes arrived later than had previously been proposed. The next-most reliable timing is based on desiccated flesh dated 2,200 YBP from Thylacine Hole, 110 km west of Eucla on the Nullarbor Plain, southeastern Western Australia. When dingoes first arrived, they would have been taken up by Indigenous Australians, who then provided a network for their swift transfer around the continent. Based on the recorded distribution time for dogs across Tasmania and cats across Australia once indigenous Australians had acquired them, the dispersal of dingoes from their point of landing until they occupied continental Australia is proposed to have taken only 70 years. The red fox is estimated to have dispersed across the continent in only 60–80 years.
At the end of the last glacial maximum and the associated rise in sea levels, Tasmania became separated from the Australian mainland 12,000 YBP, and New Guinea 6,500–8,500 YBP by the inundation of the Sahul Shelf. Fossil remains in Australia date to around 3,500 YBP and no dingo remains have been uncovered in Tasmania, so the dingo is estimated to have arrived in Australia at a time between 3,500 and 12,000 YBP. To reach Australia through Island Southeast Asia even at the lowest sea level of the last glacial maximum, a journey of at least over open sea between ancient Sunda and Sahul was necessary, so they must have accompanied humans on boats.
Phylogeny
Whole genome sequencing indicates that, while dogs are a genetically divergent subspecies of the grey wolf, the dog is not a descendant of the extant grey wolf. Rather, these are sister taxa which share a common ancestor from a ghost population of wolves that disappeared at the end of the Late Pleistocene. The dog and the dingo are not separate species. The dingo and the Basenji are basal members of the domestic dog clade.
Mitochondrial genome sequences indicate that the dingo falls within the domestic dog clade, and that the New Guinea singing dog is genetically closer to those dingoes that live in southeastern Australia than to those that live in the northwest. The dingo and New Guinea singing dog lineage can be traced back from Island Southeast Asia to Mainland Southeast Asia. Gene flow from the genetically divergent Tibetan wolf forms 2% of the dingo's genome, which likely represents ancient admixture in eastern Eurasia.
By the close of the last ice age, 11,700 years ago, five ancestral dog lineages had diversified from each other, with one of these being represented today by the New Guinea singing dog. In 2020, the first whole genome sequencing of the dingo and the New Guinea singing dog was undertaken. The study indicates that the ancestral lineage of the dingo/New Guinea singing dog clade arose in southern East Asia, migrated through Island Southeast Asia 9,900 years ago, and reached Australia 8,300 years ago; however, the human population which brought them remains unknown. The dingo's genome indicates that it was once a domestic dog which commenced a process of feralisation upon its arrival 8,300 years ago, with the new environment leading to changes in those genomic regions which regulate metabolism, neurodevelopment, and reproduction.
A 2016 genetic study shows that the lineage of those dingoes found today in the northwestern part of the Australian continent split from the lineage of the New Guinea singing dog and southeastern dingo 8,300 years ago, followed by a split between the New Guinea singing dog lineage and the southeastern dingo lineage 7,800 years ago. The study proposes that two dingo migrations occurred when sea levels were lower and Australia and New Guinea formed one landmass, named Sahul, that existed until 6,500–8,000 years ago. Whole genome analysis of the dingo indicates there are three sub-populations, which exist in Northeast (Tropical), Southeast (Alpine), and West/Central Australia (Desert). Morphological data show that dingo skulls from southeastern Australia (Alpine dingoes) are quite distinct from those of the other ecotypes, and genomic and mitochondrial DNA sequencing demonstrates that at least two dingo mtDNA haplotypes colonised Australia.
In 2020, a genetic study found that the New Guinea Highland wild dogs were genetically basal to the dingo and the New Guinea singing dog, and therefore the potential originator of both.
Description
The dingo is a medium-sized canid with a lean, hardy body that is adapted for speed, agility, and stamina. The head is the widest part of the body, wedge-shaped, and large in proportion to the body. Captive dingoes are longer and heavier than wild dingoes, as they have access to better food and veterinary care. The average wild dingo male weighs and the female , compared with the captive male and the female . The average wild dingo male length is and the female , compared with the captive male and the female . The average wild dingo male stands at the shoulder height of and the female , compared with the captive male and the female . Dingoes rarely carry excess fat and the wild ones display exposed ribs. Dingoes from northern and northwestern Australia are often larger than those found in central and southern Australia. The dingo is similar to the New Guinea singing dog in morphology apart from the dingo's greater height at the withers. The average dingo can reach speeds of up to 60 kilometres per hour.
Compared with the dog, the dingo is able to rotate its wrists and can turn doorknobs or raise latches in order to escape confinement. Dingo shoulder joints are unusually flexible, and they can climb fences, cliffs, trees, and rocks. These adaptations help dingoes climb in difficult terrain, where they prefer high vantage points. A similar adaptation can be found in the Norwegian Lundehund, which was developed on isolated Norwegian islands to hunt in cliff and rocky areas. Wolves do not have this ability.
Compared with the skull of the dog, the dingo possesses a longer muzzle, longer carnassial teeth, longer and more slender canine teeth, larger auditory bullae, a flatter cranium with a larger sagittal crest, and larger nuchal lines. In 2014, a study was conducted on pre-20th century dingo specimens that are unlikely to have been influenced by later hybridisation. The dingo skull was found to differ from that of the domestic dog by its larger palatal width, longer rostrum, shorter skull height, and wider sagittal crest. However, this was rebutted on the grounds that the figures fall within the wider range of the domestic dog and that each dog breed differs from the others in skull measurements. Based on a comparison with the remains of a dingo found at Fromme's Landing, the dingo's skull and skeleton have not changed over the past 3,000 years. Compared to the wolf, the dingo possesses a paedomorphic cranium similar to that of domestic dogs. However, the dingo has a larger brain size compared to dogs of the same body weight, with the dingo being more comparable with the wolf than dogs are. In this respect, the dingo resembles two similar mesopredators, the dhole and the coyote. The eyes are triangular (or almond-shaped) and are hazel to dark in colour with dark rims. The ears are erect and occur high on the skull.
Coat colour
The dingo's three main coat colours are described as being light ginger (or tan), black and tan, and creamy white. The ginger colour ranges from a deep rust to a pale cream and can be found in 74% of dingoes. Often, small white markings are seen on the tip of the tail, the feet, and the chest, but with no large white patches. Some do not exhibit white tips. The black and tan dingoes possess a black coat with a tan muzzle, chest, belly, legs, and feet and can be found in 12% of dingoes. Solid white can be found in 2% of dingoes and solid black 1%. Only three genes affect coat colour in the dingo compared with nine genes in the domestic dog. The ginger colour is dominant and carries the other three main colours – black, tan, and white. White dingoes breed true, and black and tan dingoes breed true; when these cross, the result is a sandy colour. The coat is not oily, nor does it have a dog-like odour. The dingo has a single coat in the tropical north of Australia and a double thick coat in the cold mountains of the south, the undercoat being a wolf-grey colour. Patchy and brindle coat colours can be found in dingoes with no dog ancestry and these colours are less common in dingoes of mixed ancestry.
Tail
The dingo's tail is flattish, tapering after mid-length and does not curve over the back, but is carried low.
Gait
When walking, the dingo's rear foot steps in line with the front foot, and these do not possess dewclaws.
Lifespan
Dingoes in the wild live 3–5 years with few living past 7–8 years. Some have been recorded living up to 10 years. In captivity, they live for 14–16 years. One dingo has been recorded to live just under 20 years.
Adaptation
Hybrids, distribution and habitat
The wolf-like canids are a group of large carnivores that are genetically closely related and all have 78 chromosomes; they can therefore potentially interbreed to produce fertile hybrids. In the Australian wild there exist dingoes, feral dogs, and crosses of these two, which produce dingo–dog hybrids. Most studies looking at the distribution of dingoes instead focus on the distribution of dingo–dog hybrids.
Dingoes occurred throughout mainland Australia before European settlement. They are not found in the fossil record of Tasmania, so they apparently arrived in Australia after Tasmania had separated from the mainland due to rising sea levels. The introduction of agriculture reduced dingo distribution, and by the early 1900s, large barrier fences, including the Dingo Fence, excluded them from the sheep-grazing areas. Land clearance, poisoning, and trapping caused the extinction of the dingo and hybrids from most of their former range in southern Queensland, New South Wales, Victoria, and South Australia. Today, they are absent from most of New South Wales, Victoria, the southeastern third of South Australia, and the southwestern tip of Western Australia. They are sparse in the eastern half of Western Australia and the adjoining areas of the Northern Territory and South Australia. They are regarded as common across the remainder of the continent.
The dingo could be considered an ecotype or an ecospecies that has adapted to Australia's unique environment. The dingo's present distribution covers a variety of habitats, including the temperate regions of eastern Australia, the alpine moorlands of the eastern highlands, the arid hot deserts of Central Australia, and the tropical forests and wetlands of Northern Australia. The occupation of, and adaption to, these habitats may have been assisted by their relationship with indigenous Australians.
Prey and diet
A 20-year study of the dingo's diet was conducted across Australia by the federal and state governments. These examined a total of 13,000 stomach contents and fecal samples. For the fecal samples, it was possible to identify and exclude those matching the tracks of foxes and feral cats, but distinguishing between the tracks left by dingoes and those of dingo hybrids or feral dogs was impossible. The study found that these canines prey on 177 species represented by 72.3% mammals (71 species), 18.8% birds (53 species), 3.3% vegetation (seeds), 1.8% reptiles (23 species), and 3.8% insects, fish, crabs, and frogs (28 species). The relative proportions of prey are much the same across Australia, apart from more birds being eaten in the north and south-east coastal regions, and more lizards in Central Australia. Some 80% of the diet consisted of 10 species: red kangaroo, swamp wallaby, cattle, dusky rat, magpie goose, common brushtail possum, long-haired rat, agile wallaby, European rabbit, and common wombat. Of the mammals eaten, 20% could be regarded as large.
However, the relative proportions of the size of prey mammals varied across regions. In the tropical coast region of northern Australia, agile wallabies, dusky rats, and magpie geese formed 80% of the diet. In Central Australia, the rabbit has become a substitute for native mammals, and during droughts, cattle carcasses provide most of the diet. On the Barkly Tableland, no rabbits occur nor does any native species dominate the diet, except for long-haired rats that form occasional plagues. In the Fortescue River region, the large red kangaroo and common wallaroo dominate the diet, as few smaller mammals are found in this area. On the Nullarbor Plain, rabbits and red kangaroos dominate the diet, and twice as much rabbit is eaten as red kangaroo. In the temperate mountains of eastern Australia, swamp wallaby and red-necked wallaby dominate the diet on the lower slopes and wombat on the higher slopes. Possums are commonly eaten here when found on the ground. In coastal regions, dingoes patrol the beaches for washed-up fish, seals, penguins, and other birds.
Dingoes drink about a litre of water each day in the summer and half a litre in winter. In arid regions during the winter, dingoes may live from the liquid in the bodies of their prey, as long as the number of prey is sufficient. In arid Central Australia, weaned pups draw most of their water from their food. There, regurgitation of water by the females for the pups was observed. During lactation, captive females have no higher need of water than usual, since they consume the urine and feces of the pups, thus recycling the water and keeping the den clean. Tracked dingoes in the Strzelecki Desert regularly visited water-points every 3–5 days, with two dingoes surviving 22 days without water during both winter and summer.
Hunting behaviour
Dingoes, dingo hybrids, and feral dogs usually attack from the rear as they pursue their prey. They kill their prey by biting the throat, which damages the trachea and the major blood vessels of the neck. The size of the hunting pack is determined by the type of prey targeted, with large packs formed to help hunt large prey. Large prey can include kangaroos, cattle, water buffalo, and feral horses. Dingoes will assess and target prey based on the prey's ability to inflict damage. Large kangaroos are the most commonly killed prey. The main tactic is to sight the kangaroo, bail it up, then kill it. Dingoes typically hunt large kangaroos by having lead dingoes chase the quarry toward the paths of their pack mates, which are skilled at cutting corners in chases. The kangaroo becomes exhausted and is then killed. This same tactic is used by wolves, African wild dogs, and hyenas. Another tactic shared with African wild dogs is a relay pursuit until the prey is exhausted. A pack of dingoes is three times as likely to bring down a kangaroo as an individual, because the killing is done by those following the lead chaser, which has also become exhausted.
Two patterns are seen for the final stage of the attack. An adult or juvenile kangaroo is nipped at the hamstrings of the hind legs to slow it before an attack to the throat. A small adult female or juvenile is bitten on the neck or back by dingoes running beside it. In one area of Central Australia, dingoes hunt kangaroos by chasing them into a wire fence, where they become temporarily immobilised.
The largest male red kangaroos tend to ignore dingoes, even when the dingoes are hunting the younger males and females. A large eastern grey kangaroo successfully fought off an attack by a single dingo that lasted over an hour. Wallabies are hunted in a similar manner to kangaroos, the difference being that a single dingo hunts using scent rather than sight and the hunt may last several hours.
Dingo packs may attack young cattle and buffalo, but never healthy, grown adults. They focus on the sick or injured young. The tactics include harassing a mother with young, panicking a herd to separate the adults from the young, or watching a herd and looking for any unusual behaviour that might then be exploited.
One 1992 study in the Fortescue River region observed that cattle defend their calves by circling around the calves or aggressively charging dingoes. In one study of 26 approaches, 24 were by more than one dingo and only four resulted in calves being killed.
Dingoes often revisited carcasses. They did not touch fresh cattle carcasses until these were largely skin and bone, and even when these were plentiful, they still preferred to hunt kangaroos.
Of 68 chases of sheep, 26 sheep were seriously injured, but only eight were killed. The dingoes could outrun the sheep and the sheep were defenceless. However, the dingoes in general appeared not to be motivated to kill sheep, and in many cases just loped alongside the sheep before veering off to chase another sheep. For those that did kill and consume sheep, a large quantity of kangaroo was still in their diet, indicating once again a preference for kangaroo.
Lone dingoes can run down a rabbit, but are more successful by targeting kits near rabbit warrens. Dingoes take nestling birds, in addition to birds that are moulting and therefore cannot fly. Predators often use highly intelligent hunting techniques. Dingoes on Fraser Island have been observed using waves to entrap, tire, and help drown an adult swamp wallaby and an echidna. In the coastal wetlands of northern Australia, dingoes depend on magpie geese for a large part of their diet and a lone dingo sometimes distracts these while a white-breasted sea eagle makes a kill that is too heavy for it to carry off, with the dingo then driving the sea eagle away. They also scavenge on prey dropped from the nesting platforms of sea eagles. Lone dingoes may hunt small rodents and grasshoppers in grass by using their senses of smell and hearing, then pouncing on them with their forepaws.
Competitors
Dingoes and their hybrids co-exist with the native quoll. They also co-occur in the same territory as the introduced European red fox and feral cat, but little is known about the relationships between these three. Dingoes and their hybrids can drive off foxes from sources of water and occasionally eat feral cats. Dingoes can be killed by feral water buffalo and cattle goring and kicking them, from snake bite, and predation on their pups (and occasionally adults) by wedge-tailed eagles.
Communication
Like all domestic dogs, dingoes tend towards phonetic communication. However, in contrast to domestic dogs, dingoes howl and whimper more, and bark less. Eight sound classes with 19 sound types have been identified.
Barking
Compared to most domestic dogs, the bark of a dingo is short and monosyllabic, and is rarely used. Barking was observed to make up only 5% of vocalisations. Dog barking has always been distinct from wolf barking. Australian dingoes bark mainly in swooshing noises or in a mixture of atonal and tonal sounds. In addition, barking is almost exclusively used for giving warnings. Warn-barking in a homotypical sequence and a kind of "warn-howling" in a heterotypical sequence have also been observed. The bark-howling starts with several barks and then fades into a rising and ebbing howl and is probably (similar to coughing) used to warn the puppies and members of the pack. Additionally, dingoes emit a sort of "wailing" sound, which they mostly use when approaching a watering hole, probably to warn already present dingoes.
According to the present state of knowledge, getting Australian dingoes to bark more frequently by putting them in contact with other domestic dogs is not possible. However, German zoologist Alfred Brehm reported a dingo that learned the more "typical" form of barking and how to use it, while its brother did not. Whether dingoes bark or bark-howl less frequently in general is not certain.
Howling
Dingoes have three basic forms of howling (moans, bark-howls, and snuffs) with at least 10 variations. Usually, three kinds of howls are distinguished: long and persistent, rising and ebbing, and short and abrupt.
Observations have shown that each kind of howling has several variations, though their purpose is unknown. The frequency of howling varies with the season and time of day, and is also influenced by breeding, migration, lactation, social stability, and dispersal behaviour. Howling can be more frequent in times of food shortage, because the dogs become more widely distributed within their home range.
Additionally, howling seems to have a group function, and is sometimes an expression of joy (for example, greeting-howls). Overall, howling was observed less frequently in dingoes than among grey wolves. It may happen that one dog will begin to howl, and several or all other dogs will howl back and bark from time to time. In the wilderness, dingoes howl over long distances to attract other members of the pack, to find other dogs, or to keep intruders at bay. Dingoes howl in chorus with significant pitches, and with increasing number of pack members, the variability of pitches also increases. Therefore, dingoes are suspected to be able to measure the size of a pack without visual contact. Moreover, their highly variable chorus howls have been proposed to generate a confounding effect in the receivers by making pack size appear larger.
Other forms
Growling, making up about 65% of the vocalisations, is used in an agonistic context for dominance, and as a defensive sound. Similar to many domestic dogs, a reactive usage of defensive growling is only rarely observed. Growling very often occurs in combination with other sounds, and has been observed almost exclusively in swooshing noises (similar to barking).
During observations in Germany, dingoes were heard to produce a sound that observers have called Schrappen. It was only observed in an agonistic context, mostly as a defence against obtrusive pups or for defending resources. It was described as a bite intention, during which the receiver is never touched or hurt. Only a clashing of the teeth could be heard.
Aside from vocal communication, dingoes communicate, like all domestic dogs, via scent marking specific objects (for example, Spinifex) or places (such as waters, trails, and hunting grounds) using chemical signals from their urine, feces, and scent glands. Males scent mark more frequently than females, especially during the mating season. They also scent rub, whereby a dog rolls its neck, shoulders, or back on something that is usually associated with food or the scent markings of other dogs.
Unlike wolves, dingoes can react to social cues and gestures from humans.
Behaviour
Dingoes tend to be nocturnal in warmer regions, but less so in cooler areas. Their main period of activity is around dusk and dawn, making them a crepuscular species in the colder climates. The periods of activity are short (often less than 1 hour) with short times of resting. Dingoes have two kinds of movement: a searching movement (apparently associated with hunting) and an exploratory movement (probably for contact and communication with other dogs). According to studies in Queensland, the wild dogs (dingo hybrids) there move freely at night through urban areas and cross streets and seem to get along quite well.
Social behaviour
The dingo's social behaviour is about as flexible as that of a coyote or grey wolf, which is perhaps one of the reasons the dingo was originally believed to have descended from the Indian wolf. While young males are often solitary and nomadic in nature, breeding adults often form a settled pack. However, in areas of the dingo's habitat with a widely spaced population, breeding pairs remain together, apart from others. Observed dingo group sizes are: a single dingo, 73%; two dingoes, 16%; three dingoes, 5%; four dingoes, 3%; and packs of five to seven dingoes, 3%. A dingo pack usually consists of a mated pair, their offspring from the current year, and sometimes offspring from the previous year.
Where conditions are favourable among dingo packs, the pack is stable with a distinct territory and little overlap between neighbours. The size of packs often appears to correspond to the size of prey available in the pack's territory. Desert areas have smaller groups of dingoes with a more loose territorial behaviour and sharing of the water sites. The average monthly pack size was noted to be between three and 12 members.
Similar to other canids, a dingo pack largely consists of a mated pair, their current year's offspring, and occasionally a previous year's offspring. Dominance hierarchies exist both between and within males and females, with males usually being more dominant than females. However, a few exceptions have been noted in captive packs. During travel, while eating prey, or when approaching a water source for the first time, the breeding male will be seen as the leader, or alpha. Subordinate dingoes approach a more dominant dog in a slightly crouched posture, ears flat, and tail down, to ensure peace in the pack. Establishment of artificial packs in captive dingoes has failed.
Reproduction
Dingoes breed once annually, depending on the estrous cycle of the females, which, according to most sources, only come into heat once per year. Dingo females can come into heat twice per year, but can only be pregnant once a year, with the second heat usually resulting only in an apparent pregnancy.
Males are virile throughout the year in most regions, but have a lower sperm production during the summer in most cases. During studies on dingoes from the Eastern Highlands and Central Australia in captivity, no specific breeding cycle could be observed. All were potent throughout the year. The breeding was only regulated by the heat of the females. A rise in testosterone was observed in the males during the breeding season, but this was attributed to the heat of the females and copulation. In contrast to the captive dingoes, captured dingo males from Central Australia did show evidence of a male breeding cycle. Those dingoes showed no interest in females in heat (this time other domestic dogs) outside the mating season (January to July) and did not breed with them.
The mating season usually occurs in Australia between March and May (according to other sources between April and June). During this time, dingoes may actively defend their territories using vocalisations, dominance behaviour, growling, and barking.
Most females in the wild start breeding at the age of 2 years. Within packs, the alpha female tends to go into heat before subordinates and actively suppresses mating attempts by other females. Males become sexually mature between the ages of 1 and 3 years. The precise start of breeding varies depending on age, social status, geographic range, and seasonal conditions. Among dingoes in captivity, the pre-estrus was observed to last 10–12 days. However, the pre-estrus may last as long as 60 days in the wild.
In general, the only dingoes in a pack that successfully breed are the alpha pair, and the other pack members help with raising the pups. Subordinates are actively prevented from breeding by the alpha pair and some subordinate females have a false pregnancy. Low-ranking or solitary dingoes can successfully breed if the pack structure breaks up.
The gestation period lasts for 61–69 days and the size of the litter can range from 1 to 10 (usually 5) pups, with the number of males born tending to be higher than that of females. Pups of subordinate females usually get killed by the alpha female, which causes the population increase to be low even in good times. This behaviour possibly developed as an adaptation to the fluctuating environmental conditions in Australia. Pups are usually born between May and August (the winter period), but in tropical regions, breeding can occur at any time of the year.
At the age of 3 weeks, the pups leave the den for the first time, and leave it completely at 8 weeks. Dens are mostly underground. Reports exist of dens in abandoned rabbit burrows, rock formations, under boulders in dry creeks, under large spinifex, in hollow logs, and augmented burrows of monitor lizards and wombat burrows. The pups usually stray around the den within a radius of , and are accompanied by older dogs during longer travels. The transition to consuming solid food is normally accomplished by all members of the pack during the age of 9 to 12 weeks. Apart from their own experiences, pups also learn through observation. Young dingoes usually become independent at the age of 3–6 months or they disperse at the age of 10 months, when the next mating season starts.
Migration
Dingoes usually remain in one area and do not undergo seasonal migrations. However, during times of famine, even in normally "safe" areas, dingoes travel into pastoral areas, where intensive, human-induced control measures are undertaken. In Western Australia in the 1970s, young dogs were found to travel for long distances when necessary. About 10% of the dogs captured—all younger than 12 months—were later recaptured far away from their first location. Among these, the average travelled distance for males was and for females . Therefore, travelling dingoes had lower chances of survival in foreign territories, and they are apparently unlikely to survive long migrations through occupied territories. The rarity of long migration routes seemed to confirm this. During investigations in the Nullarbor Plain, even longer migration routes were recorded. The longest recorded migration route of a radio-collared dingo was about .
Attacks on humans
Dingoes generally avoid conflict with humans, but they are large enough to be dangerous. Most attacks involve people feeding wild dingoes, particularly on K'gari (formerly Fraser Island), which is a special centre of dingo-related tourism. The vast majority of dingo attacks are minor in nature, but some can be major, and a few have been fatal: the death of two-month-old Azaria Chamberlain in the Northern Territory in 1980 is one of them. Many Australian national parks have signs advising visitors not to feed wildlife, partly because this practice is not healthy for the animals, and partly because it may encourage undesirable behaviour, such as snatching or biting by dingoes, kangaroos, goannas, and some birds.
Impact
Ecological
Extinction of thylacines
Some researchers propose that the dingo caused the extirpation of the thylacine, the Tasmanian devil, and the Tasmanian native hen from mainland Australia because of the correlation in space and time with the dingo's arrival. Recent studies have questioned this proposal, suggesting that climate change and increasing human populations may have been the cause. Dingoes do not seem to have had the same ecological impact that invasive red foxes have in modern times. This might be connected to the dingo's way of hunting and the size of their favoured prey, as well as to the low number of dingoes in the time before European colonisation.
In 2017, a genetic study found that the population of the northwestern dingoes had commenced expanding 4,000–6,000 years ago. This was proposed to be due either to their first arrival in Australia or to the commencement of the extinction of the thylacine, with the dingo expanding into the thylacine's former range.
Interactions with humans
The first British colonists who settled at Port Jackson, in 1788, recorded the dingo living with indigenous Australians, and later at Melville Island, in 1818. Furthermore, they were noted at the lower Darling and Murray rivers in 1862, indicating that the dingo was possibly semi-domesticated (or at least utilised in a "symbiotic" manner) by aboriginal Australians. When livestock farming began expanding across Australia, in the early 19th century, dingoes began preying on sheep and cattle. Numerous population-control measures have been implemented since then, including a nation-wide fencing project, with only limited success.
Dingoes can be tame when they come in frequent contact with humans, and some dingoes live with humans. Many indigenous Australians and early European settlers lived alongside dingoes. Indigenous Australians would take dingo pups from the den and tame them until sexual maturity, when the dogs would leave.
According to David Jenkins, a research fellow at Charles Sturt University, the breeding and reintroduction of pure dingoes is no easy option and, as of 2007, there were no studies that seriously dealt with this topic, especially in areas where dingo populations are already present.
Interactions with other animals
Much of the present place of wild dogs in the Australian ecosystem, especially in the urban areas, remains unknown. Although the ecological role of dingoes in Northern and Central Australia is well understood, the same does not apply to the role of wild dogs in the east of the continent. In contrast to some claims, dingoes are assumed to have a positive impact on biodiversity in areas where feral foxes are present.
Dingoes are regarded as apex predators and possibly perform an ecological key function. They likely control the diversity of the ecosystem by limiting prey numbers and keeping competitors in check, a view supported by increasing evidence from scientific research. Wild dogs hunt feral livestock such as goats and pigs, as well as native prey and introduced animals. The low number of feral goats in Northern Australia is possibly caused by the presence of dingoes, but whether they control the goats' numbers is still disputable. Studies from 1995 in the northern wet forests of Australia found that the dingoes there did not reduce the number of feral pigs; their predation affected the pig population only in combination with the presence of water buffaloes (which hinder the pigs' access to food).
Observations concerning the mutual impact of dingoes and red fox and cat populations suggest dingoes limit the access of foxes and cats to certain resources. As a result, a disappearance of the dingoes may cause an increase of red fox and feral cat numbers, and therefore, a higher pressure on native animals. These studies found the presence of dingoes is one of the factors that keep fox numbers in an area low, and therefore reduces pressure on native animals, which then do not disappear from the area. The countrywide numbers of red foxes are especially high where dingo numbers are low, but other factors might be responsible for this, depending on the area. Evidence was found for a competition between wild dogs and red foxes in the Blue Mountains of New South Wales, since many overlaps occurred in the spectrum of preferred prey, but only evidence for local competition, not on a grand scale, was found.
Also, dingoes can live with red foxes and feral cats without reducing their numbers in areas with sufficient food resources (for example, high rabbit numbers) and hiding places. Nearly nothing is known about the relationship of wild dogs and feral cats, except both mostly live in the same areas. Although wild dogs also eat cats, whether this affects the cat populations is not known.
Additionally, the disappearance of dingoes might allow kangaroo, rabbit, and Australian brushturkey numbers to increase. In the areas outside the Dingo Fence, the number of emus is lower than in the areas inside, although the numbers vary depending on the habitat. Since the environment is the same on both sides of the fence, the dingo was assumed to be a strong factor in the regulation of these species. Therefore, some people demand that dingo numbers be allowed to increase, or that dingoes be reintroduced in areas with low dingo populations, to lower the pressure on endangered populations of native species. In addition, the presence of the Australian brushturkey in Queensland increased significantly after dingo baiting was conducted.
The dingo's habitat covers most of Australia, but they are absent in the southeast and Tasmania, and an area in the southwest (see map). As Australia's largest extant terrestrial predators, dingoes prey on mammals up to the size of the large red kangaroo, in addition to the grey kangaroo, wombat, wallaby, quoll, possum and most other marsupials; they frequently pursue birds, lizards, fish, crabs, crayfish, eels, snakes, frogs, young crocodiles, larger insects, snails, carrion, human refuse, and sometimes fallen fruits or seeds.
Dingoes can also be of potential benefit to their environment, as they will hunt Australia's many introduced and invasive species. This includes human-introduced animals such as deer and their offspring (sambar, chital, and red deer) and water buffalo, in addition to the highly invasive rabbits, red foxes, feral and domestic cats, some feral dogs, sheep, and calves. Rarely, a pack of dingoes will pursue the larger and more dangerous dromedary camel, feral donkey, or feral horse; unattended young animals, or sick, weak, or elderly individuals are at greatest risk.
Cultural
Cultural opinions about the dingo are often based on its perceived "cunning", and the idea that it is an intermediate between civilisation and wildness.
Some of the early European settlers looked on dingoes as domestic dogs, while others thought they were more like wolves. Over the years, dingoes began to attack sheep, and their relationship to the Europeans changed very quickly; they were regarded as devious and cowardly, since they did not fight bravely in the eyes of the Europeans, and vanished into the bush. Additionally, they were seen as promiscuous or as devils with a venomous bite or saliva, so they could be killed unreservedly. Over the years, dingo trappers gained some prestige for their work, especially when they managed to kill hard-to-catch dingoes. Dingoes were associated with thieves, vagabonds, bushrangers, and parliamentary opponents. From the 1960s, politicians began calling their opponents "dingo", meaning they were cowardly and treacherous, and it has become a popular form of attack since then. Today, the word "dingo" still stands for "coward" and "cheat", with verb and adjective forms used, as well.
The image of the dingo has ranged among some groups from the instructive to the demonic.
Ceremonies (such as a keen in the form of howling on the Cape York Peninsula) and dreamtime stories connected to the dingo have been passed down through the generations.
The dingo plays a prominent role in the Dreamtime stories of indigenous Australians, but it is rarely depicted in their cave paintings when compared with the extinct thylacine. One of the tribal elders of the people of the Yarralin, Northern Territory region tells that the Dreamtime dingo is the ancestor of both dingoes and humans. The dingoes "are what we would be if we were not what we are."
Similar to how Europeans acquired dingoes, the Aboriginal people of Australia acquired dogs from the immigrants very quickly. This process was so fast that Francis Barrallier (surveyor on early expeditions around the colony at Port Jackson) discovered in 1802 that five dogs of European origin were there before him. One theory holds that other domestic dogs adopt the role of the "pure" dingo. Introduced animals, such as the water buffalo and the domestic cat, have been adopted into the indigenous Aboriginal culture in the forms of rituals, traditional paintings, and dreamtime stories.
Most of the published myths originate from the Western Desert and show a remarkable complexity. In some stories, dingoes are the central characters, while in others, they are only minor ones. In some of these myths, an ancestor from the Dreamtime created humans and dingoes or gave them their current shape. Stories mention creation, socially acceptable behaviour, and explanations of why some things are the way they are. Myths exist about shapeshifters (human to dingo or vice versa), "dingo-people", and the creation of certain landscapes or elements of those landscapes, like waterholes or mountains.
Economic
Livestock farming expanded across Australia from the early 1800s, which led to conflict between the dingo and graziers. Sheep, and to a lesser extent cattle, are an easy target for dingoes. The pastoralists and the government bodies that support this industry have shot, trapped, and poisoned dingoes or destroyed dingo pups in their dens. After two centuries of persecution, the dingo or dingo–dog hybrids can still be found across most of the continent.
Research on the real extent of the damage and the reason for this problem only started recently. Livestock can die from many causes, and when the carcass is found, determining with certainty the cause of death is often difficult. Since the outcome of an attack on livestock depends to a high degree on the behaviour and experience of the predator and the prey, only direct observation is certain to determine whether an attack was by dingoes or other domestic dogs. Even the existence of remnants of the prey in the scat of wild dogs does not prove they are pests, since wild dogs also eat carrion.
The cattle industry can tolerate low to moderate, and sometimes high, numbers of wild dogs (therefore dingoes are not so easily regarded as pests in these areas). In the case of sheep and goats, a zero-tolerance attitude is common. The biggest threats are dogs that live inside or near the paddock areas. The extent of sheep loss is hard to determine due to the wide pasture lands in some parts of Australia.
In 2006, cattle losses in the Northern Territory rangeland grazing areas were estimated to be up to 30%.
Factors such as the availability of native prey, as well as the defending behaviour and health of the cattle, play an important role in the number of losses. A study in Central Australia in 2003 confirmed that dingoes only have a low impact on cattle numbers when a sufficient supply of other prey (such as kangaroos and rabbits) is available. In some parts of Australia, the loss of calves is assumed to be minimised if horned cattle are used instead of polled. The precise economic impact is not known, and the rescue of some calves is unlikely to compensate for the necessary costs of control measures. Calves usually suffer less lethal wounds than sheep due to their size and the protection by adult cattle, so they have a higher chance of surviving an attack. As a result, the evidence of a dog attack may only be discovered after the cattle have been herded back into the enclosure, when signs such as bitten ears, tails, and other wounds are found.
The opinions of cattle owners regarding dingoes are more variable than those of sheep owners. Some cattle owners believe that, in times of drought, it is better for a weakened mother to lose her calf so that she does not have to care for it as well; these owners are therefore more hesitant to kill dingoes. The cattle industry may benefit from the predation of dingoes on rabbits, kangaroos, and rats. Furthermore, the mortality rate of calves has many possible causes, and discriminating between them is difficult. The only reliable method to assess the damage would be to document all pregnant cows, then observe their development and that of their calves. The loss of calves in observed areas where dingoes were controlled was higher than in other areas. Loss of livestock is therefore not necessarily caused by the occurrence of dingoes and can be independent of the presence of wild dogs. One researcher has stated that on cattle stations where dingoes were controlled, kangaroos were abundant, and this affects the availability of grass.
Domestic dogs are the only terrestrial predators in Australia that are big enough to kill fully grown sheep, and only a few sheep manage to recover from severe injuries. In the case of lambs, death can have many causes apart from attacks by predators, which are blamed for the deaths because they eat from the carcasses. Although attacks by red foxes are possible, such attacks are rarer than previously thought. The fact that the sheep and goat industry is much more susceptible to damage caused by wild dogs than the cattle industry is mostly due to two factors: the flight behaviour of the sheep and their tendency to flock together in the face of danger, and the hunting methods of wild dogs, along with their efficient way of handling goats and sheep.
The damage to the livestock industry therefore does not correlate with the number of wild dogs in an area (except that no damage occurs where no wild dogs occur).
According to a report from the government of Queensland, wild dogs cost the state about $30 million annually due to livestock losses, the spread of diseases, and control measures. Losses for the livestock industry alone were estimated to be as high as $18 million. In Barcaldine, Queensland, up to one-fifth of all sheep are killed by dingoes annually, a situation which has been described as an "epidemic". According to a survey among cattle owners in 1995, performed by the Park and Wildlife Service, owners estimated their annual losses due to wild dogs (depending on the district) to be from 1.6% to 7.1%.
In 2018, a study in northern South Australia found that fetal/calf losses averaged 18.6%, with no significant reduction due to dingo baiting. The calf losses did not correlate with increased dingo activity, and the cattle diseases pestivirus and leptospirosis were a major cause. Dingoes then scavenged on the carcasses, though there was also evidence of dingo predation on calves.
Among the indigenous Australians, dingoes were also used as hunting aids, living hot water bottles, and camp dogs. Their scalps were used as a kind of currency, their teeth were traditionally used for decorative purposes, and their fur for traditional costumes.
Sometimes "pure" dingoes are important for tourism, when they are used to attract visitors. However, this seems to be common only on Fraser Island, where the dingoes are extensively used as a symbol to enhance the attraction of the island. Tourists are drawn to the experience of personally interacting with dingoes. Pictures of dingoes appear on brochures, many websites, and postcards advertising the island.
Legal status
The dingo is recognised as a native animal under the laws of all Australian jurisdictions. Australia has over 500 national parks, of which all but six are managed by the states and territories. The legal status of the dingo varies between these jurisdictions and, in some instances, between different regions of a single jurisdiction; some of these jurisdictions classify dingoes as an invasive native species.
Australian government: Section 528 of the Environment Protection and Biodiversity Conservation Act 1999 defines a native species as one that was present in Australia before the year 1400. The dingo is protected in all Australian government managed national parks and reserves, World Heritage Areas, and other protected areas.
Australian Capital Territory: The dingo is listed as a "pest animal" outside national parks and reserves in the Pest Plants and Animals (Pest Animals) Declaration 2016 (No 1) made under the Pest Plants and Animals Act 2005, which calls for a management plan for pest animals. The Nature Conservation Act 2014 protects native animals in national parks and reserves but excludes this protection to "pest animals" declared under the Pest Plants and Animals Act 2005.
New South Wales: The dingo falls under the definition of "wildlife" under the National Parks and Wildlife Act 1974; however, it is also listed as "unprotected fauna" under Schedule 11 of the act. The Wild Dog Destruction Act (1921) applies only to the western division of the state and includes the dingo in its definition of "wild dogs". The act requires landowners to destroy any wild dogs on their property, and any person owning a dingo or half-bred dingo without a permit faces a fine. In other parts of the state, dingoes can be kept as pets under the Companion Animals Act 1998, as a dingo is defined under this act as a "dog". The dingo has been proposed for listing under the Threatened Species Conservation Act, on the argument that these dogs had established populations before the arrival of Europeans, but no decision has been made.
Northern Territory: The dingo is a "vertebrate that is indigenous to Australia" and therefore "protected wildlife" under the Territory Parks and Wildlife Conservation Act 2014. A permit is required for all matters dealing with protected wildlife.
Queensland: The dingo is listed as "least concern wildlife" in the Nature Conservation (Wildlife) Regulation 2006 under the Nature Conservation Act 1992, therefore the dingo is protected in National Parks and conservation areas. The dingo is listed as a "pest" in the Land Protection (Pest and Stock Route Management) Regulation 2003 under the Land Protection (Pest and Stock Route Management) Act 2002, which requires land owners to take reasonable steps to keep their lands free of pests.
South Australia: The National Parks and Wildlife Act 1972 defines a protected animal as one that is indigenous to Australia but then lists the dingo as an "unprotected species" under Schedule 11. The purpose of the Dog Fence Act 1946 is to prevent wild dogs entering into the pastoral and agricultural areas south of the dog-proof fence. The dingo is listed as a "wild dog" under this act, and landowners are required to maintain the fence and destroy any wild dog within the vicinity of the fence by shooting, trapping or baiting. The dingo is listed as an "unprotected species" in the Natural Resources Management Act 2004, which allows landowners to lay baits "to control animals" on their land just north of the dog fence.
Tasmania: Tasmania does not have a native dingo population. The dingo is listed as a "restricted animal" in the Nature Conservation Act 2002 and cannot be imported without a permit. Once imported into Tasmania, a dingo is listed as a dog under the Dog Control Act 2000.
Victoria: The dingo is a "vertebrate taxon" that is "indigenous" to Australia and therefore "wildlife" under the Wildlife Act 1975, which protects wildlife. The act mandates that a permit is required to keep a dingo, and that this dingo must not be cross-bred with a dog. The act allows an order to be made to unprotect dingoes in certain areas of the state. The Order in Council made on 28 September 2010 covers the far north-west of the state and all of the state north-east of Melbourne; it was made to protect stock on private land. The order allows dingoes to be trapped, shot or baited by any person on private land in these regions, while protecting the dingo on state-owned land.
Western Australia: Dingoes are considered as "unprotected" native fauna under the Western Australian Wildlife Conservation Act. The dingo is recorded as a "declared pest" on the Western Australian Organism List. This list records those species that have been declared as pests under the Biosecurity and Agriculture Management Act 2007, and these are regarded as pests across all of Western Australia. Landowners must take the prescribed measures to deal with declared pests on their land. The policy of the WA government is to promote eradication of dingoes in the livestock grazing areas but leave them undisturbed in the rest of the state.
Control measures
Dingo attacks on livestock led to wide-scale efforts to repel them from areas with intensive agricultural usage, and all states and territories have enacted laws for the control of dingoes. In the early 20th century, fences were erected to keep dingoes away from areas frequented by sheep, and a tendency to routinely eradicate dingoes developed among some livestock owners. Established methods for the control of dingoes in sheep areas entailed the employment of specific workers on every property. The job of these people (who were nicknamed "doggers") was to reduce the number of dingoes by using steel traps, baits, firearms and other methods. The responsibility for the control of wild dogs lay solely in the hands of the landowners. At the same time, the government was forced to control the number of dingoes. As a result, a number of measures for the control of dingoes developed over time. It was also considered that dingoes travel over long distances to reach areas with richer prey populations, so control methods were often concentrated along "paths" or "trails" and in areas that were far away from sheep areas. All dingoes were regarded as a potential danger and were hunted.
Apart from the introduction of the poison 1080 (extensively used for 40 years and nicknamed "doggone"), the methods and strategies for controlling wild dogs have changed little over time. Information on the cultural importance of dingoes to indigenous people, and on the impact of control measures on other species, is also lacking in some areas. Historically, the attitudes and needs of indigenous people were not taken into account when dingoes were controlled. Other factors that might be taken into account are the genetic status (degree of interbreeding) of dingoes in these areas, ownership and land usage, as well as a restriction of killing measures to areas outside certain zones. However, most control measures and the associated studies aim to minimise the loss of livestock rather than to protect dingoes.
Increasing pressure from environmentalists against the indiscriminate killing of dingoes, as well as concern about the impact on other animals, has meant that more information must be gathered to prove the necessity of control measures and to disprove claims of unnecessary killings. Today, permanent population control is regarded as necessary to reduce the impact of all wild dogs and to ensure the survival of the "pure" dingo in the wild.
Guardian animals
To protect livestock, livestock guardian dogs (for example, Maremmas), donkeys, alpacas and llamas are used.
Dingo Fence
In the 1920s, the Dingo Fence was erected on the basis of the Wild Dog Act (1921), and by 1931 thousands of miles of dingo fences had been erected in several areas of South Australia. In 1946, these efforts were directed to a single goal, and the Dingo Fence was finally completed. The fence connected with other fences in New South Wales and Queensland. The main responsibility for maintaining the Dingo Fence still lies with the landowners whose properties border the fence, who receive financial support from the government.
Reward system
A reward system (local as well as from the government) was active from 1846 to the end of the 20th century, but there is no evidence that, despite the billions of dollars spent, it was ever an efficient control method, and its importance therefore declined over time.
Dingo scalping commenced in 1912 with the passage of the Wild Dogs Act by the government of South Australia. In an attempt to reduce depredation on livestock, that government offered a bounty for dingo skins, and this program was later repeated in Western Australia and the Northern Territory. One writer argues that this new legislation and economic driver had significant impacts on Aboriginal society in the region. This act was followed by updates and amendments, including 1931, 1938, and 1948.
Poisoning
Baits with the poison 1080 are regarded as the fastest and safest method for dog control, since dogs are extremely susceptible to it; even small amounts of poison per dog are sufficient (0.3 mg per kg). The application of aerial baiting is regulated in the Commonwealth by the Civil Aviation Regulations (1988). The assumption that the tiger quoll might be harmed by the poison led to a reduction in the areas where aerial baiting could be performed; where aerial baiting is no longer possible, baits must be laid on the ground.
Since 2004, cyanide-ejectors and protection collars (filled with 1080 in certain spots) have been tested.
In 2016, controversy surrounded a plan to inject a population of dingoes on Pelorus Island, off the coast of northern Queensland, Australia, with pills that would release a fatal dose of 1080 poison two years after the dingoes were to be intentionally released to help eradicate goats. The dingoes were dubbed 'death-row dingoes', and the plan was blocked due to concerns for a locally threatened shorebird.
Neutering
Owners of dingoes and other domestic dogs are sometimes asked to neuter their pets and keep them under observation to reduce the number of stray/feral dogs and prevent interbreeding with dingoes.
Efficiency of measures
The efficiency of control measures was questioned in the past and is often questioned today, as is whether they stand in a good cost-benefit ratio. The reward system proved to be susceptible to deception and to be useless on a large scale, and can therefore only be used for getting rid of individual "problem" dogs. Animal traps are considered inhumane and inefficient on a large scale, due to the limited efficacy of baits. Based on studies, it is assumed that only young dogs that would have died anyway can be captured. Furthermore, wild dogs are capable of learning and sometimes are able to detect and avoid traps quite efficiently. In one case, a dingo bitch followed a dogger and triggered his traps one after another by carefully pushing her paw through the sand that covered the trap.
Poisonous baits can be very effective when they are of good meat quality; however, they do not last long and are occasionally taken by red foxes, quolls, ants and birds. Aerial baiting can nearly eliminate whole dingo populations. Livestock guardian dogs can effectively minimise livestock losses, but are less effective on wide open areas with widely distributed livestock. Furthermore, they can be a danger to the livestock or be killed by control measures themselves when they are not sufficiently supervised by their owners. Fences are reliable in keeping wild dogs from entering certain areas, but they are expensive to build, need permanent maintenance, and only cause the problem to be relocated.
Control measures mostly result in smaller packs and a disruption of pack structure. The measures seem to be rather detrimental to the livestock industry because the empty territories are taken over by young dogs and the predation then increases. Nonetheless, it is regarded as unlikely that the control measures could completely eradicate the dingo in Central Australia, and the elimination of all wild dogs is not considered a realistic option.
It has been shown that culling a small percentage of immature dingoes on Fraser Island had little significant negative impact on the overall island population, though this is being disputed.
Conservation of purebreds
Until 2004, the dingo was categorised as "least concern" on the Red List of Threatened Species. In 2008, it was recategorised as "vulnerable", following the decline in numbers to around 30% of "pure" dingoes, due to crossbreeding with domestic dogs. In 2018, the IUCN regarded the dingo as a feral dog and removed it from the Red List.
Dingoes are reasonably abundant in large parts of Australia, but there is some argument that they are endangered due to interbreeding with other dogs in many parts of their range. Dingoes receive varying levels of protection in conservation areas such as national parks and natural reserves in New South Wales, the Northern Territory and Victoria, Arnhem Land and other Aboriginal lands, UNESCO World Heritage Sites, and the whole of the Australian Capital Territory. In some states, dingoes are regarded as declared pests and landowners are allowed to control the local populations. Throughout Australia, all other wild dogs are considered pests.
Fraser Island is a 1,840 square kilometre World Heritage Site located off Australia's eastern coast. The island is home to a genetically distinct population of dingoes that are free of dog introgression, estimated to number 120. These dingoes are unique because they are closely related to the southeastern dingoes but share a number of genes with the New Guinea singing dog and show some evidence of admixture with the northwestern dingoes. Because of their conservation value, in February 2013, a report on Fraser Island dingo management strategies was released, with options including ending the intimidation of dingoes, tagging practice changes and regular veterinarian checkups, as well as a permanent dingo sanctuary on the island. According to DNA examinations from 2004, the dingoes of Fraser Island are "pure", as opposed to dingo—dog hybrids. However, skull measurements from the mid-1990s had a different result. A 2013 study showed that dingoes living in the Tanami Desert are among the "purest" in Australia.
Groups that have devoted themselves to the conservation of the "pure" dingo by using breeding programs include the Australian Native Dog Conservation Society and the Australian Dingo Conservation Association. Presently, the efforts of the dingo conservation groups are considered to be ineffective because most of their dogs are untested or are known to be hybrids.
Dingo conservation efforts focus primarily on preventing interbreeding between dingoes and other domestic dogs in order to conserve the population of pure dingoes. This is extremely difficult and costly. Conservation efforts are hampered by the fact that it is not known how many pure dingoes still exist in Australia. Steps to conserve the pure dingo can only be effective when the identification of dingoes and other domestic dogs is absolutely reliable, especially in the case of living specimens. Additionally, conservation efforts are in conflict with control measures.
Conservation of pure and survivable dingo populations is promising in remote areas, where contact with humans and other domestic dogs is rare. Under New South Wales state policy in parks, reserves and other areas not used by agriculture, these populations are only to be controlled when they pose a threat to the survival of other native species. The introduction of "dog-free" buffer zones around areas with pure dingoes is regarded as a realistic method to stop interbreeding. This is enforced to the extent that all wild dogs can be killed outside the conservation areas. However, studies from the year 2007 indicate that even an intensive control of core areas is probably not able to stop the process of interbreeding.
According to the Dingo Discovery Sanctuary and Research Centre, many studies are finding a case for the re-introduction of the dingo into previously occupied areas in order to return some balance to badly degraded areas as a result of "unregulated and ignorant farming practices".
Dingo densities have been measured at up to 3 per square kilometre (0.8/sq mi) in both the Guy Fawkes River region of New South Wales and in South Australia at the height of a rabbit plague.
Hybridisation
In 2023, a study of 402 wild and captive dingoes, using 195,000 points across the dingo genome, indicated that past studies had over-estimated the extent of hybridisation and that pure dingoes are more common than originally thought.
In 2021, DNA testing of over 5,000 wild-living canines from across Australia found that 31 were feral domestic dogs and 27 were first generation hybrids. This finding challenges the perception that dingoes are nearly extinct and have been replaced by feral domestic dogs.
Coat colour cannot be used to distinguish hybrids. Dingo-like domestic dogs and dingo hybrids can generally be distinguished by the more dog-typical barking of the hybrids; differences in the breeding cycle, certain skull characteristics, and genetic analyses can also be used for differentiation. Despite all the characteristics that can be used for distinguishing between dingoes and other domestic dogs, two problems should not be underestimated: first, there is no real clarity regarding the point at which a dog is regarded as a "pure" dingo; and secondly, no distinguishing feature is completely reliable, since it is not known which characteristics permanently remain under the conditions of natural selection.
There are two main opinions regarding this process of interbreeding. The first, and likely most common, position states that the "pure" dingo should be preserved via strong controls of the wild dog populations, and only "pure" or "nearly-pure" dingoes should be protected. The second position is relatively new and is of the opinion that people must accept that the dingo has changed and that it is impossible to bring the "pure" dingo back. Conservation of these dogs should therefore be based on where and how they live, as well as their cultural and ecological role, instead of concentrating on precise definitions or concerns about "genetic purity". Both positions are controversially discussed.
Due to this interbreeding, there is a wider range of fur colours, skull shapes and body size in the modern-day wild dog population than in the time before the arrival of the Europeans. Over the course of the last 40 years, there has been an increase of about 20% in the average wild dog body size. It is currently unknown whether, in the case of the disappearance of "pure" dingoes, remaining hybrids would alter the predation pressure on other animals. It is also unclear what kind of role these hybrids would play in the Australian ecosystems. However, it is unlikely that the dynamics of the various ecosystems will be excessively disturbed by this process.
In 2011, a total of 3,941 samples were included in the first continent-wide DNA study of wild dogs. The study found that 46% were pure dingoes which exhibited no dog alleles (gene variants). There was evidence of hybridisation in every region sampled. In Central Australia only 13% were hybrids; however, in southeastern Australia 99% were hybrids or feral dogs. Pure dingo distribution was 88% in the Northern Territory, intermediate in Western Australia, South Australia and Queensland, and 1% in New South Wales and Victoria. Almost all wild dogs showed some dingo ancestry, with only 3% of dogs showing less than 80% dingo ancestry. This indicates that domestic dogs have a low survival rate in the wild, or that most hybridisation is the result of roaming dogs that return to their owners. No populations of feral dogs have been found in Australia.
In 2016, a three-dimensional geometric morphometric analysis of the skulls of dingoes, dogs and their hybrids found that dingo-dog hybrids exhibit morphology closer to the dingo than to the parent group dog. Hybridisation did not push the unique Canis dingo cranial morphology towards the wolf phenotype; therefore, hybrids cannot be distinguished from dingoes based on cranial measures. The study suggests that the wild dingo morphology is dominant when compared with the recessive dog breed morphology, and concludes that, although hybridisation introduces dog DNA into the dingo population, the native cranial morphology remains resistant to change.
| Biology and health sciences | Carnivora | null |
62929 | https://en.wikipedia.org/wiki/Aqua%20regia | Aqua regia | Aqua regia (from Latin, "regal water" or "royal water") is a mixture of nitric acid and hydrochloric acid, optimally in a molar ratio of 1:3. Aqua regia is a fuming liquid. Freshly prepared aqua regia is colorless, but it turns yellow, orange or red within seconds from the formation of nitrosyl chloride and nitrogen dioxide. It was so named by alchemists because it can dissolve noble metals like gold and platinum, though not all metals.
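As a rough illustration of the 1:3 molar ratio, the sketch below (not from the article) estimates how much concentrated hydrochloric acid is needed per volume of concentrated nitric acid. The stock molarities are typical nominal values and are assumptions, which is why the familiar "one volume HNO3 to three volumes HCl" lab recipe only approximates the molar ratio.

```python
# Illustrative sketch (not from the article): volumes of concentrated stock acids
# giving a 1:3 molar ratio of HNO3 to HCl. The stock molarities below are typical
# nominal values and are assumptions.

CONC_HNO3_M = 15.7   # mol/L, ~68% nitric acid (assumed)
CONC_HCL_M = 11.6    # mol/L, ~36% hydrochloric acid (assumed)

def hcl_volume_per_hno3_volume(molar_hno3: float = 1.0, molar_hcl: float = 3.0) -> float:
    """Volumes of conc. HCl needed per volume of conc. HNO3 for the given molar ratio."""
    return (molar_hcl / CONC_HCL_M) / (molar_hno3 / CONC_HNO3_M)

print(round(hcl_volume_per_hno3_volume(), 1))  # ~4.1 volumes of HCl per volume of HNO3
```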
Preparation and decomposition
Upon mixing of concentrated hydrochloric acid and concentrated nitric acid, chemical reactions occur. These reactions result in the volatile products nitrosyl chloride and chlorine gas:
HNO3 + 3 HCl → NOCl + Cl2 + 2 H2O,
as evidenced by the fuming nature and characteristic yellow color of aqua regia. As the volatile products escape from solution, aqua regia loses its potency. Nitrosyl chloride (NOCl) can further decompose into nitric oxide (NO) and elemental chlorine (Cl2):
2 NOCl → 2 NO + Cl2
This dissociation is equilibrium-limited. Therefore, in addition to nitrosyl chloride and chlorine, the fumes over aqua regia also contain nitric oxide (NO). Because nitric oxide readily reacts with atmospheric oxygen, the gases produced also contain nitrogen dioxide, NO2 (red fume):
2 NO + O2 → 2 NO2
Applications
Aqua regia is primarily used to produce chloroauric acid, the electrolyte in the Wohlwill process for refining the highest purity (99.999%) gold.
Aqua regia is also used in etching and in specific analytic procedures. It is also used in some laboratories to clean glassware of organic compounds and metal particles. This method is preferred by most over the more traditional chromic acid bath for cleaning NMR tubes, because no traces of paramagnetic chromium can remain to spoil spectra. While chromic acid baths are discouraged because of the high toxicity of chromium and the potential for explosions, aqua regia is itself very corrosive and has been implicated in several explosions due to mishandling.
Because its components react quickly, resulting in its decomposition, aqua regia quickly loses its effectiveness (yet remains a strong acid), so its components are usually only mixed immediately before use.
Chemistry
Dissolving gold
Aqua regia dissolves gold, although neither constituent acid will do so alone. Nitric acid is a powerful oxidizer, which will dissolve a very small quantity of gold, forming gold(III) ions (Au3+). The hydrochloric acid provides a ready supply of chloride ions (Cl−), which react with the gold ions to produce tetrachloroaurate(III) anions ([AuCl4]−), also in solution. The reaction with hydrochloric acid is an equilibrium reaction that favors formation of tetrachloroaurate(III) anions. This results in a removal of gold ions from solution and allows further oxidation of gold to take place. The gold dissolves to become chloroauric acid. In addition, gold may be dissolved by the chlorine present in aqua regia. Appropriate equations are:
Au + 3 HNO3 + 4 HCl → [AuCl4]− + 3 NO2 + H3O+ + 2 H2O
or
Au + HNO3 + 4 HCl → [AuCl4]− + NO + H3O+ + H2O.
Solid tetrachloroauric acid may be isolated by evaporating the excess aqua regia, and decomposing the residual nitric acid by repeatedly heating the solution with additional hydrochloric acid. That step removes the nitric acid (see decomposition of aqua regia). If elemental gold is desired, it may be selectively reduced with reducing agents such as sulfur dioxide, hydrazine, oxalic acid, etc. The equation for the reduction of oxidized gold (Au3+) by sulfur dioxide (SO2) is the following:
2 Au3+ + 3 SO2 + 6 H2O → 2 Au + 3 SO42− + 12 H+
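For a sense of scale, the back-of-the-envelope sketch below applies the idealized nitrogen dioxide stoichiometry given above to estimate the minimum volumes of each concentrated acid per gram of gold; the stock molarities are assumed nominal values, and in practice an excess of aqua regia is used.

```python
# Back-of-the-envelope sketch (illustrative only): minimum amounts of each
# concentrated acid implied by the idealized stoichiometry
# Au + 3 HNO3 + 4 HCl -> [AuCl4]- + 3 NO2 + H3O+ + 2 H2O.
# Stock molarities are assumed nominal values; real work uses an excess.

M_AU = 196.97        # g/mol, molar mass of gold
CONC_HNO3_M = 15.7   # mol/L, concentrated nitric acid (assumed)
CONC_HCL_M = 11.6    # mol/L, concentrated hydrochloric acid (assumed)

def min_acid_volumes_ml(gold_grams: float) -> tuple[float, float]:
    """Return (mL conc. HNO3, mL conc. HCl) needed for the idealized reaction."""
    n_au = gold_grams / M_AU
    v_hno3_ml = 3 * n_au / CONC_HNO3_M * 1000.0
    v_hcl_ml = 4 * n_au / CONC_HCL_M * 1000.0
    return v_hno3_ml, v_hcl_ml

print(min_acid_volumes_ml(1.0))  # roughly (0.97, 1.75) mL per gram of gold
```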
Dissolving platinum
Similar equations can be written for platinum. As with gold, the oxidation reaction can be written with either nitric oxide or nitrogen dioxide as the nitrogen oxide product:
Pt + 4 NO3− + 8 H+ → Pt4+ + 4 NO2 + 4 H2O
3 Pt + 4 NO3− + 16 H+ → 3 Pt4+ + 4 NO + 8 H2O
The oxidized platinum ion then reacts with chloride ions, resulting in the chloroplatinate ion:
Pt4+ + 6 Cl− → [PtCl6]2−
Experimental evidence reveals that the reaction of platinum with aqua regia is considerably more complex. The initial reactions produce a mixture of chloroplatinous acid (H2PtCl4) and nitrosoplatinic chloride, the latter being a solid product. If full dissolution of the platinum is desired, repeated extractions of the residual solids with concentrated hydrochloric acid must be performed:
and
The chloroplatinous acid can be oxidized to chloroplatinic acid by saturating the solution with molecular chlorine (Cl2) while heating:
H2PtCl4 + Cl2 → H2PtCl6
Dissolving platinum solids in aqua regia was the mode of discovery for the densest metals, iridium and osmium, both of which are found in platinum ores and are not dissolved by aqua regia, instead collecting as insoluble metallic powder (elemental Ir, Os) on the base of the vessel.
Precipitating dissolved platinum
As a practical matter, when platinum group metals are purified through dissolution in aqua regia, gold (commonly associated with PGMs) is precipitated by treatment with iron(II) chloride. Platinum in the filtrate, as hexachloroplatinate(IV), is converted to ammonium hexachloroplatinate by the addition of ammonium chloride. This ammonium salt is extremely insoluble, and it can be filtered off. Ignition (strong heating) converts it to platinum metal:
3 (NH4)2PtCl6 → 3 Pt + 2 NH4Cl + 2 N2 + 16 HCl
Unprecipitated hexachloroplatinate(IV) is reduced with elemental zinc, and a similar method is suitable for small scale recovery of platinum from laboratory residues.
Reaction with tin
Aqua regia reacts with tin to form tin(IV) chloride, containing tin in its highest oxidation state:
Reaction with other substances
It can react with iron pyrite to form iron(III) chloride:
History
Aqua regia first appeared in the De inventione veritatis ("On the Discovery of Truth") by pseudo-Geber (after ), who produced it by adding sal ammoniac (ammonium chloride) to nitric acid. The preparation of aqua regia by directly mixing hydrochloric acid with nitric acid only became possible after the discovery in the late sixteenth century of the process by which free hydrochloric acid can be produced.
The third of Basil Valentine's keys () shows a dragon in the foreground and a fox eating a rooster in the background. The rooster symbolizes gold (from its association with sunrise and the sun's association with gold), and the fox represents aqua regia. The repetitive dissolving, heating, and redissolving (the rooster eating the fox eating the rooster) leads to the buildup of chlorine gas in the flask. The gold then crystallizes in the form of gold(III) chloride, whose red crystals Basil called "the rose of our masters" and "the red dragon's blood". The reaction was not reported again in the chemical literature until 1895.
Antoine Lavoisier called aqua regia nitro-muriatic acid in 1789.
When Germany invaded Denmark in World War II, Hungarian chemist George de Hevesy dissolved the gold Nobel Prizes of German physicists Max von Laue (1914) and James Franck (1925) in aqua regia to prevent the Nazis from confiscating them. The German government had prohibited Germans from accepting or keeping any Nobel Prize after jailed peace activist Carl von Ossietzky had received the Nobel Peace Prize in 1935. De Hevesy placed the resulting solution on a shelf in his laboratory at the Niels Bohr Institute. It was subsequently ignored by the Nazis who thought the jar—one of perhaps hundreds on the shelving—contained common chemicals. After the war, de Hevesy returned to find the solution undisturbed and precipitated the gold out of the acid. The gold was returned to the Royal Swedish Academy of Sciences and the Nobel Foundation. They re-cast the medals and again presented them to Laue and Franck.
| Physical sciences | Specific acids | Chemistry |
62996 | https://en.wikipedia.org/wiki/Theophylline | Theophylline | Theophylline, also known as 1,3-dimethylxanthine, is a drug that inhibits phosphodiesterase and blocks adenosine receptors. It is used to treat chronic obstructive pulmonary disease (COPD) and asthma. Its pharmacology is similar to other methylxanthine drugs (e.g., theobromine and caffeine). Trace amounts of theophylline are naturally present in tea, coffee, chocolate, yerba maté, guarana, and kola nut.
The name 'theophylline' derives from Thea (the former genus name for tea) + Greek φύλλον (phúllon, "leaf") + the suffix -ine.
Medical uses
The main actions of theophylline involve:
relaxing bronchial smooth muscle
increasing heart muscle contractility and efficiency (positive inotrope)
increasing heart rate (positive chronotropic)
increasing blood pressure
increasing renal blood flow
anti-inflammatory effects
central nervous system stimulatory effect, mainly on the medullary respiratory center
The main therapeutic uses of theophylline are for treating:
Chronic obstructive pulmonary disease (COPD)
Asthma
infant apnea
Blocking the action of adenosine, an inhibitory neurotransmitter that induces sleep, contracts smooth muscle and relaxes cardiac muscle.
Treatment of post-dural puncture headache.
Performance enhancement in sports
Theophylline and other methylxanthines are often used for their performance-enhancing effects in sports, as these drugs increase alertness, cause bronchodilation, and increase the rate and force of heart contraction. There is conflicting information about the value of theophylline and other methylxanthines as prophylaxis against exercise-induced asthma.
Adverse effects
The use of theophylline is complicated by its interaction with various drugs and by the fact that it has a narrow therapeutic window (<20 mcg/mL). Its use must be monitored by direct measurement of serum theophylline levels to avoid toxicity. It can also cause nausea, diarrhea, increase in heart rate, abnormal heart rhythms, and CNS excitation (headaches, insomnia, irritability, dizziness and lightheadedness). Seizures can also occur in severe cases of toxicity, and are considered to be a neurological emergency.
Its toxicity is increased by erythromycin, cimetidine, and fluoroquinolones, such as ciprofloxacin. Some lipid-based formulations of theophylline can result in toxic theophylline levels when taken with fatty meals, an effect called dose dumping, but this does not occur with most formulations of theophylline. Theophylline toxicity can be treated with beta blockers. In addition to seizures, tachyarrhythmias are a major concern. Theophylline should not be used in combination with the SSRI fluvoxamine.
Spectroscopy
UV-visible
Theophylline is soluble in 0.1N NaOH and absorbs maximally at 277 nm with an extinction coefficient of 10,200 (cm−1 M−1).
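As a hypothetical worked example of how the quoted extinction coefficient is used, the sketch below applies the Beer–Lambert law (A = εlc) to convert an assumed absorbance reading at 277 nm into a concentration; the absorbance value and the 1 cm path length are illustrative assumptions, not figures from the text.

```python
# Hypothetical worked example: concentration from absorbance via the
# Beer-Lambert law A = epsilon * l * c, using the 10,200 M^-1 cm^-1 value
# quoted above for theophylline at 277 nm. The absorbance reading and the
# 1 cm path length are assumptions for illustration.

EPSILON_277 = 10_200.0   # M^-1 cm^-1, from the text
PATH_LENGTH_CM = 1.0     # cm, standard cuvette (assumed)

def concentration_molar(absorbance: float) -> float:
    """Concentration in mol/L implied by an absorbance at 277 nm."""
    return absorbance / (EPSILON_277 * PATH_LENGTH_CM)

absorbance_reading = 0.51                 # assumed value
c = concentration_molar(absorbance_reading)
print(f"{c * 1e6:.0f} micromolar")        # ~50 micromolar
```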
Proton NMR
The characteristic signals distinguishing theophylline from related methylxanthines appear at approximately 3.23δ and 3.41δ, corresponding to theophylline's two N-methyl groups. The remaining proton signal, at 8.01δ, corresponds to the non-exchangeable proton on the imidazole ring. The exchangeable proton on the ring nitrogen is variable and only exhibits a signal under certain conditions.
13C-NMR
The unique methylation of theophylline corresponds to the following signals: 27.7δ and 29.9δ. The remaining signals correspond to carbons characteristic of the xanthine backbone.
Natural occurrences
Theophylline is naturally found in cocoa beans. Amounts as high as 3.7 mg/g have been reported in Criollo cocoa beans.
Trace amounts of theophylline are also found in brewed tea, although brewed tea provides only about 1 mg/L, which is significantly less than a therapeutic dose.
Trace amounts of theophylline are also found in guarana (Paullinia cupana) and in kola nuts.
Pharmacology
Pharmacodynamics
Like other methylated xanthine derivatives, theophylline is both a
competitive nonselective phosphodiesterase inhibitor which increases intracellular levels of cAMP and cGMP, activates PKA, inhibits TNF-alpha and inhibits leukotriene synthesis, and reduces inflammation and innate immunity
nonselective adenosine receptor antagonist, antagonizing A1, A2, and A3 receptors almost equally, which explains many of its cardiac effects. Theophylline activates histone deacetylases.
Pharmacokinetics
Absorption
When theophylline is administered intravenously, bioavailability is 100%.
Distribution
Theophylline is distributed in the extracellular fluid, in the placenta, in the mother's milk and in the central nervous system. The volume of distribution is 0.5 L/kg. The protein binding is 40%.
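As a rough illustration of how a volume of distribution is used, the sketch below applies the standard loading-dose relationship (dose ≈ Vd × target concentration × body weight) with the 0.5 L/kg figure quoted above; the 10 mg/L target and the 70 kg body weight are assumed example values, and real dosing depends on clinical factors not covered here.

```python
# Rough sketch of the standard loading-dose relationship dose = Vd * C_target,
# using the 0.5 L/kg volume of distribution quoted above. The 10 mg/L target
# concentration and 70 kg body weight are assumed example values only.

VD_L_PER_KG = 0.5            # L/kg, from the text
C_TARGET_MG_PER_L = 10.0     # mg/L, assumed example target

def iv_loading_dose_mg(weight_kg: float) -> float:
    """Approximate IV loading dose (mg) to reach the target serum concentration."""
    return VD_L_PER_KG * weight_kg * C_TARGET_MG_PER_L

print(iv_loading_dose_mg(70.0))  # ~350 mg for a 70 kg adult
```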
Metabolism
Theophylline is metabolized extensively in the liver. It undergoes N-demethylation via cytochrome P450 1A2. It is metabolized by parallel first order and Michaelis-Menten pathways. Metabolism may become saturated (non-linear), even within the therapeutic range. Small dose increases may result in disproportionately large increases in serum concentration. Methylation to caffeine is also important in the infant population. Smokers and people with hepatic (liver) impairment metabolize it differently. Cigarette and marijuana smoking induces metabolism of theophylline, increasing the drug's metabolic clearance.
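To illustrate why saturable (Michaelis–Menten) elimination makes small dose increases produce disproportionately large rises in serum concentration, here is a minimal sketch comparing first-order and Michaelis–Menten steady states; the Vmax, Km and clearance values are illustrative assumptions, not theophylline parameters from this article.

```python
def css_first_order(rate_in: float, clearance: float) -> float:
    """First-order elimination: steady-state level scales linearly with dose rate."""
    return rate_in / clearance

def css_michaelis_menten(rate_in: float, vmax: float, km: float) -> float:
    """Saturable elimination: Css = Km*R/(Vmax - R), rising steeply as R nears Vmax."""
    if rate_in >= vmax:
        raise ValueError("dosing rate exceeds maximum elimination capacity")
    return km * rate_in / (vmax - rate_in)

VMAX, KM, CLEARANCE = 40.0, 10.0, 3.0   # mg/h, mg/L, L/h -- assumed values
for rate in (10.0, 20.0, 30.0):         # mg/h dosing rates
    print(rate, round(css_first_order(rate, CLEARANCE), 1),
          round(css_michaelis_menten(rate, VMAX, KM), 1))
# First-order levels rise in proportion (3.3, 6.7, 10.0 mg/L), while the
# Michaelis-Menten levels rise much faster (3.3, 10.0, 30.0 mg/L).
```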
Excretion
Theophylline is excreted unchanged in the urine (up to 10%). Clearance of the drug is increased in children (ages 1 to 12), teenagers (12 to 16), adult smokers and elderly smokers, as well as in cystic fibrosis and hyperthyroidism. Clearance of the drug is decreased in the elderly and in acute congestive heart failure, cirrhosis, hypothyroidism and febrile viral illnesses.
The elimination half-life varies: 30 hours for premature neonates, 24 hours for neonates, 3.5 hours for children ages 1 to 9, 8 hours for adult non-smokers, 5 hours for adult smokers, 24 hours for those with hepatic impairment, 12 hours for those with congestive heart failure NYHA class I-II, 24 hours for those with congestive heart failure NYHA class III-IV, 12 hours for the elderly.
History
Theophylline was first extracted from tea leaves and chemically identified around 1888 by the German biologist Albrecht Kossel. Seven years later, a chemical synthesis starting with 1,3-dimethyluric acid was described by Emil Fischer and Lorenz Ach. The Traube purine synthesis, an alternative method to synthesize theophylline, was introduced in 1900 by another German scientist, Wilhelm Traube. Theophylline's first clinical use came in 1902 as a diuretic. It took an additional 20 years until it was first reported as an asthma treatment. The drug was prescribed in a syrup up to the 1970s as Theostat 20 and Theostat 80, and by the early 1980s in a tablet form called Quibron.
| Physical sciences | Alkaloids | Chemistry |
63025 | https://en.wikipedia.org/wiki/Variable%20star | Variable star | A variable star is a star whose brightness as seen from Earth (its apparent magnitude) changes systematically with time. This variation may be caused by a change in emitted light or by something partly blocking the light, so variable stars are classified as either:
Intrinsic variables, whose luminosity actually changes periodically; for example, because the star swells and shrinks.
Extrinsic variables, whose apparent changes in brightness are due to changes in the amount of their light that can reach Earth; for example, because the star has an orbiting companion that sometimes eclipses it.
Many, possibly most, stars exhibit at least some oscillation in luminosity: the energy output of the Sun, for example, varies by about 0.1% over an 11-year solar cycle.
Discovery
An ancient Egyptian calendar of lucky and unlucky days composed some 3,200 years ago may be the oldest preserved historical document of the discovery of a variable star, the eclipsing binary Algol. Aboriginal Australians are also known to have observed the variability of Betelgeuse and Antares, incorporating these brightness changes into narratives that are passed down through oral tradition.
In the modern era, the first variable star was identified in 1638, when Johannes Holwarda noticed that Omicron Ceti (later named Mira) pulsated in a cycle taking 11 months; the star had previously been described as a nova by David Fabricius in 1596. This discovery, combined with supernovae observed in 1572 and 1604, proved that the starry sky was not eternally invariable as Aristotle and other ancient philosophers had taught. In this way, the discovery of variable stars contributed to the astronomical revolution of the sixteenth and early seventeenth centuries.
The second variable star to be described was the eclipsing variable Algol, by Geminiano Montanari in 1669; John Goodricke gave the correct explanation of its variability in 1784. Chi Cygni was identified in 1686 by G. Kirch, then R Hydrae in 1704 by G. D. Maraldi. By 1786, ten variable stars were known. John Goodricke himself discovered Delta Cephei and Beta Lyrae. Since 1850, the number of known variable stars has increased rapidly, especially after 1890 when it became possible to identify variable stars by means of photography.
In 1930, astrophysicist Cecilia Payne published the book The Stars of High Luminosity, in which she made numerous observations of variable stars, paying particular attention to Cepheid variables. Her analyses and observations of variable stars, carried out with her husband, Sergei Gaposchkin, laid the basis for all subsequent work on the subject.
The latest edition of the General Catalogue of Variable Stars (2008) lists more than 46,000 variable stars in the Milky Way, as well as 10,000 in other galaxies, and over 10,000 'suspected' variables.
Detecting variability
The most common kinds of variability involve changes in brightness, but other types of variability also occur, in particular changes in the spectrum. By combining light curve data with observed spectral changes, astronomers are often able to explain why a particular star is variable.
Variable star observations
Variable stars are generally analysed using photometry, spectrophotometry and spectroscopy. Measurements of their changes in brightness can be plotted to produce light curves. For regular variables, the period of variation and its amplitude can be very well established; for many variable stars, though, these quantities may vary slowly over time, or even from one period to the next. Peak brightnesses in the light curve are known as maxima, while troughs are known as minima.
Amateur astronomers can do useful scientific study of variable stars by visually comparing the star with other stars within the same telescopic field of view of which the magnitudes are known and constant. By estimating the variable's magnitude and noting the time of observation a visual lightcurve can be constructed. The American Association of Variable Star Observers collects such observations from participants around the world and shares the data with the scientific community.
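To make the period-finding step concrete, here is a minimal, hypothetical sketch of how such a light curve (observation times and magnitudes) might be searched for a period with a Lomb-Scargle periodogram; the data below are synthetic, and the 5.37-day period, amplitude and noise level are arbitrary assumptions rather than values from this article.

```python
# Hypothetical sketch: searching an unevenly sampled light curve for a period
# with a Lomb-Scargle periodogram. The data are synthetic and the parameters
# are assumed; real observations might come from archives such as the AAVSO's.
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(0)
true_period = 5.37                               # days (assumed)
t = np.sort(rng.uniform(0.0, 200.0, 150))        # observation times in days
mag = 8.0 + 0.4 * np.sin(2 * np.pi * t / true_period) + rng.normal(0.0, 0.05, t.size)

frequency, power = LombScargle(t, mag).autopower()
best_period = 1.0 / frequency[np.argmax(power)]
print(f"best-fit period ~ {best_period:.2f} days")
```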
From the light curve the following data are derived:
are the brightness variations periodical, semiperiodical, irregular, or unique?
what is the period of the brightness fluctuations?
what is the shape of the light curve (symmetrical or not, angular or smoothly varying, does each cycle have one minimum or several, etcetera)?
From the spectrum the following data are derived:
what kind of star is it: what is its temperature, its luminosity class (dwarf star, giant star, supergiant, etc.)?
is it a single star, or a binary? (the combined spectrum of a binary star may show elements from the spectra of each of the member stars)
does the spectrum change with time? (for example, the star may turn hotter and cooler periodically)
changes in brightness may depend strongly on the part of the spectrum that is observed (for example, large variations in visible light but hardly any changes in the infrared)
if the wavelengths of spectral lines are shifted this points to movements (for example, a periodical swelling and shrinking of the star, or its rotation, or an expanding gas shell) (Doppler effect)
strong magnetic fields on the star betray themselves in the spectrum
abnormal emission or absorption lines may be indication of a hot stellar atmosphere, or gas clouds surrounding the star.
In very few cases it is possible to make pictures of a stellar disk. These may show darker spots on its surface.
Interpretation of observations
Combining light curves with spectral data often gives a clue as to the changes that occur in a variable star. For example, evidence for a pulsating star is found in its shifting spectrum because its surface periodically moves toward and away from us, with the same frequency as its changing brightness.
About two-thirds of all variable stars appear to be pulsating. In the 1930s astronomer Arthur Stanley Eddington showed that the mathematical equations that describe the interior of a star may lead to instabilities that cause a star to pulsate. The most common type of instability is related to oscillations in the degree of ionization in outer, convective layers of the star.
When the star is in the swelling phase, its outer layers expand, causing them to cool. Because of the decreasing temperature the degree of ionization also decreases. This makes the gas more transparent, and thus makes it easier for the star to radiate its energy. This in turn makes the star start to contract. As the gas is thereby compressed, it is heated and the degree of ionization again increases. This makes the gas more opaque, and radiation temporarily becomes captured in the gas. This heats the gas further, leading it to expand once again. Thus a cycle of expansion and compression (swelling and shrinking) is maintained.
The pulsation of cepheids is known to be driven by oscillations in the ionization of helium (from He++ to He+ and back to He++).
Nomenclature
In a given constellation, the first variable stars discovered were designated with letters R through Z, e.g. R Andromedae. This system of nomenclature was developed by Friedrich W. Argelander, who gave the first previously unnamed variable in a constellation the letter R, the first letter not used by Bayer. Letters RR through RZ, SS through SZ, up to ZZ are used for the next discoveries, e.g. RR Lyrae. Later discoveries used letters AA through AZ, BB through BZ, and up to QQ through QZ (with J omitted). Once those 334 combinations are exhausted, variables are numbered in order of discovery, starting with the prefixed V335 onwards.
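The naming sequence above is easy to reproduce programmatically; the following sketch (an illustration, not an official tool) generates designations in discovery order and checks that the letter codes total 334 before the numbering switches to V335.

```python
from string import ascii_uppercase as LETTERS

def variable_star_designations(n: int) -> list[str]:
    """Return the first n variable-star designations in discovery order."""
    names = [c for c in LETTERS if c >= "R"]                      # R..Z
    names += [a + b for a in LETTERS if a >= "R"                  # RR..ZZ
                    for b in LETTERS if b >= a]
    names += [a + b for a in LETTERS if a < "R" and a != "J"      # AA..QZ, J omitted
                    for b in LETTERS if b >= a and b != "J"]
    assert len(names) == 334                                      # as stated above
    names += [f"V{i}" for i in range(335, 335 + max(0, n - 334))] # V335, V336, ...
    return names[:n]

ds = variable_star_designations(336)
print(ds[:5], ds[333], ds[334])   # ['R', 'S', 'T', 'U', 'V'] QZ V335
```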
Classification
Variable stars may be either intrinsic or extrinsic.
Intrinsic variable stars: stars where the variability is caused by changes in the physical properties of the stars themselves. This category can be divided into three subgroups.
Pulsating variables, stars whose radius alternately expands and contracts as part of their natural evolutionary ageing processes.
Eruptive variables, stars that experience eruptions on their surfaces, such as flares or mass ejections.
Cataclysmic or explosive variables, stars that undergo a cataclysmic change in their properties like novae and supernovae.
Extrinsic variable stars: stars where the variability is caused by external properties like rotation or eclipses. There are two main subgroups.
Eclipsing binaries, double stars or planetary systems where, as seen from Earth's vantage point, the stars occasionally eclipse one another as they orbit, or the planet eclipses its star.
Rotating variables, stars whose variability is caused by phenomena related to their rotation. Examples are stars with extreme "sunspots" which affect the apparent brightness or stars that have fast rotation speeds causing them to become ellipsoidal in shape.
These subgroups themselves are further divided into specific types of variable stars that are usually named after their prototype. For example, dwarf novae are designated U Geminorum stars after the first recognized star in the class, U Geminorum.
Intrinsic variable stars
Examples of types within these divisions are given below.
Pulsating variable stars
Pulsating stars swell and shrink, affecting their brightness and spectrum. Pulsations are generally split into: radial, where the entire star expands and shrinks as a whole; and non-radial, where one part of the star expands while another part shrinks.
Depending on the type of pulsation and its location within the star, there is a natural or fundamental frequency which determines the period of the star. Stars may also pulsate in a harmonic or overtone which is a higher frequency, corresponding to a shorter period. Pulsating variable stars sometimes have a single well-defined period, but often they pulsate simultaneously with multiple frequencies and complex analysis is required to determine the separate interfering periods. In some cases, the pulsations do not have a defined frequency, causing a random variation, referred to as stochastic. The study of stellar interiors using their pulsations is known as asteroseismology.
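As a minimal sketch of such period analysis (synthetic, evenly sampled data; real observations are unevenly sampled and are usually analysed with Lomb–Scargle periodograms instead), two superimposed pulsation periods can be recovered from a light curve with a simple Fourier transform:

import numpy as np

t = np.arange(0.0, 100.0, 0.01)                       # time in days, evenly sampled
flux = (1.0 + 0.05 * np.sin(2 * np.pi * t / 2.5)      # pulsation with a 2.5-day period
            + 0.02 * np.sin(2 * np.pi * t / 0.7))     # weaker pulsation with a 0.7-day period
power = np.abs(np.fft.rfft(flux - flux.mean())) ** 2  # periodogram of the light curve
freqs = np.fft.rfftfreq(t.size, d=0.01)               # frequencies in cycles per day
strongest = freqs[np.argsort(power)[-2:]]             # two dominant frequencies
print(sorted(1.0 / strongest))                        # recovered periods, roughly [0.7, 2.5] days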
The expansion phase of a pulsation is caused by the blocking of the internal energy flow by material with a high opacity, but this must occur at a particular depth of the star to create visible pulsations. If the expansion occurs below a convective zone then no variation will be visible at the surface. If the expansion occurs too close to the surface the restoring force will be too weak to create a pulsation. The restoring force to create the contraction phase of a pulsation can be pressure if the pulsation occurs in a non-degenerate layer deep inside a star, and this is called an acoustic or pressure mode of pulsation, abbreviated to p-mode. In other cases, the restoring force is gravity and this is called a g-mode. Pulsating variable stars typically pulsate in only one of these modes.
Cepheids and cepheid-like variables
This group consists of several kinds of pulsating stars, all found on the instability strip, that swell and shrink very regularly, driven by the star's own resonance, generally at the fundamental frequency. Generally the Eddington valve mechanism for pulsating variables is believed to account for cepheid-like pulsations. Each of the subgroups on the instability strip has a fixed relationship between period and absolute magnitude, as well as a relation between period and mean density of the star. The period-luminosity relationship was first established for Delta Cepheids by Henrietta Leavitt, and makes these high luminosity Cepheids very useful for determining distances to galaxies within the Local Group and beyond. Edwin Hubble used this method to prove that the so-called spiral nebulae are in fact distant galaxies.
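As a sketch of how such a relation yields distances (the Leavitt-law coefficients below are approximate textbook values for classical Cepheids in the V band, the function name is hypothetical, and interstellar extinction is ignored), combining it with the distance modulus m − M = 5 log10(d) − 5 gives:

import math

def cepheid_distance_parsecs(period_days, mean_apparent_mag_v):
    # Approximate period-luminosity (Leavitt) relation; coefficients are illustrative only
    absolute_mag_v = -2.43 * (math.log10(period_days) - 1.0) - 4.05
    distance_modulus = mean_apparent_mag_v - absolute_mag_v
    return 10 ** (distance_modulus / 5.0 + 1.0)

# A 10-day Cepheid observed at a mean apparent magnitude V = 20 comes out near 650,000 parsecs,
# comparable to the distances of the Local Group spiral galaxies.
print(round(cepheid_distance_parsecs(10.0, 20.0)))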
The Cepheids are named only for Delta Cephei, while a completely separate class of variables is named after Beta Cephei.
Classical Cepheid variables
Classical Cepheids (or Delta Cephei variables) are population I (young, massive, and luminous) yellow supergiants which undergo pulsations with very regular periods on the order of days to months. On September 10, 1784, Edward Pigott detected the variability of Eta Aquilae, the first known representative of the class of Cepheid variables. However, the namesake for classical Cepheids is the star Delta Cephei, discovered to be variable by John Goodricke a few months later.
Type II Cepheids
Type II Cepheids (historically termed W Virginis stars) have extremely regular light pulsations and a luminosity relation much like the δ Cephei variables, so initially they were confused with the latter category. Type II Cepheids belong to the older Population II, unlike the type I Cepheids. The Type II have somewhat lower metallicity, much lower mass, somewhat lower luminosity, and a slightly offset period versus luminosity relationship, so it is always important to know which type of star is being observed.
RR Lyrae variables
These stars are somewhat similar to Cepheids, but are not as luminous and have shorter periods. They are older than type I Cepheids, belonging to Population II, but of lower mass than type II Cepheids. Due to their common occurrence in globular clusters, they are occasionally referred to as cluster Cepheids. They also have a well established period-luminosity relationship, and so are also useful as distance indicators. These A-type stars vary by about 0.2–2 magnitudes (20% to over 500% change in luminosity) over a period of several hours to a day or more.
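The luminosity percentages quoted alongside these amplitudes follow directly from the definition of the magnitude scale, in which five magnitudes correspond to a factor of 100 in brightness; a minimal sketch of the conversion:

def magnitude_change_to_brightness_ratio(delta_mag):
    # Brightness ratio corresponding to a change of delta_mag magnitudes
    return 10 ** (0.4 * delta_mag)

# 0.2 mag -> about 1.2x (a 20% change); 2.0 mag -> about 6.3x (over a 500% change),
# matching the figures quoted for RR Lyrae stars above.
for dm in (0.2, 2.0):
    print(dm, round(magnitude_change_to_brightness_ratio(dm), 2))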
Delta Scuti variables
Delta Scuti (δ Sct) variables are similar to Cepheids but much fainter and with much shorter periods. They were once known as Dwarf Cepheids. They often show many superimposed periods, which combine to form an extremely complex light curve. The typical δ Scuti star has an amplitude of 0.003–0.9 magnitudes (0.3% to about 130% change in luminosity) and a period of 0.01–0.2 days. Their spectral type is usually between A0 and F5.
SX Phoenicis variables
These stars of spectral type A2 to F5, similar to δ Scuti variables, are found mainly in globular clusters. They exhibit fluctuations in their brightness in the order of 0.7 magnitude (about 100% change in luminosity) or so every 1 to 2 hours.
Rapidly oscillating Ap variables
These stars, of spectral type A or occasionally F0, are a sub-class of δ Scuti variables found on the main sequence. They have extremely rapid variations with periods of a few minutes and amplitudes of a few thousandths of a magnitude.
Long period variables
The long period variables are cool evolved stars that pulsate with periods in the range of weeks to several years.
Mira variables
Mira variables are asymptotic giant branch (AGB) red giants. Over periods of many months they fade and brighten by between 2.5 and 11 magnitudes, roughly a 10-fold to 25,000-fold change in luminosity. Mira itself, also known as Omicron Ceti (ο Cet), varies in brightness from almost 2nd magnitude to as faint as 10th magnitude with a period of roughly 332 days. The very large visual amplitudes are mainly due to the shifting of energy output between visible and infrared as the temperature of the star changes. In a few cases, Mira variables show dramatic period changes over a period of decades, thought to be related to the thermal pulsing cycle of the most advanced AGB stars.
Semiregular variables
These are red giants or supergiants. Semiregular variables may show a definite period on occasion, but more often show less well-defined variations that can sometimes be resolved into multiple periods. A well-known example of a semiregular variable is Betelgeuse, which varies from about magnitude +0.2 to +1.2 (a factor of 2.5 change in luminosity). At least some of the semi-regular variables are very closely related to Mira variables, the only difference possibly being that they pulsate in a different harmonic.
Slow irregular variables
These are red giants or supergiants with little or no detectable periodicity. Some are poorly studied semiregular variables, often with multiple periods, but others may simply be chaotic.
Long secondary period variables
Many variable red giants and supergiants show variations over several hundred to several thousand days. The brightness may change by several magnitudes, although it is often much smaller, with the more rapid primary variations superimposed on it. The reasons for this type of variation are not clearly understood, being variously ascribed to pulsations, binarity, and stellar rotation.
Beta Cephei variables
Beta Cephei (β Cep) variables (sometimes called Beta Canis Majoris variables, especially in Europe) undergo short period pulsations in the order of 0.1–0.6 days with an amplitude of 0.01–0.3 magnitudes (1% to 30% change in luminosity). They are at their brightest during minimum contraction. Many stars of this kind exhibit multiple pulsation periods.
Slowly pulsating B-type stars
Slowly pulsating B (SPB) stars are hot main-sequence stars slightly less luminous than the Beta Cephei stars, with longer periods and larger amplitudes.
Very rapidly pulsating hot (subdwarf B) stars
The prototype of this rare class is V361 Hydrae, a 15th magnitude subdwarf B star. They pulsate with periods of a few minutes and may pulsate simultaneously with multiple periods. They have amplitudes of a few hundredths of a magnitude and are given the GCVS acronym RPHS. They are p-mode pulsators.
PV Telescopii variables
Stars in this class are type Bp supergiants with a period of 0.1–1 day and an amplitude of 0.1 magnitude on average. Their spectra are peculiar, with weak hydrogen lines but unusually strong carbon and helium lines; they are a type of extreme helium star.
RV Tauri variables
These are yellow supergiant stars (actually low mass post-AGB stars at the most luminous stage of their lives) which have alternating deep and shallow minima. This double-peaked variation typically has periods of 30–100 days and amplitudes of 3–4 magnitudes. Superimposed on this variation, there may be long-term variations over periods of several years. Their spectra are of type F or G at maximum light and type K or M at minimum brightness. They lie near the instability strip, cooler than type I Cepheids but more luminous than type II Cepheids. Their pulsations are caused by the same basic mechanisms related to helium opacity, but they are at a very different stage of their lives.
Alpha Cygni variables
Alpha Cygni (α Cyg) variables are nonradially pulsating supergiants of spectral classes Bep to AepIa. Their periods range from several days to several weeks, and their amplitudes of variation are typically of the order of 0.1 magnitudes. The light changes, which often seem irregular, are caused by the superposition of many oscillations with close periods. Deneb, in the constellation of Cygnus, is the prototype of this class.
Gamma Doradus variables
Gamma Doradus (γ Dor) variables are non-radially pulsating main-sequence stars of spectral classes F to late A. Their periods are around one day and their amplitudes typically of the order of 0.1 magnitudes.
Pulsating white dwarfs
These non-radially pulsating stars have short periods of hundreds to thousands of seconds with tiny fluctuations of 0.001 to 0.2 magnitudes. Known types of pulsating white dwarf (or pre-white dwarf) include the DAV, or ZZ Ceti, stars, with hydrogen-dominated atmospheres and the spectral type DA; DBV, or V777 Her, stars, with helium-dominated atmospheres and the spectral type DB; and GW Vir stars, with atmospheres dominated by helium, carbon, and oxygen. GW Vir stars may be subdivided into DOV and PNNV stars.
Solar-like oscillations
The Sun oscillates with very low amplitude in a large number of modes having periods around 5 minutes. The study of these oscillations is known as helioseismology. Oscillations in the Sun are driven stochastically by convection in its outer layers. The term solar-like oscillations is used to describe oscillations in other stars that are excited in the same way and the study of these oscillations is one of the main areas of active research in the field of asteroseismology.
BLAP variables
A Blue Large-Amplitude Pulsator (BLAP) is a pulsating star characterized by changes of 0.2 to 0.4 magnitudes with typical periods of 20 to 40 minutes.
Fast yellow pulsating supergiants
A fast yellow pulsating supergiant (FYPS) is a luminous yellow supergiant with pulsations shorter than a day. They are thought to have evolved beyond a red supergiant phase, but the mechanism for the pulsations is unknown. The class was named in 2020 through analysis of TESS observations.
Eruptive variable stars
Eruptive variable stars show irregular or semi-regular brightness variations caused by material being lost from the star, or in some cases being accreted to it. Despite the name, these are not explosive events.
Protostars
Protostars are young objects that have not yet completed the process of contraction from a gas nebula to a veritable star. Most protostars exhibit irregular brightness variations.
Herbig Ae/Be stars
Variability of more massive (2–8 solar mass) Herbig Ae/Be stars is thought to be due to gas and dust clumps orbiting in their circumstellar disks.
Orion variables
Orion variables are young, hot pre–main-sequence stars usually embedded in nebulosity. They have irregular periods with amplitudes of several magnitudes. A well-known subtype of Orion variables are the T Tauri variables. Variability of T Tauri stars is due to spots on the stellar surface and to gas and dust clumps orbiting in their circumstellar disks.
FU Orionis variables
These stars reside in reflection nebulae and show gradual increases in their luminosity of the order of 6 magnitudes followed by a lengthy phase of constant brightness. They then dim by 2 magnitudes (six times dimmer) or so over a period of many years. V1057 Cygni, for example, dimmed by 2.5 magnitudes (ten times dimmer) during an eleven-year period. FU Orionis variables are of spectral type A through G and are possibly an evolutionary phase in the life of T Tauri stars.
Giants and supergiants
Large stars lose their matter relatively easily. For this reason variability due to eruptions and mass loss is fairly common among giants and supergiants.
Luminous blue variables
Also known as the S Doradus variables, the most luminous stars known belong to this class. Examples include the hypergiants η Carinae and P Cygni. They have permanent high mass loss, but at intervals of years internal pulsations cause the star to exceed its Eddington limit and the mass loss increases hugely. Visual brightness increases although the overall luminosity is largely unchanged. Giant eruptions observed in a few LBVs do increase the luminosity, so much so that they have been tagged supernova impostors, and may be a different type of event.
Yellow hypergiants
These massive evolved stars are unstable due to their high luminosity and position above the instability strip, and they exhibit slow but sometimes large photometric and spectroscopic changes due to high mass loss and occasional larger eruptions, combined with secular variation on an observable timescale. The best known example is Rho Cassiopeiae.
R Coronae Borealis variables
While classed as eruptive variables, these stars do not undergo periodic increases in brightness. Instead they spend most of their time at maximum brightness, but at irregular intervals they suddenly fade by 1–9 magnitudes (2.5 to 4000 times dimmer) before recovering to their initial brightness over months to years. Most are classified as yellow supergiants by luminosity, although they are actually post-AGB stars, but there are both red and blue giant R CrB stars. R Coronae Borealis (R CrB) is the prototype star. DY Persei variables are a subclass of R CrB variables that have a periodic variability in addition to their eruptions.
Wolf–Rayet variables
Classic population I Wolf–Rayet stars are massive hot stars that sometimes show variability, probably due to several different causes including binary interactions and rotating gas clumps around the star. They exhibit broad emission line spectra with helium, nitrogen, carbon and oxygen lines. Variations in some stars appear to be stochastic while others show multiple periods.
Gamma Cassiopeiae variables
Gamma Cassiopeiae (γ Cas) variables are non-supergiant fast-rotating B class emission line-type stars that fluctuate irregularly by up to 1.5 magnitudes (4 fold change in luminosity) due to the ejection of matter at their equatorial regions caused by the rapid rotational velocity.
Flare stars
In main-sequence stars major eruptive variability is exceptional. It is common only among the flare stars, also known as the UV Ceti variables, very faint main-sequence stars which undergo regular flares. They increase in brightness by up to two magnitudes (six times brighter) in just a few seconds, and then fade back to normal brightness in half an hour or less. Several nearby red dwarfs are flare stars, including Proxima Centauri and Wolf 359.
RS Canum Venaticorum variables
These are close binary systems with highly active chromospheres, including huge sunspots and flares, believed to be enhanced by the close companion. Variability timescales range from days, close to the orbital period and sometimes also with eclipses, to years as sunspot activity varies.
Cataclysmic or explosive variable stars
Supernovae
Supernovae are the most dramatic type of cataclysmic variable, being some of the most energetic events in the universe. A supernova can briefly emit as much energy as an entire galaxy, brightening by more than 20 magnitudes (over one hundred million times brighter). The supernova explosion is caused by a white dwarf or a star core reaching a certain mass/density limit, the Chandrasekhar limit, causing the object to collapse in a fraction of a second. This collapse "bounces" and causes the star to explode and emit this enormous energy quantity. The outer layers of these stars are blown away at speeds of many thousands of kilometers per second. The expelled matter may form nebulae called supernova remnants. A well-known example of such a nebula is the Crab Nebula, left over from a supernova that was observed in China and elsewhere in 1054. The progenitor object may either disintegrate completely in the explosion, or, in the case of a massive star, the core can become a neutron star (generally a pulsar) or a black hole.
Supernovae can result from the death of an extremely massive star, many times heavier than the Sun. At the end of the life of this massive star, a non-fusible iron core is formed from fusion ashes. This iron core is pushed towards the Chandrasekhar limit until it exceeds it and collapses. One of the most studied supernovae of this type is SN 1987A in the Large Magellanic Cloud.
A supernova may also result from mass transfer onto a white dwarf from a star companion in a double star system. The infalling matter pushes the white dwarf over the Chandrasekhar limit. The absolute luminosity of this latter type is related to properties of its light curve, so that these supernovae can be used to establish the distance to other galaxies.
Luminous red nova
Luminous red novae are stellar explosions caused by the merger of two stars. They are not related to classical novae. They have a characteristic red appearance and very slow decline following the initial outburst.
Novae
Novae are also the result of dramatic explosions, but unlike supernovae do not result in the destruction of the progenitor star. Also unlike supernovae, novae ignite from the sudden onset of thermonuclear fusion, which under certain high pressure conditions (degenerate matter) accelerates explosively. They form in close binary systems, one component being a white dwarf accreting matter from the other ordinary star component, and may recur over periods of decades to centuries or millennia. Novae are categorised as fast, slow or very slow, depending on the behaviour of their light curve. Several naked-eye novae have been recorded, Nova Cygni 1975 being the brightest in recent history, reaching 2nd magnitude.
Dwarf novae
Dwarf novae are double stars involving a white dwarf in which matter transfer between the components gives rise to regular outbursts. There are three types of dwarf nova:
U Geminorum stars, which have outbursts lasting roughly 5–20 days followed by quiet periods of typically a few hundred days. During an outburst they brighten typically by 2–6 magnitudes. These stars are also known as SS Cygni variables after the variable in Cygnus which produces among the brightest and most frequent displays of this variable type.
Z Camelopardalis stars, in which occasional plateaux of brightness called standstills are seen, part way between maximum and minimum brightness.
SU Ursae Majoris stars, which undergo both frequent small outbursts, and rarer but larger superoutbursts. These binary systems usually have orbital periods of under 2.5 hours.
DQ Herculis variables
DQ Herculis systems are interacting binaries in which a low-mass star transfers mass to a highly magnetic white dwarf. The white dwarf spin period is significantly shorter than the binary orbital period and can sometimes be detected as a photometric periodicity. An accretion disk usually forms around the white dwarf, but its innermost regions are magnetically truncated by the white dwarf. Once captured by the white dwarf's magnetic field, the material from the inner disk travels along the magnetic field lines until it accretes. In extreme cases, the white dwarf's magnetism prevents the formation of an accretion disk.
AM Herculis variables
In these cataclysmic variables, the white dwarf's magnetic field is so strong that it synchronizes the white dwarf's spin period with the binary orbital period. Instead of forming an accretion disk, the accretion flow is channeled along the white dwarf's magnetic field lines until it impacts the white dwarf near a magnetic pole. Cyclotron radiation beamed from the accretion region can cause orbital variations of several magnitudes.
Z Andromedae variables
These symbiotic binary systems are composed of a red giant and a hot blue star enveloped in a cloud of gas and dust. They undergo nova-like outbursts with amplitudes of up to 4 magnitudes. The prototype for this class is Z Andromedae.
AM CVn variables
AM CVn variables are symbiotic binaries where a white dwarf is accreting helium-rich material from either another white dwarf, a helium star, or an evolved main-sequence star. They undergo complex variations, or at times no variations, with ultrashort periods.
Extrinsic variable stars
There are two main groups of extrinsic variables: rotating stars and eclipsing stars.
Rotating variable stars
Stars with sizeable sunspots may show significant variations in brightness as they rotate, and brighter areas of the surface are brought into view. Bright spots also occur at the magnetic poles of magnetic stars. Stars with ellipsoidal shapes may also show changes in brightness as they present varying areas of their surfaces to the observer.
Non-spherical stars
Ellipsoidal variables
These are very close binaries, the components of which are non-spherical due to their tidal interaction. As the stars rotate the area of their surface presented towards the observer changes and this in turn affects their brightness as seen from Earth.
Stellar spots
The surface of the star is not uniformly bright, but has darker and brighter areas (like the Sun's sunspots). The star's chromosphere too may vary in brightness. As the star rotates we observe brightness variations of a few tenths of a magnitude.
FK Comae Berenices variables
These stars rotate extremely rapidly (~100 km/s at the equator); hence they are ellipsoidal in shape. They are (apparently) single giant stars with spectral types G and K and show strong chromospheric emission lines. Examples are FK Com, V1794 Cygni and UZ Librae. A possible explanation for the rapid rotation of FK Comae stars is that they are the result of the merger of a (contact) binary.
BY Draconis variable stars
BY Draconis stars are of spectral class K or M and vary by less than 0.5 magnitudes (70% change in luminosity).
Magnetic fields
Alpha2 Canum Venaticorum variables
Alpha2 Canum Venaticorum (α2 CVn) variables are main-sequence stars of spectral class B8–A7 that show fluctuations of 0.01 to 0.1 magnitudes (1% to 10%) due to changes in their magnetic fields.
SX Arietis variables
Stars in this class exhibit brightness fluctuations of some 0.1 magnitude caused by changes in their magnetic fields due to high rotation speeds.
Optically variable pulsars
Few pulsars have been detected in visible light. These neutron stars change in brightness as they rotate. Because of the rapid rotation, brightness variations are extremely fast, from milliseconds to a few seconds. The first and the best known example is the Crab Pulsar.
Eclipsing binaries
Extrinsic variables have variations in their brightness, as seen by terrestrial observers, due to some external source. One of the most common reasons for this is the presence of a binary companion star, so that the two together form a binary star. When seen from certain angles, one star may eclipse the other, causing a reduction in brightness. One of the most famous eclipsing binaries is Algol, or Beta Persei (β Per).
Algol variables
Algol variables undergo eclipses with one or two minima separated by periods of nearly constant light. The prototype of this class is Algol in the constellation Perseus.
Double Periodic variables
Double periodic variables exhibit cyclical mass exchange which causes the orbital period to vary predictably over a very long period. The best known example is V393 Scorpii.
Beta Lyrae variables
Beta Lyrae (β Lyr) variables are extremely close binaries, named after the star Sheliak. The light curves of this class of eclipsing variables are constantly changing, making it almost impossible to determine the exact onset and end of each eclipse.
W Serpentis variables
W Serpentis is the prototype of a class of semi-detached binaries including a giant or supergiant transferring material to a massive, more compact star. They are characterised, and distinguished from the similar β Lyr systems, by strong UV emission from accretion hotspots on a disc of material.
W Ursae Majoris variables
The stars in this group show periods of less than a day. The stars are situated so close to each other that their surfaces are almost in contact.
Planetary transits
Stars with planets may also show brightness variations if their planets pass between Earth and the star. These variations are much smaller than those seen with stellar companions and are only detectable with extremely accurate observations. Examples include HD 209458 and GSC 02652-01324, and all of the planets and planet candidates detected by the Kepler Mission.
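To illustrate why transit signals are so small (generic geometry rather than data for the systems named above; the radius values are hypothetical round numbers), the fractional dimming is roughly the square of the planet-to-star radius ratio:

def transit_depth(planet_radius, star_radius):
    # Approximate fractional drop in brightness during a planetary transit
    return (planet_radius / star_radius) ** 2

# A Jupiter-sized planet (about 0.1 solar radii) crossing a Sun-like star dims it by about 1%;
# an Earth-sized planet (about 0.009 solar radii) dims it by less than 0.01%.
print(transit_depth(0.1, 1.0), transit_depth(0.009, 1.0))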
| Physical sciences | Stellar astronomy | null |
63031 | https://en.wikipedia.org/wiki/Hound | Hound | A hound is a type of hunting dog used by hunters to track or chase prey.
Description
Hounds can be contrasted with gun dogs that assist hunters by identifying prey and/or recovering shot quarry. The hound breeds were the first hunting dogs. They have either a powerful sense of smell, great speed, or both. There are three types of hound, with several breeds of each type:
Sighthounds (also called gazehounds) follow prey predominantly by speed, keeping it in sight. These dogs are fast and assist hunters in catching game: fox, hare, deer, and elk.
Scenthounds follow prey or other targets (such as missing people) by tracking scent. These dogs have endurance, but are not fast runners.
The remaining breeds of hound follow their prey using both sight and scent. They are difficult to classify, as they are neither strictly sighthounds nor strictly scenthounds.
List of hound breeds
Afghan Hound
Africanis
Alpine Dachsbracke
American Foxhound
American Leopard Hound
Andalusian Hound
Artois Hound
Austrian Black and Tan Hound
Azawakh
Basenji
Basset Artesien Normand
Basset Bleu de Gascogne
Basset Fauve de Bretagne
Basset Hound
Bavarian Mountain Hound
Beagle
Beagle-Harrier
Billy
Black and Tan Coonhound
Blackmouth Cur
Bloodhound
Bluetick Coonhound
Borzoi
Bosnian Broken-haired Hound
Briquet Griffon Vendéen
Chippiparai
Cirneco dell'Etna
Combai
Coonhound
Cretan Hound
Dachshund
Drever
Dumfriesshire Black and Tan Foxhound
Estonian Hound
English Coonhound
English Foxhound
Feist
Finnish Hound
Galgo Español
Gascon Saintongeois
German Hound
Grand Basset Griffon Vendéen
Grand Bleu de Gascogne
Grand Fauve de Bretagne
Grand Griffon Vendéen
Greek Harehound
Greyhound
Griffon Bleu de Gascogne
Griffon Fauve de Bretagne
Hamiltonstövare
Hanover Hound
Harrier
Ibizan Hound
Indian pariah dog
Italian Greyhound
Irish Wolfhound
Istrian Coarse-haired Hound
Istrian Shorthaired Hound
Kai Ken
Kanni
Kishu Ken
Lakeland Trailhound
Lithuanian Hound
Longdog
Lurcher
Magyar agár
Mountain Cur
Mudhol Hound
Otterhound
Petit Basset Griffon Vendéen
Petit Bleu de Gascogne
Pharaoh Hound
Plott Hound
Podenco Canario
Polish Greyhound
Polish Hound
Portuguese Podengo
Posavac Hound
Rajapalayam
Rampur Greyhound
Rastreador Brasileiro
Redbone Coonhound
Rhodesian Ridgeback
Sabueso Español
Saluki
Schillerstövare
Segugio dell'Appennino
Segugio Italiano a pelo forte
Segugio Italiano a pelo raso
Segugio Maremmano
Serbian Hound
Serbian Tricolour Hound
Schweizer Laufhund
Schweizerischer Niederlaufhund
Scottish Deerhound
Shikoku
Silken Windhound
Sloughi
Slovenský kopov or Slovak Hound
Smalandstövare
Styrian Coarse-haired Hound
Taigan
Tatranský durič or Tatra Hound
Treeing Walker Coonhound
Trigg Hound
Transylvanian Hound
Tyrolean Hound
Ukrainian Chortai
Welsh Foxhound
Westphalian Dachsbracke
Whippet
| Biology and health sciences | Dogs | null |
63034 | https://en.wikipedia.org/wiki/Strigidae | Strigidae | The true owls or typical owls (family Strigidae) are one of the two generally accepted families of owls, the other being the barn owls and bay owls (Tytonidae). This large family comprises 230 living or recently extinct species in 24 genera. The Strigidae owls have a cosmopolitan distribution and are found on every continent except Antarctica.
Morphology
While typical owls (hereafter referred to simply as owls) vary greatly in size, with the smallest species, the elf owl, being a hundredth the size of the largest, the Eurasian eagle-owl and Blakiston's fish owl, owls generally share an extremely similar body plan. They tend to have large heads, short tails, cryptic plumage, and round facial discs around the eyes. The family is generally arboreal (with a few exceptions like the burrowing owl) and its members obtain their food on the wing. The wings are large, broad, rounded, and long. As is the case with most birds of prey, in many owl species females are larger than males.
Because of their nocturnal habits, they tend not to exhibit sexual dimorphism in their plumage. Specialized feathers and wing shape suppress the noise produced by flying, during takeoff, flapping flight, and gliding alike. This silent flight allows owls to hunt without being heard by their prey. Owls possess three physical attributes that are thought to contribute to their silent flight capability. First, on the leading edge of the wing, there is a comb of stiff feathers. Second, the trailing edge of the wing contains a flexible fringe. Finally, owls have downy material distributed on the tops of their wings that creates a compliant but rough surface (similar to that of a soft carpet). All these factors result in significant aerodynamic noise reductions. The toes and tarsi are feathered in some species, and more so in species at higher latitudes. Numerous species of owls in the genus Glaucidium and the northern hawk-owl have eye patches on the backs of their heads, apparently to convince other birds they are being watched at all times. Numerous nocturnal species have ear-tufts, feathers on the sides of the head that are thought to have a camouflage function, breaking up the outline of a roosting bird. The feathers of the facial disc are arranged in order to increase the sound delivered to the ears. Hearing in owls is highly sensitive and the ears are asymmetrical, allowing the owl to localise a sound in multiple directions. Owls can pinpoint the position of prey, such as a squeaking mouse, by computing when the sound from the object reaches the owl's ears. If the sound reaches the left ear first, the mouse must be to the left of the owl. The owl's brain will then turn the head to face the mouse directly. In addition to hearing, owls have massive eyes relative to their body size. Contrary to popular belief, however, owls cannot see well in extreme darkness and are able to see well in the day.
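As a rough illustration of the timing cue involved (generic acoustics, not owl-specific measurements; the ear-separation figure is a hypothetical example), the difference in arrival time between the two ears scales with the sine of the sound's bearing:

import math

SPEED_OF_SOUND_M_S = 343.0  # in air at roughly 20 degrees Celsius

def interaural_time_difference_s(ear_separation_m, bearing_deg):
    # Approximate arrival-time difference for a distant source at the given bearing
    return ear_separation_m * math.sin(math.radians(bearing_deg)) / SPEED_OF_SOUND_M_S

# A source 30 degrees off-centre heard by ears about 5 cm apart arrives
# roughly 73 microseconds earlier at the nearer ear.
print(interaural_time_difference_s(0.05, 30.0))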
Owls are also able to rotate their heads by as much as 270 degrees in either direction without damaging the blood vessels in their necks and heads, and without disrupting blood flow to their brains. Researchers have found four major biological adaptations that allow for this unique capability. First, in the neck there is a major artery, called the vertebral artery, that feeds the brain. This artery passes through bony holes in the vertebrae. These bony holes are ten times larger in diameter than the artery that passes through them (extra space in the transverse foramina), which creates air pockets that allow for more movement of the artery when twisted. Twelve of the 14 cervical vertebrae in the owl's neck have this adaptation. The vertebral artery also enters the neck higher up than it does in other birds: instead of entering at the 14th cervical vertebra, it enters at the 12th. Finally, small vessel connections between the carotid and vertebral arteries allow blood to be exchanged between the two vessels. These cross connections allow for uninterrupted blood flow to the brain, meaning that even if one route is blocked during extreme head rotations, another route can continue blood circulation to the brain.
Several owl species also have fluorescent pigments called porphyrins under their wings. Porphyrins are a large group of pigments defined by nitrogen-containing pyrrole rings, a group that also includes chlorophyll and heme (in animal blood). Other bird species use porphyrins to pigment eggshells in the oviduct. Owl species, however, use porphyrins as a pigment in their plumage. Porphyrins are most prevalent in new feathers and are easily destroyed by sunlight. Porphyrin pigments in feathers fluoresce under UV light, allowing biologists to more accurately classify the age of owls. The relative ages of the feathers are differentiated by the intensity of fluorescence that they emit when the primaries and secondaries are exposed to black light. This method helps to detect the subtle differences between third and fourth generation feathers, whereas looking at wear and color makes age determination difficult.
Niche competition
It has been noted that there is some competition for niche space between the spotted owl and the barred owl (both of which are true owls). This competition is related to deforestation, and therefore a reduction in niche quantity and quality. This deforestation is more specifically the result of overlogging and forest fires. These two species of owl are known to traditionally live in mature forests of old and tall trees, which at this point in time are mostly limited to public lands. As niche overlap occurs between these two species, there is concern that barred owls are encroaching on the spotted owl's North American habitats, causing a decline of the spotted owl. As noted above, these species prefer mature forests which, due to deforestation, are in limited supply and take a long time to reestablish after deforestation has occurred. Because the northern spotted owl shares its territories and competes with other species, it is declining at a more rapid pace. This invasion by barred owls occurred about 50 years ago in the Pacific Northwest, and despite their low numbers, they are considered an invasive species because of the harm done to native spotted owls. In this competition for resources, hunting locations and general niches, the barred owl is pushing the spotted owl to local extinction. It is thought that the rapid decrease in population size of spotted owls will cause a trophic cascade, since the spotted owls help provide a healthy ecosystem.
Behaviour
Owls are generally nocturnal and/or crepuscular and spend much of the day roosting. They are often misperceived as ‘tame’ since they allow humans to approach quite closely before taking flight, but in reality they are attempting to avoid detection through stillness. Their cryptic plumage and the inconspicuous locations they adopt are an effort to avoid predators and mobbing by small birds.
Communication
Owls, such as the eagle-owl, will use visual signaling in intraspecific communication (communication within the species), both in territorial habits and parent-offspring interactions. Some researchers believe owls can employ various visual signals in other situations involving intraspecific interaction. Experimental evidence suggests that owl feces and the remains of prey can act as visual signals. This new type of signaling behavior could potentially indicate the owls' current reproductive state to intruders, including other territorial owls or non-breeding floaters. Feces are an ideal marking material due to their minimal energetic cost, and they can continue to indicate territorial boundaries even while the owl is occupied with activities other than territorial defense. Preliminary evidence also suggests that owls will use feces and the feathers of their prey to signal their breeding status to members of the same species.
Migration
Some species of owl are migratory. One such species, the northern saw-whet owl, migrates south even when food and resources are ample in the north.
Habitat, climate and seasonal changes
Some owls have a higher survival rate and are more likely to reproduce in a habitat that contains a mixture of old-growth forests and other vegetation types. Old-growth forests provide ample dark areas for owls to hide from predators. Like many organisms, spotted owls rely on forest fires to create their habitat and provide areas for foraging. However, climate change and intentional fire suppression have altered natural fire regimes. Owls avoid badly burned areas, but they benefit from the mosaics of heterogeneous habitats created by fires. This is not to say that all fires are good for owls: owls thrive only when fires are not of high severity and are not large stand-replacing burns (high-severity fires that consume most of the vegetation), which create large canopy gaps that are not adequate for owls.
Parasites
Avian malaria, caused by Plasmodium relictum, affects owls; 44% of northern and California spotted owls have been found to harbor 17 strains of the parasite. As mentioned in the niche competition section above, spotted owls and barred owls are in competition, so their niche overlap may give the Plasmodium parasite more hosts in a concentrated area, but this is not certain.
Predators
The main predators of owls are other species of owls. An example is the northern saw-whet owl, which lives in the northern U.S., low to the ground in brushy areas, typically in cedar forests. These owls eat mice, and perch in trees at eye level. Their main predators are barred owls and great horned owls.
Systematics
The family Strigidae was introduced by the English zoologist William Elford Leach in a guide to the contents of the British Museum published in 1819.
A molecular phylogenetic study of the owls by Jessie Salter and collaborators published in 2020 found that the family Strigidae was divided into two sister clades and some of the traditional genera were paraphyletic. The placement of three monotypic genera remained uncertain due to the degraded nature of the available DNA. Based on these results Frank Gill, Pamela Rasmussen and David Donsker updated the online list of world birds that they maintain on behalf of the International Ornithological Committee (IOC).
The cladogram below is based on the results of the study by Salter and collaborators published in 2020. The subfamilies are those defined by Edward Dickinson and James Van Remsen Jr. in 2013. A genetic study published in 2021 suggested that the genus Scotopelia may be embedded within Ketupa.
The 235 extant or recently extinct species are assigned to 23 genera:
Genus Uroglaux – Papuan hawk-owl
Genus Ninox – Australasian hawk-owls, 37 species of which one is recently extinct
Genus Margarobyas – bare-legged owl or Cuban screech-owl
Genus Taenioptynx – two species previously placed in Glaucidium
Genus Micrathene – elf owl
Genus Xenoglaux – long-whiskered owlet
Genus Aegolius – saw-whet owls, five species of which one is recently extinct
Genus Athene – nine species
Genus Surnia – northern hawk-owl
Genus Glaucidium – pygmy owls, 29 species
Genus Otus – scops owls, 58 species including three extinct species formerly placed in Mascarenotus
Genus Ptilopsis – white-faced owls, two species
Genus Asio – eared owls, nine species
Genus Jubula – maned owl
Genus Bubo – eagle-owls, horned-owls and snowy owl, 10 species
Genus Scotopelia – fishing owls, three species
Genus Ketupa – fish owls and eagle-owls, 12 species (including 9 species previously placed in Bubo)
Genus Psiloscops – flammulated owl
Genus Gymnasio – Puerto Rican owl
Genus Megascops – screech-owls, 25 species
Genus Pulsatrix – spectacled owls, three species
Genus Lophostrix – crested owl
Genus Strix – earless owls, 22 species, including four previously placed in Ciccaba
Late Quaternary prehistoric extinctions
Genus Grallistrix – stilt-owls, four species
Kaua‘i stilt-owl, Grallistrix auceps
Maui stilt-owl, Grallistrix erdmani
Moloka‘i stilt-owl, Grallistrix geleches
O‘ahu stilt-owl, Grallistrix orion
Genus Ornimegalonyx – Caribbean giant owls, one or two species
Cuban giant owl, Ornimegalonyx oteroi
Ornimegalonyx sp. – probably subspecies of O. oteroi
Genus Asphaltoglaux
Asphalt miniature owl, Asphaltoglaux cecileae
Genus Oraristrix
La Brea owl, Oraristrix brea
Fossil record
Mioglaux (Late Oligocene? – Early Miocene of WC Europe) – includes "Bubo" poirreiri
Intulula (Early/Middle Miocene of WC Europe) – includes "Strix/Ninox" brevis
Yarquen (Middle Miocene of Argentina)
Alasio (Middle Miocene of Vieux-Collonges, France) – includes "Strix" collongensis
The fossil record for Strigiformes is highly diverse, extending from roughly 60 million years ago into the Pleistocene. The maximum age estimate for the Strigiformes clade extends to 68.6 million years ago.
Placement unresolved:
"Otus/Strix" wintershofensis – fossil (Early/Middle Miocene of Wintershof West, Germany) – may be close to extant genus Ninox
"Strix" edwardsi – fossil (Middle Miocene of Grive-Saint-Alban, France)
"Asio" pygmaeus – fossil (Early Pliocene of Odesa, Ukraine)
Strigidae gen. et sp. indet. UMMP V31030 (Rexroad Late Pliocene of Kansas, USA) – Strix/Bubo?
Ibiza owl, Strigidae gen. et sp. indet. – prehistoric (Late Pleistocene/Holocene of Es Pouàs, Ibiza)
The supposed fossil heron "Ardea" lignitum (Late Pliocene of Germany) was apparently a strigid owl, possibly close to Bubo. The Early–Middle Eocene genus Palaeoglaux from west-central Europe is sometimes placed here, but given its age, it is probably better considered its own family for the time being.
| Biology and health sciences | Strigiformes | Animals |
63087 | https://en.wikipedia.org/wiki/Siamese%20cat | Siamese cat | The Siamese cat (; แมวสยาม, Maeo Sayam) is one of the first distinctly recognised breeds of Asian cat. It derives from the Wichianmat landrace. The Siamese cat is one of several varieties of cats native to Thailand (known as Siam before 1939). The original Siamese became one of the most popular breeds in Europe and North America in the 19th century. Siamese cats have a distinctive colourpoint coat, resulting from a temperature-sensitive type of albinism.
Distinct features like blue almond-shaped eyes, a triangular head shape, large ears, an elongated, slender, and muscular body, and various forms of point colouration characterise the modern-style Siamese. The modern-style Siamese's point-colouration resembles the "old-style" foundation stock. The "old-style" Siamese have a round head and body. They have been re-established by multiple registries as the Thai cat. Siamese and Thai cats are selectively bred and pedigreed in multiple cat fancier and breeder organisations. The terms "Siamese" or "Thai" are used for cats from this specific breed, which are by definition all purebred cats with a known and formally registered ancestry. The ancestry registration is the cat's pedigree or "paperwork".
The Siamese is a part of the foundation stock for crossbreeding with other cats. The crossbreeding resulted in many different types of cats, like the Oriental Shorthair and Colourpoint Shorthair. The Oriental Shorthair and Colourpoint Shorthair were developed to expand the range of coat patterns. The breeding of the Oriental and Colourpoint Shorthairs resulted in a long-haired variant called the Himalayan. The long-haired Siamese is recognised internationally as a Balinese cat. The breeding also created the hair-mutation breeds, including the Cornish Rex, Sphynx, Peterbald, and blue-point Siamese cat.
History
Origins
Thailand
A description and depiction of the Wichianmat (Siamese cat) first appears in a collection of ancient manuscripts called the Tamra Maew (The Cat-Book Poems), thought to originate from the Ayutthaya Kingdom (1351 to 1767 AD). Over a dozen are now kept in the National Library of Thailand. The manuscripts have resurfaced outside of Thailand and are now in the British Library and National Library of Australia.
At the end of the Burmese–Siamese war, the capital was sacked on 7 April 1767. The Burmese army burned everything in sight and returned to Burma, taking Siamese noblemen and royal family members with them as captives. A Thai legend states that the King of Burma, Hsinbyushin, found and read the poem for the Thai cats in the Tamra Maew. The poem describes Thai cats as being as rare as gold, and says that anyone who owns this cat will become wealthy. He told his army to round up all the Suphalak cats and bring them back to Burma along with the other treasures. Today in Thailand, people tell this legend as a humorous explanation of the rarity of Thai cats.
Siamese
The pointed cat known in the West as "Siamese", recognised for its distinctive markings, is one of several breeds of cats from Siam described and illustrated in manuscripts called "Tamra Maew" (Cat Poems). The "Tamra Maew" is estimated to have been written from the 14th to the 18th century. In 1878, U.S. President Rutherford B. Hayes received the first documented Siamese to reach the United States. The cat, named "Siam," was sent from Bangkok to the American Consul.
In 1884, the British Consul-General in Bangkok, Edward Blencowe Gould (1847–1916), brought a breeding pair of the cats, Pho and Mia, back to Britain as a gift for his sister, Lilian Jane Gould (who, married in 1895 as Lilian Jane Veley, went on to co-found the Siamese Cat Club in 1901). In 1885, Gould's UK cats Pho and Mia produced three Siamese kittens—Duen Ngai, Kalohom, and Khromata—who were shown with their parents that same year at London's Crystal Palace Show. Their appearance and behaviour attracted attention, but all three of the kittens died soon after the show, their cause of death not documented.
By 1886, four Siamese cats were imported to the UK by Eva Forestier Walker (surnamed Vyvyan after her 1887 marriage) and her sister, Ada. These Siamese imports were long, had rounded heads with wedge-shaped muzzles, and had large ears. The cats ranged from substantial to slender but were not extreme in either direction. The pointed coat pattern had not been seen before in cats by Westerners. Over the next several years, fanciers imported a small number of cats, forming the base breeding pool for the entire breed in Britain. It is believed that most Siamese in Britain today are descended from about eleven of these original imports. In Britain, they were called the "Royal Cat of Siam." Some reports say that they had previously been kept only by Siamese royalty. Research does not show evidence of any organised royal breeding programme in Siam.
Traditional Siamese versus modern development
In the 1950s–1960s, as the Siamese was increasing in popularity, many breeders and cat show judges began to favour the more slender look. Breeders created increasingly long, fine-boned, narrow-headed cats through generations of selective breeding. Eventually, the modern show Siamese was bred to be extremely elongated, with a lean, tubular body, long, slender legs, a very long, very thin tail that tapers gradually into a point, and a long, wedge-shaped head topped by extremely large, wide-set ears.
By the mid-1980s, cats of the original style had largely disappeared from cat shows. Still, a few breeders, particularly in the UK, continued to breed and register them, resulting in today's two types of Siamese: the modern, "show-style", standardised Siamese, and the "Traditional Siamese", both descended from the same distant ancestors, but with few or no recent ancestors in common, and effectively forming distinct sub-breeds, with some pressure to separate them.
In addition to the modern Siamese breed category, The International Cat Association (TICA) and the World Cat Federation (WCF) now accept Siamese cats of the less extreme type, and any wichianmat cat imported directly from Thailand, under the new breed name Thai. Other, mostly unofficial, names for the traditional variety are "Old-style Siamese" and "Classic Siamese", with an American variation nicknamed "Applehead".
Appearance
The breed standard of the modern Siamese calls for an elongated, tubular, and muscular body and a triangular head, forming a triangle from the tip of the nose to each tip of the ear. The eyes are almond-shaped and light blue, while the ears are large, wide-based, and positioned more towards the side of the head. The breed has a long neck, a slender tail, and fur that is short, glossy, fine and adheres to the body with no undercoat. Its pointed colour scheme and blue eyes distinguish it from the closely related Oriental Shorthair. The modern Siamese shares the pointed colour pattern with the Thai, or traditional Siamese, but they differ in head and body type.
The pointed pattern is a form of partial albinism, resulting from a mutation in tyrosinase, an enzyme involved in melanin production. The mutated tyrosinase enzyme is heat-sensitive; it fails to work at normal body temperatures but becomes active in cooler (< 33 °C) areas of the skin. The heat-sensitive enzyme results in a dark colouration in the coolest parts of the cat's body, like the extremities and the face, which are cooled by the airflow through their sinuses. Siamese kittens are cream or white at birth and develop visible points in the first few months of life in colder parts of their body. By the time a kitten is four weeks old, the points should be sufficiently distinguishable to recognise which colour they are.
Siamese cats tend to darken with age, and generally, adult Siamese living in warm climates have lighter coats than those in cool climates. Originally the vast majority of Siamese had seal (extremely dark brown, almost black) points, but occasionally a Siamese was born with "blue" (a cool grey) points, genetically a dilution of seal point; chocolate (lighter brown) points, a genetic variation of seal point; or lilac (pale warm grey) points, genetically a diluted chocolate. These colours were considered "inferior" seal points and did not qualify for showing or breeding. These shades were eventually accepted by the breed associations and became more common through breeding programmes specifically aimed at producing these colours. Later, outcrosses with other breeds developed Siamese-mix cats with points in other cat colours and patterns, including red and cream points, lynx (tabby) points, and tortoise-shell ("tortie") points.
In the United Kingdom, all pointed Siamese-style cats are considered part of the Siamese breed. The Cat Fanciers' Association considers only the four original fur colours as Siamese:
seal point,
blue point,
chocolate point, and
lilac point.
Oriental Shorthair cats with colour points in colours or patterns aside from these four are considered Colourpoint Shorthair in that registry. The World Cat Federation has also adopted this classification, treating the Colourpoint Shorthair as a distinct breed.
Many Siamese cats from Thailand had a kink in their tails, but over the years, this trait has been considered a flaw. Breeders have largely eradicated it, but the kinked tail persists among street cats in Thailand.
Temperament
Siamese are usually very affectionate and intelligent cats, renowned for their social nature. Many enjoy being with people and are sometimes described as "extroverts". Often they bond strongly with a single person. Myrna Milani describes the Siamese as being more diurnal, more likely to stay close to their owner, and less likely to hunt than other cats.
Health
Based on Swedish insurance data, which tracked cats only up to 12.5 years, Siamese and Siamese-derived breeds have a higher mortality rate than other breeds. 68% lived to 10 years or more and 42% to 12.5 years or more. The majority of deaths were caused by neoplasms, mainly mammary tumours. The Siamese also has a higher rate of morbidity. They are at higher risk of neoplastic and gastrointestinal problems but have a lower risk of feline lower urinary tract disease. A UK study of veterinary records found a life expectancy of 11.69 years for the Siamese compared with 11.74 years overall.
The Siamese has been found to have a predisposition to progressive retinal atrophy.
The same albino allele that produces coloured points means that Siamese cats' blue eyes lack a tapetum lucidum, a structure which amplifies dim light in the eyes of other cats. The mutation in the tyrosinase also results in abnormal neurological connections between the eye and the brain. The optic chiasm has abnormal uncrossed wiring; many early Siamese were cross-eyed to compensate, but like the kinked tails, the crossed eyes have been seen as a fault, and due to selective breeding the trait is far less common today. Still, this lack of a tapetum lucidum, even in uncross-eyed cats, causes reduced vision for the cat at night. This trait makes them vulnerable to urban dangers such as night-time vehicular traffic. Unlike many other blue-eyed white cats, Siamese cats do not have reduced hearing ability.
The Siamese suffers from abnormal visual projections because its lateral geniculate body differs from that of normal felines. Fibres located in the temporal retina cross over in the chiasm instead of remaining uncrossed.
The Siamese is predisposed to periocular leukotrichia, pinnal alopecia, and psychogenic alopecia.
Young Siamese cats are predisposed to histiocytic cutaneous mast cell tumours.
The Siamese is one of the more commonly affected breeds for gangliosidosis 1. An autosomal recessive mutation in the GLB1 gene is responsible for the condition in the breed.
Breeds derived from the Siamese
Balinese – Natural mutation of the Siamese cat; a longhaired Siamese. In the largest US registry, the Cat Fanciers Association (CFA) is limited to the four traditional Siamese coat colours of seal point, blue point (a dilute of seal point), chocolate point, and lilac point (a dilute of the chocolate point). Other registries in the US and worldwide recognise a greater diversity of colours.
Birman – After almost all the individuals of the breed died out during the years of World War II, French breeders reconstructed the breed through interbreeding with various other breeds, including the Siamese. Modern Birman cats have inherited their pointed coat patterns from the Siamese.
Burmese – is a breed of domesticated cats descended from a specific cat, Wong Mau, who was found in Burma in 1930 by Joseph Cheesman Thompson. She was brought to San Francisco, where she was bred with Siamese.
Havana Brown – resulted from crossing a chocolate-point Siamese with a black cat.
Colourpoint Shorthair – a Siamese-type cat registered in CFA with pointed coat colours aside from the traditional CFA Siamese coat colours; originally developed by crosses with other shorthair cats. Considered part of the Siamese breed in most cat associations but considered a separate breed in CFA and WCF. Variations can include lynx points and tortie points.
Himalayan – Longhaired breed originally derived from crosses of Persians to Siamese and pointed domestic longhair cats to introduce the point markings and the colours chocolate and lilac. After these initial crosses were used to introduce the colours, further breed development was performed by crossing these cats to the Persian breed. In Europe, they are referred to as colourpoint Persians. In CFA, they are a colour division of the Persian breed.
Javanese – in CFA, a longhaired version of the Colourpoint Shorthair (i.e. a "Colourpoint Longhair"). In WCF, "Javanese" is an alias of the Oriental Longhair.
Neva Masquerade – derived in Russia by naturally or selectively crossing Siberian cats with Siamese cats or related colourpoint cats. It bears the Siamese colourpoint gene, but the original foundation stock is unclear.
Ocicat – a spotted cat originally produced by a cross between Siamese and Abyssinian.
Oriental Shorthair – a Siamese-style cat in non-pointed coat patterns and colours, including solid, tabby, silver/smoke, and tortoise-shell.
Oriental Longhair – a longhaired version of the Oriental Shorthair.
Ragdoll – selectively bred from "alley cats" foundation stock in the USA. It bears the Siamese colourpoint mutation gene.
Savannah – a domestic hybrid cat breed produced by crossing a serval with a domestic cat; the first Savannah was bred from a serval and a Siamese.
Snowshoe – a cream and white breed with blue eyes and some point colouration, produced through the cross-breeding of the Siamese and the bi-coloured American Shorthair in the 1960s.
Thai Cat – also called the Wichian Mat or Old-Style Siamese; the original type of Siamese, imported from Thailand in the 19th century and still bred in Thailand today. Throughout the first half of the 20th century, it was the only type of Siamese.
Tonkinese – originally a cross between a Siamese cat and a Burmese. Tonkinese × Tonkinese matings can produce kittens with a Burmese sepia pattern, a Siamese pointed pattern, or a Tonkinese mink pattern (which is something in between the first two, with less pattern contrast than the Siamese but greater than the Burmese); often with aqua eyes.
Toybob – cat breed of Russian origin. It bears the Siamese colourpoint mutation gene.
Mekong Bobtail (Thai Bobtail)
In media, literature and film
Siamese cats have been protagonists in literature and film for adults and children since the 1930s. Clare Turlay Newberry's Babette features a Siamese kitten escaping from a New York apartment in 1937. British publisher Michael Joseph recorded his relationship with his Siamese cat in Charles: The Story of a Friendship (1943). The "Siamese Cat Song" sequence ("We are Siamese if you please") in Disney's Lady and the Tramp (1955) features the cats "Si" and "Am", whose names together allude to Siam, the former name of Thailand, where the breed originated. The 1958 film adaptation of Bell, Book and Candle features Kim Novak's Siamese cat "Pyewacket", a witch's familiar.
The Incredible Journey (1961) by Sheila Burnford tells the story of three pets, including the Siamese cat "Tao", as they travel through the Canadian wilderness searching for their beloved masters. The book was a modest success when first published but became widely known after 1963 when it was loosely adapted into a film of the same name by Walt Disney. Disney also employed the same Siamese in the role of "DC" for its 1965 crime caper That Darn Cat!, with The New York Times commenting "The feline that plays the informant, as the F.B.I. puts it, is superb. [...] This elegant, blue-eyed creature is a paragon of suavity and grace".
| Biology and health sciences | Cats | null |
63137 | https://en.wikipedia.org/wiki/Mass%20production | Mass production | Mass production, also known as flow production, series production, series manufacture, or continuous production, is the production of substantial amounts of standardized products in a constant flow, including and especially on assembly lines. Together with job production and batch production, it is one of the three main production methods.
The term mass production was popularized by a 1926 article in the Encyclopædia Britannica supplement that was written based on correspondence with Ford Motor Company. The New York Times used the term in the title of an article that appeared before the publication of the Britannica article.
The idea of mass production is applied to many kinds of products: from fluids and particulates handled in bulk (food, fuel, chemicals and mined minerals), to clothing, textiles, parts and assemblies of parts (household appliances and automobiles).
Some mass production techniques, such as standardized sizes and production lines, predate the Industrial Revolution by many centuries; however, it was not until machine tools and techniques for producing interchangeable parts were developed in the mid-19th century that modern mass production became possible.
Overview
Mass production involves making many copies of products, very quickly, using assembly line techniques to send partially complete products to workers who each work on an individual step, rather than having a worker work on a whole product from start to finish. The emergence of mass production allowed supply to outstrip demand in many markets, forcing companies to seek new ways to become more competitive. Mass production is also associated with overconsumption, the idea that humans consume too much.
Mass production of fluid matter typically involves piping with centrifugal pumps or screw conveyors (augers) to transfer raw materials or partially complete products between vessels. Fluid flow processes such as oil refining, and bulk materials such as wood chips and pulp, are automated using a system of process control, which uses various instruments to measure variables such as temperature, pressure, volumetric flow rate and level, providing feedback.
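As a purely illustrative aside (not part of the original article), the feedback idea described above can be sketched in a few lines of code: a measurement is compared with a setpoint and the difference drives an actuator. All names, gains and the crude one-line plant model below are hypothetical.

    # Minimal sketch of a proportional feedback loop of the kind used in
    # process control: measure a variable, compare it with a setpoint,
    # and adjust an actuator (here, heater power) accordingly.
    def control_step(measured_temp, setpoint, gain=10.0):
        """Return heater output in percent (0-100) from the measured error."""
        error = setpoint - measured_temp            # feedback: target minus measurement
        return max(0.0, min(100.0, gain * error))   # clamp to the actuator's range

    temp = 20.0                                     # starting temperature in deg C (hypothetical)
    for _ in range(40):
        heater = control_step(temp, setpoint=80.0)
        # crude stand-in for the real process: heating minus losses to surroundings
        temp += 0.05 * heater - 0.02 * (temp - 20.0)
    print(round(temp, 1))                           # settles close to the setpoint (about 78 deg C)

Real installations use dedicated controllers (typically with integral and derivative terms as well) rather than ad-hoc loops like this, but the measure-compare-adjust cycle is the same.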
Bulk materials such as coal, ores, grains and wood chips are handled by belt, chain, slat, pneumatic or screw conveyors, bucket elevators and mobile equipment such as front-end loaders. Materials on pallets are handled with forklifts. Also used for handling heavy items like reels of paper, steel or machinery are electric overhead cranes, sometimes called bridge cranes because they span large factory bays.
Mass production is capital-intensive and energy-intensive, as it uses a high proportion of machinery and energy in relation to workers. It is also usually highly automated, which reduces the total expenditure per unit of product. However, the machinery needed to set up a mass production line (such as robots and machine presses) is so expensive that there must be some assurance that the product will be successful for the investment to attain profits.
One of the descriptions of mass production is that "the skill is built into the tool", which means that the worker using the tool may not need the skill. For example, in the 19th or early 20th century, this could be expressed as "the craftsmanship is in the workbench itself" (not the training of the worker). Rather than having a skilled worker measure every dimension of each part of the product against the plans or the other parts as it is being formed, there were jigs ready at hand to ensure that the part was made to fit this set-up. It had already been checked that the finished part would be to specifications to fit all the other finished parts—and it would be made more quickly, with no time spent on finishing the parts to fit one another. Later, once computerized control came about (for example, CNC), jigs were obviated, but it remained true that the skill (or knowledge) was built into the tool (or process, or documentation) rather than residing in the worker's head. This is the specialized capital required for mass production; each workbench and set of tools (or each CNC cell, or each fractionating column) is different (fine-tuned to its task).
History
Pre-industrial
Standardized parts and sizes and factory production techniques were developed in pre-industrial times; before the invention of machine tools the manufacture of precision parts, especially metal ones, was highly labour-intensive.
Crossbows made with bronze parts were produced in China during the Warring States period. The Qin Emperor unified China at least in part by equipping large armies with these weapons, which were fitted with a sophisticated trigger mechanism made of interchangeable parts. The Terracotta Army guarding the Emperor's tomb is also believed to have been created through the use of standardized molds on an assembly line.
In ancient Carthage, ships of war were mass-produced on a large scale at a moderate cost, allowing them to efficiently maintain their control of the Mediterranean. Many centuries later, the Republic of Venice would follow Carthage in producing ships with prefabricated parts on an assembly line: the Venetian Arsenal produced nearly one ship every day in what was effectively the world's first factory, which at its height employed 16,000 people.
The invention of movable type allowed documents such as books to be mass-produced. The first movable type system was invented in China by Bi Sheng during the Song dynasty, where it was used to, among other things, issue paper money. The oldest extant book produced using metal type is Jikji, printed in Korea in 1377. Johannes Gutenberg, through his invention of the printing press and production of the Gutenberg Bible, introduced movable type to Europe. This made mass production commonplace in the European publishing industry, leading to a democratization of knowledge, increased literacy and education, and the beginnings of modern science.
French artillery engineer Jean-Baptiste de Gribeauval introduced the standardization of cannon design in the late 18th century. He streamlined production and management of cannonballs and cannons by limiting them to only three calibers, and he improved their effectiveness by requiring more spherical ammunition. Redesigning these weapons to use interchangeable wheels, screws, and axles simplified mass production and repair.
Industrial
In the Industrial Revolution, simple mass production techniques were used at the Portsmouth Block Mills in England to make ships' pulley blocks for the Royal Navy in the Napoleonic Wars. It was achieved in 1803 by Marc Isambard Brunel in cooperation with Henry Maudslay under the management of Sir Samuel Bentham. The first unmistakable examples of manufacturing operations carefully designed to reduce production costs by specialized labour and the use of machines appeared in the 18th century in England.
The Navy was in a state of expansion that required 100,000 pulley blocks to be manufactured a year. Bentham had already achieved remarkable efficiency at the docks by introducing power-driven machinery and reorganising the dockyard system. Maudslay, a pioneer of machine tool technology, had developed the first industrially practical screw-cutting lathe in 1800, which standardized screw thread sizes for the first time and in turn allowed the application of interchangeable parts. He and Brunel, a pioneering engineer, collaborated on plans to manufacture block-making machinery. By 1805, the dockyard had been fully updated with the revolutionary, purpose-built machinery at a time when products were still built individually with different components. A total of 45 machines were required to perform 22 processes on the blocks, which could be made in one of three possible sizes. The machines were almost entirely made of metal, thus improving their accuracy and durability. The machines would make markings and indentations on the blocks to ensure alignment throughout the process. One of the many advantages of this new method was the increase in labour productivity, owing to the less labour-intensive requirements of managing the machinery. Richard Beamish, assistant to Brunel's son and engineer, Isambard Kingdom Brunel, wrote:
So that ten men, by the aid of this machinery, can accomplish with uniformity, celerity and ease, what formerly required the uncertain labour of one hundred and ten.
By 1808, annual production from the 45 machines had reached 130,000 blocks and some of the equipment was still in operation as late as the mid-twentieth century. Mass production techniques were also used, to a rather limited extent, to make clocks and watches and to make small arms, though parts were usually non-interchangeable. Though produced on a very small scale, Crimean War gunboat engines designed and assembled by John Penn of Greenwich are recorded as the first instance of the application of mass production techniques (though not necessarily the assembly-line method) to marine engineering. In filling an Admiralty order for 90 sets of his high-pressure and high-revolution horizontal trunk engine design, Penn produced them all in 90 days. He also used Whitworth Standard threads throughout. Prerequisites for the wide use of mass production were interchangeable parts, machine tools and power, especially in the form of electricity.
Some of the organizational management concepts needed to create 20th-century mass production, such as scientific management, had been pioneered by other engineers (most of whom are not famous, but Frederick Winslow Taylor is one of the well-known ones), whose work would later be synthesized into fields such as industrial engineering, manufacturing engineering, operations research, and management consultancy. Henry Ford downplayed the role of Taylorism in the development of mass production at his company, even though the Henry Ford Company he had left was rebranded as Cadillac and later awarded the Dewar Trophy in 1908 for creating interchangeable mass-produced precision engine parts. However, Ford management performed time studies and experiments to mechanize their factory processes, focusing on minimizing worker movements. The difference is that while Taylor focused mostly on efficiency of the worker, Ford also substituted for labor by using machines, thoughtfully arranged, wherever possible.
In 1807, Eli Terry was hired to produce 4,000 wooden movement clocks under the Porter Contract. At this time, the annual yield for wooden clocks did not exceed a few dozen on average. Terry had developed a milling machine in 1795, with which he perfected interchangeable parts. In 1807, Terry developed a spindle cutting machine, which could produce multiple parts at the same time. Terry hired Silas Hoadley and Seth Thomas to work the assembly line at the facilities. The Porter Contract was the first contract in history to call for the mass production of clock movements. In 1815, Terry began mass-producing the first shelf clock. Chauncey Jerome, an apprentice of Eli Terry, mass-produced up to 20,000 brass clocks annually in 1840, when he invented the cheap 30-hour OG clock.
The United States Department of War sponsored the development of interchangeable parts for guns produced at the arsenals at Springfield, Massachusetts, and Harpers Ferry, Virginia (now West Virginia) in the early decades of the 19th century, finally achieving reliable interchangeability by about 1850. This period coincided with the development of machine tools, with the armories designing and building many of their own. Some of the methods employed were a system of gauges for checking dimensions of the various parts and jigs and fixtures for guiding the machine tools and properly holding and aligning the work pieces. This system came to be known as armory practice or the American system of manufacturing, which spread throughout New England aided by skilled mechanics from the armories who were instrumental in transferring the technology to sewing machine manufacturers and other industries such as machine tools, harvesting machines and bicycles. Singer Manufacturing Co., at one time the largest sewing machine manufacturer, did not achieve interchangeable parts until the late 1880s, around the same time Cyrus McCormick adopted modern manufacturing practices in making harvesting machines.
During World War II, the United States mass-produced many vehicles and weapons, such as ships (e.g. Liberty ships, Higgins boats), aircraft (e.g. the North American P-51 Mustang, Consolidated B-24 Liberator and Boeing B-29 Superfortress), jeeps (e.g. the Willys MB), trucks, tanks (e.g. the M4 Sherman) and the M2 Browning and M1919 Browning machine guns. Many vehicles were shipped in parts and later assembled on-site.
For the ongoing energy transition, many wind turbine components and solar panels are being mass-produced, for use in wind farms and solar farms respectively.
In addition, as part of ongoing climate change mitigation, large-scale carbon sequestration (through reforestation, blue carbon restoration, etc.) has been proposed. Some projects (such as the Trillion Tree Campaign) involve planting a very large number of trees. To speed up such efforts, fast propagation of trees may be useful, and some automated machines have been produced to allow fast (vegetative) plant propagation. Techniques have also been developed to speed up propagation of some other plants that help to sequester carbon, such as seagrass.
Mass production benefited from the development of materials such as inexpensive steel, high strength steel and plastics. Machining of metals was greatly enhanced with high-speed steel and later very hard materials such as tungsten carbide for cutting edges. Fabrication using steel components was aided by the development of electric welding and stamped steel parts, both of which appeared in industry in about 1890. Plastics such as polyethylene, polystyrene and polyvinyl chloride (PVC) can be easily formed into shapes by extrusion, blow molding or injection molding, resulting in very low cost manufacture of consumer products, plastic piping, containers and parts.
An influential article that helped to frame and popularize the 20th century's definition of mass production appeared in a 1926 Encyclopædia Britannica supplement. The article was written based on correspondence with Ford Motor Company and is sometimes credited as the first use of the term.
Factory electrification
Electrification of factories began very gradually in the 1890s after the introduction of a practical DC motor by Frank J. Sprague and accelerated after the AC motor was developed by Galileo Ferraris, Nikola Tesla and Westinghouse, Mikhail Dolivo-Dobrovolsky and others. Electrification of factories was fastest between 1900 and 1930, aided by the establishment of electric utilities with central stations and the lowering of electricity prices from 1914 to 1917.
Electric motors were several times more efficient than small steam engines because central station generation was more efficient than small steam engines and because line shafts and belts had high friction losses. Electric motors also allowed more flexibility in manufacturing and required less maintenance than line shafts and belts. Many factories saw a 30% increase in output simply from changing over to electric motors.
Electrification enabled modern mass production, as with Thomas Edison's iron ore processing plant (about 1893) that could process 20,000 tons of ore per day with two shifts, each of five men. At that time it was still common to handle bulk materials with shovels, wheelbarrows and small narrow-gauge rail cars, and for comparison, a canal digger in previous decades typically handled five tons per 12-hour day.
The biggest impact of early mass production was in manufacturing everyday items, such as at the Ball Brothers Glass Manufacturing Company, which electrified its mason jar plant in Muncie, Indiana, U.S., around 1900. The new automated process used glass-blowing machines to replace 210 craftsman glass blowers and helpers. A small electric truck was used to handle 150 dozen bottles at a time where previously a hand truck would carry six dozen. Electric mixers replaced men with shovels handling sand and other ingredients that were fed into the glass furnace. An electric overhead crane replaced 36 day laborers for moving heavy loads across the factory.
According to Henry Ford:
The provision of a whole new system of electric generation emancipated industry from the leather belt and line shaft, for it eventually became possible to provide each tool with its own electric motor. This may seem only a detail of minor importance. In fact, modern industry could not be carried out with the belt and line shaft for a number of reasons. The motor enabled machinery to be arranged in the order of the work, and that alone has probably doubled the efficiency of industry, for it has cut out a tremendous amount of useless handling and hauling. The belt and line shaft were also tremendously wasteful – so wasteful indeed that no factory could be really large, for even the longest line shaft was small according to modern requirements. Also high speed tools were impossible under the old conditions – neither the pulleys nor the belts could stand modern speeds. Without high speed tools and the finer steels which they brought about, there could be nothing of what we call modern industry.
Mass production was popularized in the late 1910s and 1920s by Henry Ford's Ford Motor Company, which introduced electric motors to the then-well-known technique of chain or sequential production. Ford also bought or designed and built special purpose machine tools and fixtures such as multiple spindle drill presses that could drill every hole on one side of an engine block in one operation and a multiple head milling machine that could simultaneously machine 15 engine blocks held on a single fixture. All of these machine tools were arranged systematically in the production flow and some had special carriages for rolling heavy items into machining position. Production of the Ford Model T used 32,000 machine tools.
Buildings
The process of prefabrication, wherein parts are created separately from the finished product, is at the core of all mass-produced construction. Early examples include movable structures reportedly utilized by Akbar the Great, and the chattel houses built by emancipated slaves on Barbados. The Nissen hut, first used by the British during World War I, married prefabrication and mass production in a way that suited the needs of the military. The simple structures, which cost little and could be erected in just a couple of hours, were highly successful: over 100,000 Nissen huts were produced during World War I alone, and they would go on to serve in other conflicts and inspire a number of similar designs.
Following World War II, in the United States, William Levitt pioneered the building of standardized tract houses in 56 different locations around the country. These communities were dubbed Levittowns, and they could be constructed quickly and cheaply by leveraging economies of scale and specializing construction tasks in a process akin to an assembly line. This era also saw the invention of the mobile home, a small prefabricated house that can be transported cheaply on a truck bed.
In the modern industrialization of construction, mass production is often used for prefabrication of house components.
Fabrics and materials
Mass production has significantly impacted the fashion industry, particularly in the realm of fibers and materials. The advent of synthetic fibers, such as polyester and nylon, revolutionized textile manufacturing by providing cost-effective alternatives to natural fibers. This shift enabled the rapid production of inexpensive clothing, contributing to the rise of fast fashion. This reliance on mass production has raised concerns about environmental sustainability and labor conditions, spurring the need for greater ethical and sustainable practices within the fashion industry.
The use of assembly lines
Mass production systems for items made of numerous parts are usually organized into assembly lines. The assemblies pass by on a conveyor, or if they are heavy, hung from an overhead crane or monorail.
In a factory for a complex product, rather than one assembly line, there may be many auxiliary assembly lines feeding sub-assemblies (i.e. car engines or seats) to a backbone "main" assembly line. A diagram of a typical mass-production factory looks more like the skeleton of a fish than a single line.
Vertical integration
Vertical integration is a business practice that involves gaining complete control over a product's production, from raw materials to final assembly.
In the age of mass production, this caused shipping and trade problems: shipping systems were unable to transport huge volumes of finished automobiles (in Henry Ford's case) without causing damage, and government policies imposed trade barriers on finished units.
Ford built the Ford River Rouge Complex with the idea of making the company's own iron and steel in the same large factory site where parts and car assembly took place. River Rouge also generated its own electricity.
Upstream vertical integration, such as into raw materials, moves a company away from leading technology toward mature, low-return industries. Most companies chose to focus on their core business rather than vertical integration, which included buying parts from outside suppliers, who could often produce them as cheaply or more cheaply.
Standard Oil, the major oil company in the 19th century, was vertically integrated partly because there was no demand for unrefined crude oil, but kerosene and some other products were in great demand. The other reason was that Standard Oil monopolized the oil industry. The major oil companies were, and many still are, vertically integrated, from production to refining and with their own retail stations, although some sold off their retail operations. Some oil companies also have chemical divisions.
Lumber and paper companies at one time owned most of their timber lands and sold some finished products such as corrugated boxes. The tendency has been to divest of timber lands to raise cash and to avoid property taxes.
Advantages and disadvantages
The economies of mass production come from several sources. The primary cause is a reduction of non-productive effort of all types. In craft production, the craftsman must bustle about a shop, getting parts and assembling them. He must locate and use many tools many times for varying tasks. In mass production, each worker repeats one or a few related tasks that use the same tool to perform identical or near-identical operations on a stream of products. The exact tool and parts are always at hand, having been moved down the assembly line consecutively. The worker spends little or no time retrieving and/or preparing materials and tools, and so the time taken to manufacture a product using mass production is shorter than when using traditional methods.
The probability of human error and variation is also reduced, as tasks are predominantly carried out by machinery, although an error in operating such machinery has more far-reaching consequences. A reduction in labour costs, as well as an increased rate of production, enables a company to produce a larger quantity of one product at a lower cost than using traditional, non-linear methods.
However, mass production is inflexible because it is difficult to alter a design or production process after a production line is implemented. Also, all products produced on one production line will be identical or very similar, and introducing variety to satisfy individual tastes is not easy. However, some variety can be achieved by applying different finishes and decorations at the end of the production line if necessary. The start-up cost of the machinery is high, so the producer must be confident the product will sell, or it stands to lose a great deal of money.
The Ford Model T produced tremendous affordable output but was not very good at responding to demand for variety, customization, or design changes. As a consequence, Ford eventually lost market share to General Motors, which introduced annual model changes, more accessories and a choice of colors.
With each passing decade, engineers have found ways to increase the flexibility of mass production systems, driving down the lead times on new product development and allowing greater customization and variety of products.
Compared with other production methods, mass production can create new occupational hazards for workers. This is partly due to the need for workers to operate heavy machinery while also working close together with many other workers. Preventative safety measures, such as fire drills, as well as special training, are therefore necessary to minimise the occurrence of industrial accidents.
Socioeconomic impacts
In the 1830s, French political thinker and historian Alexis de Tocqueville identified one of the key characteristics of America that would later make it so amenable to the development of mass production: the homogeneous consumer base. De Tocqueville wrote in his Democracy in America (1835) that "The absence in the United States of those vast accumulations of wealth which favor the expenditures of large sums on articles of mere luxury ... impart to the productions of American industry a character distinct from that of other countries' industries. [Production is geared toward] articles suited to the wants of the whole people".
Mass production improved productivity, which was a contributing factor to economic growth and the decline in work week hours, alongside other factors such as transportation infrastructures (canals, railroads and highways) and agricultural mechanization. These factors caused the typical work week to decline from 70 hours in the early 19th century to 60 hours late in the century, then to 50 hours in the early 20th century and finally to 40 hours in the mid-1930s.
Mass production permitted great increases in total production. Under the European crafts system that persisted into the late 19th century, it was difficult to meet demand for products such as sewing machines and animal-powered mechanical harvesters. By the late 1920s many previously scarce goods were in good supply. One economist has argued that this constituted "overproduction" and contributed to high unemployment during the Great Depression. Say's law denies the possibility of general overproduction, and for this reason classical economists deny that it had any role in the Great Depression.
Mass production allowed the evolution of consumerism by lowering the unit cost of many goods used.
Mass production has been linked to the Fast Fashion Industry, often leaving the consumer with lower quality garments for a lower cost. Most fast-fashion clothing is mass-produced, which means it is typically made of cheap fabrics, such as polyester, and constructed poorly in order to keep short turnaround times to meet the demands of consumers and shifting trends.
| Technology | Basics_6 | null |
63229 | https://en.wikipedia.org/wiki/Bee-eater | Bee-eater | The bee-eaters are a group of birds in the family Meropidae, containing three genera and thirty-one species. Most species are found in Africa and Asia, with a few in southern Europe, Australia, and New Guinea. They are characterised by richly coloured plumage, slender bodies, and usually elongated central tail feathers. All have long down-turned bills and medium to long wings, which may be pointed or round. Male and female plumages are usually similar.
As their name suggests, bee-eaters predominantly eat flying insects, especially bees and wasps, which are caught on the wing from an open perch. The insect's stinger is removed by repeatedly hitting and rubbing the insect on a hard surface. During this process, pressure is applied to the insect's body, thereby discharging most of the venom.
Most bee-eaters are gregarious. They form colonies, nesting in burrows tunnelled into vertical sandy banks, often at the side of a river or in flat ground. As they mostly live in colonies, large numbers of nest holes may be seen together. The eggs are white, with typically five to the clutch. Most species are monogamous, and both parents care for their young, sometimes with assistance from related birds in the colony.
Bee-eaters may be killed by raptors; their nests are raided by rodents, weasels, martens and snakes, and they can carry various parasites. Some species are adversely affected by human activity or habitat loss, but none meet the International Union for Conservation of Nature's vulnerability criteria, and all are therefore evaluated as "least concern". Their conspicuous appearance means that they have been mentioned by ancient writers and incorporated into mythology.
Taxonomy
The bee-eaters were first named as a scientific group by the French polymath Constantine Samuel Rafinesque-Schmaltz, who created the bird subfamily Meropia for these birds in 1815. The name, now modernised as Meropidae, is derived from Merops, the Ancient Greek for "bee-eater", and the English term "bee-eater" was first recorded in 1668, referring to the European species.
The bee-eaters have been considered to be related to other families, such as the rollers, hoopoes and kingfishers, but ancestors of those families diverged from the bee-eaters at least forty million years ago, so any relationship is not close. The scarcity of fossils is unhelpful. Bee-eater fossils from the Pleistocene (2,588,000 to 11,700 years ago) have been found in Austria, and there are Holocene (from 11,700 years ago to present) specimens from Israel and Russia, but all have proved to be of the extant European bee-eater. Opinions have varied as to the bee-eater's nearest relatives. In 2001, Fry considered the kingfishers to be the most likely, whereas a large study published in 2008 found that bee-eaters are sister to all other Coraciiformes (rollers, ground rollers, todies, motmots and kingfishers). A 2009 book supported Fry's contention, but then a later study in 2015 suggested that the bee-eaters are sister to the rollers. The 2008 and 2015 papers both linked the kingfishers to the New World motmots.
More recent molecular phylogenetic studies have confirmed that the bee-eaters are more closely related to the rollers and ground rollers than they are to the todies, motmots and kingfishers. The relationship between the families is shown in the cladogram below. The number of species in each family is taken from the list maintained by Frank Gill, Pamela C. Rasmussen and David Donsker on behalf of the International Ornithological Committee (IOC).
The bee-eaters are generally similar in appearance, although they are normally divided into three genera. Nyctyornis comprises two large species with long throat feathers, the blue-bearded bee-eater and the red-bearded bee-eater, both of which have rounded wings, a ridged culmen, feathered nostrils and a relatively sluggish lifestyle. The purple-bearded bee-eater is the sole member of Meropogon, which is intermediate between Nyctyornis and the typical bee-eaters, having rounded wings and a "beard", but a smooth culmen and no nostril feathers. All the remaining species are normally retained in the single genus Merops. There are close relationships within this genus, for example the red-throated bee-eater and the white-fronted bee-eater form a superspecies, but formerly suggested genera, such as Aerops, Melittophagus, Bombylonax and Dicrocercus, have not been generally accepted for several decades since a 1969 paper united them in the current arrangement.
Species in taxonomic order
The bee-eater family contains the following species.
The Asian green bee-eater, African green bee-eater, and Arabian green bee-eater were previously considered to be a single species, and are still treated as such by some authorities.
A 2007 nuclear and mitochondrial DNA study produced a possible phylogenetic tree, although the position of the purple-bearded bee-eater seems anomalous, in that it appears amongst Merops species.
Description
The bee-eaters are morphologically a fairly uniform group. They share many features with related Coraciiformes such as the kingfishers and rollers, being large-headed (although less so than their relatives), short-necked, brightly plumaged and short-legged. Their wings may be rounded or pointed, with the wing shape closely correlated with the species' preferred foraging habitat and migratory tendencies. Shorter, rounder wings are found on species that are sedentary and make typically short foraging flights in denser forests and reed-beds. Those with more elongated wings are more migratory. All the bee-eaters are highly aerial; they take off strongly from perches, fly directly without undulations, and are able to change direction quickly, although they rarely hover.
The flight feathers of the wing comprise 10 primaries, the outermost being very small, and 13 secondaries, and there are 12 tail feathers.
The bills of bee-eaters are curved, long and end in a sharp point. The bill can bite strongly, particularly at the tip, and it is used as a pair of forceps with which to snatch insects from the air and crush smaller prey. The short legs have weak feet, and when it is moving on the ground a bee-eater's gait is barely more than a shuffle. The feet have sharp claws used for perching on vertical surfaces and also for nest excavation.
The plumage of the family is generally very bright and in most species is mainly or at least partially green, although the two carmine bee-eaters are primarily rose-coloured. Most of the Merops bee-eaters have a black bar through the eye and many have differently coloured throats and faces. The extent of the green in these species varies from almost complete in the green bee-eater to barely any green in the white-throated bee-eater. Three species, from equatorial Africa, have no green at all in their plumage, the black bee-eater, the blue-headed bee-eater and the rosy bee-eater. Many species have elongated central tail feathers.
There is little visible difference between the sexes in most of the family, although in several species the iris is red in the males and brown-red in the females, and in species with tail-streamers these may be slightly longer in males. Both the European and red-bearded bee-eaters have sex-based differences in their plumage colour, and the female rainbow bee-eater has shorter tail streamers than the male, which terminate in a club-shape that he lacks. There may be instances where bee-eaters are sexually dichromatic at the ultraviolet part of the colour spectrum, which humans cannot see. A study of blue-tailed bee-eater found that males were more colourful than females in UV light. Their overall colour was also affected by body condition, suggesting that there was a signalling component to plumage colour. Juveniles are generally similar to adults, except for the two Nyctyornis species, in which the young have mainly green plumage.
Bee-eaters have calls that are characteristic for each species. Most sound simple to the human ear, but show significant variability when studied in detail, carrying significant information for the birds.
Distribution and habitat
The bee-eaters have an Old World distribution, occurring from Europe to Australia. The centre of diversity of the family is Africa, although a number of species also occur in Asia. Single species occur in each of Europe, (the European bee-eater), Australia (the rainbow bee-eater) and Madagascar (the olive bee-eater, also found on mainland Africa). Of the three genera, Merops, which has the majority of the species, occurs across the entirety of the family's distribution. Nyctyornis is restricted to Asia, ranging from India and southern China to the Indonesian islands of Sumatra and Borneo. The genus Meropogon has a single species restricted to Sulawesi in Indonesia.
Bee-eaters are fairly indiscriminate in their choice of habitat. Their requirements are simply an elevated perch from which to watch for prey and a suitable ground substrate in which to dig their breeding burrow. Because their prey is entirely caught on the wing they are not dependent on any vegetation type. A single species, the blue-headed bee-eater, is found inside closed rainforest where it forages close to the ground in poor light in the gaps between large trees. Six other species are also closely associated with rainforest, but occur in edge habitat such as along rivers, in tree-fall gaps, off trees overhanging ravines or on emergent tree crowns above the main canopy.
Species that breed in subtropical or temperate areas of Europe, Asia and Australia are all migratory. The European bee-eaters that breed in southern Europe and Asia migrate to West and southern Africa. Another population of the same species breeds in South Africa and Namibia; these birds move northwards after breeding. In Australia the rainbow bee-eater is migratory in the southern areas of its range, migrating to Indonesia and New Guinea, but occurs year-round in northern Australia. Several species of bee-eater, are intra-African migrants; the white-throated bee-eater, for example, breeds on the southern edge of the Sahara and winters further south in equatorial rainforest. The most unusual migration is that of the southern carmine bee-eater, which has a three-stage migration; after breeding in a band between Angola and Mozambique it moves south to Botswana, Namibia and South Africa before moving north to its main wintering grounds in northern Angola, Congo and Tanzania.
Behaviour
The bee-eaters are diurnal, although a few species may migrate during the night if the terrain en route is unsuitable for stopping or if they are crossing the sea. Bee-eaters are highly social, and pairs sitting or roosting together are often so close that they touch (an individual distance of zero). Many species are colonial in the breeding season and some species are also highly gregarious when not nesting.
The social structures of the red-throated bee-eater and the white-fronted bee-eaters have been described as more complex than for any other bird species. The birds exist in colonies located on nesting cliffs, and have a stable structure all year round. These colonies typically contain five to 50 burrows, occasionally up to 200, and are composed of clans of two or three pairs, their helpers, and their offspring. The helpers are male offspring from a previous year. Within the colony, the males alternate between guarding their mate and attempting to make forced copulations with other females. The females in turn attempt to lay eggs in their neighbour's nests, an example of brood parasitism. Some individuals also specialise in kleptoparasitism, stealing prey collected by other colony members. The colony's daily routine is to emerge from the nesting holes or roosting branches soon after dawn, preen and sun themselves for an hour, then disperse to feed. Feeding territories are divided by clan, with each clan defending its territory from all others of the same species, including clans of the same colony. The clans return to the colony before dusk, and engage in more social behaviour before retiring for the night. Colonies are situated several hundred metres apart and have little to do with each other, although young individuals may disperse between colonies. As such, these species can be thought to have four tiers of social kinship: the individual pair, the family unit, the clan, and the colony as a whole.
Bee-eaters spend around 10% of their day on comfort activities. These include sunning themselves, dust bathing and water bathing. Sunning behaviour helps warm birds in the morning, reducing the need to use energy to raise their temperature. It also has a social aspect, as multiple birds adopt the same posture. Finally, it may help stimulate parasites in the feathers, making them easier to find and remove. Due to their hole-nesting lifestyle, bee-eaters accumulate a number of external parasites such as mites and flies. Together with sunning, bouts of dust bathing (or water bathing where available), as well as rigorous preening, keep the feathers and skin in good health. Bathing with water involves making shallow dives into a water body and then returning to a perch to preen.
Diet and feeding
The bee-eaters are almost exclusively aerial hunters of insect prey. Prey is caught either on the wing or more commonly from an exposed perch from which the bee-eater watches for prey. Smaller, rounder-winged bee-eaters typically hunt from branches and twigs closer to the ground, whereas the larger species hunt from tree tops or telephone wires. One unusual technique often used by carmine bee-eaters is to ride on the backs of bustards.
Prey can be spotted from a distance; European bee-eaters are able to spot a bee from far away, and blue-cheeked bee-eaters have been observed flying out to catch large wasps. Prey is approached directly or from behind. Prey that lands on the ground or on plants is usually not pursued. Small prey may be eaten on the wing, but larger items are returned to the perch where they are beaten until dead and then broken up. Insects with poisonous stings are first smacked on the branch, then, with the bird's eyes closed, rubbed to discharge the venom sac and stinger. This behaviour is innate, as demonstrated by a juvenile bird in captivity, which performed the task when first presented with wild bees. This bird was stung on the first five tries, but after about ten bees it was as adept at handling bees as adult birds.
Bee-eaters consume a wide range of insects; beyond a few distasteful butterflies they consume almost any insect from tiny Drosophila flies to large beetles and dragonflies. At one time or another, bee-eaters have been recorded eating beetles, mayflies, stoneflies, cicadas, termites, crickets and grasshoppers, mantises, true flies and moths. For many species, the dominant prey items are stinging members of the order Hymenoptera, namely wasps and bees. In a survey of 20 studies, the proportion of the diet made up by bees and wasps varied from 20% to 96%, with the average being 70%. Of these, honeybees can comprise a large part of the diet, as much as 89% of the overall intake. The preference for bees and wasps may have arisen because of the numerical abundance of these suitably sized insects. The giant honeybee is a particularly commonly eaten species. These bees attempt to congregate in a mass defence against the bee-eaters. In Israel, a European bee-eater was documented attempting to eat a small bat that it had caught, which probably could not fit down its throat.
Like kingfishers, bee-eaters regurgitate pellets of undigested material, typically long black oblongs.
Predation of honey bees
If an apiary is set up close to a bee-eater colony, a larger number of honey bees are eaten because they are more abundant. However, studies show that the bee-eaters do not intentionally fly into the apiary; rather, they feed on insects caught on pastures and meadows within a certain radius of the colony, the maximum distance being reached only when there is a shortage of food. Observations show that the birds actually enter the apiary only in cold and rainy periods, when the bees do not leave the hive and other insect prey are harder for the bee-eaters to detect.
Many bee-keepers believe that the bee-eaters are the main obstacle causing worker bees not to forage, and instead stay inside the hives for much of the day between May and the end of August. However, a study carried out in a eucalyptus forest in the Alaluas region in the Murqub District in Libya, east of Tripoli, showed that the bee-eaters were not the main obstacle to bee foraging; in some cases, the foraging rate was higher in the presence of the birds than in their absence. The average bird meal consisted of 90.8% honey bees and 9.2% beetles.
Predation is more likely when the bees are queening or during the peak of migration, from late March till mid-April, and in mid-September. Hives close to or under trees or overhead cables are at increased risk as the birds pounce on flying insects from these perches.
Breeding
Bee-eaters are monogamous during a nesting season, and in sedentary species, pairs may stay together for multiple years. Migratory bee-eaters may find new mates each breeding season. The courtship displays of the bee-eaters are rather unspectacular, with some calling and raising of throat and wing feathers. The exception is the performance of the white-throated bee-eater. Their "butterfly display" involves both members of a pair performing a gliding display flight with shallow wing-beats; they then perch facing each other, raising and folding their wings while calling. Most members of the family engage in courtship feeding, where the male presents prey items to the female, and such feeding can account for much, if not all, of the energy females require for egg creation.
Like almost all Coraciiformes the bee-eaters are cavity nesters. In the case of the bee-eaters the nests are burrows dug into the ground, either into the sides of earth cliffs or directly into level soil. Both types of nesting site are vulnerable, those on level ground are vulnerable to trampling and small predators, whereas those in cliffs, which are often the banks of rivers, are vulnerable to flash floods, which can wipe out dozens or hundreds of nests. Many species will nest either on cliffs or on level ground but prefer cliffs, although Böhm's bee-eater always nests on level ground. The burrows are dug by both birds in the pair, sometimes assisted by helpers. The soil or sand is loosened with jabs of the sharp bill, then the feet are used to kick out the loose soil. It has been suggested that riverine loess deposits that do not crumble when excavated may be favoured by the larger bee-eaters. There may be several false starts where nests are dug partway before being abandoned; in solitary species this can give the impression of colonial living even when that is not the case. The process of nest building can take as long as twenty days to complete, during which time the bill can be blunted and shortened. Nests are generally used only for a single season and are rarely used twice by the bee-eaters, but abandoned nests may be used by other birds, snakes and bats as shelter and breeding sites.
No nesting material is used in the breeding cavity. One white egg is laid each day until the typical clutch of about five eggs is complete. Incubation starts soon after the first egg is laid, with both parents sharing this duty in the day, but only the female at night. The eggs hatch in about 20 days, and the newly hatched young are blind, pink and naked. For most species, the eggs do not all hatch at the same time, so if food is in short supply only the older chicks survive. Adults and young defecate in the nest, and their pellets are trodden underfoot, making the nest cavity very malodorous. The chicks are in the nest for about 30 days.
Bee-eaters may nest as single pairs, loose colonies or dense colonies. Smaller species tend to nest solitarily, while medium-sized bee-eaters have small colonies, and larger and migratory species nest in large colonies that can number in the thousands. In some instances, colonies may contain more than one species of bee-eater. In species that nest gregariously, breeding pairs may be assisted by up to five helpers. These birds may alternate between breeding themselves and helping in successive years.
Predators and parasites
Bee-eater nests may be raided by rats and snakes, and the adults are hunted by birds of prey such as the Levant sparrowhawk. The little bee-eater and red-throated bee-eaters are hosts of the greater honeyguide and the lesser honeyguide, both brood parasites. The young honeyguides kill the bee-eater's chicks and destroy any eggs. The begging call of the honeyguide sounds like two bee-eater chicks, ensuring a good supply of food from the adult bee-eaters.
Bee-eaters may be infested by several blood-feeding flies of the genus Carnus, and the biting fly Ornithophila metallica. Other parasites include chewing lice of the genera Meromenopon, Brueelia and Meropoecus, some of which are specialist parasites of bee-eaters, and the stickfast flea Echidnophaga gallinacea. The hole-nesting lifestyle of bee-eaters means that they tend to carry a higher burden of external parasites than non-hole-nesting bird species. Bee-eaters may also be infected by protozoan blood parasites of the genus Haemoproteus, including H. meropis.
Fly larvae of the genus Fannia live in the nests of at least European bee-eaters, and feed on faeces and food remains. Their presence and cleaning activities appear to benefit the developing bee-eaters.
Status
The International Union for Conservation of Nature (IUCN) assesses species vulnerability in terms of total population and the rate of any population decline. None of the bee-eaters meet the IUCN vulnerability criteria, and all are therefore evaluated as "Least-concern species".
Open country species, which comprise the majority of bee-eaters, have mostly expanded in range as more land is converted to agriculture, but some tropical forest species have suffered declines through loss of habitat, although no species or subspecies gives serious cause for concern. There is some human persecution of bee-eaters, with nest holes being blocked, adults shot or limed, or young taken for food. More generally problematic is the unintended destruction of nests. This can occur through cattle trampling, as with the blue-headed bee-eater in Kenya, or loss of forests, with massive conversion of native forest to oil palm plantations in Malaysia being particularly concerning.
A study of the southern carmine bee-eater in Zimbabwe showed that it was affected by deliberate interference and persecution and loss of woodlands, and that nesting sites are lost through poor water management leading to river bank damage, dam construction and panning for gold. Colonies are becoming concentrated into the national parks and the Zambezi Valley. The well-studied European bee-eater is trapped and shot on migration in countries bordering the Mediterranean, an estimated 4,000–6,000 annually being killed in Cyprus alone, but with a global population of between 170,000 and 550,000 pairs even losses on that scale make little overall impact.
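A rough back-of-the-envelope check (not from the article, and treating the quoted pair counts as a simple proxy for the adult population) shows why losses of that magnitude are small: even in the worst case they amount to under two per cent of adults per year.

    # Illustrative arithmetic using only the figures quoted above.
    pairs_low, pairs_high = 170_000, 550_000            # global breeding pairs
    adults_low, adults_high = 2 * pairs_low, 2 * pairs_high
    killed_low, killed_high = 4_000, 6_000               # estimated annual losses in Cyprus

    worst = killed_high / adults_low                     # most killed, smallest population
    best = killed_low / adults_high                      # fewest killed, largest population
    print(f"{best:.2%} to {worst:.2%} of adults per year")   # roughly 0.36% to 1.76%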
In culture
Bee-eaters were mentioned by ancient writers such as Aristotle and Virgil, who both advised beekeepers to kill the birds. Aristotle knew that bee-eaters nested at the end of long tunnels and knew the size of their clutch. He claimed that nesting adults were fed by their own young, a belief probably based on observations of genuine helping at the nest by related birds.
In Greek mythology, the Theban Botres was fatally struck by his father when he desecrated a ritual sacrifice of a ram to the god Apollo by tasting the victim's brains. The god took pity on him, turning him into a bee-eater.
The Ancient Egyptians believed that bee-eaters had medical properties, prescribing the application of bee-eater fat to deter biting flies, and treating the eyes with the smoke from charred bee-eater legs to cure an unspecified female complaint.
In Hinduism, the shape of the bird in flight was thought to resemble a bow, with the long bill as an arrow. This led to a Sanskrit name meaning "Vishnu's bow" and an association with archer gods. Scandalmongers were thought to be reincarnated as bee-eaters, because of the metaphorical poison they bore in their mouths.
Depictions in classical art are rare for such striking birds. The only known Ancient Egyptian example is a relief, probably of a little green bee-eater, on a wall of Queen Hatshepsut's mortuary temple, and an early Roman mural depicting blue-cheeked bee-eaters was found in the villa of Agrippina. Bee-eaters have been depicted on the postage stamps of at least 38 countries, the European and Carmine bee-eaters being the most common subjects, with 18 and 11 countries respectively.
| Biology and health sciences | Coraciiformes | Animals |
63258 | https://en.wikipedia.org/wiki/Cornish%20Rex | Cornish Rex | The Cornish Rex is a breed of domestic cat. The Cornish Rex only has down hair. Most breeds of cat have three different types of hair in their coats: the outer fur or "guard hairs", a middle layer called the "awn hair"; and the down hair or undercoat, which is very fine and about 1 cm long. Cornish Rexes only have the undercoat. The curl in their fur is caused by a different mutation and gene than that of the Devon Rex. The breed originated in Cornwall, Great Britain.
Characteristics
The coat of a Cornish Rex is extremely fine and naturally curly. Their light coat means that they are best suited for indoor living in warm and dry conditions, as they are sensitive to low temperatures.
The breed is sometimes referred to as the Greyhound of cats because of its sleek appearance and the galloping run characteristic of the breed.
Appearance
According to the Governing Council of the Cat Fancy (GCCF) standard, the Cornish Rex's colour is not restricted, so the cat may be any colour.
Aside from its distinctive coat, the Cornish Rex is set apart by its 'foreign type', slender legs and tail, oval eyes, and wedge-shaped head. The Cornish Rex's ears are large and wide at the base with rounded tips and are described as being almost mussel-shell shaped. The eyes are medium in size and come in all varieties of eye colour. The body is slender and muscular with small paws.
Genetics
In 2013, researchers identified the mutation that defines the Cornish Rex breed. Genome-wide analyses were performed in the Cornish Rex breed and in 11 phenotypically diverse breeds and two random bred populations. A gene on chromosome A1, the lysophosphatidic acid receptor 6 (LPAR6), was identified to have a 4 base pair deletion. This induces a premature stop codon in the receptor that is absent in all straight haired cats analyzed. LPAR6 encodes a receptor essential for maintaining the structural integrity of the hair shaft. In humans, LPAR6 mutations result in a form of ectodermal dysplasia characterised by a woolly hair phenotype.
Origin
The Cornish Rex is a genetic mutation that originated from a litter of kittens born in the 1950s on a farm in Cornwall, UK. One of the kittens, a cream-colored male named Kallibunker, had an extremely unusual, fine and curly coat; he was the first Cornish Rex. The owner then backcrossed Kallibunker to his mother to produce 2 other curly-coated kittens. The male, Poldhu, sired a female called Lamorna Cove who was later brought to America and crossed with a Siamese, giving the breed their long whippy tails and big ears.
The Devon Rex looks similar in appearance to the Cornish Rex but has guard hairs and sheds. The Devon Rex mutation is different from the Cornish Rex mutation in that the Devon has shortened guard hairs, while the Cornish Rex lacks guard hairs altogether.
Hypoallergenic
Despite some belief to the contrary, the Cornish Rex's short hair does not make it non- or hypo-allergenic. Allergic reactions from cats are not the result of hair length, but from a glycoprotein known as Fel d 1, produced in the sebaceous glands of the skin, saliva, and urine. Most people who have cat allergies are reacting to this protein in cat saliva and cat dander: when the cat cleans its fur, the saliva dries and is transformed into dust that people breathe in. Since Cornish Rex cats groom as much as or even more than ordinary cats, a Cornish Rex cat can still produce a reaction in people who are allergic to cats. The breed is, however, widely reported to provoke fewer or milder allergic reactions.
| Biology and health sciences | Cats | Animals |
63355 | https://en.wikipedia.org/wiki/Columbidae | Columbidae | Columbidae is a bird family consisting of doves and pigeons. It is the only family in the order Columbiformes. These are stout-bodied birds with short necks and short slender bills that in some species feature fleshy ceres. They feed largely on plant matter, feeding on seeds (granivory), fruit (frugivory), and foliage (folivory). The family occurs worldwide, often in close proximity with humans, but the greatest diversity is in the Indomalayan and Australasian realms.
Columbidae contains 344 species divided into 50 genera. 59 species are listed as threatened, and 13 are extinct, including the dodo, an island bird, and the passenger pigeon, whose flocks were once counted in the billions.
In colloquial English, the smaller species tend to be called "doves", and the larger ones "pigeons", although the distinction is not consistent, and there is no scientific separation between them. Historically, the common names for these birds involve a great deal of variation. The bird most commonly referred to as "pigeon" is the domestic pigeon, or rock dove, which is common in many cities as the feral pigeon.
Doves and pigeons build relatively flimsy nests, often using sticks and other debris, which may be placed on branches of trees, on ledges, or on the ground, depending on species. They lay one or (usually) two white eggs at a time, and both parents care for the young. Unlike most birds, both sexes of doves and pigeons produce "crop milk" to feed to their young, secreted by a sloughing of fluid-filled cells from the lining of the crop.
Unfledged baby doves and pigeons are called squabs and are generally able to fly by five weeks old. These fledglings, with their immature squeaking voices, are called squeakers once they are weaned, and leave the nest after 25–32 days.
Since ancient times, many species in Columbidae have developed intricate cultural and practical relations with humans. Doves were important symbols of the goddesses Innana, Asherah, and Aphrodite, and revered by the early Christian, Islamic and Jewish religions. Domestication of pigeons led to significant use of homing pigeons for communication, including war pigeons, such as the 32 pigeons who were awarded the Dickin Medal for "brave service" to their country, in World War II.
Etymology
Pigeon is a French word that derives from the Latin pipio, for a "peeping" chick, while dove is an ultimately Germanic word, possibly referring to the bird's diving flight. The English dialectal word culver appears to derive from Latin columba. A group of doves has sometimes been called a "dule", taken from the French word deuil ("mourning").
Origin and evolution
Columbiformes is one of the most diverse non-passerine clades of neoavians, and its origins are in the Cretaceous and the result of a rapid diversification at the end of the K-Pg boundary. Whole genome analyses have found the columbiformes form a sister clade of a group conformed by the sandgrouses (Pterocliformes) and mesites (Mesitornithiformes).
Taxonomy and systematics
The name 'Columbidae' for the family was first used by the English zoologist William Elford Leach in a guide to the contents of the British Museum published in 1819. Columbidae is the only living family in the order Columbiformes. The sandgrouse (Pteroclidae) were formerly placed here, but were moved to a separate order, Pterocliformes, based on anatomical differences (such as the inability to drink by "sucking" or "pumping").
The Columbidae were usually divided into five subfamilies, probably inaccurately. For example, the American ground and quail doves (Geotrygon), which are usually placed in the Columbinae, seem to be two distinct subfamilies. The order presented here follows Baptista et al. (1997), with some updates.
The arrangement of genera and naming of subfamilies is in some cases provisional because analyses of different DNA sequences yield results that differ, often radically, in the placement of certain (mainly Indo-Australian) genera. This ambiguity, probably caused by long branch attraction, seems to confirm the first pigeons evolved in the Australasian region, and that the "Treronidae" and allied forms (crowned and pheasant pigeons, for example) represent the earliest radiation of the group.
The family Columbidae also contains the former family Raphidae, consisting of the extinct Rodrigues solitaire and the dodo. These species are now known to be part of the Indo-Australian radiation that produced the three small subfamilies mentioned above, with the fruit doves and pigeons (including the Nicobar pigeon). Therefore, they are here included as a subfamily Raphinae, pending better material evidence of their exact relationships.
These taxonomic issues are exacerbated by columbids not being well represented in the fossil record, with no truly primitive forms having been found to date. The genus Gerandia has been described from Early Miocene deposits in France, but while it was long believed to be a pigeon, it is now considered a sandgrouse. Fragmentary remains of a probably "ptilinopine" Early Miocene pigeon were found in the Bannockburn Formation of New Zealand and described as Rupephaps; "Columbina" prattae from roughly contemporary deposits of Florida is nowadays tentatively separated in Arenicolumba, but its distinction from Columbina/Scardafella and related genera needs to be more firmly established (e.g. by cladistic analysis). Apart from that, all other fossils belong to extant genera.
List of genera
Fossil species of uncertain placement:
Genus †Arenicolumba Steadman, 2008
Genus †Rupephaps Worthy, Hand, Worthy, Tennyson, & Scofield, 2009 (St. Bathans pigeon, Miocene of New Zealand)
Subfamily Columbinae (typical pigeons and doves)
Tribe Columbini
Genus Patagioenas (American pigeons, 17 species)
Genus †Ectopistes (passenger pigeon; extinct 1914)
Genus Reinwardtoena (3 species)
Genus Turacoena (3 species)
Genus Macropygia (typical cuckoo-doves, 15 species)
Genus Streptopelia (turtle doves and collared doves, 13 species)
Genus †Dysmoropelia Olson, 1975 (Saint Helena dove) (prehistoric)
Genus Columba (Old World pigeons, 35 species of which 2 recently extinct)
Genus Spilopelia (2 species)
Genus Nesoenas (3 species)
Tribe Zenaidini [Leptotilinae] (quail-doves and allies)
Genus Geotrygon (10 species)
Genus Leptotrygon (olive-backed quail-dove)
Genus Leptotila (11 species)
Genus Zenaida (7 species)
Genus Zentrygon (8 species)
Genus Starnoenas (blue-headed quail-dove) - this genus has recently been found to be basal to the entire subfamily Columbinae, and may be better placed in its own subfamily.
Subfamily Claravinae (American ground doves)
Genus Claravis (blue ground dove)
Genus Paraclaravis (2 species)
Genus Uropelia (long-tailed ground dove)
Genus Metriopelia (4 species)
Genus Columbina (9 species)
Subfamily Raphinae
Tribe Phabini (bronzewings and relatives)
Genus Henicophaps (2 species)
Genus Gallicolumba (bleeding-hearts and allies, 7 species)
Genus Pampusana (13 species of which 3 recently extinct)
Genus Ocyphaps (crested pigeon)
Genus Petrophassa (rock pigeons, 2 species)
Genus Leucosarcia (wonga pigeon)
Genus Geopelia (5 species)
Genus Phaps (Australian bronzewings, 3 species)
Genus Geophaps (3 species)
Tribe Raphini [Didunculinae; Otidiphabinae; Gourinae]
Genus ?†Natunaornis (Viti Levu giant pigeon) (prehistoric)
Genus Trugon (thick-billed ground pigeon)
Genus †Microgoura (Choiseul crested pigeon, extinct early 20th century)
Genus Otidiphaps (pheasant pigeon)
Genus Goura (crowned pigeons, 4 species)
Genus Didunculus (tooth-billed pigeon)
Genus ?†Deliaphaps De Pietri, Scofield, Tennyson, Hand, & Worthy, 2017 (Zealandian dove, Miocene of New Zealand)
Genus Caloenas (Nicobar pigeon)
Genus †Bountyphaps Worthy & Wragg, 2008 (Henderson Island pigeon) (prehistoric)
Subtribe Raphina (Dodo and solitaire)
Genus †Raphus (dodo, extinct late 17th century)
Genus †Pezophaps (Rodrigues solitaire, extinct c. 1730)
Tribe Turturini
Genus Phapitreron (brown doves, 3 species)
Genus Oena (Namaqua dove, tentatively placed here)
Genus Turtur (wood doves, 5 species; tentatively placed here)
Genus Chalcophaps (emerald doves, 3 species)
Tribe Treronini
Genus Treron (green pigeons, 30 species)
Tribe Ptilinopini (fruit doves and imperial pigeons)
Genus Ducula (imperial pigeons, 42 species)
Genus Ptilinopus [Drepanoptila; Alectroenas] (fruit doves, around 50 living species, 1–2 recently extinct)
Genus Hemiphaga (2 species)
Genus Lopholaimus (topknot pigeon)
Genus Cryptophaps (sombre pigeon)
Genus Gymnophaps (mountain pigeons, 4 species)
Genus ?†Tongoenas Steadman & Takano, 2020 (Tongan giant pigeon) (prehistoric)
Description
Size and appearance
Pigeons and doves exhibit considerable variation in size and weight. The largest species is the crowned pigeon of New Guinea, which is nearly turkey-sized. The smallest is the common ground dove (Columbina passerina) of the genus Columbina, which is about the size of a house sparrow. The dwarf fruit dove has a marginally smaller total length than any other species from this family. One of the largest arboreal species, the Marquesan imperial pigeon, currently battles extinction.
Anatomy and physiology
Overall, the anatomy of Columbidae is characterized by short legs, short bills with a fleshy cere, and small heads on large, compact bodies. Like some other birds, the Columbidae have no gall bladders. Some medieval naturalists concluded they have no bile (gall), which in the medieval theory of the four humours explained the allegedly sweet disposition of doves. In fact, however, they do have bile (as Aristotle had earlier realized), which is secreted directly into the gut.
The wings of most species are large, and have eleven primary feathers; pigeons have strong wing muscles (wing muscles comprise 31–44% of their body weight) and are among the strongest fliers of all birds.
In a series of experiments in 1975 by Dr. Mark B. Friedman, using doves, their characteristic head bobbing was shown to be due to their natural desire to keep their vision constant. It was shown yet again in a 1978 experiment by Dr. Barrie J. Frost, in which pigeons were placed on treadmills; it was observed that they did not bob their heads, as their surroundings were constant.
Feathers
Columbidae have unique body feathers, with the shaft being generally broad, strong, and flattened, tapering abruptly to a fine point. In general, the aftershaft is absent, although small ones may be present on some tail and wing feathers. Body feathers have very dense, fluffy bases, are attached loosely to the skin, and drop out easily. Possibly serving as a predator avoidance mechanism, large numbers of feathers fall out in the attacker's mouth if the bird is snatched, facilitating the bird's escape. The plumage of the family is variable.
Granivorous species tend to have dull plumage, with a few exceptions, whereas the frugivorous species have brightly coloured plumage. The genera Chalcophaps, Ptilinopus and Alectroenas include some of the most brightly coloured pigeons. Pigeons and doves may be sexually monochromatic or dichromatic. In addition to bright colours, some pigeon species may have crests or other ornamentation.
Flight
Many Columbidae are excellent fliers due to the lift provided by their large wings, which results in low wing loading; they are highly maneuverable in flight and have a low aspect ratio due to the width of their wings, allowing for quick flight launches and the ability to escape from predators, but at a high energy cost. A few species are long-distance migrants, with some populations of the European turtle dove migrating in excess of 5,000 km between northern Europe in summer and tropical Africa in winter, and the Oriental turtle dove nearly as far in eastern Asia between eastern Siberia and southern China.
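For readers unfamiliar with the aerodynamic terms, the short sketch below applies the standard definitions of wing loading (weight divided by wing area) and aspect ratio (wingspan squared divided by wing area); the numerical values are hypothetical, pigeon-like figures rather than measurements from the text.

```python
# Illustrative only: the numbers below are rough, invented values for a
# pigeon-sized bird, not data from the article.

G = 9.81  # gravitational acceleration, m/s^2

def wing_loading(mass_kg: float, wing_area_m2: float) -> float:
    """Wing loading = weight / wing area, in newtons per square metre."""
    return mass_kg * G / wing_area_m2

def aspect_ratio(wingspan_m: float, wing_area_m2: float) -> float:
    """Aspect ratio = wingspan squared / wing area (dimensionless).
    Broad wings give a large area for a given span, hence a low ratio."""
    return wingspan_m ** 2 / wing_area_m2

mass = 0.35   # kg, hypothetical body mass
span = 0.65   # m, hypothetical wingspan
area = 0.065  # m^2, hypothetical combined wing area

print(f"wing loading : {wing_loading(mass, area):.1f} N/m^2")  # about 53 N/m^2
print(f"aspect ratio : {aspect_ratio(span, area):.1f}")        # about 6.5
```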
Distribution and habitat
Pigeons and doves are distributed everywhere on Earth, except for the driest areas of the Sahara Desert, Antarctica and its surrounding islands, and the high Arctic. They have colonised most of the world's oceanic islands, reaching eastern Polynesia and the Chatham Islands in the Pacific, Mauritius, the Seychelles and Réunion in the Indian Ocean, and the Azores in the Atlantic Ocean.
The family has adapted to most of the habitats available on the planet. These species may be arboreal, terrestrial, or semi-terrestrial. Various species also inhabit savanna, grassland, desert, temperate woodland and forest, mangrove forest, and even the barren sands and gravels of atolls.
Some species have large natural ranges. The eared dove ranges across the entirety of South America from Colombia to Tierra del Fuego, the Eurasian collared dove has a massive (if discontinuous) distribution from Britain across Europe, the Middle East, India, Pakistan and China, and the laughing dove across most of sub-Saharan Africa, as well as India, Pakistan, and the Middle East.
When including human-mediated introductions, the largest range of any species is that of the rock dove, also known as the common pigeon. This species had a large natural distribution from Britain and Ireland to northern Africa, across Europe, Arabia, Central Asia, India, the Himalayas and up into China and Mongolia. The range of the species increased dramatically upon domestication, as the species went feral in cities around the world. The common pigeon is currently resident across most of North America, and has established itself in cities and urban areas in South America, sub-Saharan Africa, Southeast Asia, Japan, Australia, and New Zealand. A 2020 study found that the east coast of the United States includes two pigeon genetic megacities, in New York and Boston, and observes that the birds do not mix together.
As well as the rock dove, several other species of pigeon have become established outside of their natural range after escaping captivity, and other species have increased their natural ranges due to habitat changes caused by human activity.
Other species of Columbidae have tiny, restricted distributions, usually seen on small islands, such as the whistling dove, which is endemic to the tiny Kadavu Island in Fiji, the Caroline ground dove, restricted to two islands, Truk and Pohnpei in the Caroline Islands, and the Grenada dove, which is only found on the island of Grenada in the Caribbean.
Some continental species also have tiny distributions, such as the black-banded fruit dove, which is restricted to a small area of the Arnhem Land of Australia, the Somali pigeon, found only in a tiny area of northern Somalia, and Moreno's ground dove, endemic to the area around Salta and Tucuman in northern Argentina.
Behaviour
Feeding
Seeds and fruit form the major component of the diets of pigeons and doves, and the family can be loosely divided between seed-eating, or granivorous, species, and fruit-and-mast-eating, or frugivorous, species, though many species use both resources.
The granivorous species typically feed on seed found on the ground, whereas the frugivorous species tend to feed in trees. The two groups can be distinguished morphologically: granivores tend to have thick walls in their gizzards, intestines, and esophagi, whereas frugivores have thin walls and shorter intestines than the seed-eaters. Frugivores are capable of clinging to branches and even hanging upside down to reach fruit.
In addition to fruit and seeds, a number of other food items are taken by many species. Some, particularly the ground doves and quail-doves, eat a large number of prey items such as insects and worms. One species, the atoll fruit dove, is specialised in taking insect and reptile prey. Snails, moths, and other insects are taken by white-crowned pigeons, orange fruit doves, and ruddy ground doves.
Urban feral pigeons, descendants of domestic rock doves (Columba livia), live in urban environments, which disturbs their natural feeding habits. They depend on human activity and interaction to obtain food, foraging for spilled food or food deliberately provided by humans.
Status and conservation
While many species of pigeons and doves have benefited from human activities and have increased their ranges, many other species have declined in numbers and some have become threatened or even succumbed to extinction. Among the ten species to have become extinct since 1600 (the conventional date for estimating modern extinctions) are two of the most famous extinct species, the dodo and the passenger pigeon.
The passenger pigeon was exceptional for a number of reasons. In modern times, it is the only pigeon species not confined to islands to have become extinct, even though it was once the most numerous species of bird on Earth. Its former numbers are difficult to estimate, but one ornithologist, Alexander Wilson, estimated one flock he observed contained over two billion birds. The decline of the species was abrupt; in 1871, a breeding colony was estimated to contain over a hundred million birds, yet the last individual in the species was dead by 1914. Although habitat loss was a contributing factor, the species is thought to have been massively over-hunted, being used as food for slaves and, later, the poor, in the United States throughout the 19th century.
The dodo, and its extinction, was more typical of the extinctions of pigeons in the past. Like many species that colonise remote islands with few predators, it lost much of its predator avoidance behaviour, along with its ability to fly. The arrival of people, along with a suite of other introduced species such as rats, pigs, and cats, quickly spelled the end for this species and many other island species that have become extinct.
Around 59 species of pigeons and doves are threatened with extinction today, about 19% of all species. Most of these are tropical and live on islands. All of the species are threatened by introduced predators, habitat loss, hunting, or a combination of these factors. In some cases, they may be extinct in the wild, as is the Socorro dove of Socorro Island, Mexico, last seen in the wild in 1972, driven to extinction by habitat loss and introduced feral cats. In some areas, a lack of knowledge means the true status of a species is unknown; the Negros fruit dove has not been seen since 1953, and may or may not be extinct, and the Polynesian ground dove is classified as critically endangered, as whether it survives or not on remote islands in the far west of the Pacific Ocean is unknown.
Various conservation techniques are employed to prevent these extinctions, including laws and regulations to control hunting pressure, the establishment of protected areas to prevent further habitat loss, the establishment of captive populations for reintroduction back into the wild (ex situ conservation), and the translocation of individuals to suitable habitats to create additional populations.
Military
The pigeon was used in both World War I and World War II, notably by the Australian, French, German, American, and UK forces, and was honoured for this service with various awards. On 2 December 1943, three pigeons, Winkie, Tyke, and White Vision, serving with Britain's Royal Air Force, were awarded the first Dickin Medals for rescuing an air force crew during World War II. Thirty-two pigeons have been decorated with the Dickin Medal, citing their "brave service" in war contributions, including Commando, G.I. Joe, Paddy, Royal Blue, and William of Orange.
Cher Ami, a homing pigeon in World War I, was awarded the French Croix de Guerre with a palm Oak Leaf Cluster for his service at Verdun. Despite having almost lost a leg and being shot in the chest, he managed to travel around 25 miles to deliver the message that saved 194 men of the Lost Battalion of the 77th Infantry Division in the Battle of the Argonne in October 1918. After Cher Ami died, he was mounted and is now part of the permanent exhibit at the National Museum of American History of the Smithsonian Institution.
A grand ceremony was held in Buckingham Palace to commemorate a platoon of pigeons that braved the battlefields of Normandy to deliver vital plans to Allied forces on the fringes of Germany. Three of the actual birds that received the medals are on show in the London Military Museum so that well-wishers can pay their respects. In Brussels, there is a monument commemorating the pigeons that served in World War I.
Domestication
The rock dove has been domesticated for hundreds of years. It has been bred into several varieties kept by hobbyists, of which the best known is the homing pigeon or racing homer. Other popular breeds are tumbling pigeons such as the Birmingham roller, and fancy varieties that are bred for certain physical characteristics such as large feathers on the feet or fan-shaped tails. Domesticated rock pigeons are also bred as carrier pigeons, used for thousands of years to carry brief written messages, and release doves used in ceremonies. White doves are also used for entertainment and amusement, as they are capable of solving puzzles and performing intricate tricks. A variant called the zurito, bred for its speed, may be used in live pigeon shooting.
In religion
In ancient Mesopotamia, doves were prominent animal symbols of Inanna-Ishtar, the goddess of love, sexuality, and war. Doves are shown on cultic objects associated with Inanna as early as the beginning of the third millennium BC. Lead dove figurines were discovered in the temple of Ishtar at Aššur, dating to the thirteenth century BC, and a painted fresco from Mari, Syria, shows a giant dove emerging from a palm tree in the temple of Ishtar, indicating that the goddess herself was sometimes believed to take the form of a dove. In the Epic of Gilgamesh, Utnapishtim releases a dove and a raven to find land; the dove merely circles and returns. Only then does Utnapishtim send forth the raven, which does not return, and Utnapishtim concludes the raven has found land.
In the ancient Levant, doves were used as symbols for the Canaanite mother goddess Asherah. The ancient Greek word for "dove" was peristerá, which may be derived from the Semitic phrase peraḥ Ištar, meaning "bird of Ishtar". In classical antiquity, doves were sacred to the Greek goddess Aphrodite, who absorbed this association with doves from Inanna-Ishtar. Aphrodite frequently appears with doves in ancient Greek pottery. The temple of Aphrodite Pandemos on the southwest slope of the Athenian Acropolis was decorated with relief sculptures of doves with knotted fillets in their beaks and votive offerings of small, white, marble doves were discovered in the temple of Aphrodite at Daphni. During Aphrodite's main festival, the Aphrodisia, her altars would be purified with the blood of a sacrificed dove. Aphrodite's associations with doves influenced the Roman goddesses Venus and Fortuna, causing them to become associated with doves as well.
In the Hebrew Bible, doves or young pigeons are acceptable burnt offerings for those who cannot afford a more expensive animal. In Genesis, Noah sent a dove out of the ark, but it came back to him because the floodwaters had not receded. Seven days later, he sent it again and it came back with an olive branch in its beak, indicating the waters had receded enough for an olive tree to grow. "Dove" is also a term of endearment in the Song of Songs and elsewhere. In Hebrew, Jonah (יוֹנָה) means dove. The "sign of Jonas" in Matthew 16 is related to the "sign of the dove".
Jesus's parents sacrificed doves on his behalf after his circumcision (Luke 2:24). Later, the Holy Spirit descended upon Jesus at his baptism like a dove (Matthew), and subsequently the "peace dove" became a common Christian symbol of the Holy Spirit.
In Islam, doves and the pigeon family in general are respected and favoured because they are believed to have assisted the final prophet of Islam, Muhammad, in distracting his enemies outside the cave of Thaw'r, in the great Hijra. A pair of pigeons had built a nest and laid eggs at once, and a spider had woven cobwebs, which in the darkness of the night made the fugitives believe that Muhammad could not be in that cave.
As food
Several species of pigeons and doves are used as food; however, all types are edible. Domesticated or hunted pigeons have been used as a source of food since the times of the Ancient Middle East, Ancient Rome, and Medieval Europe. It is a familiar meat in Jewish, Arab, and French cuisines. According to the Tanakh, doves are kosher, and they are the only birds that may be used for a korban. (Other kosher birds may be eaten, but not brought as a korban.) Pigeon is also used in Asian cuisines such as Chinese, Assamese, and Indonesian cuisines.
In Europe, the wood pigeon is commonly shot as a game bird, while rock pigeons were originally domesticated as a food species, and many breeds were developed for the quality of their meat. The extinction of the passenger pigeon in North America was at least partly due to shooting for use as food. Mrs Beeton's Book of Household Management contains recipes for roast pigeon and pigeon pie, a popular, inexpensive food in Victorian industrial Britain.
List of monuments depicting pigeons
There are many public monuments around the world devoted to and depicting pigeons.
| Biology and health sciences | Columbiformes | null |
63522 | https://en.wikipedia.org/wiki/Crohn%27s%20disease | Crohn's disease | Crohn's disease is a chronic inflammatory bowel disease characterized by recurrent episodes of intestinal inflammation, primarily manifesting as diarrhea and abdominal pain. Unlike ulcerative colitis, inflammation can occur anywhere in the gastrointestinal tract, though it most frequently affects the ileum and colon, involving all layers of the intestinal wall. Symptoms may be non-specific and progress gradually, often delaying diagnosis. About one-third of patients have colonic disease, another third have ileocolic disease, and the remaining third have isolated ileal disease. Systemic symptoms such as chronic fatigue, weight loss, and low-grade fevers are common. Organs such as the skin and joints can also be affected. Complications can include bowel obstructions, fistulas, nutrition problems, and an increased risk of intestinal cancers.
Crohn's disease is influenced by genetic, environmental, and immunological factors. Smoking is a major modifiable risk factor, especially in Western countries, where it doubles the likelihood of developing the disease. Dietary shifts from high-fiber to processed foods may reduce microbiota diversity and increase risk, while high-fiber diets can offer some protection. Genetic predisposition plays a significant role, with first-degree relatives facing a five-fold increased risk, particularly due to mutations in genes like NOD2 that affect immune response. The condition results from a dysregulated immune response to gut bacteria and increased intestinal permeability, alongside changes in the gut microbiome.
Diagnosing Crohn's disease can be complex due to symptom overlap with other gastrointestinal disorders. It typically involves a combination of clinical history, physical examination, and various diagnostic tests. Key methods include ileocolonoscopy, which identifies the disease in about 90% of cases, and imaging techniques like CT and MRI enterography, which help assess the extent of the disease and its complications. Histological examination of biopsy samples is the most reliable method for confirming diagnosis.
Management of Crohn's disease is individualized, focusing on disease severity and location to achieve mucosal healing and improve long-term outcomes. Treatment may include corticosteroids for quick symptom relief, immunosuppressants for maintaining remission, and biologics like anti-TNF therapies, which are effective for both induction and maintenance. Surgery may be necessary for complications such as blockages. Despite ongoing treatment, Crohn's disease is a chronic condition with no cure, often leading to a higher risk of related health issues and reduced life expectancy.
The disease is most prevalent in North America and Western Europe, particularly among Ashkenazi Jews, with prevalence rates of 322 per 100,000 in Germany, 319 in Canada, and 300 in the United States. There is also a rising prevalence in newly industrialized countries, such as 18.6 per 100,000 in Hong Kong and 3.9 in Taiwan. The typical age of onset is between 20 and 30 years, with an increasing number of cases among children.
Signs and symptoms
Crohn's disease is characterized by recurring flares of intestinal inflammation, with diarrhea and abdominal pain as the primary symptoms. Symptoms may be non-specific and progress gradually, and many people have symptoms for years before diagnosis. Unlike ulcerative colitis, inflammation can occur anywhere in the gastrointestinal tract, most often in the ileum and colon, and can involve all layers of the intestine. Disease location tends to be stable, with a third of patients having colonic disease, a third having ileocolic disease, and a third having ileal disease. The disease may also involve perianal, upper gastrointestinal, and extraintestinal organs.
Gastrointestinal
Diarrhea affects 82% of people at the onset of Crohn's disease, with severity ranging from mild to severe enough to require replacement of water and electrolytes. In Crohn's disease, diarrhea is frequent and urgent rather than voluminous.
Abdominal pain affects at least 70% of people during the course of Crohn's disease. It can result directly from intestinal inflammation, or from complications such as strictures and fistulas. Pain most commonly occurs in the lower right abdomen.
Rectal bleeding is less common than in ulcerative colitis, and is more likely to occur with inflammation in the colon or rectum. Bleeding in the colon or rectum is bright red, whereas bleeding in higher segments causes dark or black stools.
Bloating, flatus, and other symptoms of irritable bowel syndrome occur in 41% of people in remission.
Perianal involvement occurs in 18–43% of cases, more frequently if the colon and rectum are inflamed, and can cause fistulas, skin tags, hemorrhoids, fissures, ulcers, and strictures.
Upper gastrointestinal involvement is rare, occurring in 0.5–16% of cases, and may cause symptoms such as pain while swallowing, difficulty swallowing, vomiting, and nausea.
Systemic
Crohn's disease often presents with systemic symptoms, including:
Chronic fatigue, which lasts for at least 6 months and cannot be cured by rest, occurs in 80% of people with Crohn's disease, including 30% of people who are in remission.
Fevers, typically low-grade, are often reported as initial symptoms of Crohn's disease. High-grade fevers are often a result of abscesses.
Weight loss often occurs due to diarrhea and reduced appetite.
Extraintestinal
Extraintestinal manifestations occur in 21–47% of cases, and include symptoms such as:
Mouth ulcers, such as canker sores.
Eye inflammation, such as uveitis, scleritis, and episcleritis.
Skin inflammation, such as erythema nodosum and pyoderma gangrenosum.
Blood conditions such as portal hypertension, thromboembolism, thrombosis, pulmonary embolism.
Joint inflammation such as arthritis, ankylosing spondylitis, sacroiliitis.
Respiratory conditions such as obstructive sleep apnea and chest infections.
Liver, bile duct, and gallbladder conditions such as primary sclerosing cholangitis and cirrhosis.
Mental disorders such as depression and anxiety.
Metabolic bone disease.
Complications
Bowel damage due to inflammation occurs in half of cases within 10 years of diagnosis, and can lead to stricturing or penetrating disease forms. This can cause complications such as:
Bowel obstructions may occur due to strictures, particularly in the small intestine, and may require surgical removal of the affected segment (bowel resection) or surgical dilation (stricturoplasty).
Fistulas, abnormal openings connecting the gut to other organs or the skin, result from penetrating disease and can cause diarrhea, urinary tract infections, and stool leakage to the vagina or skin. They are treated by bowel resection or fistulotomy.
Abscesses, infected pockets of pus, can also result from penetrating disease, causing abdominal pain, fever, and chills. They may be treated by surgical drainage.
Malnutrition occurs in 38.9% of people in remission and 82.8% of people with active disease due to malabsorption in the small intestine, reduced appetite, and drug interactions. This can cause complications such as:
Anemia occurs in 6–74% of cases as a result of iron deficiency and blood loss. It is treated by oral iron or intravenous iron depending on disease activity.
Vitamin D deficiency is prevalent and can cause osteoporosis, and is treated by oral supplementation.
Folic acid, vitamin B12, zinc, magnesium, and selenium deficiencies may also occur, and are treated through oral supplementation.
Impaired growth and nutritional deficiency occur in 65–85% of children with Crohn's disease.
Intestinal cancers may develop as a result of prolonged or severe inflammation. This includes:
Colorectal cancer has a prevalence of 7% at 30 years after diagnosis and accounts for 15% of deaths in people with Crohn's. Risk is higher if the disease occurs in most of the colon. Endoscopic surveillance is performed to detect and remove polyps, while surgery is required for dysplasia beyond the mucosal surface.
Small bowel cancer has a prevalence of 1.6%, a risk at least 12 times greater than in the general population. Unlike colorectal cancer, endoscopic surveillance is ineffective and not recommended for small bowel cancer.
Causes
Risk factors
Smoking is a major modifiable risk factor for Crohn's disease, particularly in Western countries, where it doubles the risk. This risk is higher in females and varies with age. Smoking is also linked to earlier disease onset, increased need for immunosuppression, more surgeries, and higher recurrence rates. Ethnic differences have been noted, with studies in Japan linking passive smoking to the disease. Proposed mechanisms for smoking's effects include impaired autophagy, direct toxicity to immune cells, and changes in the microbiome.
Diet may influence the development of Crohn's disease by affecting the gut microbiome. The shift from high-fiber, low-fat foods to processed foods reduces microbiota diversity, increasing the risk of Crohn's disease. Conversely, high-fiber diets may reduce risk by up to 40%, likely due to the production of anti-inflammatory short-chain fatty acids from fiber metabolism by gut bacteria. The Mediterranean diet is also linked to a lower risk of later-onset Crohn's disease. Since diet's effect on the microbiome is temporary, its role in gut dysbiosis is controversial.
Childhood antibiotic exposure is linked to a higher risk of Crohn's disease due to changes in the intestinal microbiome, which shapes the immune system in early life. Other medications, like oral contraceptives, aspirin, and NSAIDs, may also increase risk by up to two-fold. Conversely, breastfeeding and statin use may reduce risk, though breastfeeding's effects are inconsistent. Early life factors such as mode of delivery, pet exposure, and infections—related to the hygiene hypothesis—also significantly influence risk, likely due to influences on the microbiome.
Genetics
Genetics significantly influences the risk of Crohn's disease. First-degree relatives of affected individuals have a five-fold increased risk, while identical twins have a 38–50% risk if one twin is affected. Genome-wide association studies have identified around 200 loci linked to Crohn's, most found in non-coding regions that regulate gene expression and overlap with other immune-related conditions, such as ankylosing spondylitis and psoriasis. While genetics can predict disease location, it does not determine complications like stricturing. A substantial portion of inherited risk is attributed to a few key polymorphisms.
NOD2 mutations are the primary genetic risk factor for ileal Crohn's disease, impairing the function of immune cells, particularly Paneth cells. These mutations are found in 10–27% of individuals with Crohn's disease, predominantly in Caucasian populations. Heterozygotes (one mutated copy) have a three-fold risk, while homozygotes (two copies) have a 20–40 fold risk.
ATG16L1 mutations impair autophagy and immune defense, and are more common in Caucasians.
IL23R mutations increase inflammatory signaling of the interleukin-23 pathway, and are more common in Caucasians.
TNFSF15 mutations are the primary genetic risk factor in Asian populations.
IL10RA mutations impair the anti-inflammatory signaling of interleukin-10, causing early-onset Crohn's disease with high penetrance.
Mechanism
Crohn's disease is believed to be caused by a dysregulated immune response to gut bacteria, though the exact mechanism is unknown. This is evidenced by the disease's links to genes involved in bacteria defense and its occurrence in the ileum and colon, the most bacteria-dense segments of the intestine. In Crohn's disease, a permeable intestinal barrier and a deficient innate immune response enable bacteria to enter intestinal tissue, causing an excessive inflammatory response from T helper 1 (Th1) and T helper 17 (Th17) cells. An altered microbiome may also be causatory and serve as the link to environmental factors.
Intestinal barrier
The epithelial barrier is a single layer of epithelial cells covered in antimicrobial mucus that protects the intestine from gut bacteria. Epithelial cells are joined by tight junction proteins, which are reduced by Crohn's-linked polymorphisms. In particular, claudin-5 and claudin-8 are reduced, while pore-forming claudin-2 is increased, causing intestinal permeability. Epithelial cells under stress emit inflammatory signals such as the unfolded protein response to stimulate the immune system, and Crohn's-linked polymorphisms to the ATG16L1 gene lower the threshold at which this response is triggered.
In a functional state, the intestinal epithelium and IgA dimers work together to manage and keep the luminal microflora distinct from the mucosal immune system. Paneth cells exist in the epithelial barrier of the small intestine and secrete α-defensins to prevent bacteria from entering gut tissue. Genetic polymorphisms associated with Crohn's disease can impair this ability and lead to Crohn's disease in the ileum. NOD2 is a receptor produced by Paneth cells to sense bacteria, and mutations to NOD2 can inhibit the antimicrobial activity of Paneth cells. ATG16L1, IRGM, and LRRK2 are proteins involved in selective autophagy, the mechanism by which Paneth cells secrete α-defensins, and mutations to these genes also impair the antimicrobial activity of Paneth cells.
Intraepithelial lymphocytes (IELs) are immune cells that exist in the epithelial barrier, consisting mostly of activated T cells. They interact with gut bacteria directly and emit signals to regulate the intestinal immune system. IELs in Crohn's disease produce increased levels of inflammatory cytokines IL-17, IFNγ, and TNF. It is hypothesized that inflammatory signals from the immune system and alterations to the gut microbiome influence IELs to produce inflammatory signals, contributing to Crohn's disease.
Immune system
Normally, intestinal macrophages have reduced inflammatory behavior while retaining their ability to consume and destroy pathogens. In Crohn's disease, the number and activity of macrophages is reduced, enabling the entrance of pathogens into intestinal tissue. Macrophages degrade internal pathogens through autophagy, which is impaired by Crohn's-linked polymorphisms in genes such as NOD2 and ATG16L1. Additionally, people with Crohn's tend to have a separate abnormal population of macrophages that secrete proinflammatory cytokines such as TNF and IL-6.
Neutrophils are recruited from the bloodstream in response to inflammatory signals, and defend tissue by secreting antimicrobial substances and consuming pathogens. In Crohn's disease, neutrophil recruitment is delayed and autophagy is impaired, allowing bacteria to survive in intestinal tissue. Dysfunction in neutrophil secretion of reactive oxygen species, which are toxic to bacteria, is associated with very early onset Crohn's disease. Although neutrophils are important in bacterial defense, their subsequent accumulation in Crohn's disease damages the epithelial barrier and perpetuates inflammation.
Innate lymphoid cells (ILCs) consist of subtypes including ILC1s, ILC2s, and ILC3s. ILC3s are particularly important for regenerating the epithelial barrier through secretion of IL-17 by NCR- ILC3s and IL-22 by NCR+ ILC3s. During Crohn's disease, inflammatory signals from antigen-presenting cells, such as IL-23, cause excessive IL-17 and IL-22 secretion. Although these cytokines protect the intestinal barrier, excessive production damages the barrier through increased inflammation and neutrophil recruitment. Additionally, IL-12 from activated dendritic cells influence NCR+ ILC3s to transform into inflammatory IFNγ-producing ILC1s.
Naive T cells are activated primarily by dendritic cells, which then differentiate into anti-inflammatory T regulatory cells (Tregs) or inflammatory T helper cells to maintain balance. In Crohn's disease, macrophages and antigen-presenting cells secrete IL-12, IL-18, and IL-23 in response to pathogens, increasing Th1 and Th17 differentiation and promoting inflammation via IL-17, IFNγ, and TNF. IL-23 is particularly important, and IL-23 receptor polymorphisms that increase activity are linked with Crohn's disease. Tregs suppress inflammation via IL-10, and mutations to IL-10 and its receptor cause very early onset Crohn's disease.
Microbiome
People with Crohn's disease tend to have altered microbiomes, although no disease-specific microorganisms have been identified. An altered microbiome may link environmental factors with Crohn's, though causality is uncertain. Firmicutes tend to be reduced, particularly Faecalibacterium prausnitzii, which produces short-chain fatty acids that reduce inflammation. Bacteroidetes and proteobacteria tend to be increased, particularly adherent-invasive E. coli, which attaches to intestinal epithelial cells. Additionally, mucolytic and sulfate-reducing bacteria are elevated, contributing to damage to the intestinal barrier.
Alterations in gut viral and fungal communities may contribute to Crohn's disease. Caudovirales bacteriophage sequences found in children with Crohn's suggest a potential biomarker for early-onset disease. A meta-analysis showed lower viral diversity in Crohn's patients compared to healthy individuals, with increased Synechococcus phage S CBS1 and Retroviridae viruses. Additionally, a Japanese study found that the fungal microbiota in Crohn's patients differs significantly from that of healthy individuals, particularly with an abundance of Candida.
Diagnosis
Diagnosis of Crohn's disease may be challenging since its symptoms overlap with other gastrointestinal diseases. An accurate diagnosis requires a combined assessment of clinical history, physical examination, and diagnostic tests.
Endoscopy
Ileocolonoscopy is the primary procedure for diagnosing Crohn's disease in the ileum and colon, accurately identifying it in about 90% of cases. During this exam, doctors closely examine the intestinal lining and take small tissue samples for further testing. Signs of Crohn's disease include uneven inflammation and 'skip lesions', which are patches of inflammation separated by healthy tissue. The ulcers can be small (less than 5 mm) or larger (over 5 mm), often appearing cobblestone-like. Their depth helps determine disease severity. Unlike ulcerative colitis, Crohn's disease usually does not affect the rectum or cause continuous inflammation around the bowel.
In certain cases, such as disease in the upper small bowel, standard colonoscopy may be ineffective. Physicians may then opt for device-assisted enteroscopy or capsule endoscopy. While capsule endoscopy is effective in detecting abnormalities, it may not reliably diagnose Crohn's disease and carries a risk of retention, which is about 1.6% when Crohn's disease is suspected and increases to 13% if already diagnosed. To reduce this risk, physicians typically perform small-bowel imaging and use a patency capsule that disintegrates within 48 to 72 hours. Once the patency capsule has passed through the intestine, capsule endoscopy may be performed.
Device-assisted enteroscopy is not typically the first choice for diagnosing small-bowel Crohn's disease due to its invasiveness and higher costs. The procedure closely examines the small intestine using specialized tools, such as longer endoscopes or balloon-assisted devices, making it easier for doctors to visualize and treat issues. It often requires sedation and is generally reserved for patients needing a tissue sample or immediate treatment.
Cross-sectional Imaging
Cross-sectional imaging techniques, like bowel ultrasonography (BUS), CT enterography (CTE), and MRI enterography (MRE), are essential for understanding how extensive Crohn's disease is and whether there are any complications, like blockages or abnormal connections between organs. All three methods are quite accurate for diagnosing Crohn's disease and spotting these complications.
CTE involves radiation and requires the use of contrast agents (substances that help show details in images). Despite this, CTE is very effective, with over 80% accuracy in diagnosing the disease and its complications, including blockages and fistulas (abnormal connections).
MRE is particularly good for detecting intestinal narrowing, with 89% sensitivity and 94% specificity. It is the preferred option for examining fistulas and abscesses in the pelvic area.
BUS is a non-invasive method that doesn't involve radiation and is effective for assessing the intestinal wall and related issues, such as fistulas and abscesses. Despite its limitations, it can accurately identify signs of Crohn's disease when the bowel wall thickens to more than 3 mm, achieving high accuracy rates (88–100%) when also considering factors like fistulas and abscesses.
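As a rough illustration of how sensitivity and specificity figures such as those quoted for MRE above translate into practice, the sketch below converts them into positive and negative predictive values at an assumed pre-test probability; the 40% pre-test probability is an invented value for illustration, not a figure from the text.

```python
# Illustrative sketch: turning a test's sensitivity and specificity into
# predictive values at an assumed pre-test probability. The 89%/94% figures
# are the MRE values quoted above; the 40% pre-test probability is invented.

def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    """Return (positive predictive value, negative predictive value)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    true_neg = specificity * (1 - prevalence)
    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv

ppv, npv = predictive_values(sensitivity=0.89, specificity=0.94, prevalence=0.40)
print(f"PPV = {ppv:.0%}, NPV = {npv:.0%}")  # roughly 91% and 93% under these assumptions
```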
Histology
The most reliable way to confirm a diagnosis of Crohn's disease is through a histological examination of biopsy samples or tissue removed during surgery. This process helps distinguish Crohn's disease from ulcerative colitis and other types of colitis, particularly infections. While no features are unique to Crohn's disease, typical signs include patchy chronic inflammation, irregularities in the intestinal lining, granulomas (not related to tissue injury), and abnormal villi structure in the terminal ileum. A pathologist specializing in inflammatory bowel disease is important for accurate Crohn's disease diagnoses. Even if biopsy results are unclear, doctors can still suggest a Crohn's disease diagnosis based on clinical symptoms, endoscopic findings, and imaging results.
Disease activity indexes
The Crohn's Disease Activity Index (CDAI) is a scoring system to assess the symptoms associated with Crohn's disease. It assigns a score based on eight clinical factors, including overall well-being, frequency of loose stools, abdominal pain, presence of an abdominal mass, changes in weight, low hematocrit, use of opiates for diarrhea, and extraintestinal complications. The CDAI is primarily used in clinical trials to evaluate the effectiveness of treatments and to determine whether the disease is in remission. This is particularly significant, as approximately 50% of patients who report feeling well may still exhibit signs of active disease in the intestine, while some patients with symptoms may present with normal intestinal findings.
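A minimal sketch of how such a weighted-sum index can be computed is shown below; the item names loosely follow the factors listed above, but the weights and the example values are illustrative placeholders, not the published CDAI multipliers.

```python
# Sketch of a CDAI-style weighted sum. Item names loosely follow the eight
# clinical factors described above; the weights are illustrative placeholders.

ILLUSTRATIVE_WEIGHTS = {
    "loose_stools": 2,         # count over one week
    "abdominal_pain": 5,       # daily 0-3 ratings summed over one week
    "general_wellbeing": 7,    # daily 0-4 ratings summed over one week
    "abdominal_mass": 10,
    "weight_change": 1,        # percentage deviation from standard weight
    "hematocrit_deficit": 6,
    "antidiarrheal_use": 30,   # 0 or 1
    "complications": 20,       # number of extraintestinal complications
}

def cdai_style_score(observations: dict[str, float]) -> float:
    """Weighted sum of the clinical items; higher scores mean more active disease."""
    return sum(ILLUSTRATIVE_WEIGHTS[item] * value
               for item, value in observations.items())

example = {
    "loose_stools": 21,        # three loose stools per day for a week
    "abdominal_pain": 10,
    "general_wellbeing": 8,
    "abdominal_mass": 0,
    "weight_change": 5,
    "hematocrit_deficit": 3,
    "antidiarrheal_use": 0,
    "complications": 1,
}

print(cdai_style_score(example))  # 191 for this hypothetical set of observations
```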
The Harvey–Bradshaw Index (HBI) provides a more streamlined approach by assessing only clinical factors, thus eliminating the need for laboratory tests. Neither the CDAI nor the HBI incorporates diagnostic procedures such as endoscopies or imaging studies; instead, they focus exclusively on symptom tracking. The HBI is generally considered easier to apply than the CDAI and may be more suitable for certain clinical trials and routine practice due to its simplicity in calculation and reduced reliance on patient recall of symptoms.
The Crohn's Disease Endoscopic Index of Severity (CDEIS) is a scoring system used during endoscopy to evaluate Crohn's disease severity. It assesses six factors: deep and shallow ulcers, nonulcerated and ulcerated stenosis, the area covered by ulcers, and the overall disease-affected area across five intestinal sections. Scores range from 0 to 44, with higher scores indicating more severe disease. While often seen as the standard for measuring severity, CDEIS can be complex to calculate and may underestimate severity if only one segment, particularly the ileum, is affected. There are also no clear score cutoffs for specific outcomes or treatment responses, limiting its effectiveness in determining remission.
The Simple Endoscopic Score for Crohn's Disease (SES-CD) offers a more straightforward approach than the CDEIS scoring system, using four key factors to evaluate Crohn's disease during an endoscopy. These factors include the presence and size of ulcers, the area affected by ulcers, the overall extent of the disease, and any narrowing of the intestine (stenosis). The first three factors are scored from 0 to 3 in each of the five sections of the intestine, giving a maximum score of 15 for each factor. Stenosis is scored separately, ranging from 0 to 11. This results in a total SES-CD score that can range from 0 to 56, with higher scores indicating more severe disease.
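A minimal sketch of the arithmetic described above follows; the segment names and example ratings are hypothetical, and the stenosis cap of 11 is applied exactly as stated in the text.

```python
# Sketch of an SES-CD-style total as described above: three factors scored
# 0-3 in each of five bowel segments (maximum 15 per factor) plus a stenosis
# component capped at 11, for a total of 0-56. Segment names and the example
# ratings are illustrative.

SEGMENTS = ["ileum", "right colon", "transverse colon", "left colon", "rectum"]
FACTORS = ["ulcer_size", "ulcerated_surface", "affected_surface"]

def ses_cd_total(per_segment: dict[str, dict[str, int]], stenosis: dict[str, int]) -> int:
    """Total score from per-segment 0-3 ratings plus a capped stenosis component."""
    total = 0
    for segment in SEGMENTS:
        for factor in FACTORS:
            score = per_segment.get(segment, {}).get(factor, 0)
            if not 0 <= score <= 3:
                raise ValueError(f"{factor} in {segment} must be between 0 and 3")
            total += score
    # Stenosis is summed separately and capped at 11, as described above.
    stenosis_total = min(sum(stenosis.values()), 11)
    return total + stenosis_total

# Hypothetical patient with ileal and right-colon involvement only.
ratings = {
    "ileum": {"ulcer_size": 2, "ulcerated_surface": 1, "affected_surface": 2},
    "right colon": {"ulcer_size": 1, "ulcerated_surface": 1, "affected_surface": 1},
}
stenosis = {"ileum": 2}

print(ses_cd_total(ratings, stenosis))  # 10 for this hypothetical patient
```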
Laboratory testing
While no lab test can definitively confirm or rule out Crohn's disease, results from serum and stool tests can help support the diagnosis:
The antimicrobial antibody ASCA is a well-known blood test marker used in the diagnosis of Crohn's disease. Approximately 60–70% of individuals with Crohn's disease test positive for ASCA, while only 10–15% of those with ulcerative colitis and less than 5% of patients with other types of colitis have positive results.
The autoantibody pANCA is found in 10–15% of Crohn's disease cases, in 60–70% of ulcerative colitis cases, and in less than 5% of patients with other types of colitis that aren't inflammatory bowel disease. Additionally, patients with Crohn's disease who test positive for pANCA often show symptoms similar to those of ulcerative colitis.
C-reactive protein (CRP) is a blood marker that indicates inflammation and can help monitor Crohn's disease activity. However, about one third of patients with active disease may have normal CRP levels, while one third with high levels of CRP have inactive disease. Moreover, CRP's ability to predict disease progression is not well established.
Fecal calprotectin is a stool test used to differentiate inflammatory bowel disease, like Crohn's disease, from irritable bowel syndrome. While there are no official cut-off values, it is commonly used to assess disease activity in Crohn's. Elevated levels may also indicate other intestinal infections or inflammatory conditions, not just Crohn's disease.
Differential diagnosis
Crohn's disease shares endoscopic, radiographic, and histological features with other inflammatory or infectious diseases. About 10% of people with Crohn's disease are initially diagnosed with indeterminate colitis.
Behçet’s disease can cause intestinal inflammation, primarily featuring single ulcers and symptoms outside the intestines, which differ from Crohn's disease. Recurrent sores in the mouth and genitals raise suspicion for Behçet's. A pathergy test, where the skin is lightly pricked to check for a red bump or ulcer, can help confirm the diagnosis. Eye inflammation (uveitis) and skin issues are also common in Behçet's disease.
Intestinal lymphoma lacks symptoms that differentiate it from other conditions; diagnosis can only be confirmed through histological examination of tissue samples.
Intestinal tuberculosis can cause symptoms such as fever, night sweats, and ulcers in the transverse colon, along with a swollen ileocecal valve. Key tissue changes include granulomas, which may be caseating, confluent, or large. A positive smear test for acid-fast bacillus and imaging that detects necrotic lymph nodes are also important indicators of the disease.
Ischemic colitis is also a possible alternative diagnosis to Crohn's disease. It often shows swelling and redness of the inner lining of the colon, while the rectum usually remains unaffected.
Classification
The Montreal classification system is a widely used framework for categorizing the phenotypes of Crohn's disease. It considers three primary factors: the age at diagnosis (divided into three groups: 16 years or younger, 17 to 40 years, and over 40 years), the location of the disease (which can be ileal, colonic, ileocolonic, or isolated upper), and the behavior of the disease (including non-stricturing/non-penetrating, stricturing, penetrating, and perianal types).
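The classification lends itself to a simple data structure; the sketch below encodes the three axes and a perianal modifier, using the conventional Montreal shorthand (A1–A3, L1–L4, B1–B3, "p") for the categories described above.

```python
# Minimal sketch of the Montreal classification described above. Labels follow
# the text; the A/L/B codes are the conventional Montreal shorthand, and the
# perianal modifier ("p") is represented as a separate flag.

from dataclasses import dataclass
from enum import Enum

class AgeAtDiagnosis(Enum):
    A1 = "16 years or younger"
    A2 = "17 to 40 years"
    A3 = "over 40 years"

class Location(Enum):
    L1 = "ileal"
    L2 = "colonic"
    L3 = "ileocolonic"
    L4 = "isolated upper gastrointestinal"

class Behaviour(Enum):
    B1 = "non-stricturing, non-penetrating"
    B2 = "stricturing"
    B3 = "penetrating"

@dataclass
class MontrealPhenotype:
    age: AgeAtDiagnosis
    location: Location
    behaviour: Behaviour
    perianal: bool = False  # the "p" modifier

    def code(self) -> str:
        suffix = "p" if self.perianal else ""
        return f"{self.age.name}{self.location.name}{self.behaviour.name}{suffix}"

# Example: diagnosed at 25 with ileocolonic, stricturing, perianal disease.
patient = MontrealPhenotype(AgeAtDiagnosis.A2, Location.L3, Behaviour.B2, perianal=True)
print(patient.code())  # "A2L3B2p"
```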
Management
The management of Crohn's disease is customized based on the severity, location, and behavior of the disease. Providers also assess the risk of aggressive disease to determine the need for more intensive treatment. Risk factors include diagnosis before age 30, extensive disease involvement, perianal complications, deep ulcers, and history of surgery. A key goal of treatment is to achieve mucosal healing, which restores the intestinal lining. Mucosal healing is linked to better outcomes, such as fewer flare-ups, reduced hospitalizations, steroid-free remission, and a longer interval without surgery.
Corticosteroids
Steroids are often used to quickly induce remission and relieve symptoms in Crohn's disease, but they are ineffective for maintaining remission. Options include intravenous steroids, prednisone, and budesonide, with budesonide preferred for its safety, though it's limited to mild to moderate cases in the ileum and right colon. Patients on systemic steroids should switch to other medications for long-term remission, as prolonged use can cause adrenal issues, weight gain, cataracts, hypertension, and diabetes. Additionally, systemic steroids may increase the risk of serious infections and mortality in moderate to severe Crohn's disease.
Conventional immunosuppressants
Thiopurines, like azathioprine and 6-mercaptopurine, maintain remission in Crohn's disease but are not effective for inducing it. Since thiopurines take 6 to 12 weeks to work, steroids are often used to manage symptoms during this time. Before starting thiopurines, the activity of the metabolizing enzyme thiopurine methyltransferase (TPMT) is typically assessed, and patients under 25 are tested for Epstein-Barr virus. Around 15% to 20% of patients stop thiopurines due to side effects, including low blood cell counts, liver problems, nausea, vomiting, allergic reactions, and acute pancreatitis. Thiopurines also raise the risk of certain cancers and serious conditions, necessitating regular lab monitoring.
Methotrexate is used to induce and maintain remission in Crohn's disease, being slightly more effective than thiopurines and taking 8 to 16 weeks to work. About 17% of patients stop taking it due to side effects like nausea, vomiting, headaches, and fatigue. It can affect liver health and, rarely, lower blood cell counts, requiring regular blood tests. Methotrexate may also cause anemia and mouth sores, so daily folic acid is recommended. Additionally, it may increase the risk of certain skin cancers and lymphoma. Methotrexate is discontinued during pregnancy due to the risks of miscarriage and birth defects.
Biologics
Anti-TNF therapy is the most effective treatment for inducing and maintaining remission, with FDA-approved agents including infliximab, adalimumab, and certolizumab pegol. It blocks the inflammatory protein TNF and induces cell death in activated T cells. Responses may occur within a week, but full effects can take up to six weeks. Loss of response can happen due to the development of antidrug antibodies, necessitating a switch in agents or drug classes. Anti-TNF agents are often combined with thiopurines or methotrexate to minimize antibody development. Side effects include injection-site reactions, a higher risk of infection, a slight increase in melanoma risk, and rare cases of cytopenias and liver toxicity.
Vedolizumab is the first treatment designed specifically for the gut in moderate to severe Crohn's disease. It blocks the molecule α4β7 that helps white blood cells enter the gut, reducing inflammation. Unlike natalizumab, it does not carry a risk of the serious brain infection PML. While vedolizumab can induce remission, it works slowly, taking about 12 weeks to show effects, and its overall effectiveness is limited. However, patients who respond well can maintain remission for up to a year. Since it specifically targets the gut, it does not significantly increase the risk of serious side effects or infections, except for mild nasal infections.
Ustekinumab, approved for moderate to severe Crohn's disease in October 2016, has been FDA-approved for psoriasis since 2009. It appears to be comparable to anti-TNF therapy in both the induction and maintenance of remission, functioning by blocking the inflammatory molecules IL-12 and IL-23. The onset of action is similar to that of anti-TNF treatments, with responses typically observed within six weeks. Notably, ustekinumab does not seem to increase the risk of serious infections, although the studies conducted in Crohn's disease have been relatively short-term.
Small molecules
The JAK inhibitor upadacitinib is approved for the treatment of moderate to severe Crohn's disease, with a large multi-centre randomized controlled trial demonstrating its effectiveness in the induction and maintenance of remission.
Surgery
Many individuals with Crohn's disease may require a bowel resection to remove part of the intestine due to blockages, lesions, infections, or ineffective medications. Since surgery is not a cure, the goal is to preserve as much of the small bowel as possible, and extensive resections can lead to short bowel syndrome. In cases with widespread strictures, only the most prominent stricture is typically resected, while minor strictures may be dilated through strictureplasty. After a resection, the healthy ends of the intestine are rejoined in a primary anastomosis.
Approximately six to twelve months after surgery, patients usually undergo a colonoscopy to check for inflammation, using the Rutgeerts scoring system to assess the likelihood of recurrence. About 50% may experience a return of symptoms within five years, and nearly 40% may need a second surgery within ten years, often due to inflammation near the anastomosis. While drug therapy aims to prevent recurrences, its effectiveness remains uncertain.
Diet
Enteral nutrition, which administers nutrients as a powder or liquid, is the primary method for inducing remission in children with Crohn's disease. It can induce remission in up to 80% of cases. Outside of Japan, it is not used to treat Crohn's disease in adults due to unpalatable formulations.
Parenteral nutrition, which administers nutrients directly into the bloodstream, may be used to supply nutrients to patients when there is extensive inflammation that impairs absorption. Parenteral nutrition has the same effectiveness as enteral nutrition in inducing remission.
The Mediterranean diet, rich in fruits, vegetables, unsaturated fats, and lean protein, is beneficial to general health but has not been found to reduce the rate of flares in Crohn's disease.
Cooking and processing fruits and vegetables to reduce fibrousness increases tolerance of those foods in people with intestinal strictures.
Other treatments
Mesalamine is not effective for inducing or maintaining remission in Crohn's disease, but is nevertheless commonly used as a treatment due to its perceived benefits and low risk of side effects.
Antibiotics are not effective in inducing or maintaining remission in Crohn's disease. However, they are effective in treating perianal fistulas and inflammation when used alongside anti-TNF therapy.
Fecal microbiota transplants have shown potential effectiveness in preliminary studies, but larger studies are needed to verify their efficacy.
Acupuncture is thought to influence the immune system in part by stimulating the vagus nerve. Early studies have shown efficacy in improving clinical symptoms, but larger studies are needed.
Cannabis is a potential therapy for Crohn's disease by targeting the endocannabinoid system. Although it has shown benefits in animal models, its efficacy as a treatment is uncertain.
Cognitive behavioral therapy has shown short-term positive psychological effects in people with Crohn's disease, but it has no long-term effect on physical or psychological health.
Outlook
Crohn's disease is a chronic condition requiring ongoing management, as there is currently no cure. Inflammation is typically controlled through medications such as steroids and immunosuppressants, and in severe cases, surgery may be necessary. The clinical course of the disease is classified into four patterns:
Remission: Severity decreases in response to treatment, leading to sustained remission.
Improved and Stable: Severity lessens, but mild inflammation persists.
Relapsing: The disease fluctuates between periods of remission and severe inflammation.
Refractory: Severe inflammation continues without respite.
Approximately 40% to 56% of individuals with Crohn's disease achieve clinical remission after one year of infliximab treatment, increasing to 56% to 58% when combined with an immunosuppressant. Furthermore, 16% to 39% attain both clinical and endoscopic remission, showing no signs of inflammation in the intestine. Once in remission, individuals have an 80% chance of maintaining this state for the following year. Conversely, 10% to 15% of individuals may experience ongoing active disease without remission.
Chronic inflammation from Crohn's disease increases the risk of heart problems, cancers, arthritis, osteoporosis (weakened bones), and mental health issues. Some medications can also raise the chances of infections and cancers. Because of these combined risks, people with Crohn's disease tend to have a shorter lifespan compared to those who are healthy. In Canada, studies show that women affected with Crohn's disease live about 7.7 years less than unaffected women, and affected men live about 7.7 years less than otherwise expected.
Epidemiology
Crohn's disease is most prevalent in North America and Western Europe, particularly among Ashkenazi Jews, and is possibly more common in women. The annual incidence in North America is 0–20.2 new cases per 100,000 people, while incidence in Europe is 0.3–12.7 per 100,000. The prevalence of Crohn's disease is 322 per 100,000 in Germany, 319 per 100,000 in Canada, and 300 per 100,000 in the United States. The prevalence of Crohn's disease has risen in newly industrialized countries, with rates of 18.6 per 100,000 in Hong Kong and 3.9 per 100,000 in Taiwan.
The typical age of onset is between 20 and 30 years, with a smaller peak around 50 years, leading to a median onset age of 30. About 20 to 25% of patients with inflammatory bowel disease present before 18 years of age, and roughly 80% of these pediatric cases are adolescents. Additionally, the incidence of Crohn's disease in children is on the rise, with 2.5–11.4 new cases per 100,000 and a prevalence of 58 per 100,000.
History
Giovanni Battista Morgagni, often referred to as the father of anatomic pathology, provided one of the earliest detailed accounts of the disease in his 1761 treatise, noting specific autopsy findings in a young patient who suffered from severe gastrointestinal symptoms.
The first notable series of cases of Crohn's disease was reported by Polish surgeon Antoni Leśniowski in 1903, followed by Scottish surgeon Thomas Kennedy Dalziel in 1913, who described nine patients exhibiting significant pathological features treated by surgical resection. However, the disease only gained widespread recognition with a landmark 1932 article by Burrill B. Crohn, Leon Ginzburg, and Gordon D. Oppenheimer. In this publication, they introduced the term "regional ileitis" based on their observations of chronic inflammation in the terminal ileum of 14 patients.
Over the following decades, Crohn's disease was recognized as affecting various parts of the gastrointestinal tract, with reports of involvement from the esophagus to the colon. This period also marked the identification of skip lesions—areas of healthy bowel between diseased sections—adding to the understanding of the disease's pathology. Public awareness of Crohn's disease increased significantly after President Eisenhower underwent surgery for the condition in 1956, which highlighted its impact on quality of life and encouraged discussions about the disease.
In 1960, ulcerative colitis and Crohn's colitis were officially classified as distinct diseases, despite lingering beliefs that Crohn's disease could not manifest in the colon. During this decade, advancements such as fiberoptic colonoscopy and the capability to perform biopsies significantly enhanced the diagnosis and management of Crohn's disease, facilitating improved visualization of the gastrointestinal tract and more accurate assessments of disease severity. Subsequent decades saw the testing of various medications for Crohn's disease in clinical trials, including the identification of methotrexate's efficacy in 1989.
In the 1990s, the focus of treatment for Crohn's disease began to shift towards biologic therapies, particularly anti-TNF agents. Concurrently, nutritional therapy gained prominence in managing pediatric cases and instances of malnutrition. The introduction of MRI enterography emerged as a safe and effective method for monitoring disease activity. This was further augmented by the FDA's approval of capsule endoscopy in 2001, which allowed for improved imaging of the small intestine. Since the inception of genome-wide association studies in 2005, several genetic markers associated with Crohn's disease have been identified, contributing to a deeper understanding of the condition.
Support organizations such as the Crohn's & Colitis Foundation have also emerged, providing resources and community for patients, helping to raise awareness and funding for research initiatives. Today, Crohn's disease continues to be a focus of extensive research, aiming to improve treatment outcomes and enhance the quality of life for those affected.
Etymology
Crohn's disease is named after Dr. Burrill Crohn, though its eponymous association arose from complex circumstances. Initially, researchers Ginzburg and Oppenheimer identified a pattern of the disease and compiled 12 cases, all linked to surgeon A. A. Berg. However, Berg declined authorship due to his lack of prior involvement. Ginzburg and Oppenheimer then connected with Crohn, who received the manuscript, which was later published with his name listed first and two additional cases included.
Originally, the disease was referred to as "regional ileitis," reflecting the findings of the time, but subsequent reports revealed its presence throughout the gastrointestinal tract, leading to the adoption of the eponym. In Poland, it was historically called "Leśniowski-Crohn's disease." There has been growing criticism of medical eponyms for their inaccuracies, prompting a movement towards using non-possessive forms, such as "Crohn disease," which has gained traction in recent years among academic and medical publications.
| Biology and health sciences | Specific diseases | Health |
63531 | https://en.wikipedia.org/wiki/Ulcerative%20colitis | Ulcerative colitis | Ulcerative colitis (UC) is one of the two types of inflammatory bowel disease (IBD), with the other type being Crohn's disease. It is a long-term condition that results in inflammation and ulcers of the colon and rectum. The primary symptoms of active disease are abdominal pain and diarrhea mixed with blood (hematochezia). Weight loss, fever, and anemia may also occur. Often, symptoms come on slowly and can range from mild to severe. Symptoms typically occur intermittently with periods of no symptoms between flares. Complications may include abnormal dilation of the colon (megacolon), inflammation of the eye, joints, or liver, and colon cancer.
The cause of UC is unknown. Theories involve immune system dysfunction, genetics, changes in the normal gut bacteria, and environmental factors. Rates tend to be higher in the developed world with some proposing this to be the result of less exposure to intestinal infections, or to a Western diet and lifestyle. The removal of the appendix at an early age may be protective. Diagnosis is typically by colonoscopy with tissue biopsies.
Several medications are used to treat symptoms and bring about and maintain remission, including aminosalicylates such as mesalazine or sulfasalazine, steroids, immunosuppressants such as azathioprine, and biologic therapy. Removal of the colon by surgery may be necessary if the disease is severe, does not respond to treatment, or if complications such as colon cancer develop. Removal of the colon and rectum generally cures the condition.
Signs and symptoms
Gastrointestinal
People with ulcerative colitis usually present with diarrhea mixed with blood, of gradual onset that persists for an extended period of time (weeks). It is estimated that 90% of people experience rectal bleeding (of varying severity), 90% experience watery or loose stools with increased stool frequency (diarrhea), and 75-90% of people experience bowel urgency. Additional symptoms may include fecal incontinence, mucous rectal discharge, and nocturnal defecations. With proctitis (inflammation of the rectum), people with UC may experience urgency or rectal tenesmus, which is the urgent desire to evacuate the bowels but with the passage of little stool. Tenesmus may be misinterpreted as constipation, due to the urge to defecate despite small volume of stool passage. Bloody diarrhea and abdominal pain may be more prominent features in severe disease. The severity of abdominal pain with UC varies from mild discomfort to very painful bowel movements and abdominal cramping. High frequency of bowel movements, weight loss, nausea, fatigue, and fever are also common during disease flares. Chronic bleeding from the GI tract, chronic inflammation, and iron deficiency often lead to anemia, which can affect quality of life.
The clinical presentation of ulcerative colitis depends on the extent of the disease process. Up to 15% of individuals may have severe disease upon initial onset of symptoms. A substantial proportion (up to 45%) of people with a history of UC without any ongoing symptoms (clinical remission) have objective evidence of ongoing inflammation. Ulcerative colitis is associated with a generalized inflammatory process that can affect many parts of the body. Sometimes, these associated extra-intestinal symptoms are the initial signs of the disease.
Extent of involvement
In contrast to Crohn's disease, which can affect areas of the gastrointestinal tract outside of the colon, ulcerative colitis is usually confined to the colon. Inflammation in ulcerative colitis is usually continuous, typically involving the rectum, with involvement extending proximally (to sigmoid colon, ascending colon, etc.). In contrast, inflammation with Crohn's disease is often patchy, with so-called "skip lesions" (intermittent regions of inflamed bowel).
The disease is classified by the extent of involvement, depending on how far the disease extends: proctitis (rectal inflammation), left sided colitis (inflammation extending to descending colon), and extensive colitis (inflammation proximal to the descending colon). Proctosigmoiditis describes inflammation of the rectum and sigmoid colon. Pancolitis describes involvement of the entire colon, extending from the rectum to the cecum. While usually associated with Crohn's disease, ileitis (inflammation of the ileum) also occurs in UC. About 17% of individuals with UC have ileitis. Ileitis more commonly occurs in the setting of pancolitis (occurring in 20% of cases of pancolitis), and tends to correlate with the activity of colitis. This so-called "backwash ileitis" can occur in 10–20% of people with pancolitis and is believed to be of little clinical significance.
Severity of disease
In addition to the extent of involvement, UC is also characterized by severity of disease. Severity of disease is defined by symptoms, objective markers of inflammation (endoscopic findings, blood tests), disease course, and the impact of the disease on day-to-day life. Most patients are categorized through endoscopy and fecal calprotectin levels. Indicators of low risk for future complications in mild and moderate UC include fewer than six stools daily and the absence of fever or weight loss. Other indicators include the absence of extraintestinal symptoms, low levels of the inflammatory markers C-reactive protein (CRP) and erythrocyte sedimentation rate (ESR), low fecal calprotectin, and a later age at diagnosis (over 40 years). Mild disease correlates with fewer than four stools daily; in addition, mild urgency and rectal bleeding may occur intermittently. Mild disease lacks systemic signs of toxicity (e.g. fever, chills, weight changes) and exhibits normal levels of the serum inflammatory markers ESR and CRP.
Moderate to severe disease correlates with more than six stools daily, frequent bloody stools, and urgency. Moderate abdominal pain, low-grade fever, and anemia may develop. ESR and CRP are usually elevated.
The Mayo Score, which incorporates a combination of clinical symptoms (stool frequency and amount of rectal bleeding) with endoscopic findings and a physician's assessment of severity, is often used clinically to classify UC as mild, moderate or severe.
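As a rough illustration of how such a composite score works, the sketch below sums four subscores, each graded 0 to 3 (stool frequency, rectal bleeding, endoscopic findings, physician's assessment), into a total out of 12 and maps it to a severity band. The 0-3 grading and the severity cut-offs used here are commonly cited conventions rather than figures taken from the text above, so treat them as illustrative assumptions.

```python
# Illustrative sketch only: combining four Mayo subscores (each 0-3) into a
# total score and a severity band. The cut-offs below are commonly cited
# values and are an assumption here, not thresholds stated in the text.
def mayo_score(stool_frequency: int, rectal_bleeding: int,
               endoscopy: int, physician_assessment: int) -> tuple[int, str]:
    subscores = (stool_frequency, rectal_bleeding, endoscopy, physician_assessment)
    if any(s < 0 or s > 3 for s in subscores):
        raise ValueError("each Mayo subscore ranges from 0 to 3")
    total = sum(subscores)
    if total <= 2:
        severity = "remission"
    elif total <= 5:
        severity = "mild"
    elif total <= 10:
        severity = "moderate"
    else:
        severity = "severe"
    return total, severity

# Example: subscores of 2 for each component give a total of 8 ("moderate")
print(mayo_score(2, 2, 2, 2))
```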
Acute severe ulcerative colitis (ASUC) is a fulminant form that presents acutely with severe symptoms, usually diarrhea, rectal bleeding, and abdominal pain, often accompanied by systemic symptoms such as fever. It is associated with a higher mortality rate than milder forms of UC, with 3-month and 12-month mortality rates of 0.84% and 1%, respectively. People with fulminant UC may have inflammation extending beyond just the mucosal layer, causing impaired colonic motility and leading to toxic megacolon. Toxic megacolon represents a medical emergency, one often treated surgically. If the serous membrane is involved, a colonic perforation may ensue, which has a 50% mortality rate in people with UC. Other complications include hemorrhage, venous thromboembolism, and secondary infections of the colon including C. difficile or cytomegalovirus colitis.
Ulcerative colitis may improve and enter remission.
Extraintestinal manifestations and complications
UC is characterized by immune dysregulation and systemic inflammation, which may result in symptoms and complications outside the colon. Commonly affected organs include: eyes, joints, skin, and liver. The frequency of such extraintestinal manifestations has been reported as between 6 and 47%.
UC may affect the mouth. About 8% of individuals with UC develop oral manifestations. The two most common oral manifestations are aphthous stomatitis and angular cheilitis. Aphthous stomatitis is characterized by ulcers in the mouth, which are benign, noncontagious and often recurrent. Angular cheilitis is characterized by redness at the corners of the mouth, which may include painful sores or breaks in the skin. Very rarely, benign pustules may occur in the mouth (pyostomatitis vegetans).
UC may affect the eyes, manifesting as scleritis, iritis, and conjunctivitis. Patients may be asymptomatic or experience redness, burning, or itching in the eyes. Inflammation may occur in the interior portion of the eye, leading to uveitis and iritis. Uveitis can cause blurred vision and eye pain, especially when exposed to light (photophobia). Untreated, uveitis can lead to permanent vision loss. Inflammation may also involve the white part of the eye (sclera) or the overlying connective tissue (episclera), causing conditions called scleritis and episcleritis. Ulcerative colitis is most commonly associated with uveitis and episcleritis.
UC may cause several joint manifestations, including a type of rheumatologic disease known as seronegative arthritis, which may affect a few large joints (oligoarthritis), the vertebrae (ankylosing spondylitis) or several small joints of the hands and feet (peripheral arthritis). Often the insertion sites where tendons attach to bone (entheses) become inflamed (enthesitis). Inflammation may affect the sacroiliac joint (sacroiliitis). It is estimated that around 50% of IBD patients suffer from migratory arthritis. Synovitis, or inflammation of the synovial fluid surrounding a joint, can occur for months and recur in later times but usually does not erode the joint. The symptoms of arthritis include joint pain, swelling, and effusion, and often lead to significant morbidity. Ankylosing spondylitis and sacroiliitis usually occur independent of bowel disease activity in UC.
Ulcerative colitis may affect the skin. The most common type of skin manifestation, erythema nodosum, presents in up to 3% of UC patients. It develops as raised, tender red nodules usually appearing on the outer areas of the arms or legs, especially in the anterior tibial area (shins). The nodules have diameters that measure approximately 1–5 cm. Erythema nodosum is due to inflammation of the underlying subcutaneous tissue (panniculitis), and biopsy will display focal panniculitis (although biopsy is often unnecessary for diagnosis). In contrast to joint-related manifestations, erythema nodosum often occurs alongside intestinal disease. Thus, treatment of UC can often lead to resolution of skin nodules.
Another skin condition associated with UC is pyoderma gangrenosum, which presents as deep skin ulcerations. Pyoderma gangrenosum is seen in about 1% of patients with UC and its formation is usually independent of bowel inflammation. Pyoderma gangrenosum is characterized by painful lesions or nodules that become ulcers which progressively grow. The ulcers are often filled with sterile pus-like material. In some cases, pyoderma gangrenosum may require injection with corticosteroids. Treatment may also involve inhibitors of tumor necrosis factor (TNF), a cytokine that promotes cell survival.
Other associations determined between the skin and ulcerative colitis include a skin condition known as hidradenitis suppurativa (HS). This condition represents a chronic process in which follicles become occluded, leading to recurring inflammation of nodules and abscesses and even fistulas (tunnels in the skin that drain fluid).
Ulcerative colitis may affect the circulatory and endocrine system. UC increases the risk of blood clots in both arteries and veins; painful swelling of the lower legs can be a sign of deep venous thrombosis, while difficulty breathing may be a result of pulmonary embolism (blood clots in the lungs). The risk of blood clots is about threefold higher in individuals with IBD. The risk of venous thromboembolism is high in ulcerative colitis due to hypercoagulability from inflammation, especially with active or extensive disease. Additional risk factors may include surgery, hospitalization, pregnancy, the use of corticosteroids and tofacitinib, a JAK inhibitor.
Osteoporosis may occur as a result of systemic inflammation or prolonged steroid use in the treatment of UC, which increases the risk of bone fractures. Clubbing, a deformity of the ends of the fingers, may occur. Amyloidosis may occur, especially with severe and poorly controlled disease, and usually presents with protein in the urine (proteinuria) and nephrotic syndrome.
Primary sclerosing cholangitis
Ulcerative colitis (UC) has a significant association with primary sclerosing cholangitis (PSC), a progressive inflammatory disorder of small and large bile ducts. Up to 70-90% of people with primary sclerosing cholangitis have ulcerative colitis. As many as 5% of people with UC may progress to develop primary sclerosing cholangitis. PSC is more common in men, and often begins between 30 and 40 years of age. It can present asymptomatically or exhibit symptoms of itchiness (pruritus) and fatigue. Other symptoms include systemic signs such as fever and night sweats, which are often associated with episodes of bacterial cholangitis complicating PSC. Upon physical exam, one may discern enlarged liver contours (hepatomegaly) or enlarged spleen (splenomegaly) as well as areas of excoriation. Yellow coloring of the skin, or jaundice, may also be present due to buildup of the bile byproduct bilirubin from the biliary tract.
In diagnosis, lab results often reveal a pattern indicative of biliary disease (cholestatic pattern). This is often displayed by markedly elevated alkaline phosphatase levels and milder or no elevation in liver enzyme levels. Results of endoscopic retrograde cholangiography (ERC) may show bile ducts with thicker walls, areas of dilation or narrowing. However, some patients with UC and PSC have inflammation that has significantly affected only ramified intrahepatic bile ducts of smaller diameter, also known as "small ducts", which are not visualized by ERC.
In some cases, primary sclerosing cholangitis occurs several years before the bowel symptoms of ulcerative colitis develop. PSC does not parallel the onset, extent, duration, or activity of the colonic inflammation in ulcerative colitis. In addition, colectomy does not have an impact on the course of primary sclerosing cholangitis in individuals with UC. PSC is associated with an increased risk of colorectal cancer and cholangiocarcinoma (bile duct cancer). PSC is a progressive condition, and may result in cirrhosis of the liver. No specific therapy has been proven to affect the long-term course of PSC.
Causes
Ulcerative colitis is an autoimmune disease characterized by T-cells infiltrating the colon. No direct causes for UC are known, but factors such as genetics, environment, and an overactive immune system play a role. UC is associated with comorbidities that produce symptoms in many areas of the body outside the digestive system.
Genetic factors
A genetic component to the cause of UC can be hypothesized based on aggregation of UC in families, variation of prevalence between different ethnicities, genetic markers and linkages. In addition, the identical twin concordance rate is 10%, whereas the dizygotic twin concordance rate is only 3%. Between 8 and 14% of people with ulcerative colitis have a family history of inflammatory bowel disease. In addition, people with a first degree relative with UC have a four-fold increase in their risk of developing the disease.
Twelve regions of the genome may be linked to UC, including, in the order of their discovery, chromosomes 16, 12, 6, 14, 5, 19, 1, and 3, but none of these loci has been consistently shown to be at fault, suggesting that the disorder is influenced by multiple genes. For example, chromosome band 1p36 is one such region thought to be linked to inflammatory bowel disease. Some of the putative regions encode transporter proteins such as OCTN1 and OCTN2. Other potential regions involve cell scaffolding proteins such as the MAGUK family. Human leukocyte antigen associations may even be at work. In fact, this linkage on chromosome 6 may be the most convincing and consistent of the genetic candidates.
Multiple autoimmune disorders are associated with ulcerative colitis, including celiac disease, psoriasis, lupus erythematosus, rheumatoid arthritis, episcleritis, and scleritis. Ulcerative colitis is also associated with acute intermittent porphyria.
Environmental factors
Many hypotheses have been raised for environmental factors contributing to the pathogenesis of ulcerative colitis, including diet, breastfeeding and medications. Breastfeeding may have a protective effect in the development of ulcerative colitis. One study of isotretinoin found a small increase in the rate of UC.
As the colon is exposed to many dietary substances which may encourage inflammation, dietary factors have been hypothesized to play a role in the pathogenesis of both ulcerative colitis and Crohn's disease. However, research does not show a link between diet and the development of ulcerative colitis. Few studies have investigated such an association; one study showed no association between refined sugar intake and the development of ulcerative colitis. High intake of unsaturated fat and vitamin B6 may enhance the risk of developing ulcerative colitis. Other identified dietary factors that may influence the development and/or relapse of the disease include meat protein and alcoholic beverages. Specifically, sulfur has been investigated as being involved in the cause of ulcerative colitis, but this is controversial. Sulfur-restricted diets have been investigated in people with UC and animal models of the disease. The theory of sulfur as an etiological factor is related to the gut microbiota and mucosal sulfide detoxification in addition to the diet.
As a result of a class-action lawsuit and community settlement with DuPont, three epidemiologists conducted studies on the population surrounding a chemical plant, a population that had been exposed to PFOA at levels greater than those in the general population. The studies concluded that there was an association between PFOA exposure and six health outcomes, one of which was ulcerative colitis.
Alternative theories
Levels of sulfate-reducing bacteria tend to be higher in persons with ulcerative colitis, which could indicate higher levels of hydrogen sulfide in the intestine. An alternative theory suggests that the symptoms of the disease may be caused by toxic effects of the hydrogen sulfide on the cells lining the intestine.
Infection by Mycobacterium avium, subspecies paratuberculosis, has been proposed as the ultimate cause of both ulcerative colitis and Crohn's disease.
Pathophysiology
An increased amount of colonic sulfate-reducing bacteria has been observed in some people with ulcerative colitis, resulting in higher concentrations of the toxic gas hydrogen sulfide. Human colonic mucosa is maintained by the colonic epithelial barrier and immune cells in the lamina propria (see intestinal mucosal barrier). The short-chain fatty acid n-butyrate gets oxidized through the beta oxidation pathway into carbon dioxide and ketone bodies. It has been shown that n-butyrate helps supply nutrients to this epithelial barrier. Studies have proposed that hydrogen sulfide plays a role in impairing this beta-oxidation pathway by interrupting the short chain acetyl-CoA dehydrogenase, an enzyme within the pathway. Furthermore, it has been suggested that the protective effect of smoking in ulcerative colitis is due to the hydrogen cyanide from cigarette smoke reacting with hydrogen sulfide to produce the non-toxic isothiocyanate, thereby inhibiting sulfides from interrupting the pathway. An unrelated study suggested that the sulfur contained in red meats and alcohol may lead to an increased risk of relapse for people in remission.
Other proposed mechanisms driving the pathophysiology of ulcerative colitis involve an abnormal immune response to the normal gut microbiota. This involves abnormal activity of antigen presenting cells (APCs) including dendritic cells and macrophages. Normally, dendritic cells and macrophages patrol the intestinal epithelium and phagocytose (engulf and destroy) pathogenic microorganisms and present parts of the microorganism as antigens to T-cells to stimulate differentiation and activation of the T-cells. However, in ulcerative colitis, aberrant activity of dendritic cells and macrophages results in them phagocytosing bacteria of the normal gut microbiome. After ingesting the microbiome bacteria, the APCs release the cytokine TNFα which stimulates inflammatory signaling and recruits inflammatory cells to the intestines, leading to the inflammation that is characteristic of ulcerative colitis. The TNF inhibitors, including infliximab, adalimumab and golimumab, are used to inhibit this step during the treatment of ulcerative colitis. After phagocytosing the microbe, the APCs then enter the mesenteric lymph nodes where they present antigens to naive T-cells while also releasing the pro-inflammatory cytokines IL-12 and IL-23 which lead to T cell differentiation into Th1 and Th17 T-cells. IL-12 and IL-23 signaling is blocked by the biologic ustekinumab, and IL-23 is blocked by guselkumab, mirikizumab and risankizumab, medications that are used in the treatment of ulcerative colitis. From the mesenteric lymph node, the T-cells then enter the intestinal lymphatic venule which provides transport to the intestinal epithelium where they mediate further inflammation characteristic of ulcerative colitis. The T-cells exit the lymphatic venule via the adhesion protein mucosal vascular addressin cell adhesion molecule 1 (MAdCAM-1); the ulcerative colitis biologic treatment vedolizumab inhibits T-cell migration out of the lymphatic venules by blocking binding to MAdCAM-1. The medications ozanimod and etrasimod inhibit the sphingosine-1-phosphate receptor to prevent T-cell migration into the efferent lymphatic venules. Once the mature Th1 and Th17 T-cells exit the efferent lymphatic venule, they travel to the intestinal mucosa and cause further inflammation. T-cell mediated inflammation is thought to be driven by the JAK-STAT intracellular T-cell signaling pathway, leading to the transcription, translation and release of inflammatory cytokines. This T-cell JAK-STAT signaling is inhibited by the medications tofacitinib, filgotinib and upadacitinib, which are used in the treatment of ulcerative colitis.
Diagnosis
The initial diagnostic workup for ulcerative colitis consists of a complete history and physical examination, assessment of signs and symptoms, laboratory tests and endoscopy. Severe UC can exhibit high erythrocyte sedimentation rate (ESR), decreased albumin (a protein produced by the liver), and various changes in electrolytes. As discussed previously, UC patients often also display elevated alkaline phosphatase. Inflammation in the intestine may also cause higher levels of fecal calprotectin or lactoferrin.
Specific testing may include the following:
A complete blood count is done to check for anemia; thrombocytosis, a high platelet count, is occasionally seen
Electrolyte studies and kidney function tests are done, as chronic diarrhea may be associated with hypokalemia, hypomagnesemia and kidney injury.
Liver function tests are performed to screen for bile duct involvement: primary sclerosing cholangitis.
Imaging such as x-ray or CT scan to evaluate for possible perforation or toxic megacolon
Stool culture and Clostridioides difficile stool assay to rule out infectious colitis
Inflammatory markers, such as erythrocyte sedimentation rate or C-reactive protein
Lower endoscopy to evaluate the rectum and distal large intestine (sigmoidoscopy) or entire colon and end of the small intestine (colonoscopy) for ulcers and inflammation
Although ulcerative colitis is a disease of unknown causation, inquiry should be made as to unusual factors believed to trigger the disease.
The Simple Clinical Colitis Activity Index was created in 1998 and is used to assess the severity of symptoms.
Endoscopic
The best test for diagnosis of ulcerative colitis remains endoscopy, which is examination of the internal surface of the bowel using a flexible camera. Initially, a flexible sigmoidoscopy may be completed to establish the diagnosis. The physician may elect to limit the extent of the initial exam if severe colitis is encountered to minimize the risk of perforation of the colon. However, a complete colonoscopy with entry into the terminal ileum should be performed to rule out Crohn's disease, and assess extent and severity of disease. Endoscopic findings in ulcerative colitis include: erythema (redness of the mucosa), friability of the mucosa, superficial ulceration, and loss of the vascular appearance of the colon. When present, ulcerations may be confluent. Pseudopolyps may be observed.
Ulcerative colitis is usually continuous from the rectum, with the rectum almost universally being involved. Perianal disease is rare. The degree of involvement endoscopically ranges from proctitis (rectal inflammation) to left sided colitis (extending to descending colon), to extensive colitis (extending proximal to descending colon).
Histologic
Biopsies of the mucosa are taken during endoscopy to confirm the diagnosis of UC and differentiate it from Crohn's disease, which is managed differently clinically. Histologic findings in ulcerative colitis include distortion of crypt architecture, crypt abscesses, and inflammatory cells in the mucosa (lymphocytes, plasma cells, and granulocytes). Unlike the transmural inflammation seen in Crohn's disease, the inflammation of ulcerative colitis is limited to the mucosa.
Laboratory tests
Blood and stool tests serve primarily to assess disease severity, level of inflammation and rule out causes of infectious colitis. All individuals with suspected ulcerative colitis should have stool testing to rule out infection.
A complete blood count may demonstrate anemia, leukocytosis, or thrombocytosis. Anemia may be caused by inflammation or bleeding. Chronic blood loss may lead to iron deficiency as a cause for anemia, particularly microcytic anemia (small red blood cells), which can be evaluated with a serum ferritin, iron, total iron-binding capacity and transferrin saturation. Anemia may be due to a complication of treatment from azathioprine, which can cause low blood counts, or sulfasalazine, which can result in folate deficiency. Thiopurine metabolites (from azathioprine) and a folate level can help.
UC may cause high levels of inflammation throughout the body, which may be quantified with serum inflammatory markers, such as CRP and ESR. However, elevated inflammatory markers are not specific for UC and elevations are commonly seen in other conditions, including infection. In addition, inflammatory markers are not uniformly elevated in people with ulcerative colitis. Twenty five percent of individuals with confirmed inflammation on endoscopic evaluation have a normal CRP level. Serum albumin may also be low related to inflammation, in addition to loss of protein in the GI tract associated with bleeding and colitis. Low serum levels of vitamin D are associated with UC, although the significance of this finding is unclear.
Specific antibody markers may be elevated in ulcerative colitis. Specifically, perinuclear antineutrophil cytoplasmic antibodies (pANCA) are found in 70 percent of cases of UC. Antibodies against Saccharomyces cerevisiae may be present, but are more often positive in Crohn's disease compared with ulcerative colitis. However, due to the poor accuracy of these serologic tests, they are not helpful in the diagnostic evaluation of possible inflammatory bowel disease.
Several stool tests may help quantify the extent of inflammation present in the colon and rectum. Fecal calprotectin is elevated in inflammatory conditions affecting the colon, and is useful in distinguishing irritable bowel syndrome (noninflammatory) from a flare in inflammatory bowel disease. Fecal calprotectin is 88% sensitive and 79% specific for the diagnosis of ulcerative colitis. If the fecal calprotectin is low, the likelihood of inflammatory bowel disease is less than 1 percent. Lactoferrin is an additional nonspecific marker of intestinal inflammation.
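To show how the quoted sensitivity and specificity translate into such a low post-test probability after a negative result, the sketch below applies Bayes' rule using likelihood ratios. The 5% pre-test probability is an assumed illustrative value (for example, a patient with IBS-like symptoms in primary care) and is not a figure from the text.

```python
# Illustrative sketch only: how a sensitivity of 88% and specificity of 79%
# for fecal calprotectin yield a low post-test probability of inflammatory
# bowel disease after a negative result. The 5% pre-test probability is an
# assumed value for illustration, not a figure from the article.
def post_test_probability_negative(pretest_prob: float,
                                   sensitivity: float,
                                   specificity: float) -> float:
    """Post-test probability of disease after a negative test (Bayes' rule)."""
    negative_lr = (1 - sensitivity) / specificity   # likelihood ratio of a negative result
    pretest_odds = pretest_prob / (1 - pretest_prob)
    post_odds = pretest_odds * negative_lr
    return post_odds / (1 + post_odds)

p = post_test_probability_negative(pretest_prob=0.05, sensitivity=0.88, specificity=0.79)
print(f"{p:.1%}")  # about 0.8%, consistent with "less than 1 percent"
```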
Imaging
Overall, imaging tests, such as x-ray or CT scan, may be helpful in assessing for complications of ulcerative colitis, such as perforation or toxic megacolon. Bowel ultrasound (US) is a cost-effective, well-tolerated, non-invasive and readily available tool for the management of patients with inflammatory bowel disease (IBD), including UC, in clinical practice. Some studies demonstrated that bowel ultrasound is an accurate tool for assessing disease activity in people with ulcerative colitis. Imaging is otherwise of limited use in diagnosing ulcerative colitis. Magnetic resonance imaging (MRI) is necessary to diagnose underlying PSC.
Abdominal x-ray is often the test of choice and may display nonspecific findings in cases of mild or moderate ulcerative colitis. In circumstances of severe UC, radiographic findings may include thickening of the mucosa, often termed "thumbprinting", which indicates swelling due to fluid displacement (edema). Other findings may include colonic dilation and stool buildup indicating constipation.
Similar to x-ray, double contrast barium enema often shows nonspecific findings in mild ulcerative colitis. Conversely, barium enema may display small buildups of barium in microulcerations. Severe UC can be characterized by various polyps, colonic shortening, loss of haustrae (the small bulging pouches in the colon), and narrowing of the colon. It is important to note that barium enema should not be conducted in patients exhibiting very severe symptoms, as this may slow or stop stool passage through the colon, causing ileus and toxic megacolon.
Other methods of imaging include computed tomography (CT) and magnetic resonance imaging (MRI). Both may depict colonic wall thickening but have decreased ability to find early signs of wall changes when compared to barium enema. In cases of severe ulcerative colitis, however, they often exhibit equivalent ability to detect colonic changes.
Doppler ultrasound is another imaging modality that may be used. Similar to the imaging methods mentioned earlier, it may show thickening of some bowel wall layers. In severe cases, it may show thickening of all bowel wall layers (transmural thickening).
Differential diagnosis
Several conditions may present in a similar manner to ulcerative colitis and should be excluded. Such conditions include: Crohn's disease, infectious colitis, nonsteroidal anti-inflammatory drug enteropathy, and irritable bowel syndrome. Alternative causes of colitis should be considered, such as ischemic colitis (inadequate blood flow to the colon), radiation colitis (if prior exposure to radiation therapy), or chemical colitis. Pseudomembranous colitis may occur due to Clostridioides difficile infection following administration of antibiotics. Entamoeba histolytica is a protozoan parasite that causes intestinal inflammation; a few such cases have been misdiagnosed as UC, with poor outcomes resulting from the use of corticosteroids.
The most common disease that mimics the symptoms of ulcerative colitis is Crohn's disease, as both are inflammatory bowel diseases that can affect the colon with similar symptoms. It is important to differentiate these diseases since their courses and treatments may differ. In some cases, however, it may not be possible to tell the difference, in which case the disease is classified as indeterminate colitis. Crohn's disease can be distinguished from ulcerative colitis in several ways. Characteristics that indicate Crohn's include evidence of disease around the anus (perianal disease). This includes anal fissures and abscesses as well as fistulas, which are abnormal connections between various bodily structures.
Infectious colitis is another condition that may present in similar manner to ulcerative colitis. Endoscopic findings are also oftentimes similar. One can discern whether a patient has infectious colitis by employing tissue cultures and stool studies. Biopsy of the colon is another beneficial test but is more invasive.
Other forms of colitis that may present similarly include radiation and diversion colitis. Radiation colitis occurs after irradiation and often affects the rectum or sigmoid colon, similar to ulcerative colitis. Upon histology radiation colitis may indicate eosinophilic infiltrates, abnormal epithelial cells, or fibrosis. Diversion colitis, on the other hand, occurs after portions of bowel loops have been removed. Histology in this condition often shows increased growth of lymphoid tissue.
In patients who have undergone transplantation, graft versus host disease may also be a differential diagnosis. This response to transplantation often causes prolonged diarrhea if the colon is affected. Typical symptoms also include rash. Involvement of the upper gastrointestinal tract may lead to difficulty swallowing or ulceration. Upon histology, graft versus host disease may present with crypt cell necrosis and breakdown products within the crypts themselves.
Management
Standard treatment for ulcerative colitis depends on the extent of involvement and disease severity. The goal is to induce remission initially with medications, followed by the administration of maintenance medications to prevent a relapse. The concept of induction of remission and maintenance of remission is very important. The medications used to induce and maintain a remission somewhat overlap, but the treatments are different. Physicians first direct treatment to inducing remission, which involves relief of symptoms and mucosal healing of the colon's lining, and then longer-term treatment to maintain remission and prevent complications.
For acute stages of the disease, a low fiber diet may be recommended.
Medication
The first-line maintenance medication for ulcerative colitis in remission is mesalazine (also known as mesalamine or 5-ASA). For patients with active disease limited to the left colon (descending colon) or proctitis, mesalazine is also the first-line agent, and a combination of suppositories and oral mesalazine may be tried. Adding corticosteroids such as prednisone is also common in active disease, especially if remission is not achieved through mesalazine monotherapy, but they are not used in long-term treatment as their risks then outweigh their benefits. Immunosuppressive medications such as azathioprine and biological agents such as infliximab, adalimumab, ustekinumab, vedolizumab, or risankizumab are given in severe disease or if a patient cannot achieve remission with mesalazine and corticosteroids. As an alternative to mesalazine, one of its prodrugs such as sulfasalazine may be chosen for treatment of active disease or maintenance therapy, but the prodrugs have greater potential for serious side effects and have not been demonstrated to be superior to mesalazine in large trials.
A formulation of budesonide was approved by the U.S. Food and Drug Administration (FDA) for treatment of active ulcerative colitis in January 2013. In 2018, tofacitinib was approved for treatment of moderately to severely active ulcerative colitis in the United States, the first oral medication indicated for long term use in this condition. The evidence on methotrexate does not show a benefit in producing remission in people with ulcerative colitis. Cyclosporine is effective for severe UC and tacrolimus has also shown benefits. Etrasimod was approved for medical use in the United States in October 2023.
Aminosalicylates
Sulfasalazine has been a major agent in the therapy of mild to moderate ulcerative colitis for over 50 years. In 1977, it was shown that 5-aminosalicylic acid (5-ASA, mesalazine/mesalamine) was the therapeutically active component in sulfasalazine. Many 5-ASA drugs have been developed with the aim of delivering the active compound to the large intestine to maintain therapeutic efficacy but with reduction of the side effects associated with the sulfapyridine moiety in sulfasalazine. Oral 5-ASA drugs are particularly effective in inducing and in maintaining remission in mild to moderate ulcerative colitis. Rectal suppository, foam or liquid enema formulations of 5-ASA are used for colitis affecting the rectum, sigmoid or descending colon, and have been shown to be effective especially when combined with oral treatment.
Biologics
Biologic treatments such as the TNF inhibitors infliximab, adalimumab, and golimumab are commonly used to treat people with UC who are no longer responding to corticosteroids. Tofacitinib and vedolizumab can also produce good clinical remission and response rates in UC. Biologics may be used early in treatment (step down approach), or after other treatments have failed to induce remission (step up approach); the strategy should be individualized.
Unlike aminosalicylates, biologics can cause serious side effects, such as an increased risk of developing extra-intestinal cancers, heart failure, and weakening of the immune system, resulting in a decreased ability of the immune system to clear infections and reactivation of latent infections such as tuberculosis. For this reason, people on these treatments are closely monitored and are often tested for hepatitis and tuberculosis annually.
Etrasimod, a once-daily oral sphingosine 1-phosphate (S1P) receptor modulator that selectively activates S1P receptor subtypes 1, 4, and 5 with no detectable activity on subtypes 2 or 3, was developed for the treatment of immune-mediated diseases, including ulcerative colitis, and was shown in two randomized trials to be effective and well tolerated as induction and maintenance therapy in patients with moderately to severely active ulcerative colitis.
Nicotine
Unlike Crohn's disease, ulcerative colitis is less likely to affect smokers than non-smokers. In select individuals with a history of previous tobacco use, resuming low dose smoking may improve signs and symptoms of active ulcerative colitis, but it is not recommended due to the overwhelmingly negative health effects of tobacco. Studies using a transdermal nicotine patch have shown clinical and histological improvement. In one double-blind, placebo-controlled study conducted in the United Kingdom, 48.6% of people with UC who used the nicotine patch, in conjunction with their standard treatment, showed complete resolution of symptoms. Another randomized, double-blind, placebo-controlled, single-center clinical trial conducted in the United States showed that 39% of people who used the patch showed significant improvement, versus 9% of those given a placebo. However, nicotine therapy is generally not recommended due to side effects and inconsistent results.
Iron supplementation
The gradual loss of blood from the gastrointestinal tract, as well as chronic inflammation, often leads to anemia, and professional guidelines suggest routinely monitoring for anemia with blood tests repeated every three months in active disease and annually in quiescent disease. Adequate disease control usually improves anemia of chronic disease, but iron deficiency anemia should be treated with iron supplements. The form in which treatment is administered depends both on the severity of the anemia and on the guidelines that are followed. Some advise that parenteral iron be used first because people respond to it more quickly, it is associated with fewer gastrointestinal side effects, and it is not associated with compliance issues. Others require oral iron to be used first, as people eventually respond and many will tolerate the side effects.
Anticholinergics
Anticholinergic drugs, more specifically muscarinic antagonists, are sometimes used to treat abdominal cramps in connection with ulcerative colitis through their calming effect on colonic peristalsis (reducing both amplitude and frequency) and intestinal tone. Some medical authorities suggest over-the-counter anticholinergic drugs as potential helpful treatments for abdominal cramping in mild ulcerative colitis. However, their use is contraindicated especially in moderate to severe disease states because of the potential for anticholinergic treatment to induce toxic megacolon in patients with colonic inflammation. Toxic megacolon is a state in which the colon is abnormally distended, and may in severe or untreated cases lead to colonic perforation, sepsis, and death.
Immunosuppressant therapies, infection risks and vaccinations
Many patients affected by ulcerative colitis need immunosuppressant therapies, which may be associated with a higher risk of contracting opportunistic infectious diseases.
Many of these potentially harmful diseases, such as hepatitis B, influenza, chickenpox, herpes zoster, pneumococcal pneumonia, or human papillomavirus, can be prevented by vaccines. Each drug used in the treatment of IBD should be classified according to the degree of immunosuppression induced in the patient. Several guidelines suggest investigating patients' vaccination status before starting any treatment and performing vaccinations against vaccine-preventable diseases when required.
Compared to the rest of the population, patients affected by IBD are known to be at higher risk of contracting some vaccine-preventable diseases. Patients treated with Janus kinase inhibitors showed a higher risk of shingles. Nevertheless, despite the increased risk of infections, vaccination rates in IBD patients are known to be suboptimal and may also be lower than vaccination rates in the general population.
Surgery
Unlike in Crohn's disease, the gastrointestinal aspects of ulcerative colitis can generally be cured by surgical removal of the large intestine, though extraintestinal symptoms may persist. This procedure is necessary in the event of: exsanguinating hemorrhage, frank perforation, or documented or strongly suspected carcinoma. Surgery is also indicated for people with severe colitis or toxic megacolon. People with symptoms that are disabling and do not respond to drugs may wish to consider whether surgery would improve the quality of life.
The removal of the entire large intestine, known as a proctocolectomy, results in a permanent ileostomy – where a stoma is created by pulling the terminal ileum through the abdomen. Intestinal contents are emptied into a removable ostomy bag which is secured around the stoma using adhesive.
Another surgical option for ulcerative colitis that is affecting most of the large bowel is called the ileal pouch-anal anastomosis (IPAA). This is a two- or three-step procedure. In a three-step procedure, the first surgery is a sub-total colectomy, in which the large bowel is removed, but the rectum remains in situ, and a temporary ileostomy is made. The second step is a proctectomy and formation of the ileal pouch (commonly known as a "j-pouch"). This involves removing the large majority of the remaining rectal stump and creating a new "rectum" by fashioning the end of the small intestine into a pouch and attaching it to the anus. After this procedure, a new type of ileostomy is created (known as a loop ileostomy) to allow the anastomoses to heal. The final surgery is a take-down procedure where the ileostomy is reversed and there is no longer the need for an ostomy bag. When done in two steps, a proctocolectomy – removing both the colon and rectum – is performed alongside the pouch formation and loop ileostomy. The final step is the same take-down surgery as in the three-step procedure. Time taken between each step can vary, but typically a six- to twelve-month interval is recommended between the first two steps, and a minimum of two to three months is required between the formation of the pouch and the ileostomy take-down.
While the ileal pouch procedure removes the need for an ostomy bag, it does not restore normal bowel function. In the months following the final operation, patients typically experience 8–15 bowel movements a day. Over time this number decreases, with many patients reporting 4–6 bowel movements one year post-op. While many patients have success with this procedure, there are a number of known complications. Pouchitis, inflammation of the ileal pouch resulting in symptoms similar to ulcerative colitis, is relatively common. Pouchitis can be acute, remitting, or chronic; however, treatment using antibiotics, steroids, or biologics can be highly effective. Other complications include fistulas, abscesses, and pouch failure. Depending on the severity of the condition, pouch revision surgery may need to be performed. In some cases the pouch may need to be de-functioned or removed and an ileostomy recreated.
The risk of cancer arising from an ileal pouch anal anastomosis is low. However, annual surveillance with pouchoscopy may be considered in individuals with risk factors for dysplasia, such as a history of dysplasia or colorectal cancer, a history of PSC, refractory pouchitis, and severely inflamed atrophic pouch mucosa.
Bacterial recolonization
In a number of randomized clinical trials, probiotics have demonstrated the potential to be helpful in the treatment of ulcerative colitis. Specific types of probiotics such as Escherichia coli Nissle have been shown to induce remission in some people for up to a year.
A Cochrane review of controlled trials using various probiotics found low-certainty evidence that probiotic supplements may increase the probability of clinical remission. People receiving probiotics were 73% more likely to experience disease remission and more than twice as likely to report improvement in symptoms compared to those receiving a placebo, with no clear difference in minor or serious adverse effects. Although there was no clear evidence of greater remission when probiotic supplements were compared with 5‐aminosalicylic acid treatment as a monotherapy, the likelihood of remission was 22% higher if probiotics were used in combination with 5-aminosalicylic acid therapy.
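To make figures such as "73% more likely" concrete, the sketch below computes a relative risk from raw trial counts. The counts used are hypothetical placeholders for illustration only and are not taken from the Cochrane review.

```python
# Hypothetical worked example of a relative risk ("73% more likely to experience remission").
# The trial counts below are placeholders, not figures from the Cochrane review.

def relative_risk(events_treated: int, n_treated: int,
                  events_control: int, n_control: int) -> float:
    """Risk of the outcome in the treated group divided by the risk in the control group."""
    return (events_treated / n_treated) / (events_control / n_control)

# 45 of 100 people in remission with probiotics vs 26 of 100 with placebo (made-up numbers)
rr = relative_risk(45, 100, 26, 100)
print(f"relative risk of remission: {rr:.2f}")  # prints ~1.73, i.e. 73% more likely
```

A relative risk of about 1.73 is what a statement like "73% more likely" summarizes; the confidence in such an estimate depends on the size of the underlying trials.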
It is unclear whether probiotics help to prevent future relapse in people with stable disease activity, either as a monotherapy or combination therapy.
Fecal microbiota transplant involves the infusion of human probiotics through fecal enemas. Ulcerative colitis typically requires a more prolonged bacteriotherapy treatment than Clostridioides difficile infection does to be successful, possibly due to the time needed to heal the ulcerated epithelium. The response of ulcerative colitis is potentially very favorable with one study reporting 67.7% of people experiencing complete remission. Other studies found a benefit from using fecal microbiota transplantation.
Alternative medicine
A variety of alternative medicine therapies have been used for ulcerative colitis, with inconsistent results. Curcumin (turmeric) therapy, in conjunction with taking the medications mesalamine or sulfasalazine, may be effective and safe for maintaining remission in people with quiescent ulcerative colitis. The effect of curcumin therapy alone on quiescent ulcerative colitis is unknown.
The benefits of treatments using cannabis or cannabis oil are uncertain; so far, studies have not established their effectiveness or safety.
Abdominal pain management
Many interventions have been considered to manage abdominal pain in people with ulcerative colitis, including FODMAPs diet, relaxation training, yoga, kefir diet and stellate ganglion block treatment. It is unclear whether any of these are safe or effective at improving pain or reducing anxiety and depression.
Nutrition
Diet can play a role in symptoms of patients with ulcerative colitis.
The foods most commonly avoided by patients are spicy foods, dairy products, alcohol, fruits and vegetables, and carbonated beverages; these foods are mainly avoided during remission and to prevent relapse. In some cases, especially during flares, the dietary restrictions of these patients can be very severe and can lead to a compromised nutritional state. Some patients tend to eliminate gluten spontaneously, despite not having a definite diagnosis of coeliac disease, because they believe that gluten can exacerbate gastrointestinal symptoms.
Mental health
Many studies have found that patients with IBD report a higher frequency of depressive and anxiety disorders than the general population. Most studies also confirm that women with IBD are more likely than men to develop affective disorders, with up to 65% of them potentially having depressive and anxiety disorders.
A meta-analysis of interventions to improve mood (including talking therapy, antidepressants, and exercise) in people with inflammatory bowel disease found that they reduced inflammatory markers such as C-reactive protein and faecal calprotectin. Psychological therapies reduced inflammation more than antidepressants or exercise.
Prognosis
Poor prognostic factors include: age < 40 years upon diagnosis, extensive colitis, severe colitis on endoscopy, prior hospitalization, elevated CRP and low serum albumin.
Progression or remission
People with ulcerative colitis usually have an intermittent course, with periods of disease inactivity alternating with "flares" of disease. People with proctitis or left-sided colitis usually have a more benign course: only 15% progress proximally with their disease, and up to 20% can have sustained remission in the absence of any therapy. A subset of people experience a disease course that progresses rapidly. In these cases, there is usually a failure to respond to medication, and surgery is often performed within the first few years of disease onset. People with more extensive disease are less likely to sustain remission, but the rate of remission is independent of the severity of the disease. Several risk factors are associated with eventual need for colectomy, including: prior hospitalization for UC, extensive colitis, need for systemic steroids, young age at diagnosis, low serum albumin, elevated inflammatory markers (CRP & ESR), and severe inflammation seen during colonoscopy. Surgical removal of the large intestine is necessary in some cases.
Colorectal cancer
The risk of colorectal cancer is significantly increased in people with ulcerative colitis after ten years if involvement is beyond the splenic flexure. People with backwash ileitis might have an increased risk for colorectal carcinoma. Those people with only proctitis usually have no increased risk. It is recommended that people have screening colonoscopies with random biopsies to look for dysplasia after eight years of disease activity, at one to two year intervals.
Mortality
People with ulcerative colitis are at similar or perhaps slightly increased overall risk of death compared with the background population. However, the distribution of causes of death differs from that of the general population. Specific risk factors may predict worse outcomes and a higher risk of mortality in people with ulcerative colitis, including C. difficile infection and cytomegalovirus infection (due to reactivation).
Epidemiology
Together with Crohn's disease, about 11.2 million people were affected. Each year it newly occurs in 1 to 20 per 100,000 people, and 5 to 500 per 100,000 individuals are affected. The disease is more common in North America and Europe than other regions. Often it begins in people aged 15 to 30 years, or among those over 60. Males and females appear to be affected in equal proportions. It has also become more common since the 1950s. Together, ulcerative colitis and Crohn's disease affect about a million people in the United States. With appropriate treatment the risk of death appears the same as that of the general population. The first description of ulcerative colitis occurred around the 1850s.
Each year, ulcerative colitis newly occurs in 1 to 20 per 100,000 people (incidence), and there are a total of 5–500 per 100,000 individuals with the disease (prevalence). In 2015, a worldwide total of 47,400 people died due to inflammatory bowel disease (UC and Crohn's disease). The peak onset is between 30 and 40 years of age, with a second peak of onset occurring in the 6th decade of life. Ulcerative colitis is equally common among men and women. With appropriate treatment the risk of death appears similar to that of the general population. UC has become more common since the 1950s.
The geographic distribution of UC and Crohn's disease is similar worldwide, with the highest number of new cases a year of UC found in Canada, New Zealand and the United Kingdom. The disease is more common in North America and Europe than other regions. In general, higher rates are seen in northern locations compared to southern locations in Europe and the United States. UC is more common in western Europe compared with eastern Europe. Worldwide, the prevalence of UC varies from 2 to 299 per 100,000 people. Together, ulcerative colitis and Crohn's disease affect about a million people in the United States.
As with Crohn's disease, the rates of UC are greater among Ashkenazi Jews and decrease progressively in other persons of Jewish descent, non-Jewish Caucasians, Africans, Hispanics, and Asians. Appendectomy prior to age 20 for appendicitis and current tobacco use are protective against development of UC. However, former tobacco use is associated with a higher risk of developing the disease.
United States
The number of new cases of UC in the United States was between 2.2 and 14.3 per 100,000 per year. The number of people affected in the United States in 2004 was between 37 and 246 per 100,000.
Canada
In Canada, between 1998 and 2000, the number of new cases per year was 12.9 per 100,000 population or 4,500 new cases. The number of people affected was estimated to be 211 per 100,000 or 104,000.
United Kingdom
In the United Kingdom 10 per 100,000 people newly develop the condition a year while the number of people affected is 243 per 100,000. Approximately 146,000 people in the United Kingdom have been diagnosed with UC.
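As a quick consistency check on these figures (assuming a UK population of roughly 60 million, a value not given in the text above), the quoted prevalence and the quoted number of diagnosed people agree to within rounding:

\[
\frac{243}{100{,}000} \times 6.0 \times 10^{7} \approx 1.46 \times 10^{5}\ \text{people},
\]

which matches the approximately 146,000 people reported as diagnosed with UC.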
History
The term ulcerative colitis was first used by Samuel Wilks in 1859. It entered general medical vocabulary in 1888, when William Hale-White published a report of various cases of "ulcerative colitis".
UC was the first subtype of IBD to be identified.
Research
Helminthic therapy using the whipworm Trichuris suis has been shown in a randomized controlled trial from Iowa to benefit people with ulcerative colitis. The therapy tests the hygiene hypothesis, which argues that the absence of helminths in the colons of people in the developed world may lead to inflammation. Both helminthic therapy and fecal microbiota transplant induce a characteristic Th2 white cell response in the diseased areas, which was unexpected given that ulcerative colitis was thought to involve Th2 overproduction.
Alicaforsen is a first generation antisense oligodeoxynucleotide designed to bind specifically to the human ICAM-1 messenger RNA through Watson-Crick base pair interactions in order to subdue expression of ICAM-1. ICAM-1 propagates an inflammatory response promoting the extravasation and activation of leukocytes (white blood cells) into inflamed tissue. Increased expression of ICAM-1 has been observed within the inflamed intestinal mucosa of ulcerative colitis patients, where ICAM-1 over production correlated with disease activity. This suggests that ICAM-1 is a potential therapeutic target in the treatment of ulcerative colitis.
Gram-positive bacteria present in the lumen may be associated with extending the time to relapse in ulcerative colitis.
A series of drugs in development looks to disrupt the inflammation process by selectively targeting an ion channel in the inflammation signaling cascade known as KCa3.1. In a preclinical study in rats and mice, inhibition of KCa3.1 disrupted the production of Th1 cytokines IL-2 and TNF-α and decreased colon inflammation as effectively as sulfasalazine.
Neutrophil extracellular traps and the resulting degradation of the extracellular matrix have been reported in the colon mucosa in ulcerative colitis patients in clinical remission, indicating the involvement of the innate immune system in the etiology.
Fexofenadine, an antihistamine drug used in the treatment of allergies, has shown promise in a combination therapy in some studies. Usefully, low gastrointestinal absorption (or high gastrointestinal secretion of the absorbed drug) of fexofenadine results in a higher concentration at the site of inflammation. Thus, the drug may locally decrease histamine secretion by involved gastrointestinal mast cells and alleviate the inflammation.
There is evidence that etrolizumab is effective for ulcerative colitis, with phase 3 trials underway as of 2016. Etrolizumab is a humanized monoclonal antibody that targets the β7 subunit of integrins α4β7 and αEβ7, ultimately blocking migration and retention of leukocytes in the intestinal mucosa. As of early 2022, Roche halted clinical trials for the use of etrolizumab in the treatment of ulcerative colitis.
A type of leukocyte apheresis, known as granulocyte and monocyte adsorptive apheresis, still requires large-scale trials to determine whether or not it is effective. Results from small trials have been tentatively positive.
Notable cases
| Biology and health sciences | Specific diseases | Health |
63539 | https://en.wikipedia.org/wiki/Alanine | Alanine | Alanine (symbol Ala or A), or α-alanine, is an α-amino acid that is used in the biosynthesis of proteins. It contains an amine group and a carboxylic acid group, both attached to the central carbon atom which also carries a methyl group side chain. Consequently it is classified as a nonpolar, aliphatic α-amino acid. Under biological conditions, it exists in its zwitterionic form with its amine group protonated (as −NH3+) and its carboxyl group deprotonated (as −COO−). It is non-essential to humans as it can be synthesized metabolically and does not need to be present in the diet. It is encoded by all codons starting with GC (GCU, GCC, GCA, and GCG).
The L-isomer of alanine (left-handed) is the one that is incorporated into proteins. L-alanine is second only to L-leucine in rate of occurrence, accounting for 7.8% of the primary structure in a sample of 1,150 proteins. The right-handed form, D-alanine, occurs in peptides in some bacterial cell walls (in peptidoglycan) and in some peptide antibiotics, and occurs in the tissues of many crustaceans and molluscs as an osmolyte.
History and etymology
Alanine was first synthesized in 1850 when Adolph Strecker combined acetaldehyde and ammonia with hydrogen cyanide. The amino acid was named Alanin in German, in reference to aldehyde, with the interfix -an- for ease of pronunciation, the German ending -in used in chemical compounds being analogous to English -ine.
Structure
Alanine is an aliphatic amino acid, because the side-chain connected to the α-carbon atom is a methyl group (-CH3). Alanine is the simplest α-amino acid after glycine. The methyl side-chain of alanine is non-reactive and is therefore hardly ever directly involved in protein function. Alanine is a nonessential amino acid, meaning it can be manufactured by the human body, and does not need to be obtained through the diet. Alanine is found in a wide variety of foods, but is particularly concentrated in meats.
Sources
Biosynthesis
Alanine can be synthesized from pyruvate and branched chain amino acids such as valine, leucine, and isoleucine.
Alanine is produced by reductive amination of pyruvate, a two-step process. In the first step, α-ketoglutarate, ammonia and NADH are converted by glutamate dehydrogenase to glutamate, NAD+ and water. In the second step, the amino group of the newly formed glutamate is transferred to pyruvate by an aminotransferase enzyme, regenerating the α-ketoglutarate, and converting the pyruvate to alanine. The net result is that pyruvate and ammonia are converted to alanine, consuming one reducing equivalent. Because transamination reactions are readily reversible and pyruvate is present in all cells, alanine can be easily formed and thus has close links to metabolic pathways such as glycolysis, gluconeogenesis, and the citric acid cycle.
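Summarizing the two steps just described as an overall stoichiometry (a simplified sketch that tracks only the single reducing equivalent and omits detailed proton bookkeeping):

\begin{align*}
\text{(1)}\quad & \alpha\text{-ketoglutarate} + \mathrm{NH_3} + \mathrm{NADH} + \mathrm{H^+} \rightarrow \text{glutamate} + \mathrm{NAD^+} + \mathrm{H_2O} \\
\text{(2)}\quad & \text{glutamate} + \text{pyruvate} \rightarrow \alpha\text{-ketoglutarate} + \text{alanine} \\
\text{Net:}\quad & \text{pyruvate} + \mathrm{NH_3} + \mathrm{NADH} + \mathrm{H^+} \rightarrow \text{alanine} + \mathrm{NAD^+} + \mathrm{H_2O}
\end{align*}

The α-ketoglutarate cancels between the two steps, so it acts catalytically, consistent with the net result stated above.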
Chemical synthesis
L-Alanine is produced industrially by decarboxylation of L-aspartate by the action of aspartate 4-decarboxylase. Fermentation routes to L-alanine are complicated by alanine racemase.
Racemic alanine can be prepared by the condensation of acetaldehyde with ammonium chloride in the presence of sodium cyanide by the Strecker reaction, or by the ammonolysis of 2-bromopropanoic acid.
Degradation
Alanine is broken down by oxidative deamination, the inverse reaction of the reductive amination reaction described above, catalyzed by the same enzymes. The direction of the process is largely controlled by the relative concentration of the substrates and products of the reactions involved.
Alanine world hypothesis
Alanine is one of the twenty canonical α-amino acids used as building blocks (monomers) for the ribosome-mediated biosynthesis of proteins. Alanine is believed to be one of the earliest amino acids to be included in the genetic code standard repertoire.
On the basis of this fact the "alanine world" hypothesis was proposed. This hypothesis explains the evolutionary choice of amino acids in the repertoire of the genetic code from a chemical point of view. In this model the selection of monomers (i.e. amino acids) for ribosomal protein synthesis is rather limited to those alanine derivatives that are suitable for building α-helix or β-sheet secondary structural elements. Dominant secondary structures in life as we know it are α-helices and β-sheets and most canonical amino acids can be regarded as chemical derivatives of alanine. Therefore, most canonical amino acids in proteins can be exchanged with alanine by point mutations while the secondary structure remains intact. The fact that alanine mimics the secondary structure preferences of the majority of the encoded amino acids is practically exploited in alanine scanning mutagenesis. In addition, classical X-ray crystallography often employs the polyalanine-backbone model to determine three-dimensional structures of proteins using molecular replacement—a model-based phasing method.
Physiological function
Glucose–alanine cycle
In mammals, alanine plays a key role in glucose–alanine cycle between tissues and liver. In muscle and other tissues that degrade amino acids for fuel, amino groups are collected in the form of glutamate by transamination. Glutamate can then transfer its amino group to pyruvate, a product of muscle glycolysis, through the action of alanine aminotransferase, forming alanine and α-ketoglutarate. The alanine enters the bloodstream, and is transported to the liver. The alanine aminotransferase reaction takes place in reverse in the liver, where the regenerated pyruvate is used in gluconeogenesis, forming glucose which returns to the muscles through the circulation system. Glutamate in the liver enters mitochondria and is broken down by glutamate dehydrogenase into α-ketoglutarate and ammonium, which in turn participates in the urea cycle to form urea which is excreted through the kidneys.
The glucose–alanine cycle enables pyruvate and glutamate to be removed from muscle and safely transported to the liver. Once there, pyruvate is used to regenerate glucose, after which the glucose returns to muscle to be metabolized for energy: this moves the energetic burden of gluconeogenesis to the liver instead of the muscle, and all available ATP in the muscle can be devoted to muscle contraction. It is a catabolic pathway, and relies upon protein breakdown in the muscle tissue. Whether and to what extent it occurs in non-mammals is unclear.
Link to diabetes
Alterations in the alanine cycle that increase the levels of serum alanine aminotransferase (ALT) are linked to the development of type II diabetes.
Chemical properties
Alanine is useful in loss of function experiments with respect to phosphorylation. Some techniques involve creating a library of genes, each of which has a point mutation at a different position in the area of interest, sometimes even every position in the whole gene: this is called "scanning mutagenesis". The simplest method, and the first to have been used, is so-called alanine scanning, where every position in turn is mutated to alanine.
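As an illustration of the scanning idea at the sequence level, the sketch below generates an alanine-scanning set of single-point variants. The six-residue sequence is a made-up placeholder, not a real protein, and the code only enumerates sequences; it does not model any experimental readout.

```python
# Alanine scanning at the sequence level: mutate each position, one at a time,
# to alanine ("A"). The input sequence is a made-up placeholder, not a real protein.

def alanine_scan(sequence: str) -> list[tuple[int, str]]:
    """Return (1-based position, mutant sequence) for every non-alanine residue."""
    variants = []
    for i, residue in enumerate(sequence):
        if residue != "A":  # an alanine residue needs no mutation
            variants.append((i + 1, sequence[:i] + "A" + sequence[i + 1:]))
    return variants

for position, mutant in alanine_scan("MKTWQE"):
    print(position, mutant)  # e.g. 1 AKTWQE, 2 MATWQE, ...
```

Each variant differs from the starting sequence at exactly one position, which is what allows the contribution of each side chain to be probed individually.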
Hydrogenation of alanine gives the amino alcohol alaninol, which is a useful chiral building block.
Free radical
The deamination of an alanine molecule produces the free radical CH3C•HCO2−. Deamination can be induced in solid or aqueous alanine by radiation that causes homolytic cleavage of the carbon–nitrogen bond.
This property of alanine is used in dosimetric measurements in radiotherapy. When normal alanine is irradiated, the radiation causes certain alanine molecules to become free radicals, and, as these radicals are stable, the free radical content can later be measured by electron paramagnetic resonance in order to find out how much radiation the alanine was exposed to. This is considered to be a biologically relevant measure of the amount of radiation damage that living tissue would suffer under the same radiation exposure. Radiotherapy treatment plans can be delivered in test mode to alanine pellets, which can then be measured to check that the intended pattern of radiation dose is correctly delivered by the treatment system.
| Biology and health sciences | Amino acids | Biology |
63540 | https://en.wikipedia.org/wiki/Aspartic%20acid | Aspartic acid | Aspartic acid (symbol Asp or D; the ionic form is known as aspartate) is an α-amino acid that is used in the biosynthesis of proteins. The L-isomer of aspartic acid is one of the 22 proteinogenic amino acids, i.e., the building blocks of proteins.
D-aspartic acid is one of two D-amino acids commonly found in mammals. Apart from a few rare exceptions, D-aspartic acid is not used for protein synthesis but is incorporated into some peptides and plays a role as a neurotransmitter/neuromodulator.
Like all other amino acids, aspartic acid contains an amino group and a carboxylic acid. Its α-amino group is in the protonated –NH3+ form under physiological conditions, while its α-carboxylic acid group is deprotonated −COO− under physiological conditions. Aspartic acid has an acidic side chain (CH2COOH) which reacts with other amino acids, enzymes and proteins in the body. Under physiological conditions (pH 7.4) in proteins the side chain usually occurs as the negatively charged aspartate form, −COO−. It is a non-essential amino acid in humans, meaning the body can synthesize it as needed. It is encoded by the codons GAU and GAC.
In proteins aspartate sidechains are often hydrogen bonded to form asx turns or asx motifs, which frequently occur at the N-termini of alpha helices.
Aspartic acid, like glutamic acid, is classified as an acidic amino acid, with a pKa of 3.9; however, in a peptide this is highly dependent on the local environment, and could be as high as 14.
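As a worked example for the free side chain (using the pKa of 3.9 quoted above and ignoring the peptide-environment caveat), the Henderson–Hasselbalch relation gives the ratio of deprotonated to protonated forms at physiological pH 7.4:

\[
\frac{[\mathrm{-COO^-}]}{[\mathrm{-COOH}]} = 10^{\,\mathrm{pH}-\mathrm{p}K_a} = 10^{\,7.4-3.9} \approx 3.2\times 10^{3},
\]

so well over 99.9% of free aspartate side chains are in the negatively charged carboxylate form at that pH.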
The one-letter code D for aspartate was assigned arbitrarily, with the proposed mnemonic asparDic acid.
Discovery
Aspartic acid was first discovered in 1827 by Auguste-Arthur Plisson and Étienne-Ossian Henry by hydrolysis of asparagine, which had been isolated from asparagus juice in 1806. Their original method used lead hydroxide, but various other acids or bases are now more commonly used instead.
Forms and nomenclature
There are two forms or enantiomers of aspartic acid. The name "aspartic acid" can refer to either enantiomer or a mixture of the two. Of these two forms, only one, "L-aspartic acid", is directly incorporated into proteins. The biological roles of its counterpart, "D-aspartic acid", are more limited. Whereas enzymatic synthesis will produce one or the other, most chemical syntheses will produce both forms, "DL-aspartic acid", known as a racemic mixture.
Synthesis
Biosynthesis
In the human body, aspartate is most frequently synthesized through the transamination of oxaloacetate. The biosynthesis of aspartate is facilitated by an aminotransferase enzyme: the transfer of an amine group from another molecule such as alanine or glutamine yields aspartate and an alpha-keto acid.
Chemical synthesis
Industrially, aspartate is produced by amination of fumarate catalyzed by L-aspartate ammonia-lyase.
Racemic aspartic acid can be synthesized from diethyl sodium phthalimidomalonate, (C6H4(CO)2NC(CO2Et)2).
Metabolism
In plants and microorganisms, aspartate is the precursor to several amino acids, including four that are essential for humans: methionine, threonine, isoleucine, and lysine. The conversion of aspartate to these other amino acids begins with reduction of aspartate to its "semialdehyde", −O2CCH(NH2)CH2CHO. Asparagine is derived from aspartate via transamidation:
−O2CCH(NH2)CH2CO2− + GC(O)NH2 → −O2CCH(NH2)CH2C(O)NH2 + GC(O)OH
(where GC(O)NH2 and GC(O)OH are glutamine and glutamic acid, respectively)
Other biochemical roles
Aspartate has many other biochemical roles. It is a metabolite in the urea cycle and participates in gluconeogenesis. It carries reducing equivalents in the malate-aspartate shuttle, which utilizes the ready interconversion of aspartate and oxaloacetate, which is the oxidized (dehydrogenated) derivative of malic acid. Aspartate donates one nitrogen atom in the biosynthesis of inosine, the precursor to the purine bases. In addition, aspartic acid acts as a hydrogen acceptor in a chain of ATP synthase. Dietary L-aspartic acid has been shown to act as an inhibitor of Beta-glucuronidase, which serves to regulate enterohepatic circulation of bilirubin and bile acids.
Neurotransmitter
Aspartate (the conjugate base of aspartic acid) stimulates NMDA receptors, though not as strongly as the amino acid neurotransmitter L-glutamate does.
Applications & market
In 2014, the global market for aspartic acid was about $117 million annually. The three largest market segments include the U.S., Western Europe, and China. Current applications include biodegradable polymers (polyaspartic acid), low-calorie sweeteners (aspartame), scale and corrosion inhibitors, and resins.
Superabsorbent polymers
One area of aspartic acid market growth is biodegradable superabsorbent polymers (SAP), and hydrogels. Around 75% of superabsorbent polymers are used in disposable diapers and an additional 20% is used for adult incontinence and feminine hygiene products. Polyaspartic acid, the polymerization product of aspartic acid, is a biodegradable substitute to polyacrylate.
Additional uses
In addition to SAP, aspartic acid has applications in the fertilizer industry, where polyaspartate improves water retention and nitrogen uptake.
Sources
Dietary sources
Aspartic acid is not an essential amino acid, which means that it can be synthesized from central metabolic pathway intermediates in humans, and does not need to be present in the diet. In eukaryotic cells, roughly 1 in 20 amino acids incorporated into a protein is an aspartic acid, and accordingly almost any source of dietary protein will include aspartic acid. Additionally, aspartic acid is found in:
Dietary supplements, either as aspartic acid itself or salts (such as magnesium aspartate)
The sweetener aspartame, which is made from aspartic acid and phenylalanine
| Biology and health sciences | Amino acids | Biology |
63541 | https://en.wikipedia.org/wiki/Glutamic%20acid | Glutamic acid | Glutamic acid (symbol Glu or E; the anionic form is known as glutamate) is an α-amino acid that is used by almost all living beings in the biosynthesis of proteins. It is a non-essential nutrient for humans, meaning that the human body can synthesize enough for its use. It is also the most abundant excitatory neurotransmitter in the vertebrate nervous system. It serves as the precursor for the synthesis of the inhibitory gamma-aminobutyric acid (GABA) in GABAergic neurons.
Its molecular formula is C5H9NO4. Glutamic acid exists in two optically isomeric forms; the dextrorotatory L-form is usually obtained by hydrolysis of gluten or from the waste waters of beet-sugar manufacture or by fermentation. Its molecular structure could be idealized as HOOC−CH(NH2)−(CH2)2−COOH, with two carboxyl groups −COOH and one amino group −NH2. However, in the solid state and mildly acidic water solutions, the molecule assumes an electrically neutral zwitterion structure −OOC−CH(NH3+)−(CH2)2−COOH. It is encoded by the codons GAA or GAG.
The acid can lose one proton from its second carboxyl group to form the conjugate base, the singly-negative anion glutamate −OOC−CH(NH3+)−(CH2)2−COO−. This form of the compound is prevalent in neutral solutions. The glutamate neurotransmitter plays the principal role in neural activation. This anion creates the savory umami flavor of foods and is found in glutamate flavorings such as MSG. In Europe, it is classified as food additive E620. In highly alkaline solutions the doubly negative anion −OOC−CH(NH2)−(CH2)2−COO− prevails. The radical corresponding to glutamate is called glutamyl.
The one-letter symbol E for glutamate was assigned as the letter following D for aspartate, as glutamate is larger by one methylene –CH2– group.
Chemistry
Ionization
When glutamic acid is dissolved in water, the amino group (−NH2) may gain a proton (H+), and/or the carboxyl groups may lose protons, depending on the acidity of the medium.
In sufficiently acidic environments, both carboxyl groups are protonated and the molecule becomes a cation with a single positive charge, HOOC−CH(NH3+)−(CH2)2−COOH.
At pH values between about 2.5 and 4.1, the carboxylic acid closer to the amine generally loses a proton, and the acid becomes the neutral zwitterion −OOC−CH(NH3+)−(CH2)2−COOH. This is also the form of the compound in the crystalline solid state. The change in protonation state is gradual; the two forms are in equal concentrations at pH 2.10.
At even higher pH, the other carboxylic acid group loses its proton and the acid exists almost entirely as the glutamate anion −OOC−CH(NH3+)−(CH2)2−COO−, with a single negative charge overall. The change in protonation state occurs at pH 4.07. This form with both carboxylates lacking protons is dominant in the physiological pH range (7.35–7.45).
At even higher pH, the amino group loses the extra proton, and the prevalent species is the doubly-negative anion −OOC−CH(NH2)−(CH2)2−COO−. The change in protonation state occurs at pH 9.47.
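The sketch below turns the three transition midpoints quoted above (2.10, 4.07, and 9.47) into an average net charge as a function of pH. It treats each ionizable group independently with the Henderson–Hasselbalch relation, which is a simplification that ignores coupling between microstates, but it reproduces the four species described in this section.

```python
# Average net charge of glutamic acid versus pH, using the transition midpoints
# quoted above (2.10, 4.07, 9.47) and treating each ionizable group independently
# via the Henderson-Hasselbalch relation (a simplification).

PKA_ALPHA_CARBOXYL = 2.10
PKA_SIDE_CARBOXYL = 4.07
PKA_AMINO = 9.47

def fraction_deprotonated(pH: float, pKa: float) -> float:
    """Fraction of a group that has lost its proton at the given pH."""
    return 1.0 / (1.0 + 10 ** (pKa - pH))

def net_charge(pH: float) -> float:
    """Each deprotonated carboxyl contributes -1; the amino group contributes
    +1 for the fraction that is still protonated."""
    return (-fraction_deprotonated(pH, PKA_ALPHA_CARBOXYL)
            - fraction_deprotonated(pH, PKA_SIDE_CARBOXYL)
            + (1.0 - fraction_deprotonated(pH, PKA_AMINO)))

for pH in (1.0, 3.0, 7.4, 11.0):
    print(f"pH {pH:>4}: average charge {net_charge(pH):+.2f}")
# Prints roughly +0.93, +0.03, -1.01, -1.97: cation, zwitterion,
# singly-negative glutamate, and doubly-negative anion, as described above.
```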
Optical isomerism
Glutamic acid is chiral; two mirror-image enantiomers exist: D-(−) and L-(+). The L-form is more widely occurring in nature, but the D-form occurs in some special contexts, such as the bacterial capsule and the cell walls of bacteria (which produce it from the L-form with the enzyme glutamate racemase) and the liver of mammals.
History
Although they occur naturally in many foods, the flavor contributions made by glutamic acid and other amino acids were only scientifically identified early in the 20th century. The substance was discovered and identified in the year 1866 by the German chemist Karl Heinrich Ritthausen, who treated wheat gluten (for which it was named) with sulfuric acid. In 1908, Japanese researcher Kikunae Ikeda of the Tokyo Imperial University identified brown crystals left behind after the evaporation of a large amount of kombu broth as glutamic acid. These crystals, when tasted, reproduced the novel flavor he detected in many foods, most especially in seaweed. Professor Ikeda termed this flavor umami. He then patented a method of mass-producing a crystalline salt of glutamic acid, monosodium glutamate.
Synthesis
Biosynthesis
Industrial synthesis
Glutamic acid is produced on the largest scale of any amino acid, with an estimated annual production of about 1.5 million tons in 2006. Chemical synthesis was supplanted by the aerobic fermentation of sugars and ammonia in the 1950s, with the organism Corynebacterium glutamicum (also known as Brevibacterium flavum) being the most widely used for production. Isolation and purification can be achieved by concentration and crystallization; it is also widely available as its hydrochloride salt.
Function and uses
Metabolism
Glutamate is a key compound in cellular metabolism. In humans, dietary proteins are broken down by digestion into amino acids, which serve as metabolic fuel for other functional roles in the body. A key process in amino acid degradation is transamination, in which the amino group of an amino acid is transferred to an α-ketoacid, typically catalysed by a transaminase. The reaction can be generalised as such:
R1-amino acid + R2-α-ketoacid ⇌ R1-α-ketoacid + R2-amino acid
A very common α-keto acid is α-ketoglutarate, an intermediate in the citric acid cycle. Transamination of α-ketoglutarate gives glutamate. The resulting α-ketoacid product is often a useful one as well, which can contribute as fuel or as a substrate for further metabolism processes. Examples are as follows:
Alanine + α-ketoglutarate ⇌ pyruvate + glutamate
Aspartate + α-ketoglutarate ⇌ oxaloacetate + glutamate
Both pyruvate and oxaloacetate are key components of cellular metabolism, contributing as substrates or intermediates in fundamental processes such as glycolysis, gluconeogenesis, and the citric acid cycle.
Glutamate also plays an important role in the body's disposal of excess or waste nitrogen. Glutamate undergoes deamination, an oxidative reaction catalysed by glutamate dehydrogenase, as follows:
glutamate + H2O + NADP+ → α-ketoglutarate + NADPH + NH3 + H+
Ammonia (as ammonium) is then excreted predominantly as urea, synthesised in the liver. Transamination can thus be linked to deamination, effectively allowing nitrogen from the amine groups of amino acids to be removed, via glutamate as an intermediate, and finally excreted from the body in the form of urea.
Glutamate is also a neurotransmitter (see below), which makes it one of the most abundant molecules in the brain. Malignant brain tumors known as glioma or glioblastoma exploit this phenomenon by using glutamate as an energy source, especially when these tumors become more dependent on glutamate due to mutations in the gene IDH1.
Neurotransmitter
Glutamate is the most abundant excitatory neurotransmitter in the vertebrate nervous system. At chemical synapses, glutamate is stored in vesicles. Nerve impulses trigger the release of glutamate from the presynaptic cell. Glutamate acts on ionotropic and metabotropic (G-protein coupled) receptors. In the opposing postsynaptic cell, glutamate receptors, such as the NMDA receptor or the AMPA receptor, bind glutamate and are activated. Because of its role in synaptic plasticity, glutamate is involved in cognitive functions such as learning and memory in the brain. The form of plasticity known as long-term potentiation takes place at glutamatergic synapses in the hippocampus, neocortex, and other parts of the brain. Glutamate works not only as a point-to-point transmitter, but also through spill-over synaptic crosstalk between synapses in which summation of glutamate released from a neighboring synapse creates extrasynaptic signaling/volume transmission. In addition, glutamate plays important roles in the regulation of growth cones and synaptogenesis during brain development as originally described by Mark Mattson.
Brain nonsynaptic glutamatergic signaling circuits
Extracellular glutamate in Drosophila brains has been found to regulate postsynaptic glutamate receptor clustering, via a process involving receptor desensitization. A gene expressed in glial cells actively transports glutamate into the extracellular space, while, in the nucleus accumbens, stimulating group II metabotropic glutamate receptors was found to reduce extracellular glutamate levels. This raises the possibility that this extracellular glutamate plays an "endocrine-like" role as part of a larger homeostatic system.
GABA precursor
Glutamate also serves as the precursor for the synthesis of the inhibitory gamma-aminobutyric acid (GABA) in GABA-ergic neurons. This reaction is catalyzed by glutamate decarboxylase (GAD), which is most abundant in the cerebellum and pancreas. GABA-ergic neurons are identified (for research purposes) by revealing GAD activity with autoradiography and immunohistochemistry methods.
Stiff person syndrome is a neurologic disorder caused by anti-GAD antibodies, leading to a decrease in GABA synthesis and, therefore, impaired motor function such as muscle stiffness and spasm. Since the pancreas has abundant GAD, a direct immunological destruction occurs in the pancreas and the patients will have diabetes mellitus.
Flavor enhancer
Glutamic acid, being a constituent of protein, is present in foods that contain protein, but it can only be tasted when it is present in an unbound form. Significant amounts of free glutamic acid are present in a wide variety of foods, including cheeses and soy sauce, and glutamic acid is responsible for umami, one of the five basic tastes of the human sense of taste. Glutamic acid often is used as a food additive and flavor enhancer in the form of its sodium salt, known as monosodium glutamate (MSG).
Nutrient
All meats, poultry, fish, eggs, dairy products, and kombu are excellent sources of glutamic acid. Some protein-rich plant foods also serve as sources. 30% to 35% of gluten (much of the protein in wheat) is glutamic acid. Ninety-five percent of the dietary glutamate is metabolized by intestinal cells in a first pass.
Plant growth
Auxigro is a plant growth preparation that contains 30% glutamic acid.
NMR spectroscopy
In recent years, there has been much research into the use of residual dipolar coupling (RDC) in nuclear magnetic resonance spectroscopy (NMR). A glutamic acid derivative, poly-γ-benzyl-L-glutamate (PBLG), is often used as an alignment medium to control the scale of the dipolar interactions observed.
Role of glutamate in aging
Pharmacology
The drug phencyclidine (more commonly known as PCP or 'Angel Dust') antagonizes glutamic acid non-competitively at the NMDA receptor. For the same reasons, dextromethorphan and ketamine also have strong dissociative and hallucinogenic effects. Acute infusion of the drug eglumetad (also known as eglumegad or LY354740, an agonist of the metabotropic glutamate receptors 2 and 3) resulted in a marked diminution of yohimbine-induced stress response in bonnet macaques (Macaca radiata); chronic oral administration of eglumetad in those animals led to markedly reduced baseline cortisol levels (approximately 50 percent) in comparison to untreated control subjects. Eglumetad has also been demonstrated to act on the metabotropic glutamate receptor 3 (GRM3) of human adrenocortical cells, downregulating aldosterone synthase, CYP11B1, and the production of adrenal steroids (i.e. aldosterone and cortisol). Glutamate does not easily pass the blood–brain barrier, but, instead, is transported by a high-affinity transport system. It can also be converted into glutamine.
Glutamate toxicity can be reduced by antioxidants. The psychoactive principle of cannabis, tetrahydrocannabinol (THC), the non-psychoactive principle cannabidiol (CBD), and other cannabinoids have been found to block glutamate neurotoxicity with similar potency, acting as potent antioxidants.
| Biology and health sciences | Amino acids | Biology |
63542 | https://en.wikipedia.org/wiki/Histidine | Histidine | Histidine (symbol His or H) is an essential amino acid that is used in the biosynthesis of proteins. It contains an α-amino group (which is in the protonated –NH3+ form under biological conditions), a carboxylic acid group (which is in the deprotonated –COO− form under biological conditions), and an imidazole side chain (which is partially protonated), classifying it as a positively charged amino acid at physiological pH. Initially thought essential only for infants, it has now been shown in longer-term studies to be essential for adults also. It is encoded by the codons CAU and CAC.
Histidine was first isolated by Albrecht Kossel and Sven Gustaf Hedin in 1896. The name stems from its discovery in tissue, from histós "tissue". It is also a precursor to histamine, a vital inflammatory agent in immune responses. The acyl radical is histidyl.
Properties of the imidazole side chain
The conjugate acid (protonated form) of the imidazole side chain in histidine has a pKa of approximately 6.0. Thus, below a pH of 6, the imidazole ring is mostly protonated (as described by the Henderson–Hasselbalch equation). The resulting imidazolium ring bears two NH bonds and has a positive charge. The positive charge is equally distributed between both nitrogens and can be represented with two equally important resonance structures. Sometimes, the symbol Hip is used for this protonated form instead of the usual His. Above pH 6, one of the two protons is lost. The remaining proton of the imidazole ring can reside on either nitrogen, giving rise to what are known as the N3-H or N1-H tautomers. The N3-H tautomer is shown in the figure above. In the N1-H tautomer, the NH is nearer the backbone. These neutral tautomers, also referred to as Nε and Nδ, are sometimes referred to with symbols Hie and Hid, respectively. The imidazole/imidazolium ring of histidine is aromatic at all pH values. Under certain conditions, all three ion-forming groups of histidine can be charged forming the histidinium cation.
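As an illustrative calculation with the approximate pKa of 6.0 quoted above, the Henderson–Hasselbalch equation gives the fraction of side chains carrying the protonated imidazolium ring:

\[
f_{\text{protonated}} = \frac{1}{1 + 10^{\,\mathrm{pH}-\mathrm{p}K_a}}, \qquad f(7.4) = \frac{1}{1 + 10^{1.4}} \approx 0.04, \qquad f(5.0) = \frac{1}{1 + 10^{-1.0}} \approx 0.91,
\]

consistent with the statement that the ring is mostly protonated below pH 6 and largely neutral at physiological pH.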
The acid-base properties of the imidazole side chain are relevant to the catalytic mechanism of many enzymes. In catalytic triads, the basic nitrogen of histidine abstracts a proton from serine, threonine, or cysteine to activate it as a nucleophile. In a histidine proton shuttle, histidine is used to quickly shuttle protons. It can do this by abstracting a proton with its basic nitrogen to make a positively charged intermediate and then use another molecule, a buffer, to extract the proton from its acidic nitrogen. In carbonic anhydrases, a histidine proton shuttle is utilized to rapidly shuttle protons away from a zinc-bound water molecule to quickly regenerate the active form of the enzyme. In helices E and F of hemoglobin, histidine influences binding of dioxygen as well as carbon monoxide. This interaction enhances the affinity of Fe(II) for O2 but destabilizes the binding of CO, which binds only 200 times stronger in hemoglobin, compared to 20,000 times stronger in free heme.
The tautomerism and acid-base properties of the imidazole side chain have been characterized by 15N NMR spectroscopy. The two 15N chemical shifts are similar (about 200 ppm, relative to nitric acid on the sigma scale, on which increased shielding corresponds to increased chemical shift). NMR spectral measurements show that the chemical shift of N1-H drops slightly, whereas the chemical shift of N3-H drops considerably (about 190 vs. 145 ppm). This change indicates that the N1-H tautomer is preferred, possibly due to hydrogen bonding to the neighboring ammonium. The shielding at N3 is substantially reduced due to the second-order paramagnetic effect, which involves a symmetry-allowed interaction between the nitrogen lone pair and the excited π* states of the aromatic ring. At pH > 9, the chemical shifts of N1 and N3 are approximately 185 and 170 ppm.
Ligand
Histidine forms complexes with many metal ions. The imidazole sidechain of the histidine residue commonly serves as a ligand in metalloproteins. One example is the axial base attached to Fe in myoglobin and hemoglobin. Poly-histidine tags (of six or more consecutive H residues) are utilized for protein purification by binding to columns with nickel or cobalt, with micromolar affinity. Natural poly-histidine peptides, found in the venom of the viper Atheris squamigera have been shown to bind Zn(2+), Ni(2+) and Cu(2+) and affect the function of venom metalloproteases.
Metabolism
Biosynthesis
L-Histidine is an essential amino acid that is not synthesized de novo in humans. Humans and other animals must ingest histidine or histidine-containing proteins. The biosynthesis of histidine has been widely studied in prokaryotes such as E. coli. Histidine synthesis in E. coli involves eight gene products (His1, 2, 3, 4, 5, 6, 7, and 8) and it occurs in ten steps. This is possible because a single gene product has the ability to catalyze more than one reaction. For example, as shown in the pathway, His4 catalyzes 4 different steps in the pathway.
Histidine is synthesized from phosphoribosyl pyrophosphate (PRPP), which is made from ribose-5-phosphate by ribose-phosphate diphosphokinase in the pentose phosphate pathway. The first reaction of histidine biosynthesis is the condensation of PRPP and adenosine triphosphate (ATP) by the enzyme ATP-phosphoribosyl transferase (His1). The His4 gene product then hydrolyzes the product of the condensation, phosphoribosyl-ATP, producing phosphoribosyl-AMP (PRAMP), which is an irreversible step. His4 then catalyzes the formation of phosphoribosylformimino-AICAR-phosphate, which is then converted to phosphoribulosylformimino-AICAR-P by the His6 gene product. His7 splits phosphoribulosylformimino-AICAR-P to form D-erythro-imidazole-glycerol-phosphate. Next, His3 forms imidazole acetol-phosphate, releasing water. His5 then makes L-histidinol-phosphate, which is then hydrolyzed by His2, making histidinol. His4 catalyzes the oxidation of L-histidinol to form L-histidinal, an amino aldehyde. In the last step, L-histidinal is converted to L-histidine.
The histidine biosynthesis pathway has been studied in the fungus Neurospora crassa, and a gene (His-3) encoding a multienzyme complex was found that was similar to the His4 gene of the bacterium E. coli. A genetic study of N. crassa histidine mutants indicated that the individual activities of the multienzyme complex occur in discrete, contiguous sections of the His-3 genetic map, suggesting that the different activities of the multienzyme complex are encoded separately from each other. However, mutants were also found that lacked all three activities simultaneously, suggesting that some mutations cause loss of function of the complex as a whole.
Just like animals and microorganisms, plants need histidine for their growth and development. Microorganisms and plants are similar in that they can synthesize histidine. Both synthesize histidine from the biochemical intermediate phosphoribosyl pyrophosphate. In general, the histidine biosynthesis is very similar in plants and microorganisms.
Regulation of biosynthesis
This pathway requires energy in order to occur; therefore, the presence of ATP activates the first enzyme of the pathway, ATP-phosphoribosyl transferase (His1). ATP-phosphoribosyl transferase is the rate-determining enzyme, which is regulated through feedback inhibition, meaning that it is inhibited in the presence of the product, histidine.
Degradation
Histidine is one of the amino acids that can be converted to intermediates of the tricarboxylic acid (TCA) cycle (also known as the citric acid cycle). Histidine, along with other amino acids such as proline and arginine, takes part in deamination, a process in which its amino group is removed. In prokaryotes, histidine is first converted to urocanate by histidase. Then, urocanase converts urocanate to 4-imidazolone-5-propionate. Imidazolonepropionase catalyzes the reaction to form formiminoglutamate (FIGLU) from 4-imidazolone-5-propionate. The formimino group is transferred to tetrahydrofolate, and the remaining five carbons form glutamate. Overall, these reactions result in the formation of glutamate and ammonia. Glutamate can then be deaminated by glutamate dehydrogenase or transaminated to form α-ketoglutarate.
Conversion to other biologically active amines
The histidine amino acid is a precursor for histamine, an amine produced in the body necessary for inflammation.
The enzyme histidine ammonia-lyase converts histidine into ammonia and urocanic acid. A deficiency in this enzyme is present in the rare metabolic disorder histidinemia, producing urocanic aciduria as a key diagnostic finding.
Histidine can be converted to 3-methylhistidine, which serves as a biomarker for skeletal muscle damage, by certain methyltransferase enzymes.
Histidine is also a precursor for carnosine biosynthesis, which is a dipeptide found in skeletal muscle.
In Actinomycetota and filamentous fungi, such as Neurospora crassa, histidine can be converted into the antioxidant ergothioneine.
Requirements
The Food and Nutrition Board (FNB) of the U.S. Institute of Medicine set Recommended Dietary Allowances (RDAs) for essential amino acids in 2002. For histidine, the RDA for adults 19 years and older is 14 mg/kg of body weight per day. Supplemental histidine is being investigated for use in a variety of different conditions, including neurological disorders, atopic dermatitis, metabolic syndrome, diabetes, uraemic anaemia, ulcers, inflammatory bowel diseases, malignancies, and muscle performance during strenuous exercise.
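As a worked example of this allowance (assuming a 70 kg adult, an illustrative body weight not taken from the text):

\[
14~\frac{\text{mg}}{\text{kg}\cdot\text{day}} \times 70~\text{kg} \approx 980~\text{mg of histidine per day}.
\]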
| Biology and health sciences | Amino acids | Biology |
63543 | https://en.wikipedia.org/wiki/Isoleucine | Isoleucine | Isoleucine (symbol Ile or I) is an α-amino acid that is used in the biosynthesis of proteins. It contains an α-amino group (which is in the protonated −NH3+ form under biological conditions), an α-carboxylic acid group (which is in the deprotonated −COO− form under biological conditions), and a hydrocarbon side chain with a branch (a central carbon atom bound to three other carbon atoms). It is classified as a non-polar, uncharged (at physiological pH), branched-chain, aliphatic amino acid. It is essential in humans, meaning the body cannot synthesize it. Essential amino acids are necessary in the human diet. In plants isoleucine can be synthesized from threonine and methionine. In plants and bacteria, isoleucine is synthesized from pyruvate employing leucine biosynthesis enzymes. It is encoded by the codons AUU, AUC, and AUA.
Metabolism
Biosynthesis
In plants and microorganisms, isoleucine is synthesized from pyruvate and alpha-ketobutyrate. This pathway is not present in humans. Enzymes involved in this biosynthesis include:
Acetolactate synthase (also known as acetohydroxy acid synthase)
Acetohydroxy acid isomeroreductase
Dihydroxyacid dehydratase
Valine aminotransferase
Catabolism
Isoleucine is both a glucogenic and a ketogenic amino acid. After transamination with alpha-ketoglutarate, the carbon skeleton is oxidised and split into propionyl-CoA and acetyl-CoA. Propionyl-CoA is converted into succinyl-CoA, a TCA cycle intermediate which can be converted into oxaloacetate for gluconeogenesis (hence glucogenic). In mammals acetyl-CoA cannot be converted to carbohydrate but can be either fed into the TCA cycle by condensing with oxaloacetate to form citrate or used in the synthesis of ketone bodies (hence ketogenic) or fatty acids.
Metabolic diseases
The degradation of isoleucine is impaired in the following metabolic diseases:
Combined malonic and methylmalonic aciduria (CMAMMA)
Maple syrup urine disease (MSUD)
Methylmalonic acidemia
Propionic acidemia
Insulin resistance
Isoleucine, like other branched-chain amino acids, is associated with insulin resistance: higher levels of isoleucine are observed in the blood of diabetic mice, rats, and humans. In diet-induced obese and insulin resistant mice, a diet with decreased levels of isoleucine (with or without the other branched-chain amino acids) results in reduced adiposity and improved insulin sensitivity. Reduced dietary levels of isoleucine are required for the beneficial metabolic effects of a low protein diet. In humans, a protein restricted diet lowers blood levels of isoleucine and decreases fasting blood glucose levels. Mice fed a low isoleucine diet are leaner, live longer, and are less frail. In humans, higher dietary levels of isoleucine are associated with greater body mass index.
Functions and requirement
The Food and Nutrition Board (FNB) of the U.S. Institute of Medicine set Recommended Dietary Allowances (RDAs) for essential amino acids in 2002. For adults 19 years and older, 19 mg of isoleucine/kg body weight is required daily.
Beside its biological role as a nutrient, isoleucine also participates in regulation of glucose metabolism. Isoleucine is an essential component of many proteins. As an essential amino acid, isoleucine must be ingested or protein production in the cell will be disrupted. Fetal hemoglobin is one of the many proteins that require isoleucine. Isoleucine is present in the gamma chain of fetal hemoglobin and must be present for the protein to form.
Genetic diseases can change the consumption requirements of isoleucine. Amino acids cannot be stored in the body. Buildup of excess amino acids will cause a buildup of toxic molecules, so humans have many pathways to degrade each amino acid when the need for protein synthesis has been met. Mutations in isoleucine-degrading enzymes can lead to dangerous buildup of isoleucine and its toxic derivatives. One example is maple syrup urine disease (MSUD), a disorder that leaves people unable to break down isoleucine, valine, and leucine. People with MSUD manage their disease by a reduced intake of all three of those amino acids alongside drugs that help excrete built-up toxins.
Many animals and plants are dietary sources of isoleucine as a component of proteins. Foods that have high amounts of isoleucine include eggs, soy protein, seaweed, turkey, chicken, lamb, cheese, and fish.
Synthesis
Routes to isoleucine are numerous. One common multistep procedure starts from 2-bromobutane and diethylmalonate. Synthetic isoleucine was first reported in 1905 by French chemists Bouveault and Locquin.
Discovery
German chemist Felix Ehrlich discovered isoleucine while studying the composition of beet-sugar molasses in 1903. In 1907, Ehrlich carried out further studies on fibrin, egg albumin, gluten, and beef muscle. These studies verified the natural composition of isoleucine. Ehrlich published his own synthesis of isoleucine in 1908.
| Biology and health sciences | Amino acids | Biology |
63544 | https://en.wikipedia.org/wiki/Lysine | Lysine | Lysine (symbol Lys or K) is an α-amino acid that is a precursor to many proteins. Lysine contains an α-amino group (which is in the protonated form when the lysine is dissolved in water at physiological pH), an α-carboxylic acid group (which is in the deprotonated form when the lysine is dissolved in water at physiological pH), and a side chain (which is partially protonated when the lysine is dissolved in water at physiological pH), and so it is classified as a basic, charged (in water at physiological pH), aliphatic amino acid. It is encoded by the codons AAA and AAG. Like almost all other amino acids, the α-carbon is chiral and lysine may refer to either enantiomer or a racemic mixture of both. For the purpose of this article, lysine will refer to the biologically active enantiomer L-lysine, where the α-carbon is in the S configuration.
The human body cannot synthesize lysine. It is essential in humans and must therefore be obtained from the diet. In organisms that synthesise lysine, two main biosynthetic pathways exist, the diaminopimelate and α-aminoadipate pathways, which employ distinct enzymes and substrates and are found in diverse organisms. Lysine catabolism occurs through one of several pathways, the most common of which is the saccharopine pathway.
Lysine plays several roles in humans, most importantly proteinogenesis, but also in the crosslinking of collagen polypeptides, uptake of essential mineral nutrients, and in the production of carnitine, which is key in fatty acid metabolism. Lysine is also often involved in histone modifications, and thus, impacts the epigenome. The ε-amino group often participates in hydrogen bonding and as a general base in catalysis. The ε-ammonium group (−NH3+) is attached to the fourth carbon from the α-carbon, which is attached to the carboxyl (−COOH) group.
Due to its importance in several biological processes, a lack of lysine can lead to several disease states including defective connective tissues, impaired fatty acid metabolism, anaemia, and systemic protein-energy deficiency. In contrast, an overabundance of lysine, caused by ineffective catabolism, can cause severe neurological disorders.
Lysine was first isolated by the German biological chemist Ferdinand Heinrich Edmund Drechsel in 1889 from hydrolysis of the protein casein, and thus named it Lysin, . In 1902, the German chemists Emil Fischer and Fritz Weigert determined lysine's chemical structure by synthesizing it.
The one-letter symbol K was assigned to lysine for being alphabetically nearest, with L being assigned to the structurally simpler leucine, and M to methionine.
Biosynthesis
Two pathways have been identified in nature for the synthesis of lysine. The diaminopimelate (DAP) pathway belongs to the aspartate derived biosynthetic family, which is also involved in the synthesis of threonine, methionine and isoleucine, whereas the α-aminoadipate (AAA) pathway is part of the glutamate biosynthetic family.
DAP pathway
The DAP pathway is found in both prokaryotes and plants and begins with the dihydrodipicolinate synthase (DHDPS) (E.C 4.3.3.7) catalysed condensation reaction between the aspartate derived, L-aspartate semialdehyde, and pyruvate to form (4S)-4-hydroxy-2,3,4,5-tetrahydro-(2S)-dipicolinic acid (HTPA). The product is then reduced by dihydrodipicolinate reductase (DHDPR) (E.C 1.3.1.26), with NAD(P)H as a proton donor, to yield 2,3,4,5-tetrahydrodipicolinate (THDP). From this point on, four pathway variations have been found, namely the acetylase, aminotransferase, dehydrogenase, and succinylase pathways. Both the acetylase and succinylase variant pathways use four enzyme catalysed steps, the aminotransferase pathway uses two enzymes, and the dehydrogenase pathway uses a single enzyme. These four variant pathways converge at the formation of the penultimate product, meso‑diaminopimelate, which is subsequently enzymatically decarboxylated in an irreversible reaction catalysed by diaminopimelate decarboxylase (DAPDC) (E.C 4.1.1.20) to produce L-lysine. The DAP pathway is regulated at multiple levels, including upstream at the enzymes involved in aspartate processing as well as at the initial DHDPS catalysed condensation step. Lysine imparts a strong negative feedback loop on these enzymes and, subsequently, regulates the entire pathway.
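In outline, the shared steps of the DAP pathway described above can be restated as follows (this is only a readability summary of the paragraph; the four variant routes between THDP and meso-diaminopimelate are collapsed into one bracketed step):

    L-aspartate semialdehyde + pyruvate --(DHDPS)--> HTPA --(DHDPR, NAD(P)H)--> THDP --[acetylase / aminotransferase / dehydrogenase / succinylase variants]--> meso-diaminopimelate --(DAPDC)--> L-lysine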
AAA pathway
The AAA pathway involves the condensation of α-ketoglutarate and acetyl-CoA via the intermediate AAA for the synthesis of L-lysine. This pathway has been shown to be present in several yeast species, as well as protists and higher fungi. It has also been reported that an alternative variant of the AAA route has been found in Thermus thermophilus and Pyrococcus horikoshii, which could indicate that this pathway is more widely spread in prokaryotes than originally proposed. The first and rate-limiting step in the AAA pathway is the condensation reaction between acetyl-CoA and α‑ketoglutarate catalysed by homocitrate synthase (HCS) (E.C 2.3.3.14) to give the intermediate homocitryl‑CoA, which is hydrolysed by the same enzyme to produce homocitrate. Homocitrate is enzymatically dehydrated by homoaconitase (HAc) (E.C 4.2.1.36) to yield cis-homoaconitate. HAc then catalyses a second reaction in which cis-homoaconitate undergoes rehydration to produce homoisocitrate. The resulting product undergoes an oxidative decarboxylation by homoisocitrate dehydrogenase (HIDH) (E.C 1.1.1.87) to yield α‑ketoadipate. AAA is then formed via a pyridoxal 5′-phosphate (PLP)-dependent aminotransferase (PLP-AT) (E.C 2.6.1.39), using glutamate as the amino donor. From this point on, the AAA pathway varies depending on the kingdom. In fungi, AAA is reduced to α‑aminoadipate-semialdehyde via AAA reductase (E.C 1.2.1.95) in a unique process involving both adenylation and reduction that is activated by a phosphopantetheinyl transferase (E.C 2.7.8.7). Once the semialdehyde is formed, saccharopine reductase (E.C 1.5.1.10) catalyses a condensation reaction with glutamate and NAD(P)H, as a proton donor, and the imine is reduced to produce the penultimate product, saccharopine. The final step of the pathway in fungi involves the saccharopine dehydrogenase (SDH) (E.C 1.5.1.8) catalysed oxidative deamination of saccharopine, resulting in L-lysine. In a variant AAA pathway found in some prokaryotes, AAA is first converted to N‑acetyl-α-aminoadipate, which is phosphorylated and then reductively dephosphorylated to the ε-aldehyde. The aldehyde is then transaminated to N‑acetyllysine, which is deacetylated to give L-lysine. However, the enzymes involved in this variant pathway need further validation.
Catabolism
As with all amino acids, catabolism of lysine is initiated from the uptake of dietary lysine or from the breakdown of intracellular protein. Catabolism is also used as a means to control the intracellular concentration of free lysine and maintain a steady-state to prevent the toxic effects of excessive free lysine. There are several pathways involved in lysine catabolism, but the most commonly used is the saccharopine pathway, which primarily takes place in the liver (and equivalent organs) in animals, specifically within the mitochondria. This is the reverse of the previously described AAA pathway. In animals and plants, the first two steps of the saccharopine pathway are catalysed by the bifunctional enzyme, α-aminoadipic semialdehyde synthase (AASS), which possesses both lysine-ketoglutarate reductase (LKR) (E.C 1.5.1.8) and SDH activities, whereas in other organisms, such as bacteria and fungi, both of these enzymes are encoded by separate genes. The first step involves the LKR catalysed reduction of L-lysine in the presence of α-ketoglutarate to produce saccharopine, with NAD(P)H acting as a proton donor. Saccharopine then undergoes a dehydration reaction, catalysed by SDH in the presence of NAD+, to produce α-aminoadipate semialdehyde (AAS) and glutamate. AAS dehydrogenase (AASD) (E.C 1.2.1.31) then further dehydrates the molecule into AAA. Subsequently, PLP-AT catalyses the reverse reaction to that of the AAA biosynthesis pathway, resulting in AAA being converted to α-ketoadipate. The product, α‑ketoadipate, is decarboxylated in the presence of NAD+ and coenzyme A to yield glutaryl-CoA; however, the enzyme involved in this step is yet to be fully elucidated. Some evidence suggests that the 2-oxoadipate dehydrogenase complex (OADHc), which is structurally homologous to the E1 subunit of the oxoglutarate dehydrogenase complex (OGDHc) (E.C 1.2.4.2), is responsible for the decarboxylation reaction. Finally, glutaryl-CoA is oxidatively decarboxylated to crotonyl-CoA by glutaryl-CoA dehydrogenase (E.C 1.3.8.6), which goes on to be further processed through multiple enzymatic steps to yield acetyl-CoA, an essential carbon metabolite involved in the tricarboxylic acid cycle (TCA).
Nutritional value
Lysine is an essential amino acid in humans. The daily nutritional requirement varies from ~60 mg/kg of body weight in infancy to ~30 mg/kg in adults. This requirement is commonly met in Western diets, where the intake of lysine from meat and vegetable sources is well in excess of the recommendation. In vegetarian diets, lysine intake is lower because of the limited quantity of lysine in cereal crops compared with meat sources.
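As a rough worked example (the 70 kg body weight below is an assumed illustration, not a figure from the article), the per-kilogram values translate into absolute daily amounts as follows:

    # hypothetical worked example: daily lysine requirement for an assumed 70 kg adult
    requirement_mg_per_kg = 30                                             # adult figure quoted above
    body_weight_kg = 70                                                    # assumed example weight
    daily_requirement_g = requirement_mg_per_kg * body_weight_kg / 1000    # ≈ 2.1 g of lysine per day
    # an infant, at ~60 mg/kg, needs roughly twice as much lysine per kilogram of body weight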
Given the limiting concentration of lysine in cereal crops, it has long been speculated that the content of lysine can be increased through genetic modification practices. Often these practices have involved the intentional dysregulation of the DAP pathway by means of introducing lysine feedback-insensitive orthologues of the DHDPS enzyme. These methods have met limited success likely due to the toxic side effects of increased free lysine and indirect effects on the TCA cycle. Plants accumulate lysine and other amino acids in the form of seed storage proteins, found within the seeds of the plant, and this represents the edible component of cereal crops. This highlights the need to not only increase free lysine, but also direct lysine towards the synthesis of stable seed storage proteins, and subsequently, increase the nutritional value of the consumable component of crops. While genetic modification practices have met limited success, more traditional selective breeding techniques have allowed for the isolation of "Quality Protein Maize", which has significantly increased levels of lysine and tryptophan, also an essential amino acid. This increase in lysine content is attributed to an opaque-2 mutation that reduced the transcription of lysine-lacking zein-related seed storage proteins and, as a result, increased the abundance of other proteins that are rich in lysine. Commonly, to overcome the limiting abundance of lysine in livestock feed, industrially produced lysine is added. The industrial process includes the fermentative culturing of Corynebacterium glutamicum and the subsequent purification of lysine.
Dietary sources
Good sources of lysine are high-protein foods such as eggs, meat (specifically red meat, lamb, pork, and poultry), soy, beans and peas, cheese (particularly Parmesan), and certain fish (such as cod and sardines). Lysine is the limiting amino acid (the essential amino acid found in the smallest quantity in the particular foodstuff) in most cereal grains, but is plentiful in most pulses (legumes). Beans contain the lysine that maize lacks, and in the human archeological record beans and maize often appear together, as in the Three Sisters: beans, maize, and squash.
A food is considered to have sufficient lysine if it has at least 51 mg of lysine per gram of protein (so that the protein is 5.1% lysine). L-lysine HCl is used as a dietary supplement, providing 80.03% L-lysine. As such, 1 g of L-lysine is contained in 1.25 g of L-lysine HCl.
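As a sketch of where these figures come from (the molar masses below are standard values and are not stated in the article), the ~80% figure is simply the mass fraction of lysine in the hydrochloride salt:

    # mass fraction of L-lysine in L-lysine HCl, from standard molar masses (assumed values)
    m_lysine = 146.19                               # g/mol, C6H14N2O2
    m_hcl = 36.46                                   # g/mol
    fraction = m_lysine / (m_lysine + m_hcl)        # ≈ 0.800, i.e. ~80% L-lysine by mass
    salt_per_gram_lysine = 1 / fraction             # ≈ 1.25 g of L-lysine HCl per 1 g of L-lysine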
Biological roles
The most common role for lysine is proteinogenesis. Lysine frequently plays an important role in protein structure. Since its side chain contains a positively charged group on one end and a long hydrophobic carbon tail close to the backbone, lysine is considered somewhat amphipathic. For this reason, lysine can be found buried within proteins as well as, more commonly, in solvent channels and on the exterior of proteins, where it can interact with the aqueous environment. Lysine can also contribute to protein stability, as its ε-amino group often participates in hydrogen bonds, salt bridges, and covalent interactions that form a Schiff base.
A second major role of lysine is in epigenetic regulation by means of histone modification. There are several types of covalent histone modifications, which commonly involve lysine residues found in the protruding tail of histones. Modifications often include the addition or removal of an acetyl group (CH3CO−), forming acetyllysine or reverting to lysine, of up to three methyl groups (−CH3), or of a ubiquitin or SUMO protein group. The various modifications have downstream effects on gene regulation, through which genes can be activated or repressed.
Lysine has also been implicated in other biological processes, including the structural proteins of connective tissues, calcium homeostasis, and fatty acid metabolism. Lysine has been shown to be involved in the crosslinking between the three helical polypeptides in collagen, resulting in its stability and tensile strength. This mechanism is akin to the role of lysine in bacterial cell walls, in which lysine (and meso-diaminopimelate) are critical to the formation of crosslinks, and therefore, stability of the cell wall. This concept has previously been explored as a means to circumvent the unwanted release of potentially pathogenic genetically modified bacteria. It was proposed that an auxotrophic strain of Escherichia coli (X1776) could be used for all genetic modification practices, as the strain is unable to survive without the supplementation of DAP, and thus, cannot live outside of a laboratory environment. Lysine has also been proposed to be involved in intestinal calcium absorption and renal calcium retention, and thus, may play a role in calcium homeostasis. Finally, lysine has been shown to be a precursor for carnitine, which transports fatty acids to the mitochondria, where they can be oxidised for the release of energy. Carnitine is synthesised from trimethyllysine, which is a product of the degradation of certain proteins; as such, lysine must first be incorporated into proteins and be methylated prior to being converted to carnitine. However, in mammals the primary source of carnitine is through dietary sources, rather than through lysine conversion.
In opsins like rhodopsin and the visual opsins (encoded by the genes OPN1SW, OPN1MW, and OPN1LW), retinaldehyde forms a Schiff base with a conserved lysine residue, and interaction of light with the retinylidene group causes signal transduction in color vision (See visual cycle for details).
Disputed roles
It has long been discussed whether lysine, when administered intravenously or orally, can significantly increase the release of growth hormone. This has led to athletes using lysine as a means of promoting muscle growth while training; however, no significant evidence to support this application of lysine has been found to date.
Because herpes simplex virus (HSV) proteins are richer in arginine and poorer in lysine than the cells they infect, lysine supplements have been tried as a treatment. Since the two amino acids are taken up in the intestine, reclaimed in the kidney, and moved into cells by the same amino acid transporters, an abundance of lysine would, in theory, limit the amount of arginine available for viral replication. Clinical studies do not provide good evidence for effectiveness as a prophylactic or in the treatment for HSV outbreaks. In response to product claims that lysine could improve immune responses to HSV, a review by the European Food Safety Authority found no evidence of a cause–effect relationship. The same review, published in 2011, found no evidence to support claims that lysine could lower cholesterol, increase appetite, contribute to protein synthesis in any role other than as an ordinary nutrient, or increase calcium absorption or retention.
Roles in disease
Diseases related to lysine are a result of the downstream processing of lysine, i.e. the incorporation into proteins or modification into alternative biomolecules. The role of lysine in collagen has been outlined above; however, a lack of lysine and hydroxylysine involved in the crosslinking of collagen peptides has been linked to a disease state of the connective tissue. As carnitine is a key lysine-derived metabolite involved in fatty acid metabolism, a substandard diet lacking sufficient carnitine and lysine can lead to decreased carnitine levels, which can have significant cascading effects on an individual's health. Lysine has also been shown to play a role in anaemia, as lysine is suspected to have an effect on the uptake of iron and, subsequently, the concentration of ferritin in blood plasma. However, the exact mechanism of action is yet to be elucidated. Most commonly, lysine deficiency is seen in non-western societies and manifests as protein-energy malnutrition, which has profound and systemic effects on the health of the individual. There is also a hereditary genetic disease that involves mutations in the enzymes responsible for lysine catabolism, namely the bifunctional AASS enzyme of the saccharopine pathway. Due to a lack of lysine catabolism, the amino acid accumulates in plasma and patients develop hyperlysinaemia, which can range from asymptomatic to severe neurological disability, including epilepsy, ataxia, spasticity, and psychomotor impairment. The clinical significance of hyperlysinemia is the subject of debate in the field, with some studies finding no correlation between physical or mental disabilities and hyperlysinemia. In addition to this, mutations in genes related to lysine metabolism have been implicated in several disease states, including pyridoxine-dependent epilepsy (ALDH7A1 gene), α-ketoadipic and α-aminoadipic aciduria (DHTKD1 gene), and glutaric aciduria type 1 (GCDH gene).
Hyperlysinuria is marked by high amounts of lysine in the urine. It is often due to a metabolic disease in which a protein involved in the breakdown of lysine is non-functional due to a genetic mutation. It may also occur due to a failure of renal tubular transport.
Use of lysine in animal feed
Lysine production for animal feed is a major global industry, reaching almost 700,000 tons in 2009, with a market value of over €1.22 billion. Lysine is an important additive to animal feed because it is a limiting amino acid when optimizing the growth of certain animals, such as pigs and chickens, for the production of meat. Lysine supplementation allows for the use of lower-cost plant protein (maize, for instance, rather than soy) while maintaining high growth rates and limiting the pollution from nitrogen excretion. In turn, however, phosphate pollution is a major environmental cost when corn is used as feed for poultry and swine.
Lysine is industrially produced by microbial fermentation, mainly from sugar. Genetic engineering research is actively pursuing bacterial strains to improve the efficiency of production and allow lysine to be made from other substrates. The most commonly used bacterium is Corynebacterium glutamicum, specially mutagenized or genetically engineered to produce lysine, but analogous strains of Escherichia coli are also employed.
In popular culture
The 1993 film Jurassic Park, which is based on the 1990 novel Jurassic Park by Michael Crichton, features dinosaurs that were genetically altered so that they could not produce lysine, an example of engineered auxotrophy. This was known as the "lysine contingency" and was supposed to prevent the cloned dinosaurs from surviving outside the park, forcing them to depend on lysine supplements provided by the park's veterinary staff. In reality, no animal can produce lysine; it is an essential amino acid.
In 1996, lysine became the focus of a price-fixing case, the largest in United States history. The Archer Daniels Midland Company paid a fine of US$100 million, and three of its executives were convicted and served prison time. Also found guilty in the price-fixing case were two Japanese firms (Ajinomoto, Kyowa Hakko) and a South Korean firm (Sewon). Secret video recordings of the conspirators fixing lysine's price can be found online or by requesting the video from the U.S. Department of Justice, Antitrust Division. This case gave the basis for the book The Informant: A True Story, and the movie The Informant!.
| Biology and health sciences | Amino acids | Biology |
63546 | https://en.wikipedia.org/wiki/Leucine | Leucine | Leucine (symbol Leu or L) is an essential amino acid that is used in the biosynthesis of proteins. Leucine is an α-amino acid, meaning it contains an α-amino group (which is in the protonated −NH3+ form under biological conditions), an α-carboxylic acid group (which is in the deprotonated −COO− form under biological conditions), and a side chain isobutyl group, making it a non-polar aliphatic amino acid. It is essential in humans, meaning the body cannot synthesize it; it must be obtained from the diet. Human dietary sources are foods that contain protein, such as meats, dairy products, soy products, and beans and other legumes. It is encoded by the codons UUA, UUG, CUU, CUC, CUA, and CUG. Leucine is named after the Greek word for "white": λευκός (leukós, "white"), after its common appearance as a white powder, a property it shares with many other amino acids.
Like valine and isoleucine, leucine is a branched-chain amino acid. The primary end products of leucine metabolism are acetyl-CoA and acetoacetate; consequently, it is one of the two exclusively ketogenic amino acids, with lysine being the other. It is the most important ketogenic amino acid in humans.
Leucine and β-hydroxy β-methylbutyric acid, a minor leucine metabolite, exhibit pharmacological activity in humans and have been demonstrated to promote protein biosynthesis via the phosphorylation of the mechanistic target of rapamycin (mTOR).
Dietary leucine
As a food additive, L-leucine has E number E641 and is classified as a flavor enhancer.
Requirements
The Food and Nutrition Board (FNB) of the U.S. Institute of Medicine set Recommended Dietary Allowances (RDAs) for essential amino acids in 2002. For leucine, the RDA for adults 19 years and older is 42 mg/kg body weight/day.
Sources
Health effects
As a dietary supplement, leucine has been found to slow the degradation of muscle tissue by increasing the synthesis of muscle proteins in aged rats. However, results of comparative studies are conflicting. Long-term leucine supplementation does not increase muscle mass or strength in healthy elderly men. More studies are needed, preferably ones based on an objective, random sample of society. Factors such as lifestyle, age, sex, diet and exercise must be factored into the analyses to isolate the effects of supplemental leucine taken alone or with other branched-chain amino acids (BCAAs). Until then, dietary supplemental leucine cannot be credited as the prime driver of muscle growth or optimal muscle maintenance for the entire population.
Both L-leucine and D-leucine protect mice against epileptic seizures. D-leucine also terminates seizures in mice after the onset of seizure activity, at least as effectively as diazepam and without sedative effects. Decreased dietary intake of L-leucine lessens adiposity in mice. High blood levels of leucine are associated with insulin resistance in humans and in rodent models. This might be due to the effect of leucine to stimulate mTOR signaling. Dietary restriction of leucine and the other BCAAs can reverse diet-induced obesity in wild-type mice by increasing energy expenditure, and can restrict fat mass gain of hyperphagic rats.
Safety
Leucine toxicity, as seen in decompensated maple syrup urine disease, causes delirium and neurologic compromise, and can be life-threatening.
A high intake of leucine may cause or exacerbate symptoms of pellagra in people with low niacin status because it interferes with the conversion of L-tryptophan to niacin.
Leucine doses exceeding 500 mg/kg/day have been associated with hyperammonemia. On this basis, an unofficial tolerable upper intake level (UL) for leucine in healthy adult men has been suggested at 500 mg/kg/day, or 35 g/day, under acute dietary conditions.
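A short sketch relating the per-kilogram figures quoted in this article to the absolute amounts (the 70 kg reference body weight is an assumption, chosen because it reproduces the stated 35 g/day):

    # reconciling the per-kilogram and absolute leucine figures quoted above
    reference_weight_kg = 70                                   # assumed reference adult weight
    ul_g_per_day = 500 * reference_weight_kg / 1000            # 500 mg/kg/day -> 35 g/day (suggested UL)
    rda_g_per_day = 42 * reference_weight_kg / 1000            # 42 mg/kg/day  -> ~2.9 g/day (RDA quoted earlier)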
Pharmacology
Pharmacodynamics
Leucine is a dietary amino acid with the capacity to directly stimulate myofibrillar muscle protein synthesis. This effect of leucine results from its role as an activator of the mechanistic target of rapamycin (mTOR), a serine-threonine protein kinase that regulates protein biosynthesis and cell growth. The activation of mTOR by leucine is mediated through Rag GTPases, leucine binding to leucyl-tRNA synthetase, leucine binding to sestrin 2, and possibly other mechanisms.
Metabolism in humans
Leucine metabolism occurs in many tissues in the human body; however, most dietary leucine is metabolized within the liver, adipose tissue, and muscle tissue. Adipose and muscle tissue use leucine in the formation of sterols and other compounds. Combined leucine use in these two tissues is seven times greater than in the liver.
A small fraction of leucine metabolism – less than 5% in all tissues except the testes, where it accounts for about 33% – is initially catalyzed by leucine aminomutase, producing β-leucine, which is subsequently metabolized into β-ketoisocaproate (β-KIC), β-ketoisocaproyl-CoA, and then acetyl-CoA by a series of uncharacterized enzymes.
Synthesis in nonhuman organisms
Leucine is an essential amino acid in the diet of animals because they lack the complete enzyme pathway to synthesize it de novo from potential precursor compounds. Consequently, they must ingest it, usually as a component of proteins. Plants and microorganisms synthesize leucine from pyruvic acid with a series of enzymes:
Acetolactate synthase
Acetohydroxy acid isomeroreductase
Dihydroxyacid dehydratase
α-Isopropylmalate synthase
α-Isopropylmalate isomerase
Leucine aminotransferase
Synthesis of the small, hydrophobic amino acid valine also includes the initial part of this pathway.
Chemistry
Leucine is a branched-chain amino acid (BCAA) since it possesses an aliphatic side chain that is not linear.
Racemic leucine has been subjected to circularly polarized synchrotron radiation to better understand the origin of biomolecular asymmetry. An enantiomeric enhancement of 2.6% was induced, indicating a possible photochemical origin of the homochirality of biomolecules.
| Biology and health sciences | Amino acids | Biology |
63548 | https://en.wikipedia.org/wiki/Asparagine | Asparagine | Asparagine (symbol Asn or N) is an α-amino acid that is used in the biosynthesis of proteins. It contains an α-amino group (which is in the protonated −NH3+ form under biological conditions), an α-carboxylic acid group (which is in the deprotonated −COO− form under biological conditions), and a side chain carboxamide, classifying it as a polar (at physiological pH), aliphatic amino acid. It is non-essential in humans, meaning the body can synthesize it. It is encoded by the codons AAU and AAC.
The one-letter symbol N for asparagine was assigned arbitrarily, with the proposed mnemonic asparagiNe.
History
Asparagine was first isolated in 1806 in a crystalline form by French chemists Louis Nicolas Vauquelin and Pierre Jean Robiquet (then a young assistant). It was isolated from asparagus juice, in which it is abundant, hence the chosen name. It was the first amino acid to be isolated.
Three years later, in 1809, Pierre Jean Robiquet identified a substance from liquorice root with properties which he qualified as very similar to those of asparagine, and which Plisson identified in 1828 as asparagine itself.
The determination of asparagine's structure required decades of research. The empirical formula for asparagine was first determined in 1833 by the French chemists Antoine François Boutron Charlard and Théophile-Jules Pelouze; in the same year, the German chemist Justus Liebig provided a more accurate formula. In 1846 the Italian chemist Raffaele Piria treated asparagine with nitrous acid, which removed the molecule's amine (–NH2) groups and transformed asparagine into malic acid. This revealed the molecule's fundamental structure: a chain of four carbon atoms. Piria thought that asparagine was a diamide of malic acid; however, in 1862 the German chemist Hermann Kolbe showed that this surmise was wrong; instead, Kolbe concluded that asparagine was an amide of an amine of succinic acid. In 1886, the Italian chemist Arnaldo Piutti (1857–1928) discovered a mirror image or "enantiomer" of the natural form of asparagine, which shared many of asparagine's properties, but which also differed from it. Since the structure of asparagine was still not fully known – the location of the amine group within the molecule was still not settled – Piutti synthesized asparagine and thus published its true structure in 1888.
Structural function in proteins
Since the asparagine side-chain can form hydrogen bond interactions with the peptide backbone, asparagine residues are often found near the beginning of alpha-helices as asx turns and asx motifs, and in similar turn motifs, or as amide rings, in beta sheets. Its role can be thought of as "capping" the hydrogen bond interactions that would otherwise be satisfied by the polypeptide backbone.
Asparagine also provides key sites for N-linked glycosylation, the modification of the protein chain with the addition of carbohydrate chains. Typically, a carbohydrate tree can be added to an asparagine residue only if the latter is flanked on the carboxyl side by X-serine or X-threonine, where X is any amino acid except proline.
Asparagine can be hydroxylated in the HIF1 hypoxia-inducible transcription factor. This modification inhibits HIF1-mediated gene activation.
Sources
Dietary sources
Asparagine is not essential for humans, which means that it can be synthesized from central metabolic pathway intermediates and is not required in the diet.
Asparagine is found in:
Animal sources: dairy, whey, beef, poultry, eggs, fish, lactalbumin, seafood
Plant sources: seaweed (spirulina), potatoes, soy protein isolate, tofu
Biosynthesis and catabolism
The precursor to asparagine is oxaloacetate, which a transaminase enzyme converts to aspartate. The enzyme transfers the amino group from glutamate to oxaloacetate producing α-ketoglutarate and aspartate. The enzyme asparagine synthetase produces asparagine, AMP, glutamate, and pyrophosphate from aspartate, glutamine, and ATP. Asparagine synthetase uses ATP to activate aspartate, forming β-aspartyl-AMP. Glutamine donates an ammonium group, which reacts with β-aspartyl-AMP to form asparagine and free AMP.
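The glutamine-dependent mechanism described above can be condensed into a single net equation (a textbook-level summary of the steps already given, not an additional reaction):

    aspartate + glutamine + ATP + H2O → asparagine + glutamate + AMP + PPi (pyrophosphate)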
In a reaction that is the reverse of its biosynthesis, asparagine is hydrolyzed to aspartate by asparaginase. Aspartate then undergoes transamination with α-ketoglutarate to form glutamate and oxaloacetate; the oxaloacetate enters the citric acid cycle (Krebs cycle).
Acrylamide controversy
Heating a mixture of asparagine and reducing sugars or other sources of carbonyls produces acrylamide in food. These products occur in baked and fried foods such as French fries, potato chips, and toasted bread. Acrylamide is converted in the liver to glycidamide, which is a possible carcinogen.
Function
Asparagine synthetase is required for normal development of the brain. Asparagine is also involved in protein synthesis during replication of poxviruses.
The addition of N-acetylglucosamine to asparagine is performed by oligosaccharyltransferase enzymes in the endoplasmic reticulum. This glycosylation is involved in protein structure and function.
| Biology and health sciences | Amino acids | Biology |
63549 | https://en.wikipedia.org/wiki/Glutamine | Glutamine | Glutamine (symbol Gln or Q) is an α-amino acid that is used in the biosynthesis of proteins. Its side chain is similar to that of glutamic acid, except the carboxylic acid group is replaced by an amide. It is classified as a charge-neutral, polar amino acid. It is non-essential and conditionally essential in humans, meaning the body can usually synthesize sufficient amounts of it, but in some instances of stress, the body's demand for glutamine increases, and glutamine must be obtained from the diet. It is encoded by the codons CAA and CAG. It is named after glutamic acid, which in turn is named after its discovery in cereal proteins, gluten.
In human blood, glutamine is the most abundant free amino acid.
Dietary sources of glutamine include protein-rich foods such as beef, chicken, fish, dairy products and eggs; vegetables such as beans, beets, cabbage, spinach, carrots and parsley; vegetable juices; and also wheat, papaya, Brussels sprouts, celery, kale and fermented foods such as miso.
The one-letter symbol Q for glutamine was assigned in alphabetical sequence to N for asparagine, being larger by merely one methylene –CH2– group. Note that P was used for proline, and O was avoided due to similarity with D. The mnemonic Qlutamine was also proposed.
Functions
Glutamine plays a role in a variety of biochemical functions:
Protein synthesis, as any other of the 20 proteinogenic amino acids
Lipid synthesis, especially by cancer cells.
Regulation of acid-base balance in the kidney by producing ammonium
Cellular energy, as a source, next to glucose
Nitrogen donation for many anabolic processes, including the synthesis of purines
Carbon donation, as a source, refilling the citric acid cycle
Nontoxic transporter of ammonia in the blood circulation.
Integrity of healthy intestinal mucosa, though small randomized trials have shown no benefit in Crohn's disease.
Roles in metabolism
Glutamine maintains redox balance by participating in glutathione synthesis and contributing to anabolic processes such as lipid synthesis by reductive carboxylation.
Glutamine provides a source of carbon and nitrogen for use in other metabolic processes. Glutamine is present in serum at higher concentrations than other amino acids and is essential for many cellular functions. Examples include the synthesis of nucleotides and non-essential amino acids. One of the most important functions of glutamine is its ability to be converted into α-ketoglutarate (α-KG), which helps to maintain the flow of the tricarboxylic acid cycle, generating ATP via the electron carriers NADH and FADH2. The highest consumption of glutamine occurs in the cells of the intestines, kidney cells (where it is used for acid-base balance), activated immune cells, and many cancer cells.
Production
Glutamine is produced industrially using mutants of Brevibacterium flavum, which gives ca. 40 g/L in 2 days using glucose as a carbon source.
Biosynthesis
Glutamine synthesis from glutamate and ammonia is catalyzed by the enzyme glutamine synthetase. The majority of glutamine production occurs in muscle tissue, accounting for about 90% of all glutamine synthesized. Glutamine is also released, in small amounts, by the lungs and brain. Although the liver is capable of glutamine synthesis, its role in glutamine metabolism is more regulatory than productive, as the liver takes up glutamine derived from the gut via the hepatic portal system.
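For reference, the glutamine synthetase reaction mentioned above is conventionally summarised as the following net equation (standard textbook form, not stated explicitly in the article):

    glutamate + NH3 + ATP → glutamine + ADP + Pi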
Uses
Nutrition
Glutamine is the most abundant naturally occurring, nonessential amino acid in the human body, and one of the few amino acids that can directly cross the blood–brain barrier. Humans obtain glutamine through catabolism of proteins in foods they eat. In states where tissue is being built or repaired, such as the growth of infants or healing from wounds or severe illness, glutamine becomes conditionally essential.
Sickle cell disease
In 2017, the U.S. Food and Drug Administration (FDA) approved L-glutamine oral powder, marketed as Endari, to reduce severe complications of sickle cell disease in people aged five years and older with the disorder.
The safety and efficacy of L-glutamine oral powder were studied in a randomized trial of subjects ages five to 58 years old with sickle cell disease who had two or more painful crises within the 12 months prior to enrollment in the trial. Subjects were assigned randomly to treatment with L-glutamine oral powder or placebo, and the effect of treatment was evaluated over 48 weeks. Subjects who were treated with L-glutamine oral powder experienced fewer hospital visits for pain treated with a parenterally administered narcotic or ketorolac (sickle cell crises), on average, compared to subjects who received a placebo (median 3 vs. median 4), fewer hospitalizations for sickle cell pain (median 2 vs. median 3), and fewer days in the hospital (median 6.5 days vs. median 11 days). Subjects who received L-glutamine oral powder also had fewer occurrences of acute chest syndrome (a life-threatening complication of sickle cell disease) compared with patients who received a placebo (8.6 percent vs. 23.1 percent).
Common side effects of L-glutamine oral powder include constipation, nausea, headache, abdominal pain, cough, pain in the extremities, back pain and chest pain.
L-glutamine oral powder received orphan drug designation. The FDA granted the approval of Endari to Emmaus Medical Inc.
Medical food
Glutamine is marketed as medical food and is prescribed when a medical professional believes a person in their care needs supplementary glutamine due to metabolic demands beyond what can be met by endogenous synthesis or diet.
Safety
Glutamine is safe in adults and in preterm infants. Although glutamine is metabolized to glutamate and ammonia, both of which have neurological effects, their concentrations are not increased much, and no adverse neurological effects were detected. The observed safe level for supplemental L-glutamine in normal healthy adults is 14 g/day.
Adverse effects of glutamine have been described for people receiving home parenteral nutrition and those with liver-function abnormalities.
Although glutamine has no effect on the proliferation of tumor cells, it is still possible that glutamine supplementation may be detrimental in some cancer types.
Ceasing glutamine supplementation in people adapted to very high consumption may initiate a withdrawal effect, raising the risk of health problems such as infections or impaired integrity of the intestine.
Structure
Glutamine can exist in either of two enantiomeric forms, L-glutamine and D-glutamine. The L-form is found in nature. Glutamine contains an α-amino group which is in the protonated −NH3+ form under biological conditions and a carboxylic acid group which is in the deprotonated −COO− form, known as carboxylate, under physiological conditions.
Research
Glutamine mouthwash may be useful to prevent oral mucositis in people undergoing chemotherapy but intravenous glutamine does not appear useful to prevent mucositis in the GI tract.
Glutamine supplementation was thought to have potential to reduce complications in people who are critically ill or who have had abdominal surgery but this was based on poor quality clinical trials. Supplementation does not appear to be useful in adults or children with Crohn's disease or inflammatory bowel disease, but clinical studies as of 2016 were underpowered. Supplementation does not appear to have an effect in infants with significant problems of the stomach or intestines.
Some athletes use L-glutamine as a supplement. Studies support the positive effects of the chronic oral administration of the supplement on the injury and inflammation induced by intense aerobic and exhaustive exercise, but the effects on muscle recovery from weight training are unclear.
Stress conditions in plants (drought, injury, soil salinity) cause the synthesis of plant enzymes such as superoxide dismutase, L-ascorbate oxidase, and Delta 1 DNA polymerase. Limiting this process, which is initiated by conditions of strong soil salinity, can be achieved by administering exogenous glutamine to plants. The decrease in the expression of genes responsible for the synthesis of superoxide dismutase increases with increasing glutamine concentration.
| Biology and health sciences | Amino acids | Biology |
63550 | https://en.wikipedia.org/wiki/Arginine | Arginine | Arginine is the amino acid with the formula (H2N)(HN)CN(H)(CH2)3CH(NH2)CO2H. The molecule features a guanidino group appended to a standard amino acid framework. At physiological pH, the carboxylic acid is deprotonated (−CO2−) and both the amino and guanidino groups are protonated, resulting in a cation. Only the L-arginine (symbol Arg or R) enantiomer is found naturally. Arg residues are common components of proteins. It is encoded by the codons CGU, CGC, CGA, CGG, AGA, and AGG. The guanidine group in arginine is the precursor for the biosynthesis of nitric oxide. Like all amino acids, it is a white, water-soluble solid.
The one-letter symbol R was assigned to arginine for its phonetic similarity to the first syllable of the name ("aRginine").
History
Arginine was first isolated in 1886 from yellow lupin seedlings by the German chemist Ernst Schulze and his assistant Ernst Steiger. He named it from the Greek árgyros (ἄργυρος) meaning "silver" due to the silver-white appearance of arginine nitrate crystals. In 1897, Schulze and Ernst Winterstein (1865–1949) determined the structure of arginine. Schulze and Winterstein synthesized arginine from ornithine and cyanamide in 1899, but some doubts about arginine's structure lingered until Sørensen's synthesis of 1910.
Sources
Production
It is traditionally obtained by hydrolysis of various cheap sources of protein, such as gelatin. It is obtained commercially by fermentation. In this way, 25-35 g/liter can be produced, using glucose as a carbon source.
Dietary sources
Arginine is classified as a semiessential or conditionally essential amino acid, depending on the developmental stage and health status of the individual. Preterm infants are unable to synthesize arginine internally, making the amino acid nutritionally essential for them. Most healthy people do not need to supplement with arginine because it is a component of all protein-containing foods and can be synthesized in the body from glutamine via citrulline. Additional dietary arginine is necessary for otherwise healthy individuals temporarily under physiological stress, for example during recovery from burns, injury or sepsis, or if either of the major sites of arginine biosynthesis, the small intestine and the kidneys, has reduced function, because the small bowel carries out the first step of the synthesis and the kidneys the second.
Arginine is an essential amino acid for birds, as they do not have a urea cycle. For some carnivores, for example cats, dogs and ferrets, arginine is essential, because after a meal, their highly efficient protein catabolism produces large quantities of ammonia which need to be processed through the urea cycle, and if not enough arginine is present, the resulting ammonia toxicity can be lethal. This is not a problem in practice, because meat contains sufficient arginine to avoid this situation.
Animal sources of arginine include meat, dairy products, and eggs, and plant sources include seeds of all types, for example grains, beans, and nuts.
Biosynthesis
Arginine is synthesized from citrulline in the urea cycle by the sequential action of the cytosolic enzymes argininosuccinate synthetase and argininosuccinate lyase. This is an energetically costly process, because for each molecule of argininosuccinate that is synthesized, one molecule of adenosine triphosphate (ATP) is hydrolyzed to adenosine monophosphate (AMP), consuming two ATP equivalents.
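The two enzymatic steps and the ATP accounting described above can be sketched as follows (standard urea-cycle chemistry, summarised here for clarity):

    citrulline + aspartate + ATP --(argininosuccinate synthetase)--> argininosuccinate + AMP + PPi
    argininosuccinate --(argininosuccinate lyase)--> arginine + fumarate

Because the pyrophosphate (PPi) released in the first step is subsequently hydrolysed to two phosphates, the ATP → AMP conversion costs the cell two ATP equivalents, which is the energetic cost referred to above.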
The pathways linking arginine, glutamine, and proline are bidirectional. Thus, the net use or production of these amino acids is highly dependent on cell type and developmental stage.
Arginine is made by the body as follows. The epithelial cells of the small intestine produce citrulline, primarily from glutamine and glutamate, which is secreted into the bloodstream which carries it to the proximal tubule cells of the kidney, which extract the citrulline and convert it to arginine, which is returned to the blood. This means that impaired small bowel or renal function can reduce arginine synthesis and thus create a dietary requirement for arginine. For such a person, arginine would become "essential".
Synthesis of arginine from citrulline also occurs at a low level in many other cells, and cellular capacity for arginine synthesis can be markedly increased under circumstances that increase the production of inducible nitric oxide synthase (NOS). This allows citrulline, a byproduct of the NOS-catalyzed production of nitric oxide, to be recycled to arginine in a pathway known as the citrulline to nitric oxide (citrulline-NO) or arginine-citrulline pathway. This is demonstrated by the fact that, in many cell types, nitric oxide synthesis can be supported to some extent by citrulline, and not just by arginine. This recycling is not quantitative, however, because citrulline accumulates in nitric oxide producing cells along with nitrate and nitrite, the stable end-products of nitric oxide breakdown.
Function
Arginine plays an important role in cell division, wound healing, removing ammonia from the body, immune function, and the release of hormones. It is a precursor for the synthesis of nitric oxide (NO), making it important in the regulation of blood pressure. Arginine is necessary for T-cells to function in the body, and its depletion can lead to their dysregulation.
Proteins
Arginine's side chain is amphipathic, because at physiological pH it contains a positively charged guanidinium group, which is highly polar, at the end of a hydrophobic aliphatic hydrocarbon chain. Because globular proteins have hydrophobic interiors and hydrophilic surfaces, arginine is typically found on the outside of the protein, where the hydrophilic head group can interact with the polar environment, for example taking part in hydrogen bonding and salt bridges. For this reason, it is frequently found at the interface between two proteins. The aliphatic part of the side chain sometimes remains below the surface of the protein.
Arginine residues in proteins can be deiminated by PAD enzymes to form citrulline, in a post-translational modification process called citrullination. This is important in fetal development, is part of the normal immune process and the control of gene expression, and is also significant in autoimmune diseases. Another post-translational modification of arginine involves methylation by protein methyltransferases.
Precursor
Arginine is the immediate precursor of nitric oxide, an important signaling molecule which can act as a second messenger, as well as an intercellular messenger which regulates vasodilation, and also has functions in the immune system's reaction to infection.
Arginine is also a precursor for urea, ornithine, and agmatine; is necessary for the synthesis of creatine; and can also be used for the synthesis of polyamines (mainly through ornithine and to a lesser degree through agmatine, citrulline, and glutamate). The presence of asymmetric dimethylarginine (ADMA), a close relative, inhibits the nitric oxide reaction; therefore, ADMA is considered a marker for vascular disease, just as L-arginine is considered a sign of a healthy endothelium.
Structure
The amino acid side-chain of arginine consists of a 3-carbon aliphatic straight chain, the distal end of which is capped by a guanidinium group, which has a pKa of 13.8, and is therefore always protonated and positively charged at physiological pH. Because of the conjugation between the double bond and the nitrogen lone pairs, the positive charge is delocalized, enabling the formation of multiple hydrogen bonds.
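The claim that the guanidinium group is effectively always protonated follows from the Henderson–Hasselbalch relation; a minimal sketch, assuming a physiological pH of 7.4 (the pH value is an assumption, the pKa is the one quoted above):

    # fraction of arginine guanidinium groups protonated at an assumed physiological pH of 7.4
    pKa, pH = 13.8, 7.4
    ratio = 10 ** (pKa - pH)                    # [protonated]/[deprotonated] ≈ 2.5 million
    fraction_protonated = ratio / (1 + ratio)   # ≈ 0.9999996, i.e. essentially fully protonated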
Research
Growth hormone
Intravenously administered arginine is used in growth hormone stimulation tests because it stimulates the secretion of growth hormone. A review of clinical trials concluded that oral arginine increases resting growth hormone levels but blunts the rise in growth hormone secretion that is normally associated with exercise. However, a more recent trial reported that although oral arginine increased plasma levels of L-arginine, it did not cause an increase in growth hormone.
Herpes-Simplex Virus (Cold sores)
Research from 1964 into amino acid requirements of herpes simplex virus in human cells indicated that "...the lack of arginine or histidine, and possibly the presence of lysine, would interfere markedly with virus synthesis", but concludes that "no ready explanation is available for any of these observations".
Further reviews conclude that "lysine's efficacy for herpes labialis may lie more in prevention than treatment." and that "the use of lysine for decreasing the severity or duration of outbreaks" is not supported, while further research is needed. A 2017 study concludes that "clinicians could consider advising patients that there is a theoretical role of lysine supplementation in the prevention of herpes simplex sores but the research evidence is insufficient to back this. Patients with cardiovascular or gallbladder disease should be cautioned and warned of the theoretical risks."
High blood pressure
A meta-analysis showed that L-arginine reduces blood pressure with pooled estimates of 5.4 mmHg for systolic blood pressure and 2.7 mmHg for diastolic blood pressure.
Supplementation with L-arginine reduces diastolic blood pressure and lengthens pregnancy for women with gestational hypertension, including women with high blood pressure as part of pre-eclampsia. It did not lower systolic blood pressure or improve weight at birth.
Schizophrenia
Both liquid chromatography and liquid chromatography/mass spectrometric assays have found that brain tissue of deceased people with schizophrenia shows altered arginine metabolism. Assays also confirmed significantly reduced levels of γ-aminobutyric acid (GABA), but increased agmatine concentration and glutamate/GABA ratio in the schizophrenia cases. Regression analysis indicated positive correlations between arginase activity and the age of disease onset and between L-ornithine level and the duration of illness. Moreover, cluster analyses revealed that L-arginine and its main metabolites L-citrulline, L-ornithine and agmatine formed distinct groups, which were altered in the schizophrenia group. Despite this, the biological basis of schizophrenia is still poorly understood; a number of factors, such as dopamine hyperfunction, glutamatergic hypofunction, GABAergic deficits, cholinergic system dysfunction, stress vulnerability and neurodevelopmental disruption, have been linked to the aetiology and/or pathophysiology of the disease.
Raynaud's phenomenon
Oral L-arginine has been shown to reverse digital necrosis in Raynaud's phenomenon.
Safety and potential drug interactions
L-arginine is recognized as safe (GRAS-status) at intakes of up to 20 grams per day. L-arginine is found in many foods, such as fish, poultry, and dairy products, and is used as a dietary supplement. It may interact with various prescription drugs and herbal supplements.
| Biology and health sciences | Amino acids | Biology |
63551 | https://en.wikipedia.org/wiki/Serine | Serine | Serine (symbol Ser or S) is an α-amino acid that is used in the biosynthesis of proteins. It contains an α-amino group (which is in the protonated −NH3+ form under biological conditions), a carboxyl group (which is in the deprotonated −COO− form under biological conditions), and a side chain consisting of a hydroxymethyl group, classifying it as a polar amino acid. It can be synthesized in the human body under normal physiological circumstances, making it a nonessential amino acid. It is encoded by the codons UCU, UCC, UCA, UCG, AGU and AGC.
Occurrence
This compound is one of the proteinogenic amino acids. Only the L-stereoisomer appears naturally in proteins. It is not essential to the human diet, since it is synthesized in the body from other metabolites, including glycine. Serine was first obtained from silk protein, a particularly rich source, in 1865 by Emil Cramer. Its name is derived from the Latin for silk, sericum. Serine's structure was established in 1902.
Biosynthesis
The biosynthesis of serine starts with the oxidation of 3-phosphoglycerate (an intermediate from glycolysis) to 3-phosphohydroxypyruvate and NADH by phosphoglycerate dehydrogenase (EC 1.1.1.95). Reductive amination (transamination) of this ketone by phosphoserine transaminase (EC 2.6.1.52) yields 3-phosphoserine (O-phosphoserine), which is hydrolyzed to serine by phosphoserine phosphatase (EC 3.1.3.3).
In bacteria such as E. coli these enzymes are encoded by the genes serA (EC 1.1.1.95), serC (EC 2.6.1.52), and serB (EC 3.1.3.3).
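In outline, the phosphorylated pathway described above is (the glutamate/α-ketoglutarate pair shown for the transamination step is the usual textbook amino donor and is not stated in the article):

    3-phosphoglycerate --(phosphoglycerate dehydrogenase, NAD+ → NADH)--> 3-phosphohydroxypyruvate --(phosphoserine transaminase, glutamate → α-ketoglutarate)--> 3-phosphoserine --(phosphoserine phosphatase, H2O → Pi)--> serine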
Serine hydroxymethyltransferase (SHMT) also catalyzes the biosynthesis of glycine (retro-aldol cleavage) from serine, transferring the resulting formaldehyde synthon to 5,6,7,8-tetrahydrofolate. However, that reaction is reversible, and will convert excess glycine to serine. SHMT is a pyridoxal phosphate (PLP)-dependent enzyme.
Synthesis and reactions
Industrially, L-serine is produced from glycine and methanol catalyzed by hydroxymethyltransferase.
Racemic serine can be prepared in the laboratory from methyl acrylate in several steps.
Hydrogenation of serine gives the diol serinol.
Biological function
Metabolic
Serine is important in metabolism in that it participates in the biosynthesis of purines and pyrimidines. It is the precursor to several amino acids including glycine and cysteine, as well as tryptophan in bacteria. It is also the precursor to numerous other metabolites, including sphingolipids and folate, which is the principal donor of one-carbon fragments in biosynthesis.
Signaling
D-Serine, synthesized in neurons by serine racemase from L-serine (its enantiomer), serves as a neuromodulator by coactivating NMDA receptors, making them able to open if they then also bind glutamate. D-serine is a potent agonist at the glycine site (NR1) of canonical diheteromeric NMDA receptors. For the receptor to open, glutamate and either glycine or D-serine must bind to it; in addition a pore blocker must not be bound (e.g. Mg2+ or Zn2+). Some research has shown that D-serine is a more potent agonist at the NMDAR glycine site than glycine itself. However, D-serine has been shown to work as an antagonist/inverse co-agonist of t-NMDA receptors through the glycine binding site on the GluN3 subunit.
Ligands
D-serine was thought to exist only in bacteria until relatively recently; it was the second D amino acid discovered to naturally exist in humans, present as a signaling molecule in the brain, soon after the discovery of D-aspartate. Had D amino acids been discovered in humans sooner, the glycine site on the NMDA receptor might instead be named the D-serine site. Apart from central nervous system, D-serine plays a signaling role in peripheral tissues and organs such as cartilage, kidney, and corpus cavernosum.
Gustatory sensation
Pure D-serine is an off-white crystalline powder with a very faint musty aroma. D-Serine is sweet with an additional minor sour taste at medium and high concentrations.
Clinical significance
Serine deficiency disorders are rare defects in the biosynthesis of the amino acid L-serine. At present three disorders have been reported:
3-phosphoglycerate dehydrogenase deficiency
3-phosphoserine phosphatase deficiency
Phosphoserine aminotransferase deficiency
These enzyme defects lead to severe neurological symptoms such as congenital microcephaly and severe psychomotor retardation and in addition, in patients with 3-phosphoglycerate dehydrogenase deficiency to intractable seizures. These symptoms respond to a variable degree to treatment with L-serine, sometimes combined with glycine.
Response to treatment is variable, and the long-term and functional outcomes are unknown. To provide a basis for improving the understanding of the epidemiology, genotype/phenotype correlation and outcome of these diseases and their impact on the quality of life of patients, as well as for evaluating diagnostic and therapeutic strategies, a patient registry was established by the noncommercial International Working Group on Neurotransmitter Related Disorders (iNTD).
Besides disruption of serine biosynthesis, its transport may also become disrupted. One example is spastic tetraplegia, thin corpus callosum, and progressive microcephaly, a disease caused by mutations that affect the function of the neutral amino acid transporter A.
Research for therapeutic use
The classification of L-serine as a non-essential amino acid has come to be considered as conditional, since vertebrates such as humans cannot always synthesize optimal quantities over entire lifespans. Safety of L-serine has been demonstrated in an FDA-approved human phase I clinical trial with Amyotrophic Lateral Sclerosis, ALS, patients (ClinicalTrials.gov identifier: NCT01835782), but treatment of ALS symptoms has yet to be shown. A 2011 meta-analysis found adjunctive sarcosine to have a medium effect size for negative and total symptoms of schizophrenia. There also is evidence that L‐serine could acquire a therapeutic role in diabetes.
D-Serine is being studied in rodents as a potential treatment for schizophrenia. D-Serine also has been described as a potential biomarker for early Alzheimer's disease (AD) diagnosis, due to a relatively high concentration of it in the cerebrospinal fluid of probable AD patients. D-serine, which is made in the brain, has been shown to work as an antagonist/inverse co-agonist of t-NMDA receptors mitigating neuron loss in an animal model of temporal lobe epilepsy.
D-Serine has been theorized as a potential treatment for sensorineural hearing disorders such as hearing loss and tinnitus.
| Biology and health sciences | Amino acids | Biology |
63553 | https://en.wikipedia.org/wiki/Threonine | Threonine | Threonine (symbol Thr or T) is an amino acid that is used in the biosynthesis of proteins. It contains an α-amino group (which is in the protonated −NH3+ form when dissolved in water), a carboxyl group (which is in the deprotonated −COO− form when dissolved in water), and a side chain containing a hydroxyl group, making it a polar, uncharged amino acid. It is essential in humans, meaning the body cannot synthesize it: it must be obtained from the diet. Threonine is synthesized from aspartate in bacteria such as E. coli. It is encoded by all the codons starting AC (ACU, ACC, ACA, and ACG).
Threonine sidechains are often hydrogen bonded; the most common small motifs formed are based on interactions with serine: ST turns, ST motifs (often at the beginning of alpha helices) and ST staples (usually at the middle of alpha helices).
Modifications
The threonine residue is susceptible to numerous posttranslational modifications. The hydroxyl side-chain can undergo O-linked glycosylation. In addition, threonine residues undergo phosphorylation through the action of a threonine kinase. In its phosphorylated form, it can be referred to as phosphothreonine. Phosphothreonine has three potential coordination sites (carboxyl, amine and phosphate group) and determination of the mode of coordination between phosphorylated ligands and metal ions occurring in an organism is important to explain the function of the phosphothreonine in biological processes.
History
Threonine was the last of the 20 common proteinogenic amino acids to be discovered. It was discovered in 1935 by William Cumming Rose, collaborating with Curtis Meyer. The amino acid was named threonine because it was similar in structure to threonic acid, a four-carbon monosaccharide with molecular formula C4H8O5.
Stereoisomers
Threonine is one of two proteinogenic amino acids with two stereogenic centers, the other being isoleucine. Threonine can exist in four possible stereoisomers with the following configurations: (2S,3R), (2R,3S), (2S,3S) and (2R,3R). However, the name L-threonine is used for one single stereoisomer, (2S,3R)-2-amino-3-hydroxybutanoic acid. The stereoisomer (2S,3S), which is rarely present in nature, is called L-allothreonine.
Biosynthesis
As an essential amino acid, threonine is not synthesized in humans, and needs to be present in proteins in the diet. Adult humans require about 20 mg/kg body weight/day. In plants and microorganisms, threonine is synthesized from aspartic acid via α-aspartyl-semialdehyde and homoserine. Homoserine undergoes O-phosphorylation; this phosphate ester undergoes hydrolysis concomitant with relocation of the OH group. Enzymes involved in a typical biosynthesis of threonine include:
aspartokinase
β-aspartate semialdehyde dehydrogenase
homoserine dehydrogenase
homoserine kinase
threonine synthase.
Metabolism
Threonine is metabolized in at least three ways:
In many animals it is converted to pyruvate via threonine dehydrogenase. An intermediate in this pathway can undergo thiolysis with CoA to produce acetyl-CoA and glycine.
In humans the gene for threonine dehydrogenase is an inactive pseudogene, so threonine is converted to α-ketobutyrate. The mechanism of the first step is analogous to that catalyzed by serine dehydratase, and the serine and threonine dehydratase reactions are probably catalyzed by the same enzyme.
In many organisms it is O-phosphorylated by a kinase preparatory to further metabolism. This is especially important in bacteria as part of the biosynthesis of cobalamin (Vitamin B12), as the product is converted to (R)-1-aminopropan-2-ol for incorporation into the vitamin's sidechain.
Threonine is used to synthesize glycine during the endogenous production of L-carnitine in the brain and liver of rats.
Metabolic diseases
The degradation of threonine is impaired in the following metabolic diseases:
Combined malonic and methylmalonic aciduria (CMAMMA)
Methylmalonic acidemia
Propionic acidemia
Research on threonine as a dietary supplement in animals
Effects of threonine dietary supplementation have been researched in broilers.
An essential amino acid, threonine is involved in fat metabolism, protein synthesis, the proliferation and differentiation of embryonic stem cells, and the health and function of the intestines. Threonine requirements and metabolism are closely linked to animal health and disease. Appropriate amounts of dietary threonine may alleviate intestinal inflammation and energy metabolism disorders in animals. Nevertheless, because these effects relate to the regulation of nutrient metabolism, more research is required to confirm the results in various animal models. Furthermore, more research is needed to understand how threonine regulates the dynamic equilibrium of intestinal barrier function, the immune response and the gut flora.
Sources
Foods high in threonine include cottage cheese, poultry, fish, meat, lentils, black turtle beans and sesame seeds.
Racemic threonine can be prepared from crotonic acid by alpha-functionalization using mercury(II) acetate.
| Biology and health sciences | Amino acids | Biology |
63554 | https://en.wikipedia.org/wiki/Valine | Valine | Valine (symbol Val or V) is an α-amino acid that is used in the biosynthesis of proteins. It contains an α-amino group (which is in the protonated −NH3+ form under biological conditions), an α-carboxylic acid group (which is in the deprotonated −COO− form under biological conditions), and a side chain isopropyl group, making it a non-polar aliphatic amino acid. Valine is essential in humans, meaning the body cannot synthesize it; it must be obtained from dietary sources which are foods that contain proteins, such as meats, dairy products, soy products, beans and legumes. It is encoded by all codons starting with GU (GUU, GUC, GUA, and GUG).
History and etymology
Valine was first isolated from casein in 1901 by Hermann Emil Fischer. The name valine comes from its structural similarity to valeric acid, which in turn is named after the plant valerian due to the presence of the acid in the roots of the plant.
Nomenclature
According to IUPAC, carbon atoms forming valine are numbered sequentially starting from 1 denoting the carboxyl carbon, whereas 4 and 4' denote the two terminal methyl carbons.
Metabolism
Source and biosynthesis
Valine, like other branched-chain amino acids, is synthesized by bacteria and plants, but not by animals. It is therefore an essential amino acid in animals, and needs to be present in the diet. Adult humans require about 24 mg/kg body weight daily. It is synthesized in plants and bacteria via several steps starting from pyruvic acid. The initial part of the pathway also leads to leucine. The intermediate α-ketoisovalerate undergoes reductive amination with glutamate. Enzymes involved in this biosynthesis include:
Acetolactate synthase (also known as acetohydroxy acid synthase)
Acetohydroxy acid isomeroreductase
Dihydroxyacid dehydratase
Valine aminotransferase
Degradation
Like other branched-chain amino acids, the catabolism of valine starts with the removal of the amino group by transamination, giving alpha-ketoisovalerate, an alpha-keto acid, which is converted to isobutyryl-CoA through oxidative decarboxylation by the branched-chain α-ketoacid dehydrogenase complex. This is further oxidised and rearranged to succinyl-CoA, which can enter the citric acid cycle and provide direct fuel in muscle tissue.
Synthesis
Racemic valine can be synthesized by bromination of isovaleric acid followed by amination of the α-bromo derivative.
HO2CCH2CH(CH3)2 + Br2 → HO2CCHBrCH(CH3)2 + HBr
HO2CCHBrCH(CH3)2 + 2 NH3 → HO2CCH(NH2)CH(CH3)2 + NH4Br
Medical significance
Metabolic diseases
The degradation of valine is impaired in the following metabolic diseases:
Combined malonic and methylmalonic aciduria (CMAMMA)
Maple syrup urine disease (MSUD)
Methylmalonic acidemia
Propionic acidemia
Insulin resistance
Lower levels of serum valine, like other branched-chain amino acids, are associated with weight loss and decreased insulin resistance: higher levels of valine are observed in the blood of diabetic mice, rats, and humans. Mice fed a BCAA-deprived diet for one day had improved insulin sensitivity, and feeding of a valine-deprived diet for one week significantly decreases blood glucose levels. In diet-induced obese and insulin resistant mice, a diet with decreased levels of valine and the other branched-chain amino acids resulted in a rapid reversal of the adiposity and an improvement in glucose-level control. The valine catabolite 3-hydroxyisobutyrate promotes insulin resistance in mice by stimulating fatty acid uptake into muscle and lipid accumulation. In mice, a BCAA-restricted diet decreased fasting blood glucose levels and improved body composition.
Hematopoietic stem cells
Dietary valine is essential for hematopoietic stem cell (HSC) self-renewal, as demonstrated by experiments in mice. Dietary valine restriction selectively depletes long-term repopulating HSC in mouse bone marrow. Successful stem cell transplantation was achieved in mice without irradiation after 3 weeks on a valine restricted diet. Long-term survival of the transplanted mice was achieved when valine was returned to the diet gradually over a 2-week period to avoid refeeding syndrome.
| Biology and health sciences | Amino acids | Biology |
63564 | https://en.wikipedia.org/wiki/Hematology | Hematology | Hematology (spelled haematology in British English) is the branch of medicine concerned with the study of the cause, prognosis, treatment, and prevention of diseases related to blood. It involves treating diseases that affect the production of blood and its components, such as blood cells, hemoglobin, blood proteins, bone marrow, platelets, blood vessels, spleen, and the mechanism of coagulation. Such diseases might include hemophilia, sickle cell anemia, blood clots (thrombus), other bleeding disorders, and blood cancers such as leukemia, multiple myeloma, and lymphoma. The laboratory analysis of blood is frequently performed by a medical technologist or medical laboratory scientist.
Specialization
Physicians specialized in hematology are known as hematologists or haematologists. Their routine work mainly includes the care and treatment of patients with hematological diseases, although some may also work at the hematology laboratory viewing blood films and bone marrow slides under the microscope, interpreting various hematological test results and blood clotting test results. In some institutions, hematologists also manage the hematology laboratory. Physicians who work in hematology laboratories, and most commonly manage them, are pathologists specialized in the diagnosis of hematological diseases, referred to as hematopathologists or haematopathologists. Hematologists and hematopathologists generally work in conjunction to formulate a diagnosis and deliver the most appropriate therapy if needed. Hematology is a distinct subspecialty of internal medicine, separate from but overlapping with the subspecialty of medical oncology. Hematologists may specialize further or have special interests, for example, in:
treating bleeding disorders such as hemophilia and idiopathic thrombocytopenic purpura, the latter of which continues to be studied by hematologists because its cause remains unknown.
treating hematological malignancies such as lymphoma and leukemia (cancers)
treating hemoglobinopathies, including α-thalassemias and β-thalassemias (thalassemia syndromes) and hemoglobin S, hemoglobin C, and hemoglobin E (abnormal hemoglobins).
the science of blood transfusion and the work of a blood bank, known as transfusion medicine
bone marrow and stem cell transplantation, especially with the use of technologies to extract and isolate hematopoietic progenitor cells (HPCs).
Training
Starting hematologists (in the US) complete a four-year medical degree followed by three or four more years in residency or internship programs. After completion, they spend two or three more years in fellowship training, learning how to research, diagnose, and treat blood disorders. Some exposure to hematopathology is typically included in their fellowship training. Job openings for hematologists require training in a recognized fellowship program to learn to diagnose and treat numerous blood-related benign conditions and blood cancers. Hematologists typically work across specialties to care for patients with complex illnesses, such as sickle cell disease, who require complex, multidisciplinary care, and to provide consultation on cases of disseminated intravascular coagulation, thrombosis and other conditions that can occur in hospitalized patients.
| Biology and health sciences | Fields of medicine | null |
63570 | https://en.wikipedia.org/wiki/Grouse | Grouse | Grouse are a group of birds from the order Galliformes, in the family Phasianidae. Grouse are presently assigned to the tribe Tetraonini (formerly the subfamily Tetraoninae and the family Tetraonidae), a classification supported by mitochondrial DNA sequence studies, and applied by the American Ornithologists' Union, ITIS, International Ornithological Congress, and others.
Grouse inhabit temperate and subarctic regions of the Northern Hemisphere, from pine forests to moorland and mountainside, from 83°N (rock ptarmigan in northern Greenland) to 28°N (Attwater's prairie chicken in Texas).
The turkeys are closely allied with grouse, but they have traditionally been excluded from Tetraonini, often placed in their own tribe, subfamily, or family; certain more modern treatments also exclude them. Later phylogenomic analyses demonstrated conclusively that they are sister to the traditionally-defined grouse, and they, along with the somewhat earlier-diverging koklass pheasant, may be treated as grouse (i.e., as basal members of the Tetraonini). This is reflected in some more recent circumscriptions.
Description
Like many other galliforms, grouse are generally heavily-built birds. The traditional grouse (excluding turkeys) range in length from , and in weight from . If they are included, wild turkey toms are the largest grouse species, attaining lengths of 130 cm (50 in) and weighing up to 10 kg (22 lb). Male grouse are larger than females, and can be twice as heavy in the western capercaillie (the largest of the traditional grouse). Like many other galliforms, males often sport incredibly elaborate ornamentation, such as crests, fan-tails, and inflatable, brightly colored patches of bare skin. Many grouse have feathered nostrils, and some species, such as the ptarmigans, have legs which are entirely covered in feathers; in winter the toes, too, have feathers or small scales on the sides, an adaptation for walking on snow and burrowing into it for shelter. Unlike many other galliforms, they typically have no spurs, although turkeys do possess very prominent spurs.
Feeding and habits
Grouse feed mainly on vegetation—buds, catkins, leaves, and twigs—which typically accounts for over 95% of adults' food by weight. Thus, their diets vary greatly with the seasons. Hatchlings eat mostly insects and other invertebrates, gradually reducing their proportion of animal food to adult levels. Several of the forest-living species are notable for eating large quantities of conifer needles, which most other vertebrates refuse. To digest vegetable food, grouse have big crops and gizzards, eat grit to break up food, and have long intestines with well-developed caeca in which symbiotic bacteria digest cellulose.
Forest species flock only in autumn and winter, though individuals tolerate each other when they meet. Prairie species are more social, and tundra species (ptarmigans, Lagopus) are the most social, forming flocks of up to 100 in winter. All grouse spend most of their time on the ground, though when alarmed, they may take off in a flurry and go into a long glide.
Most species stay within their breeding range all year, but make short seasonal movements; many individuals of the ptarmigan (called rock ptarmigan in the US) and willow grouse (called willow ptarmigan in the US) migrate hundreds of kilometers.
Reproduction
In all but one species (the willow ptarmigan), males are polygamous. Many species have elaborate courtship displays on the ground at dawn and dusk, which in some are given in leks. The displays feature males' brightly colored combs and in some species, brightly colored inflatable sacs on the sides of their necks. The males display their plumage, give vocalizations that vary widely between species, and may engage in other activities, such as drumming or fluttering their wings, rattling their tails, and making display flights. Occasionally, males fight.
The nest is a shallow depression or scrape on the ground—often in cover—with a scanty lining of plant material. The female lays one clutch, but may replace it if the eggs are lost. She begins to lay about a week after mating and lays one egg every day or two; the clutch comprises five to 12 eggs. The eggs have the shape of hen's eggs and are pale yellow, sparsely spotted with brown. On laying the second-last or last egg, the female starts 21 to 28 days of incubation. Chicks hatch in dense, yellow-brown down and leave the nest immediately. They soon develop feathers and can fly shortly before they are two weeks old. The female (and the male in the willow grouse) stays with them and protects them until their first autumn, when they reach their mature weights (except in the male capercaillies). They are sexually mature the following spring, but often do not mate until later years.
Populations
Grouse make up a considerable part of the vertebrate biomass in the Arctic and Subarctic. Their numbers may fall sharply in years of bad weather or high predator populations—significant grouse populations are a major food source for lynx, foxes, martens, and birds of prey.
The three tundra species have maintained their former numbers. The prairie and forest species have declined greatly because of habitat loss, though popular game birds such as the red grouse and the ruffed grouse have benefited from habitat management. Most grouse species are listed by the IUCN as "least concern" or "near threatened", but the greater and lesser prairie chicken are listed as "vulnerable" and the Gunnison grouse is listed as "endangered". Some subspecies, such as Attwater's prairie chicken and the Cantabrian capercaillie, and some national and regional populations are also in danger. The wild turkey precipitously declined before returning to abundance, even in developed areas.
Sexual size dimorphism
Male size selection
The phenotypic difference between males and females is called sexual dimorphism. Male grouse tend to be larger than female grouse across all grouse species, though the magnitude of the size difference varies between species. The hypothesis with the most supporting evidence for the evolution of sexual dimorphism in grouse is sexual selection. Sexual selection favors large males; stronger selection for larger size in males leads to greater size dimorphism. Female size increases correspondingly as male size increases, owing to heredity, but not to the same extent, because smaller females can still reproduce without a substantial disadvantage, whereas smaller males cannot. The largest male grouse attract the greatest numbers of females during the mating season.
Mating behavior selection
Male grouse display lekking behavior, in which many males come together in one area and put on displays to attract females. Females selectively choose among the males present for traits they find more appealing. Male grouse exhibit two types of lekking: typical lekking and exploded lekking. In typical lekking, males display in small areas and defend limited territories; in exploded lekking, displaying males are spread over an expansive area and hold larger territories. Male grouse can also compete with one another for access to female grouse through territoriality, in which a male defends a territory that has resources females need, such as food and nest sites. These differences in male mating behavior help account for the evolution of body size in grouse. Males of territorial species were smaller than those of exploded lekking species, and males of typical lekking species were the largest overall. The male birds that exhibit lekking behavior, and have to compete with other males to be chosen by females, show greater sexual size dimorphism. This supports the hypothesis that sexual selection affects male body size and explains why some species of grouse show a more drastic difference between male and female body size than others.
Contrast with other bird species
Sexual size dimorphism can manifest itself differently between grouse and other birds. In some cases, the female is dominant over the male in breeding behavior, which can result in females that are larger than the males.
In culture
Grouse are game, and hunters kill millions each year for food, sport, and other uses. In the United Kingdom, this takes the form of driven grouse shooting. The male black grouse's tail feathers are a traditional ornament for hats in areas such as Scotland and the Alps. Folk dances from the Alps to the North American prairies imitate the displays of lekking males.
Species
Extant genera
Extinct genera
Genus †Proagriocharis
Proagriocharis kimballensis
Genus †Rhegminornis
Rhegminornis calobates
| Biology and health sciences | Galliformes | null |
63572 | https://en.wikipedia.org/wiki/Old%20World%20sparrow | Old World sparrow | Old World sparrows are a group of small passerine birds forming the family Passeridae. They are also known as true sparrows, a name also used for a particular genus of the family, Passer. They are distinct from both the New World sparrows, in the family Passerellidae, and from a few other birds sharing their name, such as the Java sparrow of the family Estrildidae. Many species nest on buildings and the house and Eurasian tree sparrows, in particular, inhabit cities in large numbers. They are primarily seed-eaters, though they also consume small insects. Some species scavenge for food around cities and, like pigeons or gulls, will eat small quantities of a diversity of items.
Description
Generally, Old World sparrows are small, plump, brown and grey birds with short tails and stubby, powerful beaks. The differences between sparrow species can be subtle. Members of this family range in size from the chestnut sparrow (Passer eminibey), at and , to the parrot-billed sparrow (Passer gongonensis), at and . Sparrows are physically similar to other seed-eating birds, such as finches, but have a vestigial dorsal outer primary wing feather and an extra bone in the tongue. This bone, the preglossale, helps stiffen the tongue when holding seeds. Other adaptations for eating seeds are specialised bills and elongated and specialised alimentary canals.
Taxonomy and systematics
The family Passeridae was introduced (as Passernia) by the French polymath Constantine Samuel Rafinesque in 1815. Under the classification used in the Handbook of the Birds of the World (HBW), the main groupings of the sparrows are the true sparrows (genus Passer), the snowfinches (typically one genus, Montifringilla), and the rock sparrows (Petronia and the pale rockfinch). These groups are similar to each other, and are each fairly homogeneous, especially Passer. Some classifications also include the sparrow-weavers (Plocepasser) and several other African genera (otherwise classified among the weavers, Ploceidae) which are morphologically similar to Passer. According to a study of molecular and skeletal evidence by Jon Fjeldså and colleagues, the cinnamon ibon of the Philippines, previously considered to be a white-eye, is a sister taxon to the sparrows as defined by the HBW. They therefore classify it as its own subfamily within Passeridae.
Many early classifications of the Old World sparrows placed them as close relatives of the weavers among the various families of small seed-eating birds, based on the similarity of their breeding behaviour, bill structure, and moult, among other characters. Some, starting with P. P. Suskin in the 1920s, placed the sparrows in the weaver family as the subfamily Passerinae, and tied them to Plocepasser. Another family sparrows were classed with was the finches (Fringillidae).
Some authorities previously classified the related estrildid finches of the Old World tropics and Australasia as members of the Passeridae. Like sparrows, the estrildid finches are small, gregarious and often colonial seed-eaters with short, thick, but pointed bills. They are broadly similar in structure and habits, but tend to be very colourful and vary greatly in their plumage. The 2008 Christidis and Boles taxonomic scheme lists the estrildid finches as the separate family Estrildidae, leaving just the true sparrows in Passeridae.
Despite some resemblance such as the seed-eater's bill and frequently well-marked heads, New World sparrows are members of a different family, Passerellidae, with 29 genera recognised. Several species in this family are notable singers. New World sparrows are related to Old World buntings, and until 2017, were included in the Old World bunting family Emberizidae.
The hedge sparrow or dunnock (Prunella modularis) is similarly unrelated. It is a sparrow in name only, a relict of the old practice of calling more types of small birds "sparrows". A few further bird species are also called sparrows, such as the Java sparrow, an estrildid finch.
Species
The family contains 43 species divided into eight genera:
Distribution and habitat
The Old World sparrows are indigenous to Europe, Africa and Asia. In the Americas, Australia, and other parts of the world, settlers imported some species which quickly naturalised, particularly in urban and degraded areas. House sparrows, for example, are now found throughout North America, Australia (every state except Western Australia), parts of southern and eastern Africa, and over much of the heavily populated parts of South America.
The Old World sparrows are generally birds of open habitats, including grasslands, deserts, and scrubland. The snowfinches and ground-sparrows are all species of high latitudes. A few species, like the Eurasian tree sparrow, inhabit open woodland. The aberrant cinnamon ibon has the most unusual habitat of the family, inhabiting the canopy of cloud forest in the Philippines.
Behaviour and ecology
Old World sparrows are generally social birds, with many species breeding in loose colonies and most species occurring in flocks during the non-breeding season. The great sparrow is an exception, breeding in solitary pairs and remaining only in small family groups in the non-breeding season. They form large roosting aggregations in the non-breeding seasons that contain only a single species (in contrast to multi-species flocks that might gather for foraging). Sites are chosen for cover and include trees, thick bushes and reed beds. The assemblages can be quite large with up to 10,000 house sparrows counted in one roost in Egypt.
The Old World sparrows are some of the few passerine birds that engage in dust bathing. They will first scratch a hole in the ground with their feet, then lie in it and fling dirt or sand over their bodies with flicks of their wings. They will also bathe in water, or in dry or melting snow. Water bathing is similar to dust bathing, with the sparrow standing in shallow water and flicking water over its back with its wings, also ducking its head under the water. Both activities are social, with up to a hundred birds participating at once, and is followed by preening and sometimes group singing.
Eggs
The house sparrow typically lays 3–6 eggs, but has been known to lay as few as 1 and as many as 8 greenish-white eggs. The incubation period is typically 10–14 days.
Relationships with humans
Old World sparrows may be the most familiar of all wild birds worldwide. Many species commonly live in agricultural areas, and for several, human settlements are a primary habitat. The Eurasian tree and house sparrows are particularly specialised in living around humans and inhabit cities in large numbers. 17 of the 26 species recognised by the Handbook of the Birds of the World are known to nest on and feed around buildings.
Grain-eating species, in particular the house and Sudan golden sparrows, can be significant agricultural pests. They can be beneficial to humans as well, especially by eating insect pests. Attempts at large-scale control have failed to affect populations significantly, or have been accompanied by major increases in insect attacks, probably resulting from the reduction in sparrow numbers, as in the Great Sparrow Campaign in 1950s China.
Because of their familiarity, the house sparrow and other species of the family are frequently used to represent the common and vulgar, or the lewd. Birds usually described later as Old World sparrows are referred to in many works of ancient literature and religious texts in Europe and western Asia. These references may not always refer specifically to Old World sparrows, or even to small, seed-eating birds, but later writers who were inspired by these texts often had the house sparrow and other members of the family in mind. In particular, Old World sparrows were associated by the ancient Greeks with Aphrodite, the goddess of love, due to their perceived lustfulness, an association echoed by later writers such as Chaucer and Shakespeare.
Jesus's use of "sparrows" as an example of divine providence in the Gospel of Matthew also inspired later references, such as that in the final scene of Shakespeare's Hamlet and the Gospel hymn "His Eye Is on the Sparrow".
Sparrows are represented in ancient Egyptian art very rarely, but an Egyptian hieroglyph G37 is based on the house sparrow. The symbol had no phonetic value and was used as a determinative in words to indicate small, narrow, or bad.
Old World sparrows have been kept as pets at many times in history, even though most are not particularly colourful and their songs are unremarkable. They are also difficult to keep, as pet sparrows must be raised by hand and a considerable amount of insects are required to feed them. Nevertheless, many people succeed at hand-raising orphaned or abandoned baby sparrows.
The earliest mentions of pet sparrows are from the Romans. Not all the passeri mentioned, often as pets, in Roman literature were necessarily sparrows, but some accounts of them clearly describe their appearance and habits. The pet passer of Lesbia in Catullus's poems may not have been a sparrow, but a thrush or European goldfinch. John Skelton's The Boke of Phyllyp Sparowe is a lament for a pet house sparrow belonging to a Jane Scrope, narrated by Scrope.
| Biology and health sciences | Passerida | null |
63577 | https://en.wikipedia.org/wiki/Cashew | Cashew | Cashew is the common name of a tropical evergreen tree Anacardium occidentale, in the family Anacardiaceae. It is native to South America and is the source of the cashew nut and the cashew apple, an accessory fruit. The tree can grow as tall as , but the dwarf cultivars, growing up to , prove more profitable, with earlier maturity and greater yields. The cashew nut is edible and is eaten on its own as a snack, used in recipes, or processed into cashew cheese or cashew butter. The nut is often simply called a 'cashew'.
In 2019, four million tonnes of cashew nuts were produced globally, with Ivory Coast and India the leading producers. As well as the nut and fruit, the plant has several other uses. The shell of the cashew seed yields derivatives that can be used in many applications including lubricants, waterproofing, paints, and, starting in World War II, arms production. The cashew apple is a light reddish to yellow fruit, whose pulp and juice can be processed into a sweet, astringent fruit drink or fermented and distilled into liquor.
Description
The cashew tree is large and evergreen, growing to tall, with a short, often irregularly shaped trunk. The leaves are spirally arranged, leathery textured, elliptic to obovate, long and broad, with smooth margins. The flowers are produced in a panicle or corymb up to long; each flower is small, pale green at first, then turning reddish, with five slender, acute petals long. The largest cashew tree in the world covers an area around and is located in Natal, Brazil.
The fruit of the cashew tree is an accessory fruit (sometimes called a pseudocarp or false fruit). What appears to be the fruit is an oval or pear-shaped structure, a hypocarpium, that develops from the pedicel and the receptacle of the cashew flower. Called the cashew apple, better known in Central America as , it ripens into a yellow or red structure about long.
The true fruit of the cashew tree is a kidney-shaped or boxing glove-shaped drupe that grows at the end of the cashew apple. The drupe first develops on the tree and then the pedicel expands to become the cashew apple. The drupe becomes the true fruit, a single shell-encased seed, which is often considered a nut in the culinary sense. The seed is surrounded by a double shell that contains an allergenic phenolic resin, anacardic acid—which is a potent skin irritant chemically related to the better-known and also toxic allergenic oil urushiol, which is found in the related poison ivy and lacquer tree.
Etymology
The English name derives from the Portuguese name for the fruit of the cashew tree: caju, also known as acaju, which itself is from the Tupi word acajú, literally meaning "nut that produces itself".
The generic name Anacardium is composed of the Greek prefix ana- ("up, upward"), the Greek kardia ("heart"), and the Neo-Latin suffix -ium. It possibly refers to the heart shape of the fruit, to "the top of the fruit stem" or to the seed. The word anacardium was earlier used to refer to Semecarpus anacardium (the marking nut tree) before Carl Linnaeus transferred it to the cashew; both plants are in the same family. The epithet occidentale derives from the Western (or Occidental) world.
The plant has diverse common names in the various languages across its wide distribution range, including in French and Portuguese.
Distribution and habitat
The species is native to tropical South America and later was distributed around the world in the 1500s by Portuguese explorers. Portuguese colonists in Brazil began exporting cashew nuts as early as the 1550s. The Portuguese took it to Goa, formerly Estado da Índia Portuguesa in India, between 1560 and 1565. From there, it spread throughout Southeast Asia and eventually Africa.
Cultivation
The cashew tree is cultivated in the tropics between 25°N and 25°S, and is well-adapted to hot lowland areas with a pronounced dry season, where the mango and tamarind trees also thrive. The traditional cashew tree is tall (up to ) and takes three years from planting before it starts production, and eight years before economic harvests can begin.
More recent breeds, such as the dwarf cashew trees, are up to tall and start producing after the first year, with economic yields after three years. The cashew nut yields for the traditional tree are about , in contrast to over a ton per hectare for the dwarf variety. Grafting and other modern tree management technologies are used to further improve and sustain cashew nut yields in commercial orchards.
Production
In 2021, global production of cashew nuts (as the kernel) was 3.7 million tonnes, led by Ivory Coast and India with a combined 43% of the world total (table).
Trade
The top ten exporters of cashew nuts (in-shell; HS Code 080131) in value (USD) in 2021 were Ghana, Tanzania, Guinea-Bissau, Nigeria, Ivory Coast, Burkina Faso, Senegal, Indonesia, United Arab Emirates (UAE), and Guinea.
From 2017 to 2021, the top ten exporters of cashew nuts (shelled; HS Code 080132) were Vietnam, India, the Netherlands, Germany, Brazil, Ivory Coast, Nigeria, Indonesia, Burkina Faso, and the United States.
In 2014, the rapid growth of cashew cultivation in the Ivory Coast made this country the top African exporter. Fluctuations in world market prices, poor working conditions, and low pay for local harvesting have caused discontent in the cashew nut industry. Almost all cashews produced in Africa between 2000 and 2019 were exported as raw nuts which are much less profitable than shelled nuts. One of the goals of the African Cashew Alliance is to promote Africa's cashew processing capabilities to improve the profitability of Africa's cashew industry.
In 2011, Human Rights Watch reported that forced labour was used for cashew processing in Vietnam. Around 40,000 current or former drug users were forced to remove shells from "blood cashews" or perform other work and often beaten at more than 100 rehabilitation centers.
Toxicity
Some people are allergic to cashews, but they are a less frequent allergen than other tree nuts or peanuts. For up to 6% of children and 3% of adults, consuming cashews may cause allergic reactions, ranging from mild discomfort to life-threatening anaphylaxis. These allergies are triggered by the proteins found in tree nuts, and cooking often does not remove or change these proteins. Reactions to cashew and tree nuts can also occur as a consequence of hidden nut ingredients or traces of nuts that may inadvertently be introduced during food processing, handling, or manufacturing.
The shell of the cashew nut contains oil compounds that can cause contact dermatitis similar to poison ivy, primarily resulting from the phenolic lipids, anacardic acid, and cardanol. Because it can cause dermatitis, cashews are typically not sold in the shell to consumers. Readily and inexpensively extracted from the waste shells, cardanol is under research for its potential applications in nanomaterials and biotechnology.
Uses
Nutrition
Raw cashew nuts are 5% water, 30% carbohydrates, 44% fat, and 18% protein (table). In a 100-gram reference amount, raw cashews provide 553 kilocalories, 67% of the Daily Value (DV) in total fats, 36% DV of protein, 13% DV of dietary fiber and 11% DV of carbohydrates. Cashew nuts are rich sources (20% or more of the DV) of dietary minerals, including particularly copper, manganese, phosphorus, and magnesium (79–110% DV), and of thiamin, vitamin B6 and vitamin K (32–37% DV). Iron, potassium, zinc, and selenium are present in significant content (14–61% DV) (table). Cashews (100 g, raw) contain of beta-sitosterol.
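As an illustrative aside (not part of the source nutrition table), each percent Daily Value figure follows from dividing the per-100 g amount by the corresponding daily reference value; the protein figure, for example, works out as below, assuming the standard 50 g adult reference value for protein:

% Percent Daily Value from a per-100 g amount (protein example; 50 g reference value assumed)
\%\mathrm{DV} = \frac{\text{amount per 100 g}}{\text{daily reference value}} \times 100
             = \frac{18\ \mathrm{g}}{50\ \mathrm{g}} \times 100 = 36\%

This reproduces the 36% DV protein figure quoted above; other nutrients follow the same relation with their own reference values.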
Nut and shell
Culinary uses for cashew seeds in snacking and cooking are similar to those for all tree seeds called nuts.
Cashews are commonly used in South Asian cuisine, whole for garnishing sweets or curries, or ground into a paste that forms a base of sauces for curries (e.g., korma), or some sweets (e.g., kaju barfi). It is also used in powdered form in the preparation of several Indian sweets and desserts. In Goan cuisine, both roasted and raw kernels of Goa Kaju are used whole for making curries and sweets. Cashews are also used in Thai and Chinese cuisines, generally in whole form. In the Philippines, cashew is a known product of Antipolo and is eaten with suman. The province of Pampanga also has a sweet dessert called turrones de casuy, which is cashew marzipan wrapped in white wafers. In Indonesia, roasted and salted cashews are called kacang mete or kacang mede, while the cashew apple is called jambu monyet ( 'monkey rose apple').
In the 21st century, cashew cultivation increased in several African countries to meet the manufacturing demands for cashew milk, a plant milk alternative to dairy milk. In Mozambique, bolo polana is a cake prepared using powdered cashews and mashed potatoes as the main ingredients. This dessert is common in South Africa.
Husk
The cashew nut kernel has a slight curvature and two cotyledons, each representing around 20–25% of the weight of the nut. It is encased in a reddish-brown membrane called a husk, which accounts for approximately 5% of the total nut. Cashew nut husk is used in emerging industrial applications, such as an adsorbent, composites, biopolymers, dyes and enzyme synthesis.
Apple
The mature cashew apple can be eaten fresh, cooked in curries, or fermented into vinegar, citric acid or an alcoholic drink. It is also used to make preserves, chutneys, and jams in some countries, such as India and Brazil. In many countries, particularly in South America, the cashew apple is used to flavor drinks, both alcoholic and nonalcoholic.
In Brazil, cashew fruit juice and fruit pulp are used in the production of sweets, and juice mixed with alcoholic beverages such as cachaça, and as flour, milk, or cheese. In Panama, the cashew fruit is cooked with water and sugar for a prolonged time to make a sweet, brown, paste-like dessert called dulce de marañón (marañón being a Spanish name for cashew).
Cashew nuts are more widely traded than cashew apples, because the fruit, unlike the nut, is easily bruised and has a very limited shelf life. Cashew apple juice, however, may be used for manufacturing blended juices.
When the apple is consumed, its astringency is sometimes removed by steaming the fruit for five minutes before washing it in cold water. Steeping the fruit in boiling salt water for five minutes also reduces the astringency.
In Cambodia, where the plant is usually grown as an ornamental rather than an economic tree, the fruit is a delicacy and is eaten with salt.
Alcohol
In the Indian state of Goa, the ripened cashew apples are mashed, and the juice, called "neero", is extracted and kept for fermentation for a few days. This fermented juice then undergoes a double distillation process. The resulting beverage is called feni or fenny. Feni is about 40–42% alcohol (80–84 proof). The single-distilled version is called urrak, which is about 15% alcohol (30 proof). In Tanzania, the cashew apple (bibo in Swahili) is dried and reconstituted with water and fermented, then distilled to make a strong liquor called gongo.
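A brief note on the proof figures above: under the common US convention (an assumption here, not stated in the source), proof is simply twice the alcohol-by-volume percentage, which reproduces the quoted values:

% US convention: proof = 2 x ABV (alcohol by volume, in percent)
\text{proof} = 2 \times \text{ABV}, \qquad 2 \times 40 = 80, \qquad 2 \times 15 = 30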
Nut oil
Cashew nut oil is a dark yellow oil derived from pressing the cashew nuts (typically from lower-value broken chunks created accidentally during processing) and is used for cooking or as a salad dressing. The highest quality oil is produced from a single cold pressing.
Shell oil
Cashew nutshell liquid (CNSL) or cashew shell oil (CAS registry number 8007-24-7) is a natural resin with a yellowish sheen found in the honeycomb structure of the cashew nutshell, and is a byproduct of processing cashew nuts. As it is a strong irritant, it should not be confused with edible cashew nut oil. It is dangerous to handle in small-scale processing of the shells, but is itself a raw material with multiple uses. It is used in tropical folk medicine and for anti-termite treatment of timber. Its composition varies depending on how it is processed.
Cold, solvent-extracted CNSL is mostly composed of anacardic acids (70%), cardol (18%) and cardanol (5%).
Heating CNSL decarboxylates the anacardic acids, producing a technical grade of CNSL that is rich in cardanol. Distillation of this material gives distilled, technical CNSL containing 78% cardanol and 8% cardol (cardol has one more hydroxyl group than cardanol). This process also reduces the degree of thermal polymerization of the unsaturated alkyl-phenols present in CNSL.
Anacardic acid is also used in the chemical industry for the production of cardanol, which is used for resins, coatings, and frictional materials.
These substances are skin allergens, like lacquer and the oils of poison ivy, and they present a danger during manual cashew processing.
This natural oil phenol has chemical structural features that can be modified to create a wide spectrum of biobased monomers. These capitalize on a chemically versatile construct containing three functional groups: the aromatic ring, the hydroxyl group, and the double bonds in the flanking alkyl chain. Among the resulting monomers are polyols, which have recently seen increased demand because of their biobased origin and key chemical attributes, such as high reactivity, a range of functionalities, a reduced need for blowing agents, and naturally occurring fire-retardant properties in rigid polyurethanes, aided by their inherent phenolic structure and larger number of reactive units per unit mass.
CNSL may be used as a resin for carbon composite products. CNSL-based novolac is another versatile industrial monomer deriving from cardanol typically used as a reticulating agent (hardener) for epoxy matrices in composite applications providing good thermal and mechanical properties to the final composite material.
Animal feed
Discarded cashew nuts unfit for human consumption, alongside the residues of oil extraction from cashew kernels, can be fed to livestock. Animals can also eat the leaves of cashew trees.
Other uses
As well as the nut and fruit, the plant has several other uses. In Cambodia, the bark gives a yellow dye, the timber is used in boat-making, and for house-boards, and the wood makes excellent charcoal. The shells yield a black oil used as a preservative and water-proofing agent in varnishes, cement, and as a lubricant or timber seal. Timber is used to manufacture furniture, boats, packing crates, and charcoal. Its juice turns black on exposure to air, providing an indelible ink.
| Biology and health sciences | Sapindales | null |
63610 | https://en.wikipedia.org/wiki/Peafowl | Peafowl | Peafowl is a common name for two bird species of the genus Pavo and one species of the closely related genus Afropavo within the tribe Pavonini of the family Phasianidae (the pheasants and their allies). Male peafowl are referred to as peacocks, and female peafowl are referred to as peahens.
The two Asiatic species are the blue or Indian peafowl originally from the Indian subcontinent, and the green peafowl from Southeast Asia. The Congo peafowl, native only to the Congo Basin, is the sole African species. Male peafowl are known for their piercing calls and their extravagant plumage. The latter is especially prominent in the Asiatic species, which have an eye-spotted "tail" or "train" of covert feathers, which they display as part of a courtship ritual.
The functions of the elaborate iridescent coloration and large "train" of peacocks have been the subject of extensive scientific debate. Charles Darwin suggested that they served to attract females, and the showy features of the males had evolved by sexual selection. More recently, Amotz Zahavi proposed in his handicap principle that these features acted as honest signals of the males' fitness, since less-fit males would be disadvantaged by the difficulty of surviving with such large and conspicuous structures.
Description
The Indian peacock (Pavo cristatus) has iridescent plumage, mostly metallic blue and green. In both species, females are a little smaller than males in terms of weight and wingspan, but males are significantly longer due to the "tail", also known as a "train". The peacock train consists not of tail quill feathers but of highly elongated upper tail coverts. These feathers are marked with eyespots, best seen when a peacock fans his tail. All species have a crest atop the head. The Indian peahen has a mixture of dull grey, brown, and green in her plumage. The female also displays her plumage to ward off female competition or to signal danger to her young.
Male green peafowls (Pavo muticus) have green and bronze or gold plumage, and black wings with a sheen of blue. Unlike Indian peafowl, the green peahen is similar to the male, but has shorter upper tail coverts, a more coppery neck, and overall less iridescence. Both males and females have spurs.
The Congo peacock (Afropavo congensis) male does not display his covert feathers, but uses his actual tail feathers during courtship displays. These feathers are much shorter than those of the Indian and green species, and the ocelli are much less pronounced. Females of the Indian and African species are dull grey and/or brown.
Chicks of both sexes in all the species are cryptically colored. They vary between yellow and tawny, usually with patches of darker brown or light tan and "dirty white" ivory.
Mature peahens have been recorded as suddenly growing typically male peacock plumage and making male calls. Research has suggested that changes in mature birds are due to a lack of estrogen from old or damaged ovaries, and that male plumage and calls are the default unless hormonally suppressed.
Iridescence and structural coloration
As with many birds, vibrant iridescent plumage colors are not primarily pigments, but structural coloration. Optical interference Bragg reflections, based on regular, periodic nanostructures of the barbules (fiber-like components) of the feathers, produce the peacock's colors. 2D photonic-crystal structures within the layers of the barbules cause the coloration of their feathers. Slight changes to the spacing of the barbules result in different colors. Brown feathers are a mixture of red and blue: one color is created by the periodic structure and the other is created by a Fabry–Pérot interference peak from reflections from the outer and inner boundaries. Color derived from physical structure rather than pigment can vary with viewing angle, causing iridescence.
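As a simplified sketch of the interference mechanism described above (the normal-incidence, first-order Bragg condition is used as an approximation; the spacing and effective refractive index below are illustrative assumptions, not values taken from the source):

% First-order Bragg reflection at normal incidence: reflected wavelength set by lattice
% spacing d and effective refractive index n_eff; small changes in d shift the colour
m\,\lambda = 2\, n_{\mathrm{eff}}\, d, \qquad \lambda \approx 2 \times 1.5 \times 150\ \mathrm{nm} = 450\ \mathrm{nm}\ \text{(blue, for } m = 1\text{)}

Under this approximation, slightly larger or smaller barbule spacings shift the reflected wavelength toward longer or shorter values, which is consistent with the statement that small changes in spacing produce different colours.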
Courtship
Most commonly, during a courtship display, the visiting female peahen will stop directly in front of the male peacock, thus providing her with the ability to assess the male at 90° to the surface of the feather. Then, the male will turn and display his feathers about 45° to the right of the sun's azimuth which allows the sunlight to accentuate the iridescence of his train. If the female chooses to interact with the male, he will then turn to face her and shiver his train so as to begin the mating process.
Evolution
Sexual selection
Charles Darwin suggested in The Descent of Man and Selection in Relation to Sex that peafowl plumage may have evolved through sexual selection.
Aposematism and natural selection
It has been suggested that a peacock's train, loud call, and fearless behavior have been formed by natural selection (with or without sexual selection too), and served as an aposematic display to intimidate predators and rivals. This hypothesis is designed to explain Takahashi's observations that in Japan, neither reproductive success nor physical condition correlate with the train's length, symmetry or number of eyespots.
Female choice
Multiple hypotheses involving female choice have been posited. One hypothesis is that females choose mates with good genes. Males with more exaggerated secondary sexual characteristics, such as bigger, brighter peacock trains, tend to have better genes in the peahen's eyes. These better genes directly benefit her offspring, as well as her fitness and reproductive success.
Runaway selection is another hypothesis. In runaway sexual selection, linked genes in males and females code for sexually dimorphic traits in males, and preference for those traits in females. The close spatial association of alleles for loci involved in the train in males, and for preference for more exuberant trains in females, on the chromosome (linkage disequilibrium) causes a positive feedback loop that exaggerates both the male traits and the female preferences.
Another hypothesis is sensory bias, in which females have a preference for a trait in a non-mating context that becomes transferred to mating, such as Merle Jacobs' food-courtship hypothesis, which suggests that peahens are attracted to peacocks for the resemblance of their eye spots to blue berries.
Multiple causalities for the evolution of female choice are also possible.
The peacock's train and iridescent plumage are perhaps the best-known examples of traits believed to have arisen through sexual selection, though with some controversy. Male peafowl erect their trains to form a shimmering fan in their display for females. Marion Petrie tested whether or not these displays signalled a male's genetic quality by studying a feral population of peafowl in Whipsnade Wildlife Park in southern England. The number of eyespots in the train predicted a male's mating success. She was able to manipulate this success by cutting the eyespots off some of the males' tails: females lost interest in pruned males and became attracted to untrimmed ones. Males with fewer eyespots, thus having lower mating success, suffered from greater predation. She allowed females to mate with males with differing numbers of eyespots, and reared the offspring in a communal incubator to control for differences in maternal care. Chicks fathered by more ornamented males weighed more than those fathered by less ornamented males, an attribute generally associated with better survival rate in birds. These chicks were released into the park and recaptured one year later. Those with heavily ornamented feathers were better able to avoid predators and survive in natural conditions. Thus, Petrie's work shows correlations between tail ornamentation, mating success, and increased survival ability in both the ornamented males and their offspring.
Furthermore, peafowl and their sexual characteristics have been used in the discussion of the causes for sexual traits. Amotz Zahavi used the excessive tail plumes of male peafowls as evidence for his "handicap principle". Since these trains are likely to be deleterious to an individual's survival (as their brilliance makes them more visible to predators and their length hinders escape from danger), Zahavi argued that only the fittest males could survive the handicap of a large train. Thus, a brilliant train serves as an honest indicator for females that these highly ornamented males are good at surviving for other reasons, so are preferable mates. This theory may be contrasted with Ronald Fisher's hypothesis that male sexual traits are the result of initially arbitrary aesthetic selection by females.
In contrast to Petrie's findings, a seven-year Japanese study of free-ranging peafowl concluded that female peafowl do not select mates solely on the basis of their trains. Mariko Takahashi found no evidence that peahens preferred peacocks with more elaborate trains (such as with more eyespots), a more symmetrical arrangement, or a greater length. Takahashi determined that the peacock's train was not the universal target of female mate choice, showed little variance across male populations, and did not correlate with male physiological condition. Adeline Loyau and her colleagues responded that alternative and possibly central explanations for these results had been overlooked. They concluded that female choice might indeed vary in different ecological conditions.
Plumage colours as attractants
A peacock's copulation success rate depends on the colours of his eyespots (ocelli) and the angle at which they are displayed. The angle at which the ocelli are displayed during courtship is more important in a peahen's choice of males than train size or number of ocelli. Peahens pay careful attention to the different parts of a peacock's train during his display. The lower train is usually evaluated during close-up courtship, while the upper train is more of a long-distance attraction signal. Actions such as train rattling and wing shaking also kept the peahens' attention.
Redundant signal hypothesis
Although an intricate display catches a peahen's attention, the redundant signal hypothesis also plays a crucial role in keeping this attention on the peacock's display. The redundant signal hypothesis holds that while each signal a male projects is of about the same quality, the addition of multiple signals enhances the reliability of that mate. This idea also suggests that the success of multiple signalling is due not only to the repetitiveness of the signal, but also to the presence of multiple receivers of the signal. In the peacock species, males congregate in a communal display during the breeding season and the peahens observe. Peacocks first defend their territory through intra-sexual behaviour, defending their areas from intruders. They fight for areas within the congregation to display a strong front for the peahens. Central positions are usually taken by older, dominant males, which influences mating success. Certain morphological and behavioural traits come into play during inter- and intra-sexual selection, including train length for territory acquisition and visual and vocal displays involved in mate choice by peahens.
Behaviour
Peafowl are forest birds that nest on the ground, but roost in trees. They are terrestrial feeders. All species of peafowl are believed to be polygamous. In common with other members of the Galliformes, the males possess metatarsal spurs or "thorns" on their legs used during intraspecific territorial fights with some other members of their kind.
In courtship, vocalisation is a primary way for peacocks to attract peahens. Some studies suggest that the intricacy of the "song" produced by displaying peacocks is impressive to peahens. Singing in peacocks usually occurs just before, just after, or sometimes during copulation.
Diet
Peafowl are omnivores and mostly eat plants, flower petals, seed heads, insects and other arthropods, reptiles, and amphibians. Wild peafowl look for their food scratching around in leaf litter either early in the morning or at dusk. They retreat to the shade and security of the woods for the hottest portion of the day. These birds are not picky and will eat almost anything they can fit in their beak and digest. They actively hunt insects like ants, crickets and termites; millipedes; and other arthropods and small mammals. Indian peafowl also eat small snakes.
Domesticated peafowl may also eat bread and cracked grain such as oats and corn, cheese, cooked rice and sometimes cat food. It has been noticed by keepers that peafowl enjoy protein-rich food including larvae that infest granaries, different kinds of meat and fruit, as well as vegetables including dark leafy greens, broccoli, carrots, beans, beets, and peas.
Cultural significance
Indian peafowl
The peafowl is native to India and significant in its culture. In Hinduism, the Indian peacock is the mount of the god of war, Kartikeya, and the warrior goddess Kaumari, and is also depicted around the goddess Santoshi. During a war with Asuras, Kartikeya split the demon king Surapadman in half. Out of respect for his adversary's prowess in battle, the god converted the two halves into an integral part of himself. One half became a peacock serving as his mount, and the other a rooster adorning his flag. The peacock displays the divine shape of Omkara when it spreads its magnificent plumes into a full-blown circular form. In the Tantric traditions of Hinduism the goddess Tvarita is depicted with peacock feathers. A peacock feather also adorns the crest of the god Krishna.
Chandragupta Maurya, the founder of the Mauryan Empire, was born an orphan and raised by a family farming peacocks. According to the Buddhist tradition, the ancestors of the Maurya kings had settled in a region where peacocks (mora in Pali) were abundant. Therefore, they came to be known as "Moriyas", literally, "belonging to the place of peacocks". According to another Buddhist account, these ancestors built a city called Moriya-nagara ("Moriya-city"), which was so called because it was built with the "bricks coloured like peacocks' necks". After Chandragupta conquered the Nanda Empire and defeated the Seleucid Empire, his dynasty reigned uncontested during its time. Its royal emblem remained the peacock until Emperor Ashoka changed it to a lion, as seen in the Lion Capital of Ashoka, as well as in his edicts. The peacock continued to represent elegance and royalty in India during medieval times; for instance, the Mughal seat of power was called the Peacock Throne.
The peacock is represented in both the Burmese and Sinhalese zodiacs. To the Sinhalese people, the peacock is the third animal of the zodiac of Sri Lanka.
Peacocks (often a symbol of pride and vanity) were believed to deliberately consume poisonous substances in order to become immune to them, as well as to make the colours of their resplendent plumage all the more vibrant – seeing as so many poisonous flora and fauna are so colourful due to aposematism, this idea appears to have merit. The Buddhist deity Mahamayuri is depicted seated on a peacock. Peacocks are seen supporting the throne of Amitabha, the ruby red sunset coloured archetypal Buddha of Infinite Light.
India adopted the peacock as its national bird in 1963 and it is one of the national symbols of India.
Middle East
Yazidism
Tawûsî Melek (lit. 'Peacock Angel'), one of the central figures of the Yazidi religion, is symbolized with a peacock. In Yazidi creation stories, before the creation of this world, God created seven Divine Beings, of whom Tawûsî Melek was appointed as the leader. God assigned all of the world's affairs to these seven Divine Beings, also often referred to as the Seven Angels or heft sirr ("the Seven Mysteries").
In Yazidism, the peacock is believed to represent the diversity of the world, and the colourfulness of the peacock's feathers is considered to represent all the colours of nature. The feathers of the peacock also symbolize sun rays, from which come light, luminosity and brightness. The peacock opening the feathers of its tail in a circular shape symbolizes the sunrise.
Consequently, due to its holiness, Yazidis are not allowed to hunt or eat the peacock, ill-treat it, or utter bad words about it. Images of the peacock are also found drawn around the sanctuary of Lalish and on other Yazidi shrines, holy sites and homes, as well as on religious, social, cultural and academic centres.
Mandaeism
In The Baptism of Hibil Ziwa, the Mandaean uthra and emanation Yushamin is described as a peacock.
Ancient Greece
Ancient Greeks believed that the flesh of peafowl did not decay after death, so it became a symbol of immortality. In Hellenistic imagery, the Greek goddess Hera's chariot was pulled by peacocks, birds not known to Greeks before the conquests of Alexander. Alexander's tutor, Aristotle, refers to it as "the Persian bird". When Alexander saw the birds in India, he was so amazed at their beauty that he threatened the severest penalties for any man who slew one. Claudius Aelianus writes that there were peacocks in India, larger than anywhere else.
One myth states that Hera's servant, the hundred-eyed Argus Panoptes, was instructed to guard the woman-turned-cow, Io. Hera had transformed Io into a cow after learning of Zeus's interest in her. Zeus had the messenger of the gods, Hermes, kill Argus through eternal sleep and free Io. According to Ovid, to commemorate her faithful watchman, Hera had the hundred eyes of Argus preserved forever, in the peacock's tail.
Christianity
The symbolism was adopted by early Christianity, thus many early Christian paintings and mosaics show the peacock. The peacock is still used in the Easter season, especially in the east. The "eyes" in the peacock's tail feathers can symbolise the all-seeing Christian God, the Church, or angelic wisdom. The emblem of a pair of peacocks drinking from a vase is used as a symbol of the eucharist and the resurrection, as it represents the Christian believer drinking from the waters of eternal life. The peacock can also symbolise the cosmos if one interprets its tail with its many "eyes" as the vault of heaven dotted by the sun, moon, and stars. Due to the adoption by Augustine of the ancient idea that the peacock's flesh did not decay, the bird was again associated with immortality. In Christian iconography, two peacocks are often depicted either side of the Tree of Life.
The symbolic association of peacock feathers with the wings of angels led to the belief that the waving of such liturgical fans resulted in an automated emission of prayers. This affinity between peacocks' and angels' feathers was also expressed in other artistic media, including paintings of angels with peacock feather wings.
Judaism
Among Ashkenazi Jews, the golden peacock is a symbol for joy and creativity, with quills from the bird's feathers being a metaphor for a writer's inspiration.
Renaissance
The peacock motif was revived in the Renaissance iconography that unified Hera and Juno, and on which European painters focused.
Contemporary
In 1956, John J. Graham created an abstraction of an 11-feathered peacock logo for American broadcaster NBC. This brightly hued peacock was adopted due to the increase in colour programming. NBC's first colour broadcasts showed only a still frame of the colourful peacock. The emblem made its first on-air appearance on 22 May 1956. The current, six-feathered logo debuted on 12 May 1986.
Breeding and colour variations
Hybrids between Indian peafowl and green peafowl are called Spaldings, after the first person to successfully hybridise them, Keith Spalding. Spaldings with a high-green phenotype do much better in cold temperatures than the cold-intolerant green peafowl while still looking like their green parents. Plumage varies between individual Spaldings, with some looking far more like green peafowl and some looking far more like blue peafowl, though most visually carry traits of both.
In addition to the wild-type "blue" colouration, several hundred variations in colour and pattern are recognised as separate morphs of the Indian Blue among peafowl breeders. Pattern variations include solid-wing/black shoulder (the black and brown stripes on the wing are instead one solid colour), pied, white-eye (the ocelli in a male's eye feathers have white spots instead of black), and silver pied (a mostly white bird with small patches of colour). Colour variations include white, purple, Buford bronze, opal, midnight, charcoal, jade, and taupe, as well as the sex-linked colours purple, cameo, peach, and Sonja's Violeta. Additional colour and pattern variations must first be approved by the United Peafowl Association before being officially recognised as a morph among breeders. Alternately coloured peafowl are born a different colour from wild-type peafowl, and though each colour is recognisable at hatch, their peachick plumage does not necessarily match their adult plumage.
Occasionally, peafowl appear with white plumage. Although albino peafowl do exist, this is quite rare, and almost all white peafowl are not albinos; they have a genetic condition called leucism, which causes pigment cells to fail to migrate from the neural crest during development. Leucistic peafowl can produce pigment but not deposit the pigment to their feathers, resulting in a blue-grey eye colour and the complete lack of colouration in their plumage. Pied peafowl are affected by partial leucism, where only some pigment cells fail to migrate, resulting in birds that have colour but also have patches absent of all colour; they, too, have blue-grey eyes. By contrast, true albino peafowl would have a complete lack of melanin, resulting in irises that look red or pink. Leucistic peachicks are born yellow and become fully white as they mature.
The black-shouldered or Japanned mutation was initially considered a subspecies of the Indian peafowl (P. c. nigripennis), or even a separate species (P. nigripennis), and was a topic of some interest during Darwin's time. Others had doubts about its taxonomic status, but the English naturalist and biologist Charles Darwin (1809–1882) presented firm evidence that it is a variety arising under domestication, a treatment that is now well established and accepted. Proving that it was a colour variation rather than a wild species was important for Darwin, as otherwise it could have undermined his theory of slow modification by natural selection in the wild. It is, however, only a case of genetic variation within the population. In this mutation, the adult male is melanistic with black wings.
Gastronomy
In ancient Rome, peafowl were served as a delicacy. The dish was introduced there in approximately 35 B.C. The poet Horace ridiculed the eating of peafowl, saying they tasted like chicken. Peafowl eggs were also valued. Gaius Petronius in his Satyricon also mocked the ostentation and snobbery of eating peafowl and their eggs.
During the Medieval period, various types of fowl were consumed as food, with the poorer populations (such as serfs) consuming more common birds, such as chicken. However, the more wealthy gentry were privileged to eat less usual foods, such as swan, and even peafowl were consumed. On a king's table, a peacock would be for ostentatious display as much as for culinary consumption.
From the 1864 The English and Australian Cookery Book, regarding occasions and preparation of the bird:
Instead of plucking this bird, take off the skin with the greatest care, so that the feathers do not get detached or broken. Stuff it with what you like, as truffles, mushrooms, livers of fowls, bacon, salt, spice, thyme, crumbs of bread, and a bay-leaf. Wrap the claws and head in several folds of cloth, and envelope the body in buttered paper. The head and claws, which project at the two ends, must be basted with water during the cooking, to preserve them, and especially the tuft. Before taking it off the spit, brown the bird by removing the paper. Garnish with lemon and flowers. If to come on the table cold, place the bird in a wooden trencher, in the middle of which is fixed a wooden skewer, which should penetrate the body of the bird, to keep it upright. Arrange the claws and feathers in a natural manner, and the tail like a fan, supported with wire. No ordinary cook can place a peacock on the table properly. This ceremony was reserved, in the times of chivalry, for the lady most distinguished for her beauty. She carried it, amidst inspiring music, and placed it, at the commencement of the banquet, before the master of the house. At a nuptial feast, the peacock was served by the maid of honour, and placed before the bride for her to consume.
| Biology and health sciences | Galliformes | null |
63640 | https://en.wikipedia.org/wiki/Partridge | Partridge | A partridge is a medium-sized galliform bird in any of several genera, with a wide native distribution throughout parts of Europe, Asia and Africa. Several species have been introduced to the Americas. They are sometimes grouped in the Perdicinae subfamily of the Phasianidae (pheasants, quail, etc.). However, molecular research suggests that partridges are not a distinct taxon within the family Phasianidae, but that some species are closer to the pheasants, while others are closer to the junglefowl.
Description
Partridges are medium-sized game birds, generally intermediate in size between the larger pheasants and the smaller quails. They are ground-dwelling birds with variable plumage colouration across species, most tending towards grey and brown.
Range and habitat
Partridges are native to Europe, Asia, Africa, and the Middle East. Some species are found nesting on steppes or agricultural land, while other species prefer more forested areas. They nest on the ground and have a diet consisting of seeds and insects.
Hunting
Species such as the grey partridge and the red-legged partridge are popular as game birds, and are often reared in captivity and released for the purpose of hunting. For the same reason, they have been introduced into large areas of North America.
Cultural references
According to Greek legend, the first partridge appeared when Daedalus threw his apprentice, Talos, off the sacred hill of Athena in a fit of jealous rage. Supposedly mindful of his fall, the bird does not build its nest in the trees or take lofty flights, and avoids high places.
As described by medieval scholar Madeleine Pelner Cosman, medical practitioners in the Middle Ages recommended partridge as a food of love: They suggested that "Partridge was superior in arousing dulled passions and increasing the powers of engendering. Gentle to the human stomach, partridge stimulated bodily fluids, raised the spirits, and firmed the muscles."
Probably the most famous reference to the partridge is in the Christmas carol, "The Twelve Days of Christmas". The first gift listed is "a partridge in a pear tree", and these words end each verse. Since partridges are unlikely to be seen in pear trees (they are ground-nesting birds), it has been suggested that the text "a pear tree" is a corruption of the French "une perdrix" (a partridge).
The partridge has also been used as a symbol representing Kurdish nationalism. It is called Kew. Sherko Kurmanj discusses the paradox of symbols in Iraq as an attempt to make a distinction between the Kurds and the Arabs. He says that while Iraqis generally regard the palm tree, falcon, and sword as their national symbols, the Kurds consider the oak, partridge, and dagger as theirs.
Species list in taxonomic order
Genus Lerwa
Snow partridge, Lerwa lerwa
Genus Tetraophasis
Verreaux's monal-partridge, Tetraophasis obscurus
Szechenyi's monal-partridge, Tetraophasis szechenyii
Genus Alectoris
Arabian partridge, Alectoris melanocephala
Przevalski's partridge, Alectoris magna
Rock partridge, Alectoris graeca
Chukar, Alectoris chukar
Philby's partridge, Alectoris philbyi
Barbary partridge, Alectoris barbara
Red-legged partridge, Alectoris rufa
Genus Ammoperdix
See-see partridge, Ammoperdix griseogularis
Sand partridge, Ammoperdix heyi
Genus Perdix
Grey partridge, Perdix perdix
Daurian partridge, Perdix dauurica
Tibetan partridge, Perdix hodgsoniae
Genus Rhizothera
Long-billed partridge, Rhizothera longirostris
Dulit partridge, Rhizothera dulitensis
Genus Margaroperdix
Madagascar partridge, Margaroperdix madagascarensis
Genus Melanoperdix
Black wood-partridge, Melanoperdix nigra
Genus Xenoperdix
Rubeho forest partridge, Xenoperdix obscuratus
Udzungwa forest partridge, Xenoperdix udzungwensis
Genus Arborophila, the hill partridges
Hill partridge, Arborophila torqueola
Sichuan partridge, Arborophila rufipectus
Chestnut-breasted partridge, Arborophila mandellii
White-necklaced partridge, Arborophila gingica
Rufous-throated partridge, Arborophila rufogularis
White-cheeked partridge, Arborophila atrogularis
Taiwan partridge, Arborophila crudigularis
Hainan partridge, Arborophila ardens
Chestnut-bellied partridge, Arborophila javanica
Grey-breasted partridge, Arborophila orientalis
Bar-backed partridge, Arborophila brunneopectus
Orange-necked partridge, Arborophila davidi
Chestnut-headed partridge, Arborophila cambodiana
Red-breasted partridge, Arborophila hyperythra
Red-billed partridge, Arborophila rubrirostris
Sumatran partridge, Arborophila sumatrana
Genus Tropicoperdix
Scaly-breasted partridge, Tropicoperdix chloropus
Chestnut-necklaced partridge, Tropicoperdix charltonii
Genus Caloperdix
Ferruginous partridge, Caloperdix oculea
Genus Haematortyx
Crimson-headed partridge, Haematortyx sanguiniceps
Genus Rollulus
Crested partridge, Rollulus roulroul
Genus Bambusicola
Mountain bamboo partridge, Bambusicola fytchii
Chinese bamboo partridge, Bambusicola thoracica
| Biology and health sciences | Galliformes | Animals |
63663 | https://en.wikipedia.org/wiki/Fly | Fly | Flies are insects of the order Diptera, the name being derived from the Greek δι- di- "two", and πτερόν pteron "wing". Insects of this order use only a single pair of wings to fly, the hindwings having evolved into advanced mechanosensory organs known as halteres, which act as high-speed sensors of rotational movement and allow dipterans to perform advanced aerobatics. Diptera is a large order containing more than 150,000 species including horse-flies, crane flies, hoverflies, mosquitoes and others.
Flies have a mobile head, with a pair of large compound eyes, and mouthparts designed for piercing and sucking (mosquitoes, black flies and robber flies), or for lapping and sucking in the other groups. Their wing arrangement gives them great manoeuvrability in flight, and claws and pads on their feet enable them to cling to smooth surfaces. Flies undergo complete metamorphosis; the eggs are often laid on the larval food-source and the larvae, which lack true limbs, develop in a protected environment, often inside their food source. Other species are ovoviviparous, opportunistically depositing hatched or hatching larvae instead of eggs on carrion, dung, decaying material, or open wounds of mammals. The pupa is a tough capsule from which the adult emerges when ready to do so; flies mostly have short lives as adults.
Diptera is one of the major insect orders and of considerable ecological and human importance. Flies are major pollinators, second only to the bees and their Hymenopteran relatives. Flies may have been among the evolutionarily earliest pollinators responsible for early plant pollination. Fruit flies are used as model organisms in research, but less benignly, mosquitoes are vectors for malaria, dengue, West Nile fever, yellow fever, encephalitis, and other infectious diseases; and houseflies, commensal with humans all over the world, spread foodborne illnesses. Flies can be annoyances especially in some parts of the world where they can occur in large numbers, buzzing and settling on the skin or eyes to bite or seek fluids. Larger flies such as tsetse flies and screwworms cause significant economic harm to cattle. Blowfly larvae, known as gentles, and other dipteran larvae, known more generally as maggots, are used as fishing bait, as food for carnivorous animals, and in medicine in debridement, to clean wounds.
Taxonomy and phylogeny
Relationships to other insects
Dipterans are holometabolans, insects that undergo radical metamorphosis. They belong to the Mecopterida, alongside the Mecoptera, Siphonaptera, Lepidoptera and Trichoptera. The possession of a single pair of wings distinguishes most true flies from other insects with "fly" in their names. However, some true flies such as Hippoboscidae (louse flies) have become secondarily wingless.
The cladogram represents the current consensus view.
Relationships between subgroups and families
The first true dipterans known are from the Middle Triassic (around 240 million years ago), and they became widespread during the Middle and Late Triassic. Modern flowering plants did not appear until the Cretaceous (around 140 million years ago), so the original dipterans must have had a source of nutrition other than nectar. Based on the attraction of many modern fly groups to shiny droplets, it has been suggested that they may have fed on honeydew produced by sap-sucking bugs which were abundant at the time, and dipteran mouthparts are well-adapted to softening and lapping up the crusted residues. The basal clades in the Diptera include the Deuterophlebiidae and the enigmatic Nymphomyiidae. Three episodes of evolutionary radiation are thought to have occurred based on the fossil record. Many new species of lower Diptera developed in the Triassic, about 220 million years ago. Many lower Brachycera appeared in the Jurassic, some 180 million years ago. A third radiation took place among the Schizophora at the start of the Paleogene, 66 million years ago.
The phylogenetic position of Diptera has been controversial. The monophyly of holometabolous insects has long been accepted, with the main orders being established as Lepidoptera, Coleoptera, Hymenoptera and Diptera, and it is the relationships between these groups which have caused difficulties. Diptera is widely thought to be a member of Mecopterida, along with Lepidoptera (butterflies and moths), Trichoptera (caddisflies), Siphonaptera (fleas), Mecoptera (scorpionflies) and possibly Strepsiptera (twisted-wing flies). Diptera has been grouped with Siphonaptera and Mecoptera in the Antliophora, but this has not been confirmed by molecular studies.
Diptera were traditionally broken down into two suborders, Nematocera and Brachycera, distinguished by the differences in antennae. The Nematocera are identified by their elongated bodies and many-segmented, often feathery antennae as represented by mosquitoes and crane flies. The Brachycera have rounder bodies and much shorter antennae. Subsequent studies have identified the Nematocera as being non-monophyletic with modern phylogenies placing the Brachycera within grades of groups formerly placed in the Nematocera. The construction of a phylogenetic tree has been the subject of ongoing research. The following cladogram is based on the FLYTREE project.
Diversity
Flies are often abundant and are found in almost all terrestrial habitats in the world apart from Antarctica. They include many familiar insects such as house flies, blow flies, mosquitoes, gnats, black flies, midges and fruit flies. More than 150,000 have been formally described and the actual species diversity is much greater, with the flies from many parts of the world yet to be studied intensively. The suborder Nematocera include generally small, slender insects with long antennae such as mosquitoes, gnats, midges and crane-flies, while the Brachycera includes broader, more robust flies with short antennae. Many nematoceran larvae are aquatic. There are estimated to be a total of about 19,000 species of Diptera in Europe, 22,000 in the Nearctic region, 20,000 in the Afrotropical region, 23,000 in the Oriental region and 19,000 in the Australasian region. While most species have restricted distributions, a few like the housefly (Musca domestica) are cosmopolitan. Gauromydas heros (Asiloidea), with a length of up to , is generally considered to be the largest fly in the world, while the smallest is Euryplatea nanaknihali, which at is smaller than a grain of salt.
Brachycera are ecologically very diverse, with many being predatory at the larval stage and some being parasitic. Animals parasitised include molluscs, woodlice, millipedes, insects, mammals, and amphibians. Flies are the second largest group of pollinators after the Hymenoptera (bees, wasps and relatives). In wet and colder environments flies are significantly more important as pollinators. Compared to bees, they need less food as they do not need to provision their young. Many flowers that bear low nectar and those that have evolved trap pollination depend on flies. It is thought that some of the earliest pollinators of plants may have been flies.
The greatest diversity of gall-forming insects is found among the flies, principally in the family Cecidomyiidae (gall midges). Many flies (most importantly in the family Agromyzidae) lay their eggs in the mesophyll tissue of leaves, with the larvae feeding between the surfaces and forming blisters and mines. Some families are mycophagous or fungus feeding. These include the cave-dwelling Mycetophilidae (fungus gnats), whose larvae are the only dipterans with bioluminescence. The Sciaridae are also fungus feeders. Some plants are pollinated by fungus-feeding flies that visit fungus-infected male flowers.
The larvae of Megaselia scalaris (Phoridae) are almost omnivorous and consume such substances as paint and shoe polish. The fly Exorista mella (Walker) is considered a generalist parasitoid of a variety of hosts. The larvae of the shore flies (Ephydridae) and some Chironomidae survive in extreme environments including glaciers (Diamesa sp., Chironomidae), hot springs, geysers, saline pools, sulphur pools, septic tanks and even crude oil (Helaeomyia petrolei). Adult hoverflies (Syrphidae) are well known for their mimicry and the larvae adopt diverse lifestyles including being inquiline scavengers inside the nests of social insects. Some brachycerans are agricultural pests, some bite animals and humans and suck their blood, and some transmit diseases.
Anatomy and morphology
Flies are adapted for aerial movement and typically have short and streamlined bodies. The first tagma of the fly, the head, bears the eyes, the antennae, and the mouthparts (the labrum, labium, mandible, and maxilla make up the mouthparts). The second tagma, the thorax, bears the wings and contains the flight muscles on the second segment, which is greatly enlarged; the first and third segments have been reduced to collar-like structures, and the third segment bears the halteres, which help to balance the insect during flight. The third tagma is the abdomen consisting of 11 segments, some of which may be fused, and with the three hindmost segments modified for reproduction. Some Dipterans are mimics and can only be distinguished from their models by very careful inspection. An example of this is Spilomyia longicornis, which is a fly but mimics a vespid wasp.
Flies have a mobile head with a pair of large compound eyes on the sides of the head, and in most species, three small ocelli on the top. The compound eyes may be close together or widely separated, and in some instances are divided into a dorsal region and a ventral region, perhaps to assist in swarming behaviour. The antennae are well-developed but variable, being thread-like, feathery or comb-like in the different families. The mouthparts are adapted for piercing and sucking, as in the black flies, mosquitoes and robber flies, and for lapping and sucking as in many other groups. Female horse-flies use knife-like mandibles and maxillae to make a cross-shaped incision in the host's skin and then lap up the blood that flows. The gut includes large diverticulae, allowing the insect to store small quantities of liquid after a meal.
For visual course control, flies' optic flow field is analyzed by a set of motion-sensitive neurons. A subset of these neurons is thought to be involved in using the optic flow to estimate the parameters of self-motion, such as yaw, roll, and sideward translation. Other neurons are thought to be involved in analyzing the content of the visual scene itself, such as separating figures from the ground using motion parallax. The H1 neuron is responsible for detecting horizontal motion across the entire visual field of the fly, allowing the fly to generate and guide stabilizing motor corrections midflight with respect to yaw. The ocelli are concerned in the detection of changes in light intensity, enabling the fly to react swiftly to the approach of an object.
Like other insects, flies have chemoreceptors that detect smell and taste, and mechanoreceptors that respond to touch. The third segments of the antennae and the maxillary palps bear the main olfactory receptors, while the gustatory receptors are in the labium, pharynx, feet, wing margins and female genitalia, enabling flies to taste their food by walking on it. The taste receptors in females at the tip of the abdomen receive information on the suitability of a site for ovipositing. Flies that feed on blood have special sensory structures that can detect infrared emissions, and use them to home in on their hosts. Many blood-sucking flies can detect the raised concentration of carbon dioxide that occurs near large animals. Some tachinid flies (Ormiinae) which are parasitoids of bush crickets, have sound receptors to help them locate their singing hosts.
Diptera have one pair of fore wings on the mesothorax and a pair of halteres, or reduced hind wings, on the metathorax. A further adaptation for flight is the reduction in number of the neural ganglia, and concentration of nerve tissue in the thorax, a feature that is most extreme in the highly derived Muscomorpha infraorder. Some flies such as the ectoparasitic Nycteribiidae and Streblidae are exceptional in having lost their wings and become flightless. The only other order of insects bearing a single pair of true, functional wings, in addition to any form of halteres, are the Strepsiptera. In contrast to the flies, the Strepsiptera bear their halteres on the mesothorax and their flight wings on the metathorax. Each of the fly's six legs has a typical insect structure of coxa, trochanter, femur, tibia and tarsus, with the tarsus in most instances being subdivided into five tarsomeres. At the tip of the limb is a pair of claws, and between these are cushion-like structures known as pulvilli which provide adhesion.
The abdomen shows considerable variability among members of the order. It consists of eleven segments in primitive groups and ten segments in more derived groups, the tenth and eleventh segments having fused. The last two or three segments are adapted for reproduction. Each segment is made up of a dorsal and a ventral sclerite, connected by an elastic membrane. In some females, the sclerites are rolled into a flexible, telescopic ovipositor.
Flight
Flies are capable of great manoeuvrability during flight due to the presence of the halteres. These act as gyroscopic organs and are rapidly oscillated in time with the wings; they act as a balance and guidance system by providing rapid feedback to the wing-steering muscles, and flies deprived of their halteres are unable to fly. The wings and halteres move in synchrony but the amplitude of each wing beat is independent, allowing the fly to turn sideways. The wings of the fly are attached to two kinds of muscles, those used to power it and another set used for fine control.
Flies tend to fly in a straight line then make a rapid change in direction before continuing on a different straight path. The directional changes are called saccades and typically involve an angle of 90°, being achieved in 50 milliseconds. They are initiated by visual stimuli as the fly observes an object, nerves then activate steering muscles in the thorax that cause a small change in wing stroke which generate sufficient torque to turn. Detecting this within four or five wingbeats, the halteres trigger a counter-turn and the fly heads off in a new direction.
Flies have rapid reflexes that aid their escape from predators but their sustained flight speeds are low. Dolichopodid flies in the genus Condylostylus respond in less than five milliseconds to camera flashes by taking flight. In the past, the deer bot fly, Cephenemyia, was claimed to be one of the fastest insects on the basis of an estimate made visually by Charles Townsend in 1927. This claim, of speeds of 600 to 800 miles per hour, was regularly repeated until it was shown to be physically impossible as well as incorrect by Irving Langmuir. Langmuir suggested an estimated speed of 25 miles per hour.
Although most flies live and fly close to the ground, a few are known to fly at heights and a few like Oscinella (Chloropidae) are known to be dispersed by winds at altitudes of up to 2,000 ft and over long distances. Some hover flies like Metasyrphus corollae have been known to undertake long flights in response to aphid population spurts.
Males of fly species such as Cuterebra, many hover flies, bee flies (Bombyliidae) and fruit flies (Tephritidae) maintain territories within which they engage in aerial pursuit to drive away intruding males and other species. While these territories may be held by individual males, some species, such as A. freeborni, form leks with many males aggregating in displays. Some flies maintain an airspace and still others form dense swarms that maintain a stationary location with respect to landmarks. Many flies mate in flight while swarming.
Life cycle and development
Diptera go through a complete metamorphosis with four distinct life stages – egg, larva, pupa and adult.
Larva
In many flies, the larval stage is long and adults may have a short life. Most dipteran larvae develop in protected environments; many are aquatic and others are found in moist places such as carrion, fruit, vegetable matter, fungi and, in the case of parasitic species, inside their hosts. They tend to have thin cuticles and become desiccated if exposed to the air. Apart from the Brachycera, most dipteran larvae have sclerotised head capsules, which may be reduced to remnant mouth hooks; the Brachycera, however, have soft, gelatinized head capsules from which the sclerites are reduced or missing. Many of these larvae retract their heads into their thorax. The spiracles in the larva and pupa do not have any internal mechanical closing device.
Some other anatomical distinction exists between the larvae of the Nematocera and the Brachycera. Especially in the Brachycera, little demarcation is seen between the thorax and abdomen, though the demarcation may be visible in many Nematocera, such as mosquitoes; in the Brachycera, the head of the larva is not clearly distinguishable from the rest of the body, and few, if any, sclerites are present. Informally, such brachyceran larvae are called maggots, but the term is not technical and often applied indifferently to fly larvae or insect larvae in general. The eyes and antennae of brachyceran larvae are reduced or absent, and the abdomen also lacks appendages such as cerci. This lack of features is an adaptation to food such as carrion, decaying detritus, or host tissues surrounding endoparasites. Nematoceran larvae generally have well-developed eyes and antennae, while those of Brachyceran larvae are reduced or modified.
Dipteran larvae have no jointed, "true legs", but some dipteran larvae, such as species of Simuliidae, Tabanidae and Vermileonidae, have prolegs adapted to hold onto a substrate in flowing water, host tissues or prey. The majority of dipterans are oviparous and lay batches of eggs, but some species are ovoviviparous, with the larvae starting development inside the eggs before they hatch, or viviparous, with the larvae hatching and maturing in the body of the mother before being externally deposited. These are found especially in groups that have larvae dependent on food sources that are short-lived or are accessible for brief periods. This is widespread in some families such as the Sarcophagidae. In Hylemya strigosa (Anthomyiidae) the larva moults to the second instar before hatching, and in Termitoxenia (Phoridae) females have incubation pouches, and a fully developed third-instar larva is deposited by the adult and almost immediately pupates, with no free-feeding larval stage. The tsetse fly (as well as other Glossinidae, Hippoboscidae, Nycteribidae and Streblidae) exhibits adenotrophic viviparity; a single fertilised egg is retained in the oviduct and the developing larva feeds on glandular secretions. When fully grown, the female finds a spot with soft soil and the larva works its way out of the oviduct, buries itself and pupates. Some flies like Lundstroemia parthenogenetica (Chironomidae) reproduce by thelytokous parthenogenesis, and some gall midges have larvae that can produce eggs (paedogenesis).
Pupa
The pupae take various forms. In some groups, particularly the Nematocera, the pupa is intermediate between the larval and adult form; these pupae are described as "obtect", having the future appendages visible as structures that adhere to the pupal body. The outer surface of the pupa may be leathery and bear spines, respiratory features or locomotory paddles. In other groups, described as "coarctate", the appendages are not visible. In these, the outer surface is a puparium, formed from the last larval skin, and the actual pupa is concealed within. When the adult insect is ready to emerge from this tough, desiccation-resistant capsule, it inflates a balloon-like structure on its head, and forces its way out.
Adult
The adult stage is usually short; its function is only to mate and lay eggs. The genitalia of male flies are rotated to a varying degree from the position found in other insects. In some flies, this is a temporary rotation during mating, but in others, it is a permanent torsion of the organs that occurs during the pupal stage. This torsion may lead to the anus being below the genitals, or, in the case of 360° torsion, to the sperm duct being wrapped around the gut and the external organs being in their usual position. When flies mate, the male initially flies on top of the female, facing in the same direction, but then turns around to face in the opposite direction. This forces the male to lie on his back for his genitalia to remain engaged with those of the female, or the torsion of the male genitals allows the male to mate while remaining upright. This allows flies to reproduce in greater numbers, and much more quickly, than most insects. Flies occur in large populations due to their ability to mate effectively and quickly during the mating season. More primitive groups mate in the air during swarming, but most of the more advanced species with a 360° torsion mate on a substrate.
Ecology
As ubiquitous insects, dipterans play an important role at various trophic levels both as consumers and as prey. In some groups the larvae complete their development without feeding, and in others the adults do not feed. The larvae can be herbivores, scavengers, decomposers, predators or parasites, with the consumption of decaying organic matter being one of the most prevalent feeding behaviours. The fruit or detritus is consumed along with the associated micro-organisms, a sieve-like filter in the pharynx being used to concentrate the particles, while flesh-eating larvae have mouth-hooks to help shred their food. The larvae of some groups feed on or in the living tissues of plants and fungi, and some of these are serious pests of agricultural crops. Some aquatic larvae consume the films of algae that form underwater on rocks and plants. Many of the parasitoid larvae grow inside and eventually kill other arthropods, while parasitic larvae may attack vertebrate hosts.
Whereas many dipteran larvae are aquatic or live in enclosed terrestrial locations, the majority of adults live above ground and are capable of flight. Predominantly they feed on nectar or plant or animal exudates, such as honeydew, for which their lapping mouthparts are adapted. Some flies have functional mandibles that may be used for biting. The flies that feed on vertebrate blood have sharp stylets that pierce the skin, with some species having anticoagulant saliva that is regurgitated before absorbing the blood that flows; in this process, certain diseases can be transmitted. The bot flies (Oestridae) have evolved to parasitize mammals. Many species complete their life cycle inside the bodies of their hosts. The larvae of a few fly groups (Agromyzidae, Anthomyiidae, Cecidomyiidae) are capable of inducing plant galls. Some dipteran larvae are leaf-miners. The larvae of many brachyceran families are predaceous. In many dipteran groups, swarming is a feature of adult life, with clouds of insects gathering in certain locations; these insects are mostly males, and the swarm may serve the purpose of making their location more visible to females.
Most adult dipterans have their mouthparts modified to sponge up fluid. The adults of many species of flies (e.g. Anthomyia sp., Steganopsis melanogaster) that feed on liquid food will regurgitate fluid in a behaviour termed "bubbling", which is thought to help the insects evaporate water and concentrate food, or possibly to cool by evaporation. Some adult dipterans, such as members of the Sarcophagidae, are known for kleptoparasitism. The Miltogramminae are known as "satellite flies" for their habit of following wasps and stealing their stung prey or laying their eggs into them. Phorids, milichids and the genus Bengalia are known to steal food carried by ants. Adults of Ephydra hians forage underwater, and have special hydrophobic hairs that trap a bubble of air that lets them breathe underwater.
Anti-predator adaptations
Flies are eaten by other animals at all stages of their development. The eggs and larvae are parasitised by other insects and are eaten by many creatures, some of which specialise in feeding on flies but most of which consume them as part of a mixed diet. Birds, bats, frogs, lizards, dragonflies and spiders are among the predators of flies. Many flies have evolved mimetic resemblances that aid their protection. Batesian mimicry is widespread with many hoverflies resembling bees and wasps, ants and some species of tephritid fruit fly resembling spiders. Some species of hoverfly are myrmecophilous—their young live and grow within the nests of ants. They are protected from the ants by imitating chemical odours given by ant colony members. Bombyliid bee flies such as Bombylius major are short-bodied, round, furry, and distinctly bee-like as they visit flowers for nectar, and are likely also Batesian mimics of bees.
In contrast, Drosophila subobscura, a species of fly in the genus Drosophila, lacks a category of hemocytes that are present in other studied species of Drosophila, leading to an inability to defend against parasitic attacks, a form of innate immunodeficiency.
Human interaction and cultural depictions
Symbolism
Flies play a variety of symbolic roles in different cultures. These include both positive and negative roles in religion. In the traditional Navajo religion, Big Fly is an important spirit being. In Christian demonology, Beelzebub is a demonic fly, the "Lord of the Flies", and a god of the Philistines.
Flies have appeared in literature since ancient Sumer. In a Sumerian poem, a fly helps the goddess Inanna when her husband Dumuzid is being chased by galla demons. In the Mesopotamian versions of the flood myth, the dead corpses floating on the waters are compared to flies. Later, the gods are said to swarm "like flies" around the hero Utnapishtim's offering. Flies appear on Old Babylonian seals as symbols of Nergal, the god of death. Fly-shaped lapis lazuli beads were often worn in ancient Mesopotamia, along with other kinds of fly-jewellery.
In Ancient Egypt, flies appear in amulets and as a military award for bravery and tenacity, because they always come back when swatted at. It is thought that flies may have also been associated with the departing spirit of the dead, as they are often found near dead bodies. In modern Egypt, a similar belief persists in some areas that one should not swat at shiny green flies, as they may be carrying the soul of a recently deceased person.
In a little-known Greek myth, a very chatty and talkative maiden named Myia (meaning "fly") enraged the moon-goddess Selene by attempting to seduce her lover, the sleeping Endymion, and was thus turned by the angry goddess into a fly, who now always deprives people of their sleep in memory of her past life. In Prometheus Bound, which is attributed to the Athenian tragic playwright Aeschylus, a gadfly sent by Zeus's wife Hera pursues and torments his mistress Io, who has been transformed into a cow and is watched constantly by the hundred eyes of the herdsman Argus: "Io: Ah! Hah! Again the prick, the stab of gadfly-sting! O earth, earth, hide, the hollow shape—Argus—that evil thing—the hundred-eyed." William Shakespeare, inspired by Aeschylus, has Tom o'Bedlam in King Lear, "Whom the foul fiend hath led through fire and through flame, through ford and whirlpool, o'er bog and quagmire", driven mad by the constant pursuit. In Antony and Cleopatra, Shakespeare similarly likens Cleopatra's hasty departure from the Actium battlefield to that of a cow chased by a gadfly. More recently, in 1962 the biologist Vincent Dethier wrote To Know a Fly, introducing the general reader to the behaviour and physiology of the fly.
Musca depicta ("painted fly" in Latin) is a depiction of a fly as an inconspicuous element of various paintings. This feature was widespread in 15th and 16th centuries paintings and its presence may be explained by various reasons.
Flies appear in popular culture in concepts such as fly-on-the-wall documentary-making in film and television production. The metaphoric name suggests that events are seen candidly, as a fly might see them. Flies have inspired the design of miniature flying robots. Steven Spielberg's 1993 film Jurassic Park relied on the idea that DNA could be preserved in the stomach contents of a blood-sucking fly fossilised in amber, though the mechanism has been discounted by scientists.
Economic importance
Dipterans are an important group of insects and have a considerable impact on the environment. Some leaf-miner flies (Agromyzidae), fruit flies (Tephritidae and Drosophilidae) and gall midges (Cecidomyiidae) are pests of agricultural crops; others such as tsetse flies, screwworm and botflies (Oestridae) attack livestock, causing wounds, spreading disease, and creating significant economic harm. See article: Parasitic flies of domestic animals. A few can even cause myiasis in humans. Still others such as mosquitoes (Culicidae), blackflies (Simuliidae) and drain flies (Psychodidae) impact human health, acting as vectors of major tropical diseases.
Among these, Anopheles mosquitoes transmit malaria, filariasis, and arboviruses; Aedes aegypti mosquitoes carry dengue fever and the Zika virus; blackflies carry river blindness; sand flies carry leishmaniasis. Other dipterans are a nuisance to humans, especially when present in large numbers; these include houseflies, which contaminate food and spread food-borne illnesses; the biting midges and sandflies (Ceratopogonidae) and the houseflies and stable flies (Muscidae). In tropical regions, eye flies (Chloropidae) which visit the eye in search of fluids can be a nuisance in some seasons.
Many dipterans serve roles that are useful to humans. Houseflies, blowflies and fungus gnats (Mycetophilidae) are scavengers and aid in decomposition. Robber flies (Asilidae), tachinids (Tachinidae) and dagger flies and balloon flies (Empididae) are predators and parasitoids of other insects, helping to control a variety of pests. Many dipterans such as bee flies (Bombyliidae) and hoverflies (Syrphidae) are pollinators of crop plants.
Uses
Drosophila melanogaster, a fruit fly, has long been used as a model organism in research because of the ease with which it can be bred and reared in the laboratory, its small genome, and the fact that many of its genes have counterparts in higher eukaryotes. A large number of genetic studies have been undertaken based on this species; these have had a profound impact on the study of gene expression, gene regulatory mechanisms and mutation. Other studies have investigated physiology, microbial pathogenesis and development among other research topics. The studies on dipteran relationships by Willi Hennig helped in the development of cladistics, techniques that he applied to morphological characters but now adapted for use with molecular sequences in phylogenetics.
Maggots found on corpses are useful to forensic entomologists. Maggot species can be identified by their anatomical features and by matching their DNA. Maggots of different species of flies visit corpses and carcases at fairly well-defined times after the death of the victim, and so do their predators, such as beetles in the family Histeridae. Thus, the presence or absence of particular species provides evidence for the time since death, and sometimes other details such as the place of death, when species are confined to particular habitats such as woodland.
Some species of maggots such as blowfly larvae (gentles) and bluebottle larvae (casters) are bred commercially; they are sold as bait in angling, and as food for carnivorous animals (kept as pets, in zoos, or for research) such as some mammals, fishes, reptiles, and birds. It has been suggested that fly larvae could be used at a large scale as food for farmed chickens, pigs, and fish. However, consumers are opposed to the inclusion of insects in their food, and the use of insects in animal feed remains illegal in areas such as the European Union.
Fly larvae can be used as a biomedical tool for wound care and treatment. Maggot debridement therapy (MDT) is the use of blow fly larvae to remove the dead tissue from wounds, most commonly amputation wounds. Historically, this has been practised for centuries, both intentionally and unintentionally, on battlefields and in early hospital settings. Removing the dead tissue promotes cell growth and healthy wound healing. The larvae also have biochemical properties such as antibacterial activity found in their secretions as they feed. These medicinal maggots are a safe and effective treatment for chronic wounds.
The Sardinian cheese casu marzu is exposed to flies known as cheese skippers such as Piophila casei, members of the family Piophilidae. The digestive activities of the fly larvae soften the cheese and modify the aroma as part of the process of maturation. At one time European Union authorities banned sale of the cheese and it was becoming hard to find, but the ban has been lifted on the grounds that the cheese is a traditional local product made by traditional methods.
Hazards
Flies are a health hazard and are attracted to toilets by their smell. New Scientist magazine suggested a trap for these flies: because toilets are generally dark inside, particularly if the door is closed, a pipe acting as a chimney fitted to the toilet lets in some light that attracts the flies up to the end of the pipe, where a gauze covering prevents their escape to the air outside, so that they are trapped and die.
| Biology and health sciences | Flies (Diptera) | null |
63679 | https://en.wikipedia.org/wiki/Van | Van | A van is a type of road vehicle used for transporting goods or people. There is some variation in the scope of the word across the different English-speaking countries. The smallest vans, microvans, are used for transporting either goods or people in tiny quantities. Mini MPVs, compact MPVs, and MPVs are all small vans usually used for transporting people in small quantities. Larger vans with passenger seats are used for institutional purposes, such as transporting students. Larger vans with only front seats are often used for business purposes, to carry goods and equipment. Specially equipped vans are used by television stations as mobile studios. Postal services and courier companies use large step vans to deliver packages.
Word origin and usage
Van meaning a type of vehicle arose as a contraction of the word caravan. The earliest records of a van as a vehicle in English are in the mid-19th century, meaning a covered wagon for transporting goods; the earliest reported record of such was in 1829. The word caravan with the same meaning has been used since the 1670s. Caravan, meaning a single wagon, had arisen as an extension, or corruption, of caravan meaning a convoy of multiple wagons.
The word van has slightly different, but overlapping, meanings in different forms of English. While the word now applies everywhere to boxy cargo vans, other applications are found to a greater or lesser extent in different English-speaking countries; some examples follow:
Australia
In Australian English, the term van is commonly used to describe a minivan, a passenger minibus, or an Australian panel van as manufactured by companies such as Holden and Ford at various times.
A full-size van used for commercial purposes is also known as a "van" in Australia; however, a passenger vehicle with more than seven or eight seats is more likely to be called a "minibus".
The term van can also sometimes be used interchangeably with what Australians usually call a "caravan", which in the U.S. is referred to as a "travel trailer".
The British term people mover is also used in Australian English to describe a passenger van. The American usage of "van" which describes a cargo box trailer or semi-trailer is used rarely, if ever, in Australia.
India
In India, the van is one of the most common modes of transportation and is often used for taking children to and from school, usually when parents, especially working parents, are too busy to pick their children up from school or when school buses are full and unable to accommodate other children. Vans are also used for commercial purposes and as office cabs. Popular vans include the Maruti Suzuki Omni and the Maruti Suzuki Eeco.
Japan
Early Japanese vans include the Kurogane Baby, Mazda Bongo, and the Toyota LiteAce. The Japanese also produced many vans based on the American flat-nose model, as well as minivans, which for the American market have generally evolved into the long-wheelbase, front-wheel-drive form. The Nissan Prairie and Mitsubishi Chariot, as well as microvans that fulfill kei car regulations, are popular for small businesses. The term is also used to describe full-fledged station wagons (passenger car front sheet metal, flat-folding back seats, windows all around) and even hatchbacks with basic trim packages intended for commercial use. These are referred to as "light vans", with "light" referring to the glazing rather than the weight of the vehicle.
United Kingdom
In British English, the word van refers to vehicles that carry goods only, either on roads or on rails. What would be called a "minivan" in American English is called a "people-carrier", "MPV" or multi-purpose vehicle, and larger passenger vehicles are called "minibuses". The Telegraph newspaper introduced the idea of the "White Van Man", a typical working class man or small business owner who would have a white Ford Transit, Mercedes-Benz Sprinter, or similar panel van. Today the phrase "man and van" or "man with a van" refers to light removal firms normally operated by a sole business owner transporting anything from the contents of a whole house to just a few boxes. The word "van" also refers to railway covered goods wagons, called "boxcars" in the United States.
United States
In the United States, a van can also refer to a box-shaped trailer or semi-trailer used to carry goods. In this case, there is a differentiation between a "dry van", used to carry most goods, and a refrigerated van, or "reefer", used for cold goods. A railway car used to carry baggage is also called a "van".
A vehicle referred to in the US as a "full-size van" is usually a large, boxy vehicle that has a platform and powertrain similar to their light truck counterparts. These vans may be sold with the space behind the front seats empty for transporting goods (cargo van), furnished for passenger use by either the manufacturer (wagon), or another company for more personal comforts (conversion van). Full-size vans often have short hoods, with the engine placed under the passenger cabin.
A cutaway van chassis is a variation of the full-size van that was developed for use by second stage manufacturers. Such a unit has a van front end and driver controls in a cab body that extends to a point behind the front seats, where the rest of the van body is cut off (leading to the terminology "cutaway"). From that point aft, only the chassis frame rails and running gear extend to the rear when the unit is shipped as an "incomplete vehicle". A second-stage manufacturer, commonly known as a bodybuilder, will complete the vehicle for uses such as recreational vehicles, small school buses, minibusses, type III ambulances, and delivery trucks. A large proportion of cutaway van chassis are equipped with dual rear wheels. Second-stage manufacturers sometimes add third weight-bearing single wheel "tag axles" for their larger minibus models.
The term van in the US may also refer to a minivan. Minivans are usually distinguished by their smaller size and front wheel drive powertrain, although some are equipped with four-wheel drive. Minivans typically offer seven- or eight-passenger seating capacity, and better fuel economy than full-sized vans, at the expense of power, cargo space, and towing capacity. Minivans are often equipped with sliding doors.
History
The precursors to American vans were the sedan deliveries of the 1930s to the late 1950s. The first generation of American vans were the 1960s compact vans, which were patterned in size after the Volkswagen Bus. The Corvair-based entry even imitated the rear-mounted, air-cooled engine design. The Ford Falcon-based first-generation Econoline had a flat nose, with the engine mounted between and behind the front seats. The Dodge A100 had a similar layout and could accommodate a V8 engine. Chevrolet also switched to this layout. The Ford, Dodge, and Corvair vans were also produced as pickup trucks.
The standard or full-size vans appeared with Ford's innovation of moving the engine forward under a short hood and using pickup truck components. The engine cockpit housing is often called a dog house. Over time, these vans evolved longer noses and sleeker shapes. The Dodge Sportsman was available with an extension to the rear of its long-wheelbase model to create a 15-passenger van. Vehicles have been sold as both cargo and passenger models, as well as in cutaway van chassis versions for second-stage manufacturers to make box vans, ambulances, campers, and other vehicles. Second-stage manufacturers also modify the original manufacturer's body to create custom vans.
Use
In urban areas of the United States, full-size vans have been used as commuter vans since 1971, when Dodge introduced a van that could transport up to 15 passengers. Commuter vans are used as an alternative to carpooling and other ride-sharing arrangements.
Many mobile businesses use a van to carry almost their entire business to the various places where they work, for example, tradespeople who come to homes or places of business to perform services, installations, or repairs. Vans are also used to shuttle people and their luggage between hotels and airports, to transport commuters between parking lots and their places of work, and along established routes as minibuses. They transport elderly and mobility-impaired worshipers to and from church services, carry youth groups on outings to amusement parks, picnics, and other churches, and are used by schools to drive sports teams to intramural games. Touring music groups use vans to haul equipment and people to music venues around the country.
Full-size van
Full-size van is a marketing term used in North America for a van larger than a minivan, that is characterized by a large, boxy appearance, a short hood, and heavy cargo and passenger-hauling capability.
The first full-size van was the 1969 Ford Econoline, which used components from the Ford F-Series pickups. General Motors and the Dodge Ram Van followed with designs that placed the engines further forward, and succeeding generations of the Econoline introduced longer hoods.
Step van
Another type of van specific to North America is the step van, so named because its design makes it easy for users to step in and out of the vehicle. Widely used by delivery services, courier companies, and the parcel divisions of the US Postal Service and Canada Post, step vans are often seen driven with the door open. They have boxier shapes, wider bodies, and higher rooftops than other vans, and are rarely used to carry passengers.
Minivan
A minivan is a van that is smaller in length and height than a full-size van. Minivans are often used for personal transport, as well as for commercial passenger operations such as taxis and shuttles and cargo operations such as mail and package delivery. They offer more cargo space than traditional sedans and SUVs, and their lower center of gravity improves handling and helps prevent rollovers.
Rollover safety
A van is taller than a typical passenger car, resulting in a higher center of gravity. The suspension is also raised to accommodate the weight of 15 passengers, who together can weigh over one ton. In the United States, it is common for only the front-seat passengers to use their safety belts. The U.S. National Highway Traffic Safety Administration (NHTSA) has determined that belted passengers are about four times more likely to survive rollover crashes.
According to the NHTSA, safety can be improved by understanding the unique characteristics of 12- and 15-passenger vans and by following guidelines developed for their drivers.
Safety equipment
Many commercial vans are fitted with cargo barriers behind the front seats (or rear seats, if fitted) to prevent injuries caused by unsecured cargo in the event of sudden deceleration, collision, or a rollover. Cargo barriers in vans are sometimes fitted with doors permitting the driver to pass through to the cargo compartment of the vehicle.
| Technology | Road transport | null |
63704 | https://en.wikipedia.org/wiki/Carpal%20bones | Carpal bones | The carpal bones are the eight small bones that make up the wrist (carpus) that connects the hand to the forearm. The terms "carpus" and "carpal" are derived from the Latin carpus and the Greek καρπός (karpós), meaning "wrist". In human anatomy, the main role of the carpal bones is to articulate with the radial and ulnar heads to form a highly mobile condyloid joint (i.e. wrist joint), to provide attachments for thenar and hypothenar muscles, and to form part of the rigid carpal tunnel which allows the median nerve and tendons of the anterior forearm muscles to be transmitted to the hand and fingers.
In tetrapods, the carpus is the sole cluster of bones in the wrist between the radius and ulna and the metacarpus. The bones of the carpus do not belong to individual fingers (or toes in quadrupeds), whereas those of the metacarpus do. The corresponding part of the foot is the tarsus. The carpal bones allow the wrist to move and rotate vertically.
Structure
Bones
The eight carpal bones may be conceptually organized as either two transverse rows, or three longitudinal columns.
When considered as paired rows, each row forms an arch which is convex proximally and concave distally. On the palmar side, the carpus is concave and forms the carpal tunnel, which is covered by the flexor retinaculum. The proximal row comprises the scaphoid, lunate, triquetral, and pisiform bones which articulate with the surfaces of the radius and distal carpal row, and thus constantly adapts to these mobile surfaces. Within the proximal row, each carpal bone has slight independent mobility. For example, the scaphoid contributes to midcarpal stability by articulating distally with the trapezium and the trapezoid. In contrast, the distal row is more rigid as its transverse arch moves with the metacarpals.
Biomechanically and clinically, the carpal bones are better conceptualized as three longitudinal columns:
Radial scaphoid column: scaphoid, trapezium, and trapezoid
Lunate column: lunate and capitate
Ulnar triquetral column: triquetrum and hamate
In this context the pisiform is regarded as a sesamoid bone embedded in the tendon of the flexor carpi ulnaris. The ulnar column leaves a gap between the ulna and the triquetrum, and therefore, only the radial or scaphoid and central or capitate columns articulate with the radius. The wrist is more stable in flexion than in extension, more because of the strength of the various capsules and ligaments than because of the interlocking parts of the skeleton.
Almost all carpals (except the pisiform) have six surfaces. Of these the palmar or anterior and the dorsal or posterior surfaces are rough, for ligamentous attachment; the dorsal surfaces being the broader, except in the lunate.
The superior or proximal, and inferior or distal surfaces are articular, the superior generally convex, the inferior concave; the medial and lateral surfaces are also articular where they are in contact with contiguous bones, otherwise they are rough and tuberculated.
The structure in all is similar: cancellous tissue enclosed in a layer of compact bone.
Joints
Accessory bones
Occasionally accessory bones are found in the carpus, but of more than 20 such described bones, only four (the central, styloid, secondary trapezoid, and secondary pisiform bones) are considered to be proven accessory bones. Sometimes the scaphoid, triquetrum, and pisiform bones are divided into two.
Development
The carpal bones are ossified endochondrally (from within the cartilage) and the ossific centers appear only after birth.
The formation of these centers roughly follows a chronological spiral pattern starting in the capitate and hamate during the first year of life. The ulnar bones are then ossified before the radial bones, while the sesamoid pisiform arises in the tendon of the flexor carpi ulnaris after more than ten years.
The commencement of ossification for each bone occurs over a characteristic period, as with other bones. This is useful in forensic age estimation.
Function
Ligaments
There are four groups of ligaments in the region of the wrist:
The ligaments of the wrist proper which unite the ulna and radius with the carpus: the ulnar and radial collateral ligaments; the palmar and dorsal radiocarpal ligaments; and the palmar ulnocarpal ligament. (Shown in blue in the figure.)
The ligaments of the intercarpal articulations which unite the carpal bones with one another: the radiate carpal ligament; the dorsal, palmar, and interosseous intercarpal ligaments; and the pisohamate ligament. (Shown in red in the figure.)
The ligaments of the carpometacarpal articulations which unite the carpal bones with the metacarpal bones: the pisometacarpal ligament and the palmar and dorsal carpometacarpal ligaments. (Shown in green in the figure.)
The ligaments of the intermetacarpal articulations which unite the metacarpal bones: the dorsal, interosseous, and palmar metacarpal ligaments. (Shown in yellow in the figure.)
Movements
The hand is said to be in straight position when the third finger runs over the capitate bone and is in a straight line with the forearm. This should not be confused with the midposition of the hand which corresponds to an ulnar deviation of 12 degrees. From the straight position two pairs of movements of the hand are possible: abduction (movement towards the radius, so called radial deviation or abduction) of 15 degrees and adduction (movement towards the ulna, so called ulnar deviation or adduction) of 40 degrees when the arm is in strict supination and slightly greater in strict pronation.
Flexion (tilting towards the palm, so called palmar flexion) and extension (tilting towards the back of the hand, so called dorsiflexion) is possible with a total range of 170 degrees.
Radial abduction/ulnar adduction
During radial abduction the scaphoid is tilted towards the palmar side which allows the trapezium and trapezoid to approach the radius. Because the trapezoid is rigidly attached to the second metacarpal bone to which also the flexor carpi radialis and extensor carpi radialis are attached, radial abduction effectively pulls this combined structure towards the radius. During radial abduction the pisiform traverses the greatest path of all carpal bones.
Radial abduction is produced by (in order of importance) extensor carpi radialis longus, abductor pollicis longus, extensor pollicis longus, flexor carpi radialis, and flexor pollicis longus.
Ulnar adduction causes a tilting or dorsal shifting of the proximal row of carpal bones.
It is produced by extensor carpi ulnaris, flexor carpi ulnaris, extensor digitorum, and extensor digiti minimi.
Both radial abduction and ulnar adduction occurs around a dorsopalmar axis running through the head of the capitate bone.
Palmar flexion/dorsiflexion
During palmar flexion the proximal carpal bones are displaced towards the dorsal side and towards the palmar side during dorsiflexion. While flexion and extension consist of movements around a pair of transverse axes — passing through the lunate bone for the proximal row and through the capitate bone for the distal row — palmar flexion occurs mainly in the radiocarpal joint and dorsiflexion in the midcarpal joint.
Dorsiflexion is produced by (in order of importance) extensor digitorum, extensor carpi radialis longus, extensor carpi radialis brevis, extensor indicis, extensor pollicis longus, and extensor digiti minimi. Palmar flexion is produced by (in order of importance) flexor digitorum superficialis, flexor digitorum profundus, flexor carpi ulnaris, flexor pollicis longus, flexor carpi radialis, and abductor pollicis longus.
Combined movements
Combined with movements in both the elbow and shoulder joints, intermediate or combined movements in the wrist approximate those of a ball-and-socket joint with some necessary restrictions, such as maximum palmar flexion blocking abduction.
Accessory movements
Anteroposterior gliding movements between adjacent carpal bones or along the midcarpal joint can be achieved by stabilizing individual bones while moving another (i.e. gripping the bone between the thumb and index finger).
Other animals
The structure of the carpus varies widely between different groups of tetrapods, even among those that retain the full set of five digits. In primitive fossil amphibians, such as Eryops, the carpus consists of three rows of bones; a proximal row of three carpals, a second row of four bones, and a distal row of five bones. The proximal carpals are referred to as the radiale, intermedium, and ulnare, after their proximal articulations, and are homologous with the scaphoid, lunate, and triquetral bones respectively. The remaining bones are simply numbered, as the first to fourth centralia (singular: centrale), and the first to fifth distal carpals. Primitively, each of the distal bones appears to have articulated with a single metacarpal.
However, the vast majority of later vertebrates, including modern amphibians, have undergone varying degrees of loss and fusion of these primitive bones, resulting in a smaller number of carpals. Almost all mammals and reptiles, for example, have lost the fifth distal carpal, and have only a single centrale - and even this is missing in humans. The pisiform bone is somewhat unusual, in that it first appears in primitive reptiles, and is never found in amphibians.
Because many tetrapods have fewer than five digits on the forelimb, even greater degrees of fusion are common, and a huge array of different possible combinations are found. The wing of a modern bird, for example, has only two remaining carpals; the radiale (the scaphoid of mammals) and a bone formed from the fusion of four of the distal carpals.
The carpus and tarsus are both described as podial elements or (clusters of) podial bones.
In some macropods, the scaphoid and lunar bones are fused into the scapholunar bone.
In crustaceans, "carpus" is the scientific term for the claws or "pincers" present on some legs. (See Decapod anatomy)
Etymology
The Latin word "carpus" is derived from Greek meaning "wrist". The root "carp-" translates to "pluck", an action performed by the wrist.
| Biology and health sciences | Skeletal system | Biology |
63764 | https://en.wikipedia.org/wiki/Dysentery | Dysentery | {{Infobox medical condition (new)
| name = Dysentery
| synonyms = Bloody diarrhea
| image = Dysentery Patient, Burma Hospital, Siam Art.IWMART1541787.jpg
| caption = A person with dysentery in a Burmese POW camp, 1943
| field = Infectious disease
| symptoms = Bloody diarrhea, abdominal pain, fever
| complications = Dehydration
| onset =
| duration = Less than a week
| types =
| causes = Usually Shigella or Entamoeba histolytica
| risks = Contamination of food and water with feces due to poor sanitation
| diagnosis = Based on symptoms, Stool test
| differential =
| prevention = Hand washing, food safety
| treatment = Drinking sufficient fluids, antibiotics (severe cases)
| medication =
| prognosis =
| frequency = Occurs often in many parts of the world
| deaths = 1.1 million a year
}}
Dysentery, historically known as the bloody flux, is a type of gastroenteritis that results in bloody diarrhea. Other symptoms may include fever, abdominal pain, and a feeling of incomplete defecation. Complications may include dehydration.
The cause of dysentery is usually the bacteria from genus Shigella, in which case it is known as shigellosis, or the amoeba Entamoeba histolytica; then it is called amoebiasis. Other causes may include certain chemicals, other bacteria, other protozoa, or parasitic worms. It may spread between people. Risk factors include contamination of food and water with feces due to poor sanitation. The underlying mechanism involves inflammation of the intestine, especially of the colon.
Efforts to prevent dysentery include hand washing and food safety measures while traveling in countries of high risk. While the condition generally resolves on its own within a week, drinking sufficient fluids such as oral rehydration solution is important. Antibiotics such as azithromycin may be used to treat cases associated with travelling in the developing world. While medications used to decrease diarrhea such as loperamide are not recommended on their own, they may be used together with antibiotics. Shigella results in about 165 million cases of diarrhea and 1.1 million deaths a year, with nearly all cases in the developing world. In areas with poor sanitation, nearly half of cases of diarrhea are due to Entamoeba histolytica. Entamoeba histolytica affects millions of people and results in more than 55,000 deaths a year. It commonly occurs in less developed areas of Central and South America, Africa, and Asia. Dysentery has been described at least since the time of Hippocrates.
Signs and symptoms
The most common form of dysentery is bacillary dysentery, which is typically a mild sickness, causing symptoms normally consisting of mild abdominal pains and frequent passage of loose stools or diarrhea. Symptoms normally present themselves after 1–3 days, and are usually no longer present after a week. The frequency of urges to defecate, the large volume of liquid feces ejected, and the presence of blood, mucus, or pus depend on the pathogen causing the disease. Temporary lactose intolerance can occur, as well. On some occasions, severe abdominal cramps, fever, shock, and delirium can all be symptoms.
In extreme cases, people may pass more than one liter of fluid per hour. More often, individuals will complain of diarrhea with blood, accompanied by extreme abdominal pain, rectal pain and a low-grade fever. Rapid weight loss and muscle aches sometimes also accompany dysentery, while nausea and vomiting are rare.
On rare occasions, the amoebic parasite will invade the body through the bloodstream and spread beyond the intestines. In such cases, it may more seriously infect other organs such as the brain, lungs, and most commonly the liver.
Cause
Dysentery results from bacterial or parasitic infections. Viruses do not generally cause the disease. These pathogens typically reach the large intestine after entering orally, through ingestion of contaminated food or water, oral contact with contaminated objects or hands, and so on. Each specific pathogen has its own mechanism or pathogenesis, but in general, the result is damage to the intestinal linings, leading to inflammatory immune responses. This can cause an elevated body temperature (fever), painful spasms of the intestinal muscles (cramping), swelling due to fluid leaking from capillaries of the intestine (edema), and further tissue damage by the body's immune cells and the chemicals, called cytokines, which are released to fight the infection. The result can be impaired nutrient absorption, excessive water and mineral loss through the stools due to breakdown of the control mechanisms in the intestinal tissue that normally remove water from the stools, and in severe cases, the entry of pathogenic organisms into the bloodstream. Anemia may also arise due to the blood loss through diarrhea.
Bacterial infections that cause bloody diarrhea are typically classified as either invasive or toxigenic. Invasive species cause damage directly by invading the mucosa. Toxigenic species do not invade, but cause cellular damage by secreting toxins, resulting in bloody diarrhea. This is in contrast to toxins that cause watery diarrhea, which usually do not cause cellular damage but rather take over cellular machinery for a portion of the life of the cell.
Definitions of dysentery can vary by region and by medical specialty. The U.S. Centers for Disease Control and Prevention (CDC) limits its definition to "diarrhea with visible blood". Others define the term more broadly. These differences in definition must be taken into account when defining mechanisms. For example, using the CDC definition requires that intestinal tissue be so severely damaged that blood vessels have ruptured, allowing visible quantities of blood to be lost with defecation. Other definitions require less specific damage.
Amoebic dysentery
Amoebiasis, also known as amoebic dysentery, is caused by an infection from the amoeba Entamoeba histolytica, which is found mainly in tropical areas. Proper treatment of the underlying infection of amoebic dysentery is important; insufficiently treated amoebiasis can lie dormant for years and subsequently lead to severe, potentially fatal, complications.
When amoebae inside the bowel of an infected person are ready to leave the body, they group together and form a shell that surrounds and protects them. This group of amoebae is known as a cyst, which is then passed out of the person's body in the feces and can survive outside the body. If hygiene standards are poor – for example, if the person does not dispose of the feces hygienically – then it can contaminate the surroundings, such as nearby food and water.
If another person then eats or drinks food or water that has been contaminated with feces containing the cyst, that person will also become infected with the amoebae. Amoebic dysentery is particularly common in parts of the world where human feces are used as fertilizer.
After entering the person's body through the mouth, the cyst travels down into the stomach. The amoebae inside the cyst are protected from the stomach's digestive acid. From the stomach, the cyst travels to the intestines, where it breaks open and releases the amoebae, causing the infection. The amoebae can burrow into the walls of the intestines and cause small abscesses and ulcers to form. The cycle then begins again.
Bacillary dysentery
Dysentery may also be caused by shigellosis, an infection by bacteria of the genus Shigella, and is then known as bacillary dysentery (or Marlow syndrome). The term bacillary dysentery etymologically might seem to refer to any dysentery caused by any bacilliform bacteria, but its meaning is restricted by convention to Shigella dysentery.
Other bacteria
Some strains of Escherichia coli cause bloody diarrhea. The typical culprits are enterohemorrhagic Escherichia coli, of which O157:H7 is the best known. These types of E. coli also make Shiga toxin.
Diagnosis
A diagnosis may be made by taking a history and doing a brief examination. Dysentery should not be confused with hematochezia, which is the passage of fresh blood through the anus, usually in or with stools.
Physical exam
The mouth, skin, and lips may appear dry due to dehydration. Lower abdominal tenderness may also be present.
Stool and blood tests
Cultures of stool samples are examined to identify the organism causing dysentery. Usually, several samples must be obtained, since the number of amoebae changes daily. Blood tests can be used to measure abnormalities in the levels of essential minerals and salts.
Prevention
Efforts to prevent dysentery include hand washing and food safety measures while traveling in areas of high risk.
Vaccine
Although there is currently no vaccine that protects against Shigella infection, several are in development. Vaccination may eventually become a part of the strategy to reduce the incidence and severity of diarrhea, particularly among children in low-resource settings. For example, Shigella is a longstanding World Health Organization (WHO) target for vaccine development, and sharp declines in age-specific diarrhea/dysentery attack rates for this pathogen indicate that natural immunity does develop following exposure; thus, vaccination to prevent this disease should be feasible. The development of vaccines against these types of infection has been hampered by technical constraints, insufficient support for coordination, and a lack of market forces for research and development. Most vaccine development efforts are taking place in the public sector or as research programs within biotechnology companies.
Treatment
Dysentery is managed by maintaining fluids using oral rehydration therapy. If this treatment cannot be adequately maintained due to vomiting or the profuseness of diarrhea, hospital admission may be required for intravenous fluid replacement. In ideal situations, no antimicrobial therapy should be administered until microbiological microscopy and culture studies have established the specific infection involved. When laboratory services are not available, it may be necessary to administer a combination of drugs, including an amoebicidal drug to kill the parasite, and an antibiotic to treat any associated bacterial infection. Laudanum (Deodorized Tincture of Opium) may be used for severe pain and to combat severe diarrhea.
If shigellosis is suspected and it is not too severe, letting it run its course may be reasonable – usually less than a week. If the case is severe, antibiotics such as ciprofloxacin or TMP-SMX may be useful. However, many strains of Shigella are becoming resistant to common antibiotics, and effective medications are often in short supply in developing countries. If necessary, a doctor may have to reserve antibiotics for those at highest risk for death, including young children, people over 50, and anyone suffering from dehydration or malnutrition.
Amoebic dysentery is often treated with two antimicrobial drugs such as metronidazole and paromomycin or iodoquinol.
Prognosis
With correct treatment, most cases of amoebic and bacterial dysentery subside within 10 days, and most individuals achieve a full recovery within two to four weeks after beginning proper treatment. If the disease is left untreated, the prognosis varies with the immune status of the individual patient and the severity of disease. Extreme dehydration can delay recovery and significantly raises the risk for serious complications including death.
Epidemiology
Insufficient data exists, but Shigella is estimated to have caused the death of 34,000 children under the age of five in 2013, and 40,000 deaths in people over five years of age. Amoebiasis infects over 50 million people each year, of whom 50,000 die (one per thousand).
History
Shigella evolved with the human expansion out of Africa 50,000 to 200,000 years ago.
The seed, leaves, and bark of the kapok tree have been used in traditional medicines by indigenous peoples of the rainforest regions in the Americas, west-central Africa, and Southeast Asia in the treatment of this disease.
In 1915, Australian bacteriologist Fannie Eleanor Williams was serving as a medic in Greece with the Australian Imperial Force, receiving casualties directly from Gallipoli. In Gallipoli, dysentery was severely affecting soldiers and causing significant loss of manpower. Williams carried out serological investigations into dysentery, co-authoring several groundbreaking papers with Sir Charles Martin, director of the Lister Institute. The result of their work into dysentery was increased demand for specific diagnostics and curative sera. Bacillus subtilis was marketed throughout America and Europe from 1946 as an immunostimulatory aid in the treatment of gut and urinary tract diseases such as rotavirus and Shigella, but declined in popularity after the introduction of consumer antibiotics.
Notable cases
580: Childesinda, son of Chilperic I, Frankish king, died of dysentery as a child
580: Austregilde, Frankish queen, died of dysentery. According to Gregory of Tours she blamed her doctors for her death and asked her husband, King Guntram, to kill them after she died, which he did.
685: Constantine IV, the Byzantine emperor, died of dysentery in September 685.
1183: Henry the Young King died of dysentery at the castle of Martel on 11 June 1183.
1216: John, King of England died of dysentery at Newark Castle on 19 October 1216.
1270: Louis IX of France died of dysentery in Tunis while commanding his troops for the Eighth Crusade on 25 August 1270.
1307: Edward I of England caught dysentery on his way to the Scottish border and died in his servants' arms on 7 July 1307.
1322: Philip V of France died of dysentery at the Abbey of Longchamp (site of the present hippodrome in the Bois de Boulogne) in Paris while visiting his daughter, Blanche, who had taken her vows as a nun there in 1322. He died on 3 January 1322.
1376: Edward the Black Prince, son of Edward III of England and heir to the English throne. Died of apparent dysentery in June, after a months-long period of illness during which he predicted his own imminent death, in his 46th year.
1422: King Henry V of England died suddenly on 31 August 1422 at the Château de Vincennes, apparently from dysentery, which he had contracted during the siege of Meaux. He was 35 years old and had reigned for nine years.
1536: Erasmus, Dutch Renaissance humanist and theologian, died of dysentery at Basel.
1596: Sir Francis Drake, vice admiral, died of dysentery on 28 January 1596 whilst anchored off the coast of Portobelo.
1605: Akbar, ruler of the Mughal Empire of South Asia, died of dysentery. On 3 October 1605, he fell ill with an attack of dysentery, from which he never recovered. He is believed to have died on or about 27 October 1605, after which his body was buried in a mausoleum in Agra, present-day India.
1675: Jacques Marquette died of dysentery on his way north from what is today Chicago, traveling to the mission where he intended to spend the rest of his life.
1676: Nathaniel Bacon died of dysentery after taking control of Virginia following Bacon's Rebellion. He is believed to have died in October 1676, allowing Virginia's ruling elite to regain control.
1680: Shivaji, founder and ruler of the Maratha Empire of South Asia, died of dysentery on 3 April 1680. In 1680, Shivaji fell ill with fever and dysentery, dying around 3–5 April 1680 at the age of 52 on the eve of Hanuman Jayanti. He was cremated at Raigad Fort, where his Samadhi is built in Mahad, Raigad district of Maharashtra, India.
1827: Queen Nandi kaBhebhe, (mother of Shaka Zulu) died of dysentery on 10 October 1827.
1873: The explorer David Livingstone died of dysentery on 1 May 1873.
1896: Phan Đình Phùng, a Vietnamese revolutionary who led rebel armies against French colonial forces in Vietnam, died of dysentery as the French surrounded his forces on 21 January 1896.
1910: Luo Yixiu, first wife of Mao Zedong, died of dysentery on 11 February 1910. She was 20 years old.
1930: The French explorer and writer Michel Vieuchange died of dysentery in Agadir on 30 November 1930, on his return from the "forbidden city" of Smara. He was nursed by his brother, Doctor Jean Vieuchange, who was unable to save him. The notebooks and photographs, edited by Jean Vieuchange, went on to become bestsellers.
1942: The Selarang Barracks incident in the summer of 1942 during World War II involved the forced crowding of 17,000 Anglo-Australian prisoners-of-war (POWs) by their Japanese captors in the areas around the barracks square for nearly five days with little water and no sanitation after the Selarang Barracks POWs refused to sign a pledge not to escape. The incident ended with the surrender of the Australian commanders due to the spreading of dysentery among their men.
| Biology and health sciences | Infectious disease | null |
63780 | https://en.wikipedia.org/wiki/Sporangium | Sporangium | A sporangium (from Late Latin; plural: sporangia) is an enclosure in which spores are formed. It can be composed of a single cell or can be multicellular. Virtually all plants, fungi, and many other groups form sporangia at some point in their life cycle. Sporangia can produce spores by mitosis, but in land plants and many fungi, sporangia produce genetically distinct haploid spores by meiosis.
Fungi
In some phyla of fungi, the sporangium plays a role in asexual reproduction, and may play an indirect role in sexual reproduction. The sporangium forms on the sporangiophore and contains haploid nuclei and cytoplasm. Spores are formed in the sporangiophore by encasing each haploid nucleus and cytoplasm in a tough outer membrane. During asexual reproduction, these spores are dispersed via wind and germinate into haploid hyphae.
Although sexual reproduction in fungi varies between phyla, for some fungi the sporangium plays an indirect role in sexual reproduction. For Zygomycota, sexual reproduction occurs when the haploid hyphae from two individuals join to form a zygosporangium in response to unfavorable conditions. The haploid nuclei within the zygosporangium then fuse into diploid nuclei. When conditions improve, the zygosporangium germinates, undergoes meiosis and produces a sporangium, which releases spores.
Land plants
In mosses, liverworts and hornworts, an unbranched sporophyte produces a single sporangium, which may be quite complex morphologically. Most non-vascular plants, as well as many lycophytes and most ferns, are homosporous (only one kind of spore is produced). Some lycophytes, such as the Selaginellaceae and Isoetaceae, the extinct Lepidodendrales, and ferns, such as the Marsileaceae and Salviniaceae are heterosporous (two kinds of spores are produced). These plants produce both microspores and megaspores, which give rise to gametophytes that are functionally male or female, respectively. In most heterosporous plants there are two kinds of sporangia, termed microsporangia and megasporangia.
Sporangia can be terminal (on the tips) or lateral (placed along the side) of stems or associated with leaves. In ferns, sporangia are typically found on the abaxial surface (underside) of the leaf and are densely aggregated into clusters called sori. Sori may be covered by a structure called an indusium. Some ferns have their sporangia scattered along reduced leaf segments or along (or just in from) the margin of the leaf. Lycophytes, in contrast, bear their sporangia on the adaxial surface (the upper side) of leaves or laterally on stems. Leaves that bear sporangia are called sporophylls. If the plant is heterosporous, the sporangia-bearing leaves are distinguished as either microsporophylls or megasporophylls. In seed plants, sporangia are typically located within strobili or flowers.
Cycads form their microsporangia on microsporophylls which are aggregated into strobili. Megasporangia are formed into ovules, which are borne on megasporophylls, which are aggregated into strobili on separate plants (all cycads are dioecious). Conifers typically bear their microsporangia on microsporophylls aggregated into papery pollen strobili, and the ovules, are located on modified stem axes forming compound ovuliferous cone scales. Flowering plants contain microsporangia in the anthers of stamens (typically four microsporangia per anther) and megasporangia inside ovules inside ovaries. In all seed plants, spores are produced by meiosis and develop into gametophytes while still inside the sporangium. The microspores become microgametophytes (pollen). The megaspores become megagametophytes (embryo sacs).
Eusporangia and leptosporangia
Categorized based on developmental sequence, eusporangia and leptosporangia are differentiated in the vascular plants.
In a leptosporangium, found only in leptosporangiate ferns, development involves a single initial cell that becomes the stalk, wall, and spores within the sporangium. There are around 64 spores in a leptosporangium.
In a eusporangium, characteristic of all other vascular plants and some primitive ferns, the initial cells form a layer (i.e., there is more than one). A eusporangium is larger (and hence contains more spores), and its wall is multi-layered, although the wall may be stretched and damaged, leaving only one cell layer remaining.
Synangium
A cluster of sporangia that have become fused in development is called a synangium (pl. synangia). This structure is most prominent in Psilotum and Marattiaceae such as Christensenia, Danaea and Marattia.
Internal structures
A columella (pl. columellae) is a sterile (non-reproductive) structure that extends into and supports the sporangium of some species. In fungi, the columella, which may be branched or unbranched, may be of fungal or host origin. Secotium species have a simple, unbranched columella, while in Gymnoglossum species, the columella is branched. In some Geastrum species, the columella appears as an extension of the stalk into the spore mass (gleba).
| Biology and health sciences | Fungal morphology and anatomy | Biology |
63791 | https://en.wikipedia.org/wiki/Peptic%20ulcer%20disease | Peptic ulcer disease | Peptic ulcer disease is a condition in which the gastric mucosa (the lining of the stomach), the first part of the small intestine, or sometimes the lower esophagus becomes damaged. An ulcer in the stomach is called a gastric ulcer, while one in the first part of the intestines is a duodenal ulcer. The most common symptoms of a duodenal ulcer are waking at night with upper abdominal pain, and upper abdominal pain that improves with eating. With a gastric ulcer, the pain may worsen with eating. The pain is often described as a burning or dull ache. Other symptoms include belching, vomiting, weight loss, or poor appetite. About a third of older people with peptic ulcers have no symptoms. Complications may include bleeding, perforation, and blockage of the stomach. Bleeding occurs in as many as 15% of cases.
Common causes include infection with Helicobacter pylori and non-steroidal anti-inflammatory drugs (NSAIDs). Other, less common causes include tobacco smoking, stress as a result of other serious health conditions, Behçet's disease, Zollinger–Ellison syndrome, Crohn's disease, and liver cirrhosis. Older people are more sensitive to the ulcer-causing effects of NSAIDs. The diagnosis is typically suspected due to the presenting symptoms with confirmation by either endoscopy or barium swallow. H. pylori can be diagnosed by testing the blood for antibodies, a urea breath test, testing the stool for signs of the bacteria, or a biopsy of the stomach. Other conditions that produce similar symptoms include stomach cancer, coronary heart disease, and inflammation of the stomach lining or gallbladder inflammation.
Diet does not play an important role in either causing or preventing ulcers. Treatment includes stopping smoking, stopping use of NSAIDs, stopping alcohol, and taking medications to decrease stomach acid. The medication used to decrease acid is usually either a proton pump inhibitor (PPI) or an H2 blocker, with four weeks of treatment initially recommended. Ulcers due to H. pylori are treated with a combination of medications, such as amoxicillin, clarithromycin, and a PPI. Antibiotic resistance is increasing and thus treatment may not always be effective. Bleeding ulcers may be treated by endoscopy, with open surgery typically only used in cases in which it is not successful.
Peptic ulcers are present in around 4% of the population. New ulcers were found in around 87.4 million people worldwide during 2015. About 10% of people develop a peptic ulcer at some point in their life. Peptic ulcers resulted in 267,500 deaths in 2015, down from 327,000 in 1990. The first description of a perforated peptic ulcer was in 1670, in Princess Henrietta of England. H. pylori was first identified as causing peptic ulcers by Barry Marshall and Robin Warren in the late 20th century, a discovery for which they received the Nobel Prize in 2005.
Signs and symptoms
Signs and symptoms of a peptic ulcer can include one or more of the following:
abdominal pain, classically epigastric, strongly correlated with mealtimes. In case of duodenal ulcers, the pain appears about three hours after taking a meal and wakes the person from sleep;
bloating and abdominal fullness;
waterbrash (a rush of saliva after an episode of regurgitation to dilute the acid in esophagus, although this is more associated with gastroesophageal reflux disease);
nausea and copious vomiting;
loss of appetite and weight loss, in gastric ulcer;
weight gain, in duodenal ulcer, as the pain is relieved by eating;
hematemesis (vomiting of blood); this can occur due to bleeding directly from a gastric ulcer or from damage to the esophagus from severe/continuing vomiting.
melena (tarry, foul-smelling feces due to presence of oxidized iron from hemoglobin);
rarely, an ulcer can lead to a gastric or duodenal perforation, which leads to acute peritonitis and extreme, stabbing pain, and requires immediate surgery.
A history of heartburn or gastroesophageal reflux disease (GERD) and use of certain medications can raise the suspicion for peptic ulcer. Medicines associated with peptic ulcer include NSAIDs (non-steroidal anti-inflammatory drugs) that inhibit cyclooxygenase and most glucocorticoids (e.g., dexamethasone and prednisolone).
In people over the age of 45 with more than two weeks of the above symptoms, the odds for peptic ulceration are high enough to warrant rapid investigation by esophagogastroduodenoscopy.
The timing of symptoms in relation to the meal may differentiate between gastric and duodenal ulcers. A gastric ulcer would give epigastric pain during the meal, associated with nausea and vomiting, as gastric acid production is increased as food enters the stomach. Pain in duodenal ulcers would be aggravated by hunger and relieved by a meal and is associated with night pain.
Also, the symptoms of peptic ulcers may vary with the location of the ulcer and the person's age. Furthermore, typical ulcers tend to heal and recur, and as a result the pain may occur for a few days or weeks and then wane or disappear. Usually, children and the elderly do not develop any symptoms unless complications have arisen.
A burning or gnawing feeling in the stomach area lasting between 30 minutes and 3 hours commonly accompanies ulcers. This pain can be misinterpreted as hunger, indigestion, or heartburn. Pain is usually caused by the ulcer, but it may be aggravated by the stomach acid when it comes into contact with the ulcerated area. The pain caused by peptic ulcers can be felt anywhere from the navel up to the sternum, may last from a few minutes to several hours, and may be worse when the stomach is empty. Also, sometimes the pain may flare at night, and it can commonly be temporarily relieved by eating foods that buffer stomach acid or by taking anti-acid medication. However, peptic ulcer disease symptoms may be different for everyone.
Complications
Gastrointestinal bleeding is the most common complication. Sudden large bleeding can be life-threatening and is associated with a death rate of 5% to 10%.
Perforation (a hole in the wall of the gastrointestinal tract) following a gastric ulcer often leads to catastrophic consequences if left untreated. Erosion of the gastrointestinal wall by the ulcer leads to spillage of the stomach or intestinal contents into the abdominal cavity, leading to an acute chemical peritonitis. The first sign is often sudden intense abdominal pain, as seen in Valentino's syndrome. Posterior gastric wall perforation may lead to bleeding due to the involvement of gastroduodenal artery that lies posterior to the first part of the duodenum. The death rate in this case is 20%.
Penetration is a form of perforation in which the hole leads into, and the ulcer continues into, adjacent organs such as the liver and pancreas.
Gastric outlet obstruction (stenosis) is a narrowing of the pyloric canal by scarring and swelling of the gastric antrum and duodenum due to peptic ulcers. The person often presents with severe vomiting.
Cancer is included in the differential diagnosis (elucidated by biopsy); when Helicobacter pylori is the etiological factor, it is 3 to 6 times more likely that stomach cancer will develop from the ulcer. The risk of developing gastrointestinal cancer also appears to be slightly higher with gastric ulcers.
Cause
H. pylori
Helicobacter pylori is one of the major causative factors of peptic ulcer disease. It secretes urease to create an alkaline environment, which is suitable for its survival. It expresses blood group antigen-binding adhesin (BabA) and outer inflammatory protein adhesin (OipA), which enables it to attach to the gastric epithelium. The bacterium also expresses virulence factors such as CagA and PicB, which cause stomach mucosal inflammation. The VacA gene encodes for vacuolating cytotoxin, but its mechanism of causing peptic ulcers is unclear. Such stomach mucosal inflammation can be associated with hyperchlorhydria (increased stomach acid secretion) or hypochlorhydria (reduced stomach acid secretion). Inflammatory cytokines inhibit the parietal cell acid secretion. H. pylori also secretes certain products that inhibit hydrogen potassium ATPase; activate calcitonin gene-related peptide sensory neurons, which increases somatostatin secretion to inhibit acid production by parietal cells; and inhibit gastrin secretion. This reduction in acid production causes gastric ulcers. On the other hand, increased acid production at the pyloric antrum is associated with duodenal ulcers in 10% to 15% of H. pylori infection cases. In this case, somatostatin production is reduced and gastrin production is increased, leading to increased histamine secretion from the enterochromaffin cells, thus increasing acid production. An acidic environment at the antrum causes metaplasia of the duodenal cells, causing duodenal ulcers.
Human immune response toward the bacteria also determines the emergence of peptic ulcer disease. The human IL1B gene encodes for Interleukin 1 beta, and other genes that encode for tumour necrosis factor (TNF) and Lymphotoxin alpha also play a role in gastric inflammation.
NSAIDs
Taking nonsteroidal anti-inflammatory drugs (NSAIDs) such as aspirin can increase the risk of peptic ulcer disease fourfold compared to non-users; for aspirin users, the risk is roughly doubled. The risk of bleeding increases if NSAIDs are combined with selective serotonin reuptake inhibitors (SSRIs), corticosteroids, antimineralocorticoids, or anticoagulants. The gastric mucosa protects itself from gastric acid with a layer of mucus, the secretion of which is stimulated by certain prostaglandins. NSAIDs block the function of cyclooxygenase 1 (COX-1), which is essential for the production of these prostaglandins. NSAIDs also inhibit the proliferation of stomach mucosal cells and mucosal blood flow, reducing bicarbonate and mucus secretion, which reduces the integrity of the mucosa. Another type of NSAID, the COX-2 selective anti-inflammatory drugs (such as celecoxib), preferentially inhibits COX-2, which is less essential in the gastric mucosa. This reduces the probability of getting peptic ulcers; however, such drugs can still delay ulcer healing in those who already have a peptic ulcer. Peptic ulcers caused by NSAIDs differ from those caused by H. pylori: the latter appear as a consequence of inflammation of the mucosa (presence of neutrophils and submucosal edema), while the former result from direct damage by the NSAID molecule to the COX enzymes, altering the hydrophobic state of the mucus, the permeability of the lining epithelium, and the mitochondrial machinery of the cell itself. As a result, NSAID ulcers tend to complicate faster and dig deeper into the tissue, causing more complications, often asymptomatically until a large portion of the tissue is involved.
Stress
Physiological (not psychological) stress due to serious health problems, such as those requiring treatment in an intensive care unit, is well described as a cause of peptic ulcers, which are also known as stress ulcers.
While chronic life stress was once believed to be the main cause of ulcers, this is no longer the case. It is, however, still occasionally believed to play a role. This may be due to the well-documented effects of stress on gastric physiology, increasing the risk in those with other causes, such as H. pylori or NSAID use.
Diet
Dietary factors, such as spice consumption, were hypothesized to cause ulcers until the late 20th century, but have been shown to be of relatively minor importance. Caffeine and coffee, also commonly thought to cause or exacerbate ulcers, appear to have little effect. Similarly, while studies have found that alcohol consumption increases risk when associated with H. pylori infection, it does not seem to independently increase risk. Even when coupled with H. pylori infection, the increase is modest in comparison to the primary risk factor.
Other
Other causes of peptic ulcer disease include gastric ischaemia, drugs, metabolic disturbances, cytomegalovirus (CMV), upper abdominal radiotherapy, Crohn's disease, and vasculitis. Gastrinomas (Zollinger–Ellison syndrome), or rare gastrin-secreting tumors, also cause multiple and difficult-to-heal ulcers.
It is still unclear whether smoking increases the risk of getting peptic ulcers.
Diagnosis
The diagnosis is mainly established based on the characteristic symptoms. Stomach pain is the most common sign of a peptic ulcer.
More specifically, peptic ulcers erode the muscularis mucosae, at minimum reaching to the level of the submucosa (contrast with erosions, which do not involve the muscularis mucosae).
Confirmation of the diagnosis is made with the help of tests such as endoscopies or barium contrast x-rays. The tests are typically ordered if the symptoms do not resolve after a few weeks of treatment, or when they first appear in a person who is over age 45 or who has other symptoms such as weight loss, because stomach cancer can cause similar symptoms. Also, when severe ulcers resist treatment, particularly if a person has several ulcers or the ulcers are in unusual places, a doctor may suspect an underlying condition that causes the stomach to overproduce acid.
An esophagogastroduodenoscopy (EGD), a form of endoscopy, also known as a gastroscopy, is carried out on people in whom a peptic ulcer is suspected. It is also the gold standard of diagnosis for peptic ulcer disease. By direct visual identification, the location and severity of an ulcer can be described. Moreover, if no ulcer is present, EGD can often provide an alternative diagnosis.
One of the reasons that blood tests are not reliable for accurate peptic ulcer diagnosis on their own is their inability to differentiate between past exposure to the bacteria and current infection. Additionally, a false negative result is possible with a blood test if the person has recently been taking certain drugs, such as antibiotics or proton-pump inhibitors.
The diagnosis of Helicobacter pylori can be made by:
Urea breath test (noninvasive and does not require EGD);
Direct culture from an EGD biopsy specimen; this is difficult and can be expensive. Most labs are not set up to perform H. pylori cultures;
Direct detection of urease activity in a biopsy specimen by rapid urease test;
Measurement of antibody levels in the blood (does not require EGD). It is still somewhat controversial whether a positive antibody without EGD is enough to warrant eradication therapy;
Stool antigen test;
Histological examination and staining of an EGD biopsy.
The breath test uses radioactive carbon to detect H. pylori. To perform this exam, the person is asked to drink a tasteless liquid that contains the carbon as part of the substance that the bacteria breaks down. After an hour, the person is asked to blow into a sealed bag. If the person is infected with H. pylori, the breath sample will contain radioactive carbon dioxide. This test provides the advantage of being able to monitor the response to treatment used to kill the bacteria.
The possibility of other causes of ulcers, notably malignancy (gastric cancer), needs to be kept in mind. This is especially true in ulcers of the greater curvature of the stomach; most are also a consequence of chronic H. pylori infection.
If a peptic ulcer perforates, air will leak from inside the gastrointestinal tract (which always contains some air) to the peritoneal cavity (which normally never contains air). This leads to "free gas" within the peritoneal cavity. If the person stands, as when having a chest X-ray, the gas will float to a position underneath the diaphragm. Therefore, gas in the peritoneal cavity, shown on an erect chest X-ray or supine lateral abdominal X-ray, is an omen of perforated peptic ulcer disease.
Classification
Peptic ulcers are a form of acid–peptic disorder. Peptic ulcers can be classified according to their location and other factors.
By location
Duodenum (called duodenal ulcer)
Esophagus (called esophageal ulcer)
Stomach (called gastric ulcer)
Meckel's diverticulum (called Meckel's diverticulum ulcer; is very tender with palpation)
Modified Johnson
Type I: Ulcer along the body of the stomach, most often along the lesser curve at incisura angularis along the locus minoris resistantiae. Not associated with acid hypersecretion.
Type II: Ulcer in the body in combination with duodenal ulcers. Associated with acid oversecretion.
Type III: In the pyloric channel within 3 cm of pylorus. Associated with acid oversecretion.
Type IV: Proximal gastroesophageal ulcer.
Type V: Can occur throughout the stomach. Associated with the chronic use of NSAIDs (such as ibuprofen).
Macroscopic appearance
Gastric ulcers are most often localized on the lesser curvature of the stomach. The ulcer is a round to oval parietal defect ("hole"), 2–4 cm diameter, with a smooth base and perpendicular borders. These borders are not elevated or irregular in the acute form of peptic ulcer, and regular but with elevated borders and inflammatory surrounding in the chronic form. In the ulcerative form of gastric cancer, the borders are irregular. Surrounding mucosa may present radial folds, as a consequence of the parietal scarring.
Microscopic appearance
A gastric peptic ulcer is a mucosal perforation that penetrates the muscularis mucosae and lamina propria, usually produced by acid-pepsin aggression. Ulcer margins are perpendicular and present chronic gastritis. During the active phase, the base of the ulcer shows four zones: fibrinoid necrosis, inflammatory exudate, granulation tissue and fibrous tissue. The fibrous base of the ulcer may contain vessels with thickened wall or with thrombosis.
Differential diagnosis
Conditions that may appear similar include:
Gastritis
Stomach cancer
Gastroesophageal reflux disease
Pancreatitis
Hepatic congestion
Cholecystitis
Biliary colic
Inferior myocardial infarction
Referred pain (pleurisy, pericarditis)
Superior mesenteric artery syndrome
Prevention
Prevention of peptic ulcer disease for those who are taking NSAIDs (with low cardiovascular risk) can be achieved by adding a proton pump inhibitor (PPI), an H2 antagonist, or misoprostol. NSAIDs of the COX-2 inhibitors type may reduce the rate of ulcers when compared to non-selective NSAIDs. PPI is the most popular agent in peptic ulcer prevention. However, there is no evidence that H2 antagonists can prevent stomach bleeding for those taking NSAIDs. Although misoprostol is effective in preventing peptic ulcer, its properties of promoting abortion and causing gastrointestinal distress limit its use. For those with high cardiovascular risk, naproxen with PPI can be a useful choice. Otherwise, low-dose aspirin, celecoxib, and PPI can also be used.
Management
Eradication therapy
Once the diagnosis of H. pylori is confirmed, the first-line treatment is a triple regimen in which pantoprazole and clarithromycin are combined with either amoxicillin or metronidazole, given for 7–14 days. However, its effectiveness in eradicating H. pylori has declined from 90% to 70%. The eradication rate can be increased by doubling the dose of pantoprazole or extending the duration of treatment to 14 days. Quadruple therapy (pantoprazole, clarithromycin, amoxicillin, and metronidazole) can also be used and can achieve an eradication rate of 90%. If the clarithromycin resistance rate in an area is higher than 15%, clarithromycin should be abandoned. Instead, bismuth-containing quadruple therapy (pantoprazole, bismuth citrate, tetracycline, and metronidazole) can be used for 14 days. Bismuth therapy can also achieve an eradication rate of 90% and can be used as second-line therapy when first-line triple therapy has failed.
NSAIDs-induced ulcers
NSAID-associated ulcers heal within six to eight weeks provided the NSAIDs are withdrawn and proton pump inhibitors (PPIs) are introduced.
Bleeding
For those with bleeding peptic ulcers, fluid replacement with crystalloids is sometimes given to maintain intravascular volume. A restrictive blood transfusion strategy, maintaining haemoglobin above 7 g/dL (70 g/L), has been associated with a reduced rate of death. The Glasgow-Blatchford score is used to determine whether a person should be treated in hospital or as an outpatient. Intravenous PPIs can suppress stomach bleeding more quickly than oral ones. A neutral stomach pH is required for platelet aggregation and to prevent clot lysis. Tranexamic acid and other antifibrinolytic agents are not useful in treating peptic ulcer disease.
Early endoscopic therapy can help to stop bleeding by using cautery, endoclips, or epinephrine injection. Treatment is indicated if there is active bleeding in the stomach, a visible vessel, or an adherent clot. Endoscopy is also helpful in identifying people who are suitable for hospital discharge. Prokinetic agents such as erythromycin and metoclopramide can be given before endoscopy to improve the endoscopic view. High- and low-dose PPIs appear equally effective in reducing bleeding after endoscopy. High-dose intravenous PPI is defined as a bolus dose of 80 mg followed by an infusion of 8 mg per hour for 72 hours, equivalent to a continuous infusion of more than 192 mg per day. Intravenous PPI can be changed to oral once there is no longer a high risk of rebleeding from the peptic ulcer.
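The dosing arithmetic above can be checked directly; the following is a minimal illustrative calculation that uses only the bolus, infusion rate, and duration figures already stated, with no additional clinical assumptions.

    # Illustrative arithmetic for the high-dose intravenous PPI regimen described above.
    bolus_mg = 80           # initial bolus dose
    infusion_mg_per_h = 8   # continuous infusion rate
    duration_h = 72         # infusion duration

    daily_infusion_mg = infusion_mg_per_h * 24                  # 192 mg per day, matching the text
    total_dose_mg = bolus_mg + infusion_mg_per_h * duration_h   # 80 + 576 = 656 mg over 72 hours
    print(daily_infusion_mg, total_dose_mg)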
For those with hypovolemic shock and an ulcer larger than 2 cm, there is a high chance that endoscopic treatment will fail. Therefore, surgery and angiographic embolisation are reserved for these complicated cases. However, the rate of complications is higher in those who undergo surgery to patch the bleeding site than in those treated with repeated endoscopy. Angiographic embolisation has a higher rebleeding rate but a similar rate of death to surgery.
Anticoagulants
According to expert opinion, for those who are already on anticoagulants, the international normalized ratio (INR) should be kept at 1.5. For aspirin users who require endoscopic treatment for a bleeding peptic ulcer, resuming aspirin is associated with a twofold increase in the risk of rebleeding but a tenfold reduction in the risk of death at eight weeks. For those on dual antiplatelet therapy for an indwelling vascular stent, both antiplatelet agents should not be stopped, because there is a high risk of stent thrombosis. For those on warfarin, fresh frozen plasma (FFP), vitamin K, prothrombin complex concentrates, or recombinant factor VIIa can be given to reverse its effect. High doses of vitamin K should be avoided so that re-warfarinisation is not delayed once the stomach bleeding has stopped. Prothrombin complex concentrates are preferred for severe bleeding. Recombinant factor VIIa is reserved for life-threatening bleeding because of its high risk of thromboembolism. Direct oral anticoagulants (DOACs) are recommended instead of warfarin as they are more effective in preventing thromboembolism. In case of bleeding caused by a DOAC, activated charcoal given within four hours is the antidote of choice.
Epidemiology
The lifetime risk of developing a peptic ulcer is approximately 5% to 10%, with an incidence of 0.1% to 0.3% per year. Peptic ulcers resulted in 301,000 deaths in 2013, down from 327,000 in 1990.
In Western countries, the percentage of people with H. pylori infections roughly matches age (i.e., 20% at age 20, 30% at age 30, 80% at age 80, etc.). Prevalence is higher in third world countries, where it is estimated at 70% of the population, whereas developed countries show a maximum of a 40% ratio. Overall, H. pylori infections show a worldwide decrease, more so in developed countries. Transmission occurs via food, contaminated groundwater, or human saliva (such as from kissing or sharing food utensils).
Peptic ulcer disease had a tremendous effect on morbidity and mortality until the last decades of the 20th century when epidemiological trends started to point to an impressive fall in its incidence. The reason that the rates of peptic ulcer disease decreased is thought to be the development of new effective medication and acid suppressants and the rational use of nonsteroidal anti-inflammatory drugs (NSAIDs).
History
John Lykoudis, a general practitioner in Greece, treated people for peptic ulcer disease with antibiotics beginning in 1958, long before it was commonly recognized that bacteria were a dominant cause for the disease.
Helicobacter pylori was identified in 1982 by two Australian scientists, Robin Warren and Barry J. Marshall, as a causative factor for ulcers. In their original paper, Warren and Marshall contended that most gastric ulcers and gastritis were caused by colonization with this bacterium, not by stress or spicy food, as had been assumed before.
The H. pylori hypothesis was still poorly received, so in an act of self-experimentation Marshall drank a Petri dish containing a culture of organisms extracted from a person with an ulcer and five days later developed gastritis. His symptoms disappeared after two weeks, but he took antibiotics to kill the remaining bacteria at the urging of his wife, since halitosis is one of the symptoms of infection. This experiment was published in 1984 in the Medical Journal of Australia and is among the journal's most cited articles.
In 1997, the Centers for Disease Control and Prevention, with other government agencies, academic institutions, and industry, launched a national education campaign to inform health care providers and consumers about the link between H. pylori and ulcers. This campaign reinforced the news that ulcers are a curable infection and that health can be greatly improved and money saved by disseminating information about H. pylori.
In 2005, the Karolinska Institute in Stockholm awarded the Nobel Prize in Physiology or Medicine to Marshall and his long-time collaborator Warren "for their discovery of the bacterium Helicobacter pylori and its role in gastritis and peptic ulcer disease." Marshall continues research related to H. pylori and runs a molecular biology lab at UWA in Perth, Western Australia.
A 1998 study in the New England Journal of Medicine found that mastic gum, a tree resin extract, actively eliminated the H. pylori bacteria. However, multiple subsequent studies (in mice and in vivo) have found no effect of using mastic gum on reducing H. pylori levels.
| Biology and health sciences | Specific diseases | Health |
63793 | https://en.wikipedia.org/wiki/Meteoroid | Meteoroid | A meteoroid is a small rocky or metallic body in outer space.
Meteoroids are distinguished as objects significantly smaller than asteroids, ranging in size from grains to objects up to a meter wide. Objects smaller than meteoroids are classified as micrometeoroids or space dust. Many are fragments from comets or asteroids, whereas others are collision impact debris ejected from bodies such as the Moon or Mars.
The visible passage of a meteoroid, comet, or asteroid entering Earth's atmosphere is called a meteor, and a series of many meteors appearing seconds or minutes apart and appearing to originate from the same fixed point in the sky is called a meteor shower.
An estimated 25 million meteoroids, micrometeoroids and other space debris enter Earth's atmosphere each day, which results in an estimated 15,000 tonnes of that material entering the atmosphere each year.
A meteorite is the remains of a meteoroid that has survived the ablation of its surface material during its passage through the atmosphere as a meteor and has impacted the ground.
Meteoroids
In 1961, the International Astronomical Union (IAU) defined a meteoroid as "a solid object moving in interplanetary space, of a size considerably smaller than an asteroid and considerably larger than an atom". In 1995, Beech and Steel, writing in the Quarterly Journal of the Royal Astronomical Society, proposed a new definition where a meteoroid would be between 100 μm and across. In 2010, following the discovery of asteroids below 10 m in size, Rubin and Grossman proposed a revision of the previous definition of meteoroid to objects between and in diameter in order to maintain the distinction. According to Rubin and Grossman, the minimum size of an asteroid is given by what can be discovered from Earth-bound telescopes, so the distinction between meteoroid and asteroid is fuzzy. Some of the smallest asteroids discovered (based on absolute magnitude H) are with H = 33.2 and with H = 32.1 both with an estimated size of . In April 2017, the IAU adopted an official revision of its definition, limiting size to between and one meter in diameter, but allowing for a deviation for any object causing a meteor.
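A minimal sketch of how such a size-based boundary might be applied is given below; the one-metre upper bound is stated in the text, while the roughly 30 μm lower bound of the 2017 IAU definition is an assumption supplied here because the exact figure has not survived in the passage above.

    # Hypothetical size-based classifier following the 2017 IAU convention discussed above.
    # The 30 micrometre lower bound is an assumption; only the 1 m upper bound survives in the text.
    def classify_body(diameter_m: float) -> str:
        if diameter_m < 30e-6:
            return "interplanetary dust / micrometeoroid"
        elif diameter_m <= 1.0:
            return "meteoroid"
        else:
            return "asteroid or larger body"

    print(classify_body(0.0005))  # a 0.5 mm grain -> meteoroid
    print(classify_body(5.0))     # a 5 m object  -> asteroid or larger body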
Objects smaller than meteoroids are classified as micrometeoroids and interplanetary dust. The Minor Planet Center does not use the term "meteoroid".
Composition
Almost all meteoroids contain extraterrestrial nickel and iron. They have three main classifications: iron, stone, and stony-iron. Some stone meteoroids contain grain-like inclusions known as chondrules and are called chondrites. Stony meteoroids without these features are called "achondrites", which are typically formed from extraterrestrial igneous activity; they contain little or no extraterrestrial iron. The composition of meteoroids can be inferred as they pass through Earth's atmosphere from their trajectories and the light spectra of the resulting meteor. Their effects on radio signals also give information, especially useful for daytime meteors, which are otherwise very difficult to observe. From these trajectory measurements, meteoroids have been found to have many different orbits, some clustering in streams (see meteor showers) often associated with a parent comet, others apparently sporadic. Debris from meteoroid streams may eventually be scattered into other orbits. The light spectra, combined with trajectory and light curve measurements, have yielded various compositions and densities, ranging from fragile snowball-like objects with density about a quarter that of ice, to nickel-iron rich dense rocks. The study of meteorites also gives insights into the composition of non-ephemeral meteoroids.
In the Solar System
Most meteoroids come from the asteroid belt, having been perturbed by the gravitational influences of planets, but others are particles from comets, giving rise to meteor showers. Some meteoroids are fragments from bodies such as Mars or the Moon, that have been thrown into space by an impact.
Meteoroids travel around the Sun in a variety of orbits and at various velocities. The fastest move at about through space in the vicinity of Earth's orbit. This is the escape velocity from the Sun, equal to the square root of two times Earth's orbital speed, and is the upper speed limit of objects in the vicinity of Earth, unless they come from interstellar space. Earth travels at about , so when meteoroids meet the atmosphere head-on (which only occurs when meteors are in a retrograde orbit, such as the Leonids, which are associated with the retrograde comet 55P/Tempel–Tuttle) the combined speed may reach about (see Specific energy#Astrodynamics). Meteoroids moving through Earth's orbital space average about , but due to Earth's gravity meteors such as the Phoenicids can enter the atmosphere at speeds as low as about 11 km/s.
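The square-root-of-two relationship mentioned above can be verified with a short calculation; this is an illustrative sketch that assumes Earth's mean orbital speed of about 29.8 km/s, a standard value not preserved in the passage.

    import math

    # Illustrative check of the velocity relationships described above.
    earth_orbital_speed_km_s = 29.8  # assumed mean orbital speed of Earth
    solar_escape_km_s = math.sqrt(2) * earth_orbital_speed_km_s      # ~42.1 km/s near Earth's orbit
    head_on_max_km_s = solar_escape_km_s + earth_orbital_speed_km_s  # ~71.9 km/s for a retrograde meteoroid
    print(round(solar_escape_km_s, 1), round(head_on_max_km_s, 1))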
On January 17, 2013, at 05:21 PST, a one-meter-sized comet from the Oort cloud entered Earth's atmosphere over California and Nevada. The object had a retrograde orbit with perihelion at 0.98 ± 0.03 AU. It approached from the direction of the constellation Virgo (which was in the south about 50° above the horizon at the time), and collided head-on with Earth's atmosphere at vaporising more than above ground over a period of several seconds.
Collision with Earth's atmosphere
When meteoroids intersect with Earth's atmosphere at night, they are likely to become visible as meteors. If meteoroids survive the entry through the atmosphere and reach Earth's surface, they are called meteorites. Meteorites are transformed in structure and chemistry by the heat of entry and force of impact. A noted asteroid, 2008 TC3, was observed in space on a collision course with Earth on 6 October 2008 and entered Earth's atmosphere the next day, striking a remote area of northern Sudan. It was the first time that a meteoroid had been observed in space and tracked prior to impacting Earth. NASA has produced a map showing the most notable asteroid collisions with Earth and its atmosphere from 1994 to 2013 from data gathered by U.S. government sensors (see below).
Meteorites
A meteorite is a portion of a meteoroid or asteroid that survives its passage through the atmosphere and hits the ground without being destroyed. Meteorites are sometimes, but not always, found in association with hypervelocity impact craters; during energetic collisions, the entire impactor may be vaporized, leaving no meteorites. Geologists use the term "bolide" in a different sense from astronomers, to indicate a very large impactor. For example, the USGS uses the term to mean a generic large crater-forming projectile in a manner "to imply that we do not know the precise nature of the impacting body ... whether it is a rocky or metallic asteroid, or an icy comet for example".
Meteoroids also hit other bodies in the Solar System. On such stony bodies as the Moon or Mars that have little or no atmosphere, they leave enduring craters.
Impact craters
Meteoroid collisions with solid Solar System objects, including the Moon, Mercury, Callisto, Ganymede, and most small moons and asteroids, create impact craters, which are the dominant geographic features of many of those objects. On other planets and moons with active surface geological processes, such as Earth, Venus, Mars, Europa, Io, and Titan, visible impact craters may become eroded, buried, or transformed by tectonics over time. In early literature, before the significance of impact cratering was widely recognised, the terms cryptoexplosion or cryptovolcanic structure were often used to describe what are now recognised as impact-related features on Earth. Molten terrestrial material ejected from a meteorite impact crater can cool and solidify into an object known as a tektite. These are often mistaken for meteorites. Terrestrial rock, sometimes with pieces of the original meteorite, created or modified by an impact of a meteorite is called impactite.
Gallery of meteorites
| Physical sciences | Planetary science | null |
63794 | https://en.wikipedia.org/wiki/Impact%20event | Impact event | An impact event is a collision between astronomical objects causing measurable effects. Impact events have been found to regularly occur in planetary systems, though the most frequent involve asteroids, comets or meteoroids and have minimal effect. When large objects impact terrestrial planets such as the Earth, there can be significant physical and biospheric consequences, as the impacting body is usually traveling at several kilometres a second (a minimum of for an Earth impacting body), though atmospheres mitigate many surface impacts through atmospheric entry. Impact craters and structures are dominant landforms on many of the Solar System's solid objects and present the strongest empirical evidence for their frequency and scale.
Impact events appear to have played a significant role in the evolution of the Solar System since its formation. Major impact events have significantly shaped Earth's history, and have been implicated in the formation of the Earth–Moon system. Impact events also appear to have played a significant role in the evolutionary history of life. Impacts may have helped deliver the building blocks for life (the panspermia theory relies on this premise). Impacts have been suggested as the origin of water on Earth. They have also been implicated in several mass extinctions. The prehistoric Chicxulub impact, 66 million years ago, is believed to not only be the cause of the Cretaceous–Paleogene extinction event but acceleration of the evolution of mammals, leading to their dominance and, in turn, setting in place conditions for the eventual rise of humans.
Throughout recorded history, hundreds of Earth impacts (and exploding bolides) have been reported, with some occurrences causing deaths, injuries, property damage, or other significant localised consequences. One of the best-known recorded events in modern times was the Tunguska event, which occurred in Siberia, Russia, in 1908. The 2013 Chelyabinsk meteor event is the only known such incident in modern times to result in numerous injuries. Its meteor is the largest recorded object to have encountered the Earth since the Tunguska event. The Comet Shoemaker–Levy 9 impact provided the first direct observation of an extraterrestrial collision of Solar System objects, when the comet broke apart and collided with Jupiter in July 1994. An extrasolar impact was observed in 2013, when a massive terrestrial planet impact was detected around the star ID8 in the star cluster NGC 2547 by NASA's Spitzer Space Telescope and confirmed by ground observations. Impact events have been a plot and background element in science fiction.
In April 2018, the B612 Foundation reported: "It's 100 percent certain we'll be hit [by a devastating asteroid], but we're not 100 percent certain when." Also in 2018, physicist Stephen Hawking considered in his final book Brief Answers to the Big Questions that an asteroid collision was the biggest threat to the planet. In June 2018, the US National Science and Technology Council warned that America is unprepared for an asteroid impact event, and has developed and released the "National Near-Earth Object Preparedness Strategy Action Plan" to better prepare. According to expert testimony in the United States Congress in 2013, NASA would require at least five years of preparation before a mission to intercept an asteroid could be launched. On 26 September 2022, the Double Asteroid Redirection Test demonstrated the deflection of an asteroid. It was the first such experiment to be carried out by humankind and was considered to be highly successful. The orbital period of the target body was changed by 32 minutes. The criterion for success was a change of more than 73 seconds.
Impacts and the Earth
Major impact events have significantly shaped Earth's history, having been implicated in the formation of the Earth–Moon system, the evolutionary history of life, the origin of water on Earth, and several mass extinctions. Impact structures are the result of impact events on solid objects and, as the dominant landforms on many of the System's solid objects, present the most solid evidence of prehistoric events. Notable impact events include the hypothesized Late Heavy Bombardment, which would have occurred early in the history of the Earth–Moon system, and the confirmed Chicxulub impact 66 million years ago, believed to be the cause of the Cretaceous–Paleogene extinction event.
Frequency and risk
Small objects frequently collide with Earth. There is an inverse relationship between the size of the object and the frequency of such events. The lunar cratering record shows that the frequency of impacts decreases as approximately the cube of the resulting crater's diameter, which is on average proportional to the diameter of the impactor. Asteroids with a diameter strike Earth every 500,000 years on average. Large collisions – with objects – happen approximately once every twenty million years. The last known impact of an object of or more in diameter was at the Cretaceous–Paleogene extinction event 66 million years ago.
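The inverse-cube relationship described above can be illustrated with a purely relative calculation; no absolute rates are assumed here, only the scaling itself.

    # Illustrative relative scaling: if impact frequency falls roughly as 1 / D^3,
    # the average interval between impacts grows as D^3.
    def relative_interval(diameter_ratio: float) -> float:
        return diameter_ratio ** 3

    print(relative_interval(2.0))   # 8.0    -> twice the crater diameter, ~8x longer between impacts
    print(relative_interval(10.0))  # 1000.0 -> ten times the diameter, ~1000x longer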
The energy released by an impactor depends on diameter, density, velocity, and angle. The diameter of most near-Earth asteroids that have not been studied by radar or infrared can generally only be estimated within about a factor of two, based on the asteroid's brightness. The density is generally assumed rather than measured, because the diameter and mass from which it could be calculated are themselves only estimates. Due to Earth's escape velocity, the minimum impact velocity is 11 km/s, with asteroid impacts averaging around 17 km/s on the Earth. The most probable impact angle is 45 degrees.
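The dependence of released energy on these quantities can be sketched with a simple kinetic-energy calculation. The sketch below assumes a spherical impactor; the specific diameter, density, and speed are illustrative values only, chosen to be roughly comparable to the Chelyabinsk-scale airburst discussed later.

    import math

    def impact_energy_kt(diameter_m: float, density_kg_m3: float, velocity_km_s: float) -> float:
        """Kinetic energy of a spherical impactor, in kilotons of TNT (1 kt = 4.184e12 J)."""
        radius_m = diameter_m / 2
        mass_kg = density_kg_m3 * (4 / 3) * math.pi * radius_m ** 3
        energy_j = 0.5 * mass_kg * (velocity_km_s * 1000) ** 2
        return energy_j / 4.184e12

    # Illustrative values: a ~19 m stony body entering at ~17 km/s yields on the order of 400 kt.
    print(round(impact_energy_kt(19, 3300, 17)))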
Impact conditions such as asteroid size and speed, but also density and impact angle determine the kinetic energy released in an impact event. The more energy is released, the more damage is likely to occur on the ground due to the environmental effects triggered by the impact. Such effects can be shock waves, heat radiation, the formation of craters with associated earthquakes, and tsunamis if bodies of water are hit. Human populations are vulnerable to these effects if they live within the affected zone. Large seiche waves arising from earthquakes and large-scale deposit of debris can also occur within minutes of impact, thousands of kilometres from impact.
Airbursts
Stony asteroids with a diameter of enter Earth's atmosphere about once a year. Asteroids with a diameter of 7 meters enter the atmosphere about every 5 years with as much kinetic energy as the atomic bomb dropped on Hiroshima (approximately 16 kilotons of TNT), but the air burst is reduced to just 5 kilotons. These ordinarily explode in the upper atmosphere and most or all of the solids are vaporized. However, asteroids with a diameter of , and which strike Earth approximately twice every century, produce more powerful airbursts. The 2013 Chelyabinsk meteor was estimated to be about 20 m in diameter with an airburst of around 500 kilotons, an explosion roughly 30 times more powerful than the Hiroshima bomb. Much larger objects may impact the solid earth and create a crater.
Objects with a diameter less than are called meteoroids and seldom make it to the ground to become meteorites. An estimated 500 meteorites reach the surface each year, but only 5 or 6 of these typically create a weather radar signature with a strewn field large enough to be recovered and be made known to scientists.
The late Eugene Shoemaker of the U.S. Geological Survey estimated the rate of Earth impacts, concluding that an event about the size of the nuclear weapon that destroyed Hiroshima occurs about once a year. Such events would seem to be spectacularly obvious, but they generally go unnoticed for a number of reasons: the majority of the Earth's surface is covered by water; a good portion of the land surface is uninhabited; and the explosions generally occur at relatively high altitude, resulting in a huge flash and thunderclap but no real damage.
Although no human is known to have been killed directly by an impact, over 1000 people were injured by the Chelyabinsk meteor airburst event over Russia in 2013. In 2005 it was estimated that the chance of a single person born today dying due to an impact is around 1 in 200,000. The two- to four-meter-sized asteroids 2008 TC3, 2014 AA, 2018 LA, 2019 MO, and 2022 EB5, and the suspected artificial satellite WT1190F, are the only known objects to have been detected before impacting the Earth.
Geological significance
Impacts have had, during the history of the Earth, a significant geological and climatic influence.
The Moon's existence is widely attributed to a huge impact early in Earth's history. Impact events earlier in the history of Earth have been credited with creative as well as destructive events; it has been proposed that impacting comets delivered the Earth's water, and some have suggested that the origins of life may have been influenced by impacting objects by bringing organic chemicals or lifeforms to the Earth's surface, a theory known as exogenesis.
These modified views of Earth's history did not emerge until relatively recently, chiefly due to a lack of direct observations and the difficulty in recognizing the signs of an Earth impact because of erosion and weathering. Large-scale terrestrial impacts of the sort that produced the Barringer Crater, locally known as Meteor Crater, east of Flagstaff, Arizona, are rare. Instead, it was widely thought that cratering was the result of volcanism: the Barringer Crater, for example, was ascribed to a prehistoric volcanic explosion (not an unreasonable hypothesis, given that the volcanic San Francisco Peaks stand only to the west). Similarly, the craters on the surface of the Moon were ascribed to volcanism.
It was not until 1903–1905 that the Barringer Crater was correctly identified as an impact crater, and it was not until as recently as 1963 that research by Eugene Merle Shoemaker conclusively proved this hypothesis. The findings of late 20th-century space exploration and the work of scientists such as Shoemaker demonstrated that impact cratering was by far the most widespread geological process at work on the Solar System's solid bodies. Every surveyed solid body in the Solar System was found to be cratered, and there was no reason to believe that the Earth had somehow escaped bombardment from space. In the last few decades of the 20th century, a large number of highly modified impact craters began to be identified. The first direct observation of a major impact event occurred in 1994: the collision of the comet Shoemaker-Levy 9 with Jupiter.
Based on crater formation rates determined from the Earth's closest celestial partner, the Moon, astrogeologists have determined that during the last 600 million years, the Earth has been struck by 60 objects of a diameter of or more. The smallest of these impactors would leave a crater almost across. Only three confirmed craters from that time period with that size or greater have been found: Chicxulub, Popigai, and Manicouagan, and all three have been suspected of being linked to extinction events though only Chicxulub, the largest of the three, has been consistently considered. The impact that caused Mistastin crater generated temperatures exceeding 2,370 °C, the highest known to have occurred on the surface of the Earth.
Besides the direct effect of asteroid impacts on a planet's surface topography, global climate and life, recent studies have shown that several consecutive impacts might have an effect on the dynamo mechanism at a planet's core responsible for maintaining the magnetic field of the planet, and may have contributed to Mars' lack of current magnetic field. An impact event may cause a mantle plume (volcanism) at the antipodal point of the impact. The Chicxulub impact may have increased volcanism at mid-ocean ridges and has been proposed to have triggered flood basalt volcanism at the Deccan Traps.
While numerous impact craters have been confirmed on land or in the shallow seas over continental shelves, no impact craters in the deep ocean have been widely accepted by the scientific community. Impacts of projectiles as large as one km in diameter are generally thought to explode before reaching the sea floor, but it is unknown what would happen if a much larger impactor struck the deep ocean. The lack of a crater, however, does not mean that an ocean impact would not have dangerous implications for humanity. Some scholars have argued that an impact event in an ocean or sea may create a megatsunami, which can cause destruction both at sea and on land along the coast, but this is disputed. The Eltanin impact into the Pacific Ocean 2.5 Mya is thought to involve an object about across but remains craterless.
Biospheric effects
The effect of impact events on the biosphere has been the subject of scientific debate. Several theories of impact-related mass extinction have been developed. In the past 500 million years there have been five generally accepted major mass extinctions that on average extinguished half of all species. One of the largest mass extinctions to have affected life on Earth was the Permian-Triassic, which ended the Permian period 250 million years ago and killed off 90 percent of all species; life on Earth took 30 million years to recover. The cause of the Permian-Triassic extinction is still a matter of debate; the age and origin of proposed impact craters, i.e. the Bedout High structure, hypothesized to be associated with it are still controversial. The last such mass extinction led to the demise of the non-avian dinosaurs and coincided with a large meteorite impact; this is the Cretaceous–Paleogene extinction event (also known as the K–T or K–Pg extinction event), which occurred 66 million years ago. There is no definitive evidence of impacts leading to the three other major mass extinctions.
In 1980, physicist Luis Alvarez; his son, geologist Walter Alvarez; and nuclear chemists Frank Asaro and Helen V. Michel from the University of California, Berkeley discovered unusually high concentrations of iridium in a specific layer of rock strata in the Earth's crust. Iridium is an element that is rare on Earth but relatively abundant in many meteorites. From the amount and distribution of iridium present in the 65-million-year-old "iridium layer", the Alvarez team later estimated that an asteroid of must have collided with Earth. This iridium layer at the Cretaceous–Paleogene boundary has been found worldwide at 100 different sites. Multidirectionally shocked quartz (coesite), which is normally associated with large impact events or atomic bomb explosions, has also been found in the same layer at more than 30 sites. Soot and ash at levels tens of thousands of times normal levels were found with the above.
Anomalies in chromium isotopic ratios found within the K-T boundary layer strongly support the impact theory. Chromium isotopic ratios are homogeneous within the earth, and therefore these isotopic anomalies exclude a volcanic origin, which has also been proposed as a cause for the iridium enrichment. Further, the chromium isotopic ratios measured in the K-T boundary are similar to the chromium isotopic ratios found in carbonaceous chondrites. Thus a probable candidate for the impactor is a carbonaceous asteroid, but a comet is also possible because comets are assumed to consist of material similar to carbonaceous chondrites.
Probably the most convincing evidence for a worldwide catastrophe was the discovery of the crater which has since been named Chicxulub Crater. This crater is centered on the Yucatán Peninsula of Mexico and was discovered by Tony Camargo and Glen Penfield while working as geophysicists for the Mexican oil company PEMEX. What they reported as a circular feature later turned out to be a crater estimated to be in diameter. This convinced the vast majority of scientists that this extinction resulted from a point event that is most probably an extraterrestrial impact and not from increased volcanism and climate change (which would spread its main effect over a much longer time period).
Although there is now general agreement that there was a huge impact at the end of the Cretaceous that led to the iridium enrichment of the K-T boundary layer, remnants have been found of other, smaller impacts, some nearing half the size of the Chicxulub crater, which did not result in any mass extinctions, and there is no clear linkage between an impact and any other incident of mass extinction.
Paleontologists David M. Raup and Jack Sepkoski have proposed that an excess of extinction events occurs roughly every 26 million years (though many are relatively minor). This led physicist Richard A. Muller to suggest that these extinctions could be due to a hypothetical companion star to the Sun called Nemesis periodically disrupting the orbits of comets in the Oort cloud, leading to a large increase in the number of comets reaching the inner Solar System where they might hit Earth. Physicist Adrian Melott and paleontologist Richard Bambach have more recently verified the Raup and Sepkoski finding, but argue that it is not consistent with the characteristics expected of a Nemesis-style periodicity.
Sociological and cultural effects
An impact event is commonly seen as a scenario that would bring about the end of civilization. In 2000, Discover magazine published a list of 20 possible sudden doomsday scenarios with an impact event listed as the most likely to occur.
A joint Pew Research Center/Smithsonian survey from April 21 to 26, 2010 found that 31 percent of Americans believed that an asteroid will collide with Earth by 2050. A majority (61 percent) disagreed.
Earth impacts
In the early history of the Earth (about four billion years ago), bolide impacts were almost certainly common since the Solar System contained far more discrete bodies than at present. Such impacts could have included strikes by asteroids hundreds of kilometers in diameter, with explosions so powerful that they vaporized all the Earth's oceans. It was not until this heavy bombardment slackened that life appears to have begun to evolve on Earth.
Precambrian
The leading theory of the Moon's origin is the giant impact theory, which postulates that Earth was once hit by a planetoid the size of Mars; such a theory is able to explain the size and composition of the Moon, something not done by other theories of lunar formation.
According to the theory of the Late Heavy Bombardment, there should have been 22,000 or more impact craters with diameters >20 km (12 mi), about 40 impact basins with diameters about 1,000 km (620 mi), and several impact basins with diameters about 5,000 km (3,100 mi). However, hundreds of millions of years of deformation of the Earth's crust pose significant challenges to conclusively identifying impacts from this period. Only two pieces of pristine lithosphere are believed to remain from this era: the Kaapvaal Craton (in contemporary South Africa) and the Pilbara Craton (in contemporary Western Australia); searching within these may potentially reveal evidence in the form of physical craters. Other methods may be used to identify impacts from this period, for example indirect gravitational or magnetic analysis of the mantle, but may prove inconclusive.
In 2021, evidence for a probable impact 3.46 billion years ago at the Pilbara Craton was found in the form of a crater created by the impact of an asteroid (named "The Apex Asteroid") into the sea at a depth of (near the site of Marble Bar, Western Australia). The event caused global tsunamis. It also coincides with some of the earliest evidence of life on Earth, fossilized stromatolites.
Evidence for at least 4 impact events has been found in spherule layers (dubbed S1 through S8) from the Barberton Greenstone Belt in South Africa, spanning around 3.5–3.2 billion years ago. The sites of the impacts are thought to have been distant from the location of the belt. The impactors that generated these events are thought to have been much larger than those that created the largest known still-existing craters/impact structures on Earth, with the impactors having estimated diameters of ~, and the craters generated by these impacts having an estimated diameter of . The largest impacts, like the one represented by the S2 layer, are likely to have had far-reaching effects, such as the boiling of the surface layer of the oceans.
The Maniitsoq structure, dated to around 3 billion years old (3 Ga), was once thought to be the result of an impact; however, follow-up studies have not confirmed its nature as an impact structure. The Maniitsoq structure is not recognised as an impact structure by the Earth Impact Database.
In 2020, scientists discovered the world's oldest confirmed impact crater, the Yarrabubba crater, caused by an impact that occurred in Yilgarn Craton (what is now Western Australia), dated at more than 2.2 billion years ago with the impactor estimated to be around wide. It is believed that, at this time, the Earth was mostly or completely frozen, commonly called the Huronian glaciation.
The Vredefort impact event, which occurred around 2 billion years ago in Kaapvaal Craton (what is now South Africa), caused the largest verified crater, a multi-ringed structure across, forming from an impactor approximately in diameter.
The Sudbury impact event occurred on the Nuna supercontinent (now Canada) from a bolide approximately in diameter approximately 1.849 billion years ago. Debris from the event would have been scattered across the globe.
Paleozoic and Mesozoic
Two asteroids are now believed to have struck Australia between 360 and 300 million years ago at the Western Warburton and East Warburton Basins, creating a . According to evidence found in 2015, it is the largest ever recorded. A third, possible impact was also identified in 2015 to the north, on the upper Diamantina River, also believed to have been caused by an asteroid 10 km across about 300 million years ago, but further studies are needed to establish that this crustal anomaly was indeed the result of an impact event.
The prehistoric Chicxulub impact, 66 million years ago, believed to be the cause of the Cretaceous–Paleogene extinction event, was caused by an asteroid estimated to be about wide.
Paleogene
Analysis of the Hiawatha Glacier reveals the presence of a 31 km wide impact crater dated at 58 million years of age, less than 10 million years after the Cretaceous–Paleogene extinction event. Scientists believe that the impactor was a metallic asteroid with a diameter on the order of 1.5 kilometres (0.9 mi). The impact would have had global effects.
Pleistocene
Artifacts recovered with tektites from the 803,000-year-old Australasian strewnfield event in Asia link a Homo erectus population to a significant meteorite impact and its aftermath. Significant examples of Pleistocene impacts include the Lonar crater lake in India, approximately 52,000 years old (though a study published in 2010 gives a much greater age), which now has a flourishing semi-tropical jungle around it.
Holocene
The Rio Cuarto craters in Argentina were produced approximately 10,000 years ago, at the beginning of the Holocene. If proved to be impact craters, they would be the first impact of the Holocene.
The Campo del Cielo ("Field of Heaven") refers to an area bordering Argentina's Chaco Province where a group of iron meteorites were found, estimated as dating to 4,000–5,000 years ago. It first came to the attention of Spanish authorities in 1576; in 2015, police arrested four alleged smugglers trying to steal more than a ton of protected meteorites. The Henbury craters in Australia (~5,000 years old) and Kaali craters in Estonia (~2,700 years old) were apparently produced by objects that broke up before impact.
Whitecourt crater in Alberta, Canada is estimated to be between 1,080 and 1,130 years old. The crater is approximately 36 m (118 ft) in diameter and 9 m (30 ft) deep, is heavily forested and was discovered in 2007 when a metal detector revealed fragments of meteoric iron scattered around the area.
A Chinese record states that 10,000 people were killed in the 1490 Qingyang event with the deaths caused by a hail of "falling stones"; some astronomers hypothesize that this may describe an actual meteorite fall, although they find the number of deaths implausible.
Kamil Crater, discovered from Google Earth image review in Egypt, in diameter and deep, is thought to have been formed less than 3,500 years ago in a then-unpopulated region of western Egypt. It was found February 19, 2009 by V. de Michelle on a Google Earth image of the East Uweinat Desert, Egypt.
20th-century impacts
One of the best-known recorded impacts in modern times was the Tunguska event, which occurred in Siberia, Russia, in 1908. This incident involved an explosion that was probably caused by the airburst of an asteroid or comet above the Earth's surface, felling an estimated 80 million trees over .
In February 1947, another large bolide impacted the Earth in the Sikhote-Alin Mountains, Primorye, Soviet Union. It was during daytime hours and was witnessed by many people, which allowed V. G. Fesenkov, then chairman of the meteorite committee of the USSR Academy of Science, to estimate the meteoroid's orbit before it encountered the Earth. Sikhote-Alin is a massive fall with the overall size of the meteoroid estimated at . A more recent estimate by Tsvetkov (and others) puts the mass at around . It was an iron meteorite belonging to the chemical group IIAB and with a coarse octahedrite structure. More than 70 tonnes (metric tons) of material survived the collision.
A case of a human injured by a space rock occurred on November 30, 1954, in Sylacauga, Alabama. There a stone chondrite crashed through a roof and hit Ann Hodges in her living room after it bounced off her radio. She was badly bruised by the fragments. Several persons have since claimed to have been struck by "meteorites" but no verifiable meteorites have resulted.
A small number of meteorite falls have been observed with automated cameras and recovered following calculation of the impact point. The first was the Příbram meteorite, which fell in Czechoslovakia (now the Czech Republic) in 1959. In this case, two cameras used to photograph meteors captured images of the fireball. The images were used both to determine the location of the stones on the ground and, more significantly, to calculate for the first time an accurate orbit for a recovered meteorite.
Following the Příbram fall, other nations established automated observing programs aimed at studying infalling meteorites. One of these was the Prairie Meteorite Network, operated by the Smithsonian Astrophysical Observatory from 1963 to 1975 in the midwestern U.S. This program also observed a meteorite fall, the "Lost City" chondrite, allowing its recovery and a calculation of its orbit. Another program in Canada, the Meteorite Observation and Recovery Project, ran from 1971 to 1985. It too recovered a single meteorite, "Innisfree", in 1977. Finally, observations by the European Fireball Network, a descendant of the original Czech program that recovered Příbram, led to the discovery and orbit calculations for the Neuschwanstein meteorite in 2002.
On August 10, 1972, a meteor which became known as the 1972 Great Daylight Fireball was witnessed by many people as it moved north over the Rocky Mountains from the U.S. Southwest to Canada. It was filmed by a tourist at the Grand Teton National Park in Wyoming with an 8-millimeter color movie camera. In size range the object was roughly between a car and a house, and while it could have ended its life in a Hiroshima-sized blast, there was never any explosion. Analysis of the trajectory indicated that it never came much lower than off the ground, and the conclusion was that it had grazed Earth's atmosphere for about 100 seconds, then skipped back out of the atmosphere to return to its orbit around the Sun.
Many impact events occur without being observed by anyone on the ground. Between 1975 and 1992, American missile early warning satellites picked up 136 major explosions in the upper atmosphere. In the November 21, 2002, edition of the journal Nature, Peter Brown of the University of Western Ontario reported on his study of U.S. early warning satellite records for the preceding eight years. He identified 300 flashes caused by meteors in that time period and estimated the rate of Tunguska-sized events as once in 400 years. Eugene Shoemaker estimated that an event of such magnitude occurs about once every 300 years, though more recent analyses have suggested he may have overestimated by an order of magnitude.
In the dark morning hours of January 18, 2000, a fireball exploded over the city of Whitehorse, Yukon Territory at an altitude of about , lighting up the night like day. The meteor that produced the fireball was estimated to be about in diameter, with a weight of 180 tonnes. This blast was also featured on the Science Channel series Killer Asteroids, with several witness reports from residents in Atlin, British Columbia.
21st-century impacts
On 7 June 2006, a meteor was observed striking a location in the Reisadalen valley in Nordreisa Municipality in Troms County, Norway. Although initial witness reports stated that the resultant fireball was equivalent to the Hiroshima nuclear explosion, scientific analysis places the force of the blast at anywhere from 100 to 500 tonnes TNT equivalent, around three percent of Hiroshima's yield.
On 15 September 2007, a chondritic meteor crashed near the village of Carancas in southeastern Peru near Lake Titicaca, leaving a water-filled hole and spewing gases across the surrounding area. Many residents became ill, apparently from the noxious gases shortly after the impact.
On 7 October 2008, an approximately 4 meter asteroid labeled 2008 TC3 was tracked for 20 hours as it approached Earth and as it fell through the atmosphere and impacted in Sudan. This was the first time an object was detected before it reached the atmosphere, and hundreds of pieces of the meteorite were recovered from the Nubian Desert.
On 15 February 2013, an asteroid entered Earth's atmosphere over Russia as a fireball and exploded above the city of Chelyabinsk during its passage through the Ural Mountains region at 09:13 YEKT (03:13 UTC). The object's air burst occurred at an altitude between above the ground, and about 1,500 people were injured, mainly by broken window glass shattered by the shock wave. Two were reported in serious condition; however, there were no fatalities. Initially some 3,000 buildings in six cities across the region were reported damaged due to the explosion's shock wave, a figure which rose to over 7,200 in the following weeks. The Chelyabinsk meteor was estimated to have caused over $30 million in damage. It is the largest recorded object to have encountered the Earth since the 1908 Tunguska event. The meteor is estimated to have an initial diameter of 17–20 metres and a mass of roughly 10,000 tonnes. On 16 October 2013, a team from Ural Federal University led by Victor Grokhovsky recovered a large fragment of the meteor from the bottom of Russia's Lake Chebarkul, about 80 km west of the city.
On 1 January 2014, a 3-meter (10 foot) asteroid, 2014 AA, was discovered by the Mount Lemmon Survey and observed over the next hour, and was soon found to be on a collision course with Earth. The exact impact location was uncertain, constrained to a line between Panama, the central Atlantic Ocean, The Gambia, and Ethiopia. Around the expected time (2 January, 3:06 UTC) an infrasound burst was detected near the center of the impact range, in the middle of the Atlantic Ocean. This marks the second time a natural object was identified prior to impacting Earth, after 2008 TC3.
Nearly two years later, on October 3, WT1190F was detected orbiting Earth on a highly eccentric orbit, taking it from well within the Geocentric satellite ring to nearly twice the orbit of the Moon. It was estimated to be perturbed by the Moon onto a collision course with Earth on November 13. With over a month of observations, as well as precovery observations found dating back to 2009, it was found to be far less dense than a natural asteroid should be, suggesting that it was most likely an unidentified artificial satellite. As predicted, it fell over Sri Lanka at 6:18 UTC (11:48 local time). The sky in the region was very overcast, so only an airborne observation team was able to successfully observe it falling above the clouds. It is now thought to be a remnant of the Lunar Prospector mission in 1998, and is the third time any previously unknown object – natural or artificial – was identified prior to impact.
On 22 January 2018, an object, A106fgF, was discovered by the Asteroid Terrestrial-impact Last Alert System (ATLAS) and identified as having a small chance of impacting Earth later that day. As it was very dim, and only identified hours before its approach, no more than the initial 4 observations covering a 39-minute period were made of the object. It is unknown if it impacted Earth or not, but no fireball was detected in either infrared or infrasound, so if it did, it would have been very small, and likely near the eastern end of its potential impact area – in the western Pacific Ocean.
On 2 June 2018, the Mount Lemmon Survey detected 2018 LA (ZLAF9B2), a small 2–5 meter asteroid which further observations soon found had an 85% chance of impacting Earth. Soon after the impact, a fireball report from Botswana reached the American Meteor Society. Further observations with ATLAS extended the observation arc from 1 hour to 4 hours and confirmed that the asteroid's orbit indeed impacted Earth in southern Africa, fully closing the loop with the fireball report and making this the third natural object confirmed to impact Earth, and the second on land, after 2008 TC3.
On 8 March 2019, NASA announced the detection of a large airburst that occurred on 18 December 2018 at 11:48 local time off the eastern coast of the Kamchatka Peninsula. The Kamchatka superbolide is estimated to have had a mass of roughly 1600 tons, and a diameter of 9 to 14 meters depending on its density, making it the third largest asteroid to impact Earth since 1900, after the Chelyabinsk meteor and the Tunguska event. The fireball exploded in an airburst above Earth's surface.
2019 MO, an approximately 4m asteroid, was detected by ATLAS a few hours before it impacted the Caribbean Sea near Puerto Rico in June 2019.
In 2023, a small meteorite is believed to have crashed through the roof of a home in Trenton, New Jersey. The metallic rock was approximately 4 inches by 6 inches and weighed 4 pounds. The item was seized by police and tested for radioactivity. The object was later confirmed to be a meteorite by scientists at The College of New Jersey, as well as meteorite expert Jerry Delaney, who previously worked at Rutgers University and the American Museum of Natural History.
Asteroid impact prediction
In the late 20th and early 21st century, scientists put in place measures to detect near-Earth objects and to predict the dates, times, and locations of asteroid impacts on Earth. The International Astronomical Union Minor Planet Center (MPC) is the global clearing house for information on asteroid orbits. NASA's Sentry System continually scans the MPC catalog of known asteroids, analyzing their orbits for any possible future impacts. Currently none are predicted (the single highest probability impact currently listed is the ~7 m asteroid , which is due to pass Earth in September 2095 with only a 5% predicted chance of impacting).
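As an illustration of how such risk-list data can be consulted programmatically, the sketch below queries the publicly documented JPL Sentry web API; the URL and the response field used are assumptions based on that service rather than anything stated in the text above.

    # Hypothetical query of JPL's Sentry risk list; the URL and field names are assumptions.
    import json
    import urllib.request

    def fetch_sentry_objects(url: str = "https://ssd-api.jpl.nasa.gov/sentry.api") -> list:
        with urllib.request.urlopen(url) as response:
            payload = json.load(response)
        # In summary mode the service is expected to return a "data" list of monitored objects.
        return payload.get("data", [])

    if __name__ == "__main__":
        objects = fetch_sentry_objects()
        print(f"{len(objects)} objects are currently being monitored for possible future impacts")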
Currently prediction is mainly based on cataloging asteroids years before they are due to impact. This works well for larger asteroids (> 1 km across) as they are easily seen from a long distance. Over 95% of them are already known and their orbits have been measured, so any future impacts can be predicted long before they are on their final approach to Earth. Smaller objects are too faint to observe except when they come very close, and so most cannot be observed before their final approach. Current mechanisms for detecting asteroids on final approach rely on wide-field ground-based telescopes, such as the ATLAS system. However, current telescopes only cover part of the Earth and, even more importantly, cannot detect asteroids on the day-side of the planet, which is why so few of the smaller asteroids that commonly impact Earth are detected during the few hours that they would be visible.
So far only four impact events have been successfully predicted, all from innocuous 2–5 m diameter asteroids and detected a few hours in advance.
Current response status
In April 2018, the B612 Foundation reported "It's 100 per cent certain we’ll be hit [by a devastating asteroid], but we're not 100 per cent certain when." Also in 2018, physicist Stephen Hawking, in his final book Brief Answers to the Big Questions, considered an asteroid collision to be the biggest threat to the planet. In June 2018, the US National Science and Technology Council warned that America is unprepared for an asteroid impact event, and has developed and released the "National Near-Earth Object Preparedness Strategy Action Plan" to better prepare. According to expert testimony in the United States Congress in 2013, NASA would require at least five years of preparation to launch a mission to intercept an asteroid. The preferred method is to deflect rather than disrupt an asteroid.
Elsewhere in the Solar System
Evidence of massive past impact events
Impact craters provide evidence of past impacts on other planets in the Solar System, including possible interplanetary terrestrial impacts. Without carbon dating, other points of reference are used to estimate the timing of these impact events. Mars provides some significant evidence of possible interplanetary collisions. The North Polar Basin on Mars is speculated by some to be evidence for a planet-sized impact on the surface of Mars between 3.8 and 3.9 billion years ago, while Utopia Planitia is the largest confirmed impact and Hellas Planitia is the largest visible crater in the Solar System. The Moon provides similar evidence of massive impacts, with the South Pole–Aitken basin being the biggest. Mercury's Caloris Basin is another example of a crater formed by a massive impact event. Rheasilvia on Vesta is an example of a crater formed by an impact that, relative to the size of the body, was capable of severely deforming a planetary-mass object. Impact craters on the moons of Saturn, such as Engelier and Gerin on Iapetus, Mamaldi on Rhea, and Odysseus on Tethys and Herschel on Mimas, form significant surface features. Models developed in 2018 to explain the unusual spin of Uranus support a long-held hypothesis that this was caused by an oblique collision with a massive object twice the size of Earth.
Observed events
Jupiter
Jupiter is the most massive planet in the Solar System, and because of its large mass it has a vast sphere of gravitational influence, the region of space where an asteroid capture can take place under favorable conditions.
Jupiter is able to capture comets in orbit around the Sun with some frequency. In general, these comets complete several revolutions around the planet on unstable, highly elliptical orbits that are easily perturbed by solar gravity. While some of them eventually return to a heliocentric orbit, others crash into the planet or, more rarely, into its satellites.
In addition to the mass factor, its relative proximity to the inner Solar System allows Jupiter to influence the distribution of minor bodies there. For a long time it was believed that these characteristics led the gas giant to expel most of the wandering objects in its vicinity from the system, or to capture them, and consequently to reduce the number of potentially dangerous objects for the Earth. Subsequent dynamical studies have shown that the situation is in fact more complex: the presence of Jupiter tends to reduce the frequency of impacts on the Earth of objects coming from the Oort cloud, while it increases it in the case of asteroids and short-period comets.
For this reason Jupiter is the planet of the Solar System characterized by the highest frequency of impacts, which justifies its reputation as the "sweeper" or "cosmic vacuum cleaner" of the Solar System. 2009 studies suggest an impact frequency of one every 50–350 years, for an object of 0.5–1 km in diameter; impacts with smaller objects would occur more frequently. Another study estimated that comets in diameter impact the planet once in approximately 500 years and those in diameter do so just once in every 6,000 years.
In July 1994, Comet Shoemaker–Levy 9 was a comet that broke apart and collided with Jupiter, providing the first direct observation of an extraterrestrial collision of Solar System objects. The event served as a "wake-up call", and astronomers responded by starting programs such as Lincoln Near-Earth Asteroid Research (LINEAR), Near-Earth Asteroid Tracking (NEAT), Lowell Observatory Near-Earth Object Search (LONEOS) and several others which have drastically increased the rate of asteroid discovery.
The 2009 impact event happened on July 19 when a new black spot about the size of Earth was discovered in Jupiter's southern hemisphere by amateur astronomer Anthony Wesley. Thermal infrared analysis showed it was warm and spectroscopic methods detected ammonia. JPL scientists confirmed that there was another impact event on Jupiter, probably involving a small undiscovered comet or other icy body. The impactor is estimated to have been about 200–500 meters in diameter.
Later minor impacts were observed by amateur astronomers in 2010, 2012, 2016, and 2017; one impact was observed by Juno in 2020.
Other impacts
In 1998, two comets were observed plunging toward the Sun in close succession. The first of these was on June 1 and the second the next day. A video of this, followed by a dramatic ejection of solar gas (unrelated to the impacts), can be found at the NASA website. Both of these comets evaporated before coming into contact with the surface of the Sun. According to a theory by NASA Jet Propulsion Laboratory scientist Zdeněk Sekanina, the latest impactor to actually make contact with the Sun was the "supercomet" Howard-Koomen-Michels, also known as Solwind 1, on August 30, 1979. | Physical sciences | Planetary science | null