Garlic

Garlic (Allium sativum) is a species of bulbous flowering plant in the genus Allium. Its close relatives include the onion, shallot, leek, chives, Welsh onion, and Chinese onion. It is native to Central Asia, South Asia, and northeastern Iran. It has long been used as a seasoning and culinary ingredient worldwide, with a history of several thousand years of human consumption and use, including use in traditional medicine. It was known to the ancient Egyptians and other ancient cultures, and its consumption has had a significant culinary and cultural impact, especially across the Mediterranean region and parts of Asia. It is produced globally, but the largest producer is China, which produced 73% of the world's supply of garlic in 2021. There are two subspecies and hundreds of varieties of garlic.
Description
Garlic is a perennial flowering plant that is native to Central Asia, South Asia, and northeastern Iran, and grows from a bulb. It has a tall, erect flowering stem that can reach about 1 metre (3 feet) in height. The leaf blade is flat, linear, and solid, with an acute apex. The plant may produce pink to purple flowers from July to September in the Northern Hemisphere. The bulb has a strong odor and is typically made up of 10 to 20 cloves; those close to the center are symmetrical, while those surrounding the center can be asymmetrical. Each clove is enclosed in an inner sheathing leaf surrounded by layers of outer sheathing leaves. If garlic is planted at the proper time and depth, it can be grown as far north as Alaska. It produces hermaphroditic flowers and is pollinated by butterflies, moths, and other insects.
Chemistry
Fresh or crushed garlic yields the sulfur-containing compounds allicin, ajoene, diallyl polysulfides, vinyldithiins, and S-allylcysteine, as well as enzymes, saponins, and flavonoids; cooking additionally produces Maillard reaction products, which are not sulfur-containing compounds.
The phytochemicals responsible for the sharp flavor of garlic are produced when the plant's cells are damaged. When a cell is broken by chopping, chewing, or crushing, enzymes stored in cell vacuoles trigger the breakdown of several sulfur-containing compounds stored in the cell fluids (cytosol). The resultant compounds are responsible for the sharp or hot taste and strong smell of garlic. Some of the compounds are unstable and continue to react over time.
Among alliums, garlic has by far the highest concentrations of initial reaction products, making garlic much more potent than onion, shallot, or leeks. Although many humans enjoy the taste of garlic, these compounds are believed to have evolved as a defensive mechanism, deterring animals such as birds, insects, and worms from eating the plant.
A large number of sulfur compounds contribute to the smell and taste of garlic. Allicin has been found to be the compound most responsible for the "hot" sensation of raw garlic: it opens thermo-transient receptor potential channels that are responsible for the burning sense of heat in foods. Cooking garlic removes allicin, thus mellowing its spiciness. Allicin, along with its decomposition products diallyl disulfide and diallyl trisulfide, is a major contributor to the characteristic odor of garlic, together with other allicin-derived compounds such as vinyldithiins and ajoene.
Taxonomy
Identification of the wild progenitor of common garlic is difficult due to the sterility of its many cultivars, which limits the ability to cross test with wild relatives. Genetically and morphologically, garlic is most similar to the wild species Allium longicuspis, which grows in central and southwestern Asia. However, because A. longicuspis is also mostly sterile, it is doubtful that it is the ancestor of A. sativum. Other candidates that have been suggested include A. tuncelianum, A. macrochaetum, and A. truncatum, all of which are native to the Middle East.
Allium sativum grows in the wild in areas where it has become naturalized. The "wild garlic", "crow garlic", and "field garlic" of Britain are members of the species A. ursinum, A. vineale, and A. oleraceum, respectively. In North America, A. vineale (known as "wild garlic" or "crow garlic") and Allium canadense (known as "meadow garlic", "wild garlic", or "wild onion") are common weeds in fields. So-called elephant garlic is actually a wild leek (A. ampeloprasum) and not a true garlic. Single clove garlic (also called pearl or solo garlic) originated in the Yunnan province of China.
Subspecies and varieties
There are two subspecies of A. sativum, ten major groups of varieties, and hundreds of varieties, or cultivars.
A. sativum var. ophioscorodon (Link) Döll, called Ophioscorodon or hardneck garlic, includes porcelain garlics, rocambole garlic, and purple stripe garlics. It is sometimes considered to be a separate species, Allium ophioscorodon G.Don.
A. sativum var. sativum, or softneck garlic, includes artichoke garlic, silverskin garlic, and creole garlic.
There are at least 120 cultivars originating from Central Asia, making it the main center of garlic biodiversity.
Some garlics have protected status in the UK and the EU.
Etymology
The word garlic derives from Old English garlēac, a compound of gar ('spear') and leek: a 'spear-shaped leek'.
Ecology
Garlic plants are usually hardy and not affected by many pests or diseases. Garlic plants are said to repel rabbits and moles. The California Department of Food and Agriculture conducts a certification program to assure freedom from nematode and white rot disease caused by Stromatinia cepivora, two pathogens that can both destroy a crop and remain in the soil indefinitely once introduced. Garlic may also suffer from pink root, a typically non-fatal disease that stunts the roots and turns them pink or red; or leek rust, which usually appears as bright orange spots. The larvae of the leek moth attack garlic by mining into the leaves or bulbs.
Botrytis neck and bulb rot is a disease of onion, garlic, leek, and shallot. Botrytis allii and Botrytis aclada cause this disease in onion, and Botrytis porri causes it in garlic. According to the University of California: "Initial symptoms usually begin at the neck, where affected tissue softens, becomes water-soaked, and turns brown. In a humid atmosphere, a gray and feltlike growth (where spores are produced) appears on rotting scales, and mycelia may develop between scales. Dark-brown-to-black sclerotia (the resting bodies of the pathogen) may eventually develop in the neck or between scales."
Cultivation
Garlic is easy to cultivate and may grow year-round in mild climates. While sexual propagation of garlic is possible, nearly all of the garlic in cultivation is propagated asexually by planting individual cloves in the ground.
In colder climates, cloves are best planted about six weeks before the soil freezes. The goal is to have the bulbs produce only roots and no shoots above the ground.
Harvest is in late spring or early summer.
Garlic plants can be grown closely together, leaving enough space for the bulbs to mature, and are easily grown in containers of sufficient depth. Garlic does well in loose, dry, well-drained soils in sunny locations, and is hardy throughout USDA climate zones 4–9. When selecting garlic for planting, it is important to pick large bulbs from which to separate cloves. Large cloves, along with proper spacing in the planting bed, will also increase bulb size. Garlic plants prefer to grow in a soil with a high organic material content, but are capable of growing in a wide range of soil conditions and pH levels.
There are different varieties of garlic, most notably split into the subspecies of hardneck garlic and softneck garlic. The latitude where the garlic is grown affects the choice of type, as garlic can be day-length sensitive. Hardneck garlic is generally grown in cooler climates and produces relatively large cloves, whereas softneck garlic is generally grown closer to the equator and produces small, tightly packed cloves.
Garlic scapes are removed to focus all the garlic's energy into bulb growth. The scapes can be eaten raw or cooked.
Propagation
The method of propagating garlic by planting cloves is called division. Asexual propagation of garlic for production purposes requires a period of cool temperatures, the length of which varies by cultivar: hardneck varieties require long exposure to cold, whereas softneck varieties thrive in milder climates. This cold exposure is required for vernalization, a form of stratification of the cloves necessary for the development of multiple-clove bulbs. Solo garlic results when garlic is grown without vernalization.
Production
In 2021, world production of garlic was 28 million tonnes, with China alone accounting for 73% of the total.
Adverse effects and toxicology
The scent of garlic is known to linger upon the human body and cause bad breath (halitosis) and body odor. This is caused by allyl methyl sulfide (AMS). AMS is a volatile liquid which is absorbed into the blood during the metabolism of garlic-derived sulfur compounds; from the blood it travels to the lungs (and from there to the mouth, causing garlic breath) and skin, where it is exuded through skin pores. Since digestion takes several hours, and release of AMS several hours more, the effect of eating garlic may be present for a long time. Washing the skin with soap is only a partial and imperfect solution to the smell. Studies have shown sipping milk at the same time as consuming garlic can significantly neutralize bad breath. Mixing garlic with milk in the mouth before swallowing reduced the odor better than drinking milk afterward. Plain water, mushrooms, and basil may also reduce the odor; the mix of fat and water found in milk, however, was the most effective. Garlic breath is allegedly alleviated by eating fresh parsley.
Abundant sulfur compounds in garlic are also responsible for turning garlic green or blue during pickling and cooking. Under these conditions (i.e., acidity, heat) the sulfur-containing compound alliin reacts with common amino acids to make pyrroles, clusters of carbon-nitrogen rings. These rings can be linked together into polypyrrole molecules. Ring structures absorb particular wavelengths of light and thus appear colored. The two-pyrrole molecule looks red, the three-pyrrole molecule looks blue, and the four-pyrrole molecule looks green (like chlorophyll, a tetrapyrrole). Like chlorophyll, the pyrrole pigments are safe to eat. Upon cutting, similar to a color change in onion caused by reactions of amino acids with sulfur compounds, garlic can turn green.
The green, dry "folds" in the center of the garlic clove are especially pungent. The sulfur compound allicin, produced by crushing or chewing fresh garlic, produces other sulfur compounds: ajoene, allyl polysulfides, and vinyldithiins. Aged garlic lacks allicin, but may have some activity due to the presence of S-allylcysteine.
Some people suffer from allergies to garlic and other species of Allium. Symptoms can include irritable bowel, diarrhea, mouth and throat ulcerations, nausea, breathing difficulties, and, in rare cases, anaphylaxis. Garlic-sensitive people show positive tests to diallyl disulfide, allylpropyldisulfide, allylmercaptan, and allicin, all of which are present in garlic. People who suffer from garlic allergies are often sensitive to many other plants, including onions, chives, leeks, shallots, garden lilies, ginger, and bananas.
Several reports of serious burns resulting from garlic being applied topically for various purposes, including naturopathic uses and acne treatment, indicate care must be taken for these uses, usually testing a small area of skin using a low concentration of garlic. On the basis of numerous reports of such burns, including burns to children, topical use of raw garlic, as well as insertion of raw garlic into body cavities, is discouraged. In particular, topical application of raw garlic to young children is not advisable.
The side effects of long-term garlic supplementation are largely unknown. Possible side effects include gastrointestinal discomfort, sweating, dizziness, allergic reactions, bleeding, and menstrual irregularities.
Some breastfeeding mothers have found, after consuming garlic, that their babies can be slow to feed, and have noted a garlic odor coming from them.
If higher-than-recommended doses of garlic are taken with anticoagulant medications, this can lead to a higher risk of bleeding. Garlic may interact with warfarin, saquinavir, antihypertensives, calcium channel blockers, the quinolone family of antibiotics such as ciprofloxacin, and hypoglycemic drugs, as well as other medications. The American Veterinary Medical Association considers garlic to be toxic to pets.
Uses
Because of sulfur compounds circulating in blood, consumed garlic may act as a mosquito repellent, although there is no scientific evidence of its efficacy.
Nutrition
In the typical serving size of 1–3 cloves (3–9 grams), raw garlic provides no significant nutritional value, with the content of all essential nutrients below 10% of the Daily Value (DV). In a reference amount of 100 grams, raw garlic contains some micronutrients in rich amounts (20% or more of the DV), including vitamin B6 (73% DV), vitamin C (35% DV), and the dietary mineral manganese (73% DV). Per 100-gram serving, raw garlic is a moderate source (10–19% DV) of the B vitamins thiamin and pantothenic acid, as well as the dietary minerals calcium, potassium, phosphorus, and zinc.
The composition of raw garlic is 59% water, 33% carbohydrates, 6% protein, 2% dietary fiber, and less than 1% fat.
Culinary
Garlic is widely used around the world for its pungent flavor as a seasoning or condiment.
The garlic plant's bulb is the most commonly used part of the plant. With the exception of the single clove types, garlic bulbs are normally divided into numerous fleshy sections called cloves. Garlic cloves are used for consumption (raw or cooked) or for medicinal purposes. They have a characteristic pungent, spicy flavor that mellows and sweetens considerably with cooking. The distinctive aroma is mainly due to organosulfur compounds including allicin present in fresh garlic cloves and ajoene which forms when they are crushed or chopped. A further metabolite allyl methyl sulfide is responsible for garlic breath.
Other parts of the garlic plant are also edible. The leaves and flowers (bulbils) on the head (spathe) are sometimes eaten. They are milder in flavor than the bulbs, and are most often consumed while immature and still tender. Immature garlic is sometimes pulled, rather like a scallion, and sold as "green garlic". When green garlic is allowed to grow past the "scallion" stage, but not permitted to fully mature, it may produce a garlic "round", a bulb like a boiling onion, but not separated into cloves like a mature bulb.
Green garlic imparts a garlic flavor and aroma in food, minus the spiciness. Green garlic is often chopped and stir-fried or cooked in soup or hot pot in Southeast Asian (i.e. Vietnamese, Thai, Myanmar, Lao, Cambodian, Singaporean), and Chinese cookery, and is very abundant and low-priced. Additionally, the immature flower stalks (scapes) of the hardneck are sometimes marketed for uses similar to asparagus in stir-fries.
Inedible or rarely eaten parts of the garlic plant include the "skin" covering each clove and root cluster. The papery, protective layers of "skin" over various parts of the plant are generally discarded during preparation for most culinary uses, though in Korea immature whole heads are sometimes prepared with the tender skins intact. The root cluster attached to the basal plate of the bulb is the only part not typically considered palatable in any form.
Garlic may be roasted: one method is to cut the top off the bulb, coat the cloves by dribbling olive oil (or another oil-based seasoning) over them, and roast them in an oven. The garlic softens and can be extracted from the cloves by squeezing the (root) end of the bulb, or individually by squeezing one end of the clove. In Korea, heads of garlic are heated over the course of several weeks; the resulting product, called black garlic, is sweet and syrupy, and is exported to the United States, United Kingdom, and Australia.
Garlic may be applied to different kinds of bread, usually in a medium of butter or oil, to create a variety of classic dishes, such as garlic bread, garlic toast, bruschetta, crostini, and canapé. The flavor varies in intensity and aroma with the different cooking methods. It is often paired with onion, tomato, or ginger.
Immature scapes are tender and edible. They are also known as "garlic spears", "stems", or "tops". Scapes generally have a milder taste than the cloves. They are often used in stir frying or braised like asparagus. Garlic leaves are a popular vegetable in many parts of Asia. The leaves are cut, cleaned, and then stir-fried with eggs, meat, or vegetables.
Garlic powder is made from dehydrated garlic and can be used as a substitute for fresh garlic, though the taste is not quite the same. Garlic salt combines garlic powder with table salt.
Regions
Garlic is a fundamental component in many or most dishes of various regions, including eastern Asia, South Asia, Southeast Asia, the Middle East, northern Africa, southern Europe, Eastern Europe, and parts of Latin America. Latin American cooking in particular uses garlic, as in sofrito and mofongo.
Oils can be flavored with garlic cloves. These infused oils are used to season all categories of vegetables, meats, breads, and pasta. Garlic, combined with fish sauce, chopped fresh chilis, lime juice, sugar, and water, forms a dipping sauce widely used in Indochina. In East and Southeast Asia, chili oil with garlic is a popular dipping sauce, especially for meat and seafood. Tuong ot toi Viet Nam (Vietnamese chili garlic sauce) is a highly popular condiment and dip across North America and Asia.
In some cuisines, the young bulbs are pickled for three to six weeks in a mixture of sugar, salt, and spices. In eastern Europe, the shoots are pickled and eaten as an appetizer. Laba garlic, prepared by soaking garlic in vinegar, is a type of pickled garlic served with dumplings in northern China to celebrate the Chinese New Year.
Garlic is essential in Middle Eastern and Arabic cooking, with its presence in many food items. In the Levant, garlic is traditionally crushed together with olive oil, and occasionally salt, to create a Middle Eastern garlic sauce called Toum (تُوم; meaning "garlic" in Arabic). While not exclusively served with meats, toum is commonly paired with chicken or other meat dishes such as shawarma. Garlic is also a key component in some hummus varieties, an Arabic dip composed of chickpeas, tahini, garlic, lemon juice, and salt.
Lightly smoked garlic is used in British and other European cuisine. It is particularly prized for stuffing poultry and game, and in soups and stews.
Emulsifying garlic with olive oil produces aioli. Garlic, oil, and a chunky base produce skordalia. Crushed garlic, oil, and water produce a strong flavored sauce, mujdei. Blending garlic, almond, oil, and soaked bread produces ajoblanco. Tzatziki, yogurt mixed with garlic and salt, is a common sauce in Eastern Mediterranean cuisines.
Culinary history
Numerous cuneiform records show that garlic has been cultivated in Mesopotamia for at least 4,000 years. The use of garlic in China and Egypt also dates back thousands of years. Well-preserved garlic was found in the tomb of Tutankhamun (c. 1325 BC). It was consumed by ancient Greek and Roman soldiers, sailors, and rural classes (Virgil, Eclogues ii. 11), and, according to Pliny the Elder (Natural History xix. 32), by the African peasantry. Garlic was placed by the ancient Greeks on the piles of stones at crossroads, as a supper for Hecate (Theophrastus, Characters, The Superstitious Man).
Garlic was rare in traditional English cuisine (though it is said to have been grown in England before 1548) but has been a common ingredient in Mediterranean Europe. Translations of the Assize of Weights and Measures, an English statute generally dated to the 13th century, indicate a passage as dealing with standardized units of garlic production, sale, and taxation—the hundred of 15 ropes of 15 heads each—but the Latin version of the text may refer to herring rather than garlic.
Storage
Domestically, garlic is stored warm, above 18 °C (64 °F), and dry to keep it dormant and inhibit sprouting. It is traditionally hung; softneck varieties are often braided in strands called plaits or grappes. Peeled cloves may be stored in wine or vinegar in the refrigerator. Commercially, garlic is stored at 0 °C (32 °F) in a dry, low-humidity environment. Garlic will keep longer if the tops remain attached.
Garlic is often kept in oil to produce flavored oil; however, this practice requires measures to prevent spoilage, which may include rancidity and the growth of Clostridium botulinum. Acidification with a mild vinegar solution minimizes bacterial growth. Refrigeration does not assure the safety of garlic kept in oil, and use within one month is required to avoid bacterial spoilage. Garlic can also be dried at low temperatures to preserve its enzymatic activity, sold and kept as garlic granules, and rehydrated to reactivate it.
Stored garlic can be affected by Penicillium decay known as "blue mold" (or "green mold" in some locales), especially in high humidity. Infection may first appear as soft or water-soaked spots, followed by white patches (of mycelium) which turn blue or green with sporulation. As sporulation and germination are delayed at low temperature, and at −4 °C are inhibited entirely, in refrigerated cloves one may only see the white mycelium during early stages. Penicillium hirsutum and Penicillium allii are two of the predominant species identified in blue mold.
Medical research
Cardiovascular
As of 2016, clinical research found that consuming garlic produces only a small reduction in blood pressure (4 mmHg), and there is no clear long-term effect on hypertension, cardiovascular morbidity or mortality. A 2016 meta-analysis indicated there was no effect of garlic consumption on blood levels of lipoprotein(a), a biomarker of atherosclerosis.
Because garlic might reduce platelet aggregation, people taking anticoagulant medication are cautioned about consuming garlic.
Cancer
Two reviews found no effect of consuming garlic on colorectal cancer. A 2016 meta-analysis of case-control and cohort studies found a moderate inverse association between garlic intake and some cancers of the upper digestive tract.
Common cold
A 2014 review found insufficient evidence to determine the effects of garlic in preventing or treating the common cold. Other reviews concluded a similar absence of high-quality evidence for garlic having a significant effect on the common cold.
Folk medicine
Garlic has been used for traditional medicine in diverse cultures such as in Korea, Egypt, Japan, China, Rome, and Greece. In his Natural History, Pliny gave a list of conditions in which garlic was considered beneficial (N.H. xx. 23). Galen, writing in the second century, eulogized garlic as the "rustic's theriac" (cure-all) (see F. Adams' Paulus Aegineta, p. 99). Alexander Neckam, a writer of the 12th century (see Wright's edition of his works, p. 473, 1863), discussed it as a palliative for the heat of the sun in field labor. In the 17th century, Thomas Sydenham valued it as an application in confluent smallpox, and William Cullen's Materia Medica of 1789 found some dropsies cured by it alone.
Other uses
The sticky juice within the bulb cloves is used as an adhesive in mending glass and porcelain. An environmentally benign garlic-derived polysulfide product is approved for use in the European Union (under Annex 1 of 91/414) and the UK as a nematicide and insecticide, including for use in the control of cabbage root fly and red mite in poultry.
In culture
Garlic is present in the folklore of many cultures. In Europe, many cultures have used garlic for protection or white magic, perhaps owing to its reputation in folk medicine. Central European folk beliefs considered garlic a powerful ward against demons, werewolves, and vampires. To ward off vampires, garlic could be worn, hung in windows, or rubbed on chimneys and keyholes.
In the foundation myth of the ancient Korean kingdom of Gojoseon, eating nothing but 20 cloves of garlic and a bundle of Korean mugwort for 100 days let a bear be transformed into a woman.
In celebration of Nowruz (Persian calendar New Year), garlic is one of the essential items in a Haft-sin ("seven things beginning with 'S'") table, a traditional New Year's display: the name for garlic in Persian is سیر (seer), which begins with "س" (sin, pronounced "seen") the Perso-Arabic letter corresponding to "S".
In Islam, it is recommended not to eat raw garlic prior to going to the mosque. This is based on several hadith.
Some Mahāyāna Buddhists and sects in China and Vietnam avoid eating onions, garlic, scallions, chives, and leeks, which are known as wu hun (五荤, 'the five forbidden pungent vegetables').
Because of its strong odor, garlic is sometimes called the "stinking rose".
Flood

A flood is an overflow of water (or rarely other fluids) that submerges land that is usually dry. In the sense of "flowing water", the word may also be applied to the inflow of the tide. Floods are of significant concern in agriculture, civil engineering, and public health. Human changes to the environment often increase the intensity and frequency of flooding; examples include land-use changes such as deforestation and removal of wetlands, changes in waterway course, and flood controls such as levees. Global environmental issues also influence the causes of floods, namely climate change, which causes an intensification of the water cycle and sea level rise. For example, climate change makes extreme weather events more frequent and stronger, leading to more intense floods and increased flood risk.
Natural types of floods include river flooding, groundwater flooding, coastal flooding, and urban flooding, sometimes known as flash flooding. Tidal flooding may include elements of both river and coastal flooding processes in estuary areas. There is also the intentional flooding of land that would otherwise remain dry. This may take place for agricultural, military, or river-management purposes. For example, agricultural flooding may occur in preparing paddy fields for the growing of semi-aquatic rice in many countries.
Flooding may occur as an overflow of water from water bodies, such as a river, lake, sea or ocean. In these cases, the water overtops or breaks levees, resulting in some of that water escaping its usual boundaries. Flooding may also occur due to an accumulation of rainwater on saturated ground. This is called an areal flood. The size of a lake or other body of water naturally varies with seasonal changes in precipitation and snow melt. Those changes in size are however not considered a flood unless they flood property or drown domestic animals.
Floods can also occur in rivers when the flow rate exceeds the capacity of the river channel, particularly at bends or meanders in the waterway. Floods often cause damage to homes and businesses if these buildings are in the natural flood plains of rivers. People could avoid riverine flood damage by moving away from rivers. However, people in many countries have traditionally lived and worked by rivers because the land is usually flat and fertile. Also, the rivers provide easy travel and access to commerce and industry.
Flooding can damage property and also lead to secondary impacts. These include, in the short term, an increased spread of waterborne diseases and vector-borne diseases, for example those transmitted by mosquitoes. Flooding can also lead to long-term displacement of residents. Floods are an area of study of hydrology and hydraulic engineering.
A large amount of the world's population lives in close proximity to major coastlines, while many major cities and agricultural areas are located near floodplains. There is significant risk for increased coastal and fluvial flooding due to changing climatic conditions.
Types
Areal flooding
Floods can happen on flat or low-lying areas when water is supplied by rainfall or snowmelt more rapidly than it can either infiltrate or run off. The excess accumulates in place, sometimes to hazardous depths. Surface soil can become saturated, which effectively stops infiltration; this occurs where the water table is shallow, such as on a floodplain, or after intense rain from one or a series of storms. Infiltration also is slow to negligible through frozen ground, rock, concrete, paving, or roofs. Areal flooding begins in flat areas like floodplains and in local depressions not connected to a stream channel, because the velocity of overland flow depends on the surface slope. Endorheic basins may experience areal flooding during periods when precipitation exceeds evaporation.
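The water balance described above can be illustrated with a minimal sketch. The rainfall rates and the constant infiltration capacity below are hypothetical illustration values, not field data, and real infiltration capacity declines over the course of a storm rather than staying fixed.

```python
# Minimal sketch of areal flooding: surface water begins to accumulate
# once the rainfall rate exceeds the soil's infiltration capacity.

def ponded_depth_mm(rain_rates_mm_per_hr, infiltration_capacity_mm_per_hr):
    """Accumulate the hourly excess of rainfall over infiltration."""
    ponded = 0.0
    for rate in rain_rates_mm_per_hr:
        excess = rate - infiltration_capacity_mm_per_hr
        ponded += max(excess, 0.0)  # only positive excess accumulates
    return ponded

# A storm delivering 20 mm/h for three hours onto nearly saturated ground
# (capacity 5 mm/h) leaves 45 mm of standing water.
print(ponded_depth_mm([20, 20, 20], 5))  # -> 45.0
```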
River flooding
Floods occur in all types of river and stream channels, from the smallest ephemeral streams in humid zones to normally-dry channels in arid climates to the world's largest rivers. When overland flow occurs on tilled fields, it can result in a muddy flood where sediments are picked up by run off and carried as suspended matter or bed load. Localized flooding may be caused or exacerbated by drainage obstructions such as landslides, ice, debris, or beaver dams.
Slow-rising floods most commonly occur in large rivers with large catchment areas. The increase in flow may be the result of sustained rainfall, rapid snow melt, monsoons, or tropical cyclones. However, large rivers may have rapid flooding events in areas with dry climates, since they may have large basins but small river channels, and rainfall can be very intense in smaller areas of those basins.
In extremely flat areas, such as the Red River Valley of the North in Minnesota, North Dakota, and Manitoba, a type of hybrid river/areal flooding can occur, known locally as "overland flooding". This is different from "overland flow", defined as "surface runoff". The Red River Valley is a former glacial lakebed, created by Lake Agassiz, and the river course drops at an average slope of only about 5 inches per mile (8.2 cm per kilometer). In this very large area, spring snowmelt happens at different rates in different places, and if winter snowfall was heavy, a fast snowmelt can push water out of the banks of a tributary river so that it moves overland, to a point further downstream in the river or completely to another streambed. Overland flooding can be devastating because it is unpredictable, it can occur very suddenly with surprising speed, and in such flat land it can run for miles. It is these qualities that set it apart from simple "overland flow".
Rapid flooding events, including flash floods, more often occur on smaller rivers, rivers with steep valleys, rivers that flow for much of their length over impermeable terrain, or normally-dry channels. The cause may be localized convective precipitation (intense thunderstorms) or sudden release from an upstream impoundment created behind a dam, landslide, or glacier. In one instance, a flash flood killed eight people enjoying the water on a Sunday afternoon at a popular waterfall in a narrow canyon. Without any observed rainfall, the flow rate increased dramatically in just one minute. Two larger floods occurred at the same site within a week, but no one was at the waterfall on those days. The deadly flood resulted from a thunderstorm over part of the drainage basin, where steep, bare rock slopes are common and the thin soil was already saturated.
Flash floods are the most common flood type in normally-dry channels in arid zones, known as arroyos in the southwest United States and many other names elsewhere. In that setting, the first flood water to arrive is depleted as it wets the sandy stream bed. The leading edge of the flood thus advances more slowly than later and higher flows. As a result, the rising limb of the hydrograph becomes ever quicker as the flood moves downstream, until the flow rate is so great that the depletion by wetting soil becomes insignificant.
Coastal flooding
Coastal areas may be flooded by storm surges combining with high tides and large wave events at sea, resulting in waves over-topping flood defenses or in severe cases by tsunami or tropical cyclones. A storm surge, from either a tropical cyclone or an extratropical cyclone, falls within this category. A storm surge is "an additional rise of water generated by a storm, over and above the predicted astronomical tides". Due to the effects of climate change (e.g. sea level rise and an increase in extreme weather events) and an increase in the population living in coastal areas, the damage caused by coastal flood events has intensified and more people are being affected.
Flooding in estuaries is commonly caused by a combination of storm surges caused by winds and low barometric pressure and large waves meeting high upstream river flows.
Urban flooding

Urban flooding is the inundation of land or property in a built environment, particularly in densely populated areas, caused by rainfall overwhelming the capacity of drainage systems.
Intentional floods
The intentional flooding of land that would otherwise remain dry may take place for agricultural, military or river-management purposes. This is a form of hydraulic engineering. Agricultural flooding may occur in preparing paddy fields for the growing of semi-aquatic rice in many countries.
Flooding for river management may occur in the form of diverting flood waters in a river at flood stage upstream from areas that are considered more valuable than the areas that are sacrificed in this way. This may be done ad hoc, or permanently, as in the so-called overlaten (literally "let-overs"), an intentionally lowered segment in Dutch riparian levees, like the Beerse Overlaat in the left levee of the Meuse between the villages of Gassel and Linden, North Brabant.
Military inundation creates an obstacle in the field that is intended to impede the movement of the enemy. This may be done for both offensive and defensive purposes. Furthermore, in so far as the methods used are a form of hydraulic engineering, it may be useful to differentiate between controlled and uncontrolled inundations. Examples of controlled inundations include those in the Netherlands under the Dutch Republic and its successor states in that area, exemplified by the two Hollandic Water Lines, the Stelling van Amsterdam, the Frisian Water Line, the IJssel Line, the Peel-Raam Line, and the Grebbe Line.
To count as controlled, a military inundation has to take the interests of the civilian population into account, by allowing them a timely evacuation, by making the inundation reversible, and by making an attempt to minimize the adverse ecological impact of the inundation. That impact may also be adverse in a hydrogeological sense if the inundation lasts a long time.
Examples of uncontrolled inundations are the second Siege of Leiden during the first part of the Eighty Years' War, the flooding of the Yser plain during the First World War, and the Inundation of Walcheren and the Inundation of the Wieringermeer during the Second World War.
Causes
Floods are caused by many factors, alone or in combination: generally prolonged heavy rainfall (locally concentrated or throughout a catchment area), highly accelerated snowmelt, severe winds over water, unusually high tides, tsunamis, or the failure of dams, levees, retention ponds, or other structures that retained the water. Flooding can be exacerbated by increased amounts of impervious surface or by other natural hazards such as wildfires, which reduce the supply of vegetation that can absorb rainfall.
During times of rain, some of the water is retained in ponds or soil, some is absorbed by grass and vegetation, some evaporates, and the rest travels over the land as surface runoff. Floods occur when ponds, lakes, riverbeds, soil, and vegetation cannot absorb all the water.
This has been exacerbated by human activities such as draining wetlands that naturally store large amounts of water and building paved surfaces that do not absorb any water. Water then runs off the land in quantities that cannot be carried within stream channels or retained in natural ponds, lakes, and human-made reservoirs. About 30 percent of all precipitation becomes runoff and that amount might be increased by water from melting snow.
Upslope factors
River flooding is often caused by heavy rain, sometimes increased by melting snow. A flood that rises rapidly, with little or no warning, is called a flash flood. Flash floods usually result from intense rainfall over a relatively small area, or if the area was already saturated from previous precipitation.
The amount, location, and timing of water reaching a drainage channel from natural precipitation and controlled or uncontrolled reservoir releases determines the flow at downstream locations. Some precipitation evaporates, some slowly percolates through soil, some may be temporarily sequestered as snow or ice, and some may produce rapid runoff from surfaces including rock, pavement, roofs, and saturated or frozen ground. The fraction of incident precipitation promptly reaching a drainage channel has been observed from nil for light rain on dry, level ground to as high as 170 percent for warm rain on accumulated snow.
Most precipitation records are based on a measured depth of water received within a fixed time interval. Frequency of a precipitation threshold of interest may be determined from the number of measurements exceeding that threshold value within the total time period for which observations are available. Individual data points are converted to intensity by dividing each measured depth by the period of time between observations. This intensity will be less than the actual peak intensity if the duration of the rainfall event was less than the fixed time interval for which measurements are reported. Convective precipitation events (thunderstorms) tend to produce shorter duration storm events than orographic precipitation. Duration, intensity, and frequency of rainfall events are important to flood prediction. Short duration precipitation is more significant to flooding within small drainage basins.
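As a small illustration of the conversions described above, the sketch below turns hypothetical gauge depths recorded at a fixed interval into intensities and counts how often a threshold of interest is exceeded; the depths, interval, and threshold are invented for illustration.

```python
# Convert measured rainfall depths per fixed interval into intensities,
# then count exceedances of a threshold over the observation period.

depths_mm = [2.0, 15.0, 0.5, 30.0, 8.0]  # hypothetical depths per interval
interval_hr = 1.0                         # fixed reporting interval

intensities = [d / interval_hr for d in depths_mm]  # mm/h

threshold_mm_per_hr = 10.0
exceedances = sum(1 for i in intensities if i > threshold_mm_per_hr)
record_hr = len(depths_mm) * interval_hr
print(f"{exceedances} exceedances in {record_hr} h of record")  # 2 in 5.0 h

# If a burst lasted less than the reporting interval, the true peak
# intensity was higher than this interval-averaged value, as noted above.
```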
The most important upslope factor in determining flood magnitude is the land area of the watershed upstream of the area of interest. Rainfall intensity is the second most important factor for smaller watersheds, whereas the main channel slope is the second most important factor for larger watersheds. Channel slope and rainfall intensity become the third most important factors for small and large watersheds, respectively.
Time of Concentration is the time required for runoff from the most distant point of the upstream drainage area to reach the point of the drainage channel controlling flooding of the area of interest. The time of concentration defines the critical duration of peak rainfall for the area of interest. The critical duration of intense rainfall might be only a few minutes for roof and parking lot drainage structures, while cumulative rainfall over several days would be critical for river basins.
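The text does not name a particular formula, but one common empirical estimate of time of concentration for small rural watersheds is the Kirpich (1940) equation, sketched below; the flow-path length and slope are hypothetical values chosen for illustration.

```python
# Kirpich (1940) empirical estimate of time of concentration:
# Tc (minutes) = 0.0078 * L**0.77 * S**-0.385,
# with L the longest flow path in feet and S its slope (ft/ft).

def kirpich_tc_minutes(flow_length_ft: float, slope_ft_per_ft: float) -> float:
    return 0.0078 * flow_length_ft**0.77 * slope_ft_per_ft**-0.385

# A 5,000 ft flow path on a 2% slope concentrates in about 25 minutes,
# so a roughly 25-minute burst of rain is critical for this small basin.
print(round(kirpich_tc_minutes(5000, 0.02), 1))
```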
Downslope factors
Water flowing downhill ultimately encounters downstream conditions that slow its movement. In coastal lowlands, the final limitation is often the ocean or natural lakes formed behind coastal bars. In low-lying land, elevation changes such as tidal fluctuations are significant determinants of coastal and estuarine flooding. Less predictable events like tsunamis and storm surges may also cause elevation changes in large bodies of water. The elevation of flowing water is controlled by the geometry of the flow channel and, especially, by the depth of the channel, the speed of flow, and the amount of sediment in it. Flow channel restrictions like bridges and canyons tend to control water elevation above the restriction. The actual control point for any given reach of the drainage may change with changing water elevation, so a closer point may control for lower water levels until a more distant point controls at higher water levels.
Effective flood channel geometry may be changed by growth of vegetation, accumulation of ice or debris, or construction of bridges, buildings, or levees within the flood channel.
Periodic floods occur on many rivers, forming a surrounding region known as the flood plain. Even when rainfall is relatively light, the shorelines of lakes and bays can be flooded by severe winds—such as during hurricanes—that blow water into the shore areas.
Climate change

Climate change increases the frequency and intensity of extreme precipitation and raises sea levels, intensifying both river and coastal flooding.
Coincidence
Extreme flood events often result from coincidence, such as unusually intense, warm rainfall melting a heavy snowpack, producing channel obstructions from floating ice, and releasing small impoundments like beaver dams. Coincident events may cause extensive flooding to be more frequent than anticipated from simplistic statistical prediction models that consider only precipitation runoff flowing within unobstructed drainage channels. Debris modification of channel geometry is common when heavy flows move uprooted woody vegetation and flood-damaged structures and vehicles, including boats and railway equipment. Field measurements during the 2010–11 Queensland floods showed that any criterion based solely on flow velocity, water depth, or specific momentum cannot account for the hazards caused by velocity and water depth fluctuations. Such criteria further ignore the risks associated with large debris entrained by the flow.
Negative impacts
Floods can have enormous destructive power. Flowing water can demolish all kinds of buildings and objects, such as bridges, structures, houses, trees, and cars. Economic, social, and environmental damage is common in flooding events, and the impacts on these areas can be catastrophic.
Impacts on infrastructure and societies
There have been numerous flood incidents around the world which have caused devastating damage to infrastructure, the natural environment and human life.
Floods can have devastating impacts to human societies. Flooding events worldwide are increasing in frequency and severity, leading to increasing costs to societies.
Catastrophic riverine flooding can result from major infrastructure failures, often the collapse of a dam. It can also be caused by drainage channel modification from a landslide, earthquake or volcanic eruption. Examples include outburst floods and lahars. Tsunamis can cause catastrophic coastal flooding, most commonly resulting from undersea earthquakes.
Economic impacts
The primary effects of flooding include loss of life and damage to buildings and other structures, including bridges, sewerage systems, roadways, and canals. The economic impacts caused by flooding can be severe.
Every year flooding causes countries billions of dollars worth of damage that threatens the livelihoods of individuals. As a result, there are also significant socio-economic threats to vulnerable populations around the world from flooding. For example, in Bangladesh in 2007, a flood was responsible for the destruction of more than one million houses. And yearly in the United States, floods cause over $7 billion in damage.
Flood waters typically inundate farm land, making the land unworkable and preventing crops from being planted or harvested, which can lead to shortages of food both for humans and farm animals. Entire harvests for a country can be lost in extreme flood circumstances. Some tree species may not survive prolonged flooding of their root systems.
Flooding in areas where people live also has significant economic implications for affected neighborhoods. In the United States, industry experts estimate that wet basements can lower property values by 10–25 percent and are cited among the top reasons for not purchasing a home. According to the U.S. Federal Emergency Management Agency (FEMA), almost 40 percent of small businesses never reopen their doors following a flooding disaster. In the United States, insurance is available against flood damage to both homes and businesses.
Economic hardship due to a temporary decline in tourism, rebuilding costs, or food shortages leading to price increases is a common after-effect of severe flooding. Severe flooding may also cause psychological damage to those affected, in particular where deaths, serious injuries, and loss of property occur.
Health impacts
Fatalities connected directly to floods are usually caused by drowning; flood waters can be very deep and carry strong currents. Deaths also result from dehydration, heat stroke, heart attack, and other illnesses when needed medical supplies cannot be delivered.
Injuries can lead to an excessive amount of morbidity when a flood occurs. Injuries are not limited to those who were directly in the flood; rescue teams and even people delivering supplies can sustain injuries. Injuries can occur at any point in the flood process: before, during, and after. During floods, accidents occur with falling debris or the many fast-moving objects in the water. After the flood, rescue attempts are a major source of injuries.
Communicable diseases increase because many pathogens and bacteria are transported by flood water. There are many waterborne diseases, such as cholera, hepatitis A, hepatitis E, and diarrheal diseases. Gastrointestinal and diarrheal diseases are very common because clean water supplies are typically contaminated when flooding occurs. Hepatitis A and E are common because of poor sanitation in the water and in living quarters, depending on where the flood is and how prepared the community is for a flood.
When floods hit, people lose nearly all their crops, livestock, and food reserves and face starvation.
Floods also frequently damage power transmission and sometimes power generation, which then has knock-on effects caused by the loss of power. This includes loss of drinking water treatment and water supply, which may result in loss of drinking water or severe water contamination. It may also cause the loss of sewage disposal facilities. Lack of clean water combined with human sewage in the flood waters raises the risk of waterborne diseases, which can include typhoid, giardia, cryptosporidium, cholera and many other diseases depending upon the location of the flood.
Damage to roads and transport infrastructure may make it difficult to mobilize aid to those affected or to provide emergency health treatment.
Flooding can cause chronically wet houses, leading to the growth of indoor mold and resulting in adverse health effects, particularly respiratory symptoms. Respiratory diseases are common after the disaster has occurred, depending on the amount of water damage and mold that grows after an incident. Research suggests that there will be an increase of 30–50% in adverse respiratory health outcomes caused by dampness and mold exposure for those living in coastal and wetland areas. Fungal contamination in homes is associated with increased allergic rhinitis and asthma. Vector-borne diseases also increase, due to the standing water left after floods settle; these include malaria, dengue, West Nile virus, and yellow fever. Floods also have a huge impact on victims' psychosocial well-being: people suffer a wide variety of losses and stress, and depression caused by the flood and the tragedies that accompany it is among the most treated long-term health problems.
Loss of life
The deadliest floods worldwide have each caused 100,000 or more deaths.
Positive impacts (benefits)
Floods (in particular more frequent or smaller floods) can also bring many benefits, such as recharging ground water, making soil more fertile, and increasing nutrients in some soils. Flood waters provide much-needed water resources in arid and semi-arid regions, where precipitation can be very unevenly distributed throughout the year, and kill pests in farmland. Freshwater floods particularly play an important role in maintaining ecosystems in river corridors and are a key factor in maintaining floodplain biodiversity. Flooding can spread nutrients to lakes and rivers, which can lead to increased biomass and improved fisheries for a few years.
For some fish species, an inundated floodplain may form a highly suitable location for spawning with few predators and enhanced levels of nutrients or food. Fish, such as the weather fish, make use of floods in order to reach new habitats. Bird populations may also profit from the boost in food production caused by flooding.
Flooding can bring benefits, such as making the soil more fertile and providing it with more nutrients. For this reason, periodic flooding was essential to the well-being of ancient communities along the Tigris-Euphrates Rivers, the Nile River, the Indus River, the Ganges and the Yellow River among others.
The viability of hydropower, a renewable source of energy, is also higher in flood prone regions.
Protections against floods and associated hazards
Flood management
Flood management examples
In many countries around the world, waterways prone to floods are often carefully managed. Defenses such as detention basins, levees, bunds, reservoirs, and weirs are used to prevent waterways from overflowing their banks. When these defenses fail, emergency measures such as sandbags or portable inflatable tubes are often used to try to stem flooding. Coastal flooding has been addressed in portions of Europe and the Americas with coastal defenses, such as sea walls, beach nourishment, and barrier islands.
In the riparian zone near rivers and streams, erosion control measures can be taken to try to slow down or reverse the natural forces that cause many waterways to meander over long periods of time. Flood controls, such as dams, can be built and maintained over time to try to reduce the occurrence and severity of floods as well. In the United States, the U.S. Army Corps of Engineers maintains a network of such flood control dams.
In areas prone to urban flooding, one solution is the repair and expansion of human-made sewer systems and stormwater infrastructure. Another strategy is to reduce impervious surfaces in streets, parking lots and buildings through natural drainage channels, porous paving, and wetlands (collectively known as green infrastructure or sustainable urban drainage systems (SUDS)). Areas identified as flood-prone can be converted into parks and playgrounds that can tolerate occasional flooding. Ordinances can be adopted to require developers to retain stormwater on site and require buildings to be elevated, protected by floodwalls and levees, or designed to withstand temporary inundation. Property owners can also invest in solutions themselves, such as re-landscaping their property to take the flow of water away from their building and installing rain barrels, sump pumps, and check valves.
Flood safety planning
In the United States, the National Weather Service gives out the advice "Turn Around, Don't Drown" for floods; that is, it recommends that people get out of the area of a flood, rather than trying to cross it. At the most basic level, the best defense against floods is to seek higher ground for high-value uses while balancing the foreseeable risks with the benefits of occupying flood hazard zones. Critical community-safety facilities, such as hospitals, emergency-operations centers, and police, fire, and rescue services, should be built in areas least at risk of flooding. Structures, such as bridges, that must unavoidably be in flood hazard areas should be designed to withstand flooding. Areas most at risk for flooding could be put to valuable uses that could be abandoned temporarily as people retreat to safer areas when a flood is imminent.
Planning for flood safety involves many aspects of analysis and engineering, including:
observation of previous and present flood heights and inundated areas,
statistical, hydrologic, and hydraulic model analyses,
mapping inundated areas and flood heights for future flood scenarios,
long-term land use planning and regulation,
engineering design and construction of structures to control or withstand flooding,
intermediate-term monitoring, forecasting, and emergency-response planning, and
short-term monitoring, warning, and response operations.
Each topic presents distinct yet related questions with varying scope and scale in time, space, and the people involved. Attempts to understand and manage the mechanisms at work in floodplains have been made for at least six millennia.
In the United States, the Association of State Floodplain Managers works to promote education, policies, and activities that mitigate current and future losses, costs, and human suffering caused by flooding and to protect the natural and beneficial functions of floodplains – all without causing adverse impacts. A portfolio of best practice examples for disaster mitigation in the United States is available from the Federal Emergency Management Agency.
Flood clean-up safety
Clean-up activities following floods often pose hazards to workers and volunteers involved in the effort. Potential dangers include electrical hazards, carbon monoxide exposure, musculoskeletal hazards, heat or cold stress, motor vehicle-related dangers, fire, drowning, and exposure to hazardous materials. Because flooded disaster sites are unstable, clean-up workers might encounter sharp jagged debris, biological hazards in the flood water, exposed electrical lines, blood or other body fluids, and animal and human remains. In planning for and reacting to flood disasters, managers provide workers with hard hats, goggles, heavy work gloves, life jackets, and watertight boots with steel toes and insoles.
Flood predictions
Mathematical models and computer tools
A series of annual maximum flow rates in a stream reach can be analyzed statistically to estimate the 100-year flood and floods of other recurrence intervals there. Similar estimates from many sites in a hydrologically similar region can be related to measurable characteristics of each drainage basin to allow indirect estimation of flood recurrence intervals for stream reaches without sufficient data for direct analysis.
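The statistical analysis described above can be sketched as follows. One common approach (agencies often use others, such as Log-Pearson Type III in U.S. practice) is to fit a Gumbel extreme-value distribution to the annual maxima by the method of moments; the flow record below is invented for illustration.

```python
import math
import statistics

def gumbel_flood_quantile(annual_max_flows, return_period_yr):
    """Estimate the T-year flood by fitting a Gumbel (EV1) distribution
    to annual maximum flows using the method of moments."""
    mean = statistics.mean(annual_max_flows)
    std = statistics.stdev(annual_max_flows)
    beta = std * math.sqrt(6) / math.pi      # scale parameter
    mu = mean - 0.5772 * beta                # location (Euler-Mascheroni constant)
    p_non_exceed = 1 - 1 / return_period_yr  # e.g. 0.99 for the 100-year flood
    return mu - beta * math.log(-math.log(p_non_exceed))

# Hypothetical record of annual peak flows in m^3/s:
record = [120, 95, 210, 160, 340, 130, 180, 260, 110, 150]
print(round(gumbel_flood_quantile(record, 100)))  # 100-year flood estimate
```

In practice, a ten-year record like this one gives a very uncertain 100-year estimate, which is why the regional relationships mentioned above are used to supplement short records.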
Physical process models of channel reaches are generally well understood and will calculate the depth and area of inundation for given channel conditions and a specified flow rate, such as for use in floodplain mapping and flood insurance. Conversely, given the observed inundation area of a recent flood and the channel conditions, a model can calculate the flow rate. Applied to various potential channel configurations and flow rates, a reach model can contribute to selecting an optimum design for a modified channel. Various reach models are available as of 2015, either 1D models (flood levels measured in the channel) or 2D models (variable flood depths measured across the extent of a floodplain). HEC-RAS, the Hydraulic Engineering Center model, is among the most popular software, if only because it is available free of charge. Other models such as TUFLOW combine 1D and 2D components to derive flood depths across both river channels and the entire floodplain.
Physical process models of complete drainage basins are even more complex. Although many processes are well understood at a point or for a small area, others are poorly understood at all scales, and process interactions under normal or extreme climatic conditions may be unknown. Basin models typically combine land-surface process components (to estimate how much rainfall or snowmelt reaches a channel) with a series of reach models. For example, a basin model can calculate the runoff hydrograph that might result from a 100-year storm, although the recurrence interval of a storm is rarely equal to that of the associated flood. Basin models are commonly used in flood forecasting and warning, as well as in analysis of the effects of land use change and climate change.
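The land-surface component of such a basin model can take many forms. One standard example (not named in the text) is the SCS curve-number method, which estimates how much of a storm's rainfall becomes direct runoff; the rainfall depth and curve number below are hypothetical.

```python
def scs_runoff_mm(rain_mm: float, curve_number: float) -> float:
    """SCS curve-number estimate of direct runoff from a storm.
    Higher curve numbers mean less infiltration and more runoff."""
    s = 25400 / curve_number - 254  # potential maximum retention (mm)
    ia = 0.2 * s                    # initial abstraction before runoff starts
    if rain_mm <= ia:
        return 0.0
    return (rain_mm - ia) ** 2 / (rain_mm - ia + s)

# 80 mm of rain on a fairly impervious basin (CN = 85) yields ~44 mm of
# runoff, which a reach model would then route downstream as a hydrograph.
print(round(scs_runoff_mm(80.0, 85.0), 1))
```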
In the United States, an integrated approach to real-time hydrologic computer modelling uses observed data from the U.S. Geological Survey (USGS), various cooperative observing networks, various automated weather sensors, the NOAA National Operational Hydrologic Remote Sensing Center (NOHRSC), and various hydroelectric companies, among others, combined with quantitative precipitation forecasts (QPF) of expected rainfall and/or snow melt to generate daily or as-needed hydrologic forecasts. The National Weather Service (NWS) also cooperates with Environment Canada on hydrologic forecasts that affect both the US and Canada, such as in the area of the Saint Lawrence Seaway.
The Global Flood Monitoring System (GFMS), a computer tool that maps flood conditions worldwide, is available online. Users anywhere in the world can use GFMS to determine when floods may occur in their area. GFMS uses precipitation data from NASA's Earth-observing satellites and the Global Precipitation Measurement (GPM) satellite. Rainfall data from GPM is combined with a land surface model that incorporates vegetation cover, soil type, and terrain to determine how much water is soaking into the ground and how much is running off into streams.
Users can view statistics for rainfall, streamflow, water depth, and flooding every 3 hours, at each 12-kilometer gridpoint on a global map. Forecasts for these parameters extend 5 days into the future. Users can zoom in to see inundation maps (areas estimated to be covered with water) at 1-kilometer resolution.
Flood forecasts and warnings
Anticipating floods before they occur allows for precautions to be taken and people to be warned so that they can be prepared in advance for flooding conditions. For example, farmers can remove animals from low-lying areas and utility services can put in place emergency provisions to re-route services if needed. Emergency services can also make provisions to have enough resources available ahead of time to respond to emergencies as they occur. People can evacuate areas to be flooded.
To make the most accurate flood forecasts for waterways, it is best to have a long time series of historical data that relates stream flows to measured past rainfall events. Coupling this historical information with real-time knowledge about volumetric capacity in catchment areas, such as spare capacity in reservoirs, groundwater levels, and the degree of saturation of area aquifers, is also needed.
Radar estimates of rainfall and general weather forecasting techniques are also important components of good flood forecasting. In areas where good-quality data are available, the intensity and height of a flood can be predicted with fairly good accuracy and plenty of lead time. The output of a flood forecast is typically a maximum expected water level and the likely time of its arrival at key locations along a waterway, and it may also allow for the computation of the likely statistical return period of a flood. In many developed countries, urban areas at risk of flooding are protected against a 100-year flood – that is, a flood that has a probability of around 63% of occurring in any 100-year period of time.
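The 63% figure follows from treating each year as an independent trial with a 1-in-100 chance of flooding; a one-line check in Python, for illustration:

```python
# Probability of at least one "100-year flood" in 100 independent years.
p = 1 - (1 - 1 / 100) ** 100
print(f"{p:.3f}")  # 0.634, i.e. roughly 63%
```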
According to the U.S. National Weather Service (NWS) Northeast River Forecast Center (RFC) in Taunton, Massachusetts, a rule of thumb for flood forecasting in urban areas is that it takes at least 1 inch (25 mm) of rainfall in around an hour's time to start significant ponding of water on impermeable surfaces. Many NWS RFCs routinely issue Flash Flood Guidance and Headwater Guidance, which indicate the general amount of rainfall that would need to fall in a short period of time in order to cause flash flooding or flooding on larger water basins.
Flood risk assessment
Flood risks can be defined as the risk that floods pose to individuals, property and the natural landscape based on specific hazards and vulnerability. The extent of flood risks can impact the types of mitigation strategies required and implemented.
A large amount of the world's population lives in close proximity to major coastlines, while many major cities and agricultural areas are located near floodplains. There is significant risk for increased coastal and fluvial flooding due to changing climatic conditions.
Examples by country or region
Worldwide: List of floods
Africa: List of floods#Africa
Asia: List of floods#Asia
Europe: List of floods in Europe
North Sea: Storm tides of the North Sea
The Netherlands: Floods in the Netherlands, Flood control in the Netherlands
Oceania: List of floods#Oceania
Australia: Floods in Australia
United States: Lists of floods in the United States
Society and culture
Etymology
The word "flood" comes from the Old English , a word common to Germanic languages (compare German , Dutch from the same root as is seen in flow, float; also compare with Latin , ), meaning "a flowing of water, tide, an overflowing of land by water, a deluge, Noah's Flood; mass of water, river, sea, wave". The Old English word comes from the Proto-Germanic floduz (Old Frisian , Old Norse , Middle Dutch , Dutch , German , and Gothic derives from floduz).
| Physical sciences | Earth science | null |
50484 | https://en.wikipedia.org/wiki/Goose | Goose | A goose (pl.: geese) is a bird of any of several waterfowl species in the family Anatidae. This group comprises the genera Anser (grey geese and white geese) and Branta (black geese). Some members of the Tadorninae subfamily (e.g., Egyptian goose, Orinoco goose) are commonly called geese, but are not considered "true geese" taxonomically. More distantly related members of the family Anatidae are swans, most of which are larger than true geese, and ducks, which are smaller.
The term "goose" may refer to such bird of either sex, but when paired with "gander", "goose" refers specifically to a female one ("gander" referring to a male). Young birds before fledging are called goslings. The collective noun for a group of geese on the ground is a gaggle; when in flight, they are called a skein, a team, or a wedge; when flying close together, they are called a plump.
Etymology
The word "goose" is a direct descendant of Proto-Indo-European *ǵʰh₂éns. In Germanic languages, the root gave Old English gōs with the plural gēs and gandra (becoming Modern English goose, geese, gander, respectively), West Frisian goes, gies and guoske, , New High German Gans, Gänse, and Ganter, and Old Norse gās and gæslingr, whence English gosling.
This term also gave Irish gé (goose, from Old Irish géiss), Hindi कलहंस, Spanish ánsar and ganso, Ancient Greek χήν (khēn), Albanian gatë (swans), Avestan zāō, Ukrainian гуска / гусак (huska / husak), Russian гусыня / гусь (gusyna / gus), and Persian غاز (ghāz).
True geese and their relatives
The two living genera of true geese are: Anser, grey geese and white geese, such as the greylag goose and snow goose, and Branta, black geese, such as the Canada goose.
Two genera of geese are only tentatively placed in the Anserinae; they may belong to the shelducks or form a subfamily on their own: Cereopsis, the Cape Barren goose, and Cnemiornis, the prehistoric New Zealand goose. Either these or, more probably, the goose-like coscoroba swan is the closest living relative of the true geese.
Fossils of true geese are hard to assign to genus; all that can be said is that their fossil record, particularly in North America, is dense and comprehensively documents many different species of true geese that have been around since about 10 million years ago in the Miocene. The aptly named Anser atavus (meaning "progenitor goose") from some 12 million years ago had even more plesiomorphies in common with swans. In addition, some goose-like birds are known from subfossil remains found on the Hawaiian Islands.
Geese are monogamous, living in permanent pairs throughout the year; however, unlike most other permanently monogamous animals, they are territorial only during the short nesting season. Paired geese are more dominant and feed more, two factors that result in more young.
Fossil record
Goose fossils have been found ranging from 10 to 12 million years ago (Middle Miocene). Garganornis ballmanni, from the Late Miocene (~6–9 Ma) of the Gargano region of central Italy, stood one and a half meters tall and weighed about 22 kilograms. The evidence suggests the bird was flightless, unlike modern geese.
Migratory patterns
Most goose species are migratory, though populations of Canada geese living near human developments may remain in a locality year-round. These 'resident' geese, found primarily in the eastern United States, may migrate only short distances, or not at all, if they have adequate food supply and access to open water.
Navigation
Migratory geese may use several environmental cues in timing the beginning of their migration, including temperature, predation threat, and food availability. Like all migratory birds, geese exhibit an ability to navigate using an internal compass, using a combination of innate and learned behaviors. The preferred direction of migration is heritable, and birds appear to orient themselves using Earth's magnetic field. Migrations occur over the course of several weeks, and up to 85% of migration time is spent at perennial stopover sites, where individuals rest and build up fat stores for further travel.
Formation
Geese, like other birds, fly in a V formation. This formation helps to conserve energy in flight, and aids in communication and monitoring of flock mates. Using great white pelicans as a model species, researchers showed that flying in a V formation improves the aerodynamics of trailing birds, which therefore need fewer wing flaps to stay aloft and show lower heart rates. Leading geese switch positions on longer flights to allow multiple individuals to gain benefits from the less energy-intensive trailing positions; in family groups, parental birds almost always lead.
Other birds called "geese"
Some mainly Southern Hemisphere birds are called "geese", most of which belong to the shelduck subfamily Tadorninae. These are:
The Orinoco goose (Neochen jubata)
The Egyptian goose (Alopochen aegyptiaca)
The South American sheldgeese in the genus Chloephaga
The prehistoric Malagasy sheldgoose (Centrornis majori)
Others:
The spur-winged goose (Plectropterus gambensis) is most closely related to the shelducks, but distinct enough to warrant its own subfamily, the Plectropterinae.
The blue-winged goose (Cyanochen cyanopterus) and the Cape Barren goose (Cereopsis novaehollandiae) have disputed affinities. They belong to separate ancient lineages that may ally either to the Tadorninae, the Anserinae, or closer to the dabbling ducks (Anatinae).
The three species of small waterfowl in the genus Nettapus named "pygmy geese"; they seem to represent another ancient lineage, with possible affinities to the Cape Barren goose or the spur-winged goose.
The maned goose, also known as the maned duck or Australian wood duck (Chenonetta jubata)
A genus of prehistorically extinct seaducks, Chendytes, is sometimes called the "diving-geese" due to their large size.
The magpie goose (Anseranas semipalmata) is the only living species in the family Anseranatidae.
The northern gannet (Morus bassanus), a seabird, is also known as the "solan goose", although it is unrelated to the true geese, or any other Anseriformes for that matter.
In popular culture
Sayings and phrases that reference geese
To "have a gander" is to look at something.
"What's good sauce for the goose is good sauce for the gander" or "What's good for the goose is good for the gander" means that what is an appropriate treatment for one person is equally appropriate for someone else. This statement supporting equality is frequently used in the context of sex and gender, because a goose is female and a gander is male.
Saying that someone's "goose is cooked" means that they are about to be punished.
The common phrase "silly goose" is used when referring to someone who is acting particularly silly.
"Killing the goose that lays the golden eggs", derived from Aesop's Fables, is a saying referring to a greed-motivated action that destroys or otherwise renders useless a favourable situation that would have provided benefits over time.
"A wild goose chase" is a useless, futile waste of time and effort. It is derived from a 16th-century horse racing event.
A raised, rounded area of swelling (typically a hematoma) caused by an impact injury is sometimes metaphorically called a "goose egg", especially if it occurs on the head.
Geese as characters in cultural works
Mother Goose is a fictitious children's storybook author associated with several collections of fairy tales and nursery rhymes translated into English during the 18th century.
Gänsewein (German, "goose wine") is a playful term for plain drinking water, first documented in the Podagrammisch Trostbüchlein by Johann Fischart (1577).
The popular indie game Untitled Goose Game, released in 2019, chronicles the activities of an ornery goose in an English village.
In the late-18th-century poem The Goose and the Common, geese serve to illustrate the social and economic issues caused by the enclosure of common land.
"Gray Goose Laws" in Iceland
The oldest collection of Medieval Icelandic laws is known as "Grágás"; i.e., the Gray Goose Laws. Various etymologies were offered for that name:
The fact that the laws were written with a goose quill;
The fact that the laws were bound in goose skin;
Because of the age of the laws — it was then believed that geese lived longer than other birds.
| Biology and health sciences | Anseriformes | null |
50505 | https://en.wikipedia.org/wiki/Mule | Mule | The mule is a domestic equine hybrid between a donkey and a horse. It is the offspring of a male donkey (a jack) and a female horse (a mare). The horse and the donkey are different species, with different numbers of chromosomes; of the two possible first-generation hybrids between them, the mule is easier to obtain and more common than the hinny, which is the offspring of a male horse (a stallion) and a female donkey (a jenny).
Mules vary widely in size, and may be of any color seen in horses or donkeys. They are more patient, hardier and longer-lived than horses, and are perceived as less obstinate and more intelligent than donkeys.
Terminology
A female mule that has oestrus cycles, and so could, in theory, carry a fetus, is called a "molly" or "Molly mule", although the term is sometimes used to refer to female mules in general. A male mule is properly called a "horse mule", although it is often called a "john mule", which is the correct term for a gelded mule. A young male mule is called a "mule colt", and a young female is called a "mule filly".
History
Breeding of mules became possible only when the range of the domestic horse, which originated in Central Asia in about 3500 BC, extended into that of the domestic ass, which originated in north-eastern Africa. This overlap probably occurred in Anatolia and Mesopotamia in Western Asia, and mules were bred there before 1000 BC.
A painting in the Tomb of Nebamun at Thebes, dating from approximately 1350 BC, shows a chariot drawn by a pair of animals which have been variously identified as onagers, as mules or as hinnies. Mules were present in Israel and Judah in the time of King David. There are many representations of them in Mesopotamian works of art dating from the first millennium BC. Among the bas-reliefs depicting the Lion Hunt of Ashurbanipal from the North Palace of Nineveh is a clear and detailed image of two mules loaded with nets for hunting.
Homer noted the arrival of mules in Asia Minor in the Iliad, around 800 BC.
Christopher Columbus allegedly brought mules to the New World.
George Washington bred mules at his Mount Vernon home. At the time, they were not common in the United States, but Washington understood their value, as they were "more docile than donkeys and cheap to maintain." In the nineteenth century, they were used in various capacities as draught animals – on farms, especially where clay made the soil slippery and sticky; pulling canal boats; and famously for pulling, often in teams of 20 or more animals, wagonloads of borax out of Death Valley, California from 1883 to 1889. The wagons were among the largest ever pulled by draught animals, designed to carry 10 short tons (9 metric tons) of borax ore at a time.
Mules were used by armies to transport supplies, occasionally as mobile firing platforms for smaller cannons, and to pull heavier field guns with wheels over mountainous trails such as in Afghanistan during the Second Anglo-Afghan War.
In the second half of the twentieth century, widespread use of mules declined in industrialised countries. The use of mules for farming and for transportation of agricultural products largely gave way to steam-, then diesel-powered, tractors and lorries.
On 5 May 2003, Idaho Gem, a mule foal cloned by nuclear transfer of cells from foetal material, was born at the University of Idaho in Moscow, Idaho. Neither an equid nor a hybrid animal had been cloned before.
Characteristics
In general terms, in both the mule and the hinny, the foreparts and head of the animal are similar to those of the sire (father), while the hindparts and tail tend to resemble those of the dam (mother). A mule is generally larger than a hinny, with longer ears and a heavier head; the tail is usually covered with long hair like that of its mare mother. A mule has the thin limbs, small narrow hooves and short mane of the donkey, while its height, the shape of the neck and body, and the uniformity of its coat and teeth are more similar to those of the horse.
Mules vary widely in size, from small miniature mules to large and powerful draught mules.
The coat may be of any color seen in the horse or in the donkey. Mules usually display the light points commonly seen in donkeys: pale or mealy areas on the belly and the insides of the thighs, on the muzzle, and around the eyes. They often have primitive markings such as dorsal stripe, shoulder stripe or zebra stripes on the legs.
The mule exhibits hybrid vigor. Charles Darwin wrote: "The mule always appears to me a most surprising animal. That a hybrid should possess more reason, memory, obstinacy, social affection, powers of muscular endurance, and length of life, than either of its parents, seems to indicate that art has here outdone nature".
The mule inherits from the donkey the traits of intelligence, sure-footedness, toughness, endurance, disposition, and natural cautiousness. From the horse it inherits speed, conformation, and agility. Mules are reputed to exhibit a higher cognitive intelligence than their parent species, but robust scientific evidence to back up these claims is lacking. Preliminary data exist from at least two evidence-based studies, but they rely on a limited set of specialized cognitive tests and a small number of subjects. Mules are generally taller at the shoulder than donkeys and have better endurance than horses, although a lower top speed.
In the early twentieth century the mule was preferred to the horse as a pack animal – its skin is harder and less sensitive than that of a horse, and it is better able to bear heavy weights.
Fertility
A mule has 63 chromosomes, intermediate between the 64 of the horse and the 62 of the donkey. Mules are usually infertile for this reason.
Pregnancy is rare, but can occasionally occur naturally, as well as through embryo transfer. A few mare mules have produced offspring when mated with a horse or a jack. Herodotus gives an account of such an event as an ill omen of Xerxes' invasion of Greece in 480 BC: "There happened also a portent of another kind while he was still at Sardis—a mule brought forth young and gave birth to a mule" (Herodotus The Histories 7:57), and a mule's giving birth was a frequently recorded portent in antiquity, although scientific writers also doubted whether it was really possible (see e.g. Aristotle, Historia animalium, 6.24; Varro, De re rustica, 2.1.28). Between 1527 and 2002, approximately sixty such births were reported. In Morocco in early 2002 and Colorado in 2007, mare mules produced colts. Blood and hair samples from the Colorado birth verified that the mother was indeed a mule and the foal was indeed her offspring.
A 1939 article in the Journal of Heredity describes two offspring of a fertile mare mule named "Old Bec", which was owned at the time by Texas A&M University in the late 1920s. One of the foals was a female, sired by a jack. Unlike her mother, she was sterile. The other, sired by a five-gaited Saddlebred stallion, exhibited no characteristics of any donkey. That horse, a stallion, was bred to several mares, which gave birth to live foals that showed no characteristics of the donkey. In 1995, a group from the Federal University of Minas Gerais described a female mule that was pregnant for a seventh time, having previously produced, by two donkey sires, two foals with the typical 63 chromosomes of mules, and, by several horse stallions, four foals. The three of the latter available for testing each bore 64 horse-like chromosomes. These foals phenotypically resembled horses, though they bore markings absent from the sire's known lineages, and one had ears noticeably longer than those typical of her sire's breed. The elder two horse-like foals had proved fertile at the time of publication, with their progeny being typical of horses.
Use
While a few mules can carry heavier live loads, the superiority of the mule becomes apparent in its additional endurance. In general, a mule can be packed with dead weight of up to 20% of its body weight. Although it depends on the individual animal, mules trained by the Army of Pakistan are reported to be able to carry up to 72 kilograms (159 lb) and walk 26 kilometres (16 mi) without resting. The average equine in general can carry up to roughly 30% of its body weight in live weight, such as a rider.
About 3.5 million donkeys and mules are slaughtered each year for meat worldwide.
Mule trains have been part of working portions of transportation links as recently as 2005 by the World Food Programme, and are still used extensively to transport cargo in rugged, roadless regions.
The Food and Agriculture Organization of the United Nations reports that China was the top market for mules in 2003, closely followed by Mexico and many Central and South American nations.
| Biology and health sciences | Hybrids | null |
50510 | https://en.wikipedia.org/wiki/Kilometre | Kilometre | The kilometre (SI symbol: km), spelt kilometer in American and Philippine English, is a unit of length in the International System of Units (SI), equal to one thousand metres (kilo- being the SI prefix for 1,000). It is the preferred measurement unit to express distances between geographical places on land in most of the world; notable exceptions are the United States and the United Kingdom, where the statute mile is used.
Pronunciation
There are two common pronunciations for the word.
The first pronunciation follows a pattern in English whereby SI units are pronounced with the stress on the first syllable (as in kilogram, kilojoule and kilohertz) and the pronunciation of the actual base unit does not change irrespective of the prefix (as in centimetre, millimetre, nanometre and so on). It is generally preferred by the British Broadcasting Corporation (BBC), the Canadian Broadcasting Corporation (CBC), and the Australian Broadcasting Corporation (ABC).
Many other users, particularly in countries where SI (the metric system) is not widely used, use the second pronunciation with stress on the second syllable. The second pronunciation follows the stress pattern used for the names of measuring instruments (such as micrometer, barometer, thermometer, tachometer, and speedometer). The contrast is even more obvious in countries that use the American spelling of the word metre. This pronunciation is irregular because it makes the kilometre the only SI unit with the stress on the second syllable.
After Australia introduced the metric system in 1970, the first pronunciation was declared official by the government's Metric Conversion Board. However, the Australian prime minister at the time, Gough Whitlam, insisted that the second pronunciation was the correct one because of the Greek origins of the two parts of the word.
Equivalence to other units of length
1 kilometre ≡ 1,000 metres
1 kilometre ≈ 3,281 feet
1 kilometre ≈ 1,094 yards
1 kilometre ≈ 0.621 miles
1 kilometre ≈ 0.540 nautical miles
1 kilometre ≈ 6.68 × 10⁻⁹ astronomical units
1 kilometre ≈ 1.06 × 10⁻¹³ light-years
1 kilometre ≈ 3.24 × 10⁻¹⁴ parsecs
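For quick numeric work, the factors above can be encoded directly. The following is a minimal Python sketch; the dictionary name and the rounded constants are illustrative, not authoritative definitions:

```python
# Approximate conversion factors from one kilometre to other length units.
KM_TO = {
    "metres": 1_000.0,                        # exact by definition
    "feet": 3_280.84,
    "yards": 1_093.61,
    "miles": 0.621371,
    "nautical miles": 0.539957,
    "astronomical units": 1 / 1.495978707e8,  # 1 au = 149,597,870.7 km
    "light-years": 1 / 9.4607e12,             # 1 ly ≈ 9.4607e12 km
    "parsecs": 1 / 3.0857e13,                 # 1 pc ≈ 3.0857e13 km
}

def convert_km(distance_km: float, unit: str) -> float:
    """Convert a distance in kilometres to the requested unit."""
    return distance_km * KM_TO[unit]

for unit in KM_TO:
    print(f"1 km ≈ {convert_km(1.0, unit):.6g} {unit}")
```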
History
By a decree of 8 May 1790, the French National Constituent Assembly ordered the French Academy of Sciences to develop a new measurement system. In August 1793, the French National Convention decreed the metre as the sole length measurement system in the French Republic. It was based on one ten-millionth of the distance from a geographical pole (either North or South) to the Equator, this being a truly internationally based unit. The first name of the kilometre was "Millaire".
Although the metre was formally defined in 1799, the myriametre (10,000 metres) was preferred to the "kilometre" for everyday use. The term "myriamètre" appeared a number of times in the text of Develey's book Physique d'Emile: ou, Principes de la science de la nature (published in 1802), while the term kilometre only appeared in an appendix. French maps published in 1835 had scales showing myriametres and "lieues de Poste" (postal leagues of about 4,288 metres).
The Dutch, on the other hand, adopted the kilometre in 1817 but gave it the local name of the mijl. It was only in 1867 that the term "kilometer" became the only official unit of measure in the Netherlands to represent 1,000 metres.
Two German textbooks dated 1842 and 1848 respectively give a snapshot of the use of the kilometre across Europe: the kilometre was in use in the Netherlands and in Italy, and the myriametre was in use in France.
In 1935, the International Committee for Weights and Measures (CIPM) officially abolished the prefix "myria-" and with it the "myriametre", leaving the kilometre as the recognised unit of length for measurements of that magnitude.
The symbol km for the kilometre is in lower case and has been standardised by the BIPM. A slang term for the kilometre in the US, UK, and Canadian militaries is klick.
Kilometre records
Some sporting disciplines feature 1,000 m (one-kilometre) races in major events (such as the Olympic Games). In some disciplines—although world records are catalogued—one-kilometre events remain a minority.
| Physical sciences | Metric | Basics and measurement |
50513 | https://en.wikipedia.org/wiki/Melanin | Melanin | Melanin is a family of biomolecules organized as oligomers or polymers, which among other functions provide the pigments of many organisms. Melanin pigments are produced in a specialized group of cells known as melanocytes.
There are five basic types of melanin: eumelanin, pheomelanin, neuromelanin, allomelanin and pyomelanin. Melanin is produced through a multistage chemical process known as melanogenesis, where the oxidation of the amino acid tyrosine is followed by polymerization. Pheomelanin is a cysteinated form containing polybenzothiazine portions that are largely responsible for the red or yellow tint given to some skin or hair colors. Neuromelanin is found in the brain. Research has been undertaken to investigate its efficacy in treating neurodegenerative disorders such as Parkinson's. Allomelanin and pyomelanin are two types of nitrogen-free melanin.
The phenotypic color variation observed in the epidermis and hair of mammals is primarily determined by the levels of eumelanin and pheomelanin in the examined tissue. In an average human individual, eumelanin is more abundant in tissues requiring photoprotection, such as the epidermis and the retinal pigment epithelium. In healthy subjects, epidermal melanin is correlated with UV exposure, while retinal melanin has been found to correlate with age, with levels diminishing 2.5-fold between the first and ninth decades of life, which has been attributed to oxidative degradation mediated by reactive oxygen species generated via lipofuscin-dependent pathways. In the absence of albinism or hyperpigmentation, the human epidermis contains approximately 74% eumelanin and 26% pheomelanin, largely irrespective of skin tone, with eumelanin content ranging from 71.8% to 78.9% and pheomelanin from 21.1% to 28.2%. Total melanin content in the epidermis ranges from around 0 μg/mg in albino epidermal tissue to >10 μg/mg in darker tissue.
In the human skin, melanogenesis is initiated by exposure to UV radiation, causing the skin to darken. Eumelanin is an effective absorbent of light; the pigment is able to dissipate over 99.9% of absorbed UV radiation. Because of this property, eumelanin is thought to protect skin cells from UVA and UVB radiation damage, reducing the risk of folate depletion and dermal degradation. Exposure to UV radiation is associated with increased risk of malignant melanoma, a cancer of melanocytes (melanin cells). Studies have shown a lower incidence for skin cancer in individuals with more concentrated melanin, i.e. darker skin tone.
Melanin types
Eumelanin
Eumelanin has two forms, based on 5,6-dihydroxyindole (DHI) and 5,6-dihydroxyindole-2-carboxylic acid (DHICA): DHI-derived eumelanin, which is dark brown or black and insoluble, and DHICA-derived eumelanin, which is lighter and soluble in alkali. Both eumelanins arise from the oxidation of tyrosine in specialized organelles called melanosomes, a reaction catalyzed by the enzyme tyrosinase. The initial product, dopaquinone, can transform into either DHI or DHICA, which are oxidized and then polymerize to form the two eumelanins.
In natural conditions, DHI and DHICA often co-polymerize, resulting in a range of eumelanin polymers. These polymers contribute to the variety of melanin components in human skin and hair, ranging from light yellow/red pheomelanin to light brown DHICA-enriched eumelanin and dark brown or black DHI-enriched eumelanin. These final polymers differ in solubility and color.
Analysis of highly pigmented (Fitzpatrick types V and VI) skin finds that DHI-eumelanin comprises the largest portion, approximately 60–70%, followed by DHICA-eumelanin at 25–35%, and pheomelanin at only 2–8%. Notably, while an enrichment of DHI-eumelanin occurs during sun tanning, it is accompanied by a decrease in DHICA-eumelanin and pheomelanin. A small amount of black eumelanin in the absence of other pigments causes grey hair; a small amount of brown eumelanin in the absence of other pigments causes blond hair. Eumelanin is present in the skin, hair, and other tissues.
Pheomelanin
Pheomelanins (or phaeomelanins, from Greek φαιός phaios, "grey") impart a range of yellowish to reddish colors. Pheomelanins are particularly concentrated in the lips, nipples, glans of the penis, and vagina. When a small amount of eumelanin in hair (which would otherwise cause blond hair) is mixed with pheomelanin, the result is orange hair, which is typically called "red" or "ginger" hair. Pheomelanin is also present in the skin, and redheads consequently often have a more pinkish hue to their skin as well. Exposure of the skin to ultraviolet light increases pheomelanin content, as it does for eumelanin; but rather than absorbing light, pheomelanin within the hair and skin reflect yellow to red light, which may increase damage from UV radiation exposure.
Pheomelanin production is highly dependent on cysteine availability, which is transported into the melanosome, reacting with dopaquinone to form cys-dopa. Cys-dopa then undergoes several transformations before forming pheomelanin. In chemical terms, pheomelanins differ from eumelanins in that the oligomer structure incorporates benzothiazine and benzothiazole units that are produced, instead of DHI and DHICA, when the amino acid L-cysteine is present.
Pheomelanins, unlike eumelanins, are rare in lower organisms; they have been claimed to be an "evolutionary innovation in the tetrapod lineage", but recent research has also found them in some fish.
Neuromelanin
Neuromelanin (NM) is an insoluble polymer pigment produced in specific populations of catecholaminergic neurons in the brain. Humans have the largest amount of NM, which is present in lesser amounts in other primates, and totally absent in many other species. The biological function remains unknown, although human NM has been shown to efficiently bind transition metals such as iron, as well as other potentially toxic molecules. Therefore, it may play crucial roles in apoptosis and the related Parkinson's disease.
Other forms of melanins
Melanin was originally classified into eumelanin and pheomelanin only. In 1955, a melanin associated with nerve cells, neuromelanin, was discovered; in 1972 a water-soluble form, pyomelanin, was discovered; and in 1976 allomelanin, the fifth form of the melanins, was found in nature.
Peptidomelanin
Peptidomelanin is another water-soluble form of melanin. It was found to be secreted into the surrounding medium by germinating Aspergillus niger (strain: melanoliber) spores. Peptidomelanin is formed as a copolymer between L-DOPA eumelanin and short peptides that form a "corona" responsible for the substance's solubility. The peptide chains are linked to the L-DOPA core polymer via peptide bonds. This led to a proposed biosynthetic process involving the hydroxylation of tyrosinylated peptides, formed via proteases during sporogenesis, which are then incorporated autoxidatively into a growing L-DOPA core polymer.
Selenomelanin
It is possible to enrich melanin with selenium instead of sulphur. This selenium analogue of pheomelanin has been successfully synthesized through chemical and biosynthetic routes using selenocystine as a feedstock. Due to selenium's higher atomic number, the obtained selenomelanin can be expected to provide better protection against ionising radiation as compared to the other known forms of melanin. This protection has been demonstrated with radiation experiments on human cells and bacteria, opening up the possibility of applications in space travel.
Trichochromes
Trichochromes (formerly called trichosiderins) are pigments produced from the same metabolic pathway as the eumelanins and pheomelanins, but unlike those molecules they have low molecular weight. They occur in some red human hair.
Humans
In humans, melanin is the primary determinant of skin color. It is also found in hair, the pigmented tissue underlying the iris of the eye, and the stria vascularis of the inner ear. In the brain, tissues with melanin include the medulla and pigment-bearing neurons within areas of the brainstem, such as the locus coeruleus. It also occurs in the zona reticularis of the adrenal gland.
The melanin in the skin is produced by melanocytes, which are found in the basal layer of the epidermis. Although, in general, human beings possess a similar concentration of melanocytes in their skin, the melanocytes in some individuals and ethnic groups produce variable amounts of melanin. The ratio of eumelanin (74%) and pheomelanin (26%) in the epidermis is constant regardless of the degree of pigmentation. Some humans have very little or no melanin synthesis in their bodies, a condition known as albinism.
Because melanin is an aggregate of smaller component molecules, there are many different types of melanin with different proportions and bonding patterns of these component molecules. Both pheomelanin and eumelanin are found in human skin and hair, but eumelanin is the most abundant melanin in humans, as well as the form most likely to be deficient in albinism.
Other organisms
Melanins have very diverse roles and functions in various organisms. A form of melanin makes up the ink used by many cephalopods (see cephalopod ink) as a defense mechanism against predators. Melanins also protect microorganisms, such as bacteria and fungi, against stresses that involve cell damage such as UV radiation from the sun and reactive oxygen species. Melanin also protects against damage from high temperatures, chemical stresses (such as heavy metals and oxidizing agents), and biochemical threats (such as host defenses against invading microbes). Therefore, in many pathogenic microbes (for example, in Cryptococcus neoformans, a fungus) melanins appear to play important roles in virulence and pathogenicity by protecting the microbe against immune responses of its host. In invertebrates, a major aspect of the innate immune defense system against invading pathogens involves melanin. Within minutes after infection, the microbe is encapsulated within melanin (melanization), and the generation of free radical byproducts during the formation of this capsule is thought to aid in killing them. Some types of fungi, called radiotrophic fungi, appear to be able to use melanin as a photosynthetic pigment that enables them to capture gamma rays and harness this energy for growth.
In fish, melanin occurs not only in the skin but also in internal organs such as eyes. Most fish species use eumelanin, but Stegastes apicalis and Cyprinus carpio use pheomelanin instead.
The darker feathers of birds owe their color to melanin and are less readily degraded by bacteria than unpigmented ones or those containing carotenoid pigments. Feathers that contain melanin are also 39% more resistant to abrasion than those that do not because melanin granules help fill the space between the keratin strands that form feathers. Pheomelanin synthesis in birds implies the consumption of cysteine, a semi‐essential amino acid that is necessary for the synthesis of the antioxidant glutathione (GSH) but that may be toxic if in excess in the diet. Indeed, many carnivorous birds, which have a high protein content in their diet, exhibit pheomelanin‐based coloration.
Melanin is also important in mammalian pigmentation. The coat pattern of mammals is determined by the agouti gene which regulates the distribution of melanin. The mechanisms of the gene have been extensively studied in mice to provide an insight into the diversity of mammalian coat patterns.
Melanin in arthropods has been observed to be deposited in layers thus producing a Bragg reflector of alternating refractive index. When the scale of this pattern matches the wavelength of visible light, structural coloration arises: giving a number of species an iridescent color.
Arachnids are one of the few groups in which melanin has not been easily detected, though researchers found data suggesting spiders do in fact produce melanin.
Some moth species, including the wood tiger moth, convert resources to melanin to enhance their thermoregulation. As the wood tiger moth has populations over a large range of latitudes, it has been observed that more northern populations showed higher rates of melanization. In both yellow and white male phenotypes of the wood tiger moth, individuals with more melanin had a heightened ability to trap heat but an increased predation rate due to a weaker and less effective aposematic signal.
Melanin may protect Drosophila flies and mice against DNA damage from non-UV radiation.
Plants
Melanins produced by plants are sometimes referred to as "catechol melanins", as they can yield catechol on alkali fusion. Plant melanin is commonly seen in the enzymatic browning of fruits such as bananas, and chestnut shell melanin can be used as an antioxidant and coloring agent. Biosynthesis involves the oxidation of indole-5,6-quinone by the tyrosinase-type polyphenol oxidase, starting from tyrosine and catecholamines and leading to the formation of catechol melanin. Despite this, many plants contain compounds that inhibit the production of melanins.
Interpretation as a single monomer
It is now understood that melanins do not have a single structure or stoichiometry. Nonetheless, chemical databases such as PubChem include structural and empirical formulae; typically 3,8-Dimethyl-2,7-dihydrobenzo[1,2,3-cd:4,5,6-c′d′]diindole-4,5,9,10-tetrone. This can be thought of as a single monomer that accounts for the measured elemental composition and some properties of melanin, but is unlikely to be found in nature. Solano claims that this misleading trend stems from a report of an empirical formula in 1948, but provides no other historical detail.
Biosynthetic pathways
The first step of the biosynthetic pathway for both eumelanins and pheomelanins is catalysed by tyrosinase.
Tyrosine → DOPA → dopaquinone
Dopaquinone can combine with cysteine by two pathways to benzothiazines and pheomelanins
dopaquinone + cysteine → 5-S-cysteinyldopa → benzothiazine intermediate → pheomelanin
dopaquinone + cysteine → 2-S-cysteinyldopa → benzothiazine intermediate → pheomelanin
Also, dopaquinone can be converted to leucodopachrome and follow two more pathways to the eumelanins
dopaquinone → leucodopachrome → dopachrome → 5,6-dihydroxyindole-2-carboxylic acid → quinone → eumelanin
dopaquinone → leucodopachrome → dopachrome → 5,6-dihydroxyindole → quinone → eumelanin
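As an illustration only, the branches above can be written down as a small graph and the routes enumerated. The node labels follow the text rather than a curated biochemical database, and the shared "quinone" label conflates the distinct quinones of the DHI and DHICA branches:

```python
# The melanin biosynthetic branches from the text, encoded as a graph.
PATHWAY = {
    "tyrosine": ["DOPA"],
    "DOPA": ["dopaquinone"],
    "dopaquinone": ["5-S-cysteinyldopa", "2-S-cysteinyldopa", "leucodopachrome"],
    "5-S-cysteinyldopa": ["benzothiazine intermediate"],
    "2-S-cysteinyldopa": ["benzothiazine intermediate"],
    "benzothiazine intermediate": ["pheomelanin"],
    "leucodopachrome": ["dopachrome"],
    "dopachrome": ["DHICA", "DHI"],
    "DHICA": ["quinone"],
    "DHI": ["quinone"],
    "quinone": ["eumelanin"],
    "pheomelanin": [],
    "eumelanin": [],
}

def routes(start, end, path=()):
    """Yield every route from start to end through the pathway graph."""
    path = path + (start,)
    if start == end:
        yield path
    for nxt in PATHWAY.get(start, ()):
        yield from routes(nxt, end, path)

for product in ("pheomelanin", "eumelanin"):
    for r in routes("tyrosine", product):
        print(" -> ".join(r))
```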
Detailed metabolic pathways can be found in the KEGG database (see External links).
Microscopic appearance
Melanin is brown, non-refractile, and finely granular with individual granules having a diameter of less than 800 nanometers. This differentiates melanin from common blood breakdown pigments, which are larger, chunky, and refractile, and range in color from green to yellow or red-brown. In heavily pigmented lesions, dense aggregates of melanin can obscure histologic detail. A dilute solution of potassium permanganate is an effective melanin bleach.
Genetic disorders and disease states
There are approximately nine types of oculocutaneous albinism, which is mostly an autosomal recessive disorder. Certain ethnicities have higher incidences of different forms. For example, the most common type, called oculocutaneous albinism type 2 (OCA2), is especially frequent among people of black African descent and white Europeans. People with OCA2 usually have fair skin, but are often not as pale as those with OCA1; they typically have pale blonde to golden, strawberry blonde, or even brown hair, and most commonly blue eyes. 98.7–100% of modern Europeans are carriers of the derived allele of SLC24A5, a known cause of nonsyndromic oculocutaneous albinism. OCA2 is an autosomal recessive disorder characterized by a congenital reduction or absence of melanin pigment in the skin, hair, and eyes. The estimated frequency of OCA2 among African-Americans is 1 in 10,000, which contrasts with a frequency of 1 in 36,000 in white Americans. In some African nations, the frequency of the disorder is even higher, ranging from 1 in 2,000 to 1 in 5,000. Another form of albinism, the "yellow oculocutaneous albinism", appears to be more prevalent among the Amish, who are of primarily Swiss and German ancestry. People with this IB variant of the disorder commonly have white hair and skin at birth, but rapidly develop normal skin pigmentation in infancy.
Ocular albinism affects not only eye pigmentation but visual acuity as well. People with albinism typically score poorly on visual acuity tests, in the range of 20/60 to 20/400. In addition, two forms of albinism – most prevalent among people of Puerto Rican origin, where they occur in approximately 1 in 2,700 – are associated with mortality beyond melanoma-related deaths.
The connection between albinism and deafness is well known, though poorly understood. In his 1859 treatise On the Origin of Species, Charles Darwin observed that "cats which are entirely white and have blue eyes are generally deaf". In humans, hypopigmentation and deafness occur together in the rare Waardenburg's syndrome, predominantly observed among the Hopi in North America. The incidence of albinism in Hopi Indians has been estimated as approximately 1 in 200 individuals. Similar patterns of albinism and deafness have been found in other mammals, including dogs and rodents. However, a lack of melanin per se does not appear to be directly responsible for deafness associated with hypopigmentation, as most individuals lacking the enzymes required to synthesize melanin have normal auditory function. Instead, the absence of melanocytes in the stria vascularis of the inner ear results in cochlear impairment, though the reasons for this are not fully understood.
In Parkinson's disease, a disorder that affects neuromotor functioning, there is decreased neuromelanin in the substantia nigra and locus coeruleus as a consequence of specific dropping out of dopaminergic and noradrenergic pigmented neurons. This results in diminished dopamine and norepinephrine synthesis. While no correlation between race and the level of neuromelanin in the substantia nigra has been reported, the significantly lower incidence of Parkinson's in blacks than in whites has "prompt[ed] some to suggest that cutaneous melanin might somehow serve to protect the neuromelanin in substantia nigra from external toxins."
In addition to melanin deficiency, the molecular weight of the melanin polymer may be decreased by various factors such as oxidative stress, exposure to light, perturbation in its association with melanosomal matrix proteins, changes in pH, or in local concentrations of metal ions. A decreased molecular weight or a decrease in the degree of polymerization of ocular melanin has been proposed to turn the normally anti-oxidant polymer into a pro-oxidant. In its pro-oxidant state, melanin has been suggested to be involved in the causation and progression of macular degeneration and melanoma. Rasagiline, an important monotherapy drug in Parkinson's disease, has melanin binding properties, and melanoma tumor reducing properties.
Higher eumelanin levels also can be a disadvantage, however, beyond a higher disposition toward vitamin D deficiency. Dark skin is a complicating factor in the laser removal of port-wine stains. Effective in treating white skin, in general, lasers are less successful in removing port-wine stains in people of Asian or African descent. Higher concentrations of melanin in darker-skinned individuals simply diffuse and absorb the laser radiation, inhibiting light absorption by the targeted tissue. In a similar manner, melanin can complicate laser treatment of other dermatological conditions in people with darker skin.
Freckles and moles are formed where there is a localized concentration of melanin in the skin. They are highly associated with pale skin.
Nicotine has an affinity for melanin-containing tissues because of its precursor function in melanin synthesis or its irreversible binding to melanin. This has been suggested to underlie the increased nicotine dependence and lower smoking-cessation rates in darker-pigmented individuals.
Human adaptations
Physiology
Melanocytes insert granules of melanin into specialized cellular vesicles called melanosomes. These are then transferred into the keratinocyte cells of the human epidermis. The melanosomes in each recipient cell accumulate atop the cell nucleus, where they protect the nuclear DNA from mutations caused by the ionizing radiation of the sun's ultraviolet rays. In general, people whose ancestors lived for long periods in the regions of the globe near the equator have larger quantities of eumelanin in their skins. This makes their skins brown or black and protects them against high levels of exposure to the sun, which more frequently result in melanomas in lighter-skinned people.
Not all the effects of pigmentation are advantageous. Pigmentation increases the heat load in hot climates, and dark-skinned people absorb 30% more heat from sunlight than do very light-skinned people, although this factor may be offset by more profuse sweating. In cold climates dark skin entails more heat loss by radiation. Pigmentation also hinders synthesis of vitamin D. Since pigmentation appears to be not entirely advantageous to life in the tropics, other hypotheses about its biological significance have been advanced; for example a secondary phenomenon induced by adaptation to parasites and tropical diseases.
Evolutionary origins
Early humans evolved dark skin color as an adaptation to a loss of body hair that increased the effects of UV radiation. Before the development of hairlessness, early humans might have had light skin underneath their fur, similar to that found in other primates. Anatomically modern humans evolved in Africa between 200,000 and 100,000 years ago, and then populated the rest of the world through migration between 80,000 and 50,000 years ago, in some areas interbreeding with certain archaic human species (Neanderthals, Denisovans, and possibly others). The first modern humans had dark skin, as do the indigenous people of Africa today. Following migration and settlement in Asia and Europe, the selective pressure for dark, UV-protective skin decreased where radiation from the sun was less intense. This resulted in the current range of human skin color. Of the two common gene variants known to be associated with pale human skin, MC1R does not appear to have undergone positive selection, while SLC24A5 has undergone positive selection.
Effects
As with peoples who migrated northward, those with light skin who migrate toward the equator acclimatize to the much stronger solar radiation. Nature selects for less melanin when ultraviolet radiation is weak. Most people's skin darkens when exposed to UV light, giving them more protection when it is needed. This is the physiological purpose of sun tanning. Dark-skinned people, who produce more skin-protecting eumelanin, have greater protection against sunburn and the development of melanoma, a potentially deadly form of skin cancer, as well as against other health problems related to exposure to strong solar radiation, including the photodegradation of certain vitamins such as riboflavins, carotenoids, tocopherol, and folate.
Melanin in the eyes, in the iris and choroid, helps protect from ultraviolet and high-frequency visible light; people with blue, green, and grey eyes are more at risk of sun-related eye problems. Furthermore, the ocular lens yellows with age, providing added protection. However, the lens also becomes more rigid with age, losing most of its accommodation—the ability to change shape to focus from far to near—a detriment due probably to protein crosslinking caused by UV exposure.
Recent research suggests that melanin may serve a protective role other than photoprotection. Melanin is able to effectively chelate metal ions through its carboxylate and phenolic hydroxyl groups, often much more efficiently than the powerful chelating ligand ethylenediaminetetraacetate (EDTA). Thus, it may serve to sequester potentially toxic metal ions, protecting the rest of the cell. This hypothesis is supported by the fact that the loss of neuromelanin, observed in Parkinson's disease, is accompanied by an increase in iron levels in the brain.
Physical properties and technological applications
Evidence exists for a highly cross-linked heteropolymer bound covalently to matrix scaffolding melanoproteins. It has been proposed that the ability of melanin to act as an antioxidant is directly proportional to its degree of polymerization or molecular weight. Suboptimal conditions for the effective polymerization of melanin monomers may lead to formation of pro-oxidant melanin with lower-molecular-weight, implicated in the causation and progression of macular degeneration and melanoma. Signaling pathways that upregulate melanization in the retinal pigment epithelium (RPE) also may be implicated in the downregulation of rod outer segment phagocytosis by the RPE. This phenomenon has been attributed in part to foveal sparing in macular degeneration.
Role in melanoma metastasis
Heavily pigmented melanoma cells have a Young's modulus of about 4.93 kPa, compared to non-pigmented cells, with a value of 0.98 kPa. The elasticity of melanoma cells is crucial to metastasis and growth; non-pigmented tumors were larger than pigmented tumors, and spread far more easily. Pigmented and non-pigmented cells are both present in melanoma tumors, so that they can both be drug-resistant and metastatic.
| Biology and health sciences | Biochemistry and molecular biology | null |
50530 | https://en.wikipedia.org/wiki/Mole%20%28animal%29 | Mole (animal) | Moles are small, subterranean mammals. They have cylindrical bodies, velvety fur, very small, inconspicuous eyes and ears, reduced hindlimbs, and short, powerful forelimbs with large paws adapted for digging.
The word "mole" most commonly refers to many species in the family Talpidae (which are named after the Latin word for mole, talpa). True moles are found in most parts of North America, Europe and Asia. Other mammals referred to as moles include the African golden moles and the Australian marsupial moles, which have a similar ecology and lifestyle to true moles, but are unrelated.
Moles may be viewed as pests to gardeners, but they provide positive contributions to soil, gardens, and ecosystems, including soil aeration, feeding on slugs and small creatures that eat plant roots, and providing prey for other wildlife. They eat earthworms and other small invertebrates in the soil.
Terminology
In Middle English, moles were known as moldwarps. The expression "don't make a mountain out of a molehill" (which means "exaggerating problems") was first recorded in Tudor times. By the era of Early Modern English, the mole was also known in English as mouldywarp or mouldiwarp, a word having cognates in other Germanic languages such as German (Maulwurf), and Danish, Norwegian, Swedish and Icelandic (muldvarp, moldvarp, mullvad, moldvarpa), where muld/mull/mold refers to soil and varp/vad/varpa refers to throwing, hence "one who throws soil" or "dirt-tosser".
Male moles are called "boars"; females are called "sows".
Characteristics
Underground breathing
Moles have been found to tolerate higher levels of carbon dioxide than other mammals, because their blood cells have a special form of hemoglobin that has a higher affinity to oxygen than other forms. In addition, moles use oxygen more effectively by reusing the exhaled air, and can survive in low-oxygen environments such as burrows.
Extra thumbs
Moles have polydactyl forepaws: each has an extra thumb (also known as a prepollex) next to the regular thumb. While the mole's other digits have multiple joints, the prepollex has a single, sickle-shaped bone that develops later and differently from the other fingers during embryogenesis, from a transformed sesamoid bone in the wrist – independently evolved but similar to the giant panda's extra "thumb". This supernumerary digit is species-specific, as it is not present in shrews, the mole's closest relatives. Androgenic steroids are known to affect the growth and formation of bones, and a connection is possible between this species-specific trait and the masculinized genital apparatus of female moles in many mole species (gonads with both testicular and ovarian tissue).
Diet
Moles are omnivores, but their diet primarily consists of earthworms and other small invertebrates found in the soil. The mole runs are in reality "worm traps", the mole sensing when a worm falls into the tunnel and quickly running along to kill and eat it. Because their saliva contains a toxin that can paralyze earthworms, moles are able to store their still-living prey for later consumption. They construct special underground "larders" for just this purpose; researchers have discovered such larders with over a thousand earthworms in them. Before eating earthworms, moles pull them between their squeezed paws to force the collected earth and dirt out of the worm's gut.
The star-nosed mole can detect, catch and eat food faster than the human eye can follow.
Breeding
Breeding season for a mole depends on species, but is generally from February through May. Males search for females by letting out high-pitched squeals and tunneling through foreign areas.
The gestation period of the Eastern (North America) mole (Scalopus aquaticus) is approximately 42 days. Three to five young are born, mainly in March and early April. Townsend's moles mate in February and March, and the 2–4 young are born in March and April after a gestation period of about 1 month.
Social structure
Moles are generally solitary creatures, coming together only to mate. Territories may overlap, but moles avoid each other, and males may fight fiercely if they meet.
Classification
The family Talpidae contains all the true moles and some of their close relatives. Those species called "shrew moles" represent an intermediate form between the moles and their shrew ancestors, and as such may not be fully covered by this article.
Moles were traditionally classified in the order Insectivora, but that order has since been abandoned because it has been shown to not be monophyletic. Moles are now classified with shrews and hedgehogs, in the more narrowly defined order Eulipotyphla.
Subfamily Scalopinae: New World moles
Tribe Condylurini: Star-nosed mole (North America)
Genus Condylura: Star-nosed mole (the sole species)
Tribe Scalopini: New World moles
Genus Alpiscaptulus: Medog mole (China)
Genus Parascalops: Hairy-tailed mole (northeastern North America)
Genus Scalopus: Eastern mole (North America)
Genus Scapanulus: Gansu mole (China)
Genus Scapanus: Western North American moles (five species)
Subfamily Talpinae: Old World moles, desmans, and shrew moles
Tribe Desmanini
Genus Desmana: Russian desman
Genus Galemys: Pyrenean desman
Tribe Talpini: Old World moles
Genus Euroscaptor: Ten Asian species
Genus Mogera: Nine species from Japan, Korea, and eastern China
Genus Parascaptor: White-tailed mole, southern Asia
Genus Scaptochirus: Short-faced mole, China
Genus Talpa: Thirteen species, Europe and western Asia
Tribe Scaptonychini: Long-tailed mole
Genus Scaptonyx: Long-tailed mole (China and Myanmar (Burma))
Tribe Urotrichini: Japanese shrew moles
Genus Dymecodon: True's shrew mole
Genus Urotrichus: Japanese shrew mole
Tribe Neurotrichini: New World shrew moles
Genus Neurotrichus: American shrew mole (US Pacific Northwest, southwest British Columbia)
Subfamily Uropsilinae: Asian shrew moles
Genus Uropsilus: Five species in China, Bhutan, and Myanmar (Burma)
Other "moles"
Many groups of burrowing animals (pink fairy armadillos, tuco-tucos, mole rats, mole crickets, pygmy mole crickets, and mole crabs) have independently developed close physical similarities with moles due to convergent evolution; two of these are so similar to true moles, they are commonly called and thought of as "moles" in common English, although they are completely unrelated to true moles or to each other. These are the golden moles of southern Africa and the marsupial moles of Australia. While difficult to distinguish from each other, they are most easily distinguished from true moles by shovel-like patches on their noses, which they use in tandem with their abbreviated forepaws to swim through sandy soils.
Golden moles
The golden moles belong to the same branch of the phylogenetic tree as the tenrecs, the order Afrosoricida, which in turn stems from a main branch of placental mammals called Afrotheria. This means that they share a closer common ancestor with such existing afrotherians as elephants, manatees and aardvarks than they do with other placental mammals, such as true Talpidae moles.
ORDER AFROSORICIDA
Suborder Tenrecomorpha
Family Tenrecidae: tenrecs, 34 species in 10 genera
Suborder Chrysochloridea
Family Chrysochloridae
Subfamily Chrysochlorinae
Genus Carpitalpa
Arends' golden mole (Carpitalpa arendsi)
Genus Chlorotalpa
Duthie's golden mole (Chlorotalpa duthieae)
Sclater's golden mole (Chlorotalpa sclateri)
Genus Chrysochloris
Subgenus Chrysochloris
Cape golden mole (Chrysochloris asiatica)
Visagie's golden mole (Chrysochloris visagiei)
Subgenus Kilimatalpa
Stuhlmann's golden mole (Chrysochloris stuhlmanni)
Genus Chrysospalax
Giant golden mole (Chrysospalax trevelyani)
Rough-haired golden mole (Chrysospalax villosus)
Genus Cryptochloris
De Winton's golden mole (Cryptochloris wintoni)
Van Zyl's golden mole (Cryptochloris zyli)
Genus Eremitalpa
Grant's golden mole (Eremitalpa granti)
Subfamily Amblysominae
Genus Amblysomus
Fynbos golden mole (Amblysomus corriae)
Hottentot golden mole (Amblysomus hottentotus)
Marley's golden mole (Amblysomus marleyi)
Robust golden mole (Amblysomus robustus)
Highveld golden mole (Amblysomus septentrionalis)
Genus Calcochloris
Subgenus Huetia
Congo golden mole (Calcochloris leucorhinus)
Subgenus Calcochloris
Yellow golden mole (Calcochloris obtusirostris)
Subgenus incertae sedis
Somali golden mole (Calcochloris tytonis)
Genus Neamblysomus
Juliana's golden mole (Neamblysomus julianae)
Gunning's golden mole (Neamblysomus gunningi)
Marsupial moles
As marsupials, these moles are even more distantly related to true talpid moles than golden moles are; talpid and golden moles both belong to the Eutheria, or placental mammals. Marsupial moles are thus more closely related to such existing Australian marsupials as kangaroos or koalas, and even, to a lesser extent, to American marsupials such as opossums, than they are to placental mammals such as golden or Talpidae moles.
Class Mammalia
Subclass Prototheria: monotremes (echidnas and the platypus)
Subclass Theriiformes: live-bearing mammals and their prehistoric relatives
Infraclass Holotheria: modern live-bearing mammals and their prehistoric relatives
Supercohort Theria: live-bearing mammals
Cohort Marsupialia: marsupials
Magnorder Ameridelphia: New World marsupials
Order Didelphimorphia (opossums)
Order Paucituberculata (shrew opossums)
Superorder Australidelphia Australian marsupials
Order Dasyuromorphia (the Tasmanian devil, the numbat, thylacines, quolls, dunnarts and others)
Order Peramelemorphia (bilbies, bandicoots and rainforest bandicoots)
Order Diprotodontia (koalas, wombats, diprotodonts, possums, cuscuses, sugar gliders, kangaroos and others)
Order Notoryctemorphia (marsupial moles and closely related extinct families of marsupials)
Family Notoryctidae (living and extinct marsupial mole genera)
Genus Notoryctes (only genus of marsupial moles with living species)
Species Notoryctes typhlops (southern marsupial mole)
Species Notoryctes caurinus (northern marsupial mole)
Interaction with humans
Pelts
Moles' pelts have a velvety texture not found in surface animals. Surface-dwelling animals tend to have longer fur with a natural tendency for the nap to lie in a particular direction, but to facilitate their burrowing lifestyle, mole pelts are short and very dense and have no particular direction to the nap. This makes it easy for moles to move backwards underground, as their fur is not "brushed the wrong way". The leather is extremely soft and supple. Queen Alexandra, the wife of Edward VII of the United Kingdom, ordered a mole-fur garment to start a fashion that would create a demand for mole fur, thereby turning what had been a serious pest problem in Scotland into a lucrative industry for the country. Hundreds of pelts are cut into rectangles and sewn together to make a coat. The natural color is taupe (from the French noun taupe, meaning mole), but the fur is readily dyed any color.
The term "moleskin" for a tough cotton fabric is in common use today.
Pest status: extermination and humane options
Moles are considered agricultural pests in some countries, while in others, such as Germany, they are a protected species, but may be killed with a permit. Problems cited as caused by moles include contamination of silage with soil particles, making it unpalatable to livestock, the covering of pasture with fresh soil reducing its size and yield, damage to agricultural machinery by the exposure of stones, damage to young plants through disturbance of the soil, weed invasion of pasture through exposure of freshly tilled soil, and damage to drainage systems and watercourses. Other species such as weasels and voles may use mole tunnels to gain access to enclosed areas or plant roots.
Moles burrow and raise molehills, killing parts of lawns. They can undermine plant roots, indirectly causing damage or death. Moles do not eat plant roots.
Moles are controlled with traps such as mole-catchers, with smoke bombs, and with poisons such as calcium carbide, which produces acetylene gas to drive moles away. Strychnine was also used for this purpose in the past. The most common method now is Phostoxin or Talunex tablets, which contain aluminium phosphide and are inserted into the mole tunnels, where the phosphide reacts with soil moisture to release phosphine gas (not to be confused with phosgene gas). More recently, high-grade nitrogen gas has proven effective at killing moles, with the added advantage of having no polluting effect on the environment.
Other common defensive measures include cat litter and blood meal, used to repel the mole, or smoking it out of its burrow. Devices are also sold that let the gardener, on seeing a molehill move and thereby locating the animal, trap or spear the mole in its burrow.
More humane options include live traps that capture the mole unharmed so it may be transported elsewhere. In many contexts, including ordinary gardens, the damage caused by moles to lawns is mostly visual, so instead of extermination it is possible simply to remove the earth of the molehills as they appear, leaving the permanent galleries for the moles to continue their existence underground. However, when the tunnels are near the surface in soft ground or after heavy rain, they may collapse, leaving small, unsightly furrows in the lawn.
Meat
William Buckland, known for eating every animal he could, opined that mole meat tasted vile.
Archaeology
Moles can inadvertently help archaeologists by bringing small artifacts to the surface through their digging. By examining molehills for sherds and other small objects, archaeologists can find evidence of human habitation.
| Biology and health sciences | Soricomorpha | null |
50557 | https://en.wikipedia.org/wiki/Phytoplankton | Phytoplankton | Phytoplankton are the autotrophic (self-feeding) components of the plankton community and a key part of ocean and freshwater ecosystems. The name comes from the Greek words phyton, meaning 'plant', and planktos, meaning 'wanderer' or 'drifter'.
Phytoplankton obtain their energy through photosynthesis, as trees and other plants do on land. This means phytoplankton must have light from the sun, so they live in the well-lit surface layers (euphotic zone) of oceans and lakes. In comparison with terrestrial plants, phytoplankton are distributed over a larger surface area, are exposed to less seasonal variation and have markedly faster turnover rates than trees (days versus decades). As a result, phytoplankton respond rapidly on a global scale to climate variations.
Phytoplankton form the base of marine and freshwater food webs and are key players in the global carbon cycle. They account for about half of global photosynthetic activity and at least half of the oxygen production, despite amounting to only about 1% of the global plant biomass.
Phytoplankton are very diverse, comprising photosynthesizing bacteria (cyanobacteria) and various unicellular protist groups (notably the diatoms).
Most phytoplankton are too small to be individually seen with the unaided eye. However, when present in high enough numbers, some varieties may be noticeable as colored patches on the water surface due to the presence of chlorophyll within their cells and accessory pigments (such as phycobiliproteins or xanthophylls) in some species.
Types
Phytoplankton are photosynthesizing microscopic protists and bacteria that inhabit the upper sunlit layer of marine and fresh water bodies of water on Earth. Paralleling plants on land, phytoplankton undertake primary production in water, creating organic compounds from carbon dioxide dissolved in the water. Phytoplankton form the base of — and sustain — the aquatic food web, and are crucial players in the Earth's carbon cycle.
Phytoplankton are very diverse, comprising photosynthesizing bacteria (cyanobacteria) and various unicellular protist groups (notably the diatoms). Many other organism groups formerly counted as phytoplankton, including coccolithophores and dinoflagellates, are now no longer included, as they are not only phototrophic but can also eat. These organisms are now more correctly termed mixoplankton. This recognition has important consequences for how we view the functioning of the planktonic food web.
Ecology
Phytoplankton obtain energy through the process of photosynthesis and must therefore live in the well-lit surface layer (termed the euphotic zone) of an ocean, sea, lake, or other body of water. Phytoplankton account for about half of all photosynthetic activity on Earth. Their cumulative energy fixation in carbon compounds (primary production) is the basis for the vast majority of oceanic and also many freshwater food webs (chemosynthesis is a notable exception).
While almost all phytoplankton species are obligate photoautotrophs, there are some that are mixotrophic and other, non-pigmented species that are actually heterotrophic (the latter are often viewed as zooplankton). Of these, the best known are dinoflagellate genera such as Noctiluca and Dinophysis, that obtain organic carbon by ingesting other organisms or detrital material.
Phytoplankton live in the photic zone of the ocean, where photosynthesis is possible. During photosynthesis, they assimilate carbon dioxide and release oxygen. If solar radiation is too high, phytoplankton may fall victim to photodegradation. Phytoplankton species feature a large variety of photosynthetic pigments, which enable each species to absorb different wavelengths of the variable underwater light. Different species can therefore use different wavelengths of light with different efficiencies, so light is not a single ecological resource but a multitude of resources depending on its spectral composition. Accordingly, changes in the spectrum of light alone have been found to alter natural phytoplankton communities even when the same intensity is available. For growth, phytoplankton cells additionally depend on nutrients, which enter the ocean via rivers, continental weathering, and glacial ice meltwater at the poles. Phytoplankton release dissolved organic carbon (DOC) into the ocean. Since phytoplankton are the basis of marine food webs, they serve as prey for zooplankton, fish larvae and other heterotrophic organisms. They can also be degraded by bacteria or by viral lysis. Although some phytoplankton cells, such as dinoflagellates, are able to migrate vertically, they are still incapable of actively moving against currents, so they slowly sink and ultimately fertilize the seafloor with dead cells and detritus.
Phytoplankton are crucially dependent on a number of nutrients. These are primarily macronutrients such as nitrate, phosphate or silicic acid, which are required in relatively large quantities for growth. Their availability in the surface ocean is governed by the balance between the so-called biological pump and upwelling of deep, nutrient-rich waters. The stoichiometric nutrient composition of phytoplankton drives — and is driven by — the Redfield ratio of macronutrients generally available throughout the surface oceans. Phytoplankton also rely on trace metals such as iron (Fe), manganese (Mn), zinc (Zn), cobalt (Co), cadmium (Cd) and copper (Cu) as essential micronutrients, influencing their growth and community composition. Limitations in these metals can lead to co-limitations and shifts in phytoplankton community structure. Across large areas of the oceans such as the Southern Ocean, phytoplankton are often limited by the lack of the micronutrient iron. This has led to some scientists advocating iron fertilization as a means to counteract the accumulation of human-produced carbon dioxide (CO2) in the atmosphere. Large-scale experiments have added iron (usually as salts such as ferrous sulfate) to the oceans to promote phytoplankton growth and draw atmospheric CO2 into the ocean. Controversy about manipulating the ecosystem and the efficiency of iron fertilization has slowed such experiments. The ocean science community still has a divided attitude toward the study of iron fertilization as a potential marine Carbon Dioxide Removal (mCDR) approach.
Phytoplankton depend on B vitamins for survival. Areas of the ocean with a major deficit of some B vitamins, and correspondingly of phytoplankton, have been identified.
The effects of anthropogenic warming on the global population of phytoplankton is an area of active research. Changes in the vertical stratification of the water column, the rate of temperature-dependent biological reactions, and the atmospheric supply of nutrients are expected to have important effects on future phytoplankton productivity.
The effects of anthropogenic ocean acidification on phytoplankton growth and community structure has also received considerable attention. The cells of coccolithophore phytoplankton are typically covered in a calcium carbonate shell called a coccosphere that is sensitive to ocean acidification. Because of their short generation times, evidence suggests some phytoplankton can adapt to changes in pH induced by increased carbon dioxide on rapid time-scales (months to years).
Phytoplankton serve as the base of the aquatic food web, providing an essential ecological function for all aquatic life. Under future conditions of anthropogenic warming and ocean acidification, changes in phytoplankton mortality due to changes in rates of zooplankton grazing may be significant. One of the many food chains in the ocean – remarkable due to the small number of links – is that of phytoplankton sustaining krill (a crustacean similar to a tiny shrimp), which in turn sustain baleen whales.
The El Niño-Southern Oscillation (ENSO) cycles in the equatorial Pacific can affect phytoplankton. Biochemical and physical changes during ENSO cycles modify the phytoplankton community structure, and significant reductions in phytoplankton biomass and density can occur, particularly during El Niño phases. This sensitivity to environmental change is why phytoplankton are often used as indicators of estuarine and coastal ecological condition and health. Satellite ocean-colour observations are used to study these events and give a better view of phytoplankton's global distribution.
Diversity
The term phytoplankton encompasses all photoautotrophic microorganisms in aquatic food webs. However, unlike terrestrial communities, where most autotrophs are plants, phytoplankton are a diverse group, incorporating protistan eukaryotes and both eubacterial and archaebacterial prokaryotes. There are about 5,000 known species of marine phytoplankton. How such diversity evolved despite scarce resources (restricting niche differentiation) is unclear.
In terms of numbers, the most important groups of phytoplankton include the diatoms, cyanobacteria and dinoflagellates, although many other groups of algae are represented. One group, the coccolithophorids, is responsible (in part) for the release of significant amounts of dimethyl sulfide (DMS) into the atmosphere. DMS is oxidized to form sulfate which, in areas where ambient aerosol particle concentrations are low, can contribute to the population of cloud condensation nuclei, mostly leading to increased cloud cover and cloud albedo according to the so-called CLAW hypothesis. Different types of phytoplankton support different trophic levels within varying ecosystems. In oligotrophic oceanic regions such as the Sargasso Sea or the South Pacific Gyre, phytoplankton are dominated by small cells, called picoplankton and nanoplankton (also referred to as picoflagellates and nanoflagellates), mostly composed of cyanobacteria (Prochlorococcus, Synechococcus) and picoeukaryotes such as Micromonas. Within more productive ecosystems, dominated by upwelling or high terrestrial inputs, larger dinoflagellates are the more dominant phytoplankton and make up a larger portion of the biomass.
Growth strategies
In the early twentieth century, Alfred C. Redfield found that the elemental composition of phytoplankton closely matches that of the major dissolved nutrients in the deep ocean. Redfield proposed that the ratio of carbon to nitrogen to phosphorus (106:16:1) in the ocean was controlled by the phytoplankton's requirements, as phytoplankton release nitrogen and phosphorus as they are remineralized. This so-called "Redfield ratio", describing the stoichiometry of phytoplankton and seawater, has become a fundamental principle for understanding marine ecology, biogeochemistry and phytoplankton evolution. However, the Redfield ratio is not a universal value, and it may diverge due to changes in exogenous nutrient delivery and in microbial metabolisms in the ocean, such as nitrogen fixation, denitrification and anammox.
The dynamic stoichiometry shown in unicellular algae reflects their capability to store nutrients in an internal pool, shift between enzymes with various nutrient requirements, and alter osmolyte composition. Different cellular components have their own unique stoichiometric characteristics; for instance, resource-acquisition machinery (for light or nutrients) such as proteins and chlorophyll contains a high concentration of nitrogen but is low in phosphorus, while growth machinery such as ribosomal RNA contains high concentrations of both nitrogen and phosphorus.
Based on allocation of resources, phytoplankton are classified into three growth strategies: survivalist, bloomer and generalist. Survivalist phytoplankton have a high N:P ratio (>30) and contain an abundance of resource-acquisition machinery to sustain growth under scarce resources. Bloomer phytoplankton have a low N:P ratio (<10), contain a high proportion of growth machinery, and are adapted to exponential growth. Generalist phytoplankton have an N:P ratio similar to the Redfield ratio and contain relatively equal amounts of resource-acquisition and growth machinery.
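These thresholds amount to a simple decision rule. The following is a minimal sketch in Python using only the N:P cutoffs quoted above; the function name and example values are illustrative, not part of any published classification scheme:

def growth_strategy(n_to_p: float) -> str:
    """Classify phytoplankton by molar N:P ratio, per the thresholds
    quoted in the text: >30 survivalist, <10 bloomer, else generalist
    (near the Redfield value of 16:1)."""
    if n_to_p > 30:
        return "survivalist"  # resource-acquisition machinery dominates
    if n_to_p < 10:
        return "bloomer"      # P-rich growth machinery (e.g. ribosomes) dominates
    return "generalist"       # roughly Redfield-proportioned

# Illustrative cell quotas (molar N:P), not measured data:
for ratio in (8, 16, 35):
    print(ratio, "->", growth_strategy(ratio))  # bloomer, generalist, survivalist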
Factors affecting abundance
The NAAMES study was a five-year scientific research program conducted between 2015 and 2019 by scientists from Oregon State University and NASA to investigate aspects of phytoplankton dynamics in ocean ecosystems, and how such dynamics influence atmospheric aerosols, clouds, and climate (NAAMES stands for the North Atlantic Aerosols and Marine Ecosystems Study). The study focused on the sub-arctic region of the North Atlantic Ocean, which is the site of one of Earth's largest recurring phytoplankton blooms. The long history of research in this location, as well as relative ease of accessibility, made the North Atlantic an ideal location to test prevailing scientific hypotheses in an effort to better understand the role of phytoplankton aerosol emissions in Earth's energy budget.
NAAMES was designed to target specific phases of the annual phytoplankton cycle: minimum, climax and the intermediary decreasing and increasing biomass, in order to resolve debates on the timing of bloom formations and the patterns driving annual bloom re-creation. The NAAMES project also investigated the quantity, size, and composition of aerosols generated by primary production in order to understand how phytoplankton bloom cycles affect cloud formations and climate.
Factors affecting productivity
Phytoplankton are the key mediators of the biological pump. Understanding the response of phytoplankton to changing environmental conditions is a prerequisite to predict future atmospheric concentrations of CO2. Temperature, irradiance and nutrient concentrations, along with CO2, are the chief environmental factors that influence the physiology and stoichiometry of phytoplankton. The stoichiometry or elemental composition of phytoplankton is of utmost importance to secondary producers such as copepods, fish and shrimp, because it determines the nutritional quality and influences energy flow through the marine food chains. Climate change may greatly restructure phytoplankton communities, leading to cascading consequences for marine food webs and thereby altering the amount of carbon transported to the ocean interior.
Many environmental factors act together on phytoplankton productivity, and all of them are expected to undergo significant changes in the future ocean due to global change. Global-warming simulations predict an increase in ocean temperature, dramatic changes in oceanic stratification and circulation, and changes in cloud cover and sea ice, resulting in an increased light supply to the ocean surface. Reduced nutrient supply is also predicted to co-occur with ocean acidification and warming, due to increased stratification of the water column and reduced mixing of nutrients from the deep water to the surface.
Role of phytoplankton
The compartments influenced by phytoplankton include the atmospheric gas composition, inorganic nutrients, and trace-element fluxes, as well as the transfer and cycling of organic matter via biological processes. The photosynthetically fixed carbon is rapidly recycled and reused in the surface ocean, while a certain fraction of this biomass is exported as sinking particles to the deep ocean, where it is subject to ongoing transformation processes, e.g., remineralization.
Phytoplankton contribute not only to the basic pelagic marine food web but also to the microbial loop. Phytoplankton are the base of the marine food web, and because they do not rely on other organisms for food, they make up the first trophic level. Organisms such as zooplankton feed on these phytoplankton and are in turn fed on by other organisms, and so forth, until the fourth trophic level is reached with apex predators. Approximately 90% of total carbon is lost between trophic levels due to respiration, detritus, and dissolved organic matter. This makes the remineralization and nutrient cycling performed by phytoplankton and bacteria important in maintaining efficiency.
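The roughly 90% loss per level compounds quickly, as a back-of-the-envelope sketch in Python shows; it assumes a uniform 10% transfer efficiency, a textbook simplification rather than a measurement for any particular food web:

# Fraction of phytoplankton-fixed carbon remaining at each trophic level,
# assuming ~10% is transferred per level (90% lost to respiration,
# detritus, and dissolved organic matter, as described above).
efficiency = 0.10
for level in range(1, 5):  # level 1 = phytoplankton, level 4 = apex predators
    remaining = efficiency ** (level - 1)
    print(f"trophic level {level}: {remaining:.1%} of primary production")
# prints 100.0%, 10.0%, 1.0%, 0.1%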
Phytoplankton blooms in which a species increases rapidly under conditions favorable to growth can produce harmful algal blooms (HABs).
Aquaculture
Phytoplankton are a key food item in both aquaculture and mariculture. Both utilize phytoplankton as food for the animals being farmed. In mariculture, the phytoplankton is naturally occurring and is introduced into enclosures with the normal circulation of seawater. In aquaculture, phytoplankton must be obtained and introduced directly. The plankton can either be collected from a body of water or cultured, though the former method is seldom used. Phytoplankton is used as a foodstock for the production of rotifers, which are in turn used to feed other organisms. Phytoplankton is also used to feed many varieties of aquacultured molluscs, including pearl oysters and giant clams. A 2018 study estimated the nutritional value of natural phytoplankton in terms of carbohydrate, protein and lipid across the world ocean using ocean-colour data from satellites, and found the calorific value of phytoplankton to vary considerably across different oceanic regions and at different times of the year.
The production of phytoplankton under artificial conditions is itself a form of aquaculture. Phytoplankton is cultured for a variety of purposes, including as a foodstock for other aquacultured organisms and as a nutritional supplement for captive invertebrates in aquaria. Culture sizes range from small-scale laboratory cultures of less than 1 L to several tens of thousands of litres for commercial aquaculture. Regardless of the size of the culture, certain conditions must be provided for efficient growth of plankton. The majority of cultured plankton is marine, and seawater of a specific gravity of 1.010 to 1.026 may be used as a culture medium. This water must be sterilized, usually by either high temperatures in an autoclave or by exposure to ultraviolet radiation, to prevent biological contamination of the culture. Various fertilizers are added to the culture medium to facilitate the growth of plankton. A culture must be aerated or agitated in some way to keep plankton suspended, as well as to provide dissolved carbon dioxide for photosynthesis. In addition to constant aeration, most cultures are manually mixed or stirred on a regular basis. Light must be provided for the growth of phytoplankton. The colour temperature of illumination should be approximately 6,500 K, but values from 4,000 K to upwards of 20,000 K have been used successfully. The duration of light exposure should be approximately 16 hours daily; this is the most efficient artificial day length.
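Taken together, these culture conditions form a small set of numeric tolerances. A minimal Python sketch that encodes them as a validity check follows; the ranges are those quoted above, and the function name is purely illustrative:

def culture_warnings(specific_gravity, colour_temp_k, light_hours):
    """Return warnings for a marine phytoplankton culture, using the
    ranges quoted in the text above."""
    warnings = []
    if not 1.010 <= specific_gravity <= 1.026:
        warnings.append("seawater specific gravity outside 1.010-1.026")
    if not 4000 <= colour_temp_k <= 20000:
        warnings.append("illumination outside 4,000-20,000 K (6,500 K is typical)")
    if light_hours < 16:
        warnings.append("under the ~16 h daily light generally recommended")
    return warnings

print(culture_warnings(1.020, 6500, 16))  # -> [] (all parameters in range)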
Anthropogenic changes
Marine phytoplankton perform half of the global photosynthetic CO2 fixation (net global primary production of ~50 Pg C per year) and half of the oxygen production despite amounting to only ~1% of global plant biomass. In comparison with terrestrial plants, marine phytoplankton are distributed over a larger surface area, are exposed to less seasonal variation and have markedly faster turnover rates than trees (days versus decades). Therefore, phytoplankton respond rapidly on a global scale to climate variations. These characteristics are important when one is evaluating the contributions of phytoplankton to carbon fixation and forecasting how this production may change in response to perturbations. Predicting the effects of climate change on primary productivity is complicated by phytoplankton bloom cycles that are affected by both bottom-up control (for example, availability of essential nutrients and vertical mixing) and top-down control (for example, grazing and viruses). Increases in solar radiation, temperature and freshwater inputs to surface waters strengthen ocean stratification and consequently reduce transport of nutrients from deep water to surface waters, which reduces primary productivity. Conversely, rising CO2 levels can increase phytoplankton primary production, but only when nutrients are not limiting.
Some studies indicate that overall global oceanic phytoplankton density has decreased in the past century, but these conclusions have been questioned because of the limited availability of long-term phytoplankton data, methodological differences in data generation and the large annual and decadal variability in phytoplankton production. Moreover, other studies suggest a global increase in oceanic phytoplankton production and changes in specific regions or specific phytoplankton groups. The global Sea Ice Index is declining, leading to higher light penetration and potentially more primary production; however, there are conflicting predictions for the effects of variable mixing patterns and changes in nutrient supply and for productivity trends in polar zones.
The effect of human-caused climate change on phytoplankton biodiversity is not well understood. Should greenhouse gas emissions continue rising to high levels by 2100, some phytoplankton models predict an increase in species richness, or the number of different species within a given area. This increase in plankton diversity is traced to warming ocean temperatures. In addition to species richness changes, the locations where phytoplankton are distributed are expected to shift towards the Earth's poles. Such movement may disrupt ecosystems, because phytoplankton are consumed by zooplankton, which in turn sustain fisheries. This shift in phytoplankton location may also diminish the ability of phytoplankton to store carbon that was emitted by human activities. Human (anthropogenic) changes to phytoplankton impact both natural and economic processes.
| Biology and health sciences | Other organisms: General | Plants |
50558 | https://en.wikipedia.org/wiki/Zooplankton | Zooplankton | Zooplankton are the heterotrophic component of the planktonic community (the "zoo-" prefix comes from the Greek zōion, 'animal'), having to consume other organisms to thrive. Plankton are aquatic organisms that are unable to swim effectively against currents. Consequently, they drift or are carried along by currents in the ocean, or by currents in seas, lakes or rivers.
Zooplankton can be contrasted with phytoplankton (cyanobacteria and microalgae), which are the plant-like component of the plankton community (the "phyto-" prefix comes from the Greek phyton, 'plant', although phytoplankton are taxonomically not plants). Zooplankton are heterotrophic (other-feeding), whereas phytoplankton are autotrophic (self-feeding), often generating biological energy and macromolecules through chlorophyllic carbon fixation using sunlight; in other words, zooplankton cannot manufacture their own food, while phytoplankton can. As a result, zooplankton must acquire nutrients by feeding on other organisms such as phytoplankton, which are generally smaller than zooplankton. Most zooplankton are microscopic but some (such as jellyfish) are macroscopic, meaning they can be seen with the naked eye.
Many protozoans (single-celled protists that prey on other microscopic life) are zooplankton, including zooflagellates, foraminiferans, radiolarians, some dinoflagellates and marine microanimals. Macroscopic zooplankton include pelagic cnidarians, ctenophores, molluscs, arthropods and tunicates, as well as planktonic arrow worms and bristle worms.
The distinction between autotrophy and heterotrophy often breaks down in very small organisms. Recent studies of marine microplankton have indicated over half of microscopic plankton are mixotrophs, which can obtain energy and carbon from a mix of internal plastids and external sources. Many marine microzooplankton are mixotrophic, which means they could also be classified as phytoplankton.
Overview
Zooplankton are heterotrophic (sometimes detritivorous) plankton. The word zooplankton is derived from the Greek zōion ('animal') and planktos ('wanderer' or 'drifter').
Zooplankton is a categorization spanning a range of organism sizes including small protozoans and large metazoans. It includes holoplanktonic organisms whose complete life cycle lies within the plankton, as well as meroplanktonic organisms that spend part of their lives in the plankton before graduating to either the nekton or a sessile, benthic existence. Although zooplankton are primarily transported by ambient water currents, many have locomotion, used to avoid predators (as in diel vertical migration) or to increase prey encounter rate.
Just as any species can be limited within a geographical region, so are zooplankton. However, species of zooplankton are not dispersed uniformly or randomly within a region of the ocean. As with phytoplankton, 'patches' of zooplankton species exist throughout the ocean. Though few physical barriers exist above the mesopelagic, specific species of zooplankton are strictly restricted by salinity and temperature gradients, while other species can withstand wide temperature and salinity gradients. Zooplankton patchiness can also be influenced by biological factors, as well as other physical factors. Biological factors include breeding, predation, concentration of phytoplankton, and vertical migration. The physical factor that influences zooplankton distribution the most is mixing of the water column (upwelling and downwelling along the coast and in the open ocean) that affects nutrient availability and, in turn, phytoplankton production.
Through their consumption and processing of phytoplankton and other food sources, zooplankton play a role in aquatic food webs, as a resource for consumers on higher trophic levels (including fish), and as a conduit for packaging the organic material in the biological pump. Since they are typically small, zooplankton can respond rapidly to increases in phytoplankton abundance, for instance, during the spring bloom. Zooplankton are also a key link in the biomagnification of pollutants such as mercury.
Ecologically important protozoan zooplankton groups include the foraminiferans, radiolarians and dinoflagellates (the last of these are often mixotrophic). Important metazoan zooplankton include cnidarians such as jellyfish and the Portuguese Man o' War; crustaceans such as cladocerans, copepods, ostracods, isopods, amphipods, mysids and krill; chaetognaths (arrow worms); molluscs such as pteropods; and chordates such as salps and juvenile fish. This wide phylogenetic range includes a similarly wide range in feeding behavior: filter feeding, predation and symbiosis with autotrophic phytoplankton as seen in corals. Zooplankton feed on bacterioplankton, phytoplankton, other zooplankton (sometimes cannibalistically), detritus (or marine snow) and even nektonic organisms. As a result, zooplankton are primarily found in surface waters where food resources (phytoplankton or other zooplankton) are abundant.
Zooplankton can also act as a disease reservoir. Crustacean zooplankton have been found to house the bacterium Vibrio cholerae, which causes cholera, by allowing the cholera vibrios to attach to their chitinous exoskeletons. This symbiotic relationship enhances the bacterium's ability to survive in an aquatic environment, as the exoskeleton provides the bacterium with carbon and nitrogen.
Size classification
Body size has been defined as a "master trait" for plankton as it is a morphological characteristic shared by organisms across taxonomy that characterises the functions performed by organisms in ecosystems. It has a paramount effect on growth, reproduction, feeding strategies and mortality. One of the oldest manifestations of the biogeography of traits was proposed over 170 years ago, namely Bergmann's rule, in which field observations showed that larger species tend to be found at higher, colder latitudes.
In the oceans, size is critical in determining trophic links in planktonic ecosystems and is thus a critical factor in regulating the efficiency of the biological carbon pump. Body size is sensitive to changes in temperature due to the thermal dependence of physiological processes. The plankton is mainly composed of ectotherms, organisms that do not generate sufficient metabolic heat to elevate their body temperature, so their metabolic processes depend on external temperature. Consequently, ectotherms grow more slowly and reach maturity at a larger body size in colder environments, which has long puzzled biologists because classic theories of life-history evolution predict smaller adult sizes in environments delaying growth. This pattern of body size variation, known as the temperature-size rule (TSR), has been observed for a wide range of ectotherms, including single-celled and multicellular species, invertebrates and vertebrates.
The processes underlying the inverse relationship between body size and temperature remain to be identified. Despite temperature playing a major role in shaping latitudinal variations in organism size, these patterns may also rely on complex interactions between physical, chemical and biological factors. For instance, oxygen supply plays a central role in determining the magnitude of ectothermic temperature-size responses, but it is hard to disentangle the relative effects of oxygen and temperature from field data because these two variables are often strongly inter-related in the surface ocean.
Zooplankton can be broken down into size classes which are diverse in their morphology, diet, feeding strategies, etc. both within classes and between classes:
Microzooplankton
Microzooplankton are defined as heterotrophic and mixotrophic plankton. They primarily consist of phagotrophic protists, including ciliates, dinoflagellates, and mesozooplankton nauplii. Microzooplankton are the major grazers of the plankton community. As the primary consumers of marine phytoplankton, microzooplankton consume ~59–75% of marine primary production daily, a much larger share than mesozooplankton. That said, macrozooplankton can sometimes have greater consumption rates in eutrophic ecosystems, because the larger phytoplankton can be dominant there. Microzooplankton are also pivotal regenerators of nutrients, which fuel primary production, and are food sources for metazoans.
Despite their ecological importance, microzooplankton remain understudied. Routine oceanographic observations seldom monitor microzooplankton biomass or herbivory rate, even though the dilution technique, an elegant method of measuring microzooplankton herbivory, was developed more than four decades ago (Landry and Hassett 1982). The number of observations of microzooplankton herbivory rate is around 1,600 globally, far fewer than for primary productivity (>50,000). This makes the grazing functions of microzooplankton difficult to validate and optimize in ocean ecosystem models.
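The dilution technique estimates grazing by incubating seawater mixed, in varying proportions, with particle-free filtered seawater: the apparent phytoplankton growth rate k in a bottle with whole-seawater fraction D follows k = μ - g·D, where μ is the intrinsic growth rate and g the grazing mortality, so a linear regression of k against D recovers both. A minimal Python sketch of that regression follows, with invented numbers rather than data from any study:

import numpy as np

# Whole (unfiltered) seawater fraction D in each incubation bottle, and
# the apparent phytoplankton growth rate k measured in each (per day).
# Values are invented for illustration only.
D = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
k = np.array([0.62, 0.55, 0.47, 0.41, 0.33])

# Landry-Hassett model: k = mu - g * D, so a least-squares line gives
# intercept = intrinsic growth rate mu and slope = -grazing rate g.
slope, intercept = np.polyfit(D, k, 1)
mu, g = intercept, -slope
print(f"mu = {mu:.2f} per day, g = {g:.2f} per day")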
Mesozooplankton
Mesozooplankton are one of the larger size classes of zooplankton. In most regions, mesozooplankton are dominated by copepods, such as Calanus finmarchicus and Calanus helgolandicus. Mesozooplankton are an important prey for fish.
As plankton are rarely fished, it has been argued that mesozooplankton abundance and species composition can be used to study marine ecosystems' response to climate change. Their life cycles generally last less than a year, so they respond to climate differences between one year and the next. Sparse, monthly sampling will still indicate these fluctuations.
Taxonomic groups
Protozoans
Protozoans are protists that feed on organic matter such as other microorganisms or organic tissues and debris. Historically, the protozoa were regarded as "one-celled animals", because they often possess animal-like behaviours, such as motility and predation, and lack a cell wall, as found in plants and many algae. Although the traditional practice of grouping protozoa with animals is no longer considered valid, the term continues to be used in a loose way to identify single-celled organisms that can move independently and feed by heterotrophy.
Marine protozoans include zooflagellates, foraminiferans, radiolarians and some dinoflagellates.
Radiolarians
Radiolarians are unicellular predatory protists encased in elaborate globular shells usually made of silica and pierced with holes. Their name comes from the Latin for "radius". They catch prey by extending parts of their body through the holes. As with the silica frustules of diatoms, radiolarian shells can sink to the ocean floor when radiolarians die and become preserved as part of the ocean sediment. These remains, as microfossils, provide valuable information about past oceanic conditions.
Foraminiferans
Like radiolarians, foraminiferans (forams for short) are single-celled predatory protists, also protected with shells that have holes in them. Their name comes from the Latin for "hole bearers". Their shells, often called tests, are chambered (forams add more chambers as they grow). The shells are usually made of calcite, but are sometimes made of agglutinated sediment particles or chitin, and (rarely) silica. Most forams are benthic, but about 40 species are planktic. They are widely researched with well-established fossil records which allow scientists to infer a lot about past environments and climates.
Amoeba
Ciliates
Dinoflagellates
Dinoflagellates are a phylum of unicellular flagellates with about 2,000 marine species. Some dinoflagellates are predatory, and thus belong to the zooplankton community. Their name comes from the Greek "dinos" meaning whirling and the Latin "flagellum" meaning a whip or lash. This refers to the two whip-like attachments (flagella) used for forward movement. Most dinoflagellates are protected with red-brown, cellulose armour. Excavates may be the most basal flagellate lineage.
Dinoflagellates often live in symbiosis with other organisms. Many nassellarian radiolarians house dinoflagellate symbionts within their tests. The nassellarian provides ammonium and carbon dioxide for the dinoflagellate, while the dinoflagellate provides the nassellarian with a mucous membrane useful for hunting and protection against harmful invaders. There is evidence from DNA analysis that dinoflagellate symbiosis with radiolarians evolved independently from other dinoflagellate symbioses, such as with foraminifera.
Mixotrophs
A mixotroph is an organism that can use a mix of different sources of energy and carbon, instead of having a single trophic mode on the continuum from complete autotrophy at one end to heterotrophy at the other. It is estimated that mixotrophs comprise more than half of all microscopic plankton. There are two types of eukaryotic mixotrophs: those with their own chloroplasts, and those that acquire phototrophy from other organisms, whether by hosting endosymbionts, by kleptoplasty, or by enslaving the entire phototrophic cell.
The distinction between plants and animals often breaks down in very small organisms. Possible combinations are photo- and chemotrophy, litho- and organotrophy, auto- and heterotrophy or other combinations of these. Mixotrophs can be either eukaryotic or prokaryotic. They can take advantage of different environmental conditions.
Many marine microzooplankton are mixotrophic, which means they could also be classified as phytoplankton. Recent studies of marine microzooplankton found 30–45% of the ciliate abundance was mixotrophic, and up to 65% of the amoeboid, foram and radiolarian biomass was mixotrophic.
Phaeocystis species are endosymbionts to acantharian radiolarians. Phaeocystis is an important algal genus found as part of the marine phytoplankton around the world. It has a polymorphic life cycle, ranging from free-living cells to large colonies. It has the ability to form floating colonies, where hundreds of cells are embedded in a gel matrix, which can increase massively in size during blooms. As a result, Phaeocystis is an important contributor to the marine carbon and sulfur cycles.
A number of forams are mixotrophic. These have unicellular algae as endosymbionts, from diverse lineages such as the green algae, red algae, golden algae, diatoms, and dinoflagellates. Mixotrophic foraminifers are particularly common in nutrient-poor oceanic waters. Some forams are kleptoplastic, retaining chloroplasts from ingested algae to conduct photosynthesis.
In their trophic orientation, dinoflagellates run the gamut. Some dinoflagellates are known to be photosynthetic, but a large fraction of these are in fact mixotrophic, combining photosynthesis with ingestion of prey (phagotrophy). Some species are endosymbionts of marine animals and other protists, and play an important part in the biology of coral reefs. Others prey on other protozoa, and a few forms are parasitic. Many dinoflagellates are mixotrophic and could also be classified as phytoplankton. The toxic dinoflagellate Dinophysis acuta acquires chloroplasts from its prey. "It cannot catch the cryptophytes by itself, and instead relies on ingesting ciliates such as the red Myrionecta rubra, which sequester their chloroplasts from a specific cryptophyte clade (Geminigera/Plagioselmis/Teleaulax)".
Metazoa (animals)
Free-living species in the crustacean class Copepoda are typically 1 to 2 mm long with teardrop-shaped bodies. Like all crustaceans, their bodies are divided into three sections: head, thorax, and abdomen, with two pairs of antennae; the first pair is often long and prominent. They have a tough exoskeleton made of calcium carbonate and usually have a single red eye in the centre of their transparent head. About 13,000 species of copepods are known, of which about 10,200 are marine. They are usually among the more dominant members of the zooplankton.
In addition to copepods, the crustacean classes ostracods, branchiopods and malacostracans also have planktonic members. Barnacles are planktonic only during the larval stage.
Holoplankton and meroplankton
Ichthyoplankton
Ichthyoplankton are the eggs and larvae of fish ("ichthyo" comes from the Greek word for fish). They are planktonic because they cannot swim effectively under their own power, but must drift with the ocean currents. Fish eggs cannot swim at all, and are unambiguously planktonic. Early stage larvae swim poorly, but later stage larvae swim better and cease to be planktonic as they grow into juvenile fish. Fish larvae are part of the zooplankton that eat smaller plankton, while fish eggs carry their own food supply. Both eggs and larvae are themselves eaten by larger animals.
Gelatinous zooplankton
Gelatinous zooplankton include ctenophores, medusae, salps, and Chaetognatha in coastal waters. Jellyfish are slow swimmers, and most species form part of the plankton. Traditionally jellyfish have been viewed as trophic dead ends, minor players in the marine food web, gelatinous organisms with a body plan largely based on water that offers little nutritional value or interest for other organisms apart from a few specialised predators such as the ocean sunfish and the leatherback sea turtle.
That view has recently been challenged. Jellyfish, and more gelatinous zooplankton in general, which include salps and ctenophores, are very diverse, fragile with no hard parts, difficult to see and monitor, subject to rapid population swings and often live inconveniently far from shore or deep in the ocean. It is difficult for scientists to detect and analyse jellyfish in the guts of predators, since they turn to mush when eaten and are rapidly digested. But jellyfish bloom in vast numbers, and it has been shown they form major components in the diets of tuna, spearfish and swordfish as well as various birds and invertebrates such as octopus, sea cucumbers, crabs and amphipods. "Despite their low energy density, the contribution of jellyfish to the energy budgets of predators may be much greater than assumed because of rapid digestion, low capture costs, availability, and selective feeding on the more energy-rich components. Feeding on jellyfish may make marine predators susceptible to ingestion of plastics." According to a 2017 study, narcomedusae consume the greatest diversity of mesopelagic prey, followed by physonect siphonophores, ctenophores and cephalopods.
The importance of the so-called "jelly web" is only beginning to be understood, but it seems medusae, ctenophores and siphonophores can be key predators in deep pelagic food webs with ecological impacts similar to predator fish and squid. Traditionally gelatinous predators were thought ineffectual providers of marine trophic pathways, but they appear to have substantial and integral roles in deep pelagic food webs.
Role in food webs
Grazing by single-celled zooplankton accounts for the majority of organic carbon loss from marine primary production. However, zooplankton grazing remains one of the key unknowns in global predictive models of carbon flux, the marine food web structure and ecosystem characteristics, because empirical grazing measurements are sparse, resulting in poor parameterisation of grazing functions. To overcome this critical knowledge gap, it has been suggested that a focused effort be placed on the development of instrumentation that can link changes in phytoplankton biomass or optical properties with grazing.
Grazing is a central, rate-setting process in ocean ecosystems and a driver of marine biogeochemical cycling. In all ocean ecosystems, grazing by heterotrophic protists constitutes the single largest loss factor of marine primary production and alters particle size distributions. Grazing affects all pathways of export production, making it important both for surface and for deep carbon processes. Predicting central paradigms of ocean ecosystem function, including responses to environmental change, requires accurate representation of grazing in global biogeochemical, ecosystem and cross-biome-comparison models. Several large-scale analyses have concluded that phytoplankton losses, which are dominated by grazing, are the putative explanation for annual cycles in phytoplankton biomass, accumulation rates and export production.
Role in biogeochemistry
In addition to linking primary producers to higher trophic levels in marine food webs, zooplankton also play an important role as "recyclers" of carbon and other nutrients that significantly impact marine biogeochemical cycles, including the biological pump. This is particularly important in the oligotrophic waters of the open ocean. Through sloppy feeding, excretion, egestion, and leaching of fecal pellets, zooplankton release dissolved organic matter (DOM) which controls DOM cycling and supports the microbial loop. Absorption efficiency, respiration, and prey size all further complicate how zooplankton are able to transform and deliver carbon to the deep ocean.
Sloppy feeding and release of DOM
Excretion and sloppy feeding (the physical breakdown of food sources) make up 80% and 20% of crustacean zooplankton-mediated DOM release, respectively. In the same study, fecal pellet leaching was found to be an insignificant contributor. For protozoan grazers, DOM is released primarily through excretion and egestion, and gelatinous zooplankton can also release DOM through the production of mucus. Leaching of fecal pellets can extend from hours to days after initial egestion, and its effects can vary depending on food concentration and quality. Various factors affect how much DOM is released from zooplankton individuals or populations. Absorption efficiency (AE) is the proportion of food absorbed by plankton, which determines how available the consumed organic materials are for meeting the required physiological demands. Depending on the feeding rate and prey composition, variations in AE may lead to variations in fecal pellet production, and thus regulate how much organic material is recycled back to the marine environment. Low feeding rates typically lead to high AE and small, dense pellets, while high feeding rates typically lead to low AE and larger pellets with more organic content. Another contributing factor to DOM release is respiration rate. Physical factors such as oxygen availability, pH, and light conditions may affect overall oxygen consumption and how much carbon is lost from zooplankton in the form of respired CO2. The relative sizes of zooplankton and prey also mediate how much carbon is released via sloppy feeding: smaller prey are ingested whole, whereas larger prey may be fed on more "sloppily", with more biomatter released through inefficient consumption. There is also evidence that diet composition can impact nutrient release, with carnivorous diets releasing more dissolved organic carbon (DOC) and ammonium than omnivorous diets.
Carbon export
Zooplankton play a critical role in supporting the ocean's biological pump through various forms of carbon export, including the production of fecal pellets, mucous feeding webs, molts, and carcasses. Fecal pellets are estimated to be a large contributor to this export, with copepod size rather than abundance expected to determine how much carbon actually reaches the ocean floor. The importance of fecal pellets can vary both by time and location. For example, zooplankton bloom events can produce larger quantities of fecal pellets, resulting in greater measures of carbon export. Additionally, as fecal pellets sink, they are reworked by microbes in the water column, which can thus alter the carbon composition of the pellet. This affects how much carbon is recycled in the euphotic zone and how much reaches depth. Fecal pellet contribution to carbon export is likely underestimated; however, new advances in quantifying this production are currently being developed, including the use of isotopic signatures of amino acids to characterize how much carbon is being exported via zooplankton fecal pellet production. Carcasses are also gaining recognition as being important contributors to carbon export. Jelly falls – the mass sinking of gelatinous zooplankton carcasses – occur across the world as a result of large blooms. Because of their large size, these gelatinous zooplankton are expected to hold a larger carbon content, making their sinking carcasses a potentially important source of food for benthic organisms.
| Biology and health sciences | Other organisms: General | Plants |
50563 | https://en.wikipedia.org/wiki/Sucrose | Sucrose | Sucrose, a disaccharide, is a sugar composed of glucose and fructose subunits. It is produced naturally in plants and is the main constituent of white sugar. It has the molecular formula C12H22O11.
For human consumption, sucrose is extracted and refined from either sugarcane or sugar beet. Sugar mills – typically located in tropical regions near where sugarcane is grown – crush the cane and produce raw sugar which is shipped to other factories for refining into pure sucrose. Sugar beet factories are located in temperate climates where the beet is grown, and process the beets directly into refined sugar. The sugar-refining process involves washing the raw sugar crystals before dissolving them into a sugar syrup which is filtered and then passed over carbon to remove any residual colour. The sugar syrup is then concentrated by boiling under a vacuum and crystallized as the final purification process to produce crystals of pure sucrose that are clear, odorless, and sweet.
Sugar is often an added ingredient in food production and recipes. About 185 million tonnes of sugar were produced worldwide in 2017.
Sucrose is particularly dangerous as a risk factor for tooth decay because Streptococcus mutans bacteria convert it into a sticky, extracellular, dextran-based polysaccharide that allows them to cohere, forming plaque. Sucrose is the only sugar that bacteria can use to form this sticky polysaccharide.
Etymology
The word sucrose was coined in 1857 by the English chemist William Miller from the French sucre ("sugar") and the generic chemical suffix for sugars, -ose. The abbreviated term Suc is often used for sucrose in scientific literature.
The name saccharose was coined in 1860 by the French chemist Marcellin Berthelot. Saccharose is an obsolete name for sugars in general, especially sucrose.
Physical and chemical properties
Its systematic name is O-α-D-glucopyranosyl-(1→2)-β-D-fructofuranoside.
In sucrose, the monomers glucose and fructose are linked via an ether bond between C1 on the glucosyl subunit and C2 on the fructosyl unit. The bond is called a glycosidic linkage. Glucose exists predominantly as a mixture of α and β "pyranose" anomers, but sucrose has only the α form. Fructose exists as a mixture of five tautomers but sucrose has only the β-D-fructofuranose form. Unlike most disaccharides, the glycosidic bond in sucrose is formed between the reducing ends of both glucose and fructose, and not between the reducing end of one and the non-reducing end of the other. This linkage inhibits further bonding to other saccharide units, and prevents sucrose from spontaneously reacting with cellular and circulatory macromolecules in the manner that glucose and other reducing sugars do. Since sucrose contains no anomeric hydroxyl groups, it is classified as a non-reducing sugar.
Sucrose crystallizes in the monoclinic space group P21 with room-temperature lattice parameters a = 1.08631 nm, b = 0.87044 nm, c = 0.77624 nm, β = 102.938°.
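Since the cell is monoclinic, its volume follows from the parameters above via the standard crystallographic relation V = abc sin β; as a quick worked check (the relation itself is textbook material, not stated in the article):

```latex
V = abc\,\sin\beta \approx 1.08631 \times 0.87044 \times 0.77624 \times \sin(102.938^{\circ}) \approx 0.715\ \mathrm{nm}^{3}
```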
The purity of sucrose is measured by polarimetry, through the rotation of plane-polarized light by a sugar solution. The specific rotation, measured using yellow "sodium-D" light (589 nm), is +66.47°. Commercial samples of sugar are assayed using this parameter. Sucrose does not deteriorate at ambient conditions.
Thermal and oxidative degradation
Sucrose does not melt at high temperatures. Instead, it decomposes at around 186 °C to form caramel. Like other carbohydrates, it combusts to carbon dioxide and water by the simplified equation: C12H22O11 + 12 O2 → 12 CO2 + 11 H2O.
Mixing sucrose with the oxidizer potassium nitrate produces the fuel known as rocket candy that is used to propel amateur rocket motors.
The rocket-candy reaction is somewhat simplified, though: only some of the carbon is fully oxidized to carbon dioxide, and other reactions, such as the water-gas shift reaction, also take place, so a more accurate theoretical equation also includes partial-oxidation products such as carbon monoxide and hydrogen.
Sucrose burns in contact with chloric acid, formed by the reaction of hydrochloric acid and potassium chlorate: C12H22O11 + 8 HClO3 → 12 CO2 + 11 H2O + 8 HCl.
Sucrose can be dehydrated with sulfuric acid to form a black, carbon-rich solid, as indicated in the following idealized equation: C12H22O11 → 12 C + 11 H2O.
Sucrose's decomposition can thus be represented as a two-step reaction: the first simplified step is dehydration of sucrose to pure carbon and water (C12H22O11 → 12 C + 11 H2O), after which the carbon is oxidised to carbon dioxide by oxygen from the air (C + O2 → CO2).
Hydrolysis
Hydrolysis breaks the glycosidic bond, converting sucrose into glucose and fructose. Uncatalyzed hydrolysis is, however, so slow that solutions of sucrose can sit for years with negligible change. If the enzyme sucrase is added, the reaction proceeds rapidly. Hydrolysis can also be accelerated with acids, such as cream of tartar or lemon juice, both weak acids. Likewise, gastric acidity converts sucrose to glucose and fructose during digestion, the bond between them being an acetal bond that can be broken by an acid.
Given (higher) heats of combustion of 1349.6 kcal/mol for sucrose, 673.0 kcal/mol for glucose, and 675.6 kcal/mol for fructose, hydrolysis releases about 1.0 kcal (4.2 kJ) per mole of sucrose, or about 3 small calories per gram of product.
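The released energy can be recovered from the quoted heats of combustion by simple arithmetic; the per-gram figure additionally assumes the standard molar mass of the product hexoses (about 180.2 g/mol each), which the article does not state:

```latex
\Delta H \approx 1349.6 - (673.0 + 675.6) = 1.0\ \mathrm{kcal\ per\ mole\ of\ sucrose}, \qquad \frac{1000\ \mathrm{cal}}{(180.2 + 180.2)\ \mathrm{g}} \approx 2.8\ \mathrm{cal/g}
```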
Synthesis and biosynthesis of sucrose
The biosynthesis of sucrose proceeds via the precursors UDP-glucose and fructose 6-phosphate, catalyzed by the enzyme sucrose-6-phosphate synthase. The energy for the reaction is gained by the cleavage of uridine diphosphate (UDP).
Sucrose is formed by plants, algae and cyanobacteria but not by other organisms. Sucrose is the end product of photosynthesis and is found naturally in many food plants along with the monosaccharide fructose. In many fruits, such as pineapple and apricot, sucrose is the main sugar. In others, such as grapes and pears, fructose is the main sugar.
Chemical synthesis
After numerous unsuccessful attempts by others, Raymond Lemieux and George Huber succeeded in synthesizing sucrose from acetylated glucose and fructose in 1953.
Sources
In nature, sucrose is present in many plants, and in particular their roots, fruits and nectars, because it serves as a way to store energy, primarily from photosynthesis. Many mammals, birds, insects and bacteria accumulate and feed on the sucrose in plants and for some it is their main food source. Although honeybees consume sucrose, the honey they produce consists primarily of fructose and glucose, with only trace amounts of sucrose.
As fruits ripen, their sucrose content usually rises sharply, but some fruits contain almost no sucrose at all. This includes grapes, cherries, blueberries, blackberries, figs, pomegranates, tomatoes, avocados, lemons and limes.
Sucrose is a naturally occurring sugar, but with the advent of industrialization, it has been increasingly refined and consumed in all kinds of processed foods.
Production
History of sucrose refinement
The production of table sugar has a long history. Some scholars claim Indians discovered how to crystallize sugar during the Gupta dynasty, around CE 350.
Other scholars point to the ancient manuscripts of China, dated to the 8th century BCE, where one of the earliest historical mentions of sugar cane is included, along with the fact that their knowledge of sugar cane was derived from India. By about 500 BCE, residents of modern-day India began making sugar syrup, cooling it in large flat bowls to produce raw sugar crystals that were easier to store and transport. In the local Indian language, these crystals were called khanda, which is the source of the word candy.
The army of Alexander the Great was halted on the banks of the river Indus by the refusal of his troops to go further east. They saw people in the Indian subcontinent growing sugarcane and making "granulated, salt-like sweet powder", locally called śarkarā, rendered as sakcharon in Greek. On their return journey, the Greek soldiers carried back some of the "honey-bearing reeds". Sugarcane remained a limited crop for over a millennium. Sugar was a rare commodity, and traders of sugar became wealthy. Venice, at the height of its financial power, was the chief sugar-distributing center of Europe. Moors started producing it in Sicily and Spain. Only after the Crusades did it begin to rival honey as a sweetener in Europe. The Spanish began cultivating sugarcane in the West Indies in 1506 (Cuba in 1523). The Portuguese first cultivated sugarcane in Brazil in 1532.
Sugar remained a luxury in much of the world until the 18th century. Only the wealthy could afford it. In the 18th century, the demand for table sugar boomed in Europe and by the 19th century it had become regarded as a human necessity. The use of sugar grew from use in tea, to cakes, confectionery and chocolates. Suppliers marketed sugar in novel forms, such as solid cones, which required consumers to use a sugar nip, a pliers-like tool, in order to break off pieces.
The demand for cheaper table sugar drove, in part, the colonization of tropical islands and nations where labor-intensive sugarcane plantations and table sugar manufacturing could thrive. Growing sugarcane in hot, humid climates and producing table sugar in high-temperature sugar mills was harsh, inhumane work. The demand for cheap labor for this work, in part, first drove the slave trade from Africa (in particular West Africa), followed by the indentured-labor trade from South Asia (in particular India). Millions of slaves, followed by millions of indentured laborers, were brought into the Caribbean, the Indian Ocean region, the Pacific Islands, East Africa, Natal, the north and eastern parts of South America, and southeast Asia. The modern ethnic mix of many nations, settled in the last two centuries, has been influenced by table sugar.
Beginning in the late 18th century, the production of sugar became increasingly mechanized. The steam engine first powered a sugar mill in Jamaica in 1768, and, soon after, steam replaced direct firing as the source of process heat. During the same century, Europeans began experimenting with sugar production from other crops. Andreas Marggraf identified sucrose in beet root and his student Franz Achard built a sugar beet processing factory in Silesia (Prussia). The beet-sugar industry took off during the Napoleonic Wars, when France and the continent were cut off from Caribbean sugar. In 2009, about 20 percent of the world's sugar was produced from beets.
Today, a large beet refinery producing around 1,500 tonnes of sugar a day needs a permanent workforce of about 150 for 24-hour production.
Trends
Table sugar (sucrose) comes from plant sources. Two important sugar crops predominate: sugarcane (Saccharum spp.) and sugar beets (Beta vulgaris), in which sugar can account for 12% to 20% of the plant's dry weight. Minor commercial sugar crops include the date palm (Phoenix dactylifera), sorghum (Sorghum vulgare), and the sugar maple (Acer saccharum). Sucrose is obtained by extraction of these crops with hot water; concentration of the extract gives syrups, from which solid sucrose can be crystallized. In 2017, worldwide production of table sugar amounted to 185 million tonnes.
Most cane sugar comes from countries with warm climates, because sugarcane does not tolerate frost. Sugar beets, on the other hand, grow only in cooler temperate regions and do not tolerate extreme heat. About 80 percent of sucrose is derived from sugarcane, the rest almost all from sugar beets.
In mid-2018, India and Brazil had about the same production of sugar – 34 million tonnes – followed by the European Union, Thailand, and China as the major producers. India, the European Union, and China were the leading domestic consumers of sugar in 2018.
Beet sugar comes from regions with cooler climates: northwest and eastern Europe, northern Japan, plus some areas in the United States (including California). In the northern hemisphere, the beet-growing season ends with the start of harvesting around September. Harvesting and processing continues until March in some cases. The availability of processing plant capacity and the weather both influence the duration of harvesting and processing – the industry can store harvested beets until processed, but a frost-damaged beet becomes effectively unprocessable.
The United States sets high sugar prices to support its producers, with the effect that many former purchasers of sugar have switched to corn syrup (beverage manufacturers) or moved out of the country (candy manufacturers).
The low prices of glucose syrups produced from wheat and corn (maize) threaten the traditional sugar market. Used in combination with artificial sweeteners, they can allow drink manufacturers to produce very low-cost goods.
Types
Cane
Since the 6th century BCE, cane sugar producers have crushed the harvested vegetable material from sugarcane in order to collect and filter the juice. They then treat the liquid, often with lime (calcium oxide), to remove impurities and then neutralize it. Boiling the juice then allows the sediment to settle to the bottom for dredging out, while the scum rises to the surface for skimming off. In cooling, the liquid crystallizes, usually in the process of stirring, to produce sugar crystals. Centrifuges usually remove the uncrystallized syrup. The producers can then either sell the sugar product for use as is, or process it further to produce lighter grades. The later processing may take place in another factory in another country.
Sugarcane is a major component of Brazilian agriculture; the country is the world's largest producer of sugarcane and its derivative products, such as crystallized sugar and ethanol (ethanol fuel).
Beet
Beet sugar producers slice the washed beets, then extract the sugar with hot water in a "diffuser". An alkaline solution ("milk of lime" and carbon dioxide from the lime kiln) then serves to precipitate impurities (see carbonatation). After filtration, evaporation concentrates the juice to a content of about 70% solids, and controlled crystallisation extracts the sugar. A centrifuge removes the sugar crystals from the liquid, which gets recycled in the crystalliser stages. When economic constraints prevent the removal of more sugar, the manufacturer discards the remaining liquid, now known as molasses, or sells it on to producers of animal feed.
Sieving the resultant white sugar produces different grades for selling.
Cane versus beet
It is difficult to distinguish between fully refined sugar produced from beet and cane. One way is by isotope analysis of carbon. Cane uses C4 carbon fixation, and beet uses C3 carbon fixation, resulting in a different ratio of 13C and 12C isotopes in the sucrose. Tests are used to detect fraudulent abuse of European Union subsidies or to aid in the detection of adulterated fruit juice.
Sugar cane tolerates hot climates better, but the production of sugar cane needs approximately four times as much water as the production of sugar beet. As a result, some countries that traditionally produced cane sugar (such as Egypt) have built new beet sugar factories since about 2008. Some sugar factories process both sugar cane and sugar beets and extend their processing period in that way.
The production of sugar leaves residues that differ substantially depending on the raw materials used and on the place of production. While cane molasses is often used in food preparation, humans find molasses from sugar beets unpalatable, and it consequently ends up mostly as industrial fermentation feedstock (for example in alcohol distilleries), or as animal feed. Once dried, either type of molasses can serve as fuel for burning.
Pure beet sugar is difficult to find, so labelled, in the marketplace. Although some makers label their product clearly as "pure cane sugar", beet sugar is almost always labeled simply as sugar or pure sugar. Interviews with the five major beet sugar-producing companies revealed that many store brands or "private label" sugar products are pure beet sugar. The lot code can be used to identify the company and the plant from which the sugar came, enabling beet sugar to be identified if the codes are known.
Culinary sugars
Mill white
Mill white, also called plantation white, crystal sugar, or superior sugar, is produced from raw sugar. It is exposed to sulfur dioxide during production to reduce the concentration of color compounds and to help prevent further color development during crystallization. Although common in sugarcane-growing areas, this product does not store or ship well. After a few weeks, its impurities tend to promote discoloration and clumping, so this type of sugar is generally limited to local consumption.
Blanco directo
Blanco directo, a white sugar common in India and other south Asian countries, is produced by precipitating many impurities out of cane juice using phosphoric acid and calcium hydroxide, similar to the carbonatation technique used in beet sugar refining. Blanco directo is more pure than mill white sugar, but less pure than white refined.
White refined
White refined is the most common form of sugar in North America and Europe. Refined sugar is made by dissolving and purifying raw sugar using phosphoric acid similar to the method used for blanco directo, a carbonatation process involving calcium hydroxide and carbon dioxide, or by various filtration strategies. It is then further purified by filtration through a bed of activated carbon or bone char. Beet sugar refineries produce refined white sugar directly without an intermediate raw stage.
White refined sugar is typically sold as granulated sugar, which has been dried to prevent clumping and comes in various crystal sizes for home and industrial use:
Coarse-grain, such as sanding sugar (also called "pearl sugar", "decorating sugar", nibbed sugar or sugar nibs) is a coarse grain sugar used to add sparkle and flavor atop baked goods and candies. Its large reflective crystals will not dissolve when subjected to heat.
Granulated, familiar as table sugar, with a grain size about 0.5 mm across. "Sugar cubes" are lumps for convenient consumption produced by mixing granulated sugar with sugar syrup.
Caster (0.35 mm), a very fine sugar in Britain and other Commonwealth countries, so-named because the grains are small enough to fit through a sugar caster which is a small vessel with a perforated top, from which to sprinkle sugar at table. Commonly used in baking and mixed drinks, it is sold as "superfine" sugar in the United States. Because of its fineness, it dissolves faster than regular white sugar and is especially useful in meringues and cold liquids. Caster sugar can be prepared at home by grinding granulated sugar for a couple of minutes in a mortar or food processor.
Powdered, 10X sugar, confectioner's sugar (0.060 mm), or icing sugar (0.024 mm), produced by grinding sugar to a fine powder. The manufacturer may add a small amount of anticaking agent to prevent clumping — either corn starch (1% to 3%) or tri-calcium phosphate.
Brown sugar comes either from the late stages of cane sugar refining, when sugar forms fine crystals with significant molasses content, or from coating white refined sugar with a cane molasses syrup (blackstrap molasses). Brown sugar's color and taste become stronger with increasing molasses content, as do its moisture-retaining properties. Brown sugars also tend to harden if exposed to the atmosphere, although proper handling can reverse this.
Measurement
Dissolved sugar content
Scientists and the sugar industry use degrees Brix (symbol °Bx), introduced by Adolf Brix, as units of measurement of the mass ratio of dissolved substance to water in a liquid. A 25 °Bx sucrose solution has 25 grams of sucrose per 100 grams of liquid; or, to put it another way, 25 grams of sucrose sugar and 75 grams of water exist in the 100 grams of solution.
The Brix degrees are measured using an infrared sensor. This measurement does not equate to Brix degrees from a density or refractive index measurement, because it will specifically measure dissolved sugar concentration instead of all dissolved solids. When using a refractometer, one should report the result as "refractometric dried substance" (RDS). One might speak of a liquid as having 20 °Bx RDS. This refers to a measure of percent by weight of total dried solids and, although not technically the same as Brix degrees determined through an infrared method, renders an accurate measurement of sucrose content, since sucrose in fact forms the majority of dried solids. The advent of in-line infrared Brix measurement sensors has made measuring the amount of dissolved sugar in products economical using a direct measurement.
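As a minimal sketch of the mass-ratio definition above (illustrative Python; the function name is invented, not an established API):

```python
def degrees_brix(grams_sucrose: float, grams_water: float) -> float:
    """Grams of dissolved sucrose per 100 g of solution, i.e. degrees Brix."""
    return 100.0 * grams_sucrose / (grams_sucrose + grams_water)

# 25 g of sucrose dissolved in 75 g of water -> 25.0 degrees Brix,
# matching the 25 degree Bx example above.
print(degrees_brix(25.0, 75.0))
```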
Consumption
Refined sugar was a luxury before the 18th century. It became widely popular in the 18th century, then came to be regarded as a necessary food in the 19th century. This evolution of taste and demand for sugar as an essential food ingredient unleashed major economic and social changes. Eventually, table sugar became sufficiently cheap and common to influence standard cuisine and flavored drinks.
Sucrose forms a major element in confectionery and desserts. Cooks use it for sweetening. It can also act as a food preservative when used in sufficient concentrations, and thus is an important ingredient in the production of fruit preserves. Sucrose is important to the structure of many foods, including biscuits and cookies, cakes and pies, candy, and ice cream and sorbets. It is a common ingredient in many processed and so-called "junk foods".
Nutritional information
Fully refined sugar is 99.9% sucrose, providing only carbohydrate as a dietary nutrient, with 390 kilocalories per 100 g. Fully refined sugar contains no micronutrients of significance.
Metabolism of sucrose
In humans and other mammals, sucrose is broken down into its constituent monosaccharides, glucose and fructose, by sucrase or isomaltase glycoside hydrolases, which are located in the membrane of the microvilli lining the duodenum. The resulting glucose and fructose molecules are then rapidly absorbed into the bloodstream. In bacteria and some animals, sucrose is digested by the enzyme invertase. Sucrose is an easily assimilated macronutrient that provides a quick source of energy, provoking a rapid rise in blood glucose upon ingestion. Sucrose, as a pure carbohydrate, has an energy content of 3.94 kilocalories per gram (or 17 kilojoules per gram).
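These figures are mutually consistent; a quick check, using the standard molar mass of sucrose (342.3 g/mol, not stated in the article) and the conversion factor 4.184 kJ/kcal:

```latex
3.94\ \mathrm{kcal/g} \times 4.184\ \mathrm{kJ/kcal} \approx 16.5\ \mathrm{kJ/g} \approx 17\ \mathrm{kJ/g}, \qquad 3.94\ \mathrm{kcal/g} \times 342.3\ \mathrm{g/mol} \approx 1349\ \mathrm{kcal/mol}
```

The second product also agrees closely with the heat of combustion quoted in the hydrolysis section above.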
If consumed excessively, sucrose may contribute to the development of metabolic syndrome, including increased risk for type 2 diabetes, insulin resistance, weight gain and obesity in adults and children.
Tooth decay
Tooth decay (dental caries) has become a pronounced health hazard associated with the consumption of sugars, especially sucrose. Oral bacteria such as Streptococcus mutans live in dental plaque and metabolize any free sugars (not just sucrose, but also glucose, lactose, fructose, and cooked starches) into lactic acid. The resultant lactic acid lowers the pH of the tooth's surface, stripping it of minerals in the process known as tooth decay.
All 6-carbon sugars and disaccharides based on 6-carbon sugars can be converted by dental plaque bacteria into acid that demineralizes teeth, but sucrose may be uniquely useful to Streptococcus sanguinis (formerly Streptococcus sanguis) and Streptococcus mutans. Sucrose is the only dietary sugar that can be converted to sticky glucans (dextran-like polysaccharides) by extracellular enzymes. These glucans allow the bacteria to adhere to the tooth surface and to build up thick layers of plaque. The anaerobic conditions deep in the plaque encourage the formation of acids, which leads to carious lesions. Thus, sucrose could enable S. mutans, S. sanguinis and many other species of bacteria to adhere strongly and resist natural removal, e.g. by flow of saliva, although they are easily removed by brushing. The glucans and levans (fructose polysaccharides) produced by the plaque bacteria also act as a reserve food supply for the bacteria.
This special role of sucrose in the formation of tooth decay is all the more significant in light of the almost universal use of sucrose as the most desirable sweetening agent. Widespread replacement of sucrose by high-fructose corn syrup (HFCS) has not diminished the danger from sucrose. Even smaller amounts of sucrose in the diet are sufficient for the development of thick, anaerobic plaque, and plaque bacteria will metabolise other sugars in the diet, such as the glucose and fructose in HFCS.
Glycemic index
Sucrose is a disaccharide made up of 50% glucose and 50% fructose and has a glycemic index of 65. Sucrose is digested rapidly, but has a relatively low glycemic index due to its content of fructose, which has a minimal effect on blood glucose.
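A rough back-of-the-envelope estimate illustrates the point, assuming glucose defines the index at 100 and taking a commonly cited fructose value of roughly 19 (neither assumption is stated in the article):

```latex
\mathrm{GI}_{\mathrm{sucrose}} \approx 0.5 \times 100 + 0.5 \times 19 \approx 60
```

which is in the neighborhood of the measured value of 65.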
As with other sugars, sucrose is digested into its component monosaccharides by the enzyme sucrase, yielding glucose (blood sugar) and fructose. The glucose component is transported into the blood, where it serves immediate metabolic demands or is converted and stored in the liver as glycogen.
Gout
The occurrence of gout is connected with an excess production of uric acid. A diet rich in sucrose may lead to gout as it raises the level of insulin, which prevents excretion of uric acid from the body. As the concentration of uric acid in the body increases, so does the concentration of uric acid in the joint liquid and beyond a critical concentration, the uric acid begins to precipitate into crystals. Researchers have implicated sugary drinks high in fructose in a surge in cases of gout.
Sucrose intolerance
UN dietary recommendation
In 2015, the World Health Organization published a new guideline on sugars intake for adults and children, as a result of an extensive review of the available scientific evidence by a multidisciplinary group of experts. The guideline recommends that both adults and children ensure their intake of free sugars (monosaccharides and disaccharides added to foods and beverages by the manufacturer, cook or consumer, and sugars naturally present in honey, syrups, fruit juices and fruit juice concentrates) is less than 10% of total energy intake. A level below 5% of total energy intake brings additional health benefits, especially with regards to dental caries.
Religious concerns
The sugar refining industry often uses bone char (calcinated animal bones) for decolorizing. About 25% of sugar produced in the U.S. is processed using bone char as a filter, the remainder being processed with activated carbon. As bone char does not seem to remain in finished sugar, Jewish religious leaders consider sugar filtered through it to be pareve, meaning that it is neither meat nor dairy and may be used with either type of food. However, the bone char must be sourced from a kosher animal (e.g. a cow or sheep) for the sugar to be kosher.
Trade and economics
One of the most widely traded commodities in the world throughout history, sugar accounts for around 2% of the global dry cargo market. International sugar prices show great volatility, having ranged from around 3 cents to over 60 cents per pound over the past 50 years. About 100 of the world's 180 countries produce sugar from beet or cane, a few more refine raw sugar to produce white sugar, and all countries consume sugar. Consumption of sugar ranges from around per person per annum in Ethiopia to around in Belgium. Consumption per capita rises with income per capita until it reaches a plateau of around per person per year in middle-income countries.
Many countries subsidize sugar production heavily. The European Union, the United States, Japan, and many developing countries subsidize domestic production and maintain high tariffs on imports. Sugar prices in these countries have often been up to triple those on the international market; with world market sugar futures prices strong, such prices were typically double world prices.
Within international trade bodies, especially in the World Trade Organization (WTO), the "G20" countries led by Brazil have long argued that, because these sugar markets in essence exclude cane sugar imports, the G20 sugar producers receive lower prices than they would under free trade. While both the European Union and United States maintain trade agreements whereby certain developing and least developed countries (LDCs) can sell certain quantities of sugar into their markets, free of the usual import tariffs, countries outside these preferred trade régimes have complained that these arrangements violate the "most favoured nation" principle of international trade. This has led to numerous tariffs and levies in the past.
In 2004, the WTO sided with a group of cane sugar exporting nations (led by Brazil and Australia) and ruled illegal the EU sugar régime and the accompanying ACP-EU Sugar Protocol, under which a group of African, Caribbean, and Pacific countries received preferential access to the European sugar market. In response to this and other WTO rulings, and owing to internal pressures against the EU sugar régime, the European Commission proposed on 22 June 2005 a radical reform of the EU sugar régime that cut prices by 39% and eliminated all EU sugar exports.
In 2007, it seemed that the U.S. Sugar Program could become the next target for reform. However, some commentators expected heavy lobbying from the U.S. sugar industry, which donated $2.7 million to U.S. House and Senate incumbents in the 2006 U.S. election, more than any other group of U.S. food growers. Especially prominent among sugar lobbyists were the Fanjul brothers, so-called "sugar barons", who made the largest single individual contributions of soft money to both the Democratic and Republican parties in the U.S. political system.
Small quantities of sugar, especially specialty grades of sugar, reach the market as 'fair trade' commodities; the fair trade system produces and sells these products with the understanding that a larger-than-usual fraction of the revenue will support small farmers in the developing world. However, whilst the Fairtrade Foundation offers a premium of $60.00 per tonne to small farmers for sugar branded as "Fairtrade", government schemes such as the U.S. Sugar Program and the ACP-EU Sugar Protocol offer premiums of around $400.00 per tonne above world market prices. However, the EU announced on 14 September 2007 that it had offered "to eliminate all duties and quotas on the import of sugar into the EU".
| Biology and health sciences | Carbohydrates | Biology |
50571 | https://en.wikipedia.org/wiki/Transportation%20engineering | Transportation engineering | Transportation engineering or transport engineering is the application of technology and scientific principles to the planning, functional design, operation and management of facilities for any mode of transportation, in order to provide for the safe, efficient, rapid, comfortable, convenient, economical, and environmentally compatible movement of people and goods (transport).
Theory
The planning aspects of transportation engineering relate to elements of urban planning, and involve technical forecasting decisions and political factors. Technical forecasting of passenger travel usually involves an urban transportation planning model, requiring the estimation of trip generation, trip distribution, mode choice, and route assignment. More sophisticated forecasting can include other aspects of traveler decisions, including auto ownership, trip chaining (the decision to link individual trips together in a tour) and the choice of residential or business location (known as land use forecasting). Passenger trips are the focus of transportation engineering because they often represent the peak of demand on any transportation system.
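As an illustration of one of these steps, the sketch below implements a simple singly constrained gravity model for trip distribution; it is a generic textbook formulation, and the function name, zone data, and exponential cost-decay parameter are all invented for the example:

```python
import math

def distribute_trips(productions, attractions, costs, beta=0.1):
    """Split each origin zone's produced trips among destination zones in
    proportion to attractions weighted by an exponential travel-cost decay."""
    table = []
    for i, produced in enumerate(productions):
        weights = [a * math.exp(-beta * costs[i][j]) for j, a in enumerate(attractions)]
        total = sum(weights)
        table.append([produced * w / total for w in weights])
    return table

# Two zones producing 100 and 50 trips, with attractions 80 and 120 and
# travel costs (e.g. minutes) of 5 within a zone and 15 between zones.
trips = distribute_trips([100, 50], [80, 120], [[5, 15], [15, 5]])
```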
A review of descriptions of the scope of various committees indicates that while facility planning and design continue to be the core of the transportation engineering field, such areas as operations planning, logistics, network analysis, financing, and policy analysis are also important, particularly to those working in highway and urban transportation. The National Council of Examiners for Engineering and Surveying (NCEES) lists online the safety protocols, geometric design requirements, and signal timing.
Transportation engineering primarily involves the planning, design, construction, maintenance, and operation of transportation facilities. The facilities support air, highway, railroad, pipeline, water, and even space transportation. The design aspects of transportation engineering include the sizing of transportation facilities (how many lanes or how much capacity the facility has), determining the materials and thickness used in pavement, and designing the geometry (vertical and horizontal alignment) of the roadway (or track).
Before any planning occurs an engineer must take what is known as an inventory of the area or, if it is appropriate, the previous system in place. This inventory or database must include information on population, land use, economic activity, transportation facilities and services, travel patterns and volumes, laws and ordinances, regional financial resources, and community values and expectations. These inventories help the engineer create business models to complete accurate forecasts of the future conditions of the system.
Operations and management involve traffic engineering, so that vehicles move smoothly on the road or track. Older techniques include signs, signals, markings, and tolling. Newer technologies involve intelligent transportation systems, including advanced traveler information systems (such as variable message signs), advanced traffic control systems (such as ramp meters), and vehicle infrastructure integration. Human factors are an aspect of transportation engineering, particularly concerning driver-vehicle interface and user interface of road signs, signals, and markings.
Specializations
Highway engineering
Engineers in this specialization:
Handle the planning, design, construction, and operation of highways, roads, and other vehicular facilities as well as their related bicycle and pedestrian realms
Estimate the transportation needs of the public and then secure the funding for projects
Analyze locations of high traffic volumes and high collisions for safety and capacity
Use engineering principles to improve the transportation system
Utilize the three design controls, which are the drivers, the vehicles, and the roadways themselves
Railroad engineering
Railway engineers handle the design, construction, and operation of railroads and mass transit systems that use a fixed guideway (such as light rail or monorails).
Typical tasks include:
Determine horizontal and vertical alignment of the railways
Determine station location
Design functional segments of stations like lines, platforms, etc.
Estimate construction cost
Railway engineers work to build a cleaner and safer transportation network by reinvesting and revitalizing the rail system to meet future demands. In the United States, railway engineers work with elected officials in Washington, D.C., on rail transportation issues to make sure that the rail system meets the country's transportation needs.
Railroad engineers can also move into the specialized field of train dispatching which focuses on train movement control.
Port and harbor engineering
Port and harbor engineers handle the design, construction, and operation of ports, harbors, canals, and other maritime facilities.
Airport engineering
Airport engineers design and construct airports, and must account for the impacts and demands of aircraft in their design of airport facilities. These engineers use analysis of the predominant wind direction to determine runway orientation, determine the size of runway borders and safety areas, set wingtip-to-wingtip clearances for all gates, and designate the clear zones for the entire airport. The civil engineering department, consisting of civil and structural engineers, undertakes the structural design of passenger and cargo terminals, aircraft hangars (for parking commercial, private and government aircraft), runways and other pavements, and technical buildings for the installation of airport ground aids, both for the airport's in-house requirements and for consultancy projects. They are even responsible for the master plan of airports they are authorized to work with.
| Technology | Disciplines | null |
50582 | https://en.wikipedia.org/wiki/Zircon | Zircon | Zircon () is a mineral belonging to the group of nesosilicates and is a source of the metal zirconium. Its chemical name is zirconium(IV) silicate, and its corresponding chemical formula is ZrSiO4. An empirical formula showing some of the range of substitution in zircon is (Zr1–y, REEy)(SiO4)1–x(OH)4x–y. Zircon precipitates from silicate melts and has relatively high concentrations of high field strength incompatible elements. For example, hafnium is almost always present in quantities ranging from 1 to 4%. Zircon crystallizes in the tetragonal crystal system. The natural color of zircon varies between colorless, yellow-golden, red, brown, blue, and green.
The name derives from the Persian zargun, meaning "gold-hued". This word was altered into "jargoon", a term applied to light-colored zircons. The English word "zircon" is derived from Zirkon, which is the German adaptation of this word. Yellow, orange, and red zircon is also known as "hyacinth", from the flower hyacinthus, whose name is of Ancient Greek origin.
Properties
Zircon is common in the crust of Earth. It occurs as a common accessory mineral in igneous rocks (as primary crystallization products), in metamorphic rocks, and as detrital grains in sedimentary rocks. Large zircon crystals are rare. Their average size in granite rocks is about , but they can also grow to sizes of several centimetres, especially in mafic pegmatites and carbonatites. Zircon is fairly hard (with a Mohs hardness of 7.5) and chemically stable, and so is highly resistant to weathering. It is also resistant to heat, so that detrital zircon grains are sometimes preserved in igneous rocks formed from melted sediments. Its resistance to weathering, together with its relatively high specific gravity (4.68), makes it an important component of the heavy mineral fraction of sandstones.
Because of their uranium and thorium content, some zircons undergo metamictization. Connected to internal radiation damage, these processes partially disrupt the crystal structure and partly explain the highly variable properties of zircon. As zircon becomes more and more modified by internal radiation damage, the density decreases, the crystal structure is compromised, and the color changes.
Zircon occurs in many colors, including reddish brown, yellow, green, blue, gray, and colorless. The color of zircons can sometimes be changed by heat treatment; common brown zircons can be transformed into colorless and blue zircons by heating. In geological settings, pink, red, and purple zircon develops over hundreds of millions of years, if the crystal has sufficient trace elements to produce color centers. Color in this red or pink series is annealed under geological conditions at sufficiently high temperatures.
Structurally, zircon consists of parallel chains of alternating silica tetrahedra (silicon ions in fourfold coordination with oxygen ions) and zirconium ions, with the large zirconium ions in eightfold coordination with oxygen ions.
Applications
Zircon is mainly consumed as an opacifier and is used in the decorative ceramics industry. It is also the principal precursor not only to metallic zirconium (although this application is small), but also to all compounds of zirconium, including zirconium dioxide (ZrO2), an important refractory oxide with a very high melting point.
Other applications include use in refractories and foundry casting and a growing array of specialty applications as zirconia and zirconium chemicals, including in nuclear fuel rods, catalytic fuel converters and in water and air purification systems.
Zircon is one of the key minerals used by geologists for geochronology.
Zircon is a part of the ZTR index to classify highly-weathered sediments.
Gemstone
Transparent zircon is a well-known form of semi-precious gemstone, favored for its high specific gravity (between 4.2 and 4.86) and adamantine luster. Because of its high refractive index (1.92) it has sometimes been used as a substitute for diamond, though it does not display quite the same play of color as a diamond. Zircon is one of the heaviest types of gemstone. Its Mohs hardness is between that of quartz and topaz, at 7.5 on the 10 point scale, though below that of the similar manmade stone cubic zirconia (8-8.5). Zircons may sometimes lose their inherent color after long exposure to bright sunlight, which is unusual in a gemstone. It is immune to acid attack except by sulfuric acid and then only when ground into a fine powder.
Most gem-grade zircons show a high degree of birefringence which, on stones cut with a table and pavilion cuts (i.e., nearly all cut stones), can be seen as the apparent doubling-up of the latter when viewed through the former, and this characteristic can be used to distinguish them from diamonds and cubic zirconias (CZ) as well as soda-lime glass, none of which show this characteristic. However, some zircons from Sri Lanka display only weak or no birefringence at all, and some other Sri Lanka stones may show clear birefringence in one place and little or none in another part of the same cut stone. Other gemstones also display birefringence, so while the presence of this characteristic may help distinguish a given zircon from a diamond or a CZ, it will not help distinguish it from, for example, a topaz gemstone. The high specific gravity of zircon, however, can usually separate it from any other gem and is simple to test.
Also, birefringence depends on the cut of the stone in relation to its optical axis. If a zircon is cut with this axis perpendicular to its table, birefringence may be reduced to undetectable levels unless viewed with a jeweler's loupe or other magnifying optics. The highest grade zircons are cut to minimize birefringence.
The value of a zircon gem depends largely on its color, clarity, and size. Prior to World War II, blue zircons (the most valuable color) were available from many gemstone suppliers in sizes between 15 and 25 carats; since then, stones even as large as 10 carats have become very scarce, especially in the most desirable color varieties.
Synthetic zircons have been created in laboratories. They are occasionally used in jewellery such as earrings. Zircons are sometimes imitated by spinel and synthetic sapphire, but are not difficult to distinguish from them with simple tools.
Zircon from Ratanakiri province in Cambodia is heat treated to produce blue zircon gemstones, sometimes referred to by the trade name cambolite.
Occurrence
Zircon is a common accessory-to-trace mineral constituent of all kinds of igneous rocks, but particularly granite and felsic igneous rocks. Due to its hardness, durability and chemical inertness, zircon persists in sedimentary deposits and is a common constituent of most sands. Zircon can occasionally be found as a trace mineral in ultrapotassic igneous rocks such as kimberlites, carbonatites, and lamprophyre, owing to the unusual magma genesis of these rocks.
Zircon forms economic concentrations within heavy mineral sands ore deposits, within certain pegmatites, and within some rare alkaline volcanic rocks, for example the Toongi Trachyte, Dubbo, New South Wales Australia in association with the zirconium-hafnium minerals eudialyte and armstrongite.
Australia leads the world in zircon mining, producing 37% of the world total and accounting for 40% of world EDR (economic demonstrated resources) for the mineral. South Africa is Africa's main producer, with 30% of world production, second after Australia.
Radiometric dating
Zircon has played an important role in the evolution of radiometric dating. Zircons contain trace amounts of uranium and thorium (from 10 ppm up to 1 wt%) and can be dated using several modern analytical techniques. Because zircons can survive geologic processes like erosion, transport, and even high-grade metamorphism, they contain a rich and varied record of geological processes. Currently, zircons are typically dated by uranium-lead (U-Pb), fission-track, and U+Th/He techniques. Imaging the cathodoluminescence emission from fast electrons can be used as a prescreening tool for high-resolution secondary-ion mass spectrometry (SIMS), to image the zonation pattern and identify regions of interest for isotope analysis. This is done using an integrated cathodoluminescence and scanning electron microscope. Zircons in sedimentary rock can identify the sediment source.
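The underlying age calculation uses the standard radioactive decay-accumulation relation; a minimal sketch, with the 238U decay constant taken as the standard literature value rather than from this article:

```python
import math

LAMBDA_238U = 1.55125e-10  # decay constant of 238U, per year (standard value)

def u_pb_age_years(pb206_u238: float) -> float:
    """Age from a radiogenic 206Pb/238U ratio: t = ln(1 + D/P) / lambda."""
    return math.log(1.0 + pb206_u238) / LAMBDA_238U

# A ratio near 0.98 yields roughly 4.4 billion years, matching the age of the
# oldest Jack Hills zircons discussed below.
print(u_pb_age_years(0.98) / 1e9)
```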
Zircons from Jack Hills in the Narryer Gneiss Terrane, Yilgarn Craton, Western Australia, have yielded U-Pb ages up to 4.404 billion years, interpreted to be the age of crystallization, making them the oldest minerals so far dated on Earth. In addition, the oxygen isotopic compositions of some of these zircons have been interpreted to indicate that more than 4.3 billion years ago there was already liquid water on the surface of the Earth. This interpretation is supported by additional trace element data, but is also the subject of debate. In 2015, "remains of biotic life" were found in 4.1-billion-year-old rocks in the Jack Hills of Western Australia. According to one of the researchers, "If life arose relatively quickly on Earth ... then it could be common in the universe."
Similar minerals
Hafnon (HfSiO4), xenotime (YPO4), béhierite (TaBO4), schiavinatoite (NbBO4), thorite (ThSiO4), and coffinite (USiO4) all share the same crystal structure as zircon (of the general form X(IV)Y(IV)O4, or X(III)Y(V)O4 in the case of xenotime, where the Roman numerals denote the cation oxidation states).
| Physical sciences | Silicate minerals | Earth science |
50595 | https://en.wikipedia.org/wiki/Flash%20memory | Flash memory | Flash memory is an electronic non-volatile computer memory storage medium that can be electrically erased and reprogrammed. The two main types of flash memory, NOR flash and NAND flash, are named for the NOR and NAND logic gates. Both use the same cell design, consisting of floating-gate MOSFETs. They differ at the circuit level depending on whether the state of the bit line or word lines is pulled high or low: in NAND flash, the relationship between the bit line and the word lines resembles a NAND gate; in NOR flash, it resembles a NOR gate.
Flash memory, a type of floating-gate memory, was invented by Fujio Masuoka at Toshiba in 1980 and is based on EEPROM technology. Toshiba began marketing flash memory in 1987. EPROMs had to be erased completely before they could be rewritten. NAND flash memory, however, may be erased, written, and read in blocks (or pages), which generally are much smaller than the entire device. NOR flash memory allows a single machine word to be written to an erased location or read independently. A flash memory device typically consists of one or more flash memory chips (each holding many flash memory cells), along with a separate flash memory controller chip.
The NAND type is found mainly in memory cards, USB flash drives, solid-state drives (those produced since 2009), feature phones, smartphones, and similar products, for general storage and transfer of data. NAND or NOR flash memory is also often used to store configuration data in digital products, a task previously made possible by EEPROM or battery-powered static RAM. A key disadvantage of flash memory is that it can endure only a relatively small number of write cycles in a specific block.
NOR flash is known for its direct random access capabilities, making it apt for executing code directly. Its architecture allows for individual byte access, facilitating faster read speeds compared to NAND flash. NAND flash memory operates with a different architecture, relying on a serial access approach. This makes NAND suitable for high-density data storage but less efficient for random access tasks. NAND flash is often employed in scenarios where cost-effective, high-capacity storage is crucial, such as in USB drives, memory cards, and solid-state drives (SSDs).
The primary differentiator lies in their use cases and internal structures. NOR flash is optimal for applications requiring quick access to individual bytes, like in embedded systems for program execution. NAND flash, on the other hand, shines in scenarios demanding cost-effective, high-capacity storage with sequential data access.
Flash memory is used in computers, PDAs, digital audio players, digital cameras, mobile phones, synthesizers, video games, scientific instrumentation, industrial robotics, and medical electronics. Flash memory has a fast read access time, but it is not as fast as static RAM or ROM. Flash memory is preferred in portable devices because of its resistance to mechanical shock, to which mechanical drives are more prone.
Because erase cycles are slow, the large block sizes used in flash memory erasing give it a significant speed advantage over non-flash EEPROM when writing large amounts of data. Flash memory costs far less than byte-programmable EEPROM and has become the dominant memory type wherever a system requires a significant amount of non-volatile solid-state storage. EEPROMs, however, are still used in applications that require only small amounts of storage, e.g. in SPD implementations on computer memory modules.
Flash memory packages can use die stacking with through-silicon vias and several dozen layers of 3D TLC NAND cells (per die) simultaneously to achieve capacities of up to 1 tebibyte per package using 16 stacked dies and an integrated flash controller as a separate die inside the package.
History
Background
The origins of flash memory can be traced back to the development of the floating-gate MOSFET (FGMOS), also known as the floating-gate transistor. The original MOSFET was invented at Bell Labs between 1955 and 1960, after Frosch and Derick discovered surface passivation and used their discovery to create the first planar transistors. Dawon Kahng went on to develop a variation, the floating-gate MOSFET, with Taiwanese-American engineer Simon Min Sze at Bell Labs in 1967. They proposed that it could be used as floating-gate memory cells for storing a form of programmable read-only memory (PROM) that is both non-volatile and re-programmable.
Early types of floating-gate memory included EPROM (erasable PROM) and EEPROM (electrically erasable PROM) in the 1970s. However, early floating-gate memory required engineers to build a memory cell for each bit of data, which proved to be cumbersome, slow, and expensive, restricting floating-gate memory to niche applications in the 1970s, such as military equipment and the earliest experimental mobile phones.
Invention and commercialization
Modern EEPROM, based on Fowler-Nordheim tunnelling to erase data, was invented by Bernward and patented by Siemens in 1974, and further developed between 1976 and 1978 by Eliyahou Harari at Hughes Aircraft Company and by George Perlegos and others at Intel. This led to Masuoka's invention of flash memory at Toshiba in 1980. The improvement of flash over EEPROM is that flash is erased and reprogrammed in blocks, while EEPROM is erased and reprogrammed in bytes. According to Toshiba, the name "flash" was suggested by Masuoka's colleague, Shōji Ariizumi, because the erasure process of the memory contents reminded him of the flash of a camera. Masuoka and colleagues presented the invention of NOR flash in 1984, and then NAND flash at the IEEE 1987 International Electron Devices Meeting (IEDM) held in San Francisco.
Toshiba commercially launched NAND flash memory in 1987. Intel Corporation introduced the first commercial NOR type flash chip in 1988. NOR-based flash has long erase and write times, but provides full address and data buses, allowing random access to any memory location. This makes it a suitable replacement for older read-only memory (ROM) chips, which are used to store program code that rarely needs to be updated, such as a computer's BIOS or the firmware of set-top boxes. Its endurance may be from as little as 100 erase cycles for an on-chip flash memory, to a more typical 10,000 or 100,000 erase cycles, up to 1,000,000 erase cycles. NOR-based flash was the basis of early flash-based removable media; CompactFlash was originally based on it, though later cards moved to less expensive NAND flash.
NAND flash has reduced erase and write times, and requires less chip area per cell, thus allowing greater storage density and lower cost per bit than NOR flash. However, the I/O interface of NAND flash does not provide a random-access external address bus. Rather, data must be read on a block-wise basis, with typical block sizes of hundreds to thousands of bits. This makes NAND flash unsuitable as a drop-in replacement for program ROM, since most microprocessors and microcontrollers require byte-level random access. In this regard, NAND flash is similar to other secondary data storage devices, such as hard disks and optical media, and is thus highly suitable for use in mass-storage devices, such as memory cards and solid-state drives (SSD). For example, SSDs store data using multiple NAND flash memory chips.
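A toy model makes the access constraint concrete: pages within a block can be programmed individually, but a programmed page cannot be rewritten until the entire block is erased. All names and sizes here are invented for illustration:

```python
class NandBlock:
    """Toy NAND block: program pages individually, erase only whole blocks."""

    def __init__(self, num_pages: int = 64, page_size: int = 2048):
        self.page_size = page_size
        self.pages = [None] * num_pages  # None models the erased (all-ones) state

    def program(self, index: int, data: bytes) -> None:
        if self.pages[index] is not None:
            raise ValueError("page already programmed; erase the whole block first")
        self.pages[index] = data[:self.page_size]

    def erase(self) -> None:
        self.pages = [None] * len(self.pages)  # erasure is block-granular

block = NandBlock()
block.program(0, b"firmware v1")
block.erase()                    # required before page 0 can hold new data
block.program(0, b"firmware v2")
```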
The first NAND-based removable memory card format was SmartMedia, released in 1995. Many others followed, including MultiMediaCard, Secure Digital, Memory Stick, and xD-Picture Card.
Later developments
A new generation of memory card formats, including RS-MMC, miniSD and microSD, feature extremely small form factors. For example, the microSD card has an area of just over 1.5 cm2, with a thickness of less than 1 mm.
NAND flash has achieved significant levels of memory density as a result of several major technologies that were commercialized during the late 2000s to early 2010s.
NOR flash was the most common type of flash memory sold until 2005, when NAND flash overtook NOR flash in sales.
Multi-level cell (MLC) technology stores more than one bit in each memory cell. NEC demonstrated multi-level cell (MLC) technology in 1998, with an 80Mb flash memory chip storing 2 bits per cell. STMicroelectronics also demonstrated MLC in 2000, with a 64MB NOR flash memory chip. In 2009, Toshiba and SanDisk introduced NAND flash chips with QLC technology storing 4 bits per cell and holding a capacity of 64Gbit. Samsung Electronics introduced triple-level cell (TLC) technology storing 3 bits per cell, and began mass-producing NAND chips with TLC technology in 2010.
Charge trap flash
Charge trap flash (CTF) technology replaces the polysilicon floating gate, which is sandwiched between a blocking gate oxide above and a tunneling oxide below it, with an electrically insulating silicon nitride layer; the silicon nitride layer traps electrons. In theory, CTF is less prone to electron leakage, providing improved data retention.
Because CTF replaces the polysilicon with an electrically insulating nitride, it allows for smaller cells and higher endurance (lower degradation or wear). However, electrons can become trapped and accumulate in the nitride, leading to degradation. Leakage is exacerbated at high temperatures, since electrons become more excited with increasing temperature. CTF technology still uses a tunneling oxide and a blocking layer, which are the weak points of the technology, since they can still be damaged in the usual ways: the tunnel oxide can be degraded by extremely high electric fields, and the blocking layer by anode hot hole injection (AHHI).
Degradation or wear of the oxides is the reason why flash memory has limited endurance, and data retention goes down (the potential for data loss increases) with increasing degradation, since the oxides lose their electrically insulating characteristics as they degrade. The oxides must insulate against electrons to prevent them from leaking which would cause data loss.
In 1991, NEC researchers including N. Kodama, K. Oyama and Hiroki Shirai described a type of flash memory with a charge trap method. In 1998, Boaz Eitan of Saifun Semiconductors (later acquired by Spansion) patented a flash memory technology named NROM that took advantage of a charge-trapping layer to replace the conventional floating gate used in conventional flash memory designs. In 2000, an Advanced Micro Devices (AMD) research team led by Richard M. Fastow, Egyptian engineer Khaled Z. Ahmed and Jordanian engineer Sameer Haddad (who later joined Spansion) demonstrated a charge-trapping mechanism for NOR flash memory cells. CTF was later commercialized by AMD and Fujitsu in 2002. 3D V-NAND (vertical NAND) technology stacks NAND flash memory cells vertically within a chip using 3D charge trap flash (CTF) technology. 3D V-NAND technology was first announced by Toshiba in 2007, and the first device, with 24 layers, was first commercialized by Samsung Electronics in 2013.
3D integrated circuit technology
3D integrated circuit (3D IC) technology stacks integrated circuit (IC) chips vertically into a single 3D IC package. Toshiba introduced 3D IC technology to NAND flash memory in April 2007, when they debuted a 16GB eMMC-compliant (product number THGAM0G7D8DBAI6, often abbreviated THGAM on consumer websites) embedded NAND flash memory package, which was manufactured with eight stacked 2GB NAND flash chips. In September 2007, Hynix Semiconductor (now SK Hynix) introduced 24-layer 3D IC technology, with a 16GB flash memory package that was manufactured with 24 stacked NAND flash chips using a wafer bonding process. Toshiba also used an eight-layer 3D IC for their 32GB THGBM flash package in 2008. In 2010, Toshiba used a 16-layer 3D IC for their 128GB THGBM2 flash package, which was manufactured with 16 stacked 8GB chips. In the 2010s, 3D ICs came into widespread commercial use for NAND flash memory in mobile devices.
In 2016, Micron and Intel introduced a technology known as CMOS Under the Array/CMOS Under Array (CUA), Core over Periphery (COP), Periphery Under Cell (PUA), or Xtacking, in which the control circuitry for the flash memory is placed under or above the flash memory cell array. This has allowed for an increase in the number of planes or sections a flash memory chip has, increasing from 2 planes to 4, without increasing the area dedicated to the control or periphery circuitry. This increases the number of IO operations per flash chip or die, but it also introduces challenges when building capacitors for charge pumps used to write to the flash memory. Some flash dies have as many as 6 planes.
As of August 2017, microSD cards with a capacity up to 400 GB (400 billion bytes) are available. The same year, Samsung combined 3D IC chip stacking with its 3D V-NAND and TLC technologies to manufacture its 512GB KLUFG8R1EM flash memory package with eight stacked 64-layer V-NAND chips. In 2019, Samsung produced a 1024GB flash package with eight stacked 96-layer V-NAND chips and QLC technology.
Principles of operation
Flash memory stores information in an array of memory cells made from floating-gate transistors. In single-level cell (SLC) devices, each cell stores only one bit of information. Multi-level cell (MLC) devices, including triple-level cell (TLC) devices, can store more than one bit per cell.
The floating gate may be conductive (typically polysilicon in most kinds of flash memory) or non-conductive (as in SONOS flash memory).
Floating-gate MOSFET
In flash memory, each memory cell resembles a standard metal–oxide–semiconductor field-effect transistor (MOSFET) except that the transistor has two gates instead of one. The cells can be seen as an electrical switch in which current flows between two terminals (source and drain) and is controlled by a floating gate (FG) and a control gate (CG). The CG is similar to the gate in other MOS transistors, but below it lies the FG, insulated all around by an oxide layer. The FG is interposed between the CG and the MOSFET channel. Because the FG is electrically isolated by its insulating layer, electrons placed on it are trapped. When the FG is charged with electrons, this charge screens the electric field from the CG, thus increasing the threshold voltage (VT) of the cell. This means that the VT of the cell can be changed between the uncharged FG threshold voltage (VT1) and the higher charged FG threshold voltage (VT2) by changing the FG charge. In order to read a value from the cell, an intermediate voltage (VI) between VT1 and VT2 is applied to the CG. If the channel conducts at VI, the FG must be uncharged (if it were charged, there would not be conduction because VI is less than VT2). If the channel does not conduct at VI, it indicates that the FG is charged. The binary value of the cell is sensed by determining whether there is current flowing through the transistor when VI is asserted on the CG. In a multi-level cell device, which stores more than one bit per cell, the amount of current flow is sensed (rather than simply its presence or absence) in order to determine more precisely the level of charge on the FG.
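This sensing scheme can be sketched in a few lines of code. The following toy model is illustrative only: the voltage values and the four-level MLC boundaries are invented for the sketch, not taken from any real device.

```python
# Toy model of reading a floating-gate cell by sensing conduction.
# All voltages and level boundaries are illustrative, not datasheet values.

VT1 = 1.0   # threshold voltage with an uncharged floating gate
VT2 = 4.0   # threshold voltage with a charged floating gate
VI  = 2.5   # intermediate read voltage applied to the control gate

def cell_conducts(control_gate_v, threshold_v):
    """A MOSFET channel conducts when the gate voltage exceeds the threshold."""
    return control_gate_v > threshold_v

def read_slc(threshold_v):
    # Conduction at VI means the floating gate is uncharged -> logical 1.
    return 1 if cell_conducts(VI, threshold_v) else 0

def read_mlc(threshold_v, boundaries=(1.5, 2.5, 3.5)):
    """MLC read: compare against several reference voltages instead of one.
    Each interval between boundaries maps to one of four states (two bits)."""
    return sum(threshold_v > b for b in boundaries)   # state 0..3

print(read_slc(VT1))   # 1: uncharged cell conducts at VI
print(read_slc(VT2))   # 0: charged cell does not conduct
print(read_mlc(3.0))   # state 2 in the toy four-level scheme
```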
Floating-gate MOSFETs are so named because an electrically insulating tunnel oxide layer lies between the floating gate and the silicon, so the gate "floats" above the silicon. The oxide keeps the electrons confined to the floating gate. Degradation or wear (and the limited endurance of floating-gate flash memory) occurs because of the extremely high electric field (10 million volts per centimeter) experienced by the oxide. Such high field strengths can break atomic bonds over time in the relatively thin oxide, gradually degrading its electrically insulating properties and allowing electrons to become trapped in the oxide or to leak freely out of the floating gate through it. Since the quantity of electrons on the floating gate is what represents the different charge levels (each assigned to a different combination of bits in MLC flash), this increases the likelihood of data loss; it is why data retention goes down and the risk of data loss increases with increasing degradation. The silicon oxide in a cell degrades with every erase operation. Degradation increases the amount of negative charge in the cell over time, due to electrons trapped in the oxide, and negates some of the control gate voltage; over time this also makes erasing the cell slower, so to maintain the performance and reliability of the NAND chip, the cell must eventually be retired from use. Endurance also decreases with the number of bits stored in a cell. With more bits per cell, the number of possible states (each represented by a different voltage level) increases, and the cell becomes more sensitive to the voltages used for programming. Voltages may be adjusted to compensate for degradation of the silicon oxide, but as the number of bits increases, so does the number of possible states, leaving less space between the voltage levels that define each state; the cell is therefore less tolerant of adjustments to the programming voltages.
Fowler–Nordheim tunneling
The process of moving electrons from the control gate into the floating gate is called Fowler–Nordheim tunneling, and it fundamentally changes the characteristics of the cell by increasing the MOSFET's threshold voltage. This, in turn, changes the drain-source current that flows through the transistor for a given gate voltage, which is ultimately used to encode a binary value. The Fowler–Nordheim tunneling effect is reversible, so electrons can be added to or removed from the floating gate, processes traditionally known as writing and erasing.
Internal charge pumps
Despite the need for relatively high programming and erasing voltages, virtually all flash chips today require only a single supply voltage and produce the high voltages that are required using on-chip charge pumps.
Over half the energy used by a 1.8 V-NAND flash chip is lost in the charge pump itself. Since boost converters are inherently more efficient than charge pumps, researchers developing low-power SSDs have proposed returning to the dual Vcc/Vpp supply voltages used on all early flash chips, driving the high Vpp voltage for all flash chips in an SSD with a single shared external boost converter.
In spacecraft and other high-radiation environments, the on-chip charge pump is the first part of the flash chip to fail, although flash memories will continue to work in read-only mode at much higher radiation levels.
NOR flash
In NOR flash, each cell has one end connected directly to ground, and the other end connected directly to a bit line. This arrangement is called "NOR flash" because it acts like a NOR gate: when one of the word lines (connected to the cell's CG) is brought high, the corresponding storage transistor acts to pull the output bit line low. NOR flash continues to be the technology of choice for embedded applications requiring a discrete non-volatile memory device. The low read latencies characteristic of NOR devices allow for both direct code execution and data storage in a single memory product.
Programming
A single-level NOR flash cell in its default state is logically equivalent to a binary "1" value, because current will flow through the channel under application of an appropriate voltage to the control gate, so that the bitline voltage is pulled down. A NOR flash cell can be programmed, or set to a binary "0" value, by the following procedure:
an elevated on-voltage (typically >5 V) is applied to the CG
the channel is now turned on, so electrons can flow from the source to the drain (assuming an NMOS transistor)
the source-drain current is sufficiently high to cause some high energy electrons to jump through the insulating layer onto the FG, via a process called hot-electron injection.
Erasing
To erase a NOR flash cell (resetting it to the "1" state), a large voltage of the opposite polarity is applied between the CG and source terminal, pulling the electrons off the FG through Fowler–Nordheim tunneling (FN tunneling). This is known as negative gate source erase. Newer NOR memories can erase using negative gate channel erase, which biases the wordline of a NOR memory cell block and the P-well of the memory cell block to allow FN tunneling to be carried out, erasing the cell block. Older memories used source erase, in which a high voltage was applied to the source and electrons from the FG were moved to the source. Modern NOR flash memory chips are divided into erase segments (often called blocks or sectors). The erase operation can be performed only on a block-wise basis; all the cells in an erase segment must be erased together. Programming of NOR cells, however, generally can be performed one byte or word at a time.
NAND flash
NAND flash also uses floating-gate transistors, but they are connected in a way that resembles a NAND gate: several transistors are connected in series, and the bit line is pulled low only if all the word lines are pulled high (above the transistors' VT). These groups are then connected via some additional transistors to a NOR-style bit line array in the same way that single transistors are linked in NOR flash.
Compared to NOR flash, replacing single transistors with serial-linked groups adds an extra level of addressing. Whereas NOR flash might address memory by page then word, NAND flash might address it by page, word and bit. Bit-level addressing suits bit-serial applications (such as hard disk emulation), which access only one bit at a time. Execute-in-place applications, on the other hand, require every bit in a word to be accessed simultaneously. This requires word-level addressing. In any case, both bit and word addressing modes are possible with either NOR or NAND flash.
To read data, first the desired group is selected (in the same way that a single transistor is selected from a NOR array). Next, most of the word lines are pulled up above VT2, while one of them is pulled up to VI. The series group will conduct (and pull the bit line low) if the selected bit has not been programmed.
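A minimal sketch of this series-string selection, continuing the toy voltages used earlier: every unselected word line is driven high enough (above VT2) to force its transistor on, so the selected cell alone determines whether the string conducts.

```python
# Toy model of reading one cell in a NAND string. Unselected word lines are
# driven to a pass voltage above VT2; the selected one gets VI. Values are
# illustrative only.

VT1, VT2, VI, V_PASS = 1.0, 4.0, 2.5, 5.0

def string_conducts(thresholds, selected_index):
    """The series string conducts only if every transistor conducts."""
    for i, vt in enumerate(thresholds):
        gate_v = VI if i == selected_index else V_PASS
        if gate_v <= vt:          # this transistor blocks the whole string
            return False
    return True

# A 4-cell string: cells 0 and 2 programmed (charged, VT2), the rest erased.
string = [VT2, VT1, VT2, VT1]
print(string_conducts(string, 1))  # True: selected cell 1 is erased (reads 1)
print(string_conducts(string, 0))  # False: selected cell 0 is programmed (reads 0)
```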
Despite the additional transistors, the reduction in ground wires and bit lines allows a denser layout and greater storage capacity per chip. (The ground wires and bit lines are actually much wider than the lines in the diagrams.) In addition, NAND flash is typically permitted to contain a certain number of faults (NOR flash, as is used for a BIOS ROM, is expected to be fault-free). Manufacturers try to maximize the amount of usable storage by shrinking the size of the transistors or cells; however, the industry can avoid this and achieve higher storage densities per die by using 3D NAND, which stacks cells on top of each other.
NAND flash cells are read by analysing their response to various voltages.
Writing and erasing
NAND flash uses tunnel injection for writing and tunnel release for erasing. NAND flash memory forms the core of the removable USB storage devices known as USB flash drives, as well as most memory card formats and solid-state drives available today.
The hierarchical structure of NAND flash starts at a cell level which establishes strings, then pages, blocks, planes and ultimately a die. A string is a series of connected NAND cells in which the source of one cell is connected to the drain of the next one. Depending on the NAND technology, a string typically consists of 32 to 128 NAND cells. Strings are organised into pages which are then organised into blocks in which each string is connected to a separate line called a bitline. All cells with the same position in the string are connected through the control gates by a wordline. A plane contains a certain number of blocks that are connected through the same bitline. A flash die consists of one or more planes, and the peripheral circuitry that is needed to perform all the read, write, and erase operations.
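The hierarchy implies a simple capacity calculation. The geometry below is one plausible configuration chosen purely for illustration, not a specific product:

```python
# Illustrative NAND geometry; real parts vary widely in every parameter.
PAGE_BYTES       = 16 * 1024   # data bytes per page
PAGES_PER_BLOCK  = 128
BLOCKS_PER_PLANE = 1024
PLANES_PER_DIE   = 4

block_bytes = PAGE_BYTES * PAGES_PER_BLOCK
plane_bytes = block_bytes * BLOCKS_PER_PLANE
die_bytes   = plane_bytes * PLANES_PER_DIE

print(f"block: {block_bytes // 1024} KiB")    # 2048 KiB = 2 MiB per block
print(f"plane: {plane_bytes // 2**20} MiB")   # 2048 MiB = 2 GiB per plane
print(f"die:   {die_bytes // 2**30} GiB")     # 8 GiB per die
```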
The architecture of NAND flash means that data can be read and programmed (written) in pages, typically between 4 KiB and 16 KiB in size, but can only be erased at the level of entire blocks consisting of multiple pages. When a block is erased, all the cells are logically set to 1. Data can only be programmed in one pass to a page in a block that was erased. The programming process sets one or more cells from 1 to 0. Any cells that have been set to 0 by programming can only be reset to 1 by erasing the entire block. This means that before new data can be programmed into a page that already contains data, the current contents of the page plus the new data must be copied to a new, erased page. If a suitable erased page is available, the data can be written to it immediately. If no erased page is available, a block must be erased before copying the data to a page in that block. The old page is then marked as invalid and is available for erasing and reuse. This differs from the operating system's logical block address (LBA) view: for example, if the operating system writes 1100 0011 to the flash storage device (such as an SSD), the data actually written to the flash memory may be stored as 0011 1100.
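This erase-before-write constraint forces a copy-on-write update flow, which can be sketched as follows. The model is deliberately simplified; a real controller adds logical-to-physical mapping tables, wear leveling, and ECC on top of it.

```python
# Simplified copy-on-write page update under the erase-before-write rule.
ERASED = 0xFF  # an erased byte has all bits set to 1

class Block:
    def __init__(self, num_pages, page_size):
        self.pages = [bytearray([ERASED] * page_size) for _ in range(num_pages)]
        self.programmed = [False] * num_pages

    def erase(self):
        for page in self.pages:
            page[:] = bytes([ERASED] * len(page))
        self.programmed = [False] * len(self.pages)

def program_page(block, n, data):
    # Programming may only clear bits (1 -> 0); a page is written in one pass.
    assert not block.programmed[n], "page already written; erase the block first"
    for i, byte in enumerate(data):
        block.pages[n][i] &= byte
    block.programmed[n] = True

def update_page(old_block, n, new_data, erased_block):
    """Write new data for page n by copying the block's live contents into an
    already-erased block; the old block is then erased and returned for reuse."""
    for p in range(len(old_block.pages)):
        data = new_data if p == n else bytes(old_block.pages[p])
        program_page(erased_block, p, data)
    old_block.erase()
```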
Vertical NAND
Vertical NAND (V-NAND) or 3D NAND memory stacks memory cells vertically and uses a charge trap flash architecture. The vertical layers allow larger areal bit densities without requiring smaller individual cells. It is also sold under the trademark BiCS Flash, which is a trademark of Kioxia Corporation (formerly Toshiba Memory Corporation). 3D NAND was first announced by Toshiba in 2007. V-NAND was first commercially manufactured by Samsung Electronics in 2013.
Structure
V-NAND uses a charge trap flash geometry (commercially introduced in 2002 by AMD and Fujitsu) that stores charge on an embedded silicon nitride film. Such a film is more robust against point defects and can be made thicker to hold larger numbers of electrons. V-NAND wraps a planar charge trap cell into a cylindrical form. As of 2020, 3D NAND flash memories by Micron and Intel instead use floating gates; however, Micron's 128-layer and later 3D NAND memories use a conventional charge trap structure, due to the dissolution of the partnership between Micron and Intel. Charge trap 3D NAND flash is thinner than floating gate 3D NAND. In floating gate 3D NAND, the memory cells are completely separated from one another, whereas in charge trap 3D NAND, vertical groups of memory cells share the same silicon nitride material.
An individual memory cell is made up of one planar polysilicon layer containing a hole filled by multiple concentric vertical cylinders. The hole's polysilicon surface acts as the gate electrode. The outermost silicon dioxide cylinder acts as the gate dielectric, enclosing a silicon nitride cylinder that stores charge, in turn enclosing a silicon dioxide cylinder as the tunnel dielectric that surrounds a central rod of conducting polysilicon which acts as the conducting channel.
Memory cells in different vertical layers do not interfere with each other, as the charges cannot move vertically through the silicon nitride storage medium, and the electric fields associated with the gates are closely confined within each layer. The vertical collection is electrically identical to the serial-linked groups in which conventional NAND flash memory is configured. There is also string stacking, which builds several 3D NAND memory arrays or "plugs" separately and then stacks them together to create a product with a higher number of 3D NAND layers on a single die. Often, two or three arrays are stacked. The misalignment between plugs is on the order of 10 to 30 nm.
Construction
Growth of a group of V-NAND cells begins with an alternating stack of conducting (doped) polysilicon layers and insulating silicon dioxide layers.
The next step is to form a cylindrical hole through these layers. In practice, a 128 Gbit V-NAND chip with 24 layers of memory cells requires about 2.9 billion such holes. Next, the hole's inner surface receives multiple coatings, first silicon dioxide, then silicon nitride, then a second layer of silicon dioxide. Finally, the hole is filled with conducting (doped) polysilicon.
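That hole count can be sanity-checked with quick arithmetic, assuming (for illustration only) two bits per cell and one memory cell per layer in each hole:

```python
capacity_bits = 128 * 2**30   # 128 Gbit
layers        = 24            # memory-cell layers, i.e. cells per hole
bits_per_cell = 2             # assuming MLC storage

holes = capacity_bits / (layers * bits_per_cell)
print(f"{holes / 1e9:.1f} billion holes")   # -> 2.9 billion
```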
Performance
V-NAND flash architecture allows read and write operations twice as fast as conventional NAND and can last up to 10 times as long, while consuming 50 percent less power. V-NAND chips offer comparable physical bit density using 10 nm lithography, but may be able to increase bit density by up to two orders of magnitude, given V-NAND's use of up to several hundred layers. As of 2020, V-NAND chips with 160 layers are under development by Samsung. As the number of layers increases, the capacity and endurance of flash memory may be increased.
Cost
The wafer cost of 3D NAND is comparable with that of scaled-down (32 nm or less) planar NAND flash. With planar NAND scaling stopping at 16 nm, the cost-per-bit reduction can continue through 3D NAND, starting with 16 layers. However, due to the non-vertical sidewall of the hole etched through the layers, even a slight deviation leads to a minimum bit cost, i.e., a minimum equivalent design rule (or maximum density), for a given number of layers; this minimum-bit-cost layer number decreases for smaller hole diameters.
Limitations
Block erasure
One limitation of flash memory is that it can be erased only a block at a time. This generally sets all bits in the block to 1. Starting with a freshly erased block, any location within that block can be programmed. However, once a bit has been set to 0, only by erasing the entire block can it be changed back to 1. In other words, flash memory (specifically NOR flash) offers random-access read and programming operations but does not offer arbitrary random-access rewrite or erase operations. A location can, however, be rewritten as long as the new value's 0 bits are a superset of the over-written values. For example, a nibble value may be erased to 1111, then written as 1110. Successive writes to that nibble can change it to 1010, then 0010, and finally 0000. Essentially, erasure sets all bits to 1, and programming can only clear bits to 0.
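This "clear-only" rule is easy to state as code: a location can be rewritten in place exactly when the new value never turns a 0 back into a 1. A small sketch reproducing the nibble example above:

```python
def can_rewrite(old, new):
    """True if new can be programmed over old without an erase:
    programming only clears bits (1 -> 0), never sets them."""
    return (old & new) == new

# The nibble sequence from the text: 1111 -> 1110 -> 1010 -> 0010 -> 0000
values = [0b1111, 0b1110, 0b1010, 0b0010, 0b0000]
for old, new in zip(values, values[1:]):
    assert can_rewrite(old, new)

# Setting a bit back to 1 is impossible without erasing the whole block:
assert not can_rewrite(0b1010, 0b0110)
```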
Some file systems designed for flash devices make use of this rewrite capability, for example YAFFS1, to represent sector metadata. Other flash file systems, such as YAFFS2, never make use of this "rewrite" capability – they do a lot of extra work to meet a "write once rule".
Although data structures in flash memory cannot be updated in completely general ways, this rewrite capability allows members to be "removed" by marking them as invalid. This technique may need to be modified for multi-level cell devices, where one memory cell holds more than one bit.
Common flash devices such as USB flash drives and memory cards provide only a block-level interface, or flash translation layer (FTL), which writes to a different cell each time in order to wear-level the device. This prevents incremental writing within a block; however, it does keep the device from being prematurely worn out by intensive write patterns.
Data retention
Data stored on flash cells is steadily lost due to electron detrapping. The rate of loss increases exponentially as the absolute temperature increases. For example, for a 45 nm NOR flash at 1,000 hours, the threshold voltage (Vt) loss at 25°C is about half that at 90°C.
Memory wear
Another limitation is that flash memory has a finite number of program–erase cycles (typically written as P/E cycles). Micron Technology and Sun Microsystems announced an SLC NAND flash memory chip rated for 1,000,000 P/E cycles on 17 December 2008.
The guaranteed cycle count may apply only to block zero (as is the case with TSOP NAND devices), or to all blocks (as in NOR). This effect is mitigated in some chip firmware or file system drivers by counting the writes and dynamically remapping blocks in order to spread write operations between sectors; this technique is called wear leveling. Another approach is to perform write verification and remapping to spare sectors in case of write failure, a technique called bad block management (BBM). For portable consumer devices, these wear-out management techniques typically extend the life of the flash memory beyond the life of the device itself, and some data loss may be acceptable in these applications. For high-reliability data storage, however, it is not advisable to use flash memory that would have to go through a large number of programming cycles. This limitation also exists for "read-only" applications such as thin clients and routers, which are programmed only once or at most a few times during their lifetimes, due to read disturb (see below).
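A toy version of the write-counting and remapping idea is sketched below: each logical write is directed to the least-worn free physical block. Real firmware is far more elaborate (static wear leveling, persistent mapping tables, power-loss handling), so this shows only the principle.

```python
# Toy dynamic wear leveling: send each write to the least-erased free block.
class WearLeveler:
    def __init__(self, num_blocks):
        self.erase_count = [0] * num_blocks
        self.free = set(range(num_blocks))
        self.mapping = {}                  # logical block -> physical block

    def write(self, logical):
        old = self.mapping.get(logical)
        # Choose the free physical block with the fewest erases so far.
        phys = min(self.free, key=lambda b: self.erase_count[b])
        self.free.remove(phys)
        self.mapping[logical] = phys
        if old is not None:                # erase and recycle the old block
            self.erase_count[old] += 1
            self.free.add(old)

wl = WearLeveler(num_blocks=8)
for _ in range(1000):
    wl.write(0)                            # hammer a single logical block
print(max(wl.erase_count) - min(wl.erase_count))  # small: wear stays even
```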
In December 2012, Taiwanese engineers from Macronix revealed their intention to announce at the 2012 IEEE International Electron Devices Meeting that they had figured out how to improve NAND flash storage read/write cycles from 10,000 to 100 million cycles using a "self-healing" process that used a flash chip with "onboard heaters that could anneal small groups of memory cells." The built-in thermal annealing was to replace the usual erase cycle with a local high temperature process that not only erased the stored charge, but also repaired the electron-induced stress in the chip, giving write cycles of at least 100 million. The result was to be a chip that could be erased and rewritten over and over, even when it should theoretically break down. As promising as Macronix's breakthrough might have been for the mobile industry, however, there were no plans for a commercial product featuring this capability to be released any time in the near future.
Read disturb
The method used to read NAND flash memory can cause nearby cells in the same memory block to change over time (become programmed). This is known as read disturb. The threshold number of reads is generally in the hundreds of thousands of reads between intervening erase operations. If reading continually from one cell, that cell will not fail but rather one of the surrounding cells will on a subsequent read. To avoid the read disturb problem the flash controller will typically count the total number of reads to a block since the last erase. When the count exceeds a target limit, the affected block is copied over to a new block, erased, then released to the block pool. The original block is as good as new after the erase. If the flash controller does not intervene in time, however, a read disturb error will occur with possible data loss if the errors are too numerous to correct with an error-correcting code.
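The counting policy can be sketched as follows; the threshold and the helper routines (_raw_read, _relocate) are placeholders for illustration, not a real controller interface.

```python
# Toy read-disturb handling: count reads per block since the last erase and
# relocate the block's data before the count reaches a danger threshold.
READ_DISTURB_LIMIT = 100_000          # illustrative, not a datasheet value

class FlashController:
    def __init__(self):
        self.reads_since_erase = {}   # physical block -> read count

    def read_page(self, block, page):
        data = self._raw_read(block, page)
        n = self.reads_since_erase.get(block, 0) + 1
        self.reads_since_erase[block] = n
        if n >= READ_DISTURB_LIMIT:
            self._relocate(block)     # copy to a fresh block, erase, reuse
            self.reads_since_erase[block] = 0
        return data

    def _raw_read(self, block, page):
        return b""                    # placeholder for the physical read

    def _relocate(self, block):
        pass                          # placeholder for copy + erase
```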
X-ray effects
Most flash ICs come in ball grid array (BGA) packages, and even the ones that do not are often mounted on a PCB next to other BGA packages. After PCB assembly, boards with BGA packages are often X-rayed to see if the balls are making proper connections to the proper pad, or if the BGA needs rework. These X-rays can erase programmed bits in a flash chip (convert programmed "0" bits into erased "1" bits). Erased bits ("1" bits) are not affected by X-rays.
Some manufacturers are now making X-ray proof SD and USB memory devices.
Low-level access
The low-level interface to flash memory chips differs from those of other memory types such as DRAM, ROM, and EEPROM, which support bit-alterability (both zero to one and one to zero) and random access via externally accessible address buses.
NOR memory has an external address bus for reading and programming. For NOR memory, reading and programming are random-access, and unlocking and erasing are block-wise. For NAND memory, reading and programming are page-wise, and unlocking and erasing are block-wise.
NOR memories
Reading from NOR flash is similar to reading from random-access memory, provided the address and data bus are mapped correctly. Because of this, most microprocessors can use NOR flash memory as execute in place (XIP) memory, meaning that programs stored in NOR flash can be executed directly from the NOR flash without needing to be copied into RAM first. NOR flash may be programmed in a random-access manner similar to reading. Programming changes bits from a logical one to a zero. Bits that are already zero are left unchanged. Erasure must happen a block at a time, and resets all the bits in the erased block back to one. Typical block sizes are 64, 128, or 256 KiB.
Bad block management is a relatively new feature in NOR chips. In older NOR devices not supporting bad block management, the software or device driver controlling the memory chip must correct for blocks that wear out, or the device will cease to work reliably.
The specific commands used to lock, unlock, program, or erase NOR memories differ for each manufacturer. To avoid needing unique driver software for every device made, special Common Flash Memory Interface (CFI) commands allow the device to identify itself and its critical operating parameters.
Besides its use as random-access ROM, NOR flash can also be used as a storage device, by taking advantage of random-access programming. Some devices offer read-while-write functionality so that code continues to execute even while a program or erase operation is occurring in the background. For sequential data writes, NOR flash chips typically have slow write speeds, compared with NAND flash.
Typical NOR flash does not need an error correcting code.
NAND memories
NAND flash architecture was introduced by Toshiba in 1989. These memories are accessed much like block devices, such as hard disks. Each block consists of a number of pages. The pages are typically 512, 2,048 or 4,096 bytes in size. Associated with each page are a few bytes (typically 1/32 of the data size) that can be used for storage of an error correcting code (ECC) checksum.
Typical block sizes include:
32 pages of 512+16 bytes each for a block size (effective) of 16 KiB
64 pages of 2,048+64 bytes each for a block size of 128 KiB
64 pages of 4,096+128 bytes each for a block size of 256 KiB
128 pages of 4,096+128 bytes each for a block size of 512 KiB.
Modern NAND flash may have an erase block size between 1 MiB and 128 MiB. While reading and programming are performed on a page basis, erasure can only be performed on a block basis. Because changing a cell from 0 back to 1 requires erasing the entire block, not just modifying some pages, modifying the data of a block may require a read-erase-write process, with the new data actually moved to another block. In addition, an NVM Express Zoned Namespaces SSD usually uses the flash block size as its zone size.
NAND devices also require bad block management by the device driver software or by the flash memory controller chip. Some SD cards, for example, include controller circuitry to perform bad block management and wear leveling. When a logical block is accessed by high-level software, it is mapped to a physical block by the device driver or controller. A number of blocks on the flash chip may be set aside for storing mapping tables to deal with bad blocks, or the system may simply check each block at power-up to create a bad block map in RAM. The overall memory capacity gradually shrinks as more blocks are marked as bad.
NAND relies on ECC to compensate for bits that may spontaneously fail during normal device operation. A typical ECC will correct a one-bit error in each 2048 bits (256 bytes) using 22 bits of ECC, or a one-bit error in each 4096 bits (512 bytes) using 24 bits of ECC. If the ECC cannot correct the error during read, it may still detect the error. When doing erase or program operations, the device can detect blocks that fail to program or erase and mark them bad. The data is then written to a different, good block, and the bad block map is updated.
Hamming codes are the most commonly used ECC for SLC NAND flash. Reed–Solomon codes and BCH codes (Bose–Chaudhuri–Hocquenghem codes) are commonly used ECC for MLC NAND flash. Some MLC NAND flash chips internally generate the appropriate BCH error correction codes.
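To illustrate the principle of single-error correction, here is a toy Hamming code over one byte (a (12,8) code). Real NAND ECC operates on far larger codewords, such as the 22 parity bits per 2,048 data bits mentioned above, but the mechanics are the same: a syndrome computed from parity checks points directly at the flipped bit.

```python
def hamming_encode(byte):
    """Encode 8 data bits into a 12-bit codeword, held as a list indexed
    1..12; parity bits sit at the power-of-two positions 1, 2, 4, 8."""
    code = [0] * 13                          # index 0 unused
    data_positions = [3, 5, 6, 7, 9, 10, 11, 12]
    for i, pos in enumerate(data_positions):
        code[pos] = (byte >> i) & 1
    for p in (1, 2, 4, 8):
        # Parity bit p covers every position whose index has bit p set.
        code[p] = 0
        for pos in range(1, 13):
            if pos != p and (pos & p):
                code[p] ^= code[pos]
    return code

def hamming_correct(code):
    """Recompute the parity checks; the syndrome is the index of the bad bit."""
    syndrome = 0
    for p in (1, 2, 4, 8):
        parity = 0
        for pos in range(1, 13):
            if pos & p:
                parity ^= code[pos]
        if parity:
            syndrome += p
    if syndrome:
        code[syndrome] ^= 1                  # flip the single erroneous bit
    return code

word = hamming_encode(0b10110010)
word[6] ^= 1                                 # inject a one-bit error
assert hamming_correct(word) == hamming_encode(0b10110010)
```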
Most NAND devices are shipped from the factory with some bad blocks. These are typically marked according to a specified bad block marking strategy. By allowing some bad blocks, manufacturers achieve far higher yields than would be possible if all blocks had to be verified to be good. This significantly reduces NAND flash costs and only slightly decreases the storage capacity of the parts.
When executing software from NAND memories, virtual memory strategies are often used: memory contents must first be paged or copied into memory-mapped RAM and executed there (leading to the common combination of NAND + RAM). A memory management unit (MMU) in the system is helpful, but this can also be accomplished with overlays. For this reason, some systems will use a combination of NOR and NAND memories, where a smaller NOR memory is used as software ROM and a larger NAND memory is partitioned with a file system for use as a non-volatile data storage area.
NAND sacrifices the random-access and execute-in-place advantages of NOR. NAND is best suited to systems requiring high capacity data storage. It offers higher densities, larger capacities, and lower cost. It has faster erases, sequential writes, and sequential reads.
Standardization
A group called the Open NAND Flash Interface Working Group (ONFI) has developed a standardized low-level interface for NAND flash chips. This allows interoperability between conforming NAND devices from different vendors. The ONFI specification version 1.0 was released on 28 December 2006. It specifies:
A standard physical interface (pinout) for NAND flash in TSOP-48, WSOP-48, LGA-52, and BGA-63 packages
A standard command set for reading, writing, and erasing NAND flash chips
A mechanism for self-identification (comparable to the serial presence detection feature of SDRAM memory modules)
The ONFI group is supported by major NAND flash manufacturers, including Hynix, Intel, Micron Technology, and Numonyx, as well as by major manufacturers of devices incorporating NAND flash chips.
Two major flash device manufacturers, Toshiba and Samsung, have chosen to use an interface of their own design known as Toggle Mode (and now Toggle). This interface is not pin-to-pin compatible with the ONFI specification. The result is that a product designed for one vendor's devices may not be able to use another vendor's devices.
A group of vendors, including Intel, Dell, and Microsoft, formed a Non-Volatile Memory Host Controller Interface (NVMHCI) Working Group. The goal of the group is to provide standard software and hardware programming interfaces for nonvolatile memory subsystems, including the "flash cache" device connected to the PCI Express bus.
Distinction between NOR and NAND flash
NOR and NAND flash differ in two important ways:
The connections of the individual memory cells are different.
The interface provided for reading and writing the memory is different; NOR allows random access as it can be either byte-addressable or word-addressable, with words being for example 32 bits long, while NAND allows only page access.
NOR and NAND flash get their names from the structure of the interconnections between memory cells. In NOR flash, cells are connected in parallel to the bit lines, allowing cells to be read and programmed individually. The parallel connection of cells resembles the parallel connection of transistors in a CMOS NOR gate. In NAND flash, cells are connected in series, resembling a CMOS NAND gate. The series connections consume less space than parallel ones, reducing the cost of NAND flash. It does not, by itself, prevent NAND cells from being read and programmed individually.
Each NOR flash cell is larger than a NAND flash cell (10 F² versus 4 F²), even when using exactly the same semiconductor device fabrication process (so that each transistor, contact, etc. is exactly the same size), because NOR flash cells require a separate metal contact for each cell.
Because of the series connection and removal of wordline contacts, a large grid of NAND flash memory cells will occupy perhaps only 60% of the area of equivalent NOR cells (assuming the same CMOS process resolution, for example, 130 nm, 90 nm, or 65 nm). NAND flash's designers realized that the area of a NAND chip, and thus the cost, could be further reduced by removing the external address and data bus circuitry. Instead, external devices could communicate with NAND flash via sequential-accessed command and data registers, which would internally retrieve and output the necessary data. This design choice made random-access of NAND flash memory impossible, but the goal of NAND flash was to replace mechanical hard disks, not to replace ROMs.
The first GSM phones and many feature phones had NOR flash memory, from which processor instructions could be executed directly in an execute-in-place architecture, allowing for short boot times. With smartphones, NAND flash memory was adopted, as it has larger storage capacities and lower costs, but it causes longer boot times because instructions cannot be executed from it directly and must first be copied to RAM before execution.
Write endurance
The write endurance of SLC floating-gate NOR flash is typically equal to or greater than that of NAND flash, while MLC NOR and NAND flash have similar endurance capabilities. Endurance cycle ratings are listed in datasheets for NAND and NOR flash chips and for storage devices using flash memory.
However, by applying certain algorithms and design paradigms such as wear leveling and memory over-provisioning, the endurance of a storage system can be tuned to serve specific requirements.
In order to compute the longevity of the NAND flash, one must account for the size of the memory chip, the type of memory (e.g. SLC/MLC/TLC), and use pattern. Industrial NAND and server NAND are in demand due to their capacity, longer endurance and reliability in sensitive environments.
As the number of bits per cell increases, the performance and lifetime of NAND flash may degrade. Random read times rise to about 100 μs for TLC NAND, four times the time required by SLC NAND (roughly 25 μs) and twice that of MLC NAND (roughly 50 μs).
Flash file systems
Because of the particular characteristics of flash memory, it is best used with either a controller to perform wear leveling and error correction or specifically designed flash file systems, which spread writes over the media and deal with the long erase times of NOR flash blocks. The basic concept behind flash file systems is the following: when the flash store is to be updated, the file system will write a new copy of the changed data to a fresh block, remap the file pointers, then erase the old block later when it has time.
In practice, flash file systems are used only for memory technology devices (MTDs), which are embedded flash memories that do not have a controller. Removable flash memory cards, SSDs, eMMC/eUFS chips and USB flash drives have built-in controllers to perform wear leveling and error correction so use of a specific flash file system may not add benefit.
Capacity
Multiple chips are often arrayed or die-stacked to achieve higher capacities for use in consumer electronic devices such as multimedia players or GPS units. The capacity scaling (increase) of flash chips used to follow Moore's law, because they are manufactured with many of the same integrated circuit techniques and equipment. Since the introduction of 3D NAND, scaling is no longer necessarily associated with Moore's law, since ever-smaller transistors (cells) are no longer used.
Consumer flash storage devices typically are advertised with usable sizes expressed as a small integer power of two (2, 4, 8, etc.) and a conventional designation of megabytes (MB) or gigabytes (GB); e.g., 512 MB or 8 GB. This includes SSDs marketed as hard drive replacements, in accordance with traditional hard drives, which use decimal prefixes. Thus, an SSD marked as "64 GB" is at least 64 × 1,000,000,000 bytes (64 GB). Most users will have slightly less capacity than this available for their files, due to the space taken by file system metadata and because some operating systems report SSD capacity using binary units, which are somewhat larger than the decimal units of the same name.
The flash memory chips inside these devices are sized in strict binary multiples, but the actual total capacity of the chips is not usable at the drive interface. It is considerably larger than the advertised capacity in order to allow for distribution of writes (wear leveling), for sparing, for error correction codes, and for other metadata needed by the device's internal firmware.
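The gap between decimal and binary units is simple to quantify (the numbers below are plain arithmetic, not from any product):

```python
advertised_gb = 64
total_bytes = advertised_gb * 1000**3          # decimal GB, as marketed
as_gib = total_bytes / 2**30                   # the same bytes in binary units
print(f"{advertised_gb} GB = {total_bytes} bytes = {as_gib:.1f} GiB")
# 64 GB = 64000000000 bytes = 59.6 GiB: an OS reporting in binary units
# shows a smaller number than the label, before any metadata overhead.
```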
In 2005, Toshiba and SanDisk developed a NAND flash chip capable of storing 1 GB of data using multi-level cell (MLC) technology, capable of storing two bits of data per cell. In September 2005, Samsung Electronics announced that it had developed the world's first 2 GB chip.
In March 2006, Samsung announced flash hard drives with capacity of 4 GB, essentially the same order of magnitude as smaller laptop hard drives, and in September 2006, Samsung announced an 8 GB chip produced using a 40 nm manufacturing process.
In January 2008, SanDisk announced availability of their 16 GB MicroSDHC and 32 GB SDHC Plus cards.
More recent flash drives (as of 2012) have much greater capacities, holding 64, 128, and 256 GB.
A joint development at Intel and Micron will allow the production of 32-layer 3.5 terabyte (TB) NAND flash sticks and 10 TB standard-sized SSDs. The device includes 5 packages of 16 × 48 GB TLC dies, using a floating gate cell design.
Flash chips continue to be manufactured with capacities under or around 1 MB (e.g. for BIOS-ROMs and embedded applications).
In July 2016, Samsung announced the 4 TB Samsung 850 EVO which utilizes their 256 Gbit 48-layer TLC 3D V-NAND. In August 2016, Samsung announced a 32 TB 2.5-inch SAS SSD based on their 512 Gbit 64-layer TLC 3D V-NAND. Further, Samsung expects to unveil SSDs with up to 100 TB of storage by 2020.
Transfer rates
Flash memory devices are typically much faster at reading than writing. Performance also depends on the quality of storage controllers, which become more critical when devices are partially full. Even when the only change to manufacturing is die-shrink, the absence of an appropriate controller can result in degraded speeds.
Applications
Serial flash
Serial flash is a small, low-power flash memory that provides only serial access to the data: rather than addressing individual bytes, the user reads or writes large contiguous groups of bytes in the address space serially. Serial Peripheral Interface Bus (SPI) is a typical protocol for accessing the device.
When incorporated into an embedded system, serial flash requires fewer wires on the PCB than parallel flash memories, since it transmits and receives data one bit at a time. This may permit a reduction in board space, power consumption, and total system cost.
There are several reasons why a serial device, with fewer external pins than a parallel device, can significantly reduce overall cost:
Many ASICs are pad-limited, meaning that the size of the die is constrained by the number of wire bond pads, rather than the complexity and number of gates used for the device logic. Eliminating bond pads thus permits a more compact integrated circuit, on a smaller die; this increases the number of dies that may be fabricated on a wafer, and thus reduces the cost per die.
Reducing the number of external pins also reduces assembly and packaging costs. A serial device may be packaged in a smaller and simpler package than a parallel device.
Smaller and lower pin-count packages occupy less PCB area.
Lower pin-count devices simplify PCB routing.
There are two major SPI flash types. The first type is characterized by small blocks and one internal SRAM block buffer allowing a complete block to be read to the buffer, partially modified, and then written back (for example, the Atmel AT45 DataFlash or the Micron Technology Page Erase NOR Flash). The second type has larger sectors where the smallest sectors typically found in this type of SPI flash are 4 kB, but they can be as large as 64 kB. Since this type of SPI flash lacks an internal SRAM buffer, the complete block must be read out and modified before being written back, making it slow to manage. However, the second type is cheaper than the first and is therefore a good choice when the application is code shadowing.
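For the second, buffer-less type, the host itself must perform the read-modify-write over a whole sector. A sketch of that pattern follows; spi_read, spi_erase_sector, and spi_program_page are hypothetical stand-ins for a driver's low-level routines, not a real API.

```python
SECTOR_SIZE = 4096   # smallest erasable unit on this hypothetical part
PAGE_SIZE   = 256    # typical SPI NOR program granularity

def modify_bytes(offset, new_data, spi_read, spi_erase_sector, spi_program_page):
    """Host-side read-modify-write for SPI NOR with no internal buffer."""
    sector = (offset // SECTOR_SIZE) * SECTOR_SIZE
    buf = bytearray(spi_read(sector, SECTOR_SIZE))    # 1. read the whole sector
    start = offset - sector
    buf[start:start + len(new_data)] = new_data       # 2. modify in host RAM
    spi_erase_sector(sector)                          # 3. erase (back to all 1s)
    for page in range(0, SECTOR_SIZE, PAGE_SIZE):     # 4. program page by page
        spi_program_page(sector + page, bytes(buf[page:page + PAGE_SIZE]))
```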
The two types are not easily exchangeable, since they do not have the same pinout, and the command sets are incompatible.
Most FPGAs are based on SRAM configuration cells and require an external configuration device, often a serial flash chip, to reload the configuration bitstream every power cycle.
Firmware storage
With the increasing speed of modern CPUs, parallel flash devices are often much slower than the memory bus of the computer they are connected to. Conversely, modern SRAM offers access times below 10 ns, and DDR2 SDRAM offers access times below 20 ns. Because of this, it is often desirable to shadow code stored in flash into RAM; that is, the code is copied from flash into RAM before execution, so that the CPU may access it at full speed. Device firmware may be stored in a serial flash chip and then copied into SDRAM or SRAM when the device is powered up. Using an external serial flash device rather than on-chip flash removes the need for significant process compromise (a manufacturing process that is good for high-speed logic is generally not good for flash, and vice versa). Once it is decided to read the firmware in as one big block, it is common to add compression to allow a smaller flash chip to be used. Since 2005, many devices have used serial NOR flash in place of parallel NOR flash for firmware storage. Typical applications for serial NOR flash include storing firmware for hard drives, the BIOS, the option ROMs of expansion cards, DSL modems, and so on.
Flash memory as a replacement for hard drives
One more recent application for flash memory is as a replacement for hard disks. Flash memory does not have the mechanical limitations and latencies of hard drives, so a solid-state drive (SSD) is attractive when considering speed, noise, power consumption, and reliability. Flash drives are gaining traction as mobile device secondary storage devices; they are also used as substitutes for hard drives in high-performance desktop computers and some servers with RAID and SAN architectures.
There remain some aspects of flash-based SSDs that make them unattractive. The cost per gigabyte of flash memory remains significantly higher than that of hard disks. Flash memory also has a finite number of P/E (program/erase) cycles, but this seems to be currently under control, since warranties on flash-based SSDs are approaching those of current hard drives. In addition, deleted files on SSDs can remain for an indefinite period of time before being overwritten by fresh data; erasure or shred techniques or software that work well on magnetic hard disk drives have no effect on SSDs, compromising security and forensic examination. However, because of the so-called TRIM command employed by most solid-state drives, which marks the logical block addresses occupied by the deleted file as unused to enable garbage collection, data recovery software is not able to restore files deleted from such drives.
For relational databases or other systems that require ACID transactions, even a modest amount of flash storage can offer vast speedups over arrays of disk drives.
In May 2006, Samsung Electronics announced two flash-memory-based PCs, the Q1-SSD and Q30-SSD, which were expected to become available in June 2006. Both used 32 GB SSDs and were, at least initially, available only in South Korea. Their launch was delayed, and they finally shipped in late August 2006.
The first flash-memory-based PC to become available was the Sony Vaio UX90, announced for pre-order on 27 June 2006, which began shipping in Japan on 3 July 2006 with a 16 GB flash memory hard drive. In late September 2006, Sony upgraded the flash memory in the Vaio UX90 to 32 GB.
A solid-state drive was offered as an option with the first MacBook Air introduced in 2008, and from 2010 onwards, all models were shipped with an SSD. Starting in late 2011, as part of Intel's Ultrabook initiative, an increasing number of ultra-thin laptops are being shipped with SSDs standard.
There are also hybrid techniques such as hybrid drive and ReadyBoost that attempt to combine the advantages of both technologies, using flash as a high-speed non-volatile cache for files on the disk that are often referenced, but rarely modified, such as application and operating system executable files.
On smartphones, NAND flash products such as eMMC and eUFS are used as file storage devices.
Flash memory as RAM
There are attempts to use flash memory as the main computer memory, in place of DRAM.
Archival or long-term storage
Floating-gate transistors in the flash storage device hold charge which represents data. This charge gradually leaks over time, leading to an accumulation of logical errors, also known as "bit rot" or "bit fading".
Data retention
It is unclear how long data on flash memory will persist under archival conditions (i.e., benign temperature and humidity with infrequent access with or without prophylactic rewrite). Datasheets of Atmel's flash-based "ATmega" microcontrollers typically promise retention times of 20 years at 85 °C (185 °F) and 100 years at 25 °C (77 °F).
The retention span varies among types and models of flash storage. When supplied with power and idle, the charge of the transistors holding the data is routinely refreshed by the firmware of the flash storage. The ability to retain data varies among flash storage devices due to differences in firmware, data redundancy, and error correction algorithms.
An article from CMU in 2015 states that "today's flash devices, which do not require flash refresh, have a typical retention age of 1 year at room temperature", and that retention time decreases exponentially with increasing temperature. The phenomenon can be modeled by the Arrhenius equation.
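A sketch of that temperature dependence using the Arrhenius relation; the activation energy below is an illustrative value, not a measured one.

```python
import math

K_B = 8.617e-5   # Boltzmann constant in eV/K
E_A = 1.1        # activation energy in eV; illustrative only

def relative_retention(t_celsius, t_ref_celsius=25.0):
    """Retention time at t_celsius relative to the reference temperature,
    per the Arrhenius relation: retention ~ exp(E_A / (k_B * T))."""
    t, t_ref = t_celsius + 273.15, t_ref_celsius + 273.15
    return math.exp((E_A / K_B) * (1.0 / t - 1.0 / t_ref))

print(relative_retention(55.0))   # far below 1: retention shrinks when hot
```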
FPGA configuration
Some FPGAs are based on flash configuration cells that are used directly as (programmable) switches to connect internal elements together, using the same kind of floating-gate transistor as the flash data storage cells in data storage devices.
Industry
One source states that, in 2008, the flash memory industry includes about US$9.1 billion in production and sales. Other sources put the flash memory market at a size of more than US$20 billion in 2006, accounting for more than eight percent of the overall semiconductor market and more than 34 percent of the total semiconductor memory market.
In 2012, the market was estimated at $26.8 billion. It can take up to 10 weeks to produce a flash memory chip.
Manufacturers
The following were the largest NAND flash memory manufacturers, as of the second quarter of 2023.
Samsung Electronics 31.4%
Kioxia 20.6%
Western Digital Corporation 12.6%
SK Hynix 18.5%
Micron Technology 12.3%
Others 8.7%
Note: SK Hynix acquired Intel's NAND business at the end of 2021. Kioxia was spun off from Toshiba and renamed in 2018/2019.
Samsung remains the largest NAND flash memory manufacturer as of first quarter 2022.
Shipments
In addition to individual flash memory chips, flash memory is also embedded in microcontroller (MCU) chips and system-on-chip (SoC) devices. Flash memory is embedded in ARM chips, which have sold 150 billion units worldwide, and in programmable system-on-chip (PSoC) devices, which have sold 1.1 billion units. This adds up to at least 151.1 billion MCU and SoC chips with embedded flash memory, in addition to the 45.4 billion known individual flash chip sales, totalling at least 196.5 billion chips containing flash memory.
Flash scalability
Due to its relatively simple structure and high demand for higher capacity, NAND flash memory is the most aggressively scaled technology among electronic devices. The heavy competition among the top few manufacturers only adds to the aggressiveness in shrinking the floating-gate MOSFET design rule or process technology node. While the expected shrink timeline is a factor of two every three years per the original version of Moore's law, this has recently been accelerated in the case of NAND flash to a factor of two every two years.
As the MOSFET feature size of flash memory cells reaches the 15–16 nm minimum limit, further flash density increases will be driven by TLC (3 bits/cell) combined with vertical stacking of NAND memory planes. The decrease in endurance and increase in uncorrectable bit error rates that accompany feature size shrinking can be compensated for by improved error correction mechanisms. Even with these advances, it may be impossible to economically scale flash to smaller and smaller dimensions, as the number of electrons each cell can hold decreases. Many promising new technologies (such as FeRAM, MRAM, PMC, PCM, ReRAM, and others) are under investigation and development as possible more scalable replacements for flash.
Timeline
| Technology | Non-volatile memory | null |
50601 | https://en.wikipedia.org/wiki/Cystic%20fibrosis | Cystic fibrosis | Cystic fibrosis (CF) is a genetic disorder inherited in an autosomal recessive manner that impairs the normal clearance of mucus from the lungs, which facilitates the colonization and infection of the lungs by bacteria, notably Staphylococcus aureus. CF is a rare genetic disorder that affects mostly the lungs, but also the pancreas, liver, kidneys, and intestine. The hallmark feature of CF is the accumulation of thick mucus in different organs. Long-term issues include difficulty breathing and coughing up mucus as a result of frequent lung infections. Other signs and symptoms may include sinus infections, poor growth, fatty stool, clubbing of the fingers and toes, and infertility in most males. Different people may have different degrees of symptoms.
Cystic fibrosis is inherited in an autosomal recessive manner. It is caused by the presence of mutations in both copies (alleles) of the gene encoding the cystic fibrosis transmembrane conductance regulator (CFTR) protein. Those with a single working copy are carriers and otherwise mostly healthy. CFTR is involved in the production of sweat, digestive fluids, and mucus. When the CFTR is not functional, secretions that are usually thin instead become thick. The condition is diagnosed by a sweat test and genetic testing. The sweat test measures chloride concentration, as people with cystic fibrosis have abnormally salty sweat, which can often be tasted by parents kissing their children. Screening of infants at birth takes place in some areas of the world.
There is no known cure for cystic fibrosis. Lung infections are treated with antibiotics which may be given intravenously, inhaled, or by mouth. Sometimes, the antibiotic azithromycin is used long-term. Inhaled hypertonic saline and salbutamol may also be useful. Lung transplantation may be an option if lung function continues to worsen. Pancreatic enzyme replacement and fat-soluble vitamin supplementation are important, especially in the young. Airway clearance techniques such as chest physiotherapy may have some short-term benefit, but long-term effects are unclear. The average life expectancy is between 42 and 50 years in the developed world, with a median of 40.7 years. Lung problems are responsible for death in 70% of people with cystic fibrosis.
CF is most common among people of Northern European ancestry, for whom it affects about 1 out of 3,000 newborns and among whom around 1 out of 25 people is a carrier. It is least common in Africans and Asians, though it does occur in all races. It was first recognized as a specific disease by Dorothy Andersen in 1938, with descriptions that fit the condition occurring at least as far back as 1595. The name "cystic fibrosis" refers to the characteristic fibrosis and cysts that form within the pancreas.
Signs and symptoms
Cystic fibrosis typically manifests early in life. Newborns and infants with cystic fibrosis tend to have frequent, large, greasy stools (a result of malabsorption) and are underweight for their age. 15–20% of newborns have their small intestine blocked by meconium, often requiring surgery to correct. Newborns occasionally have neonatal jaundice due to blockage of the bile ducts. Children with cystic fibrosis lose excessive salt in their sweat, and parents often notice salt crystallizing on the skin, or a salty taste when they kiss their child.
The primary cause of morbidity and death in people with cystic fibrosis is progressive lung disease, which eventually leads to respiratory failure. This typically begins as a prolonged respiratory infection that continues until treated with antibiotics. Chronic infection of the respiratory tract is nearly universal in people with cystic fibrosis, with Pseudomonas aeruginosa, fungi, and mycobacteria all increasingly common over time. Inflammation of the upper airway results in frequent runny nose and nasal obstruction. Nasal polyps are common, particularly in children and teenagers. As the disease progresses, people tend to have shortness of breath, and a chronic cough that produces sputum. Breathing problems make it increasingly challenging to exercise, and prolonged illness causes those affected to be underweight for their age. In late adolescence or adulthood, people begin to develop severe signs of lung disease: wheezing, digital clubbing, cyanosis, coughing up blood, pulmonary heart disease, and collapsed lung (atelectasis or pneumothorax).
In rare cases, cystic fibrosis can manifest itself as a coagulation disorder. Vitamin K is normally absorbed from breast milk, formula, and later, solid foods. This absorption is impaired in some CF patients. Young children are especially sensitive to vitamin K malabsorptive disorders because only a very small amount of vitamin K crosses the placenta, leaving the child with very low reserves and limited ability to absorb vitamin K from dietary sources after birth. Because clotting factors II, VII, IX, and X are vitamin K–dependent, low levels of vitamin K can result in coagulation problems. Consequently, when a child presents with unexplained bruising, a coagulation evaluation may be warranted to determine whether an underlying disease is present.
Lungs and sinuses
Lung disease results from clogging of the airways due to mucus build-up, decreased mucociliary clearance, and resulting inflammation. In later stages, changes in the architecture of the lung, such as pathology in the major airways (bronchiectasis), further exacerbate difficulties in breathing. Other signs include high blood pressure in the lung (pulmonary hypertension), heart failure, difficulties getting enough oxygen to the body (hypoxia), and respiratory failure requiring support with breathing masks, such as bilevel positive airway pressure machines or ventilators. Staphylococcus aureus, Haemophilus influenzae, and Pseudomonas aeruginosa are the three most common organisms causing lung infections in CF patients. In addition, opportunistic infection due to Burkholderia cepacia complex can occur, especially through transmission from patient to patient.
In addition to typical bacterial infections, people with CF more commonly develop other types of lung diseases. Among these is allergic bronchopulmonary aspergillosis, in which the body's response to the common fungus Aspergillus fumigatus causes worsening of breathing problems. Another is infection with Mycobacterium avium complex, a group of bacteria related to tuberculosis, which can cause lung damage and do not respond to common antibiotics.
Mucus in the paranasal sinuses is equally thick and may also cause blockage of the sinus passages, leading to infection. This may cause facial pain, fever, nasal drainage, and headaches. Individuals with CF may develop overgrowth of the nasal tissue (nasal polyps) due to inflammation from chronic sinus infections. Recurrent sinonasal polyps can occur in 10% to 25% of CF patients. These polyps can block the nasal passages and increase breathing difficulties.
Cardiorespiratory complications are the most common causes of death (about 80%) in patients at most CF centers in the United States.
Gastrointestinal
Digestive problems are also prevalent in individuals with CF. Approximately 15–20% of newborns diagnosed with CF experience intestinal blockage (meconium ileus), and other digestive issues may arise due to mucus accumulation in the pancreas. Consequently, insulin production is impaired, leading to cystic fibrosis-related diabetes mellitus. Moreover, disruption of enzyme transport from the pancreas to the intestines results in digestive problems such as recurrent diarrhea or weight loss.
In cystic fibrosis there is impaired chloride secretion due to mutation of CFTR. This disrupts the ionic balance, causes impaired bicarbonate secretion, and alters the pH. The pancreatic enzymes that work in a specific pH range cannot act as the chyme is not neutralized by bicarbonate ions. This causes impairment of the digestion process.
The thick mucus seen in the lungs has a counterpart in thickened secretions from the pancreas, an organ responsible for providing digestive juices that help break down food. These secretions block the exocrine movement of the digestive enzymes into the duodenum and result in irreversible damage to the pancreas, often with painful inflammation (pancreatitis). The pancreatic ducts are totally plugged in more advanced cases, usually seen in older children or adolescents. This causes atrophy of the exocrine glands and progressive fibrosis.
In addition, protrusion of internal rectal membranes (rectal prolapse) is more common, occurring in as many as 10% of children with CF, and it is caused by increased fecal volume, malnutrition, and increased intra–abdominal pressure due to coughing.
Individuals with CF also have difficulties absorbing the fat-soluble vitamins A, D, E, and K.
In addition to the pancreas problems, people with CF experience more heartburn, intestinal blockage by intussusception, and constipation. Older individuals with CF may develop distal intestinal obstruction syndrome, which occurs when feces becomes thick with mucus (inspissated) and can cause bloating, pain, and incomplete or complete bowel obstruction.
Exocrine pancreatic insufficiency occurs in the majority (85–90%) of patients with CF. It is mainly associated with "severe" CFTR mutations, where both alleles are completely nonfunctional (e.g. ΔF508/ΔF508). It occurs in 10–15% of patients with one "severe" and one "mild" CFTR mutation, where little CFTR activity still occurs, or where two "mild" CFTR mutations exist. In these milder cases, sufficient pancreatic exocrine function remains so that enzyme supplementation is not required. Usually, no other GI complications occur in pancreas-sufficient phenotypes, and such individuals generally have excellent growth and development. Despite this, idiopathic chronic pancreatitis can occur in a subset of pancreas-sufficient individuals with CF, and is associated with recurrent abdominal pain and life-threatening complications.
Liver diseases are another common complication in CF patients. The prevalence observed in studies ranged from 18% at age two to 41% at age 12, with no significant increase thereafter. Another study found that males with CF are more prone to liver diseases compared to females, and those with meconium ileus have an increased risk of liver diseases.
Thickened secretions also may cause liver problems in patients with CF. Bile secreted by the liver to aid in digestion may block the bile ducts, leading to liver damage. Impaired digestion or absorption of lipids can result in steatorrhea. Over time, this can lead to scarring and nodularity (cirrhosis). The liver fails to rid the blood of toxins and does not make important proteins, such as those responsible for blood clotting. Liver disease is the third-most common cause of death associated with CF.
Around 5–7% of people experience liver damage severe enough to cause symptoms: typically gallstones causing biliary colic.
Endocrine
The pancreas contains the islets of Langerhans, which are responsible for making insulin, a hormone that helps regulate blood glucose. Damage to the pancreas can lead to loss of the islet cells, resulting in a type of diabetes unique to those with the disease. This cystic fibrosis-related diabetes shares characteristics of type 1 and type 2 diabetes, and is one of the principal nonpulmonary complications of CF.
Vitamin D is involved in calcium and phosphate regulation. Poor uptake of vitamin D from the diet because of malabsorption can lead to the bone disease osteoporosis, in which weakened bones are more susceptible to fractures.
Infertility
Infertility affects both men and women. At least 97% of men with cystic fibrosis are infertile, but not sterile, and can have children with assisted reproductive techniques. The main cause of infertility in men with cystic fibrosis is congenital absence of the vas deferens (which normally connects the testes to the ejaculatory ducts of the penis), but infertility may also result from other mechanisms, such as the absence of sperm, abnormally shaped sperm, or few sperm with poor motility. Many men found to have congenital absence of the vas deferens during evaluation for infertility have a mild, previously undiagnosed form of CF. While females with CF are generally fertile, around 20% of women with CF have fertility difficulties due to thickened cervical mucus or malnutrition. In severe cases, malnutrition disrupts ovulation and causes a lack of menstruation.
Causes
CF is caused by having no functional copies (alleles) of the gene cystic fibrosis transmembrane conductance regulator (CFTR). As of 2018, over 1,900 mutations leading to CF have been described, but only 5 of them have a frequency greater than 1% among patients. The most common mutant allele, ΔF508 (also termed F508del), is a deletion (Δ signifying deletion) of three nucleotides that results in a loss of the amino-acid residue phenylalanine (F) at the 508th position of the protein. This mutant allele is carried by 1 in 20 to 25 people of Northern European ancestry; it accounts for 70% of CF cases worldwide and 90% of cases in the United States; however, over 700 other mutant alleles, some of which represent new mutations, can produce CF. Although most people have two working copies of the CFTR gene, only one is needed to prevent cystic fibrosis. CF develops when neither allele can produce a functional CFTR protein. Thus, CF is considered an autosomal recessive disease.
The CFTR gene, found at the q31.2 locus of chromosome 7, is 230,000 base pairs long, and encodes a protein that is 1,480 amino acids long. More specifically, the location is between base pair 117,120,016 and 117,308,718 on the long arm of chromosome 7, region 3, band 1, subband 2, represented as 7q31.2. Structurally, CFTR belongs to a class of genes known as ABC (ATP-binding cassette) transporter genes. The product of this gene (the CFTR protein) is a chloride ion channel important in creating sweat, digestive juices, and mucus. This protein possesses two ATP-hydrolyzing domains, which allow the protein to use energy in the form of ATP. It also contains two domains comprising six alpha helices apiece, which allow the protein to cross the cell membrane. A regulatory binding site on the protein allows activation by phosphorylation, mainly by cAMP-dependent protein kinase. The carboxyl terminal of the protein is anchored to the cytoskeleton by a PDZ domain interaction. The majority of CFTR in lung passages is produced by rare ion-transporting cells that regulate mucus properties.
In addition, the evidence is increasing that genetic modifiers besides CFTR modulate the frequency and severity of the disease. One example is mannan-binding lectin, which is involved in innate immunity by facilitating phagocytosis of microorganisms. Polymorphisms in one or both mannan-binding lectin alleles that result in lower circulating levels of the protein are associated with a threefold higher risk of end-stage lung disease, as well as an increased burden of chronic bacterial infections.
Carriers
Up to one in 25 individuals of Northern European ancestry is considered a genetic carrier. The disease appears only when two of these carriers have children, as each pregnancy between them has a 25% chance of producing a child with the disease. About one of every 3,000 newborns of the affected ancestry has CF. Since the CFTR gene's discovery in 1989, over 2,000 variants have been identified, but only about 700 of these have been recognized as responsible for causing CF. Current tests look for the most common mutations.
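The arithmetic linking the carrier rate to the birth incidence can be made explicit. The following is a purely illustrative sketch (not part of the original article), assuming random mating and the 1-in-25 carrier rate quoted above:

```python
# Illustrative sketch: expected CF birth incidence under autosomal
# recessive inheritance, assuming random mating and a 1-in-25 carrier rate.
carrier_rate = 1 / 25

# Both parents must be carriers; each such pregnancy has a 25% chance
# of producing an affected child.
incidence = carrier_rate * carrier_rate * 0.25

print(f"expected incidence: about 1 in {1 / incidence:.0f} births")
# -> about 1 in 2500, the same order as the roughly 1-in-3,000 figure above;
# the gap reflects non-random mating, screening, and regional variation.
```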
The mutant alleles screened by the test vary according to a person's ethnic group or by the occurrence of CF already in the family. More than 10 million Americans, including one in 25 white Americans, are carriers of one mutant allele of the CF gene. CF is present in other races, though not as frequently as in white individuals. About one in 46 Hispanic Americans, one in 65 African Americans, and one in 90 Asian Americans carry a mutation of the CF gene.
Pathophysiology
The CFTR gene regulates the transport of salts and water through cell membranes, providing instructions for creating a pathway that allows the passage of chloride ions. A mutation in the CFTR gene can impair the normal function of chloride channels, leading to abnormal transport of chloride ions and water, resulting in the formation of thick and abnormal mucus.
In the pancreatic duct, chloride transport occurs through voltage-gated chloride channels, which are influenced by CFTR. These channels are localized in the apical membrane of the epithelial cells of the pancreatic duct.
Several mutations in the CFTR gene can occur, and different mutations cause different defects in the CFTR protein, sometimes causing a milder or more severe disease. These protein defects are also targets for drugs which can sometimes restore their function. The ΔF508 mutation, which occurs in >90% of patients in the U.S., creates a protein that does not fold normally and is not appropriately transported to the cell membrane, resulting in its degradation.
Some mutations result in proteins that are too short (truncated) because production ends prematurely. Others produce proteins that do not use energy (in the form of ATP) normally, do not allow chloride, iodide, and thiocyanate to cross the membrane appropriately, or degrade at a faster rate than normal. Mutations may also lead to fewer copies of the CFTR protein being produced.
The protein created by this gene is anchored to the outer membrane of cells in the sweat glands, lungs, pancreas, and all other remaining exocrine glands in the body.
The protein spans this membrane and acts as a channel connecting the inner part of the cell (cytoplasm) to the surrounding fluid. This channel is primarily responsible for controlling the movement of halide anions from inside to outside of the cell; however, in the sweat ducts, it facilitates the movement of chloride from the sweat duct into the cytoplasm. When the CFTR protein does not resorb ions in sweat ducts, chloride and thiocyanate released from sweat glands are trapped inside the ducts and pumped to the skin.
Additionally, hypothiocyanite (OSCN−) cannot be produced by the immune defense system. Because chloride is negatively charged, this modifies the electrical potential inside and outside the cell that normally causes cations to cross into the cell. Sodium is the most common cation in the extracellular space. The excess chloride within sweat ducts prevents sodium resorption by epithelial sodium channels, and the combination of sodium and chloride creates salt, which is lost in high amounts in the sweat of individuals with CF. This lost salt forms the basis for the sweat test.
Most of the damage in CF is due to blockage of the narrow passages of affected organs with thickened secretions. These blockages lead to remodeling and infection in the lung, damage by accumulated digestive enzymes in the pancreas, blockage of the intestines by thick feces, etc. Several theories have been posited on how the defects in the protein and cellular function cause the clinical effects. The most current theory suggests that defective ion transport leads to dehydration in the airway epithelia, thickening mucus. In airway epithelial cells, the cilia exist in between the cell's apical surface and mucus in a layer known as airway surface liquid (ASL). The flow of ions from the cell and into this layer is determined by ion channels such as CFTR. CFTR not only allows chloride ions to be drawn from the cell and into the ASL, but it also regulates another channel called ENaC, which allows sodium ions to leave the ASL and enter the respiratory epithelium. CFTR normally inhibits this channel, but if the CFTR is defective, then sodium flows freely from the ASL and into the cell.
As water follows sodium, the depth of the ASL is depleted and the cilia are left in the mucous layer. As cilia cannot effectively move in a thick, viscous environment, mucociliary clearance is deficient and a buildup of mucus occurs, clogging small airways. The accumulation of more viscous, nutrient-rich mucus in the lungs allows bacteria to hide from the body's immune system, causing repeated respiratory infections. The presence of the same CFTR proteins in the pancreatic duct and sweat glands in the skin also causes symptoms in these systems.
Chronic infections
The lungs of individuals with cystic fibrosis are colonized and infected by bacteria from an early age. These bacteria, which often spread among individuals with CF, thrive in the altered mucus, which collects in the small airways of the lungs. This mucus leads to the formation of bacterial microenvironments known as biofilms that are difficult for immune cells and antibiotics to penetrate. Viscous secretions and persistent respiratory infections repeatedly damage the lung by gradually remodeling the airways, which makes infection even more difficult to eradicate. The natural history of CF lung infections and airway remodeling is poorly understood, largely due to the immense spatial and temporal heterogeneity both within and between the microbiomes of CF patients.
Over time, both the types of bacteria and their individual characteristics change in individuals with CF. In the initial stage, common bacteria such as S. aureus and H. influenzae colonize and infect the lungs. Eventually, Pseudomonas aeruginosa (and sometimes Burkholderia cepacia) dominates. By 18 years of age, 80% of patients with classic CF harbor P. aeruginosa, and 3.5% harbor B. cepacia. Once within the lungs, these bacteria adapt to the environment and develop resistance to commonly used antibiotics. Pseudomonas can develop special characteristics that allow the formation of large colonies, known as "mucoid" Pseudomonas, which are rarely seen in people who do not have CF. Scientific evidence suggests the interleukin 17 pathway plays a key role in resistance and modulation of the inflammatory response during P. aeruginosa infection in CF. In particular, interleukin 17-mediated immunity plays a double-edged activity during chronic airways infection; on one side, it contributes to the control of P. aeruginosa burden, while on the other, it propagates exacerbated pulmonary neutrophilia and tissue remodeling.
Infection can spread by passing between different individuals with CF. In the past, people with CF often participated in summer "CF camps" and other recreational gatherings. Hospitals grouped patients with CF into common areas and routine equipment (such as nebulizers) was not sterilized between individual patients. This led to transmission of more dangerous strains of bacteria among groups of patients. As a result, individuals with CF are now routinely isolated from one another in the healthcare setting, and healthcare providers are encouraged to wear gowns and gloves when examining patients with CF to limit the spread of virulent bacterial strains.
CF patients may also have their airways chronically colonized by filamentous fungi (such as Aspergillus fumigatus, Scedosporium apiospermum, Aspergillus terreus) and/or yeasts (such as Candida albicans); other filamentous fungi less commonly isolated include Aspergillus flavus and Aspergillus nidulans (occur transiently in CF respiratory secretions) and Exophiala dermatitidis and Scedosporium prolificans (chronic airway-colonizers); some filamentous fungi such as Penicillium emersonii and Acrophialophora fusispora are encountered in patients almost exclusively in the context of CF. Defective mucociliary clearance characterizing CF is associated with local immunological disorders. In addition, the prolonged therapy with antibiotics and the use of corticosteroid treatments may also facilitate fungal growth. Although the clinical relevance of the fungal airway colonization is still a matter of debate, filamentous fungi may contribute to the local inflammatory response and therefore to the progressive deterioration of the lung function, as often happens with allergic bronchopulmonary aspergillosis – the most common fungal disease in the context of CF, involving a Th2-driven immune response to Aspergillus species.
Diagnosis
Diagnosis of CF is initially based on clinical findings indicative of respiratory disease, various digestive problems, meconium ileus, and other features. Definitive diagnosis may involve genetic testing based on family history or testing the chloride concentration in sweat, which is relatively high (>60 mEq/L) in individuals with CF.
In many localities, all newborns are screened for cystic fibrosis within the first few days of life, typically by a blood test for high levels of immunoreactive trypsinogen. Newborns with positive tests, or those otherwise suspected of having cystic fibrosis based on symptoms or family history, then undergo a sweat test. An electric current is used to drive pilocarpine into the skin, stimulating sweating. The sweat is collected and analyzed for salt levels. Unusually high levels of chloride in the sweat suggest CFTR is dysfunctional; the person is then diagnosed with cystic fibrosis. Genetic testing is also available to identify the CFTR mutations typically associated with cystic fibrosis. Many laboratories can test for the 30–96 most common CFTR mutations, which can identify over 90% of people with cystic fibrosis.
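As a purely illustrative sketch of the decision rule described above: the >60 mEq/L cutoff is quoted in the text, while the 30–59 mEq/L "intermediate" band is an added assumption for illustration, not a figure from this article.

```python
# Illustrative sketch of the sweat-test interpretation described above.
# The >60 mEq/L cutoff comes from the text; the 30-59 mEq/L intermediate
# band is an assumption added for illustration.
def interpret_sweat_chloride(chloride_meq_per_l: float) -> str:
    """Classify a sweat chloride result (mEq/L) in CF screening."""
    if chloride_meq_per_l > 60:
        return "elevated: consistent with CF; confirm with CFTR genetic testing"
    if chloride_meq_per_l >= 30:
        return "intermediate (assumed band): inconclusive; repeat or pursue genetic testing"
    return "normal: CF unlikely"

print(interpret_sweat_chloride(75))  # elevated
print(interpret_sweat_chloride(20))  # normal
```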
People with CF have less thiocyanate and hypothiocyanite in their saliva and mucus (Banfi et al.). In the case of milder forms of CF, transepithelial potential difference measurements can be helpful. CF can also be diagnosed by identification of mutations in the CFTR gene.
In many cases, a parent makes the diagnosis because the infant tastes salty. Immunoreactive trypsinogen levels can be increased in individuals who have a single mutated copy of the CFTR gene (carriers) or, in rare instances, in individuals with two normal copies of the CFTR gene. Due to these false positives, CF screening in newborns can be controversial.
By 2010 every US state had instituted newborn screening programs and 21 European countries had programs in at least some regions.
Prenatal
Women who are pregnant or couples planning a pregnancy can have themselves tested for the CFTR gene mutations to determine the risk that their child will be born with CF. Testing is typically performed first on one or both parents and, if the risk of CF is high, testing on the fetus is performed. The American College of Obstetricians and Gynecologists recommends all people thinking of becoming pregnant be tested to see if they are a carrier.
Because development of CF in the fetus requires each parent to pass on a mutated copy of the CFTR gene and because CF testing is expensive, testing is often performed initially on one parent. If testing shows that parent is a CFTR gene mutation carrier, the other parent is tested to calculate the risk that their children will have CF. CF can result from more than a thousand different mutations. Typically, only the most common mutations are tested for, such as ΔF508. Most commercially available tests look for 32 or fewer different mutations. If a family has a known uncommon mutation, specific screening for that mutation can be performed. Because not all known mutations are found on current tests, a negative screen does not guarantee that a child will not have CF.
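The residual-risk reasoning behind that caveat can be sketched with Bayes' rule. The following is an illustration only, assuming the 1-in-25 prior carrier probability quoted earlier and a hypothetical panel that detects 90% of carrier mutations (the detection rate is an assumption, not a figure from this article):

```python
# Illustrative sketch: why a negative screen does not guarantee a child
# will not have CF. Assumes a prior carrier probability of 1/25 and a
# hypothetical panel detecting 90% of carrier mutations.
def residual_carrier_risk(prior: float, detection_rate: float) -> float:
    """Probability of still being a carrier after a negative panel (Bayes' rule)."""
    missed = prior * (1 - detection_rate)   # carrier whose mutation is not on the panel
    negative = missed + (1 - prior)         # all ways to screen negative
    return missed / negative

post = residual_carrier_risk(prior=1 / 25, detection_rate=0.90)
# If the other parent is a known carrier, each pregnancy between two
# carriers has a 25% chance of an affected child:
child_risk = post * 0.25
print(f"residual carrier risk: about 1 in {1 / post:.0f}")          # ~1 in 241
print(f"risk of an affected child: about 1 in {1 / child_risk:.0f}")  # ~1 in 964
```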
During pregnancy, testing can be performed on the placenta (chorionic villus sampling) or the fluid around the fetus (amniocentesis). However, chorionic villus sampling has a risk of fetal death of one in 100, and amniocentesis of one in 200; a recent study has indicated the amniocentesis risk may be much lower, about one in 1,600.
For carrier couples, preimplantation genetic diagnosis (PGD) provides a net economic benefit compared with natural conception followed by prenatal testing and abortion of affected pregnancies, up to a maternal age of around 40 years; beyond that age, the latter approach has the higher economic benefit.
Management
Treatment for CF is diverse, tailored to different symptoms, and includes various devices, inhalation medications to alleviate respiratory difficulties, oral enzyme supplements to address exocrine pancreatic insufficiency, and, in some cases, surgical interventions for conditions such as meconium ileus. While treatment alleviates symptoms and prevents potential complications, there is currently no cure for the disease.
The management of CF has improved significantly over the past 70 years. While infants born with it 70 years ago would have been unlikely to live beyond their first year, infants today are likely to live well into adulthood. Advances in the treatment of cystic fibrosis have meant that people with cystic fibrosis can live a fuller life less encumbered by their condition. The cornerstones of management are the proactive treatment of airway infection, encouragement of good nutrition, and an active lifestyle. Pulmonary rehabilitation continues throughout a person's life and is aimed at maximizing organ function, and therefore quality of life. Occupational therapists use energy conservation techniques in the rehabilitation process for patients with cystic fibrosis. Examples of energy conservation techniques are ergonomic principles, pursed lip breathing, and diaphragmatic breathing. People with CF tend to have fatigue and dyspnea due to chronic pulmonary infections, so reducing the amount of energy spent during activities can help people feel better and gain more independence. At best, current treatments delay the decline in organ function. Because of the wide variation in disease symptoms, treatment typically occurs at specialist multidisciplinary centers and is tailored to the individual. Targets for therapy are the lungs, gastrointestinal tract (including pancreatic enzyme supplements), the reproductive organs (including assisted reproductive technology), and psychological support.
The most consistent aspect of therapy in CF is limiting and treating the lung damage caused by thick mucus and infection, with the goal of maintaining quality of life. Intravenous, inhaled, and oral antibiotics are used to treat chronic and acute infections. Mechanical devices and inhalation medications are used to alter and clear the thickened mucus. These therapies, while effective, can be extremely time-consuming. Oxygen therapy at home is recommended in those with significant low oxygen levels. Many people with CF use probiotics, which are thought to be able to correct intestinal dysbiosis and inflammation, but the clinical trial evidence regarding the effectiveness of probiotics for reducing pulmonary exacerbations in people with CF is uncertain.
Antibiotics
Many people with CF are on one or more antibiotics at all times, even when healthy, to prophylactically suppress infection. The choice of antibiotics for cystic fibrosis depends on the specific bacteria that are causing the infection, as well as the patient's age, weight, and other medical conditions. Antibiotics are absolutely necessary whenever pneumonia is suspected or a noticeable decline in lung function is seen, and are usually chosen based on the results of a sputum analysis and the person's past response. This prolonged therapy often necessitates hospitalization and insertion of a more permanent IV such as a peripherally inserted central catheter or Port-a-Cath. Inhaled therapy with antibiotics such as tobramycin, colistin, and aztreonam is often given for months at a time to improve lung function by impeding the growth of colonized bacteria. Inhaled antibiotic therapy helps lung function by fighting infection, but also has significant drawbacks such as development of antibiotic resistance, tinnitus, and changes in the voice. Inhaled levofloxacin may be used to treat Pseudomonas aeruginosa in people with cystic fibrosis who are infected.
Antibiotics by mouth such as ciprofloxacin or azithromycin are given to help prevent infection or to control ongoing infection. The aminoglycoside antibiotics (e.g. tobramycin) used can cause hearing loss, damage to the balance system in the inner ear or kidney failure with long-term use. To prevent these side-effects, the amount of antibiotics in the blood is routinely measured and adjusted accordingly.
Currently, no reliable clinical trial evidence shows the effectiveness of antibiotics for pulmonary exacerbations in people with cystic fibrosis and Burkholderia cepacia complex or for the use of antibiotics to treat nontuberculous mycobacteria in people with CF.
Pseudomonas aeruginosa
Early Pseudomonas aeruginosa infection is usually managed with nebulized antibiotics, with or without oral antibiotics, to remove the bacteria from the person's airways for a period of time. When choosing antibiotics to treat lung infections caused by Pseudomonas aeruginosa in people with cystic fibrosis, it is still unclear whether the choice of antibiotics should be based on the results of testing antibiotics separately (one at a time) or in combination with each other. It is also not clear if these treatment approaches for the Pseudomonas aeruginosa infection improve the person's quality of life or lifespan. The negative side effects of antibiotics for this infection are also not well studied. Intravenous antibiotic therapy to treat Pseudomonas aeruginosa infections has been shown not to be any better than antibiotics taken orally.
Methicillin-resistant Staphylococcus aureus
Methicillin-resistant Staphylococcus aureus (MRSA) infections can be dangerous for people with cystic fibrosis and can worsen lung damage leading to more rapid decline. Early treatment with antibiotics is standard; however, further research is needed to determine longer term effects and benefits (3–6 months after the treatment or longer) and survival rates associated with different treatment options.
Antibiotic adjuvant therapy
Factors related to antibiotic use, the chronicity of the disease, and the emergence of resistant bacteria demand more exploration of different strategies, such as antibiotic adjuvant therapy. Antibiotic adjuvant therapy refers to therapeutic approaches that aim to improve the action of antibiotics, such as pharmaceutical agents or supplements that reduce the virulence of the bacterium or increase its susceptibility to the antibiotic, so that the antibiotics are more effective. There is no strong evidence to recommend specific antibiotic adjuvant therapies such as β-carotene, nitric oxide, zinc supplements, or KB001-A.
Other medication
Aerosolized medications that help loosen secretions include dornase alfa and hypertonic saline. Dornase alfa is a recombinant human deoxyribonuclease, which breaks down DNA in the sputum, thus decreasing its viscosity. Dornase alfa improves lung function and probably decreases the risk of exacerbations, but there is insufficient evidence that it is more or less effective than other similar hyperosmolar therapies.
Denufosol, an investigational drug, opens an alternative chloride channel, helping to liquefy mucus. Whether inhaled corticosteroids are useful is unclear, but stopping inhaled corticosteroid therapy is safe. There is weak evidence that corticosteroid treatment may cause harm by interfering with growth. Pneumococcal vaccination in this population has not been studied. There is also no clear evidence from randomized controlled trials that the influenza vaccine is beneficial for people with cystic fibrosis.
Ivacaftor is a medication taken by mouth for the treatment of CF due to a number of specific mutations responsive to ivacaftor-induced CFTR protein enhancement. It improves lung function by about 10%; however, it is expensive. The first year it was on the market, the list price was over $300,000 per year in the United States. In July 2015, the U.S. Food and Drug Administration approved lumacaftor/ivacaftor. In 2018, the FDA approved the combination ivacaftor/tezacaftor; the manufacturer announced a list price of $292,000 per year. Tezacaftor helps move the CFTR protein to the correct position on the cell surface, and is designed to treat people with the F508del mutation.
In 2019, the combination drug elexacaftor/ivacaftor/tezacaftor, marketed as Trikafta, was approved for CF patients over the age of 12 in the United States. In 2021, this was extended to include patients over the age of 6. In Europe this drug was approved in 2020 and marketed as Kaftrio. It is used in those that have an F508del mutation, which occurs in about 90% of patients with cystic fibrosis. According to the Cystic Fibrosis Foundation, "this medicine represents the single greatest therapeutic advancement in the history of CF, offering a treatment for the underlying cause of the disease that could eventually bring modulator therapy to 90 percent of people with CF." In a clinical trial, participants who were administered the combination drug experienced a subsequent 63% decrease in pulmonary exacerbations and a 41.8 mmol/L decrease in sweat chloride concentration. By mitigating a repertoire of symptoms associated with cystic fibrosis, the combination drug significantly improved quality-of-life metrics among patients with the disease as well. The combination drug is also known to interact with CYP3A inducers, such as carbamazepine used in the treatment of bipolar disorder, causing elexacaftor/ivacaftor/tezacaftor to circulate in the body at decreased concentrations. As such, concurrent use is not recommended. The list price in the US is $311,000 per year; however, insurance may cover much of the cost of the drug.
Ursodeoxycholic acid, a bile salt, has been used; however, there is insufficient data to show if it is effective.
The combination vanzacaftor/tezacaftor/deutivacaftor (Alyftrek) was approved for medical use in the United States in December 2024.
Nutrient supplementation
It is uncertain whether vitamin A or beta-carotene supplementation has any effect on the eye and skin problems caused by vitamin A deficiency.
There is no strong evidence that people with cystic fibrosis can prevent osteoporosis by increasing their intake of vitamin D.
For people with vitamin E deficiency and cystic fibrosis, there is evidence that vitamin E supplementation may improve vitamin E levels, although it is still uncertain what effect supplementation has on vitamin E-specific deficiency disorders or on lung function.
Robust evidence regarding the effects of vitamin K supplementation in people with cystic fibrosis is lacking as of 2020.
Various studies have examined the effects of omega-3 fatty acid supplementation for people with cystic fibrosis, but it remains uncertain whether supplementation has any benefits or adverse effects.
Procedures
Several mechanical techniques are used to dislodge sputum and encourage its expectoration. One technique good for short-term airway clearance is chest physiotherapy, in which a respiratory therapist percusses an individual's chest by hand several times a day to loosen up secretions. This "percussive effect" can also be administered through specific devices that use chest wall oscillation or an intrapulmonary percussive ventilator. Other methods, such as biphasic cuirass ventilation and the associated clearance mode available in such devices, integrate a cough-assistance phase as well as a vibration phase for dislodging secretions. These are portable and adapted for home use.
Another technique is positive expiratory pressure physiotherapy, which consists of providing a back pressure to the airways during expiration. This effect is provided by devices that consist of a mask or a mouthpiece in which a resistance is applied only on the expiration phase. The operating principle of this technique seems to be the increase of gas pressure behind mucus through collateral ventilation, along with a temporary increase in functional residual capacity that prevents the early collapse of small airways during exhalation.
As lung disease worsens, mechanical breathing support may become necessary. Individuals with CF may need to wear special masks at night to help push air into their lungs. These machines, known as bilevel positive airway pressure (BiPAP) ventilators, help prevent low blood oxygen levels during sleep. Non-invasive ventilators may be used during physical therapy to improve sputum clearance. It is not known if this type of therapy has an impact on pulmonary exacerbations or disease progression. It is not known what role non-invasive ventilation therapy has for improving exercise capacity in people with cystic fibrosis. However, the authors noted that "non-invasive ventilation may be a useful adjunct to other airway clearance techniques, particularly in people with cystic fibrosis who have difficulty expectorating sputum". During severe illness, a tube may be placed in the throat (a procedure known as a tracheostomy) to enable breathing supported by a ventilator.
For children, preliminary studies show massage therapy may help improve quality of life for them and their families.
Some lung infections require surgical removal of the infected part of the lung. If this is necessary many times, lung function is severely reduced. The most effective treatment options for people with CF who have spontaneous or recurrent pneumothoraces are not clear.
Transplantation
Lung transplantation may become necessary for individuals with CF as lung function and exercise tolerance decline. Although single lung transplantation is possible in other diseases, individuals with CF must have both lungs replaced because the remaining lung might contain bacteria that could infect the transplanted lung. A pancreatic or liver transplant may be performed at the same time to alleviate liver disease and/or diabetes. Lung transplantation is considered when lung function declines to the point where assistance from mechanical devices is required or someone's survival is threatened. According to Merck Manual, "bilateral lung transplantation for severe lung disease is becoming more routine and more successful with experience and improved techniques. Among adults with CF, median survival posttransplant is about 9 years."
Other aspects
Newborns with intestinal obstruction typically require surgery, whereas adults with distal intestinal obstruction syndrome typically do not. Treatment of pancreatic insufficiency by replacement of missing digestive enzymes allows the duodenum to properly absorb nutrients and vitamins that would otherwise be lost in the feces. However, the best dosage and form of pancreatic enzyme replacement is unclear, as are the risks and long-term effectiveness of this treatment.
So far, no large-scale research involving the incidence of atherosclerosis and coronary heart disease in adults with cystic fibrosis has been conducted. This is likely because the vast majority of people with cystic fibrosis do not live long enough to develop clinically significant atherosclerosis or coronary heart disease.
Diabetes is the most common nonpulmonary complication of CF. It mixes features of type 1 and type 2 diabetes, and is recognized as a distinct entity, cystic fibrosis-related diabetes. While oral antidiabetic drugs are sometimes used, the recommended treatment is the use of insulin injections or an insulin pump, and, unlike in type 1 and 2 diabetes, dietary restrictions are not recommended. While Stenotrophomonas maltophilia is relatively common in people with cystic fibrosis, the evidence about the effectiveness of antibiotics for S. maltophilia is uncertain.
Bisphosphonates taken by mouth or intravenously can be used to improve the bone mineral density in people with cystic fibrosis, but there is no proof that this reduces fractures or increases survival rates. When taking bisphosphonates intravenously, adverse effects such as pain and flu-like symptoms can be an issue. The adverse effects of bisphosphonates taken by mouth on the gastrointestinal tract are not known.
Poor growth may be avoided by insertion of a feeding tube for increasing food energy through supplemental feeds or by administration of injected growth hormone.
Sinus infections are treated by prolonged courses of antibiotics. The development of nasal polyps or other chronic changes within the nasal passages may severely limit airflow through the nose, and over time reduce the person's sense of smell. Sinus surgery is often used to alleviate nasal obstruction and to limit further infections. Nasal steroids such as fluticasone propionate are used to decrease nasal inflammation.
Female infertility may be overcome by assisted reproduction technology, particularly embryo transfer techniques. Male infertility caused by absence of the vas deferens may be overcome with testicular sperm extraction, collecting sperm cells directly from the testicles. If the collected sample contains too few sperm cells for spontaneous fertilization to be likely, intracytoplasmic sperm injection can be performed. Third party reproduction is also a possibility for women with CF. Whether taking antioxidants affects outcomes is unclear.
Physical exercise is usually part of outpatient care for people with cystic fibrosis. Aerobic exercise seems to be beneficial for aerobic exercise capacity, lung function and health-related quality of life; however, the quality of the evidence was poor.
Due to the use of aminoglycoside antibiotics, ototoxicity is common. Symptoms may include "tinnitus, hearing loss, hyperacusis, aural fullness, dizziness, and vertigo".
Gastrointestinal
Problems with the gastrointestinal system, including constipation and obstruction of the gastrointestinal tract such as distal intestinal obstruction syndrome, are frequent complications for people with cystic fibrosis. Treatment of gastrointestinal problems is required in order to prevent a complete obstruction, reduce other CF symptoms, and improve quality of life. While stool softeners, laxatives, and prokinetics (GI-focused treatments) are often suggested, there is no clear consensus from experts as to which approach is best and carries the least risk. Mucolytics or systemic treatments aimed at dysfunctional CFTR are also sometimes suggested to improve symptoms. The evidence supporting these recommendations is very weak, and more research is needed to understand how to prevent and treat GI problems in people with CF.
Prognosis
The prognosis for cystic fibrosis has improved due to earlier diagnosis through screening and better treatment and access to health care. In 1959, the median age of survival of children with CF in the United States was six months.
In 2010, survival was estimated to be 37 years for women and 40 for men. In Canada, median survival increased from 24 years in 1982 to 47.7 in 2007. In the United States, those born with CF in 2016 have a predicted life expectancy of 47.7 years when cared for in specialty clinics. Due to the recent development of new treatments, such as CFTR modulators, life expectancy has increased rapidly during recent years. In 2020, the median predicted life expectancy was around 59 years, although there are uncertainties in the estimates due to the low number of annual deaths for persons with cystic fibrosis.
In the US, of those with CF who are more than 18 years old as of 2009, 92% had graduated from high school, 67% had at least some college education, 15% were disabled, 9% were unemployed, 56% were single, and 39% were married or living with a partner.
Quality of life
Chronic illnesses can be difficult to manage. CF is a chronic illness that affects the "digestive and respiratory tracts resulting in generalized malnutrition and chronic respiratory infections". The thick secretions clog the airways in the lungs, which often causes inflammation and severe lung infections. When lung function is compromised, it affects the quality of life of someone with CF and their ability to complete such tasks as everyday chores.
According to Schmitz and Goldbeck (2006), CF significantly increases emotional stress on both the individual and the family, "and the necessary time-consuming daily treatment routine may have further negative effects on quality of life". However, Havermans and colleagues (2006) have established that young outpatients with CF who have participated in the Cystic Fibrosis Questionnaire-Revised "rated some quality of life domains higher than did their parents". Consequently, outpatients with CF have a more positive outlook for themselves. As Merck Manual notes, "with appropriate support, most patients can make an age-appropriate adjustment at home and school. Despite myriad problems, the educational, occupational, and marital successes of patients are impressive."
Furthermore, there are many ways to enhance the quality of life in CF patients. Exercise is promoted to increase lung function. Integrating an exercise regimen into the CF patient's daily routine can significantly improve quality of life. No definitive cure for CF is known, but diverse medications are used, such as mucolytics, bronchodilators, steroids, and antibiotics, that have the purpose of loosening mucus, expanding airways, decreasing inflammation, and fighting lung infections, respectively.
Epidemiology
Cystic fibrosis is the most common life-limiting autosomal recessive disease among people of European heritage. In the United States, about 30,000 individuals have CF; most are diagnosed by six months of age. In Canada, about 4,000 people have CF. Around one in 25 people of European descent, and one in 30 white Americans, is a carrier of a CF mutation. Although CF is less common in other groups, roughly one in 46 Hispanics, one in 65 Africans, and one in 90 Asians carry at least one abnormal CFTR gene. Ireland has the world's highest prevalence of CF, at one in 1,353; Japan's prevalence of CF is among the lowest in the world, at one in 350,000.
Although technically a rare disease, CF is ranked as one of the most widespread life-shortening genetic diseases. It is most common among nations in the Western world. An exception is Finland, where only one in 80 people carries a CF mutation. The World Health Organization states, "In the European Union, one in 2000–3000 newborns is found to be affected by CF". In the United States, one in 3,500 children is born with CF. In 1997, about one in 3,300 white children in the United States was born with CF. In contrast, only one in 15,000 African American children has it, and in Asian Americans, the rate is even lower, at one in 32,000.
Cystic fibrosis is diagnosed equally in males and females. For reasons that remain unclear, data have shown that males tend to have a longer life expectancy than females, though recent studies suggest this gender gap may no longer exist, perhaps due to improvements in health care facilities. A recent study from Ireland identified a link between the female hormone estrogen and worse outcomes in CF.
The distribution of CF alleles varies among populations. The frequency of ΔF508 carriers has been estimated at one in 200 in northern Sweden, one in 143 in Lithuanians, and one in 38 in Denmark. No ΔF508 carriers were found among 171 Finns and 151 Saami people. ΔF508 does occur in Finland, but it is a minority allele there. CF is known to occur in only 20 families (pedigrees) in Finland.
Evolution
The ΔF508 mutation is estimated to have occurred up to 52,000 years ago. Numerous hypotheses have been advanced as to why such a lethal allele has persisted and spread in the human population. Other common autosomal recessive diseases such as sickle-cell anemia have been found to protect carriers from other diseases, an evolutionary trade-off known as heterozygote advantage. Resistance to the following have all been proposed as possible sources of heterozygote advantage:
Cholera: With the discovery that cholera toxin requires normal host CFTR proteins to function properly, it was hypothesized that carriers of mutant CFTR alleles benefited from resistance to cholera and other causes of diarrhea. Further studies have not confirmed this hypothesis.
Typhoid: Normal CFTR proteins are also essential for the entry of Salmonella Typhi into cells, suggesting that carriers of mutant CFTR genes might be resistant to typhoid fever. No in vivo study has yet confirmed this. In both cases, the low level of cystic fibrosis outside of Europe, in places where both cholera and typhoid fever are endemic, is not immediately explicable.
Diarrhea: The prevalence of CF in Europe might be connected with the development of cattle domestication. In this hypothesis, carriers of a single mutant CFTR allele had some protection from diarrhea caused by lactose intolerance, before the mutations that created lactose tolerance appeared.
Tuberculosis: Another possible explanation is that carriers of the mutant allele could have some resistance to tuberculosis. This hypothesis is based on the idea that carriers of a mutant CFTR allele have reduced activity of one of their enzymes, arylsulphatase, which is necessary for Mycobacterium tuberculosis virulence. Because M. tuberculosis exploits this host enzyme and cannot express its virulence without it, carrying a mutant CFTR allele could provide resistance against tuberculosis.
History
CF is thought to have appeared about 3,000 BC, with the migration of peoples, gene mutations, and new dietary conditions. Although the entire clinical spectrum of CF was not recognized until the 1930s, certain aspects of CF were identified much earlier. Indeed, literature from Germany and Switzerland in the 18th century warned ("Woe to the child who tastes salty from a kiss on the forehead, for he is bewitched and soon must die"), recognizing the association between the salt loss in CF and illness.
In the 19th century, Carl von Rokitansky described a case of fetal death with meconium peritonitis, a complication of meconium ileus associated with CF. Meconium ileus was first described in 1905 by Karl Landsteiner. In 1936, Guido Fanconi described a connection between celiac disease, cystic fibrosis of the pancreas, and bronchiectasis.
In 1938, Dorothy Hansine Andersen published an article, "Cystic Fibrosis of the Pancreas and Its Relation to Celiac Disease: a Clinical and Pathological Study", in the American Journal of Diseases of Children. She was the first to describe the characteristic cystic fibrosis of the pancreas and to correlate it with the lung and intestinal disease prominent in CF. She also first hypothesized that CF was a recessive disease and first used pancreatic enzyme replacement to treat affected children. In 1952, Paul di Sant'Agnese discovered abnormalities in sweat electrolytes; a sweat test was developed and improved over the next decade.
The first linkage between CF and another marker (paraoxonase) was found in 1985 by Hans Eiberg, indicating that only one locus exists for CF. In 1988, the first mutation for CF, ΔF508, was discovered by Francis Collins, Lap-Chee Tsui, and John R. Riordan on the seventh chromosome. Subsequent research has found over 1,000 different mutations that cause CF.
Because mutations in the CFTR gene are typically small, classical genetics techniques had been unable to accurately pinpoint the mutated gene. Using protein markers, gene-linkage studies were able to map the mutation to chromosome 7. Chromosome walking and chromosome jumping techniques were then used to identify and sequence the gene. In 1989, Lap-Chee Tsui led a team of researchers at the Hospital for Sick Children in Toronto that discovered the gene responsible for CF. CF represents a classic example of how a human genetic disorder was elucidated strictly by the process of forward genetics.
Research
People with CF may be listed in a disease registry that allows researchers and doctors to track health results and identify candidates for clinical trials.
Gene therapy
Gene therapy has been explored as a potential cure for CF. Results from clinical trials have shown limited success, and using gene therapy as routine therapy is not suggested. A small study published in 2015 found a small benefit.
The focus of much CF gene therapy research is aimed at trying to place a normal copy of the CFTR gene into affected cells. Transferring the normal CFTR gene into the affected epithelial cells would result in the production of functional CFTR protein in all target cells, without adverse reactions or an inflammation response; this is known as somatic cell gene therapy. To prevent the lung manifestations of CF, only 5–10% of the normal amount of CFTR gene expression is needed. Multiple approaches have been tested for gene transfer, such as liposomes and viral vectors in animal models and clinical trials. However, both methods were found to be relatively inefficient treatment options, mainly because very few cells take up the vector and express the gene, so the treatment has little effect. Additionally, problems have been noted in cDNA recombination, such that the gene introduced by the treatment is rendered unusable. There has been a functional repair in culture of CFTR by CRISPR/Cas9 in intestinal stem cell organoids of cystic fibrosis patients.
Bacteriophage therapy
Bacteriophage therapy (phage therapy) is being studied for multidrug-resistant bacteria in people with CF. Bacteriophage therapy is a treatment method that uses viruses, known as bacteriophages, to target and destroy harmful bacteria in the body. Unlike antibiotics, which can kill a wide range of bacteria and potentially disrupt the body's normal flora, phage therapy is highly specific, targeting only the harmful bacteria while leaving the beneficial ones unharmed. As such, bacteriophage therapy is a promising alternative for treating infections caused by multidrug-resistant bacteria, such as Staphylococcus aureus, Haemophilus influenzae, and Pseudomonas aeruginosa in CF patients, which are often protected by biofilms and thus resistant to conventional antibiotics.
Bacteriophage therapy uses viruses as antimicrobial agents to overcome antibiotic resistance in bacteria protected by biofilms. Phage therapy has been used to treat Pseudomonas aeruginosa infection in the lungs, which is frequently seen in cystic fibrosis patients, as these bacteria produce biofilms that give them multidrug resistance.
Gene modulators
A number of small molecules that aim at compensating various mutations of the CFTR gene are under development. CFTR modulator therapies have been used in place of other types of genetic therapies. These therapies focus on the expression of a genetic mutation instead of the mutated gene itself. Modulators are split into two classes: potentiators and correctors. Potentiators act on the CFTR ion channels that are embedded in the cell membrane, helping to open the channel and allow transmembrane flow. Correctors assist the transport of nascent proteins (proteins newly formed by ribosomes, before they fold into their final shape) to the cell surface so they can be incorporated into the cell membrane.
Most modulators target the transcription stage of genetic expression. One approach has been to develop medications that get the ribosome to overcome the stop codon and produce a full-length CFTR protein. About 10% of CF results from a premature stop codon in the DNA, leading to early termination of protein synthesis and truncated proteins. These drugs target nonsense mutations such as G542X, in which the amino acid glycine at position 542 is replaced by a stop codon. Aminoglycoside antibiotics interfere with protein synthesis and error-correction. In some cases, they can cause the cell to overcome a premature stop codon by inserting a random amino acid, thereby allowing expression of a full-length protein. Future research on these modulators is focused on the cellular targets that can be affected by a change in a gene's expression. Otherwise, gene therapy may be used as a treatment when modulator therapies do not work, given that about 10% of people with cystic fibrosis are not helped by these drugs.
Elexacaftor/ivacaftor/tezacaftor was approved in the United States in 2019 for cystic fibrosis. This combination of previously developed medicines is able to treat up to 90% of people with cystic fibrosis. This medication restores some effectiveness of the CFTR protein so that it can work as an ion channel on the cell's surface.
Ecological therapy
It has previously been shown that inter-species interactions are an important contributor to the pathology of CF lung infections. Examples include the production of antibiotic degrading enzymes such as β-lactamases and the production of metabolic by-products such as short-chain fatty acids (SCFAs) by anaerobic species, which can enhance the pathogenicity of traditional pathogens such as Pseudomonas aeruginosa. Due to this, it has been suggested that the direct alteration of CF microbial community composition and metabolic function would provide an alternative to traditional antibiotic therapies.
Antisense therapy
Antisense therapy is being researched to treat a subset of mutations which have limited or no response to CFTR modulators. Such mutations fall into two classes: splicing (e.g., c.3718-2477C>T) and nonsense (e.g., G542X, W1282X), both of which result in very low expression of CFTR protein, although the protein itself is usually unaffected. This is contrary to the more common mutations such as ΔF508 which have normal CFTR expression but in a non-functional form. Modulators serve only to correct these aberrant proteins and are of little to no benefit in the case of insufficient expression. Antisense oligonucleotides (ASOs) can solve this problem through the promotion of mRNA degradation or by changing pre-mRNA splicing, nonsense-mediated mRNA decay, or translation, thus increasing CFTR expression.
Society and culture
Salt in My Soul: An Unfinished Life, a posthumous memoir by Mallory Smith, a Californian with CF
Sick: The Life and Death of Bob Flanagan, Supermasochist, a 1997 documentary film
65_RedRoses, a 2009 documentary film
Breathing for a Living, a memoir by Laura Rothenberg
Every Breath I Take: Surviving and Thriving with Cystic Fibrosis, book by Claire Wineland
Five Feet Apart, a 2019 romantic drama film starring Cole Sprouse and Haley Lu Richardson
Orla Tinsley: Warrior, a 2018 documentary film about CF campaigner Orla Tinsley
The performance art of Martin O'Brien
Hi Nanna, 2023 Telugu-language film about a girl with CF
Sickboy, a podcast hosted by Jeremie Saunders about cystic fibrosis and other chronic illnesses
Continent Chasers, a travel and cystic fibrosis blog and informative website about a man with cystic fibrosis travelling the world
| Biology and health sciences | Specific diseases | Health |
50603 | https://en.wikipedia.org/wiki/Multiple%20sclerosis | Multiple sclerosis | Multiple sclerosis (MS) is an autoimmune disease resulting in damage to the insulating covers of nerve cells in the brain and spinal cord. As a demyelinating disease, MS disrupts the nervous system's ability to transmit signals, resulting in a range of signs and symptoms, including physical, mental, and sometimes psychiatric problems. Symptoms include double vision, vision loss, eye pain, muscle weakness, and loss of sensation or coordination. MS takes several forms, with new symptoms either occurring in isolated attacks (relapsing forms) or building up over time (progressive forms). In relapsing forms of MS symptoms may disappear completely between attacks, although some permanent neurological problems often remain, especially as the disease advances. In progressive forms of MS, bodily function slowly deteriorates once symptoms manifest and will steadily worsen if left untreated.
While its cause is unclear, the underlying mechanism is thought to be due to either destruction by the immune system or inactivation of myelin-producing cells. Proposed causes for this include immune dysregulation, genetics, and environmental factors, such as viral infections. The McDonald criteria are a frequently updated set of guidelines used to establish an MS diagnosis.
There is no cure for MS. Current treatments aim to mitigate inflammation and resulting symptoms from acute flares and prevent further attacks with disease-modifying medications. Physical therapy and occupational therapy, along with patient-centered symptom management, can help with people's ability to function. The long-term outcome is difficult to predict; better outcomes are more often seen in women, those who develop the disease early in life, those with a relapsing course, and those who initially experienced few attacks.
MS is the most common immune-mediated disorder affecting the central nervous system (CNS). In 2020, about 2.8 million people were affected by MS globally, with rates varying widely in different regions and among different populations. The disease usually begins between the ages of 20 and 50 and is twice as common in women as in men. MS was first described in 1868 by French neurologist Jean-Martin Charcot.
The name "multiple sclerosis" is short for multiple cerebro-spinal sclerosis, which refers to the numerous glial scars (or sclerae – essentially plaques or lesions) that develop on the white matter of the brain and spinal cord.
Signs and symptoms
As MS lesions can affect any part of the central nervous system, a person with MS can have almost any neurological symptom or sign referable to it.
Fatigue is one of the most common symptoms of MS. Roughly 65% of people with MS experience fatigue symptomatology, and of these, some 15–40% report fatigue as their most disabling MS symptom.
Autonomic, visual, motor, and sensory problems are also among the most common symptoms.
The specific symptoms depend on the locations of the lesions within the nervous system, and may include focal loss of sensitivity or changes in sensation in the limbs, such as tingling, pins and needles, or numbness; limb weakness or pain; pronounced reflexes; muscle spasms; difficulty with ambulation (walking); difficulties with coordination and balance (ataxia); problems with speech or swallowing; visual problems (such as blurred vision, optic neuritis with eye pain and vision loss, nystagmus, or double vision); fatigue; and bladder and bowel difficulties (such as urinary or fecal incontinence or retention), among others. When MS is more advanced, walking difficulties lead to a higher risk of falling.
Difficulties in thinking and emotional problems such as depression or unstable mood are also common. The primary deficit in cognitive function that people with MS experience is slowed information-processing speed, with memory also commonly affected, and executive function less commonly. Intelligence, language, and semantic memory are usually preserved, and the level of cognitive impairment varies considerably between people with MS.
Uhthoff's phenomenon, a worsening of symptoms due to exposure to higher-than-usual temperatures, and Lhermitte's sign, an electrical sensation that runs down the back when bending the neck, are particularly characteristic of MS, although they may not always be present. Another presenting manifestation that is rare but highly suggestive of a demyelinating process such as MS is bilateral internuclear ophthalmoplegia, in which the patient experiences double vision when attempting to look to the right or left.
Some 60% or more of MS patients find their symptoms, particularly including fatigue, are affected by changes in body temperature.
Measures of disability
The main measure of disability and severity is the expanded disability status scale (EDSS), with other measures such as the multiple sclerosis functional composite being increasingly used in research. EDSS is also correlated with falls in people with MS. While it is a popular measure, EDSS has been criticized for some of its limitations, such as overreliance on walking.
Disease course
Prodromal phase
MS may have a prodromal phase in the years leading up to its manifestation, characterized by psychiatric issues, cognitive impairment, and increased use of healthcare.
Onset
85% of cases begin as a clinically isolated syndrome (CIS) developing over a number of days, with 45% having motor or sensory problems, 20% having optic neuritis, and 10% having symptoms related to brainstem dysfunction, while the remaining 25% have more than one of these difficulties. With optic neuritis, the most common presenting symptom, people with MS notice subacute loss of vision, often associated with pain worsening on eye movement, and reduced color vision. Early diagnosis of MS-associated optic neuritis helps timely initiation of targeted treatments. However, it is crucial to adhere to established diagnostic criteria when treating optic neuritis because of the broad range of alternative causes, such as neuromyelitis optica spectrum disorder (NMOSD) and other autoimmune or infectious conditions. Symptoms initially follow one of two main patterns: either episodes of sudden worsening lasting a few days to months (called relapses, exacerbations, bouts, attacks, or flare-ups) followed by improvement (85% of cases), or gradual worsening over time without periods of recovery (10–15% of cases). A combination of these two patterns may also occur, or people may start with a relapsing-remitting course that later becomes progressive.
Relapses
Relapses are usually unpredictable, occurring without warning. Exacerbations rarely occur more frequently than twice per year. Some relapses, however, are preceded by common triggers and they occur more frequently during spring and summer. Similarly, viral infections such as the common cold, influenza, or gastroenteritis increase their risk. Stress may also trigger an attack.
Many events have not been found to affect the rate of relapses requiring hospitalization, including vaccination, breastfeeding, physical trauma, and Uhthoff's phenomenon.
Pregnancy
Many people with MS who become pregnant experience a reduction in symptoms during pregnancy. During the first months after delivery, the risk of relapse increases. Overall, pregnancy does not seem to influence long-term disability.
Causes
MS is an autoimmune disease with a combination of genetic and environmental causes underlying it. Both T-cells and B-cells are involved, although T-cells are often considered to be the driving force of the disease. The causes of the disease are not fully understood. The Epstein-Barr virus (EBV) has been shown to be directly present in the brains of most people with MS, where the virus is transcriptionally active in infected cells. EBV nuclear antigens are believed to be involved in the pathogenesis of multiple sclerosis, but not all people with MS have signs of EBV infection. Dozens of human peptides have been identified in different cases of the disease, and while some have plausible links to infectious organisms or known environmental factors, others do not.
Immune dysregulation
Failure of both central and peripheral immune tolerance mechanisms to clear autoreactive immune cells is implicated in MS development. The thymus is responsible for the immune system's central tolerance, where autoreactive T-cells are killed without being released into circulation. A similar mechanism kills autoreactive B-cells in the bone marrow. Some autoreactive T-cells and B-cells may escape these defense mechanisms, which is where peripheral immune tolerance defenses take action by preventing them from causing disease. However, these additional lines of defense can still fail. Further detail on immune dysregulation's contribution to MS risk is provided in the pathophysiology section of this article as well as the standalone article on the pathophysiology of MS.
Infectious agents
Early evidence suggested an association between several viruses and human demyelinating encephalomyelitis, as well as the occurrence of demyelination in animals caused by some viral infections. One such virus, Epstein-Barr virus (EBV), can cause infectious mononucleosis and infects about 95% of adults, though only a small proportion of those infected later develop MS. A study of more than 10 million US military members compared 801 people who developed MS to 1,566 matched controls who did not. The study found a 32-fold increased risk of MS development following EBV infection. It did not find an increased risk after infection with other viruses, including the similar cytomegalovirus. These findings strongly suggest that EBV plays a role in MS onset, although EBV alone may be insufficient to cause it.
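The 32-fold figure is a relative measure of risk. As a toy illustration with invented counts and person-years (not the study's actual data), a rate ratio compares the incidence of MS between infected and uninfected groups:

```python
# Toy example only: the counts and person-years below are invented,
# not data from the military cohort study described above.

def rate_ratio(cases_exposed: int, py_exposed: float,
               cases_unexposed: int, py_unexposed: float) -> float:
    """Incidence rate ratio: MS incidence in EBV-positive vs EBV-negative."""
    return (cases_exposed / py_exposed) / (cases_unexposed / py_unexposed)

print(rate_ratio(cases_exposed=32, py_exposed=100_000,
                 cases_unexposed=1, py_unexposed=100_000))  # 32.0
```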
The nuclear antigen of EBV, which is the most consistent marker of EBV infection across all strains, has been identified as a direct source of autoreactivity in the human body. These antigens appear more likely to promote autoimmunity in vitamin D-deficient persons. The exact nature of this relationship is poorly understood.
Genetics
MS is not considered a hereditary disease, but several genetic variations have been shown to increase its risk. Some of these genes appear to have higher expression levels in microglial cells than expected by chance. The probability of developing MS is higher in relatives of an affected person, with a greater risk among those more closely related. An identical twin of an affected individual has a 30% chance of developing MS, 5% for a nonidentical twin, 2.5% for a sibling, and an even lower chance for a half-sibling. If both parents are affected, the risk in their children is 10 times that of the general population. MS is also more common in some ethnic groups than others.
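Read numerically, these familial figures imply large relative risks. The sketch below divides them by an assumed general-population lifetime risk of about 0.3%; the baseline is an illustrative assumption, not a value given in this article:

```python
# Baseline lifetime risk is ASSUMED (~0.3%) purely for illustration.
baseline = 0.003
familial = {"identical twin": 0.30,     # figures quoted in the text above
            "nonidentical twin": 0.05,
            "sibling": 0.025}
for relation, risk in familial.items():
    print(f"{relation}: ~{risk / baseline:.0f}x the assumed baseline")
# identical twin: ~100x, nonidentical twin: ~17x, sibling: ~8x
```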
Specific genes linked with MS include differences in the human leukocyte antigen (HLA) system—a group of genes on chromosome 6 that serves as the major histocompatibility complex (MHC). The contribution of HLA variants to MS susceptibility has been known since the 1980s, and the system has also been implicated in the development of other autoimmune diseases, such as type 1 diabetes and systemic lupus erythematosus. The most consistent finding is the association between a higher risk of developing MS and the MHC allele DR15, which is present in 30% of the U.S. and Northern European population. Other loci exhibit a protective effect, such as HLA-C554 and HLA-DRB1*11. HLA differences account for an estimated 20 to 60% of the genetic predisposition. Genome-wide association studies have revealed at least 200 MS-associated variants outside the HLA locus.
Geography
The prevalence of MS from a geographic standpoint resembles a gradient, with it being more common in people who live farther from the equator (e.g. those who live in northern regions of the world), although exceptions exist. The cause of this geographical pattern is not clear, although exposure to ultraviolet B (UVB) radiation and vitamin D levels may be possible explanations. For example, those who live in northern regions of the world have less exposure to UVB radiation and subsequently lower levels of vitamin D, which is a known risk factor for developing MS. Inversely, those who live in areas of relatively higher sun exposure and subsequently increased UVB radiation have a decreased risk of developing MS. As of 2019, the north–south gradient of incidence is still present and is increasing.
On the other hand, MS is more common in regions with northern European populations, so the geographic variation may simply reflect the global distribution of these high-risk populations.
A relationship between season of birth and MS lends support to this idea: in the Northern Hemisphere, fewer people born in November than in May are affected later in life.
Environmental factors may play a role during childhood, with several studies finding that people who move to a different region of the world before the age of 15 acquire the new region's risk of MS. If migration takes place after age 15, the person retains the risk of their home country. Some evidence indicates that the effect of moving may still apply to people older than 15.
There are some exceptions to the above-mentioned geographic pattern. These include ethnic groups that are at low risk and that live far from the equator such as the Sami, Amerindians, Canadian Hutterites, New Zealand Māori, and Canada's Inuit, as well as groups that have a relatively high risk and that live closer to the equator such as Sardinians, inland Sicilians, Palestinians, and Parsi.
Impact of temperature
MS symptoms may increase if body temperature is dysregulated. Fatigue is particularly affected.
Other
Smoking may be an independent risk factor for MS. Stress may also be a risk factor, although the evidence to support this is weak. Association with occupational exposures and toxins—mainly organic solvents—has been evaluated, but no clear conclusions have been reached. Vaccinations were studied as causal factors; most studies, though, show no association. Several other possible risk factors, such as diet and hormone intake, have been evaluated, but evidence on their relation with the disease is "sparse and unpersuasive". Gout occurs less than would be expected and lower levels of uric acid have been found in people with MS. This has led to the theory that uric acid is protective, although its exact importance remains unknown. Obesity during adolescence and young adulthood is a risk factor for MS.
Pathophysiology
Multiple sclerosis is an autoimmune disease, primarily mediated by T-cells. The three main characteristics of MS are the formation of lesions in the central nervous system (also called plaques), inflammation, and the destruction of myelin sheaths of neurons. These features interact in a complex and not yet fully understood manner to produce the breakdown of nerve tissue, and in turn, the signs and symptoms of the disease. Damage is believed to be caused, at least in part, by attack on the nervous system by a person's own immune system.
Immune dysregulation
As briefly detailed in the causes section of this article, MS is currently thought to stem from a failure of the body's immune system to kill off autoreactive T-cells and B-cells. The T-cell subpopulations currently thought to drive the development of MS are autoreactive CD8+ T-cells, CD4+ helper T-cells, and TH17 cells. These autoreactive T-cells produce substances called cytokines that induce an inflammatory immune response in the CNS, leading to the development of the disease. More recently, however, the role of autoreactive B-cells has been elucidated. Their contribution to the development of MS is implicated through the presence of oligoclonal IgG bands (antibodies produced by B-cells) in the CSF of patients with MS. The presence of these oligoclonal bands has been used as supportive evidence in establishing a diagnosis of MS. As described above, B-cells can also produce cytokines that induce an inflammatory immune response via activation of autoreactive T-cells. As such, higher levels of these autoreactive B-cells are associated with an increased number of lesions and greater neurodegeneration, as well as worse disability.
Another cell population that is increasingly implicated in MS is microglia. These cells are resident in, and keep watch over, the CNS, responding to pathogens by shifting between pro- and anti-inflammatory states. Microglia are involved in the formation of MS lesions and may be involved in other diseases that primarily affect the CNS white matter. However, because of their ability to switch between pro- and anti-inflammatory states, microglia have also been shown to assist in remyelination and subsequent neuron repair. As such, microglia are thought to participate in both acute and chronic MS lesions, with 40% of phagocytic cells in early active MS lesions being proinflammatory microglia.
Lesions
The name multiple sclerosis refers to the scars (sclerae – better known as plaques or lesions) that form in the nervous system. These lesions most commonly affect the white matter in the optic nerve, brain stem, basal ganglia, and spinal cord, or white matter tracts close to the lateral ventricles. The function of white matter cells is to carry signals between grey matter areas, where the processing is done, and the rest of the body. The peripheral nervous system is rarely involved.
To be specific, MS involves the loss of oligodendrocytes, the cells responsible for creating and maintaining a fatty layer—known as the myelin sheath—which helps the neurons carry electrical signals (action potentials). This results in a thinning or complete loss of myelin, and as the disease advances, the breakdown of the axons of neurons. When the myelin is lost, a neuron can no longer effectively conduct electrical signals. A repair process, called remyelination, takes place in the early phases of the disease, but the oligodendrocytes are unable to completely rebuild the cell's myelin sheath. Repeated attacks lead to successively less effective remyelinations, until a scar-like plaque is built up around the damaged axons. These scars are the origin of the symptoms and during an attack magnetic resonance imaging (MRI) often shows more than 10 new plaques. This could indicate that some number of lesions exist, below which the brain is capable of repairing itself without producing noticeable consequences. Another process involved in the creation of lesions is an abnormal increase in the number of astrocytes due to the destruction of nearby neurons. A number of lesion patterns have been described.
Inflammation
Apart from demyelination, the other sign of the disease is inflammation. Fitting with an immunological explanation, the inflammatory process is caused by T cells, a kind of lymphocytes that plays an important role in the body's defenses. T cells gain entry into the brain as a result of disruptions in the blood–brain barrier. The T cells recognize myelin as foreign and attack it, explaining why these cells are also called "autoreactive lymphocytes".
The attack on myelin starts inflammatory processes, which trigger other immune cells and the release of soluble factors like cytokines and antibodies. A further breakdown of the blood-brain barrier, in turn, causes many other damaging effects, such as swelling, activation of macrophages, and more activation of cytokines and other destructive proteins. Inflammation can potentially reduce transmission of information between neurons in at least three ways. The soluble factors released might stop neurotransmission by intact neurons. These factors could lead to or enhance the loss of myelin, or they may cause the axon to break down completely.
Blood-brain barrier
The blood-brain barrier (BBB) is a part of the capillary system that prevents the entry of T cells into the central nervous system. It may become permeable to these types of cells secondary to an infection by a virus or bacteria. After it repairs itself, typically once the infection has cleared, T cells may remain trapped inside the brain. Gadolinium cannot cross a normal BBB, so gadolinium-enhanced MRI is used to show BBB breakdowns.
MS fatigue
The pathophysiology and mechanisms causing MS fatigue are not well understood. MS fatigue can be affected by body heat, and this may differentiate MS fatigue from other primary fatigue. Fatigability (loss of strength) may increase perception of fatigue, but the two measures warrant independent assessment in clinical studies.
Diagnosis
Multiple sclerosis is typically diagnosed based on the presenting signs and symptoms, in combination with supporting medical imaging and laboratory testing. It can be difficult to confirm, especially early on, since the signs and symptoms may be similar to those of other medical problems.
McDonald criteria
The McDonald criteria, which focus on clinical, laboratory, and radiologic evidence of lesions at different times and in different areas, are the most commonly used method of diagnosis, with the Schumacher and Poser criteria being of mostly historical significance. The McDonald criteria state that patients with multiple sclerosis should have lesions which are disseminated in time (DIT) and disseminated in space (DIS), i.e. lesions which have appeared in different areas of the CNS and at different times. Below is an abbreviated outline of the 2017 McDonald criteria for the diagnosis of MS; a schematic sketch of this decision logic follows the list.
At least 2 clinical attacks with MRI showing 2 or more lesions characteristic of MS.
At least 2 clinical attacks with MRI showing 1 lesion characteristic of MS with clear historical evidence of a previous attack involving a lesion at a distinct location in the CNS.
At least 2 clinical attacks with MRI showing 1 lesion characteristic of MS, with DIS established by an additional clinical attack at a distinct CNS site or by MRI showing a lesion in a different characteristic CNS location.
1 clinical attack with MRI showing at least 2 lesions characteristic of MS, with DIT established by an additional attack, by MRI showing old MS lesion(s), or presence of oligoclonal bands in CSF.
1 clinical attack with MRI showing 1 lesion characteristic of MS, with DIS established by an additional attack at a different CNS site or by MRI showing old MS lesion(s), and DIT established by an additional attack, by MRI showing old MS lesion(s), or presence of oligoclonal bands in CSF.
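Read as a decision rule, the five pathways above reduce to a small function. The following is a minimal, non-clinical sketch of that abbreviated outline; the function and parameter names are invented for illustration, and the full 2017 criteria contain imaging and exclusion requirements this sketch omits.

```python
# Minimal sketch of the abbreviated 2017 McDonald outline above.
# Illustrative only: real diagnosis requires expert interpretation
# of clinical and MRI findings.

def meets_mcdonald_2017(attacks: int, lesions: int,
                        dis_shown: bool, dit_shown: bool,
                        csf_oligoclonal_bands: bool) -> bool:
    """attacks: number of clinical attacks;
    lesions: MRI lesions characteristic of MS;
    dis_shown: dissemination in space established (an additional or clearly
               historical attack at a distinct CNS site, or MRI evidence);
    dit_shown: dissemination in time established (an additional attack, or
               MRI evidence of older lesions)."""
    if attacks >= 2 and lesions >= 2:
        return True  # repeated attacks give DIT; multiple lesions give DIS
    if attacks >= 2 and lesions == 1:
        return dis_shown  # repeated attacks give DIT; DIS still required
    if attacks == 1 and lesions >= 2:
        return dit_shown or csf_oligoclonal_bands  # DIS given; DIT required
    if attacks == 1 and lesions == 1:
        return dis_shown and (dit_shown or csf_oligoclonal_bands)
    return False
```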
No single test (including biopsy) can provide a definitive diagnosis.
MRI
Magnetic resonance imaging (MRI) of the brain and spine may show areas of demyelination (lesions or plaques). Gadolinium can be administered intravenously as a contrast agent to highlight active plaques, and by elimination, demonstrate the existence of historical lesions not associated with symptoms at the moment of the evaluation.
Central vein signs (CVSs) have been proposed as a good indicator of MS in comparison with other conditions causing white lesions. One small study found fewer CVSs in older and hypertensive people. Further research on CVS as a biomarker for MS is ongoing.
In vivo vs post-mortem lesion visibility in MRI scans
Only postmortem MRI allows visualization of sub-millimetric lesions in cortical layers and in the cerebellar cortex.
Cerebrospinal fluid (lumbar puncture)
Testing of cerebrospinal fluid obtained from a lumbar puncture can provide evidence of chronic inflammation in the central nervous system. The cerebrospinal fluid is tested for oligoclonal bands of IgG on electrophoresis, which are inflammation markers found in 75–85% of people with MS.
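As an illustration of how a finding present in 75–85% of people with MS shifts diagnostic confidence, the sketch below applies Bayes' rule. The pre-test probability and the specificity used here are assumptions made for the arithmetic, not values from this article.

```python
# Bayes' rule for a positive test result. The sensitivity comes from the
# 75-85% range quoted above; the specificity and pre-test probability
# are ASSUMED values for illustration.

def post_test_probability(pre_test: float, sensitivity: float,
                          specificity: float) -> float:
    true_pos = pre_test * sensitivity
    false_pos = (1 - pre_test) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# 50% pre-test suspicion, 80% sensitivity, assumed 90% specificity:
print(round(post_test_probability(0.50, 0.80, 0.90), 2))  # 0.89
```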
Differential diagnosis
Several diseases present similarly to MS. Medical professionals use a patient's specific presentation, history, and exam findings to make an individualized differential. Red flags are findings that suggest an alternate diagnosis, although they do not rule out MS. Red flags include a patient younger than 15 or older than 60, less than 24 hours of symptoms, involvement of multiple cranial nerves, involvement of organs outside of the nervous system, and atypical lab and exam findings.
In an emergency setting, it is important to rule out a stroke or bleeding in the brain. Intractable vomiting, severe optic neuritis, or bilateral optic neuritis raises suspicion for neuromyelitis optica spectrum disorder (NMOSD). Infectious diseases that may look similar to multiple sclerosis include HIV, Lyme disease, and syphilis. Autoimmune diseases include neurosarcoidosis, lupus, Guillain-Barré syndrome, acute disseminated encephalomyelitis, and Behçet's disease. Psychiatric conditions such as anxiety or conversion disorder may also present in a similar way. Other rare diseases on the differential include CNS lymphoma, congenital leukodystrophies, and anti-MOG-associated myelitis.
Types and variants
Several phenotypes (commonly termed "types"), or patterns of progression, have been described. Phenotypes use the past course of the disease in an attempt to predict the future course. They are important not only for prognosis but also for treatment decisions.
The International Advisory Committee on Clinical Trials of MS describes four types of MS (revised in 2013) in what is known as the Lublin classification:
Clinically isolated syndrome (CIS)
Relapsing-remitting MS (RRMS)
Primary progressive MS (PPMS)
Secondary progressive MS (SPMS)
CIS can be characterised as a single lesion seen on MRI which is associated with signs or symptoms found in MS. Under the McDonald criteria, it does not completely fit the criteria to be diagnosed as MS, hence the name "clinically isolated syndrome". CIS can be seen as the first episode of demyelination in the central nervous system. To be classified as CIS, the attack must last at least 24 hours and be caused by inflammation or demyelination of the central nervous system. People with CIS may or may not go on to develop MS, but 30 to 70% of those who experience CIS will later develop MS.
RRMS is characterized by unpredictable relapses followed by periods of months to years of relative quiet (remission) with no new signs of disease activity. Deficits that occur during attacks may either resolve or leave problems, the latter in about 40% of attacks and being more common the longer a person has had the disease. This describes the initial course of 80% of individuals with MS.
PPMS occurs in roughly 10–20% of individuals with the disease, with no remission after the initial symptoms. It is characterized by progression of disability from onset, with no, or only occasional and minor, remissions and improvements. The usual age of onset for the primary progressive subtype is later than that of the relapsing-remitting subtype. It is similar to the age that secondary progressive usually begins in RRMS, around 40 years of age.
SPMS occurs in around 65% of those with initial RRMS, who eventually have progressive neurologic decline between acute attacks without any definite periods of remission. Occasional relapses and minor remissions may appear. The most common length of time between disease onset and conversion from RRMS to SPMS is 19 years.
Special courses
Independently of the types published by the MS associations, regulatory agencies such as the FDA often consider special courses, trying to reflect some clinical trial results on their approval documents. Some examples could be "highly active MS" (HAMS), "active secondary MS" (similar to the old progressive-relapsing) and "rapidly progressing PPMS".
A course in which deficits always resolve between attacks is sometimes referred to as "benign" MS, although people still build up some degree of disability in the long term. On the other hand, the term malignant multiple sclerosis is used to describe people with MS who reach a significant level of disability in a short period.
An international panel has published a standardized definition for the course HAMS.
Variants
Atypical variants of MS have been described; these include tumefactive multiple sclerosis, Balo concentric sclerosis, Schilder's diffuse sclerosis, and Marburg multiple sclerosis. Debate remains on whether they are MS variants or different diseases. Some diseases previously considered MS variants, such as Devic's disease, are now considered outside the MS spectrum.
Management
Although no cure for multiple sclerosis has been found, several therapies have proven helpful. Several effective treatments can decrease the number of attacks and the rate of progression. The primary aims of therapy are returning function after an attack, preventing new attacks, and preventing disability. Starting medications is generally recommended in people after the first attack when more than two lesions are seen on MRI.
The first approved medications used to treat MS were modestly effective, though were poorly tolerated and had many adverse effects. Several treatment options with better safety and tolerability profiles have been introduced, improving the prognosis of MS.
As with any medical treatment, medications used in the management of MS have several adverse effects. Alternative treatments are pursued by some people, despite the shortage of supporting evidence of efficacy.
Initial management of acute flare
During symptomatic attacks, administration of high doses of intravenous corticosteroids, such as methylprednisolone, is the usual therapy, with oral corticosteroids seeming to have a similar efficacy and safety profile. Although effective in the short term for relieving symptoms, corticosteroid treatments do not appear to have a significant impact on long-term recovery. The long-term benefit is unclear in optic neuritis as of 2020. The consequences of severe attacks that do not respond to corticosteroids might be treatable by plasmapheresis.
Chronic management
Relapsing-remitting multiple sclerosis
Multiple disease-modifying medications have been approved by regulatory agencies for RRMS; they are modestly effective at decreasing the number of attacks. Interferons and glatiramer acetate are first-line treatments and are roughly equivalent, reducing relapses by approximately 30%. Early-initiated long-term therapy is safe and improves outcomes.
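As a rough illustration of what "reducing relapses by approximately 30%" means for the annualized relapse rate (ARR), a standard trial endpoint, the arithmetic below assumes a baseline ARR; the baseline is illustrative, not a figure from this article.

```python
# Illustrative arithmetic only; the baseline ARR is an assumption.
baseline_arr = 0.5              # assumed: one relapse every two years untreated
relative_reduction = 0.30       # approximate first-line effect (see text)
treated_arr = baseline_arr * (1 - relative_reduction)
print(treated_arr)              # 0.35, i.e. roughly one relapse every ~3 years
```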
Treatment of CIS with interferons decreases the chance of progressing to clinical MS. Efficacy of interferons and glatiramer acetate in children has been estimated to be roughly equivalent to that of adults. The role of some newer agents such as fingolimod, teriflunomide, and dimethyl fumarate, is not yet entirely clear. Making firm conclusions about the best treatment is difficult, especially regarding the long‐term benefit and safety of early treatment, given the lack of studies directly comparing disease-modifying therapies or long-term monitoring of patient outcomes.
The relative effectiveness of different treatments is unclear, as most have only been compared to placebo or a small number of other therapies. Direct comparisons of interferons and glatiramer acetate indicate similar effects or only small differences in effects on relapse rate, disease progression, and MRI measures. There is high confidence that natalizumab, cladribine, or alemtuzumab decrease relapses over two years for people with RRMS. Natalizumab and interferon beta-1a (Rebif) may reduce relapses compared to both placebo and interferon beta-1a (Avonex), while interferon beta-1b (Betaseron), glatiramer acetate, and mitoxantrone may also prevent relapses. Evidence on relative effectiveness in reducing disability progression is unclear. There is moderate confidence that a two-year treatment with natalizumab slows disability progression for people with RRMS. All medications are associated with adverse effects that may influence their risk-to-benefit profiles.
Ublituximab was approved for medical use in the United States in December 2022.
Medications
Overview of medications available for MS.
Progressive multiple sclerosis
As of 2011, mitoxantrone was the only medication approved for secondary progressive MS. In this population, tentative evidence supports mitoxantrone moderately slowing the progression of the disease and decreasing rates of relapses over two years.
New approved medications continue to emerge. In March 2017, the FDA approved ocrelizumab as a treatment for primary progressive MS in adults, the first drug to gain that approval, with requirements for several Phase IV clinical trials. It is also used for the treatment of relapsing forms of multiple sclerosis, including clinically isolated syndrome, relapsing-remitting disease, and active secondary progressive disease in adults. According to a 2021 Cochrane review, ocrelizumab may reduce worsening of symptoms for primary progressive MS and probably increases unwanted effects, but makes little or no difference to the number of serious unwanted effects.
In 2019, siponimod and cladribine were approved in the United States for the treatment of secondary progressive multiple sclerosis (SPMS). Subsequently, ozanimod was approved in 2020 and ponesimod in 2021, both for the management of CIS, relapsing MS, and SPMS in the U.S., and of RRMS in Europe.
Ocrelizumab/hyaluronidase was approved for medical use in the United States in September 2024.
Adverse effects
The disease-modifying treatments have several adverse effects. One of the most common is irritation at the injection site for glatiramer acetate and the interferons (up to 90% with subcutaneous injections and 33% with intramuscular injections). Over time, a visible dent may develop at the injection site, due to the local destruction of fat tissue, known as lipoatrophy. Interferons may produce flu-like symptoms; some people taking glatiramer experience a post-injection reaction with flushing, chest tightness, heart palpitations, and anxiety, which usually lasts less than thirty minutes. More dangerous but much less common are liver damage from interferons, systolic dysfunction (12%), infertility and acute myeloid leukemia (0.8%) from mitoxantrone, and progressive multifocal leukoencephalopathy with natalizumab (in 1 of 600 people treated).
Fingolimod may give rise to hypertension and slowed heart rate, macular edema, elevated liver enzymes, or a reduction in lymphocyte levels. Tentative evidence supports the short-term safety of teriflunomide, with common side effects including headaches, fatigue, nausea, hair loss, and limb pain. There have also been reports of liver failure and PML with its use, and it is dangerous for fetal development. The most common side effects of dimethyl fumarate are flushing and gastrointestinal problems. While dimethyl fumarate may lead to a reduction in the white blood cell count, no cases of opportunistic infections were reported during trials.
Associated symptoms
Both medications and neurorehabilitation have been shown to improve some symptoms, though neither changes the course of the disease. Some symptoms have a good response to medication, such as bladder spasticity, while others are little changed. Equipment such as catheters for neurogenic bladder dysfunction or mobility aids can help improve functional status.
A multidisciplinary approach is important for improving quality of life; however, it is difficult to specify a 'core team' as many health services may be needed at different points in time. Multidisciplinary rehabilitation programs increase activity and participation of people with MS but do not influence impairment level. Studies investigating information provision in support of patient understanding and participation suggest that while interventions (written information, decision aids, coaching, educational programmes) may increase knowledge, the evidence of an effect on decision making and quality of life is mixed and of low certainty. There is limited evidence for the overall efficacy of individual therapeutic disciplines, though there is good evidence that specific approaches, such as exercise and psychological therapies, are effective. Cognitive training, alone or combined with other neuropsychological interventions, may show positive effects for memory and attention, though firm conclusions are not possible given small sample numbers, variable methodology, interventions, and outcome measures. The effectiveness of palliative approaches in addition to standard care is uncertain, due to lack of evidence. The effectiveness of interventions, including exercise, specifically for the prevention of falls in people with MS is uncertain, while there is some evidence of an effect on balance function and mobility. Cognitive behavioral therapy has been shown to be moderately effective for reducing MS fatigue. The evidence for the effectiveness of non-pharmacological interventions for chronic pain is insufficient to recommend such interventions alone; however, their use in combination with medications may be reasonable.
Non-pharmaceutical
There is some evidence that aquatic therapy is a beneficial intervention.
The spasticity associated with MS can be difficult to manage because of the progressive and fluctuating course of the disease. Although there is no firm conclusion on their efficacy in reducing spasticity, physical therapy interventions can be a safe and beneficial option for patients with multiple sclerosis. Physical therapy approaches, including vibration interventions, electrical stimulation, exercise therapy, standing therapy, and radial shock wave therapy (RSWT), have been beneficial for limiting spasticity, helping limit excitability, or increasing range of motion.
Alternative treatments
Over 50% of people with MS may use complementary and alternative medicine, although percentages vary depending on how alternative medicine is defined. Users are more frequently women, have had MS for a longer time, tend to be more disabled, and have lower levels of satisfaction with conventional healthcare. The evidence for the effectiveness of such treatments is in most cases weak or absent. Treatments of unproven benefit used by people with MS include dietary supplementation and regimens, vitamin D, relaxation techniques such as yoga, herbal medicine (including medical cannabis), hyperbaric oxygen therapy, self-infection with hookworms, reflexology, acupuncture, and mindfulness. Evidence suggests vitamin D supplementation, irrespective of the form and dose, provides no benefit for people with MS; this includes measures such as relapse recurrence, disability, and MRI lesions, while effects on health-related quality of life and fatigue are unclear. There is insufficient evidence supporting high-dose biotin, and some evidence for increased disease activity and higher risk of relapse with its use. A 2022 review of the effectiveness of cannabis and cannabinoids found that, compared with placebo, nabiximols probably reduces the severity of spasticity in the short term.
Prognosis
The availability of treatments that modify the course of multiple sclerosis beginning in the 1990s, known as disease-modifying therapies (DMTs), has improved prognosis. These treatments can reduce relapses and slow progression, but there is no cure.
The prognosis of MS depends on the subtype of the disease, and there is considerable individual variation in the progression of the disease. In relapsing MS, the most common subtype, a 2016 cohort study found that after a median of 16.8 years from onset, one in ten needed a walking aid, and almost two in ten transitioned to secondary progressive MS, a form characterized by more progressive decline. With treatments available in the 2020s, relapses can be eliminated or substantially reduced. However, "silent progression" of the disease still occurs.
In addition to secondary progressive MS (SPMS), a small proportion of people with MS (10–15%) experience progressive decline from the onset, known as primary progressive MS (PPMS). Most treatments have been approved for use in relapsing MS; there are fewer treatments with lower efficacy for progressive forms of MS. The prognosis for progressive MS is worse, with faster accumulation of disability, though with considerable individual variation. In untreated PPMS, the median time from onset to requiring a walking aid is estimated as seven years. In SPMS, a 2014 cohort study reported that people required a walking aid after an average of five years from the onset of SPMS, and were chair or bed-bound after an average of fifteen years.
After diagnosis of MS, characteristics that predict a worse course are male sex, older age, and greater disability at the time of diagnosis; female sex is associated with a higher relapse rate. Currently, no biomarker can accurately predict disease progression in every patient. Spinal cord lesions, abnormalities on MRI, and greater brain atrophy are predictive of a worse course, though brain atrophy as a predictor of disease course is experimental and not used in clinical practice. Early treatment leads to a better prognosis, but a higher relapse frequency when treated with DMTs is associated with a poorer prognosis. A 60-year longitudinal population study conducted in Norway found that those with MS had a life expectancy seven years shorter than the general population. Median life expectancy was 77.8 years for RRMS patients and 71.4 years for PPMS patients, compared to 81.8 years for the general population. Life expectancy for men was five years shorter than for women.
Epidemiology
MS is the most common autoimmune disorder of the central nervous system. The most recent estimate of the total number of people with MS was 2.8 million globally, with a prevalence of 36 per 100,000 people. Prevalence varies widely in different regions around the world: in Africa, five people per 100,000 are diagnosed with MS, compared to nine per 100,000 in South East Asia, 112 per 100,000 in the Americas, and 133 per 100,000 in Europe. Nearly one million people in the United States had MS in 2022.
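To make these rates concrete, the sketch below converts each regional prevalence into an expected number of cases for a hypothetical population of 10 million; the population size is chosen purely for illustration.

```python
# Prevalences per 100,000 as quoted above; the population size is assumed.
prevalence_per_100k = {"Africa": 5, "South East Asia": 9,
                       "Americas": 112, "Europe": 133}
population = 10_000_000
for region, rate in prevalence_per_100k.items():
    print(region, population * rate // 100_000)
# Africa 500, South East Asia 900, Americas 11200, Europe 13300
```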
Increasing rates of MS may be explained simply by better diagnosis. Studies on populational and geographical patterns have been common and have led to a number of theories about the cause.
MS usually appears in adults in their late twenties or early thirties but it can rarely start in childhood and after 50 years of age. The primary progressive subtype is more common in people in their fifties. Similarly to many autoimmune disorders, the disease is more common in women, and the trend may be increasing. As of 2020, globally it is about two times more common in women than in men, and the ratio of women to men with MS is as high as 4:1 in some countries. In children, it is even more common in females than males, while in people over fifty, it affects males and females almost equally.
History
Medical discovery
Robert Carswell (1793–1857), a British professor of pathology, and Jean Cruveilhier (1791–1873), a French professor of pathologic anatomy, described and illustrated many of the disease's clinical details, but did not identify it as a separate disease. Specifically, Carswell described the injuries he found as "a remarkable lesion of the spinal cord accompanied with atrophy". Under the microscope, Swiss pathologist Georg Eduard Rindfleisch (1836–1908) noted in 1863 that the inflammation-associated lesions were distributed around blood vessels.
The French neurologist Jean-Martin Charcot (1825–1893) was the first person to recognize multiple sclerosis as a distinct disease in 1868. Summarizing previous reports and adding his own clinical and pathological observations, Charcot called the disease sclérose en plaques.
Diagnosis history
The first attempt to establish a set of diagnostic criteria was also due to Charcot in 1868. He published what now is known as the "Charcot triad", consisting of nystagmus, intention tremor, and telegraphic speech (scanning speech). Charcot also observed cognition changes, describing his patients as having a "marked enfeeblement of the memory" and "conceptions that formed slowly".
The diagnosis was based on the Charcot triad and clinical observation until Schumacher made the first attempt to standardize criteria in 1965 by introducing some fundamental requirements: dissemination of the lesions in time (DIT) and space (DIS), and that "signs and symptoms cannot be explained better by another disease process". The DIT and DIS requirement was later inherited by the Poser and McDonald criteria, whose 2017 revision is in use.
During the 20th century, theories about the cause and pathogenesis were developed and effective treatments began to appear in the 1990s. Since the beginning of the 21st century, refinements of the concepts have taken place. The 2010 revision of the McDonald criteria allowed for the diagnosis of MS with only one proved lesion (CIS).
In 1996, the US National Multiple Sclerosis Society (NMSS) (Advisory Committee on Clinical Trials) defined the first version of the clinical phenotypes now in use. In this first version, they provided standardized definitions for four MS clinical courses: relapsing-remitting (RR), secondary progressive (SP), primary progressive (PP), and progressive relapsing (PR). In 2010, PR was dropped and CIS was incorporated. Three years later, the 2013 revision of the "phenotypes for the disease course" considered CIS one of the phenotypes of MS, making obsolete expressions like "conversion from CIS to MS". Other organizations have since proposed new clinical phenotypes, like HAMS (Highly Active MS).
Historical cases
There are several historical accounts of people who probably had MS and lived before or shortly after the disease was described by Charcot.
A young woman called Halldora, who lived in Iceland around 1200, suddenly lost her vision and mobility but recovered them seven days later. Saint Lidwina of Schiedam (1380–1433), a Dutch nun, may be one of the first clearly identifiable people with MS. From the age of 16 until her death at 53, she had intermittent pain, weakness of the legs, and vision loss: symptoms typical of MS. Both cases have led to the proposal of a "Viking gene" hypothesis for the dissemination of the disease.
Augustus Frederick d'Este (1794–1848), son of Prince Augustus Frederick, Duke of Sussex and Lady Augusta Murray and a grandson of George III of the United Kingdom, almost certainly had MS. D'Este left a detailed diary describing his 22 years living with the disease. His diary began in 1822 and ended in 1846, although it remained unknown until 1948. His symptoms began at age 28 with a sudden transient visual loss (amaurosis fugax) after the funeral of a friend. During his disease, he developed weakness in the legs, clumsiness of the hands, numbness, dizziness, bladder disturbance, and erectile dysfunction. In 1844, he began to use a wheelchair. Despite his illness, he kept an optimistic view of life. Another early account of MS was kept by the British diarist W. N. P. Barbellion, pen name of Bruce Frederick Cummings (1889–1919), who maintained a detailed log of his diagnosis and struggle. His diary was published in 1919 as The Journal of a Disappointed Man. Charles Dickens, a keen observer, described possible bilateral optic neuritis with reduced contrast vision and Uhthoff's phenomenon in the main female character of Bleak House (1852–1853), Esther Summerson.
Research
Epstein-Barr virus
As of 2022, how Epstein-Barr virus (EBV) contributes to the pathogenesis of MS is under active investigation, as are disease-modifying therapies; researchers are also seeking to understand how risk factors combine with EBV to initiate MS. Whether EBV is the only cause of MS might be better understood if an EBV vaccine is developed and shown to prevent MS as well.
Even though a variety of studies have shown the connection between an EBV infection and the later development of multiple sclerosis, the mechanisms behind this correlation are not completely clear, and several theories have been proposed to explain the relationship between the two diseases. The involvement of EBV-infected B-cells (B lymphocytes) and of anti-EBNA antibodies, which appear to be significantly higher in multiple sclerosis patients, is thought to play a crucial role in the development of the disease. This is supported by the fact that treatment directed against B-cells, e.g. ocrelizumab, reduces the symptoms of multiple sclerosis: annual relapses appear less frequently and disability progression is slower. A 2022 Stanford University study showed that during an EBV infection, molecular mimicry can occur, in which the immune system produces antibodies against the EBNA1 protein that can also bind to GlialCAM in the myelin. Additionally, the researchers observed a phenomenon that is uncommon in healthy individuals but often detected in multiple sclerosis patients: B-cells traffic to the brain and spinal cord, where they produce oligoclonal antibody bands. A majority of these oligoclonal bands have an affinity for the viral protein EBNA1, which is cross-reactive with GlialCAM. These antibodies are abundant in approximately 20–25% of multiple sclerosis patients and worsen the autoimmune demyelination, leading to a pathophysiological exacerbation of the disease. Furthermore, the intrathecal oligoclonal expansion with constant somatic hypermutation is unique to multiple sclerosis when compared to other neuroinflammatory diseases. The study also measured an abundance of antibodies using IGHV3–7 genes, which appears to be connected to disease progression. IGHV3–7-based antibodies bind with high affinity to EBNA1 and GlialCAM, actively driving the demyelination. It is probable that B-cells expressing IGHV3–7 genes entered the CSF and underwent affinity maturation after encountering GlialCAM, which consequently led to the production of high-affinity anti-GlialCAM antibodies. This was additionally shown in the EAE mouse model, where immunization with EBNA1 led to a strong B-cell response against GlialCAM, which worsened the EAE.
Human endogenous retroviruses
Two members of the human endogenous retroviruses-W (HERV-W) family, namely, ERVWE1 and MS-associated retrovirus (MSRV), may be co-factors in MS immunopathogenesis. HERVs constitute up to 8% of the human genome; most are epigenetically silent, but can be reactivated by exogenous viruses, proinflammatory conditions or oxidative stress.
Medications
Medications that influence voltage-gated sodium ion channels are under investigation as a potential neuroprotective strategy because of the hypothesized role of sodium in the pathological process leading to axonal injury and accumulating disability. There is insufficient evidence of an effect of sodium channel blockers for people with MS.
Pathogenesis
MS is a clinically defined entity with several atypical presentations. Some auto-antibodies have been found in atypical MS cases, giving rise to separate disease families and restricting the previously wider concept of MS.
Anti-AQP4 autoantibodies were found in neuromyelitis optica (NMO), which was previously considered an MS variant. A spectrum of diseases named NMOSD (NMO spectrum disorders) or anti-AQP4 diseases has been accepted. Some cases of MS present anti-MOG autoantibodies, mainly overlapping with the Marburg variant. Anti-MOG autoantibodies were found to be also present in ADEM, and a second spectrum of separate diseases is being considered. This spectrum is named inconsistently across different authors, but it is normally something similar to anti-MOG demyelinating diseases.
A third kind of autoantibody is also accepted: several anti-neurofascin autoantibodies that damage the nodes of Ranvier of neurons. These antibodies are more associated with peripheral nerve demyelination, but they were also found in PPMS and in combined central and peripheral demyelination (CCPD, which is considered another atypical MS presentation).
In addition to the significance of auto-antibodies in MS, four different patterns of demyelination have been reported, opening the door to consider MS as a heterogeneous disease.
Biomarkers
Since disease progression is the result of degeneration of neurons, the roles of proteins showing loss of nerve tissue such as neurofilaments, tau, and N-acetylaspartate are under investigation.
Improvements in neuroimaging techniques such as positron emission tomography (PET) or MRI carry promise for better diagnosis and prognosis predictions. Regarding MRI, several techniques have already shown some usefulness in research settings and could be introduced into clinical practice, such as double-inversion recovery sequences, magnetization transfer, diffusion tensor, and functional magnetic resonance imaging. These techniques are more specific for the disease than existing ones, but still lack some standardization of acquisition protocols and the creation of normative values. This is particularly the case for proton magnetic resonance spectroscopy, for which a number of methodological variations observed in the literature may underlie continued inconsistencies in central nervous system metabolic abnormalities, particularly in N-acetyl aspartate, myoinositol, choline, glutamate, GABA, and GSH, observed for multiple sclerosis and its subtypes. There are other techniques under development, including contrast agents capable of measuring levels of peripheral macrophages, inflammation, or neuronal dysfunction, and techniques that measure iron deposition, which could serve to determine the role of this feature in MS, or that of cerebral perfusion.
COVID-19
The hospitalization rate was found to be higher among individuals with MS and COVID-19 infection, at 10%, while the pooled infection rate is estimated at 4%. The pooled prevalence of death in hospitalized individuals with MS is estimated as 4%.
Metformin
A 2019 study in rats and a 2024 study in mice showed that metformin, a first-line medication for the treatment of type 2 diabetes, could promote remyelination. The drug is currently being researched in humans in the Octopus trial, a multi-arm, multi-stage trial focused on testing existing drugs for other conditions in patients with MS. Clinical trials in humans are ongoing in Belgium, for patients with non-active progressive MS; in the U.K., in combination with clemastine for the treatment of relapsing-remitting MS; and in Canada, for MS patients up to 25 years old.
Other emerging theories
One emerging hypothesis, referred to as the hygiene hypothesis, suggests that early-life exposure to infectious agents helps to develop the immune system and reduces susceptibility to allergies and autoimmune disorders. The hygiene hypothesis has been linked with MS and microbiome hypotheses.
It has also been proposed that certain bacteria found in the gut use molecular mimicry to infiltrate the brain via the gut–brain axis, initiating an inflammatory response and increasing blood-brain barrier permeability. Vitamin D levels have also been correlated with MS; lower levels of vitamin D correspond to an increased risk of MS, consistent with the reduced prevalence in the tropics – an area with more vitamin D-generating sunlight – and strengthening the case for the impact of geographical location on MS development. MS mechanisms begin when peripheral autoreactive effector CD4+ T cells become activated and move into the CNS. Antigen-presenting cells localize the reactivation of autoreactive effector CD4+ T cells once they have entered the CNS, attracting more T cells and macrophages to form the inflammatory lesion. In MS patients, macrophages and microglia assemble at locations where demyelination and neurodegeneration are actively occurring, and microglial activation is more apparent in the normal-appearing white matter of MS patients. Astrocytes generate neurotoxic chemicals like nitric oxide and TNFα, attract neurotoxic inflammatory monocytes to the CNS, and are responsible for astrogliosis, the scarring that prevents the spread of neuroinflammation and kills neurons inside the scarred area.
In 2024, scientists shared research tracing MS-risk gene variants to an ancient migration into northern Europe from the Yamnaya culture region around 5,000 years ago. The variants protected ancient cattle herders from animal diseases, but under modern lifestyles, diets, and better hygiene, the same variants now confer the higher risk of MS seen today.
| Biology and health sciences | Non-infectious disease | null |
50605 | https://en.wikipedia.org/wiki/Cerebral%20palsy | Cerebral palsy | Cerebral palsy (CP) is a group of movement disorders that appear in early childhood. Signs and symptoms vary among people and over time, but include poor coordination, stiff muscles, weak muscles, and tremors. There may be problems with sensation, vision, hearing, and speech. Often, babies with cerebral palsy do not roll over, sit, crawl or walk as early as other children. Other symptoms may include seizures and problems with thinking or reasoning. While symptoms may get more noticeable over the first years of life, underlying problems do not worsen over time.
Cerebral palsy is caused by abnormal development or damage to the parts of the brain that control movement, balance, and posture. Most often, the problems occur during pregnancy, but may occur during childbirth or shortly afterwards. Often, the cause is unknown. Risk factors include preterm birth, being a twin, certain infections or exposure to methylmercury during pregnancy, a difficult delivery, and head trauma during the first few years of life. New studies suggest that inherited genetic causes play a role in 25% of cases, whereas formerly it was believed that 2% of cases were genetically determined.
Sub-types are classified based on the specific problems present. For example, those with stiff muscles have spastic cerebral palsy, those with poor coordination in locomotion have ataxic cerebral palsy, and those with writhing movements have dyskinetic cerebral palsy. Diagnosis is based on the child's development. Blood tests and medical imaging may be used to rule out other possible causes.
Some causes of CP are preventable through immunization of the mother, and efforts to prevent head injuries in children such as improved safety. There is no known cure for CP, but supportive treatments, medication and surgery may help individuals. This may include physical therapy, occupational therapy and speech therapy. Mouse NGF has been shown to improve outcomes and has been available in China since 2003. Medications such as diazepam, baclofen and botulinum toxin may help relax stiff muscles. Surgery may include lengthening muscles and cutting overly active nerves. Often, external braces and Lycra splints and other assistive technology are helpful with mobility. Some affected children can achieve near normal adult lives with appropriate treatment. While alternative medicines are frequently used, there is no evidence to support their use. Potential treatments are being examined, including stem cell therapy. However, more research is required to determine if it is effective and safe.
Cerebral palsy is the most common movement disorder in children, occurring in about 2.1 per 1,000 live births. It has been documented throughout history, with the first known descriptions occurring in the work of Hippocrates in the 5th century BCE. Extensive study began in the 19th century by William John Little, after whom spastic diplegia was called "Little's disease". William Osler first named it "cerebral palsy", from the German term meaning "cerebral child-paralysis".
Signs and symptoms
Cerebral palsy is defined as "a group of permanent disorders of the development of movement and posture, causing activity limitation, that are attributed to non-progressive disturbances that occurred in the developing fetal or infant brain." While movement problems are the central feature of CP, difficulties with thinking, learning, feeling, communication and behavior often co-occur, with 28% having epilepsy, 58% having difficulties with communication, at least 42% having problems with their vision, and 23–56% having learning disabilities. Muscle contractions in people with cerebral palsy-related high muscle tone are commonly thought to arise from overactivation. Although most people with CP have problems with increased muscle tone, some have low muscle tone instead. High muscle tone can either be due to spasticity or dystonia.
Cerebral palsy is characterized by abnormal muscle tone, reflexes, or motor development and coordination. The neurological lesion is primary and permanent, while the orthopedic manifestations are secondary to high muscle tone and progressive. In cerebral palsy with high muscle tone, unequal growth between muscle-tendon units and bone eventually leads to bone and joint deformities. At first, deformities are dynamic. Over time, deformities tend to become static, and joint contractures develop. Deformities in general, and static deformities (joint contractures) in particular, cause increasing gait difficulties in the form of tip-toeing gait, due to tightness of the Achilles tendon, and scissoring gait, due to tightness of the hip adductors. These gait patterns are among the most common gait abnormalities in children with cerebral palsy. However, the orthopedic manifestations of cerebral palsy are diverse. Additionally, crouch gait (also described as knee flexion gait) is prevalent among children who are able to walk. The effects of cerebral palsy fall on a continuum of motor dysfunction, ranging from slight clumsiness at the mild end of the spectrum to impairments so severe that they render coordinated movement virtually impossible at the other end.
Babies born with severe cerebral palsy often have irregular posture; their bodies may be either very floppy or very stiff. Birth defects, such as spinal curvature, a small jawbone, or a small head sometimes occur along with CP. Symptoms may appear or change as a child gets older. Babies born with cerebral palsy do not immediately present with symptoms. Classically, CP becomes evident when the baby reaches the developmental stage at 6 to 9 months and is starting to mobilise, where preferential use of limbs, asymmetry, or gross motor developmental delay is seen.
Drooling is common among children with cerebral palsy, which can have a variety of impacts including social rejection, impaired speaking, damage to clothing and books, and mouth infections. It can additionally cause choking.
An average of 55.5% of people with cerebral palsy experience lower urinary tract symptoms, with storage symptoms more common than voiding symptoms. Those with voiding symptoms and pelvic floor overactivity can deteriorate as adults and experience upper urinary tract dysfunction.
Children with CP may also have sensory processing issues. Adults with cerebral palsy have a higher risk of respiratory failure.
Skeleton
For bones to attain their normal shape and size, they require the stresses from normal musculature. People with cerebral palsy are at risk of low bone mineral density. The shafts of the bones are often thin (gracile), and become thinner during growth. When compared to these thin shafts (diaphyses), the centres (metaphyses) often appear quite enlarged (ballooning). Due to more than normal joint compression caused by muscular imbalances, articular cartilage may atrophy, leading to narrowed joint spaces. Depending on the degree of spasticity, a person with the spastic form of CP may exhibit a variety of angular joint deformities. Because vertebral bodies need vertical gravitational loading forces to develop properly, spasticity and an abnormal gait can hinder proper or full bone and skeletal development. People with CP tend to be shorter in height than the average person because their bones are not allowed to grow to their full potential. Sometimes bones grow to different lengths, so the person may have one leg longer than the other.
Children with CP are prone to low trauma fractures, particularly children with higher Gross Motor Function Classification System (GMFCS) levels who cannot walk. This further affects a child's mobility, strength, and experience of pain, and can lead to missed schooling or child abuse suspicions. These children generally have fractures in the legs, whereas non-affected children mostly fracture their arms in the context of sporting activities.
Hip dislocation and ankle equinus or plantar flexion deformity are the two most common deformities among children with cerebral palsy. Additionally, flexion deformity of the hip and knee can occur. Torsional deformities of long bones such as the femur and tibia are also encountered, among others. Children may develop scoliosis before the age of 10 – the estimated prevalence of scoliosis in children with CP is between 21% and 64%. Higher levels of impairment on the GMFCS are associated with scoliosis and hip dislocation. Scoliosis can be corrected with surgery, but CP makes surgical complications more likely, even with improved techniques. Hip migration can be managed by soft tissue procedures such as adductor musculature release. Advanced degrees of hip migration or dislocation can be managed by more extensive procedures such as femoral and pelvic corrective osteotomies. Both soft tissue and bony procedures aim at prevention of hip dislocation in the early phases of disease, or at hip containment and restoration of anatomy in the late phases. Equinus deformity is managed by conservative methods, especially when dynamic. If fixed/static deformity ensues, surgery may become mandatory.
Growth spurts during puberty can make walking more difficult for people with CP and high muscle tone.
Eating
Due to sensory and motor impairments, those with CP may have difficulty preparing food, holding utensils, or chewing and swallowing. An infant with CP may not be able to suck, swallow or chew. Gastro-oesophageal reflux is common in children with CP. Children with CP may have too little or too much sensitivity around and in the mouth. Poor balance when sitting, lack of control of the head, mouth, and trunk, not being able to bend the hips enough to allow the arms to stretch forward to reach and grasp food or utensils, and lack of hand-eye coordination can make self-feeding difficult. Feeding difficulties are related to higher GMFCS levels. Dental problems can also contribute to difficulties with eating. Pneumonia is also common where eating difficulties exist, caused by undetected aspiration of food or liquids. Fine finger dexterity, like that needed for picking up a utensil, is more frequently impaired than gross manual dexterity, like that needed for spooning food onto a plate. Grip strength impairments are less common.
Children with severe cerebral palsy, particularly those with oropharyngeal issues, are at risk of undernutrition; because of challenges in feeding, children with cerebral palsy are at a greater risk of malnutrition generally. Triceps skin fold tests have been found to be a very reliable indicator of malnutrition in children with cerebral palsy.
Language
Speech and language disorders are common in people with cerebral palsy. The incidence of dysarthria is estimated to range from 31% to 88%, and around a quarter of people with CP are non-verbal. Speech problems are associated with poor respiratory control, laryngeal and velopharyngeal dysfunction, and oral articulation disorders that are due to restricted movement in the oral-facial muscles. There are three major types of dysarthria in cerebral palsy: spastic, dyskinetic (athetotic), and ataxic.
Early use of augmentative and alternative communication systems may assist the child in developing spoken language skills. Overall language delay is associated with problems of cognition, deafness, and learned helplessness. Children with cerebral palsy are at risk of learned helplessness and becoming passive communicators, initiating little communication. Early intervention with this clientele, and their parents, often targets situations in which children communicate with others so that they learn that they can control people and objects in their environment through this communication, including making choices, decisions, and mistakes.
Pain and sleep
Pain is common and may result from the inherent deficits associated with the condition, along with the numerous procedures children typically face. When children with cerebral palsy are in pain, they experience worse muscle spasms. Pain is associated with tight or shortened muscles, abnormal posture, stiff joints, and unsuitable orthoses, among other factors. Hip migration or dislocation is a recognizable source of pain in children with CP, especially in the adolescent population. Nevertheless, the adequate scoring and scaling of pain in children with CP remains challenging. Pain in CP has a number of different causes, and different pains respond to different treatments.
There is also a high likelihood of chronic sleep disorders secondary to both physical and environmental factors. Children with cerebral palsy have significantly higher rates of sleep disturbance than typically developing children. Babies with cerebral palsy who have stiffness issues might cry more and be harder to put to sleep than non-disabled babies, or "floppy" babies might be lethargic. Chronic pain is under-recognized in children with cerebral palsy, even though three out of four children with cerebral palsy experience pain. Adults with CP also experience more pain than the general population.
Associated disorders
Associated disorders include intellectual disabilities, seizures, muscle contractures, abnormal gait, osteoporosis, communication disorders, malnutrition, sleep disorders, and mental health disorders, such as depression and anxiety. Epilepsy often appears before the child is one year old, or otherwise before the age of four or five. In addition, functional gastrointestinal abnormalities contributing to bowel obstruction, vomiting, and constipation may also arise. Adults with cerebral palsy may have ischemic heart disease, cerebrovascular disease, cancer, and trauma more often. Obesity, and a more severe Gross Motor Function Classification System level in particular, are considered risk factors for multimorbidity in people with cerebral palsy. Other medical issues can be mistaken for symptoms of cerebral palsy, and so may not be treated correctly.
Related conditions can include apraxia, sensory impairments, urinary incontinence, fecal incontinence, or behavioural disorders.
Seizure management is more difficult in people with CP as seizures often last longer. Epilepsy and asthma are common co-occurring diseases in adults with CP. The associated disorders that co-occur with cerebral palsy may be more disabling than the motor function problems.
Managing respiratory illnesses in children with severe CP is considered complex due to the need to manage oropharyngeal dysphagia of both food/drink and saliva, gastroesophageal reflux, motor disorders, upper airway obstruction during sleep, and malnutrition, among other factors.
Causes
Cerebral palsy is due to abnormal development or damage occurring to the developing brain. This damage can occur during pregnancy, delivery, the first month of life, or less commonly in early childhood. Structural problems in the brain are seen in 80% of cases, most commonly within the white matter.
More than three-quarters of cases are believed to result from issues that occur during pregnancy. Most children who are born with cerebral palsy have more than one risk factor associated with CP. Cerebral palsy is not contagious and cannot be contracted in adulthood; it almost always develops in utero, prior to birth.
While in certain cases there is no identifiable cause, typical causes include problems in intrauterine development (e.g. exposure to radiation, infection, fetal growth restriction), hypoxia of the brain (thrombotic events, placental insufficiency, umbilical cord prolapse), birth trauma during labor and delivery, and complications around birth or during childhood.
In Africa, birth asphyxia, high bilirubin levels, and central nervous system infections in newborns are the main causes. Many cases of CP in Africa could be prevented with better resources.
Preterm birth
Between 40% and 50% of all children who develop cerebral palsy were born prematurely. Most of these cases (75–90%) are believed to be due to issues that occur around the time of birth, often just after birth. Multiple-birth infants are also more likely than single-birth infants to have CP. They are also more likely to be born with a low birth weight.
In those born weighing between 1 kg (2.2 lb) and 1.5 kg (3.3 lb), CP occurs in 6%; among those born before 28 weeks of gestation, it occurs in 8%. In those born between 34 and 37 weeks, the risk is 0.4% (three times the normal rate). Genetic factors are believed to play an important role in prematurity and cerebral palsy generally.
Term infants
In babies who are born at term, risk factors include problems with the placenta, birth defects, low birth weight, breathing meconium into the lungs, a delivery requiring either the use of instruments or an emergency Caesarean section, birth asphyxia, seizures just after birth, respiratory distress syndrome, low blood sugar, and infections in the baby.
It remains unclear how much of a role birth asphyxia plays as a cause, and it is unclear if the size of the placenta plays a role. It is evident, however, that in advanced countries most cases of cerebral palsy in term or near-term neonates have explanations other than asphyxia.
Genetics
Cerebral palsy is not commonly considered a genetic disease. About 2% of all CP cases are expected to be inherited, with glutamate decarboxylase-1 being one of the possible enzymes involved. Most inherited cases are autosomal recessive. However, the vast majority of CP cases are connected to brain damage during birth and in infancy. There is a small percentage of CP cases caused by brain damage that stemmed from the prenatal period, which is estimated to be less than 5% of CP cases overall. Moreover, there is no one reason why some CP cases come from prenatal brain damage, and it is not known if those cases have a genetic basis.
Cerebellar hypoplasia is sometimes genetic and can cause ataxic cerebral palsy.
Early childhood
After birth, other causes include toxins, severe jaundice, lead poisoning, physical brain injury, stroke, abusive head trauma, incidents involving hypoxia to the brain (such as near drowning), and encephalitis or meningitis.
Others
Infections in the mother, even those not easily detected, can triple the risk of the child developing cerebral palsy. Infection of the fetal membranes known as chorioamnionitis increases the risk.
Intrauterine and neonatal insults (many of which are infectious) increase the risk.
Rh blood type incompatibility can cause the mother's immune system to attack the baby's red blood cells.
It has been hypothesised that some cases of cerebral palsy are caused by the death in very early pregnancy of an identical twin.
Diagnosis
The diagnosis of cerebral palsy has historically rested on the person's history and physical examination and is generally assessed at a young age. A general movements assessment, which involves measuring movements that occur spontaneously among those less than four months of age, appears most accurate. Children who are more severely affected are more likely to be noticed and diagnosed earlier. Abnormal muscle tone, delayed motor development and persistence of primitive reflexes are the main early symptoms of CP. Symptoms and diagnosis typically occur by the age of two, although depending on factors like malformations and congenital issues, persons with milder forms of cerebral palsy may be over the age of five, if not in adulthood, when finally diagnosed.
Cognitive assessments and medical observations are also useful to help confirm a diagnosis. Additionally, evaluations of the child's mobility, speech and language, hearing, vision, gait, feeding and digestion help determine the extent of the disorder. Early diagnosis and intervention are seen as a key part of managing cerebral palsy. Machine learning algorithms facilitate automatic early diagnosis, with methods such as deep neural networks and geometric feature fusion producing high accuracy in predicting cerebral palsy from short videos. Cerebral palsy is classified as a developmental disability.
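As an illustration of the video-based approach, the minimal sketch below shows the general shape of such a pipeline in Python, assuming pose keypoints have already been extracted from short infant videos. The kinematic features, the synthetic data and the baseline classifier here are hypothetical stand-ins, not the published methods, which the studies above describe as using deep neural networks and geometric feature fusion.

# Illustrative sketch only: predicting later CP diagnosis from infant
# movement videos, assuming pose keypoints were extracted beforehand.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def movement_features(keypoints):
    # keypoints: array of shape (frames, joints, 2) of x/y joint positions.
    velocity = np.diff(keypoints, axis=0)        # frame-to-frame displacement
    speed = np.linalg.norm(velocity, axis=-1)    # shape (frames - 1, joints)
    # Hypothetical kinematic summary: mean and variability of joint speed.
    return np.concatenate([speed.mean(axis=0), speed.std(axis=0)])

rng = np.random.default_rng(0)
# Synthetic stand-in data: 40 videos, 300 frames, 17 tracked joints each.
X = np.stack([movement_features(rng.normal(size=(300, 17, 2))) for _ in range(40)])
y = rng.integers(0, 2, size=40)                  # 1 = later CP diagnosis
clf = LogisticRegression(max_iter=1000)          # simple baseline, not a deep network
print(cross_val_score(clf, X, y, cv=5).mean())   # cross-validated accuracy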
Once a person is diagnosed with cerebral palsy, further diagnostic tests are optional. Neuroimaging with CT or MRI is warranted when the cause of a person's cerebral palsy has not been established. An MRI is preferred over CT, due to diagnostic yield and safety. When abnormal, evidence from neuroimaging may suggest the timing of the initial damage. The CT or MRI is also capable of revealing treatable conditions, such as hydrocephalus, porencephaly, arteriovenous malformation, subdural hematomas and hygromas, and a vermian tumour (which a few studies suggest are present 5–22% of the time). Furthermore, abnormalities detected by neuroimaging may indicate a high likelihood of associated conditions, such as epilepsy and intellectual disability. There is a small risk associated with sedating children to facilitate a clear MRI.
The age when CP is diagnosed is important, but medical professionals disagree over the best age to make the diagnosis. The earlier CP is diagnosed correctly, the better the opportunities are to provide the child with physical and educational help, but there might be a greater chance of confusing CP with another problem, especially if the child is 18 months of age or younger. Infants may have temporary problems with muscle tone or control that can be confused with CP, which is permanent. A metabolism disorder or tumors in the nervous system may appear to be CP; metabolic disorders, in particular, can produce brain problems that look like CP on an MRI. Disorders that deteriorate the white matter in the brain and problems that cause spasms and weakness in the legs, may be mistaken for CP if they first appear early in life. However, these disorders get worse over time, and CP does not (although it may change in character). In infancy it may not be possible to tell the difference between them. In the UK, not being able to sit independently by the age of 8 months is regarded as a clinical sign for further monitoring. Fragile X syndrome (a cause of autism and intellectual disability) and general intellectual disability must also be ruled out. Cerebral palsy specialist John McLaughlin recommends waiting until the child is 36 months of age before making a diagnosis because, by that age, motor capacity is easier to assess.
Classification
CP is classified by the types of motor impairment of the limbs or organs, and by restrictions to the activities an affected person may perform. The Gross Motor Function Classification System-Expanded and Revised and the Manual Ability Classification System are used to describe mobility and manual dexterity in people with cerebral palsy, and recently the Communication Function Classification System, and the Eating and Drinking Ability Classification System have been proposed to describe those functions. There are three main CP classifications by motor impairment: spastic, ataxic, and dyskinetic. Additionally, there is a mixed type that shows a combination of features of the other types. These classifications reflect the areas of the brain that are damaged.
Cerebral palsy is also classified according to the topographic distribution of muscle spasticity. This method classifies children as diplegic (bilateral involvement with leg involvement greater than arm involvement), hemiplegic (unilateral involvement), or quadriplegic (bilateral involvement with arm involvement equal to or greater than leg involvement).
Spastic
Spastic cerebral palsy is the type of cerebral palsy characterized by spasticity or high muscle tone, often resulting in stiff, jerky movements. It is itself an umbrella term encompassing spastic hemiplegia, spastic diplegia, spastic quadriplegia and – where solely one limb or one specific area of the body is affected – spastic monoplegia. Spastic cerebral palsy affects the motor cortex of the brain, the portion of the cerebral cortex responsible for the planning and completion of voluntary movement. Spastic CP is the most common type of cerebral palsy overall, representing about 80% of cases. Botulinum toxin is effective in decreasing spasticity and can help increase range of motion, which may mitigate CP's effects on the growing bones of children. There may be an improvement in motor function and ability to walk in treated children; however, the main benefit derived from botulinum toxin A comes from its ability to reduce muscle tone and spasticity and thus prevent or delay the development of fixed muscle contractures.
Ataxic
Ataxic cerebral palsy is observed in approximately 5–10% of all cases of cerebral palsy, making it the least frequent form of cerebral palsy. Ataxic cerebral palsy is caused by damage to cerebellar structures. Because of the damage to the cerebellum, which is essential for coordinating muscle movements and balance, patients with ataxic cerebral palsy experience problems in coordination, specifically in their arms, legs, and trunk. Ataxic cerebral palsy is known to decrease muscle tone. The most common manifestation of ataxic cerebral palsy is intention (action) tremor, which is especially apparent when carrying out precise movements, such as tying shoe laces or writing with a pencil. This symptom gets progressively worse as the movement persists, making the hand shake. As the hand gets closer to accomplishing the intended task, the trembling intensifies, which makes it even more difficult to complete.
Dyskinetic
Dyskinetic cerebral palsy (sometimes abbreviated DCP) is primarily associated with damage to the basal ganglia and the substantia nigra in the form of lesions that occur during brain development due to bilirubin encephalopathy and hypoxic-ischemic brain injury. DCP is characterized by both hypertonia and hypotonia, due to the affected individual's inability to control muscle tone. Clinical diagnosis of DCP typically occurs within 18 months of birth and is primarily based upon motor function and neuroimaging techniques.
Dyskinetic cerebral palsy is an extrapyramidal form of cerebral palsy. It can be divided into two different groups: choreoathetosis and dystonia. Choreo-athetotic CP is characterized by involuntary movements, whereas dystonic CP is characterized by slow, strong contractions, which may occur locally or encompass the whole body.
Mixed
Mixed cerebral palsy shows symptoms of dyskinetic, ataxic and spastic CP appearing simultaneously, each to varying degrees. Mixed CP is the most difficult to treat as it is extremely heterogeneous and sometimes unpredictable in its symptoms and development over the lifespan.
Gait classification
In patients with spastic hemiplegia or diplegia, various gait patterns can be observed, the exact form of which can only be described with the help of complex gait analysis systems. To facilitate communication in the interdisciplinary team between those affected, doctors, physiotherapists and orthotists, a simple description of the gait pattern is useful. In 2001, J. Rodda and H. K. Graham described how gait patterns of CP patients can be more easily recognized, and defined gait types which they compared in a classification. They also described that gait patterns can vary with age. Building on this, the Amsterdam Gait Classification was developed at the VU University Medical Center (VU medisch centrum) in Amsterdam.
A special feature of this classification is that it makes different gait patterns easy to recognize and can be used in CP patients in whom only one leg or both legs are affected. According to the Amsterdam Gait Classification, five gait types are described. To assess the gait pattern, the patient is viewed visually or via a video recording from the side of the leg to be assessed. At the point in time at which the leg to be viewed is in mid stance and the other leg is in mid swing, the knee angle and the contact of the foot with the ground are assessed.
Classification of the gait pattern according to the Amsterdam Gait Classification: In gait type 1, the knee angle is normal and the foot contact is complete. In gait type 2, the knee angle is hyperextended and the foot contact is complete. In gait type 3, the knee angle is hyperextended and foot contact is incomplete (only on the forefoot). In gait type 4, the knee angle is bent and foot contact is incomplete (only on the forefoot). With gait type 5, the knee angle is bent and the foot contact is complete.
Gait type 5 is also known as crouch gait.
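Because the classification reduces to two mid-stance observations per leg, its decision rules can be written out directly. The following minimal sketch encodes the five gait types exactly as described above, taking the knee angle and foot contact as categorical inputs; it is an illustration of the rules only, not a clinical tool, since the assessment itself is performed visually or from video.

# The Amsterdam Gait Classification rules described above, as a lookup.
def amsterdam_gait_type(knee_angle, foot_contact):
    # knee_angle: "normal", "hyperextended" or "bent" (observed at mid stance);
    # foot_contact: "complete" or "incomplete" (forefoot only).
    rules = {
        ("normal", "complete"): 1,
        ("hyperextended", "complete"): 2,
        ("hyperextended", "incomplete"): 3,
        ("bent", "incomplete"): 4,
        ("bent", "complete"): 5,  # crouch gait
    }
    try:
        return rules[(knee_angle, foot_contact)]
    except KeyError:
        raise ValueError("combination not covered by the five gait types")

print(amsterdam_gait_type("bent", "complete"))  # -> 5, crouch gait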
Prevention
Because the causes of CP are varied, a broad range of preventive interventions have been investigated.
Electronic fetal monitoring has not helped to prevent CP, and in 2014 the American College of Obstetricians and Gynecologists, the Royal Australian and New Zealand College of Obstetricians and Gynaecologists, and the Society of Obstetricians and Gynaecologists of Canada acknowledged that there are no long-term benefits of electronic fetal monitoring. Before this, electronic fetal monitoring had been widely used to prop up obstetric litigation.
In those at risk of an early delivery, magnesium sulphate appears to decrease the risk of cerebral palsy. It is unclear if it helps those who are born at term. In those at high risk of preterm labor a review found that moderate to severe CP was reduced by the administration of magnesium sulphate, and that adverse effects on the babies from the magnesium sulphate were not significant. Mothers who received magnesium sulphate could experience side effects such as respiratory depression and nausea. However, guidelines for the use of magnesium sulfate in mothers at risk of preterm labour are not strongly adhered to; in 2017 only 2 in 3 eligible women in the UK received the medication despite it being recommended by NICE guidelines. An NHS quality improvement programme increased its usage in England from 71% in 2018 to 83% in 2020.
Caffeine is used to treat apnea of prematurity and reduces the risk of cerebral palsy in premature babies, but there are also concerns of long-term negative effects. Moderate-quality evidence indicates that giving women antibiotics during preterm labor before their membranes have ruptured (i.e. before the water has broken) may increase the risk of cerebral palsy for the child. Additionally, for preterm babies for whom there is a chance of fetal compromise, allowing the birth to proceed rather than trying to delay it may lead to an increased risk of cerebral palsy in the child. Corticosteroids are sometimes taken by pregnant women expecting a preterm birth to provide neuroprotection to their baby. Taking corticosteroids during pregnancy is shown to have no significant correlation with developing cerebral palsy in preterm births.
Cooling high-risk full-term babies shortly after birth may reduce disability, but this may only be useful for some forms of the brain damage that causes CP.
Management
Over time, the approach to CP management has shifted away from narrow attempts to fix individual physical problems such as spasticity in a particular limb to making such treatments part of a larger goal of maximizing the person's independence and community engagement. However, the evidence base for the effectiveness of intervention programs reflecting the philosophy of independence has not yet caught up: effective interventions for body structures and functions have a strong evidence base, but evidence is lacking for effective interventions targeted toward participation, environment, or personal factors. There is also no good evidence to show that an intervention that is effective at the body-specific level will result in an improvement at the activity level or vice versa. Although such cross-over benefit might happen, not enough high-quality studies have been done to demonstrate it.
Because cerebral palsy has "varying severity and complexity" across the lifespan, it can be considered a collection of conditions for management purposes. A multidisciplinary approach for cerebral palsy management is recommended, focusing on "maximising individual function, choice and independence" in line with the International Classification of Functioning, Disability and Health's goals. The team may include a paediatrician, a health visitor, a social worker, a physiotherapist, an orthotist, a speech and language therapist, an occupational therapist, a teacher specialising in helping children with visual impairment, an educational psychologist, an orthopaedic surgeon, a neurologist and a neurosurgeon.
Various forms of therapy are available to people living with cerebral palsy as well as caregivers and parents. Treatment may include one or more of the following: physical therapy; occupational therapy; speech therapy; water therapy; drugs to control seizures, alleviate pain, or relax muscle spasms (e.g. benzodiazepines); surgery to correct anatomical abnormalities or release tight muscles; braces and other orthotic devices; rolling walkers; and communication aids such as computers with attached voice synthesisers. Intensive rehabilitation is practiced in certain countries, but obtaining reliable data on its medium and long-term effectiveness is challenging.
Surgical intervention in children with CP may include various orthopaedic or neurological surgeries to improve quality of life, such as tendon releases, hip rotation, spinal fusion, selective dorsal rhizotomy, or placement of an intrathecal baclofen pump.
A Cochrane review published in 2004 found a trend toward the benefit of speech and language therapy for children with cerebral palsy but noted the need for high-quality research. A 2013 systematic review found that many of the therapies used to treat CP have no good evidence base; the treatments with the best evidence are medications (anticonvulsants, botulinum toxin, bisphosphonates, diazepam), therapy (bimanual training, casting, constraint-induced movement therapy, context-focused therapy, fitness training, goal-directed training, hip surveillance, home programmes, occupational therapy after botulinum toxin, pressure care) and surgery. There is also research on whether the sleeping position might improve hip migration, but there are not yet high-quality evidence studies to support that theory. Research papers also call for an agreed consensus on outcome measures which will allow researchers to cross-reference research. Also, the terminology used to describe orthoses needs to be standardised to ensure studies can be reproduced and readily compared and evaluated.
Orthotics in the concept of therapy
To improve the gait pattern, orthotics can be included in the therapy concept. An orthosis can support physiotherapeutic treatment in setting the right motor impulses in order to create new cerebral connections. The orthosis must meet the requirements of the medical prescription, and it must be designed by the orthotist so that it achieves the necessary leverage, matching the gait pattern, to support the proprioceptive approaches of physiotherapy. The stiffness of the orthosis shells and the adjustable dynamics in the ankle joint are important elements of the orthosis to be considered.
Due to these requirements, the development of orthoses has changed significantly in recent years, especially since around 2010. At about the same time, care concepts were developed that deal intensively with the orthotic treatment of the lower extremities in cerebral palsy. Modern materials and new functional elements enable the rigidity to be specifically adapted to the requirements of the CP patient's gait pattern. The adjustment of the stiffness has a decisive influence on the gait pattern and on the energy cost of walking. It is of great advantage if the stiffness of the orthosis can be adjusted separately for the two directions of movement, dorsiflexion and plantar flexion, via the resistances of the two functional elements.
Prognosis
CP is not a progressive disorder (meaning the brain damage does not worsen), but the symptoms can become more severe over time. A person with the disorder may improve somewhat during childhood if he or she receives extensive care, but once bones and musculature become more established, orthopedic surgery may be required. People with CP can have varying degrees of cognitive impairment or none whatsoever. The full intellectual potential of a child born with CP is often not known until the child starts school. People with CP are more likely to have learning disorders, but may have normal intelligence. Intellectual level among people with CP varies from genius to intellectually disabled, as it does in the general population, and experts have stated that it is important not to underestimate the capabilities of a person with CP and to give them every opportunity to learn.
The ability to live independently with CP varies widely, depending partly on the severity of each person's impairment and partly on the capability of each person to self-manage the logistics of life. Some individuals with CP require personal assistant services for all activities of daily living. Others only need assistance with certain activities, and still others do not require any physical assistance. But regardless of the severity of a person's physical impairment, a person's ability to live independently often depends primarily on the person's capacity to manage the physical realities of his or her life autonomously. In some cases, people with CP recruit, hire, and manage a staff of personal care assistants (PCAs). PCAs facilitate the independence of their employers by assisting them with their daily personal needs in a way that allows them to maintain control over their lives.
Puberty in young people with cerebral palsy may be precocious or delayed. Delayed puberty is thought to be a consequence of nutritional deficiencies. There is currently no evidence that CP affects fertility, although some of the secondary symptoms have been shown to affect sexual desire and performance. As of 2005, adults with CP were less likely to get routine reproductive health screening. Gynecological examinations may have to be performed under anesthesia due to spasticity, and equipment is often not accessible. Breast self-examination may be difficult, so partners or carers may have to perform it. Men with CP have higher levels of cryptorchidism at the age of 21.
CP can significantly reduce a person's life expectancy, depending on the severity of their condition and the quality of care they receive. 5–10% of children with CP die in childhood, particularly where seizures and intellectual disability also affect the child. The ability to ambulate, roll, and self-feed has been associated with increased life expectancy. While there is a lot of variation in how CP affects people, it has been found that "independent gross motor functional ability is a very strong determinant of life expectancy". According to the Australian Bureau of Statistics, in 2014, 104 Australians died of cerebral palsy. The most common causes of death in CP are related to respiratory causes, but in middle age cardiovascular issues and neoplastic disorders become more prominent.
Self-care
For many children with CP, parents are heavily involved in self-care activities. Self-care activities, such as bathing, dressing, and grooming, can be difficult for children with CP, as self-care depends primarily on the use of the upper limbs. For those living with CP, impaired upper limb function affects almost 50% of children and is considered the main factor contributing to decreased activity and participation. As the hands are used for many self-care tasks, sensory and motor impairments of the hands make daily self-care more difficult. Motor impairments cause more problems than sensory impairments. The most common impairment is that of finger dexterity, which is the ability to manipulate small objects with the fingers. Compared to other disabilities, people with cerebral palsy generally need more help in performing daily tasks. Occupational therapists are healthcare professionals that help individuals with disabilities gain or regain their independence through the use of meaningful activities.
Productivity
Sensory, motor, and cognitive impairments affect self-care occupations and productivity occupations in children with CP. Productivity can include, but is not limited to, school, work, household chores, or contributing to the community.
Play is included as a productive occupation as it is often the primary activity for children. If play becomes difficult due to a disability, like CP, this can cause problems for the child. These difficulties can affect a child's self-esteem. In addition, the sensory and motor problems experienced by children with CP affect how the child interacts with their surroundings, including the environment and other people. Not only do physical limitations affect a child's ability to play, the limitations perceived by the child's caregivers and playmates also affect the child's play activities. Some children with disabilities spend more time playing by themselves. When a disability prevents a child from playing, there may be social, emotional and psychological problems, which can lead to increased dependence on others, less motivation, and poor social skills.
In school, students are asked to complete many tasks and activities, many of which involve handwriting. Many children with CP have the capacity to learn and write in the school environment. However, students with CP may find it difficult to keep up with the handwriting demands of school and their writing may be difficult to read. In addition, writing may take longer and require greater effort on the student's part. Factors linked to handwriting include postural stability, sensory and perceptual abilities of the hand, and writing tool pressure.
Speech impairments may be seen in children with CP depending on the severity of brain damage. Communication in a school setting is important because communicating with peers and teachers is very much a part of the "school experience" and enhances social interaction. Problems with language or motor dysfunction can lead to underestimating a student's intelligence. In summary, children with CP may experience difficulties in school, such as difficulty with handwriting, carrying out school activities, communicating verbally, and interacting socially.
Leisure
Leisure activities can have several positive effects on physical health, mental health, life satisfaction, and psychological growth for people with physical disabilities like CP. Common benefits identified are stress reduction, development of coping skills, companionship, enjoyment, relaxation and a positive effect on life satisfaction. In addition, for children with CP, leisure appears to enhance adjustment to living with a disability.
Leisure can be divided into structured (formal) and unstructured (informal) activities. Children and teens with CP engage in less habitual physical activity than their peers. Children with CP primarily engage in physical activity through therapies aimed at managing their CP, or through organized sport for people with disabilities. It is difficult to sustain behavioural change in terms of increasing physical activity of children with CP. Gender, manual dexterity, the child's preferences, cognitive impairment and epilepsy were found to affect children's leisure activities, with manual dexterity associated with more leisure activity. Although leisure is important for children with CP, they may have difficulties carrying out leisure activities due to social and physical barriers.
Children with cerebral palsy may face challenges when it comes to participating in sports; they may be discouraged from physical activity because of perceived limitations imposed by their medical condition.
Participation and barriers
Participation is involvement in life situations and everyday activities. Participation includes self-care, productivity, and leisure. In fact, communication, mobility, education, home life, leisure, and social relationships require participation, and indicate the extent to which children function in their environment. Barriers can exist on three levels: micro, meso, and macro. First, the barriers at the micro level involve the person. Barriers at the micro level include the child's physical limitations (motor, sensory and cognitive impairments) or their subjective feelings regarding their ability to participate. For example, the child may not participate in group activities due to lack of confidence. Second, barriers at the meso level include the family and community. These may include negative attitudes of people toward disability or lack of support within the family or in the community.
One of the main reasons for this limited support appears to be a lack of awareness and knowledge regarding the child's ability to engage in activities despite his or her disability. Third, barriers at the macro level involve systems and policies that are missing or that hinder children with CP. These may be environmental barriers to participation such as architectural barriers, lack of relevant assistive technology, and transportation difficulties due to limited wheelchair access or public transit that can accommodate children with CP. For example, a building without an elevator can prevent the child from accessing higher floors.
A 2013 review stated that outcomes for adults with cerebral palsy without intellectual disability in the 2000s were that "60–80% completed high school, 14–25% completed college, up to 61% were living independently in the community, 25–55% were competitively employed, and 14–28% were involved in long term relationships with partners or had established families". Adults with cerebral palsy may not seek physical therapy due to transport issues, financial restrictions and practitioners not feeling like they know enough about cerebral palsy to take people with CP on as clients.
Aging
Children with CP may not successfully transition into using adult services because they are not referred to them upon turning 18, and may decrease their use of services. Quality of life outcomes tend to decline for adults with cerebral palsy. Because children with cerebral palsy are often told that the condition is non-progressive, they may be unprepared for the greater effects of the aging process as they head into their 30s. Young adults with cerebral palsy experience problems with aging that non-disabled adults experience "much later in life". A quarter or more of adults with cerebral palsy who can walk experience increasing difficulties walking with age; hand function does not seem to have similar declines. Chronic disease risk, such as obesity, is also higher among adults with cerebral palsy than in the general population. Common problems include increased pain, reduced flexibility, increased spasms and contractures, post-impairment syndrome and increasing problems with balance. Increased fatigue is also a problem. When adulthood and cerebral palsy are discussed, it is generally not in terms of the different stages of adulthood. About half of people with CP report some loss of function by their 40s.
Like they did in childhood, adults with cerebral palsy experience psychosocial issues related to their CP, chiefly the need for social support, self-acceptance, and acceptance by others. Workplace accommodations may be needed to enhance continued employment for adults with CP as they age. Rehabilitation or social programs that include salutogenesis may improve the coping potential of adults with CP as they age.
Epidemiology
Cerebral palsy occurs in about 2.1 per 1000 live births. In those born at term, rates are lower, at 1 per 1000 live births. Within a population, it may occur more often in poorer people. The rate is higher in males than in females; in Europe it is 1.3 times more common in males.
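As a worked illustration of how these two figures relate (assuming equal numbers of male and female births, which the sources above do not state), the overall rate of 2.1 per 1,000 and the 1.3:1 male-to-female ratio together imply sex-specific rates of roughly

$\frac{m + m/1.3}{2} = 2.1 \;\Rightarrow\; m \approx 2.4, \qquad f = m/1.3 \approx 1.8$

per 1,000 live births; these derived values are an inference from the figures above, not reported statistics.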
There was a "moderate, but significant" rise in the prevalence of CP between the 1970s and 1990s. This is thought to be due to a rise in low birth weight of infants and the increased survival rate of these infants. The increased survival rate of infants with CP in the 1970s and 80s may be indirectly due to the disability rights movement challenging perspectives around the worth of infants with a disability, as well as the Baby Doe Law. Between 1990 and 2003, rates of cerebral palsy remained the same.
As of 2005, advances in the care of pregnant mothers and their babies had not resulted in a noticeable decrease in CP. This is generally attributed to medical advances in areas related to the care of premature babies (which result in a greater survival rate). Only the introduction of quality medical care to locations with less-than-adequate medical care has shown any decreases. The incidence of CP increases in premature or very low-weight babies regardless of the quality of care. More recently, there is a suggestion that both incidence and severity are slightly decreasing – more research is needed to find out if this is significant, and if so, which interventions are effective. It has been found that high-income countries have lower rates of children born with cerebral palsy than low- or middle-income countries.
Prevalence of cerebral palsy is best calculated around the school entry age of about six years; the prevalence in the U.S. is estimated to be 2.4 out of 1000 children.
History
Cerebral palsy has affected humans since antiquity. A decorated grave marker dating from around the 15th to 14th century BCE shows a figure with one small leg and using a crutch, possibly due to cerebral palsy. The oldest likely physical evidence of the condition comes from the mummy of Siptah, an Egyptian Pharaoh who ruled from about 1196 to 1190 BCE and died at about 20 years of age. The presence of cerebral palsy has been suspected due to his deformed foot and hands.
The medical literature of the ancient Greeks discusses paralysis and weakness of the arms and legs; the modern word palsy comes from the Ancient Greek words παράλυση or πάρεση, meaning paralysis or paresis respectively. The works of the school of Hippocrates (460 – c. 370 BCE), and the manuscript On the Sacred Disease in particular, describe a group of problems that matches up very well with the modern understanding of cerebral palsy. The Roman Emperor Claudius (10 BCE – 54 CE) is suspected of having CP, as historical records describe him as having several physical problems in line with the condition. Medical historians have begun to suspect and find depictions of CP in much later art. Several paintings from the 16th century and later show individuals with problems consistent with it, such as Jusepe de Ribera's 1642 painting The Clubfoot.
The modern understanding of CP as resulting from problems within the brain began in the early decades of the 1800s with a number of publications on brain abnormalities by Johann Christian Reil, Claude François Lallemand and Philippe Pinel. Later physicians used this research to connect problems in the brain with specific symptoms. The English surgeon William John Little (1810–1894) was the first person to study CP extensively. In his doctoral thesis he stated that CP was a result of a problem around the time of birth. He later identified a difficult delivery, a preterm birth and perinatal asphyxia in particular as risk factors. The spastic diplegia form of CP came to be known as Little's disease. At around this time, a German surgeon was also working on cerebral palsy, and distinguished it from polio. In the 1880s British neurologist William Gowers built on Little's work by linking paralysis in newborns to difficult births. He named the problem "birth palsy" and classified birth palsies into two types: peripheral and cerebral.
Working in the US in the 1880s, Canadian-born physician William Osler (1849–1919) reviewed dozens of CP cases to further classify the disorders by the site of the problems on the body and by the underlying cause. Osler made further observations tying problems around the time of delivery with CP, and concluded that problems causing bleeding inside the brain were likely the root cause. Osler also suspected polioencephalitis as an infectious cause. Through the 1890s, scientists commonly confused CP with polio.
Before moving to psychiatry, Austrian neurologist Sigmund Freud (1856–1939) made further refinements to the classification of the disorder. He produced the system still being used today. Freud's system divides the causes of the disorder into problems present at birth, problems that develop during birth, and problems after birth. Freud also made a rough correlation between the location of the problem inside the brain and the location of the affected limbs on the body and documented the many kinds of movement disorders.
In the early 20th century, the attention of the medical community generally turned away from CP until orthopedic surgeon Winthrop Phelps became the first physician to treat the disorder. He viewed CP from a musculoskeletal perspective instead of a neurological one. Phelps developed surgical techniques for operating on the muscles to address issues such as spasticity and muscle rigidity. Hungarian physical rehabilitation practitioner András Pető developed a system to teach children with CP how to walk and perform other basic movements. Pető's system became the foundation for conductive education, widely used for children with CP today. Through the remaining decades, physical therapy for CP has evolved, and has become a core component of the CP management program.
In 1997, Robert Palisano et al. introduced the Gross Motor Function Classification System (GMFCS) as an improvement over the previous rough assessment of limitation as either mild, moderate, or severe. The GMFCS grades limitation based on observed proficiency in specific basic mobility skills such as sitting, standing, and walking, and takes into account the level of dependency on aids such as wheelchairs or walkers. The GMFCS was further revised and expanded in 2007.
Society and culture
Economic impact
It is difficult to directly compare the cost and cost-effectiveness of interventions to prevent cerebral palsy or the cost of interventions to manage CP. Access Economics has released a report on the economic impact of cerebral palsy in Australia. The report found that, in 2007, the financial cost of cerebral palsy (CP) in Australia was A$1.47 billion or 0.14% of GDP. Of this:
A$1.03 billion (69.9%) was productivity lost due to lower employment, absenteeism, and premature death of Australians with CP
A$141 million (9.6%) was the deadweight loss (DWL) from transfers, including welfare payments and taxation forgone
A$131 million (9.0%) was other indirect costs such as direct program services, aides and home modifications, and the bringing-forward of funeral costs
A$129 million (8.8%) was the value of the informal care for people with CP
A$40 million (2.8%) was direct health system expenditure
The value of lost well-being (disability and premature death) was a further A$2.4 billion.
In per capita terms, this amounts to a financial cost of A$43,431 per person with CP per annum. Including the value of lost well-being, the cost is over $115,000 per person per annum.
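Dividing the report's total financial cost by the per-person figure shows the prevalence base it implies:

$\frac{\text{A\$}1.47\times10^{9}}{\text{A\$}43{,}431} \approx 33{,}800$

people with CP in Australia in 2007 – an inference from the numbers above, not a figure quoted directly from the report.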
Individuals with CP bear 37% of the financial costs, and their families and friends bear a further 6%. The federal government bears around one-third (33%) of the financial costs (mainly through taxation revenues forgone and welfare payments). State governments bear under 1% of the costs, while employers bear 5% and the rest of society bears the remaining 19%. If the burden of disease (lost well-being) is included, individuals bear 76% of the costs.
The average lifetime cost for people with CP in the US is US$921,000 per individual, including lost income.
In the United States, many states allow Medicaid beneficiaries to use their Medicaid funds to hire their own PCAs, instead of forcing them to use institutional or managed care.
In India, the government-sponsored program called "NIRAMAYA" for the medical care of children with neurological and muscular deformities has proved to be an ameliorating economic measure for persons with such disabilities. It has shown that persons with mental or physically debilitating congenital disabilities can lead better lives if they have financial independence.
Use of the term
"Cerebral" means "of, or pertaining to, the cerebrum or the brain" and "palsy" means "paralysis, generally partial, whereby a local body area is incapable of voluntary movement". It has been proposed to change the name to "cerebral palsy spectrum disorder" to reflect the diversity of presentations of CP.
Many people would rather be referred to as a person with a disability (people-first language) instead of as "handicapped". "Cerebral Palsy: A Guide for Care" at the University of Delaware offers guidelines on such preferred language.
The term "spastic" denotes the attribute of spasticity in types of spastic CP. In 1952 a UK charity called The Spastics Society was formed. The term "spastics" was used by the charity as a term for people with CP. The word "spastic" has since been used extensively as a general insult to disabled people, which some see as extremely offensive. They are also frequently used to insult non-disabled people when they seem overly uncoordinated, anxious, or unskilled in sports. The charity changed its name to Scope in 1994. In the United States the word spaz has the same usage as an insult but is not generally associated with CP.
Media
Maverick documentary filmmaker Kazuo Hara criticises the mores and customs of Japanese society in an unsentimental portrait of adults with cerebral palsy in his 1972 film Goodbye CP. Focusing on how people with cerebral palsy are generally ignored or disregarded in Japan, Hara challenges his society's taboos about physical handicaps. Using a deliberately harsh style, with grainy black-and-white photography and out-of-sync sound, Hara brings a stark realism to his subject.
Spandan (2012), a film by Vegitha Reddy and Aman Tripathi, delves into the dilemma of parents whose child has cerebral palsy. While films with children with special needs as central characters have been attempted before, Spandan also deals with the predicament of parents facing the stigma associated with the condition. More than 50 children with CP acted in one of the songs of Spandan, "Chal chaal chaal tu bala". The famous classical singer Devaki Pandit has given her voice to the song, penned by Prof. Jayant Dhupkar and composed by National Film Awards winner Isaac Thomas Kottukapally.
My Left Foot (1989) is a drama film directed by Jim Sheridan and starring Daniel Day-Lewis. It tells the true story of Christy Brown, an Irishman born with cerebral palsy, who could control only his left foot. Christy Brown grew up in a poor, working-class family, and became a writer and artist. It won the Academy Award for Best Actor (Daniel Day-Lewis) and Best Actress in a Supporting Role (Brenda Fricker). It was also nominated for Best Director, Best Picture and Best Writing, Screenplay Based on Material from Another Medium. It also won the New York Film Critics Circle Award for Best Film for 1989.
Call the Midwife (2012–) has featured two episodes with actor Colin Young, who himself has cerebral palsy, playing a character with the same disability. His storylines have focused on the segregation of those with disabilities in the UK in the 1950s, and also romantic relationships between people with disabilities.
Micah Fowler, an American actor with CP, stars in the ABC sitcom Speechless (2016–2019), which explores both the serious and humorous challenges a family faces with a teenager with CP.
9-1-1 (2018–) is a procedural drama series on Fox. From season 2 onwards, it features Gavin McHugh (who himself has cerebral palsy) in the recurring role as Christopher Diaz – a young child who has cerebral palsy.
Special (2019) is a comedy series that premiered on Netflix on 12 April 2019. It was written, produced and stars Ryan O'Connell as a young gay man with mild cerebral palsy. It is based on O'Connell's book I'm Special: And Other Lies We Tell Ourselves.
Australian drama serial The Heights (2019–) features a character with mild cerebral palsy, teenage girl Sabine Rosso, depicted by an actor who herself has mild cerebral palsy, Bridie McKim.
6,000 Waiting (2021) is a documentary by Michael Joseph McDonald. It is the first film to depict a person with cerebral palsy parachuting. It tells the story of three men with cerebral palsy seeking to live in their communities instead of institutions. Upon seeing the film, American politician Stacey Abrams interviewed one of the film's protagonists and publicly stated that her top priority was deinstitutionalization through Medicaid expansion.
Notable cases
Christy Brown was the basis for the Academy Award-winning film, My Left Foot.
Two sons of Canadian rock musician Neil Young, Zeke and Ben. In 1986, Young helped found the Bridge School, an educational organization for children with severe verbal and physical disabilities, and its annual supporting Bridge School Benefit concerts, together with his wife Pegi.
Nicolas Hamilton, a British racing driver competing in BTCC. He is the half-brother of Formula 1 driver Lewis Hamilton.
Geri Jewell, who had a regular role in the prime-time series The Facts of Life.
Josh Blue, winner of the fourth season of NBC's Last Comic Standing, whose act revolves around his CP. Blue was also on the 2004 U.S. Paralympic soccer team.
Jason Benetti, play-by-play broadcaster for ESPN, Fox Sports, Westwood One, and Time Warner covering football, baseball, lacrosse, hockey, and basketball. From 2016 until 2023, he was the television play-by-play announcer for Chicago White Sox home games. Since 2024, Benetti has been the play-by-play announcer for the Detroit Tigers.
Jack Carroll, British comedian and runner-up in the seventh season of Britain's Got Talent, and winner of a BAFTA Award for his BBC Comedy, Mobility.
Jamie Beddard, Producer and Stage Actor, known for Extraordinary Bodies.
Abbey Curran, an American beauty queen who represented Iowa at Miss USA 2008 and was the first contestant with a disability to compete.
Laurence Clarke, British comedian, writer and activist.
Robert Griswold, American Paralympic swimmer.
Francesca Martinez, British stand-up comedian and actress.
Robert Softley Gale, British actor and theatre practitioner, and artistic director of the Birds of Paradise theatre company.
Zak Ford-Williams, British stage and screen actor, known for Lord Remington in Bridgerton, Owen Davies in Better and Harry Hardacre in The Hardacres, as well as Richard III and Joseph Merrick on stage.
Evan O'Hanlon, Australian Paralympian, the fastest athlete with cerebral palsy in the world.
Arun Shourie's son Aditya, about whom he wrote the book Does He Know a Mother's Heart.
Phoebe-Rae Taylor, British actress known for her role as Melody Brookes in Out of My Mind.
Maysoon Zayid, the self-described "Palestinian Muslim virgin with cerebral palsy, from New Jersey", who is an actress, stand-up comedian, and activist. Zayid has been a resident of Cliffside Park, New Jersey. She is considered one of America's first Muslim women comedians and the first person to perform standup in Palestine and Jordan.
RJ Mitte, an American actor best known for his role as Walter White Jr. in Breaking Bad. He is also a celebrity ambassador for United Cerebral Palsy.
Zach Anner, an American comedian, actor, and writer. He had a television series on Oprah Winfrey's OWN called Rollin' With Zach and is the author of If at Birth You Don't Succeed.
Kaine, a member of the American hip-hop duo The Ying Yang Twins, has a mild form of cerebral palsy that causes him to limp.
Hannah Cockroft, a British wheelchair athlete specialising in sprint distances in the T34 classification. She holds the Paralympic and world records for the 100 metres, 200 metres and 400 metres in her classification.
Keah Brown, American disability rights activist, author and journalist.
Kuli Kohli, Indian-British writer, poet, activist.
Simon James Stevens, a British disability issues consultant and activist, who starred in I'm Spazticus and founded the virtual nightclub Wheelies.
The Roman Emperor Claudius is hypothesized to have had cerebral palsy on the basis of his reported symptoms.
Tim Renkow, American comedian, comic actor and writer of the BBC comedy series, Jerk.
Rosie Jones, a British comedian and actress who incorporates her cerebral palsy into her comedic style.
Christopher Nolan, an Irish poet and author who wrote Dam-Burst of Dreams, The Banyan Tree, and Under the Eye of the Clock. He died in 2009.
Lost Voice Guy (Lee Ridley), British comedian.
Litigation
In the latter half of the 20th century, obstetric litigation about the cause of cerebral palsy became more common, leading to the practice of defensive medicine. Because of the perception that cerebral palsy is mostly caused by trauma during birth, as of 2005, 60% of obstetric litigation was about cerebral palsy, a situation which Alastair MacLennan, Professor of Obstetrics and Gynaecology at the University of Adelaide, regards as driving an exodus from the profession.
| Biology and health sciences | Disability | null |
50620 | https://en.wikipedia.org/wiki/Scyphozoa | Scyphozoa | The Scyphozoa are an exclusively marine class of the phylum Cnidaria, referred to as the true jellyfish (or "true jellies").
The class name Scyphozoa comes from the Greek word skyphos (), denoting a kind of drinking cup and alluding to the cup shape of the organism.
Scyphozoans have existed from the earliest Cambrian to the present.
Biology
Most species of Scyphozoa have two life-history phases: a planktonic medusa, most evident in the warm summer months, and an inconspicuous but longer-lived bottom-dwelling polyp, which seasonally gives rise to new medusae. Most of the large, often colorful, and conspicuous jellyfish found in coastal waters throughout the world are Scyphozoa. They typically range from in diameter, but the largest species, Cyanea capillata, can reach across. Scyphomedusae are found throughout the world's oceans, from the surface to great depths; no Scyphozoa occur in freshwater (or on land).
As medusae, they eat a variety of crustaceans and fish, which they capture using stinging cells called nematocysts. The nematocysts are located throughout the tentacles that radiate downward from the edge of the umbrella dome, and also cover the four or eight oral arms that hang down from the central mouth. Some species, however, are instead filter feeders, using their tentacles to strain plankton from the water.
Anatomy
Scyphozoans usually display a four-part symmetry and have an internal gelatinous material called mesoglea, which provides the same structural integrity as a skeleton. The mesoglea includes mobile amoeboid cells originating from the epidermis.
Scyphozoans have no durable hard parts, including no head, no skeleton, and no specialized organs for respiration or excretion. Marine jellyfish can consist of as much as 98% water, so are rarely found in fossil form.
Unlike the hydrozoan jellyfish (Hydromedusae), Scyphomedusae lack a velum, a circular membrane beneath the umbrella that helps propel the (usually smaller) Hydromedusae through the water. However, a ring of muscle fibres is present within the mesoglea around the rim of the dome, and the jellyfish swims by alternately contracting and relaxing these muscles. The periodic contraction and relaxation propels the jellyfish through the water, allowing it to escape predation or catch its prey.
The mouth opens into a central stomach, from which four interconnected diverticula radiate outwards. In many species, this is further elaborated by a system of radial canals, with or without an additional ring canal towards the edge of the dome. Some genera, such as Cassiopea, even have additional, smaller mouths in the oral arms. The lining of the digestive system includes further stinging nematocysts, along with cells that secrete digestive enzymes.
The nervous system usually consists of a distributed net of cells, although some species possess more organised nerve rings. In species lacking nerve rings, the nerve cells are instead concentrated into small structures called rhopalia. There are between four and sixteen of these small lobes arranged around the rim of the umbrella, where they coordinate the muscular action allowing the animal to move. Each rhopalium is typically associated with a pair of sensory pits, a statocyst, and sometimes a pigment-cup ocellus.
Reproduction
Most species appear to be gonochorists, with separate male and female individuals. The gonads are located in the stomach lining, and the mature gametes are expelled through the mouth. After fertilization, some species brood their young in pouches on the oral arms, but they are more commonly planktonic.
Growth and development
The fertilized egg produces a planular larva which, in most species, quickly attaches itself to the sea bottom. The larva develops into the hydroid stage of the lifecycle, a tiny sessile polyp called a scyphistoma. The scyphistoma reproduces asexually, producing similar polyps by budding, and then either transforming into a medusa, or budding several medusae off from its upper surface via a process called strobilation. The medusae are initially microscopic and may take years to reach sexual maturity.
Commercial importance
Scyphozoa include the moon jelly Aurelia aurita, in the order Semaeostomeae, and the enormous Nemopilema nomurai, in the order Rhizostomeae, found between Japan and China and which in some years causes major fisheries disruptions.
The jellyfish fished commercially for food are Scyphomedusae in the order Rhizostomeae. Most rhizostome jellyfish live in warm water.
Taxonomy
Although the Scyphozoa were formerly considered to include the animals now referred to as the classes Cubozoa and Staurozoa, they now include just three extant orders (two of which are in Discomedusae, a subclass of Scyphozoa). About 200 extant species are recognized at present, but the true diversity is likely to be at least 400 species.
Class Scyphozoa
Subclass Coronamedusae
Order Coronatae
Family Atollidae
Family Atorellidae
Family Linuchidae
Family Nausithoidae
Family Paraphyllinidae
Family Periphyllidae
Subclass Discomedusae
Order Rhizostomeae
Suborder Daktyliophorae
Family Catostylidae
Family Lobonematidae
Family Lychnorhizidae
Family Rhizostomatidae
Family Stomolophidae
Suborder Kolpophorae
Family Cassiopeidae
Family Cepheidae
Family Mastigiidae
Family Thysanostomatidae
Family Versurigidae
Order Semaeostomeae
Family Cyaneidae
Family Drymonematidae
Family Pelagiidae
Family Phacellophoridae
Family Ulmaridae
| Biology and health sciences | Cnidarians | Animals |
50627 | https://en.wikipedia.org/wiki/Conformal%20map | Conformal map | In mathematics, a conformal map is a function that locally preserves angles, but not necessarily lengths.
More formally, let $U$ and $V$ be open subsets of $\mathbb{R}^n$. A function $f : U \to V$ is called conformal (or angle-preserving) at a point $u_0 \in U$ if it preserves angles between directed curves through $u_0$, as well as preserving orientation. Conformal maps preserve both angles and the shapes of infinitesimally small figures, but not necessarily their size or curvature.
The conformal property may be described in terms of the Jacobian derivative matrix of a coordinate transformation. The transformation is conformal whenever the Jacobian at each point is a positive scalar times a rotation matrix (orthogonal with determinant one). Some authors define conformality to include orientation-reversing mappings whose Jacobians can be written as any scalar times any orthogonal matrix.
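This scaled-rotation characterization is easy to check numerically. Below is a minimal sketch (the map f(z) = z² and the sample point are illustrative choices, not taken from the text) that estimates the Jacobian by central differences and verifies it is a positive scalar times a rotation matrix:

```python
import numpy as np

def jacobian(f, x, y, h=1e-6):
    """Central-difference Jacobian of the planar map (x, y) -> (Re f, Im f)."""
    u = lambda x, y: f(complex(x, y)).real
    v = lambda x, y: f(complex(x, y)).imag
    return np.array([
        [(u(x + h, y) - u(x - h, y)) / (2 * h), (u(x, y + h) - u(x, y - h)) / (2 * h)],
        [(v(x + h, y) - v(x - h, y)) / (2 * h), (v(x, y + h) - v(x, y - h)) / (2 * h)],
    ])

J = jacobian(lambda z: z**2, 1.0, 2.0)  # f(z) = z^2 is holomorphic, f'(1+2j) != 0
scale = np.sqrt(np.linalg.det(J))       # det J = |f'|^2 > 0, so the scale is real
R = J / scale                           # what remains should be a pure rotation
print(np.allclose(R.T @ R, np.eye(2), atol=1e-5))  # True: J = scale * rotation
```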
For mappings in two dimensions, the (orientation-preserving) conformal mappings are precisely the locally invertible complex analytic functions. In three and higher dimensions, Liouville's theorem sharply limits the conformal mappings to a few types.
The notion of conformality generalizes in a natural way to maps between Riemannian or semi-Riemannian manifolds.
In two dimensions
If $U$ is an open subset of the complex plane $\mathbb{C}$, then a function $f : U \to \mathbb{C}$ is conformal if and only if it is holomorphic and its derivative is everywhere non-zero on $U$. If $f$ is antiholomorphic (conjugate to a holomorphic function), it preserves angles but reverses their orientation.
In the literature, there is another definition of conformal: a mapping $f$ which is one-to-one and holomorphic on an open set in the plane. The open mapping theorem forces the inverse function (defined on the image of $f$) to be holomorphic. Thus, under this definition, a map is conformal if and only if it is biholomorphic. The two definitions for conformal maps are not equivalent. Being one-to-one and holomorphic implies having a non-zero derivative. In fact, we have the following relation, the inverse function theorem:

$$(f^{-1})'(f(z)) = \frac{1}{f'(z)}$$

where $z \in U$. However, the exponential function is a holomorphic function with a nonzero derivative, but is not one-to-one since it is periodic.
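A quick numerical illustration of that failure of injectivity, as a sketch with an arbitrary sample point:

```python
import cmath

z = 0.3 + 0.7j                  # arbitrary sample point
w = z + 2j * cmath.pi           # a different point with the same image
print(cmath.isclose(cmath.exp(z), cmath.exp(w)))  # True: exp is not one-to-one
print(cmath.exp(z) != 0)        # True: the derivative exp(z) never vanishes
```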
The Riemann mapping theorem, one of the profound results of complex analysis, states that any non-empty open simply connected proper subset of $\mathbb{C}$ admits a bijective conformal map to the open unit disk in $\mathbb{C}$. Informally, this means that any blob can be transformed into a perfect circle by some conformal map.
Global conformal maps on the Riemann sphere
A map of the Riemann sphere onto itself is conformal if and only if it is a Möbius transformation.
The complex conjugate of a Möbius transformation preserves angles, but reverses the orientation. For example, circle inversions.
Conformality with respect to three types of angles
In plane geometry there are three types of angles that may be preserved in a conformal map. Each is hosted by its own real algebra, ordinary complex numbers, split-complex numbers, and dual numbers. The conformal maps are described by linear fractional transformations in each case.
In three or more dimensions
Riemannian geometry
In Riemannian geometry, two Riemannian metrics $g$ and $h$ on a smooth manifold $M$ are called conformally equivalent if $g = u h$ for some positive function $u$ on $M$. The function $u$ is called the conformal factor.
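The conformal factor cancels in the formula for the angle between two tangent vectors $v$ and $w$, which is why conformally equivalent metrics measure the same angles; with $g = u h$ as above:

```latex
\cos\theta_g(v,w)
  = \frac{g(v,w)}{\sqrt{g(v,v)}\,\sqrt{g(w,w)}}
  = \frac{u\,h(v,w)}{\sqrt{u\,h(v,v)}\,\sqrt{u\,h(w,w)}}
  = \frac{h(v,w)}{\sqrt{h(v,v)}\,\sqrt{h(w,w)}}
  = \cos\theta_h(v,w)
```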
A diffeomorphism between two Riemannian manifolds is called a conformal map if the pulled back metric is conformally equivalent to the original one. For example, stereographic projection of a sphere onto the plane augmented with a point at infinity is a conformal map.
One can also define a conformal structure on a smooth manifold, as a class of conformally equivalent Riemannian metrics.
Euclidean space
A classical theorem of Joseph Liouville shows that there are far fewer conformal maps in higher dimensions than in two dimensions. Any conformal map from an open subset of Euclidean space into the same Euclidean space of dimension three or greater can be composed from three types of transformations: a homothety, an isometry, and a special conformal transformation. For linear transformations, a conformal map may only be composed of homothety and isometry, and is called a conformal linear transformation.
Applications
Applications of conformal mapping exist
in aerospace engineering,
in biomedical sciences
(including brain mapping
and genetic mapping),
in applied math
(for geodesics
and in geometry),
in earth sciences
(including geophysics,
geography,
and cartography),
in engineering,
and in electronics.
Cartography
In cartography, several named map projections, including the Mercator projection and the stereographic projection are conformal. The preservation of compass directions makes them useful in marine navigation.
Physics and engineering
Conformal mappings are invaluable for solving problems in engineering and physics that can be expressed in terms of functions of a complex variable yet exhibit inconvenient geometries. By choosing an appropriate mapping, the analyst can transform the inconvenient geometry into a much more convenient one. For example, one may wish to calculate the electric field, $E(z)$, arising from a point charge located near the corner of two conducting planes separated by a certain angle (where $z$ is the complex coordinate of a point in 2-space). This problem per se is quite clumsy to solve in closed form. However, by employing a very simple conformal mapping, the inconvenient angle is mapped to one of precisely $\pi$ radians, meaning that the corner of the two planes is transformed to a straight line. In this new domain, the problem (that of calculating the electric field impressed by a point charge located near a conducting wall) is quite easy to solve. The solution is obtained in this domain, $E(w)$, and then mapped back to the original domain by noting that $w$ was obtained as a function of $z$, whence $E(w)$ can be viewed as $E(w(z))$, which is a function of $z$, the original coordinate basis. Note that this application does not contradict the fact that conformal mappings preserve angles: they do so only for points in the interior of their domain, and not at the boundary. Another example is the application of conformal mapping technique for solving the boundary value problem of liquid sloshing in tanks.
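A hedged sketch of the wedge-straightening idea described above, using the standard map $w = z^{\pi/\alpha}$ that sends a wedge of opening angle $\alpha$ to the upper half-plane (the angle, charge position, and test points below are illustrative assumptions, not values from any particular problem):

```python
import numpy as np

ALPHA = 3 * np.pi / 4                  # assumed opening angle of the wedge
Z0 = 0.8 * np.exp(1j * ALPHA / 2)      # assumed position of the point charge

def to_half_plane(z):
    """Conformal map w = z**(pi/alpha): wedge 0 <= arg z <= alpha -> upper half-plane."""
    return z ** (np.pi / ALPHA)

def potential(z, q=1.0):
    """2-D potential at z: one image charge solves the half-plane problem; map back."""
    w, w0 = to_half_plane(z), to_half_plane(Z0)
    # Grounded wall = real axis in the w-plane; image charge -q sits at conj(w0).
    return -q * np.log(abs(w - w0)) + q * np.log(abs(w - np.conj(w0)))

# The potential vanishes on both conducting walls of the original wedge:
print(potential(0.5 + 0j))                    # wall at arg z = 0      -> ~0.0
print(potential(0.5 * np.exp(1j * ALPHA)))    # wall at arg z = alpha  -> ~0.0
```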
If a function is harmonic (that is, it satisfies Laplace's equation ) over a plane domain (which is two-dimensional), and is transformed via a conformal map to another plane domain, the transformation is also harmonic. For this reason, any function which is defined by a potential can be transformed by a conformal map and still remain governed by a potential. Examples in physics of equations defined by a potential include the electromagnetic field, the gravitational field, and, in fluid dynamics, potential flow, which is an approximation to fluid flow assuming constant density, zero viscosity, and irrotational flow. One example of a fluid dynamic application of a conformal map is the Joukowsky transform that can be used to examine the field of flow around a Joukowsky airfoil.
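A minimal sketch of the Joukowsky transform mentioned above (the circle's center is an illustrative assumption; choosing the circle to pass through z = 1 is what produces the airfoil's sharp trailing edge):

```python
import numpy as np

def joukowsky(z):
    """Joukowsky transform w = z + 1/z."""
    return z + 1.0 / z

center = -0.1 + 0.1j              # assumed offset; controls thickness and camber
radius = abs(1 - center)          # forces the circle through z = 1 (trailing edge)
theta = np.linspace(0.0, 2 * np.pi, 400)
circle = center + radius * np.exp(1j * theta)
airfoil = joukowsky(circle)       # image curve traces an airfoil-like profile
print(airfoil.real.min(), airfoil.real.max())   # extent of the profile
```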
Conformal maps are also valuable in solving nonlinear partial differential equations in some specific geometries. Such analytic solutions provide a useful check on the accuracy of numerical simulations of the governing equation. For example, in the case of very viscous free-surface flow around a semi-infinite wall, the domain can be mapped to a half-plane in which the solution is one-dimensional and straightforward to calculate.
For discrete systems, Noury and Yang presented a way to convert the root locus of discrete systems into a continuous root locus through a well-known conformal mapping in geometry (the inversion mapping).
Maxwell's equations
Maxwell's equations are preserved by Lorentz transformations, which form a group including circular and hyperbolic rotations. The latter are sometimes called Lorentz boosts to distinguish them from circular rotations. All these transformations are conformal, since hyperbolic rotations preserve hyperbolic angle (called rapidity) and the other rotations preserve circular angle. The introduction of translations in the Poincaré group again preserves angles.
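As a concrete illustration of the hyperbolic-rotation viewpoint (standard special-relativity conventions, with $\varphi$ the rapidity), a boost along one axis acts on $(ct, x)$ as

```latex
\begin{pmatrix} ct' \\ x' \end{pmatrix}
=
\begin{pmatrix} \cosh\varphi & -\sinh\varphi \\ -\sinh\varphi & \cosh\varphi \end{pmatrix}
\begin{pmatrix} ct \\ x \end{pmatrix},
\qquad
\varphi = \operatorname{artanh}(v/c)
```

and composing two boosts along the same axis simply adds their rapidities, just as composing circular rotations adds angles.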
A larger group of conformal maps for relating solutions of Maxwell's equations was identified by Ebenezer Cunningham (1908) and Harry Bateman (1910). Their training at Cambridge University had given them facility with the method of image charges and associated methods of images for spheres and inversion. As recounted by Andrew Warwick (2003) Masters of Theory:
Each four-dimensional solution could be inverted in a four-dimensional hyper-sphere of pseudo-radius in order to produce a new solution.
Warwick highlights this "new theorem of relativity" as a Cambridge response to Einstein, and as founded on exercises using the method of inversion, such as those found in James Hopwood Jeans's textbook Mathematical Theory of Electricity and Magnetism.
General relativity
In general relativity, conformal maps are the simplest and thus most common type of causal transformations. Physically, these describe different universes in which all the same events and interactions are still (causally) possible, but a new additional force is necessary to effect this (that is, replication of all the same trajectories would necessitate departures from geodesic motion because the metric tensor is different). Conformal mapping is often used to try to make models amenable to extension beyond curvature singularities, for example to permit description of the universe even before the Big Bang.
| Mathematics | Complex analysis | null |
50637 | https://en.wikipedia.org/wiki/Sequoiadendron%20giganteum | Sequoiadendron giganteum | Sequoiadendron giganteum (also known as the giant sequoia, giant redwood, Sierra redwood or Wellingtonia) is a species of coniferous tree, classified in the family Cupressaceae in the subfamily Sequoioideae. Giant sequoia specimens are the most massive trees on Earth. They are native to the groves on the western slopes of the Sierra Nevada mountain range of California but have been introduced, planted, and grown around the world.
The giant sequoia is listed as an endangered species by the IUCN, with fewer than 80,000 remaining in its native California. The species was introduced to the UK in 1853, where it is more commonly known as Wellingtonia, after the Duke of Wellington; an estimated 5,000 notable specimens now grow there.
Giant sequoias grow to an average height of 50–85 m (164–279 ft) with trunk diameters ranging from 6–8 m (20–26 ft). Record trees have been measured at 94.8 m (311 ft) tall. The specimen known to have the greatest diameter at breast height is the General Grant tree at 8.8 m (28.9 ft). Giant sequoias are among the oldest living organisms on Earth. The oldest known giant sequoia is 3,200–3,266 years old.
Wood from mature giant sequoias is fibrous and brittle; trees would often shatter after they were felled. The wood is unsuitable for construction and instead is used for fence posts or match sticks. The giant sequoia is a very popular ornamental tree in many parts of the world.
Etymology
The etymology of the genus name was long presumed (initially in The Yosemite Book by Josiah Whitney in 1868) to be in honor of Sequoyah (1767–1843), the inventor of the Cherokee syllabary. An etymological study published in 2012 debunked that "American myth," concluding that the Austrian botanist Stephen L. Endlicher derived the name from the Latin word sequi (meaning to follow), because the number of seeds per cone in the newly classified genus aligned in mathematical sequence with the other four genera in the suborder.
Description
Giant sequoia specimens are the most massive individual trees in the world. They grow to an average height of with trunk diameters ranging from . Record trees have been measured at tall. Trunk diameters of have been claimed via research figures taken out of context. The specimen known to have the greatest diameter at breast height is the General Grant tree at . Between 2014 and 2016, specimens of coast redwood were claimed to have greater trunk diameters than all known giant sequoias, though this has not been independently verified or affirmed in any academic literature. The trunks of coast redwoods taper at lower heights than those of giant sequoias, which have more columnar trunks that maintain larger diameters to greater heights.
The oldest known giant sequoia is 3,200–3,266 years old based on dendrochronology. That tree has been verified to have the fourth-largest lifespan of any tree, after individuals of Great Basin bristlecone pine and alerce. Giant sequoia bark is fibrous, furrowed, and may be thick at the base of the columnar trunk. The sap contains tannic acid, which provides significant protection from fire damage. The leaves are evergreen, awl-shaped, long, and arranged spirally on the shoots.
The giant sequoia regenerates by seed. The seed cones are long and mature in 18–20 months, though they typically remain green and closed for as long as 20 years. Each cone has 30–50 spirally arranged scales, with several seeds on each scale, giving an average of 230 seeds per cone. Seeds are dark brown, long, and broad, with a wide, yellow-brown wing along each side. Some seeds shed when the cone scales shrink during hot weather in late summer, but most are liberated by insect damage or when the cone dries from the heat of fire. The trees do not begin to bear cones until they are 12 years old.
Trees may produce sprouts from their stumps subsequent to injury, until about 20 years old; however, shoots do not form on the stumps of more mature trees as they do on coast redwoods. Giant sequoias of all ages may sprout from their boles when branches are lost to fire or breakage.
A large tree may have as many as 11,000 cones. Cone production is greatest in the upper portion of the canopy. A mature giant sequoia disperses an estimated 300,000–400,000 seeds annually. The winged seeds may fly as far as from the parent tree.
Lower branches die readily from being shaded, but trees younger than 100 years retain most of their dead branches. Trunks of mature trees in groves are generally free of branches to a height of , but solitary trees retain lower branches.
Distribution
The natural distribution of giant sequoias is restricted to a limited area of the western Sierra Nevada, California. As a paleoendemic species, they occur in scattered groves, with a total of 81 groves (see list of sequoia groves for a full inventory), comprising a total area of only . Nowhere does it grow in pure stands, although in a few small areas, stands do approach a pure condition. The northern two-thirds of its range, from the American River in Placer County southward to the Kings River, has only eight disjunct groves. The remaining southern groves are concentrated between the Kings River and the Deer Creek Grove in southern Tulare County. Groves range in size from with 20,000 mature trees, to small groves with only six living trees. Many are protected in Sequoia and Kings Canyon National Parks and Giant Sequoia National Monument.
The giant sequoia is usually found in a humid climate characterized by dry summers and snowy winters. Most giant sequoia groves are on granitic-based residual and alluvial soils. The elevation of the giant sequoia groves generally ranges from in the north, to to the south. Giant sequoias generally occur on the south-facing sides of northern mountains, and on the northern faces of more southerly slopes.
High levels of reproduction are not necessary to maintain the present population levels. Few groves, however, have sufficient young trees to maintain the present density of mature giant sequoias for the future. The majority of giant sequoia groves are currently undergoing a gradual decline in density since European settlement.
Pre-historic range
While the present day distribution of this species is limited to a small area of California, it was once much more widely distributed in prehistoric times, and was a reasonably common species in North American and Eurasian coniferous forests until its range was greatly reduced by the last ice age. Older fossil specimens reliably identified as giant sequoia have been found in Cretaceous era sediments from a number of sites in North America and Europe, and even as far afield as New Zealand and Australia.
Artificial groves
In 1974, a group of giant sequoias was planted by the United States Forest Service in the San Jacinto Mountains of Southern California in the immediate aftermath of a wildfire that left the landscape barren. The giant sequoias were rediscovered in 2008 by botanist Rudolf Schmid and his daughter Mena Schmidt while hiking on Black Mountain Trail through Hall Canyon. Black Mountain Grove is home to over 150 giant sequoias, some of which stand over tall. This grove is not to be confused with the Black Mountain Grove in the southern Sierra. Nearby Lake Fulmor Grove is home to seven giant sequoias, the largest of which is tall. The two groves are located approximately southeast of the southernmost naturally occurring giant sequoia grove, Deer Creek Grove.
It was later discovered that the United States Forest Service had planted giant sequoias across Southern California. However, the giant sequoias of Black Mountain Grove and nearby Lake Fulmor Grove are the only ones known to be reproducing and propagating free of human intervention. The conditions of the San Jacinto Mountains mimic those of the Sierra Nevada, allowing the trees to naturally propagate throughout the canyon.
Ecology
Giant sequoias are in many ways adapted to forest fires. Their bark is unusually fire resistant, and their cones will normally open immediately after a fire. Giant sequoias are a pioneer species, and are having difficulty reproducing in their original habitat (and very rarely reproduce in cultivation) due to the seeds only being able to grow successfully in full sun and in mineral-rich soils, free from competing vegetation. Although the seeds can germinate in moist needle humus in the spring, these seedlings will die as the duff dries in the summer. They therefore require periodic wildfire to clear competing vegetation and soil humus before successful regeneration can occur. Without fire, shade-loving species will crowd out young sequoia seedlings, and sequoia seeds will not germinate. These trees require large amounts of water and are often concentrated near streams. Their growth is dependent on soil moisture. Squirrels, chipmunks, finches and sparrows consume the freshly sprouted seedlings, preventing their growth.
Fires also bring hot air high into the canopy via convection, which in turn dries and opens the cones. The subsequent release of large quantities of seeds coincides with the optimal postfire seedbed conditions. Loose ground ash may also act as a cover to protect the fallen seeds from ultraviolet radiation damage. Due to fire suppression efforts and livestock grazing during the early and mid-20th century, low-intensity fires no longer occurred naturally in many groves, and still do not occur in some groves today. The suppression of fires leads to ground fuel build-up and the dense growth of fire-sensitive white fir, which increases the risk of more intense fires that can use the firs as ladders to threaten mature giant sequoia crowns. Natural fires may also be important in keeping carpenter ants in check.
In 1970, the National Park Service began controlled burns of its groves to correct these problems. Current policies also allow natural fires to burn. One of these untamed burns severely damaged the second-largest tree in the world, the Washington tree, in September 2003, 45 days after the fire started. This damage made it unable to withstand the snowstorm of January 2005, leading to the collapse of over half the trunk.
In addition to fire, two animal agents also assist giant sequoia seed release. The more significant of the two is a longhorn beetle (Phymatodes nitidus) that lays eggs on the cones, into which the larvae then bore holes. Reduction of the vascular water supply to the cone scales allows the cones to dry and open for the seeds to fall. Cones damaged by the beetles during the summer will slowly open over the next several months. Some research indicates many cones, particularly higher in the crowns, may need to be partially dried by beetle damage before fire can fully open them. The other agent is the Douglas squirrel (Tamiasciurus douglasi) that gnaws on the fleshy green scales of younger cones. The squirrels are active year-round, and some seeds are dislodged and dropped as the cone is eaten.
More than 30 identified species of bird have been observed living in giant sequoia groves.
Genome
The genome of the giant sequoia was published in 2020. The size of the giant sequoia genome is 8.125 Gbp (8.125 billion base pairs) which were assembled into eleven chromosome-scale scaffolds, the largest of any organism at the time of publication.
This is the first genome sequenced in the Cupressaceae family, and it provides insights into disease resistance and survival for this robust species on a genetic basis. The genome was found to contain over 900 complete or partial predicted NLR genes used by plants to prevent the spread of infection by microbial pathogens.
The genome sequence was extracted from a single fertilized seed harvested from a 1,360-year-old tree specimen in Sequoia/Kings Canyon National Park identified as SEGI 21. It was sequenced over a three-year period by researchers at University of California, Davis, Johns Hopkins University, University of Connecticut, and Northern Arizona University and was supported by grants from Save the Redwoods League and the National Institute of Food and Agriculture as part of a species conservation, restoration and management effort.
Discovery and naming
Discovery
The giant sequoia first gained widespread attention in 1852 when grizzly hunter Augustus T. Dowd discovered the Discovery Tree in Calaveras Grove, marking the species' first widely publicized discovery by non-natives. The tree was cut down in 1853 and exhibited across the United States. The story of Dowd's discovery gained further notoriety following an 1859 feature in Hutchings' Illustrated California Magazine, which promoted tourism to the grove.
Before Augustus T. Dowd's well-known discovery in 1852, there were three earlier encounters with giant sequoias. The first known mention of the giant sequoia by a European American was in 1833 by explorer J.K. Leonard, who recorded it in his diary. While Leonard did not specify a location, his travels likely took him through Calaveras Grove, but this observation remained unnoticed.
In 1850, John M. Wooster encountered a giant sequoia at Calaveras Grove and carved his initials into the bark of the "Hercules" tree. A year later, in 1851, Robert Eccleston traveled through Nelder Grove with a small detachment of the Mariposa Battalion during the Mariposa War. Similar to Leonard's experience, these encounters also received no publicity.
Naming
The first scientific naming of the species was by John Lindley in December 1853, who named it Wellingtonia gigantea, without realizing this was an invalid name under the botanical code as the name Wellingtonia had already been used earlier for another unrelated plant (Wellingtonia arnottiana in the family Sabiaceae). The name "Wellingtonia" has persisted in England as a common name. The following year, Joseph Decaisne transferred it to the same genus as the coast redwood, naming it Sequoia gigantea, but this name was also invalid, having been applied earlier (in 1847, by Endlicher) to the coast redwood. The name Washingtonia californica was also applied to it by Winslow in 1854; this name too is invalid, since it was already used for the palm genus Washingtonia.
In 1907, it was placed by Carl Ernst Otto Kuntze in the otherwise fossil genus Steinhauera, but doubt as to whether the giant sequoia is related to the fossil originally so named makes this name invalid.
These nomenclatural oversights were corrected in 1939 by John Theodore Buchholz, who also pointed out the giant sequoia is distinct from the coast redwood at the genus level and coined the name Sequoiadendron giganteum for it.
The etymology of the genus name has been presumed (initially in The Yosemite Book by Josiah Whitney in 1868) to be in honor of Sequoyah (1767–1843), who was the inventor of the Cherokee syllabary. An etymological study published in 2012, however, concluded that the name was more likely to have originated from the Latin sequi (meaning to follow), since the number of seeds per cone in the newly classified genus fell in mathematical sequence with the other four genera in the suborder.
John Muir wrote of the species in about 1870: "Do behold the King in his glory, King Sequoia! Behold! Behold! seems all I can say. Some time ago I left all for Sequoia and have been and am at his feet, fasting and praying for light, for is he not the greatest light in the woods, in the world? Where are such columns of sunshine, tangible, accessible, terrestrialized?"
Uses
Wood from mature giant sequoias is highly resistant to decay, but due to being fibrous and brittle, it is generally unsuitable for construction. From the 1880s through the 1920s, logging took place in many groves in spite of marginal commercial returns. The Hume-Bennett Lumber Company was the last to harvest giant sequoia, going out of business in 1924. Due to their weight and brittleness, trees would often shatter when they hit the ground, wasting much of the wood. Loggers attempted to cushion the impact by digging trenches and filling them with branches. Still, as little as 50% of the timber is estimated to have made it from groves to the mill. The wood was used mainly for shingles and fence posts, or even for matchsticks.
Pictures of the once majestic trees broken and abandoned in formerly pristine groves, and the thought of the giants put to such modest use, spurred the public outcry that caused most of the groves to be preserved as protected land. The public can visit an example of 1880s clear-cutting at Big Stump Grove near General Grant Grove. As late as the 1980s, some immature trees were logged in Sequoia National Forest, publicity of which helped lead to the creation of Giant Sequoia National Monument.
The wood from immature trees is less brittle, with recent tests on young plantation-grown trees showing it similar to coast redwood wood in quality. This is resulting in some interest in cultivating giant sequoia as a very high-yielding timber crop tree, both in California and also in parts of western Europe, where it may grow more efficiently than coast redwoods. In the northwest United States, some entrepreneurs have also begun growing giant sequoias for Christmas trees. Besides these attempts at tree farming, the principal economic uses for giant sequoia today are tourism and horticulture.
Cultural symbol
Giant sequoias, native to California and discovered during the final phase of frontier expansion, hold a distinctive place in American culture. They embody the complex interplay of human ambition, environmental exploitation, and the emergence of the modern conservation movement.
In the 19th century, sequoias such as the Discovery Tree and Forest King were cut down and transported to urban centers and world expositions as exhibition trees. These displays highlighted the grandeur of the American frontier while exposing humanity’s capacity to exploit nature. The paradox of celebrating these giants’ majesty while destroying them sparked debates that eventually led to the establishment of Sequoia and Yosemite National Parks.
The creation of tunnel trees, including the iconic Wawona Tree, further cemented the cultural legacy of the sequoias. By carving pathways through their immense trunks, Americans celebrated their ability to master the wilderness, and the trees became popular tourist attractions during the rise of the automobile age, embodying the frontier spirit of exploration and progress that defined the nation's expansion. However, as the weakened trees began to collapse, they came to symbolize the unintended consequences of human ambition. Today the California Tunnel Tree, the last surviving tunnel tree, protected in Mariposa Grove, stands as a relic of the past and a symbol of changing values.
Americans have also imbued the giant sequoia with sacred meaning. The General Grant Tree, for example, was named the "Nation's Christmas Tree" by Calvin Coolidge in 1926 and later declared a national shrine by Dwight Eisenhower to honor the country’s war dead. It remains the only living object designated as a national shrine.
Threats
Giant sequoias, once primarily threatened by logging, now face their greatest danger from the absence of regular fires. Fire suppression, severe wildfires, and competition from shade-tolerant species have disrupted the natural cycle that once relied on periodic wildfires to release seeds and clear undergrowth.
Natural wildfires historically played a key role in sequoia reproduction, releasing seeds from cones and clearing undergrowth to create the open, nutrient-rich conditions needed for seedlings. Fire suppression over the last century has disrupted this cycle, limiting reproduction in many groves. Without regular fires, the buildup of fuel and the excessive growth of more fire-sensitive trees, like white fir, have increased the risk of devastating crown fires, which have already destroyed significant portions of the sequoia population.
Many destructive wildfires have hit giant sequoia groves in recent decades, including the McNally Fire in 2002, the Rough Fire in 2015, and the Railroad Fire in 2017. The Castle Fire in 2020 is estimated to have wiped out 10–14% of the giant sequoia population, or about 7,500 to 10,600 mature trees, possibly including the King Arthur Tree, one of the tallest known sequoias. In 2021, the KNP Complex and Windy Fire added to the damage, killing an estimated 3 to 5% more of the population.
Controlled burns have been effective in protecting giant sequoias. In the 2022 Washburn Fire, officials credited prescribed burns in Yosemite National Park with limiting the fire’s intensity and sparing Mariposa Grove from major harm. Experts warn that to preserve healthy groves and prevent future destruction, the use of prescribed burns must increase significantly—by about 30 times the current levels.
Around the world
Giant sequoia is a very popular ornamental tree in many areas. It is successfully grown in most of western and southern Europe, the Pacific Northwest of North America, north to southwest British Columbia, the southern United States, southeast Australia, New Zealand and central-southern Chile. It is also grown, though less successfully, in parts of eastern North America.
Trees can withstand temperatures of −31 °C (−25 °F) or colder for short periods of time, provided the ground around the roots is insulated with either heavy snow or mulch. Outside its natural range, the foliage can suffer from damaging windburn.
A wide range of horticultural varieties have been selected, especially in Europe, including blue, compact blue, powder blue, hazel smith, pendulum (weeping) varieties, and grafted cultivars.
France
The tallest giant sequoia ever measured outside of the United States is a specimen planted near Ribeauvillé in France in 1856 and measured in 2014 at a height between and at age 158 years.
United Kingdom
The giant sequoia was first brought into cultivation in Britain in 1853 by the horticulturist Patrick Matthew of Perthshire from seeds sent by his botanist son John in California. A much larger shipment of seed collected from the Calaveras Grove by William Lobb, acting for the Veitch Nursery near Exeter, arrived in England in December 1853; seed from this batch was widely distributed throughout Europe.
Growth in Britain is very fast, with the tallest tree, at Benmore in southwest Scotland, reaching in 2014 at age 150 years, and several others from tall; the stoutest is around in girth and in diameter, in Perthshire. The Royal Botanic Gardens at Kew, and in their second campus at Wakehurst, contain multiple large specimens of the species. Biddulph Grange Garden in Staffordshire holds a fine collection of both Sequoiadendron giganteum and Sequoia sempervirens (coast redwood). The General Sherman of California has a volume of ; by way of comparison, the largest giant sequoias in Great Britain have volumes no greater than , one example being the specimen in the New Forest.
Sequoiadendron giganteum has gained the Royal Horticultural Society's Award of Garden Merit.
An avenue of 218 giant sequoias was planted in 1865 near the town of Camberley, Surrey, England. The trees have since been surrounded by modern real estate development.
In 2024, there were 4,949 notable sequoias in the UK, though it is uncertain whether this is an undercount or an overcount. In addition there are an estimated 500,000 younger Sequoiadendron giganteum and Sequoia sempervirens. Growing conditions in the UK are generally more favourable for these trees than those in their native range in the US.
Germany
Probably the oldest sequoia in Germany, and possibly in continental Europe, was planted in 1852 as a gift of the British Royal Family to the Landgraviate of Hesse-Darmstadt, in a park near Bensheim. In 2015 it reached a height of 44.35 metres and a circumference of 5.94 metres, and it is praised for its beauty. It is also the largest sequoia in Germany.
King William I of Württemberg (1816–1864) imported seeds shortly before his death, and between 5,000 and 8,000 seedlings were raised in the greenhouses of the Wilhelma in Stuttgart. Thirty-five of those trees are still present in the Wilhelma. The seedlings were distributed throughout Württemberg and elsewhere, and planted on different soils, under various conditions, and at various elevations for long-term evaluation of their suitability for forestry. At least 135 trees can still be traced back to these seedlings.
Since then the tree has become well established as an ornamental in public parks and cemeteries, as well as on private properties, and can be found planted in small groups in the woods.
Two members of the German Dendrology Society, E. J. Martin and Illa Martin, introduced the giant sequoia into German forestry at the Sequoiafarm Kaldenkirchen in 1952.
Italy
Numerous giant sequoia were planted in Italy from 1860 through 1905. Several regions contain specimens that range from in height. The largest tree is in Roccavione, in the Piedmont, with a basal circumference of . One notable tree survived a tall flood wave in 1963 that was caused by a landslide at Vajont Dam. There are numerous giant sequoia in parks and reserves.
Growth rates in some areas of Europe are remarkable. One young tree in Italy reached tall and trunk diameter in 17 years.
Northern and Central Europe
Growth further northeast in Europe is limited by winter cold. In Denmark, where extreme winters can reach , the largest tree was tall and diameter in 1976 and is bigger today. One in Poland has purportedly survived temperatures down to with heavy snow cover.
Twenty-nine giant sequoias, measuring around in height, grow in Belgrade's municipality of Lazarevac in Serbia.
The oldest Sequoiadendron in the Czech Republic, at , grows in Ratměřice u Votic castle garden.
In Slatina, Croatia, a 32.5 m (107 ft) tall giant sequoia grows in the city park. Probably planted in 1890 and proclaimed a nature monument in 1967, it now stands as the centerpiece of the town's educational and informational center, with tourist facilities available.
United States and Canada
Giant sequoias are grown successfully in the Pacific Northwest and southern US, and less successfully in eastern North America. Giant sequoia cultivation is very successful in the Pacific Northwest from western Oregon north to southwest British Columbia, with fast growth rates. In Washington and Oregon, it is common to find giant sequoias that have been successfully planted in both urban and rural areas. Hundreds of sequoias have been planted on the Olympic Peninsula over the last 100 years, and some farms there have 50 or more trees planted 40 or more years ago.
In Seattle, a sequoia stands as a prominent landmark at the entrance to Seattle's downtown retail core. Other large specimens exceeding are located on the University of Washington and Seattle University campuses, in the Evergreen Washelli Memorial Park cemetery, and in the Leschi, Madrona, and Magnolia neighborhoods.
In the northeastern US there has been some limited success in growing the species, but growth is much slower there, and the tree is prone to Cercospora and Kabatina fungal diseases due to the hot, humid summer climate. A tree at Blithewold Gardens, in Bristol, Rhode Island, is reported to be the tallest in the New England states. The tree at the Tyler Arboretum in Delaware County, Pennsylvania, may be the tallest in the northeast. Specimens also grow in the Arnold Arboretum in Boston, Massachusetts (planted 1972, 18 m tall in 1998), at Longwood Gardens near Wilmington, Delaware, in the New Jersey State Botanical Garden at Skylands in Ringwood State Park, Ringwood, New Jersey, and in the Finger Lakes region of New York. Private plantings of giant sequoias around the Middle Atlantic States are not uncommon, and other publicly accessible specimens can be visited at the U.S. National Arboretum in Washington, D.C. A few trees have been established in Colorado as well. Additionally, numerous sequoias have been planted with success in the state of Michigan.
A cold-tolerant cultivar 'Hazel Smith' selected in about 1960 is proving more successful in the northeastern US. This clone was the sole survivor of several hundred seedlings grown at a nursery in New Jersey. The U.S. National Arboretum has a specimen grown from a cutting in 1970 that can be seen in the Gotelli Conifer Collection.
Since its last assessment as an endangered species in 2011, it was estimated that another 13–19% of the population (or 9,761–13,637 mature trees) were destroyed during the Castle Fire of 2020 and the KNP Complex & Windy Fire in 2021, events attributed to fire suppression and drought. Prescribed burns to reduce available fuel load may be crucial for saving the species.
As of 2021, there are approximately 60,000 living in its native California.
Australia
The Ballarat Botanical Gardens contain a significant collection, many of them about 150 years old. Jubilee Park and the Hepburn Mineral Springs Reserve in Daylesford, Cook Park in Orange, New South Wales and Carisbrook's Deep Creek park in Victoria both have specimens. Jamieson Township in the Victorian high country has two specimens which were planted in the early 1860s.
In Tasmania, specimens can be seen in private and public gardens, as sequoias were popular in the mid-Victorian era. The Westbury Village Green has specimens with more in Deloraine. The Tasmanian Arboretum contains both Sequoiadendron giganteum and Sequoia sempervirens specimens.
The Pialligo Redwood Forest consists of 3,000 surviving redwood specimens, of 122,000 planted, 500 meters east of the Canberra Airport. The forest was laid out by the city's designer Walter Burley Griffin, though the city's arborist, Thomas Charles Weston, advised against it. The National Arboretum Canberra began a grove of Sequoiadendron giganteum in 2008. They also grow in the abandoned arboretum at Mount Banda Banda in New South Wales.
New Zealand
Several impressive specimens of Sequoiadendron giganteum can be found in the South Island of New Zealand. Notable examples include a set of trees in a public park of Picton, as well as robust specimens in the public and botanical parks of Christchurch and Queenstown. There are also several in private gardens in Wānaka. Other locations in Christchurch and nearby include a number of trees at the Riccarton Park Racecourse and three large trees, at least 150 years old, on the roadside bordering private properties on Clyde Road, near Wai-Iti Terrace. The suburb of Redwood is named after a 160-year-old giant redwood tree in the grounds of a local hotel. At St James Church, Harewood, is a protected, very large specimen believed to be about 160 years old. A grove of about sixteen redwood trees of varying ages grows in Sheldon Park in the suburb of Belfast; some of these trees are in poor condition because of indifferent care. There is also a very large tree at Rangiora High School, which was planted for Queen Victoria's Golden Jubilee and is thus over 130 years old.
Record trees
Largest by trunk volume
As of 2009, the top ten largest giant sequoias sorted by volume of their trunks are:
The General Sherman tree is estimated to weigh about 2100 tonnes.
The Washington Tree was previously arguably the second largest tree with a volume of (although the upper half of its trunk was hollow, making the calculated volume debatable), but after losing the hollow upper half of its trunk in January 2005 following a fire, it is no longer of great size.
The largest giant sequoia ever recorded was the Father of the Forest from Calaveras Grove, an exceedingly massive tree which fell many centuries ago in the North Grove. Reported estimates put its height at a minimum of 365 feet and possibly over 435 ft, with a circumference of 110 ft. What is left of the tree is now a popular tourist attraction.
Tallest
Redwood – Redwood Mountain Grove –
tallest outside the United States: specimen near Ribeauvillé, France, measured in 2014 at a height of at age 158 years.
Oldest
Muir Snag – Converse Basin Grove – more than 3500 years
Greatest girth
Waterfall Tree – Alder Creek Grove – – tree with enormous basal buttress on very steep ground.
Greatest base diameter
Waterfall Tree – Alder Creek Grove – – tree with enormous basal buttress on very steep ground.
Tunnel Tree – Atwell Mill Grove – – tree with a huge flared base that has burned all the way through.
Greatest mean diameter at breast height
General Grant – General Grant Grove –
Largest limb
Arm Tree – Atwell Mill, East Fork Grove – in diameter
Thickest bark
or more
| Biology and health sciences | Cupressaceae | Plants |
50640 | https://en.wikipedia.org/wiki/Texas%20Longhorn | Texas Longhorn | The Texas Longhorn is an American breed of beef cattle, characterized by its long horns, which can span more than from tip to tip. It derives from cattle brought from the Iberian Peninsula to the Americas by Spanish conquistadors from the time of the Second Voyage of Christopher Columbus until about 1512. For hundreds of years the cattle lived a semi-feral existence on the rangelands; they have a higher tolerance of heat and drought than most European breeds. It can be of any color or mix of colors.
In the 21st century it is considered part of the cultural heritage of Texas.
History
The Texas Longhorn derives from cattle brought to the Americas by Spanish conquistadors from the time of the Second Voyage of Christopher Columbus until about 1512. The first cattle were landed in 1493 on the Caribbean island of La Isla Española (now known as Hispaniola) to provide food for the colonists.
Over the next two centuries, the Spaniards used the cattle in Mexico and gradually moved them north to accompany their expanding settlements. The Spaniards reached the area that became known as "Texas" near the end of the 17th century. Eventually, some cattle escaped or were turned loose on the open range, where they remained mostly feral for the next two centuries. Over several generations, descendants of these cattle developed to have high feed- and drought-stress tolerances and other "hardy" characteristics that have given Longhorns their reputation as livestock.
The Texas Longhorn stock slowly dwindled, but in 1927, the breed was saved from near extinction by enthusiasts from the United States Forest Service. They collected a small herd of stock to breed on the Wichita Mountains Wildlife Refuge in Lawton, Oklahoma. The breed also received significant attention after a Texas Longhorn named "Bevo" was adopted as the mascot of The University of Texas at Austin in 1917. The animal's image became commonly associated with the school's sports teams, known as the Texas Longhorns. A few years later, J. Frank Dobie and others gathered small herds to keep in Texas state parks. Oilman Sid W. Richardson helped finance the project. The Longhorns were cared for largely as curiosities, but the stock's longevity, resistance to disease, and ability to thrive on marginal pastures resulted in a revival of the breed as beef stock and for their link to Texas history.
In 1957, Charles Schreiner III began creating a Longhorn herd on his ranch, the Y O, in Mountain Home, Texas, as a tribute to the ranching legacy of his grandfather, Captain Charles Armand Schreiner, and the Longhorns he ran on his ranches. Schreiner purchased five heifers and one bull calf for $75 each from the Wichita Mountains Wildlife Refuge near Lawton. In 1964, Schreiner founded the Texas Longhorn Breeders Association of America. The YO herd were the first cattle registered with the association. To draw attention to the Longhorn and its new association, in 1966, Schreiner organized a cattle drive of Longhorn steers from San Antonio, Texas to Dodge City, Kansas. The drive was promoted as a centennial commemoration of the earlier Chisholm Trail drives. Schreiner arranged for local members of the Quanah sheriff's posse to stage a simulated “Indian attack” as the steers crossed the Red River at Doan's Crossing. The attack was so authentic that the steers stampeded with cowboys in close pursuit. Four hours were needed to reassemble the herd. In 1976, Texas Tech University in Lubbock persuaded Schreiner to stage a cattle trail drive to celebrate its new National Ranching Heritage Center.
In 1995, the Texas Legislature designated the Texas Longhorn as the state large mammal. In the 21st century, Texas Longhorns from elite bloodlines can sell for $40,000 or more at auction. The record of $380,000 on March 18, 2017, was for a cow, 3S Danica, and heifer calf at side, during the Legacy XIII sale in Fort Worth, Texas.
Registries for the breed include: the Texas Longhorn Breeders Association of America, founded in 1964 by the Kerr County rancher Charles Schreiner III; the International Texas Longhorn Association; and the Cattlemen's Texas Longhorn Registry. The online National Texas Longhorn Museum displays the diversity of horns found in the breed, stories about notable individual cattle of the breed, and a gallery of furniture made from cattle horns.
Characteristics
The Longhorn is genetically close to Iberian cattle breeds such as the De Lidia and Retinta of Spain and the Alentejana and Mertolenga of Portugal. Like other Criollo cattle of the Americas and many breeds of southern Europe, it is principally of taurine (European) derivation, but has a small admixture of indicine genetic heritage; this may be a consequence of gene flow across the Strait of Gibraltar from cattle of African origin dating to before the time of the Spanish Conquest.
The horns are in some cases very long. In general, the horns of bulls are of moderate length, while those of steers may be much longer. In 2022 the Guinness Book of Records listed the longest horn spreads measured on living animals as those of a steer called Poncho Via, a cow named 3S Danica, and a bull named Cowboy Tuff Chex; all three were Texas Longhorns.
Coat color is extremely variable. In some 40% of the cattle it is some shade of red, often a light red; the only shade of red not seen is the deep color typical of the Hereford. The finching pattern is common; when the base color is black it is called zorrillo, from the Spanish word for 'skunk'. Other colors include variations of black, blue, brown, cream, dun, grey, yellow or white, either with or without brindling (called gateado, from the Spanish word for 'cat'), speckling or spotting. Speckled and solid-colored animals are in roughly equal proportion.
Use
The Longhorn was traditionally reared for beef. In the 21st century it is considered part of the cultural heritage of Texas; it is the official large mammal of the state.
It may be kept for conservation reasons, or bred for greater horn length. It is occasionally used for steer riding.
| Biology and health sciences | Cattle | Animals |
50650 | https://en.wikipedia.org/wiki/Astronomy | Astronomy | Astronomy is a natural science that studies celestial objects and the phenomena that occur in the cosmos. It uses mathematics, physics, and chemistry in order to explain their origin and their overall evolution. Objects of interest include planets, moons, stars, nebulae, galaxies, meteoroids, asteroids, and comets. Relevant phenomena include supernova explosions, gamma ray bursts, quasars, blazars, pulsars, and cosmic microwave background radiation. More generally, astronomy studies everything that originates beyond Earth's atmosphere. Cosmology is a branch of astronomy that studies the universe as a whole.
Astronomy is one of the oldest natural sciences. The early civilizations in recorded history made methodical observations of the night sky. These include the Egyptians, Babylonians, Greeks, Indians, Chinese, Maya, and many ancient indigenous peoples of the Americas. In the past, astronomy included disciplines as diverse as astrometry, celestial navigation, observational astronomy, and the making of calendars.
Professional astronomy is split into observational and theoretical branches. Observational astronomy is focused on acquiring data from observations of astronomical objects. This data is then analyzed using basic principles of physics. Theoretical astronomy is oriented toward the development of computer or analytical models to describe astronomical objects and phenomena. These two fields complement each other. Theoretical astronomy seeks to explain observational results and observations are used to confirm theoretical results.
Astronomy is one of the few sciences in which amateurs play an active role. This is especially true for the discovery and observation of transient events. Amateur astronomers have helped with many important discoveries, such as finding new comets.
Etymology
Astronomy (from the Greek ἀστρονομία from ἄστρον astron, "star" and -νομία -nomia from νόμος nomos, "law" or "culture") means "law of the stars" (or "culture of the stars" depending on the translation). Astronomy should not be confused with astrology, the belief system which claims that human affairs are correlated with the positions of celestial objects. Although the two fields share a common origin, they are now entirely distinct.
Use of terms "astronomy" and "astrophysics"
"Astronomy" and "astrophysics" are synonyms. Based on strict dictionary definitions, "astronomy" refers to "the study of objects and matter outside the Earth's atmosphere and of their physical and chemical properties", while "astrophysics" refers to the branch of astronomy dealing with "the behavior, physical properties, and dynamic processes of celestial objects and phenomena". In some cases, as in the introduction of the introductory textbook The Physical Universe by Frank Shu, "astronomy" may be used to describe the qualitative study of the subject, whereas "astrophysics" is used to describe the physics-oriented version of the subject. However, since most modern astronomical research deals with subjects related to physics, modern astronomy could actually be called astrophysics. Some fields, such as astrometry, are purely astronomy rather than also astrophysics. Various departments in which scientists carry out research on this subject may use "astronomy" and "astrophysics", partly depending on whether the department is historically affiliated with a physics department, and many professional astronomers have physics rather than astronomy degrees. Some titles of the leading scientific journals in this field include The Astronomical Journal, The Astrophysical Journal, and Astronomy & Astrophysics.
History
Pre-historic astronomy
In early historic times, astronomy only consisted of the observation and predictions of the motions of objects visible to the naked eye. In some locations, early cultures assembled massive artifacts that may have had some astronomical purpose. In addition to their ceremonial uses, these observatories could be employed to determine the seasons, an important factor in knowing when to plant crops and in understanding the length of the year.
Classical astronomy
As civilizations developed, most notably in Egypt, Mesopotamia, Greece, Persia, India, China, and Central America, astronomical observatories were assembled and ideas on the nature of the Universe began to develop. Most early astronomy consisted of mapping the positions of the stars and planets, a science now referred to as astrometry. From these observations, early ideas about the motions of the planets were formed, and the nature of the Sun, Moon and the Earth in the Universe were explored philosophically. The Earth was believed to be the center of the Universe with the Sun, the Moon and the stars rotating around it. This is known as the geocentric model of the Universe, or the Ptolemaic system, named after Ptolemy.
A particularly important early development was the beginning of mathematical and scientific astronomy, which began among the Babylonians, who laid the foundations for the later astronomical traditions that developed in many other civilizations. The Babylonians discovered that lunar eclipses recurred in a repeating cycle known as a saros.
Following the Babylonians, significant advances in astronomy were made in ancient Greece and the Hellenistic world. Greek astronomy is characterized from the start by seeking a rational, physical explanation for celestial phenomena. In the 3rd century BC, Aristarchus of Samos estimated the size and distance of the Moon and Sun, and he proposed a model of the Solar System where the Earth and planets rotated around the Sun, now called the heliocentric model. In the 2nd century BC, Hipparchus discovered precession, calculated the size and distance of the Moon and invented the earliest known astronomical devices such as the astrolabe. Hipparchus also created a comprehensive catalog of 1020 stars, and most of the constellations of the northern hemisphere derive from Greek astronomy. The Antikythera mechanism (c. 150–80 BC) was an early analog computer designed to calculate the location of the Sun, Moon, and planets for a given date. Technological artifacts of similar complexity did not reappear until the 14th century, when mechanical astronomical clocks appeared in Europe.
Post-classical astronomy
Astronomy flourished in the Islamic world and elsewhere. This led to the emergence of the first astronomical observatories in the Muslim world by the early 9th century. In 964, the Andromeda Galaxy, the largest galaxy in the Local Group, was described by the Persian Muslim astronomer Abd al-Rahman al-Sufi in his Book of Fixed Stars. The SN 1006 supernova, the brightest apparent magnitude stellar event in recorded history, was observed by the Egyptian Arabic astronomer Ali ibn Ridwan and Chinese astronomers in 1006. Iranian scholar Al-Biruni observed that, contrary to Ptolemy, the Sun's apogee (highest point in the heavens) was mobile, not fixed. Some of the prominent Islamic (mostly Persian and Arab) astronomers who made significant contributions to the science include Al-Battani, Thebit, Abd al-Rahman al-Sufi, Biruni, Abū Ishāq Ibrāhīm al-Zarqālī, Al-Birjandi, and the astronomers of the Maragheh and Samarkand observatories. Astronomers during that time introduced many Arabic names now used for individual stars.
It is also believed that the ruins at Great Zimbabwe and Timbuktu may have housed astronomical observatories. In post-classical West Africa, astronomers studied the movement of stars and their relation to the seasons, crafting charts of the heavens as well as precise diagrams of the orbits of the other planets based on complex mathematical calculations. Songhai historian Mahmud Kati documented a meteor shower in August 1583.
Europeans had previously believed that there had been no astronomical observation in sub-Saharan Africa during the pre-colonial Middle Ages, but modern discoveries show otherwise.
For over six centuries (from the recovery of ancient learning during the late Middle Ages into the Enlightenment), the Roman Catholic Church gave more financial and social support to the study of astronomy than probably all other institutions. Among the Church's motives was finding the date for Easter.
Medieval Europe housed a number of important astronomers. Richard of Wallingford (1292–1336) made major contributions to astronomy and horology, including the invention of the first astronomical clock, the Rectangulus, which allowed for the measurement of angles between planets and other astronomical bodies, as well as an equatorium called the Albion, which could be used for astronomical calculations such as lunar, solar and planetary longitudes and could predict eclipses. Nicole Oresme (1320–1382) and Jean Buridan (1300–1361) first discussed evidence for the rotation of the Earth; furthermore, Buridan also developed the theory of impetus (a predecessor of the modern scientific theory of inertia), which was able to show that planets were capable of motion without the intervention of angels. Georg von Peuerbach (1423–1461) and Regiomontanus (1436–1476) helped make astronomical progress instrumental to Copernicus's development of the heliocentric model decades later.
Early telescopic astronomy
During the Renaissance, Nicolaus Copernicus proposed a heliocentric model of the solar system. His work was defended by Galileo Galilei and expanded upon by Johannes Kepler. Kepler was the first to devise a system that correctly described the details of the motion of the planets around the Sun. However, Kepler did not succeed in formulating a theory behind the laws he wrote down. It was Isaac Newton, with his invention of celestial dynamics and his law of gravitation, who finally explained the motions of the planets. Newton also developed the reflecting telescope.
Improvements in the size and quality of the telescope led to further discoveries. The English astronomer John Flamsteed catalogued over 3000 stars. More extensive star catalogues were produced by Nicolas Louis de Lacaille. The astronomer William Herschel made a detailed catalog of nebulosity and clusters, and in 1781 discovered the planet Uranus, the first new planet found.
During the 18–19th centuries, the study of the three-body problem by Leonhard Euler, Alexis Claude Clairaut, and Jean le Rond d'Alembert led to more accurate predictions about the motions of the Moon and planets. This work was further refined by Joseph-Louis Lagrange and Pierre Simon Laplace, allowing the masses of the planets and moons to be estimated from their perturbations.
Significant advances in astronomy came about with the introduction of new technology, including the spectroscope and photography. Joseph von Fraunhofer discovered about 600 bands in the spectrum of the Sun in 1814–15, which, in 1859, Gustav Kirchhoff ascribed to the presence of different elements. Stars were proven to be similar to the Earth's own Sun, but with a wide range of temperatures, masses, and sizes.
Deep space astronomy
The existence of the Earth's galaxy, the Milky Way, as its own group of stars was only proven in the 20th century, along with the existence of "external" galaxies. The observed recession of those galaxies led to the discovery of the expansion of the Universe. In 1919, when the Hooker Telescope was completed, the prevailing view was that the universe consisted entirely of the Milky Way Galaxy. Using the Hooker Telescope, Edwin Hubble identified Cepheid variables in several spiral nebulae and in 1922–1923 proved conclusively that the Andromeda Nebula and Triangulum, among others, were entire galaxies outside our own, thus proving that the universe consists of a multitude of galaxies. With this, Hubble formulated the Hubble constant, which for the first time allowed the age of the Universe and the size of the observable Universe to be calculated. These figures became increasingly precise with better measurements, starting at 2 billion years and 280 million light-years, until 2006, when data from the Hubble Space Telescope allowed a very accurate calculation of both.
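To connect the Hubble constant to these age estimates, here is a back-of-envelope worked equation (an illustration added here, not the article's own calculation; the two H₀ values are assumed round figures, one close to Hubble's early estimate and one modern):

```latex
% Hubble time t_H = 1/H_0 as a rough age estimate for the Universe.
\[
  t_H = \frac{1}{H_0}, \qquad
  1\,\mathrm{Mpc} \approx 3.086 \times 10^{19}\,\mathrm{km},
\]
\[
  H_0 \approx 500\ \mathrm{km\,s^{-1}\,Mpc^{-1}} \;\Rightarrow\;
  t_H \approx \frac{3.086 \times 10^{19}}{500}\,\mathrm{s}
      \approx 6.2 \times 10^{16}\,\mathrm{s} \approx 2\ \mathrm{Gyr},
\]
\[
  H_0 \approx 70\ \mathrm{km\,s^{-1}\,Mpc^{-1}} \;\Rightarrow\;
  t_H \approx 4.4 \times 10^{17}\,\mathrm{s} \approx 14\ \mathrm{Gyr}.
\]
```

This is why the early, much-too-large values of H₀ yielded the roughly 2-billion-year age quoted above, while modern values give about 14 billion years.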
Theoretical astronomy led to speculations on the existence of objects such as black holes and neutron stars, which have been used to explain such observed phenomena as quasars, pulsars, blazars, and radio galaxies. Physical cosmology made huge advances during the 20th century. In the early 1900s the model of the Big Bang theory was formulated, heavily evidenced by cosmic microwave background radiation, Hubble's law, and the cosmological abundances of elements. Space telescopes have enabled measurements in parts of the electromagnetic spectrum normally blocked or blurred by the atmosphere. In February 2016, it was revealed that the LIGO project had detected evidence of gravitational waves in the previous September.
Observational astronomy
The main source of information about celestial bodies and other objects is visible light, or more generally electromagnetic radiation. Observational astronomy may be categorized according to the corresponding region of the electromagnetic spectrum on which the observations are made. Some parts of the spectrum can be observed from the Earth's surface, while other parts are only observable from either high altitudes or outside the Earth's atmosphere. Specific information on these subfields is given below.
Radio astronomy
Radio astronomy uses radiation with wavelengths greater than approximately one millimeter, outside the visible range. Radio astronomy is different from most other forms of observational astronomy in that the observed radio waves can be treated as waves rather than as discrete photons. Hence, it is relatively easier to measure both the amplitude and phase of radio waves, whereas this is not as easily done at shorter wavelengths.
Although some radio waves are emitted directly by astronomical objects, a product of thermal emission, most of the radio emission that is observed is the result of synchrotron radiation, which is produced when electrons orbit magnetic fields. Additionally, a number of spectral lines produced by interstellar gas, notably the hydrogen spectral line at 21 cm, are observable at radio wavelengths.
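As a quick worked check of scale (added as an illustration, using only the wavelength named above), the observing frequency of the 21 cm hydrogen line follows from the wave relation:

```latex
% Frequency of the neutral-hydrogen line from nu = c / lambda.
\[
  \nu = \frac{c}{\lambda}
      \approx \frac{3.00 \times 10^{8}\ \mathrm{m\,s^{-1}}}{0.211\ \mathrm{m}}
      \approx 1.42 \times 10^{9}\ \mathrm{Hz} \approx 1420\ \mathrm{MHz},
\]
```

so surveys of interstellar neutral hydrogen are conducted with receivers tuned near 1.4 GHz.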
A wide variety of other objects are observable at radio wavelengths, including supernovae, interstellar gas, pulsars, and active galactic nuclei.
Infrared astronomy
Infrared astronomy is founded on the detection and analysis of infrared radiation, wavelengths longer than red light and outside the range of our vision. The infrared spectrum is useful for studying objects that are too cold to radiate visible light, such as planets, circumstellar disks or nebulae whose light is blocked by dust. The longer wavelengths of infrared can penetrate clouds of dust that block visible light, allowing the observation of young stars embedded in molecular clouds and the cores of galaxies. Observations from the Wide-field Infrared Survey Explorer (WISE) have been particularly effective at unveiling numerous galactic protostars and their host star clusters.
With the exception of infrared wavelengths close to visible light, such radiation is heavily absorbed by the atmosphere, or masked, as the atmosphere itself produces significant infrared emission. Consequently, infrared observatories have to be located in high, dry places on Earth or in space. Some molecules radiate strongly in the infrared. This allows the study of the chemistry of space; more specifically it can detect water in comets.
Optical astronomy
Historically, optical astronomy, also called visible light astronomy, is the oldest form of astronomy. Images of observations were originally drawn by hand. In the late 19th century and most of the 20th century, images were made using photographic equipment. Modern images are made using digital detectors, particularly charge-coupled devices (CCDs), and recorded on digital media. Although visible light itself extends from approximately 4000 Å to 7000 Å (400 nm to 700 nm), the same equipment can be used to observe some near-ultraviolet and near-infrared radiation.
Ultraviolet astronomy
Ultraviolet astronomy employs ultraviolet wavelengths between approximately 100 and 3200 Å (10 to 320 nm). Light at those wavelengths is absorbed by the Earth's atmosphere, requiring observations at these wavelengths to be performed from the upper atmosphere or from space. Ultraviolet astronomy is best suited to the study of thermal radiation and spectral emission lines from hot blue stars (OB stars) that are very bright in this wave band. This includes the blue stars in other galaxies, which have been the targets of several ultraviolet surveys. Other objects commonly observed in ultraviolet light include planetary nebulae, supernova remnants, and active galactic nuclei. However, as ultraviolet light is easily absorbed by interstellar dust, an adjustment of ultraviolet measurements is necessary.
X-ray astronomy
X-ray astronomy uses X-ray wavelengths. Typically, X-ray radiation is produced by synchrotron emission (the result of electrons orbiting magnetic field lines), thermal emission from thin gases above 10⁷ (10 million) kelvins, and thermal emission from thick gases above 10⁷ kelvins. Since X-rays are absorbed by the Earth's atmosphere, all X-ray observations must be performed from high-altitude balloons, rockets, or X-ray astronomy satellites. Notable X-ray sources include X-ray binaries, pulsars, supernova remnants, elliptical galaxies, clusters of galaxies, and active galactic nuclei.
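For a sense of why gas at such temperatures radiates in X-rays, Wien's displacement law (standard blackbody physics, offered here as an illustration rather than something stated in the article) locates the peak thermal wavelength:

```latex
% Wien's displacement law applied to a 10^7 K gas.
\[
  \lambda_{\max} = \frac{b}{T}, \qquad
  b \approx 2.898 \times 10^{-3}\ \mathrm{m\,K},
\]
\[
  T = 10^{7}\ \mathrm{K} \;\Rightarrow\;
  \lambda_{\max} \approx 2.9 \times 10^{-10}\ \mathrm{m} \approx 0.3\ \mathrm{nm},
\]
```

which lies squarely in the X-ray band.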
Gamma-ray astronomy
Gamma ray astronomy observes astronomical objects at the shortest wavelengths of the electromagnetic spectrum. Gamma rays may be observed directly by satellites such as the Compton Gamma Ray Observatory or by specialized telescopes called atmospheric Cherenkov telescopes. The Cherenkov telescopes do not detect the gamma rays directly but instead detect the flashes of visible light produced when gamma rays are absorbed by the Earth's atmosphere.
Most gamma-ray emitting sources are actually gamma-ray bursts, objects which only produce gamma radiation for a few milliseconds to thousands of seconds before fading away. Only 10% of gamma-ray sources are non-transient sources. These steady gamma-ray emitters include pulsars, neutron stars, and black hole candidates such as active galactic nuclei.
Fields not based on the electromagnetic spectrum
In addition to electromagnetic radiation, a few other events originating from great distances may be observed from the Earth.
In neutrino astronomy, astronomers use heavily shielded underground facilities such as SAGE, GALLEX, and Kamioka II/III for the detection of neutrinos. The vast majority of the neutrinos streaming through the Earth originate from the Sun, but 24 neutrinos were also detected from supernova 1987A. Cosmic rays, which consist of very high energy particles (atomic nuclei) that can decay or be absorbed when they enter the Earth's atmosphere, result in a cascade of secondary particles which can be detected by current observatories. Some future neutrino detectors may also be sensitive to the particles produced when cosmic rays hit the Earth's atmosphere.
Gravitational-wave astronomy is an emerging field of astronomy that employs gravitational-wave detectors to collect observational data about distant massive objects. A few observatories have been constructed, such as the Laser Interferometer Gravitational-Wave Observatory (LIGO). LIGO made its first detection on 14 September 2015, observing gravitational waves from a binary black hole. A second gravitational-wave event was detected on 26 December 2015, and observations are expected to continue, although detecting gravitational waves requires extremely sensitive instruments.
The combination of observations made using electromagnetic radiation, neutrinos, or gravitational waves with other complementary information is known as multi-messenger astronomy.
Astrometry and celestial mechanics
One of the oldest fields in astronomy, and in all of science, is the measurement of the positions of celestial objects. Historically, accurate knowledge of the positions of the Sun, Moon, planets and stars has been essential in celestial navigation (the use of celestial objects to guide navigation) and in the making of calendars.
Careful measurement of the positions of the planets has led to a solid understanding of gravitational perturbations, and an ability to determine past and future positions of the planets with great accuracy, a field known as celestial mechanics. More recently, the tracking of near-Earth objects allows for predictions of close encounters or potential collisions of the Earth with those objects.
The measurement of stellar parallax of nearby stars provides a fundamental baseline in the cosmic distance ladder that is used to measure the scale of the Universe. Parallax measurements of nearby stars provide an absolute baseline for the properties of more distant stars, as their properties can be compared. Measurements of the radial velocity and proper motion of stars allow astronomers to plot the movement of these systems through the Milky Way galaxy. Astrometric results are the basis used to calculate the distribution of speculated dark matter in the galaxy.
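The parallax relation itself is compact enough to state as a worked equation (the Proxima Centauri parallax below is a standard textbook value, quoted here as an assumed illustration, not a figure from the article):

```latex
% Distance in parsecs from parallax angle in arcseconds.
\[
  d\,[\mathrm{pc}] = \frac{1}{p\,[\mathrm{arcsec}]}, \qquad
  p \approx 0.768'' \;\Rightarrow\;
  d \approx 1.30\ \mathrm{pc} \approx 4.2\ \mathrm{ly}.
\]
```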
During the 1990s, the measurement of the stellar wobble of nearby stars was used to detect large extrasolar planets orbiting those stars.
Theoretical astronomy
Theoretical astronomers use several tools including analytical models and computational numerical simulations; each has its particular advantages. Analytical models of a process are better for giving broader insight into the heart of what is going on. Numerical models reveal the existence of phenomena and effects otherwise unobserved.
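As an illustration of the numerical side, the sketch below integrates a two-body orbit with the leapfrog scheme; it is a minimal toy model under assumed units (AU, years, solar masses), not any specific research code:

```python
# Minimal two-body orbit integrator using the leapfrog (kick-drift-kick)
# scheme. Units chosen so that GM_sun = 4*pi**2 (AU^3 / yr^2).
import math

GM = 4.0 * math.pi ** 2  # gravitational parameter of the Sun in these units

def accel(x, y):
    """Gravitational acceleration on a test body at (x, y) from the Sun at the origin."""
    r3 = (x * x + y * y) ** 1.5
    return -GM * x / r3, -GM * y / r3

# Earth-like initial conditions: 1 AU from the Sun, circular speed 2*pi AU/yr.
x, y = 1.0, 0.0
vx, vy = 0.0, 2.0 * math.pi
dt = 1.0e-3  # time step in years

ax, ay = accel(x, y)
for _ in range(int(1.0 / dt)):  # integrate for one year
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay  # half kick
    x += dt * vx;        y += dt * vy          # drift
    ax, ay = accel(x, y)
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay  # half kick

# After one orbital period the body should return close to (1, 0).
print(f"position after 1 yr: ({x:.4f}, {y:.4f}) AU")
```

Leapfrog is a common choice for such models because it is symplectic, so the orbital energy stays bounded over long integrations rather than drifting.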
Theorists in astronomy endeavor to create theoretical models that are based on existing observations and known physics, and to predict observational consequences of those models. The observation of phenomena predicted by a model allows astronomers to select between several alternative or conflicting models. Theorists also modify existing models to take into account new observations. In some cases, a large amount of observational data that is inconsistent with a model may lead to abandoning it largely or completely, as for geocentric theory, the existence of luminiferous aether, and the steady-state model of cosmic evolution.
Phenomena modeled by theoretical astronomers include:
stellar dynamics and evolution
galaxy formation
large-scale distribution of matter in the Universe
the origin of cosmic rays
general relativity and physical cosmology, including string cosmology and astroparticle physics.
Modern theoretical astronomy reflects dramatic advances in observation since the 1990s, including studies of the cosmic microwave background, distant supernovae and galaxy redshifts, which have led to the development of a standard model of cosmology. This model requires the universe to contain large amounts of dark matter and dark energy whose nature is currently not well understood, but the model gives detailed predictions that are in excellent agreement with many diverse observations.
Specific subfields
Astrophysics
Astrophysics is the branch of astronomy that employs the principles of physics and chemistry "to ascertain the nature of the astronomical objects, rather than their positions or motions in space". Among the objects studied are the Sun, other stars, galaxies, extrasolar planets, the interstellar medium and the cosmic microwave background. Their emissions are examined across all parts of the electromagnetic spectrum, and the properties examined include luminosity, density, temperature, and chemical composition. Because astrophysics is a very broad subject, astrophysicists typically apply many disciplines of physics, including mechanics, electromagnetism, statistical mechanics, thermodynamics, quantum mechanics, relativity, nuclear and particle physics, and atomic and molecular physics.
In practice, modern astronomical research often involves a substantial amount of work in the realms of theoretical and observational physics. Some areas of study for astrophysicists include their attempts to determine the properties of dark matter, dark energy, and black holes; whether or not time travel is possible, wormholes can form, or the multiverse exists; and the origin and ultimate fate of the universe. Topics also studied by theoretical astrophysicists include Solar System formation and evolution; stellar dynamics and evolution; galaxy formation and evolution; magnetohydrodynamics; large-scale structure of matter in the universe; origin of cosmic rays; general relativity and physical cosmology, including string cosmology and astroparticle physics.
Astrochemistry
Astrochemistry is the study of the abundance and reactions of molecules in the Universe, and their interaction with radiation. The discipline is an overlap of astronomy and chemistry. The word "astrochemistry" may be applied to both the Solar System and the interstellar medium. The study of the abundance of elements and isotope ratios in Solar System objects, such as meteorites, is also called cosmochemistry, while the study of interstellar atoms and molecules and their interaction with radiation is sometimes called molecular astrophysics. The formation, atomic and chemical composition, evolution and fate of molecular gas clouds is of special interest, because it is from these clouds that solar systems form. Studies in this field contribute to the understanding of the formation of the Solar System, Earth's origin and geology, abiogenesis, and the origin of climate and oceans.
Astrobiology
Astrobiology is an interdisciplinary scientific field concerned with the origins, early evolution, distribution, and future of life in the universe. Astrobiology considers the question of whether extraterrestrial life exists, and how humans can detect it if it does. The term exobiology is similar.
Astrobiology makes use of molecular biology, biophysics, biochemistry, chemistry, astronomy, physical cosmology, exoplanetology and geology to investigate the possibility of life on other worlds and help recognize biospheres that might be different from that on Earth. The origin and early evolution of life is an inseparable part of the discipline of astrobiology. Astrobiology concerns itself with interpretation of existing scientific data, and although speculation is entertained to give context, astrobiology concerns itself primarily with hypotheses that fit firmly into existing scientific theories.
This interdisciplinary field encompasses research on the origin of planetary systems, origins of organic compounds in space, rock-water-carbon interactions, abiogenesis on Earth, planetary habitability, research on biosignatures for life detection, and studies on the potential for life to adapt to challenges on Earth and in outer space.
Physical cosmology
Cosmology (from the Greek κόσμος (kosmos) 'world, universe' and λόγος (logos) 'word, study', or literally 'logic') could be considered the study of the Universe as a whole.
Observations of the large-scale structure of the Universe, a branch known as physical cosmology, have provided a deep understanding of the formation and evolution of the cosmos. Fundamental to modern cosmology is the well-accepted theory of the Big Bang, wherein our Universe began at a single point in time, and thereafter expanded over the course of 13.8 billion years to its present condition. The concept of the Big Bang can be traced back to the discovery of the microwave background radiation in 1965.
In the course of this expansion, the Universe underwent several evolutionary stages. In the very early moments, it is theorized that the Universe experienced a very rapid cosmic inflation, which homogenized the starting conditions. Thereafter, nucleosynthesis produced the elemental abundance of the early Universe. | Physical sciences | Science and medicine | null
50652 | https://en.wikipedia.org/wiki/Uniform%20convergence | Uniform convergence | In the mathematical field of analysis, uniform convergence is a mode of convergence of functions stronger than pointwise convergence. A sequence of functions $(f_n)$ converges uniformly to a limiting function $f$ on a set $E$ as the function domain if, given any arbitrarily small positive number $\varepsilon$, a number $N$ can be found such that each of the functions $f_N, f_{N+1}, f_{N+2}, \ldots$ differs from $f$ by no more than $\varepsilon$ at every point $x$ in $E$. Described in an informal way, if $f_n$ converges to $f$ uniformly, then how quickly the functions $f_n$ approach $f$ is "uniform" throughout $E$ in the following sense: in order to guarantee that $f_n(x)$ differs from $f(x)$ by less than a chosen distance $\varepsilon$, we only need to make sure that $n$ is larger than or equal to a certain $N$, which we can find without knowing the value of $x \in E$ in advance. In other words, there exists a number $N = N(\varepsilon)$ that could depend on $\varepsilon$ but is independent of $x$, such that choosing $n \geq N$ will ensure that $|f_n(x) - f(x)| < \varepsilon$ for all $x \in E$. In contrast, pointwise convergence of $f_n$ to $f$ merely guarantees that for any $x \in E$ given in advance, we can find $N = N(\varepsilon, x)$ (i.e., $N$ could depend on the values of both $\varepsilon$ and $x$) such that, for that particular $x$, $f_n(x)$ falls within $\varepsilon$ of $f(x)$ whenever $n \geq N$ (and a different $x$ may require a different, larger $N$ for $n \geq N$ to guarantee that $|f_n(x) - f(x)| < \varepsilon$).
The difference between uniform convergence and pointwise convergence was not fully appreciated early in the history of calculus, leading to instances of faulty reasoning. The concept, which was first formalized by Karl Weierstrass, is important because several properties of the functions , such as continuity, Riemann integrability, and, with additional hypotheses, differentiability, are transferred to the limit if the convergence is uniform, but not necessarily if the convergence is not uniform.
History
In 1821 Augustin-Louis Cauchy published a proof that a convergent sum of continuous functions is always continuous, to which Niels Henrik Abel in 1826 found purported counterexamples in the context of Fourier series, arguing that Cauchy's proof had to be incorrect. Completely standard notions of convergence did not exist at the time, and Cauchy handled convergence using infinitesimal methods. When put into the modern language, what Cauchy proved is that a uniformly convergent sequence of continuous functions has a continuous limit. The failure of a merely pointwise-convergent limit of continuous functions to converge to a continuous function illustrates the importance of distinguishing between different types of convergence when handling sequences of functions.
The term uniform convergence was probably first used by Christoph Gudermann, in an 1838 paper on elliptic functions, where he employed the phrase "convergence in a uniform way" when the "mode of convergence" of a series is independent of the variables involved. While he thought it a "remarkable fact" when a series converged in this way, he did not give a formal definition, nor use the property in any of his proofs.
Later Gudermann's pupil Karl Weierstrass, who attended his course on elliptic functions in 1839–1840, coined the term gleichmäßig konvergent (German for 'uniformly convergent'), which he used in his 1841 paper Zur Theorie der Potenzreihen, published in 1894. Independently, similar concepts were articulated by Philipp Ludwig von Seidel and George Gabriel Stokes. G. H. Hardy compares the three definitions in his paper "Sir George Stokes and the concept of uniform convergence" and remarks: "Weierstrass's discovery was the earliest, and he alone fully realized its far-reaching importance as one of the fundamental ideas of analysis."
Under the influence of Weierstrass and Bernhard Riemann this concept and related questions were intensely studied at the end of the 19th century by Hermann Hankel, Paul du Bois-Reymond, Ulisse Dini, Cesare Arzelà and others.
Definition
We first define uniform convergence for real-valued functions, although the concept is readily generalized to functions mapping to metric spaces and, more generally, uniform spaces (see below).
Suppose $E$ is a set and $(f_n)_{n \in \mathbb{N}}$ is a sequence of real-valued functions on it. We say the sequence $(f_n)_{n \in \mathbb{N}}$ is uniformly convergent on $E$ with limit $f : E \to \mathbb{R}$ if for every $\varepsilon > 0$ there exists a natural number $N$ such that $|f_n(x) - f(x)| < \varepsilon$ for all $n \geq N$ and for all $x \in E$.
The notation for uniform convergence of $f_n$ to $f$ is not quite standardized and different authors have used a variety of symbols, including (in roughly decreasing order of popularity) $f_n \rightrightarrows f$, $\underset{n\to\infty}{\mathrm{unif\ lim}}\, f_n = f$, and $f_n \overset{\mathrm{unif.}}{\longrightarrow} f$.
Frequently, no special symbol is used, and authors simply write
$f_n \to f$ uniformly
to indicate that convergence is uniform. (In contrast, the expression $f_n \to f$ on $E$ without an adverb is taken to mean pointwise convergence on $E$: for all $x \in E$, $f_n(x) \to f(x)$ as $n \to \infty$.)
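The classic textbook example (added here as an illustration; it is not drawn from this article) makes the distinction concrete. On $E = [0, 1]$, take $f_n(x) = x^n$:

```latex
% Pointwise but not uniform convergence of f_n(x) = x^n on [0, 1].
\[
  f_n(x) = x^n \longrightarrow f(x) =
  \begin{cases}
    0, & 0 \le x < 1,\\
    1, & x = 1,
  \end{cases}
  \qquad \text{pointwise, but}
\]
\[
  \sup_{x \in [0,1]} |f_n(x) - f(x)| = \sup_{0 \le x < 1} x^n = 1
  \quad \text{for every } n,
\]
```

so no single $N$ works for all $x$ simultaneously and the convergence is not uniform. On $[0, a]$ with $a < 1$, by contrast, $\sup_{x \in [0,a]} x^n = a^n \to 0$, and there the convergence is uniform.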
Since $\mathbb{R}$ is a complete metric space, the Cauchy criterion can be used to give an equivalent alternative formulation for uniform convergence: $(f_n)_{n \in \mathbb{N}}$ converges uniformly on $E$ (in the previous sense) if and only if for every $\varepsilon > 0$, there exists a natural number $N$ such that
$x \in E$ and $m, n \geq N$ together imply $|f_m(x) - f_n(x)| < \varepsilon$.
In yet another equivalent formulation, if we define
$d_n = \sup_{x \in E} |f_n(x) - f(x)|,$
then $f_n$ converges to $f$ uniformly if and only if $d_n \to 0$ as $n \to \infty$. Thus, we can characterize uniform convergence of $(f_n)_{n \in \mathbb{N}}$ on $E$ as (simple) convergence of $(f_n)_{n \in \mathbb{N}}$ in the function space $\mathbb{R}^E$ with respect to the uniform metric (also called the supremum metric), defined by
$d(f, g) = \sup_{x \in E} |f(x) - g(x)|.$
Symbolically,
$f_n \rightrightarrows f \iff d(f_n, f) \to 0$.
The sequence $(f_n)_{n \in \mathbb{N}}$ is said to be locally uniformly convergent with limit $f$ if $E$ is a metric space and for every $x \in E$, there exists an $r > 0$ such that $(f_n)$ converges uniformly on $B(x, r) \cap E$. It is clear that uniform convergence implies local uniform convergence, which implies pointwise convergence.
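A short numerical sketch of the supremum-metric characterization above (an illustration with assumed grids, not part of the article): for $f_n(x) = x^n$ and $f = 0$, the quantity $d_n$ shrinks on $[0, 0.9]$, whereas the true supremum over $[0, 1)$ is 1 for every $n$.

```python
# Approximate d_n = sup_{x in E} |f_n(x) - f(x)| on a finite grid,
# for f_n(x) = x**n and f = 0. Uniform convergence corresponds to d_n -> 0.
def d_n(n, xs):
    """Grid approximation of sup |x**n - 0| over the sample points xs."""
    return max(abs(x ** n) for x in xs)

grid_a = [0.9 * i / 1000 for i in range(1001)]    # grid on [0, 0.9]
grid_b = [0.999 * i / 1000 for i in range(1001)]  # grid on [0, 0.999], nearly [0, 1)

for n in (10, 50, 250):
    print(n, round(d_n(n, grid_a), 6), round(d_n(n, grid_b), 6))

# On [0, 0.9], d_n = 0.9**n -> 0: the convergence is uniform there.
# Over [0, 1) the true supremum is 1 for every n; the grid values decay
# only because the sample points stop short of 1, so convergence fails
# to be uniform on [0, 1).
```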
| Mathematics | Mathematical analysis | null |
50680 | https://en.wikipedia.org/wiki/Funicular | Funicular | A funicular ( ) is a type of cable railway system that connects points along a railway track laid on a steep slope. The system is characterized by two counterbalanced carriages (also called cars or trains) permanently attached to opposite ends of a haulage cable, which is looped over a pulley at the upper end of the track. The result of such a configuration is that the two carriages move synchronously: as one ascends, the other descends at an equal speed. This feature distinguishes funiculars from inclined elevators, which have a single car that is hauled uphill.
The term funicular derives from the Latin word funiculus, the diminutive of funis, meaning 'rope'.
Operation
In a funicular, both cars are permanently connected to the opposite ends of the same cable, known as a haul rope; this haul rope runs through a system of pulleys at the upper end of the line. If the railway track is not perfectly straight, the cable is guided along the track using sheaves – unpowered pulleys that simply allow the cable to change direction. While one car is pulled upwards by one end of the haul rope, the other car descends the slope at the other end. Since the weight of the two cars is counterbalanced (except for the weight of passengers), no lifting force is required to move them; the engine only has to lift the cable itself and the excess passengers, and supply the energy lost to friction by the cars' wheels and the pulleys.
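A minimal sketch of this force balance, with assumed masses, slope, and passenger loads (none of these figures come from the article; friction and cable weight are ignored):

```python
# Why counterbalancing saves power: the motor only has to lift the
# *difference* in mass between the ascending and descending cars.
import math

g = 9.81                # m/s^2
slope_deg = 30.0        # assumed track inclination
m_car = 12_000.0        # kg, empty carriage (assumed)
m_up_pax = 40 * 75.0    # kg, passengers in the ascending car (assumed)
m_down_pax = 10 * 75.0  # kg, passengers in the descending car (assumed)

theta = math.radians(slope_deg)
# Single car hauled uphill with no counterweight:
f_unbalanced = (m_car + m_up_pax) * g * math.sin(theta)
# Counterbalanced pair: the two car bodies cancel, leaving the passenger imbalance:
f_balanced = (m_up_pax - m_down_pax) * g * math.sin(theta)

print(f"force without counterbalance: {f_unbalanced / 1000:.1f} kN")  # ~73.6 kN
print(f"force with counterbalance:    {f_balanced / 1000:.1f} kN")    # ~11.0 kN
```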
For passenger comfort, funicular carriages are often (although not always) constructed so that the floor of the passenger deck is horizontal, and not necessarily parallel to the sloped track.
In some installations, the cars are also attached to a second cable – bottom towrope – which runs through a pulley at the bottom of the incline. In these designs, one of the pulleys must be designed as a tensioning wheel to avoid slack in the ropes. One advantage of such an installation is the fact that the weight of the rope is balanced between the carriages; therefore, the engine no longer needs to use any power to lift the cable itself. This practice is used on funiculars with slopes below 6%, funiculars using sledges instead of carriages, or any other case where it is not ensured that the descending car is always able to pull out the cable from the pulley in the station on the top of the incline. It is also used in systems where the engine room is located at the lower end of the track (such as the upper half of the Great Orme Tramway) – in such systems, the cable that runs through the top of the incline is still necessary to prevent the carriages from coasting down the incline.
Types of power systems
Cable drive
In most modern funiculars, neither of the two carriages is equipped with an engine of its own. Instead, the propulsion is provided by an electric motor in the engine room (typically at the upper end of the track); the motor is linked via a speed-reducing gearbox to a large pulley – a drive bullwheel – which then controls the movement of the haul rope using friction. Some early funiculars were powered in the same way, but using steam engines or other types of motor. The bullwheel has two grooves: after the first half turn around it the cable returns via an auxiliary pulley. This arrangement has the advantage of having twice the contact area between the cable and the groove, and returning the downward-moving cable in the same plane as the upward-moving one. Modern installations also use high friction liners to enhance the friction between the bullwheel grooves and the cable.
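The friction grip of the haul rope on the bullwheel can be estimated with the standard capstan (Euler–Eytelwein) equation; this is general rope-on-drum theory offered as an illustration, and the friction coefficient and wrap angle below are assumed values, not figures from the article:

```latex
% Capstan equation: maximum rope tension ratio before slipping on the wheel.
\[
  \frac{T_\text{tight}}{T_\text{slack}} \le e^{\mu \beta},
  \qquad
  \mu = 0.3,\ \beta = 2\pi
  \;\Rightarrow\;
  e^{0.3 \cdot 2\pi} \approx 6.6 .
\]
```

Because the attainable ratio is exponential in the wrap angle, the double-groove arrangement, which roughly doubles $\beta$, squares the ratio rather than merely doubling it; high-friction liners raise $\mu$ to the same effect.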
For emergency and service purposes two sets of brakes are used at the engine room: the emergency brake directly grips the bullwheel, and the service brake is mounted at the high speed shaft of the gear. In case of an emergency the cars are also equipped with spring-applied, hydraulically opened rail brakes.
The first funicular caliper brakes, which clamp each side of the crown of the rail, were invented by the Swiss entrepreneurs Franz Josef Bucher and Josef Durrer and implemented at the Stanserhorn funicular, opened in 1893. The Abt rack and pinion system was also used on some funiculars for speed control or emergency braking.
Water counterbalancing
Many early funiculars were built using water tanks under the floor of each car, which were filled or emptied until just sufficient imbalance was achieved to allow movement, and a few such funiculars still exist and operate in the same way. The car at the top of the hill is loaded with water until it is heavier than the car at the bottom, causing it to descend the hill and pull up the other car. The water is drained at the bottom, and the process repeats with the cars exchanging roles. The movement is controlled by a brakeman using the brake handle of the rack and pinion system engaged with the rack mounted between the rails.
The Bom Jesus funicular built in 1882 near Braga, Portugal is one of the extant systems of this type. Another example, the Fribourg funicular in Fribourg, Switzerland built in 1899, is of particular interest as it utilizes waste water, coming from a sewage plant at the upper part of the city.
Some funiculars of this type were later converted to electrical power. For example, the Giessbachbahn in the Swiss canton of Bern, opened in 1879, was originally powered by water ballast. In 1912 its energy provision was replaced by a hydraulic engine powered by a Pelton turbine. In 1948 this in turn was replaced by an electric motor.
Track layout
There are three main rail layouts used on funiculars; depending on the system, the track bed can consist of four, three, or two rails.
Early funiculars were built to the four-rail layout, with two separate parallel tracks and separate station platforms at both ends for each vehicle. The two tracks are laid with sufficient space between them for the two carriages to pass at the midpoint. While this layout requires the most land area, it is also the only layout that allows both tracks to be perfectly straight, requiring no sheaves on the tracks to keep the cable in place. Examples of four-rail funiculars are the Duquesne Incline in Pittsburgh, Pennsylvania, and most cliff railways in the United Kingdom.
In three-rail layouts, the middle rail is shared by both carriages, while each car runs on a different outer rail. To allow the two cars to pass at the halfway point, the middle rail must briefly split into two, forming a passing loop. Such systems are narrower and require less rail to construct than four-rail systems; however, they still require separate station platforms for each vehicle.
In a two-rail layout, both cars share the entire track except at the passing loop in the middle. This layout is the narrowest of all and needs only a single platform at each station (though sometimes two platforms are built: one for boarding, one for alighting). However, the required passing loop is more complex and costly to build, since special turnout systems must be in place to ensure that each car always enters the correct track at the loop. Furthermore, if a rack for braking is used, that rack can be mounted higher in three-rail and four-rail layouts, making it less sensitive to choking in snowy conditions compared to the two-rail layout.
Some funicular systems use a mix of different track layouts. An example of this arrangement is the lower half of the Great Orme Tramway, where the section "above" the passing loop has a three-rail layout (with each pair of adjacent rails having its own conduit which the cable runs through), while the section "below" the passing loop has a two-rail layout (with a single conduit shared by both cars). Another example is the Peak Tram in Hong Kong, which is mostly of a two-rail layout except for a short three-rail section immediately uphill of the passing loop.
Some four-rail funiculars have their tracks interlaced above and below the passing loop; this allows the system to be nearly as narrow as a two-rail system, with a single platform at each station, while also eliminating the need for the costly junctions either side of the passing loop. The Hill Train at the Legoland Windsor Resort is an example of this configuration.
Turnout systems for two-rail funiculars
In the case of two-rail funiculars, various solutions exist for ensuring that a carriage always enters the same track at the passing loop.
One such solution involves installing switches at each end of the passing loop. These switches are moved into their desired position by the carriage's wheels during trailing movements (i.e. away from the passing loop); this procedure also sets the route for the next trip in the opposite direction. The Great Orme Tramway is an example of a funicular that utilizes this system.
Another turnout system, known as the Abt switch, involves no moving parts on the track at all. Instead, the carriages are built with an unconventional wheelset design: the outboard wheels have flanges on both sides, whereas the inboard wheels are unflanged (and usually wider to allow them to roll over the turnouts more easily). The double-flanged wheels keep the carriages bound to one specific rail at all times. One car has the flanged wheels on the left-hand side, so it follows the leftmost rail, forcing it to run via the left branch of the passing loop; similarly, the other car has them on the right-hand side, meaning it follows the rightmost rail and runs on the right branch of the loop. This system was invented by Carl Roman Abt and first implemented on the Lugano Città–Stazione funicular in Switzerland in 1886; since then, the Abt turnout has gained popularity, becoming a standard for modern funiculars. The lack of moving parts on the track makes this system cost-effective and reliable compared to other systems.
Stations
The majority of funiculars have two stations, one at each end of the track. However, some systems have been built with additional intermediate stations. Because of the nature of a funicular system, intermediate stations are usually built symmetrically about the mid-point; this allows both cars to call simultaneously at a station. Examples of funiculars with more than two stations include the Wellington Cable Car in New Zealand (five stations, including one at the passing loop) and the Carmelit in Haifa, Israel (six stations, three on each side of the passing loop).
A few funiculars with asymmetrically placed stations also exist. For example, the Petřín funicular in Prague has three stations: one at each end, and a third (Nebozízek) a short way up from the passing loop. Because of this arrangement, carriages are forced to make a technical stop a short distance down from the passing loop as well, for the sole purpose of allowing the other car to call at Nebozízek.
History
Cable railway systems that pull their cars up inclined slopes have been built since the 1820s.
In the second half of the 19th century the design of a funicular as a transit system emerged.
It was especially attractive in comparison with the other systems of the time as counterbalancing of the cars was deemed to be a cost-cutting solution.
The first line of the Funiculars of Lyon opened in 1862, followed by other lines in 1878, 1891 and 1900. The Budapest Castle Hill Funicular was built in 1868–69, with the first test run on 23 October 1869.
The oldest funicular railway operating in Britain dates from 1875 and is in Scarborough, North Yorkshire.
In Istanbul, Turkey, the Tünel has been in continuous operation since 1875 and is both the first underground funicular and the second-oldest underground railway.
It remained powered by a steam engine up until it was taken for renovation in 1968.
Until the end of the 1870s, the four-rail parallel-track funicular was the normal configuration. Carl Roman Abt developed the Abt Switch allowing the two-rail layout, which was used for the first time in 1879 when the Giessbach Funicular opened in Switzerland.
In the United States, the first funicular to use a two-rail layout was the Telegraph Hill Railroad in San Francisco, which was in operation from 1884 until 1886. The Mount Lowe Railway in Altadena, California, was the first mountain railway in the United States to use the three-rail layout. Three- and two-rail layouts considerably reduced the space required for building a funicular, reducing grading costs on mountain slopes and property costs for urban funiculars. These layouts enabled a funicular boom in the latter half of the 19th century.
Currently, the United States' oldest and steepest funicular in continuous use is the Monongahela Incline in Pittsburgh, Pennsylvania. Construction began in 1869, and the incline officially opened on 28 May 1870 for passenger use. The Monongahela Incline also has the distinction of being the first funicular in the United States built strictly for passenger use rather than freight.
In 1880 the funicular of Mount Vesuvius inspired the Italian popular song Funiculì, Funiculà. This funicular was destroyed repeatedly by volcanic eruptions and abandoned after the eruption of 1944.
Exceptional examples
According to the Guinness World Records, the smallest public funicular in the world is the Fisherman's Walk Cliff Railway in Bournemouth, England.
Stoosbahn in Switzerland, with a maximum slope of 110% (47.7°), is the steepest funicular in the world; see the gradient-to-angle sketch after this list.
The Lynton and Lynmouth Cliff Railway, built in 1888, is the steepest and longest water-powered funicular in the world, climbing a 58% gradient.
The city of Valparaíso in Chile used to have up to 30 funicular elevators (ascensores). The oldest of them dates from 1883. Fifteen remain, almost half of them in operation and others in various stages of restoration.
The Carmelit in Haifa, Israel, with six stations and a tunnel 1.8 km (1.1 mi) long, is claimed by the Guinness World Records as the "least extensive metro" in the world. Technically, it is an underground funicular.
The Dresden Suspension Railway (), which hangs from an elevated rail, is the only suspended funicular in the world.
The Fribourg funicular is the only funicular in the world powered by wastewater.
Standseilbahn Linth-Limmern, capable of moving 215 t, is said to have the highest capacity.
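A note on the percentage gradients quoted in this list (a conversion sketch, not an additional claim): a slope of g% corresponds to an inclination angle $\theta = \arctan(g/100)$, and the figures above are consistent with this.

```latex
% Percent gradient to inclination angle.
\[
  \theta = \arctan\!\left(\frac{g}{100}\right):
  \qquad
  \arctan(1.10) \approx 47.7^\circ \ (\text{Stoosbahn, } 110\%),
  \qquad
  \arctan(0.58) \approx 30.1^\circ \ (\text{Lynton and Lynmouth, } 58\%).
\]
```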
Comparison with inclined elevators
Some inclined elevators are incorrectly called funiculars. On an inclined elevator the cars operate independently rather than in interconnected pairs, and are lifted uphill.
A notable example is Paris' Montmartre Funicular. Its formal title is a relic of its original configuration, when its two cars operated as a counterbalanced, interconnected pair, always moving in opposite directions, thus meeting the definition of a funicular. However, the system has since been redesigned, and now uses two independently-operating cars that can each ascend or descend on demand, qualifying as a double inclined elevator; the term "funicular" in its title is retained as a historical reference.
| Technology | Rail and cable transport | null |
50691 | https://en.wikipedia.org/wiki/Douglas%20fir | Douglas fir | The Douglas fir (Pseudotsuga menziesii) is an evergreen conifer species in the pine family, Pinaceae. It is native to western North America and is also known as Douglas-fir, Douglas spruce, Oregon pine, and Columbian pine. There are three varieties: coast Douglas-fir (P. menziesii var. menziesii), Rocky Mountain Douglas-fir (P. menziesii var. glauca) and Mexican Douglas-fir (P. menziesii var. lindleyana).
Despite its common names, it is not a true fir (genus Abies), spruce (genus Picea), or pine (genus Pinus). It is also not a hemlock; the genus name Pseudotsuga means "false hemlock".
Description
Douglas-fir is a medium-sized to extremely large evergreen tree; only coast Douglas-firs reach heights near 100 m, and trunks commonly reach substantial diameters, with trees of exceptional girth on record. The largest coast Douglas-firs regularly live over 500 years, with the oldest specimens living for over 1,300 years. Rocky Mountain Douglas-firs, found further to the east, are less long-lived, usually not exceeding 400 years in age.
There are records of former coast Douglas-firs whose heights, if matched by any tree alive today, would make the species the tallest on Earth. Particular historical specimens with heights exceeding 400 ft include the Lynn Valley Tree and the Nooksack Giant.
The leaves are flat, soft, linear needles, generally resembling those of the firs, occurring singly rather than in fascicles; they completely encircle the branches, which can be useful in recognizing the species. As the trees grow taller in denser forest, they lose their lower branches, so that the foliage may start high above the ground. Douglas-firs in environments with more light may have branches much closer to the ground.
The bark on young trees is thin, smooth, gray, and contains numerous resin blisters. On mature trees, usually exceeding 80 years, it is very thick and corky, with distinctive, deep vertical fissures caused by the gradual expansion of the growing tree. Some of the mature bark is brown, while other parts are lighter colored with a cork-like texture; these develop in multiple layers. This thick bark makes the Douglas-fir one of the most fire-resistant trees native to the Pacific Northwest.
The male cones are yellowish red. The female cones are green when young, maturing to reddish-brown or gray; they are pendulous, with persistent scales, unlike those of true firs. They have distinctive long, trifid (three-pointed) bracts which protrude prominently above each scale and are said to resemble the back half of a mouse, with two feet and a tail. The seeds are winged, the wing longer than the seed itself.
The massive mega-genome of Douglas-fir was sequenced in 2017 by the large PineRefSeq consortium, revealing a specialized photosynthetic apparatus in the light-harvesting complex of genes.
Taxonomy
The common name honors David Douglas, a Scottish botanist and collector who first reported the extraordinary nature and potential of the species. The common name is misleading since it is not a true fir, i.e., not a member of the genus Abies. For this reason, the name is often written as Douglas-fir (a name also used for the genus Pseudotsuga as a whole).
The specific epithet menziesii is after Archibald Menzies, a Scottish physician and rival naturalist to David Douglas. Menzies first documented the tree on Vancouver Island in 1791. Colloquially, the species is also known simply as Doug fir or Douglas pine (although the latter common name may also refer to Pinus douglasiana). Other names for this tree have included Oregon pine, British Columbian pine, Puget Sound pine, Douglas spruce, false hemlock, red fir, or red pine (although again red pine may refer to a different tree species, Pinus resinosa, and red fir may refer to Abies magnifica).
One Coast Salish name for the tree, used in the Halkomelem language, is . In the Lushootseed language, the tree is called .
Distribution
Pseudotsuga menziesii var. menziesii, the coast Douglas-fir, grows in the coastal regions from west-central British Columbia southward to Central California. In Oregon and Washington, its range is continuous from the eastern edge of the Cascades west to the Pacific Coast Ranges and Pacific Ocean. In California, it is found in the Klamath and California Coast Ranges as far south as the Santa Lucia Range, with a small stand as far south as the Purisima Hills in Santa Barbara County. One of the last remaining old-growth stands of conifers is in the Mattole Watershed, and is under threat of logging. In the Sierra Nevada, it ranges as far south as the Yosemite region. It occurs from sea level along the coast to high elevations, and inland in some cases higher still.
Another variety exists further inland, Pseudotsuga menziesii var. glauca, the Rocky Mountain Douglas-fir or interior Douglas-fir. Interior Douglas-fir intergrades with coast Douglas-fir in the Cascades of northern Washington and southern British Columbia, and from there ranges northward to central British Columbia and southeastward to the Mexican border, becoming increasingly disjunct as latitude decreases and altitude increases. Mexican Douglas-fir (P. lindleyana), which ranges as far south as Oaxaca, is often considered a variety of P. menziesii.
Fossils (wood, pollen) of Pseudotsuga are recorded from the Miocene and Pliocene of Europe (Siebengebirge, Gleiwitz, Austria).
It is also naturalised throughout Europe, Argentina and Chile (where it is called Pino Oregón). In New Zealand, where the species was introduced in the 20th century for its wood, it is considered an invasive species, called a wilding conifer, and is subject to control measures; nevertheless, it is one of the most common lumber trees used in forestry there, alongside radiata pine, with large plantations throughout the country.
Ecology
Preferred sites
Douglas-fir prefers acidic or neutral soils, such as Olympic soil. However, it exhibits considerable morphological plasticity, and on drier sites P. menziesii var. menziesii will generate deeper taproots. Pseudotsuga menziesii var. glauca exhibits even greater plasticity, occurring in stands of interior temperate rainforest in British Columbia, as well as at the edge of semi-arid sagebrush steppe throughout much of its range, where it generates even deeper taproots still.
The coast Douglas-fir variety is the dominant tree west of the Cascade Mountains in the Pacific Northwest. It occurs in nearly all forest types and competes well on most parent materials, aspects, and slopes. Adapted to a more moist, mild climate than the interior subspecies, it grows larger and faster than Rocky Mountain Douglas-fir. Associated trees include western hemlock, Sitka spruce, sugar pine, western white pine, ponderosa pine, grand fir, coast redwood, western redcedar, California incense-cedar, Lawson's cypress, tanoak, bigleaf maple and several others. Pure stands are also common, particularly north of the Umpqua River in Oregon. It is most dominant in areas with a more frequent fire regime that suppresses less fire-resistant conifers.
Use by animals
Douglas-fir seeds are an extremely important food source for small mammals such as moles, shrews, and chipmunks, which consume an estimated 65% of each annual seed crop. The Douglas squirrel harvests and hoards great quantities of Douglas-fir cones, and also consumes mature pollen cones, the inner bark, terminal shoots, and developing young needles.
Mature or "old-growth" Douglas-fir forest is the primary habitat of the red tree vole (Arborimus longicaudus) and the spotted owl (Strix occidentalis). Home range requirements for breeding pairs of spotted owls are at least of old growth. Red tree voles may also be found in immature forests if Douglas-fir is a significant component. The red vole nests almost exclusively in the foliage of the trees, typically above the ground, and its diet consists chiefly of Douglas-fir needles.
Douglas-fir needles are generally poor browse for ungulates, although in the winter when other food sources are lacking it can become important, and black-tailed deer browse new seedlings and saplings in spring and summer. The spring diet of the blue grouse features Douglas-fir needles prominently.
The leaves are also used by the woolly conifer aphid Adelges cooleyi; this 0.5 mm-long sap-sucking insect is conspicuous on the undersides of the leaves by the small white "fluff spots" of protective wax that it produces. It is often present in large numbers, and can cause the foliage to turn yellowish from the damage it causes. Exceptionally, trees may be partially defoliated by it, but the damage is rarely this severe. Among Lepidoptera, apart from some that feed on Pseudotsuga in general, the gelechiid moths Chionodes abella and C. periculella as well as the cone scale-eating tortrix moth Cydia illutana have been recorded specifically on P. menziesii.
The inner bark is the primary winter food for the North American porcupine.
Poriol is a flavanone, a type of flavonoid, produced by P. menziesii in reaction to infection by Poria weirii.
Value to other plants
A parasitic plant which uses P. menziesii is the Douglas-fir dwarf mistletoe (Arceuthobium douglasii). Epiphytes such as crustose lichens and mosses are common sights on Douglas-firs. As it is only moderately shade tolerant, undisturbed Douglas-fir stands in humid areas will eventually give way to later successional, more shade-tolerant associates such as the western redcedar and western hemlock—though this process may take a thousand years or more. It is more shade tolerant than some associated fire-dependent species, such as western larch and ponderosa pine, and often replaces these species further inland.
Diseases and insects
Fungal diseases such as laminated root rot and shoestring root-rot can cause significant damage, and in plantation settings dominated by Douglas-fir monocultures may cause extreme damage to vast swathes of trees. Interplanting with resistant or nonhost species such as western redcedar and beaked hazelnut can reduce this risk. Other threats to Douglas-fir include red ring rot and the Douglas-fir beetle.
Uses
Many different Native American groups used the bark, resin, and needles to make herbal treatments for various diseases. Native Hawaiians built waʻa kaulua (double-hulled canoes) from coast Douglas-fir logs that had drifted ashore. The wood has historically been favored as firewood, especially from the coastal variety. In addition, early settlers used Douglas-fir for all forms of building construction, including floors, beams, and fine carving.
The species is extensively used in forestry management as a plantation tree for softwood timber. Douglas-fir is one of the world's best timber-producing species and yields more timber than any other species in North America, making the forestlands of western Oregon, Washington, and British Columbia the most productive on the continent. In 2011, Douglas-fir represented 34.2% of US lumber exports, totalling 1.053 billion board-feet. Douglas-fir timber is used for timber frame construction and timber trusses using traditional joinery, as well as veneer and flooring, due to its strength, hardness and durability. As of 2024, the only wooden ships still in use by the U.S. Navy in conventional naval operations are Avenger-class minesweepers, made of Douglas-fir.
Douglas-fir sees wide use in heavy timber structures, as its wood is strong, available in a number of specifications (including kiln-dried and grade-stamped), and can be supplied in very long lengths of up to 60 feet. West coast mills are sophisticated in their processing of timbers, making lead times predictable and availability reliable. Paints adhere well to Douglas-fir. Stains perform well on Douglas-fir timbers, with the mild caution that the natural color of this species varies, so care must be taken to ensure uniformity of color. Pitch pockets that may ooze resin can be present in timbers that have not been kiln-dried. Because of the timber sizes available, stamped timber grading, and relatively short lead times, Douglas-fir sees wide use in both public and residential projects.
The species has ornamental value in large parks and gardens. It has been commonly used as a Christmas tree since the 1920s, and the trees are typically grown on plantations.
The buds have been used to flavor eau de vie, a clear, colorless fruit brandy. Douglas-fir needles can be used to make pine needle tea; they possess a tangy citrus flavor and may serve in some recipes as a wild substitute for rosemary.
| Biology and health sciences | Gymnosperms | null |
50702 | https://en.wikipedia.org/wiki/Environmental%20engineering | Environmental engineering | Environmental engineering is a professional engineering discipline related to environmental science. It encompasses broad scientific topics like chemistry, biology, ecology, geology, hydraulics, hydrology, microbiology, and mathematics to create solutions that will protect and also improve the health of living organisms and improve the quality of the environment. Environmental engineering is a sub-discipline of civil engineering and chemical engineering; within civil engineering, it is focused mainly on sanitary engineering.
Environmental engineering applies scientific and engineering principles to improve and maintain the environment to protect human health, protect nature's beneficial ecosystems, and improve environment-related aspects of the quality of human life.
Environmental engineers devise solutions for wastewater management, water and air pollution control, recycling, waste disposal, and public health. They design municipal water supply and industrial wastewater treatment systems, and design plans to prevent waterborne diseases and improve sanitation in urban, rural and recreational areas. They assess hazardous-waste management systems to evaluate the severity of such hazards, advise on treatment and containment, and develop regulations to prevent mishaps. They implement environmental engineering law, as in assessing the environmental impact of proposed construction projects.
Environmental engineers study the effect of technological advances on the environment, addressing local and worldwide environmental issues such as acid rain, global warming, ozone depletion, water pollution and air pollution from automobile exhausts and industrial sources.
Most jurisdictions impose licensing and registration requirements for qualified environmental engineers.
Etymology
The word environmental has its root in the late 19th-century French word environ (a verb meaning to encircle or to encompass). The word environment was used by Carlyle in 1827 to refer to the aggregate of conditions in which a person or thing lives. The meaning shifted again in 1956 when it was used in the ecological sense; ecology is the branch of science dealing with the relationship of living things to their environment.
The second part of the phrase environmental engineer originates from Latin roots and was used in 14th-century French as engignour, meaning a constructor of military engines such as trebuchets, harquebuses, longbows, cannons, catapults, ballistas, stirrups, armour, and other deadly or bellicose contraptions. The word engineer was not used to refer to public works until the 16th century, and it likely entered the popular vernacular as meaning a contriver of public works during John Smeaton's time.
History
Ancient civilizations
Environmental engineering is a name for work that has been done since early civilizations, as people learned to modify and control the environmental conditions to meet needs. As people recognized that their health was related to the quality of their environment, they built systems to improve it. The ancient Indus Valley Civilization (3300 B.C.E. to 1300 B.C.E.) had advanced control over their water resources. The public work structures found at various sites in the area include wells, public baths, water storage tanks, a drinking water system, and a city-wide sewage collection system. They also had an early canal irrigation system enabling large-scale agriculture.
From 4000 to 2000 B.C.E., many civilizations had drainage systems and some had sanitation facilities, including the Mesopotamian Empire, Mohenjo-Daro, Egypt, Crete, and the Orkney Islands in Scotland. The Greeks also had aqueducts and sewer systems that used rain and wastewater to irrigate and fertilize fields.
The first aqueduct in Rome was constructed in 312 B.C.E., and the Romans continued to construct aqueducts for irrigation and safe urban water supply during droughts. They also built an underground sewer system as early as the 7th century B.C.E. that fed into the Tiber River, draining marshes to create farmland as well as removing sewage from the city.
Modern era
Very little change was seen from the decline of the Roman Empire until the 19th century, when improvement efforts increasingly focused on public health. Modern environmental engineering began in London in the mid-19th century when Joseph Bazalgette designed the first major sewerage system following the Great Stink. The city's sewer system conveyed raw sewage to the River Thames, which also supplied the majority of the city's drinking water, leading to an outbreak of cholera. The introduction of drinking water treatment and sewage treatment in industrialized countries reduced waterborne diseases from leading causes of death to rarities.
The field emerged as a separate academic discipline during the middle of the 20th century in response to widespread public concern about water and air pollution and other environmental degradation. As society and technology grew more complex, they increasingly produced unintended effects on the natural environment. One example is the widespread application of the pesticide DDT to control agricultural pests in the years following World War II. The story of DDT as vividly told in Rachel Carson's Silent Spring (1962) is considered to be the birth of the modern environmental movement, which led to the modern field of "environmental engineering."
Education
Many universities offer environmental engineering programs through either the department of civil engineering or the department of chemical engineering, and some also include electronics-based projects for monitoring and managing environmental conditions. Environmental engineers in a civil engineering program often focus on hydrology, water resources management, bioremediation, and water and wastewater treatment plant design. Environmental engineers in a chemical engineering program tend to focus on environmental chemistry, advanced air and water treatment technologies, and separation processes. Some subdivisions of environmental engineering include natural resources engineering and agricultural engineering.
Courses for students fall into a few broad classes:
Mechanical engineering courses oriented towards designing machines and mechanical systems for environmental use such as water and wastewater treatment facilities, pumping stations, garbage segregation plants, and other mechanical facilities.
Environmental engineering or environmental systems courses oriented towards a civil engineering approach in which structures and the landscape are constructed to blend with or protect the environment.
Environmental chemistry, sustainable chemistry or environmental chemical engineering courses oriented towards understanding the effects of chemicals in the environment, including any mining processes, pollutants, and also biochemical processes.
Environmental technology courses oriented towards producing electronic or electrical graduates capable of developing devices and artifacts able to monitor, measure, model and control environmental impact, including monitoring and managing energy generation from renewable sources.
Curriculum
The following topics make up a typical curriculum in environmental engineering:
Mass and energy transfer
Environmental chemistry
Inorganic chemistry
Organic chemistry
Nuclear chemistry
Growth models
Resource consumption
Population growth
Economic growth
Risk assessment
Hazard identification
Dose-response assessment
Exposure assessment
Risk characterization
Comparative risk analysis
Water pollution
Water resources and pollutants
Oxygen demand
Pollutant transport
Water and wastewater treatment
Air pollution
Industry, transportation, commercial and residential emissions
Criteria and toxic air pollutants
Pollution modelling (e.g. Atmospheric dispersion modeling)
Pollution control
Air pollution and meteorology
Global change
Greenhouse effect and global temperature
Carbon, nitrogen, and oxygen cycle
IPCC emissions scenarios
Oceanic changes (ocean acidification, other effects of global warming on oceans) and changes in the stratosphere (see Physical impacts of climate change)
Solid waste management and resource recovery
Life cycle assessment
Source reduction
Collection and transfer operations
Recycling
Waste-to-energy conversion
Landfill
Applications
Water supply and treatment
Environmental engineers evaluate the water balance within a watershed and determine the available water supply, the water needed for various uses in that watershed, and the seasonal cycles of water movement through the watershed; they then develop systems to store, treat, and convey water for various uses.
Water is treated to achieve water quality objectives for the end uses. In the case of a potable water supply, water is treated to minimize the risk of infectious disease transmission, the risk of non-infectious illness, and to create a palatable water flavor. Water distribution systems are designed and built to provide adequate water pressure and flow rates to meet various end-user needs such as domestic use, fire suppression, and irrigation.
Wastewater treatment
There are numerous wastewater treatment technologies. A wastewater treatment train can consist of a primary clarifier system to remove solid and floating materials, a secondary treatment system consisting of an aeration basin followed by flocculation and sedimentation or an activated sludge system and a secondary clarifier, a tertiary biological nitrogen removal system, and a final disinfection process. The aeration basin/activated sludge system removes organic material by growing bacteria (activated sludge). The secondary clarifier removes the activated sludge from the water. The tertiary system, although not always included due to costs, is becoming more prevalent to remove nitrogen and phosphorus and to disinfect the water before discharge to a surface water stream or ocean outfall.
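A first-pass estimate of such a treatment train's overall performance can be made by chaining per-stage removal efficiencies. The sketch below mirrors the train described above, but every number in it is an assumed placeholder for illustration, not a design value from any standard.

```python
# Minimal sketch: estimate effluent BOD through a treatment train by
# multiplying per-stage removal efficiencies. All numbers are assumed
# placeholders for illustration, not engineering design values.
influent_bod = 250.0  # mg/L, a typical order of magnitude for raw sewage

stages = {
    "primary clarifier": 0.35,            # assumed fraction of BOD removed
    "activated sludge + clarifier": 0.85,
    "tertiary treatment": 0.50,
}

bod = influent_bod
for stage, removal in stages.items():
    bod *= 1.0 - removal                  # BOD remaining after this stage
    print(f"after {stage}: {bod:.1f} mg/L")
```

Chaining the fractions this way simply assumes each stage removes a fixed share of whatever load reaches it, which is why the stages multiply rather than add.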
Air pollution management
Scientists have developed air pollution dispersion models to evaluate the concentration of a pollutant at a receptor or the impact on overall air quality from vehicle exhausts and industrial flue gas stack emissions. To some extent, this field overlaps with efforts to decrease carbon dioxide and other greenhouse gas emissions from combustion processes.
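A common textbook form of such a dispersion model is the Gaussian plume for a continuous point source. The sketch below is illustrative only: the overall expression (including the ground-reflection term) is the standard textbook form, while all parameter values, and the use of constant dispersion coefficients, are assumptions rather than outputs of any particular regulatory model.

```python
import numpy as np

# Gaussian-plume sketch for a continuous point source with ground reflection.
# Q: emission rate (g/s), u: wind speed (m/s), H: effective stack height (m),
# sigma_y / sigma_z: lateral / vertical dispersion coefficients (m).
def plume_concentration(Q, u, y, z, H, sigma_y, sigma_z):
    """Steady-state concentration (g/m^3) at crosswind offset y, height z."""
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - H)**2 / (2 * sigma_z**2))
                + np.exp(-(z + H)**2 / (2 * sigma_z**2)))  # image source term
    return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Hypothetical example: 100 g/s source, 5 m/s wind, 50 m stack, ground-level
# receptor directly downwind; sigma values are assumed placeholders.
print(plume_concentration(Q=100.0, u=5.0, y=0.0, z=0.0, H=50.0,
                          sigma_y=30.0, sigma_z=20.0))
```

In practice the dispersion coefficients grow with downwind distance and atmospheric stability class; holding them fixed here just keeps the sketch self-contained.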
Environmental impact assessment and mitigation
Environmental engineers apply scientific and engineering principles to evaluate if there are likely to be any adverse impacts to water quality, air quality, habitat quality, flora and fauna, agricultural capacity, traffic, ecology, and noise. If impacts are expected, they then develop mitigation measures to limit or prevent such impacts. An example of a mitigation measure would be the creation of wetlands in a nearby location to mitigate the filling in of wetlands necessary for a road development if it is not possible to reroute the road.
In the United States, the practice of environmental assessment was formally initiated on January 1, 1970, the effective date of the National Environmental Policy Act (NEPA). Since that time, more than 100 developing and developed nations either have planned specific analogous laws or have adopted procedures used elsewhere. NEPA is applicable to all federal agencies in the United States.
Regulatory agencies
Environmental Protection Agency
The U.S. Environmental Protection Agency (EPA) is one of the many agencies that work with environmental engineers to solve critical issues. An essential component of EPA's mission is to protect and improve air, water, and overall environmental quality to avoid or mitigate the consequences of harmful effects.
| Technology | Disciplines | null |
50705 | https://en.wikipedia.org/wiki/Construction%20engineering | Construction engineering | Construction engineering, also known as construction operations, is a professional subdiscipline of civil engineering that deals with the designing, planning, construction, and operations management of infrastructure such as roadways, tunnels, bridges, airports, railroads, facilities, buildings, dams, utilities and other projects. Construction engineers learn some of the design aspects similar to civil engineers as well as project management aspects.
At the educational level, civil engineering students concentrate primarily on the design work which is more analytical, gearing them toward a career as a design professional. This essentially requires them to take a multitude of challenging engineering science and design courses as part of obtaining a 4-year accredited degree. Education for construction engineers is primarily focused on construction procedures, methods, costs, schedules and personnel management. Their primary concern is to deliver a project on time within budget and of the desired quality.
Regarding educational requirements, construction engineering students take basic design courses in civil engineering, as well as construction management courses.
Work activities
Being a sub-discipline of civil engineering, construction engineers apply their knowledge and business, technical and management skills obtained from their undergraduate degree to oversee projects that include bridges, buildings and housing projects. Construction engineers are heavily involved in the design and in the management and allocation of funds in these projects. They are charged with risk analysis, costing and planning. A career in design work does require a professional engineer (PE) license. Individuals who pursue this career path are strongly advised to sit for the Engineer in Training (EIT) exam, also referred to as the Fundamentals of Engineering (FE) exam, while in college, as it takes five years (four years in the USA) of post-graduate experience to obtain the PE license. Some states have recently relaxed the prerequisite of four years of work experience after graduation, so that an EIT is eligible to take the PE exam as little as six months after taking the FE exam.
Entry-level positions for construction engineers are typically project engineer or assistant project engineer. They are responsible for preparing purchasing requisitions, processing change orders, preparing monthly budget reports and handling meeting minutes. The construction management position does not necessarily require a PE license; however, possessing one does make the individual more marketable, as the PE license allows the individual to sign off on temporary structure designs.
Abilities
Construction engineers are problem solvers. They contribute to the creation of infrastructure that best meets the unique demands of its environment. They must be able to understand infrastructure life cycles. When compared and contrasted to design engineers, construction engineers bring to the table their own unique perspectives for solving technical challenges with clarity and imagination. While individuals considering this career path should certainly have a strong understanding of mathematics and science, many other skills are also highly desirable, including critical and analytical thinking, time management, people management and good communication skills.
Educational requirements
Individuals looking to obtain a construction engineering degree must first ensure that the program is accredited by the Accreditation Board for Engineering and Technology (ABET). ABET accreditation is assurance that a college or university program meets the quality standards established by the profession for which it prepares its students. In the US, there are currently only twenty-five such programs in the entire country, so careful college selection is advised.
A typical construction engineering curriculum is a mixture of engineering mechanics, engineering design, construction management and general science and mathematics. This usually leads to a Bachelor of Science degree. The B.S. degree along with some design or construction experience is sufficient for most entry-level positions. Graduate schools may be an option for those who want to go further in depth of the construction and engineering subjects taught at the undergraduate level. In most cases construction engineering graduates look to either civil engineering, engineering management or business administration as a possible graduate degree.
Job prospects
Job prospects for construction engineers generally show strong cyclical variation. For example, starting in 2008 and continuing until at least 2011, job prospects were poor due to the collapse of housing bubbles in many parts of the world. This sharply reduced demand for construction, pushed construction professionals towards infrastructure construction, and therefore increased the competition faced by established and new construction engineers. This increased competition and core reduction in demand coincided with a possible shift in the demand for construction engineers due to the automation of many engineering tasks, overall resulting in reduced prospects. In early 2010, the United States construction industry had a 27% unemployment rate, nearly three times the 9.7% national average unemployment rate. The construction unemployment rate (including tradesmen) was comparable to the 25% national unemployment rate of 1933, the depth of the Great Depression.
Remuneration
The average salary for a civil engineer in the UK depends on the sector and, more specifically, the level of experience of the individual. A 2010 survey of the remuneration and benefits of those occupying jobs in construction and the built environment industry showed that the average salary of a civil engineer in the UK is £29,582. In the United States, as of May 2013, the average was $85,640. The average salary varies depending on experience; for example, the average annual salary for a civil engineer with between 3 and 6 years' experience is £23,813, while for those with between 14 and 20 years' experience the average is £38,214.
| Technology | Disciplines | null |
50719 | https://en.wikipedia.org/wiki/Quantum%20harmonic%20oscillator | Quantum harmonic oscillator | The quantum harmonic oscillator is the quantum-mechanical analog of the classical harmonic oscillator. Because an arbitrary smooth potential can usually be approximated as a harmonic potential at the vicinity of a stable equilibrium point, it is one of the most important model systems in quantum mechanics. Furthermore, it is one of the few quantum-mechanical systems for which an exact, analytical solution is known.
One-dimensional harmonic oscillator
Hamiltonian and energy eigenstates
The Hamiltonian of the particle is:
$$\hat{H} = \frac{\hat{p}^2}{2m} + \frac{1}{2} k \hat{x}^2 = \frac{\hat{p}^2}{2m} + \frac{1}{2} m \omega^2 \hat{x}^2,$$
where $m$ is the particle's mass, $k$ is the force constant, $\omega = \sqrt{k/m}$ is the angular frequency of the oscillator, $\hat{x}$ is the position operator (given by $x$ in the coordinate basis), and $\hat{p}$ is the momentum operator (given by $\hat{p} = -i\hbar\,\partial/\partial x$ in the coordinate basis). The first term in the Hamiltonian represents the kinetic energy of the particle, and the second term represents its potential energy, as in Hooke's law.
The time-independent Schrödinger equation (TISE) is,
$$\hat{H} |\psi\rangle = E |\psi\rangle,$$
where $E$ denotes a real number (which needs to be determined) that will specify a time-independent energy level, or eigenvalue, and the solution $|\psi\rangle$ denotes that level's energy eigenstate.
Then solve the differential equation representing this eigenvalue problem in the coordinate basis, for the wave function $\langle x | \psi \rangle = \psi(x)$, using a spectral method. It turns out that there is a family of solutions. In this basis, they amount to Hermite functions,
$$\psi_n(x) = \frac{1}{\sqrt{2^n\, n!}} \left(\frac{m\omega}{\pi\hbar}\right)^{1/4} e^{-\frac{m\omega x^2}{2\hbar}} H_n\!\left(\sqrt{\frac{m\omega}{\hbar}}\, x\right), \qquad n = 0, 1, 2, \ldots$$
The functions $H_n$ are the physicists' Hermite polynomials,
$$H_n(z) = (-1)^n e^{z^2} \frac{d^n}{dz^n}\left(e^{-z^2}\right).$$
The corresponding energy levels are
$$E_n = \hbar\omega\left(n + \frac{1}{2}\right).$$
The expectation values of position and momentum combined with the variance of each variable can be derived from the wavefunction to understand the behavior of the energy eigenkets. They are shown to be $\langle \hat{x} \rangle = 0$ and $\langle \hat{p} \rangle = 0$ owing to the symmetry of the problem, whereas:
$$\langle \hat{x}^2 \rangle = \frac{\hbar}{2m\omega}(2n+1), \qquad \langle \hat{p}^2 \rangle = \frac{\hbar m \omega}{2}(2n+1).$$
The variance in both position and momentum is observed to increase for higher energy levels. The lowest energy level has $\sigma_x \sigma_p = \frac{\hbar}{2}$, which is its minimum value due to the uncertainty relation and also corresponds to a Gaussian wavefunction.
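As a quick numerical illustration (a sketch in the natural units $\hbar = m = \omega = 1$; the grid bounds and the states checked are arbitrary choices), the eigenfunctions and moments above can be evaluated with SciPy's physicists' Hermite polynomials:

```python
import numpy as np
from scipy.special import eval_hermite, factorial

# Natural units hbar = m = omega = 1 (a sketch; grid choices are arbitrary).
x = np.linspace(-10.0, 10.0, 4001)

def psi(n, x):
    """Normalized n-th eigenfunction psi_n (a Hermite function)."""
    norm = 1.0 / np.sqrt(2.0**n * factorial(n)) * np.pi**-0.25
    return norm * np.exp(-x**2 / 2.0) * eval_hermite(n, x)

for n in range(4):
    pn = psi(n, x)
    print(n,
          round(np.trapz(pn**2, x), 6),          # norm, approximately 1
          round(np.trapz(x**2 * pn**2, x), 6))   # <x^2> = n + 1/2
```

For each $n$ the printed norm is approximately 1 and $\langle \hat{x}^2 \rangle \approx n + \tfrac{1}{2}$, matching the variance formula above in these units.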
This energy spectrum is noteworthy for three reasons. First, the energies are quantized, meaning that only discrete energy values (integer-plus-half multiples of $\hbar\omega$) are possible; this is a general feature of quantum-mechanical systems when a particle is confined. Second, these discrete energy levels are equally spaced, unlike in the Bohr model of the atom, or the particle in a box. Third, the lowest achievable energy (the energy of the $n = 0$ state, called the ground state) is not equal to the minimum of the potential well, but $\hbar\omega/2$ above it; this is called zero-point energy. Because of the zero-point energy, the position and momentum of the oscillator in the ground state are not fixed (as they would be in a classical oscillator), but have a small range of variance, in accordance with the Heisenberg uncertainty principle.
The ground state probability density is concentrated at the origin, which means the particle spends most of its time at the bottom of the potential well, as one would expect for a state with little energy. As the energy increases, the probability density peaks at the classical "turning points", where the state's energy coincides with the potential energy. (See the discussion below of the highly excited states.) This is consistent with the classical harmonic oscillator, in which the particle spends more of its time (and is therefore more likely to be found) near the turning points, where it is moving the slowest. The correspondence principle is thus satisfied. Moreover, special nondispersive wave packets, with minimum uncertainty, called coherent states oscillate very much like classical objects, as illustrated in the figure; they are not eigenstates of the Hamiltonian.
Ladder operator method
The "ladder operator" method, developed by Paul Dirac, allows extraction of the energy eigenvalues without directly solving the differential equation. It is generalizable to more complicated problems, notably in quantum field theory. Following this approach, we define the operators and its adjoint ,
Note these operators classically are exactly the generators of normalized rotation in the phase space of $x$ and $p$, i.e., they describe the forwards and backwards evolution in time of a classical harmonic oscillator.
These operators lead to the following representation of $\hat{x}$ and $\hat{p}$,
$$\hat{x} = \sqrt{\frac{\hbar}{2m\omega}} \left( \hat{a}^\dagger + \hat{a} \right), \qquad \hat{p} = i \sqrt{\frac{\hbar m \omega}{2}} \left( \hat{a}^\dagger - \hat{a} \right).$$
The operator $\hat{a}$ is not Hermitian, since itself and its adjoint $\hat{a}^\dagger$ are not equal. The energy eigenstates $|n\rangle$, when operated on by these ladder operators, give
$$\hat{a}^\dagger |n\rangle = \sqrt{n+1}\, |n+1\rangle, \qquad \hat{a} |n\rangle = \sqrt{n}\, |n-1\rangle.$$
From the relations above, we can also define a number operator $\hat{N} = \hat{a}^\dagger \hat{a}$, which has the following property:
$$\hat{N} |n\rangle = n |n\rangle.$$
The following commutators can be easily obtained by substituting the canonical commutation relation $[\hat{x}, \hat{p}] = i\hbar$,
$$[\hat{a}, \hat{a}^\dagger] = 1, \qquad [\hat{N}, \hat{a}^\dagger] = \hat{a}^\dagger, \qquad [\hat{N}, \hat{a}] = -\hat{a},$$
and the Hamilton operator can be expressed as
$$\hat{H} = \hbar\omega \left( \hat{N} + \frac{1}{2} \right),$$
so the eigenstates of $\hat{N}$ are also the eigenstates of energy.
To see that, we can apply $\hat{H}$ to a number state $|n\rangle$:
$$\hat{H} |n\rangle = \hbar\omega \left( \hat{N} + \frac{1}{2} \right) |n\rangle.$$
Using the property of the number operator $\hat{N}$:
$$\hat{N} |n\rangle = n |n\rangle,$$
we get:
$$\hat{H} |n\rangle = \hbar\omega \left( n + \frac{1}{2} \right) |n\rangle.$$
Thus, since $|n\rangle$ solves the TISE for the Hamiltonian operator $\hat{H}$, it is also one of its eigenstates with the corresponding eigenvalue:
$$E_n = \hbar\omega \left( n + \frac{1}{2} \right).$$
QED.
The commutation property yields
$$\hat{N} \hat{a}^\dagger |n\rangle = (n+1)\, \hat{a}^\dagger |n\rangle,$$
and similarly,
$$\hat{N} \hat{a} |n\rangle = (n-1)\, \hat{a} |n\rangle.$$
This means that $\hat{a}$ acts on $|n\rangle$ to produce, up to a multiplicative constant, $|n-1\rangle$, and $\hat{a}^\dagger$ acts on $|n\rangle$ to produce $|n+1\rangle$. For this reason, $\hat{a}$ is called an annihilation operator ("lowering operator"), and $\hat{a}^\dagger$ a creation operator ("raising operator"). The two operators together are called ladder operators.
Given any energy eigenstate, we can act on it with the lowering operator, $\hat{a}$, to produce another eigenstate with $\hbar\omega$ less energy. By repeated application of the lowering operator, it seems that we can produce energy eigenstates down to $E = -\infty$. However, since
$$n = \langle n | \hat{N} | n \rangle = \langle n | \hat{a}^\dagger \hat{a} | n \rangle = \left\| \hat{a} |n\rangle \right\|^2 \geq 0,$$
the smallest eigenvalue of the number operator is 0, and
$$\hat{a} |0\rangle = 0.$$
In this case, subsequent applications of the lowering operator will just produce the zero ket, instead of additional energy eigenstates. Furthermore, we have shown above that
$$\hat{H} |0\rangle = \frac{\hbar\omega}{2} |0\rangle.$$
Finally, by acting on |0⟩ with the raising operator and multiplying by suitable normalization factors, we can produce an infinite set of energy eigenstates
$$\left\{ |0\rangle, |1\rangle, |2\rangle, \ldots, |n\rangle, \ldots \right\},$$
such that
$$\hat{H} |n\rangle = \hbar\omega \left( n + \frac{1}{2} \right) |n\rangle,$$
which matches the energy spectrum given in the preceding section.
Arbitrary eigenstates can be expressed in terms of |0⟩,
$$|n\rangle = \frac{\left(\hat{a}^\dagger\right)^n}{\sqrt{n!}} |0\rangle.$$
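The ladder-operator algebra can also be checked numerically. The sketch below (illustrative only; the truncation dimension is an arbitrary choice, and units are $\hbar = \omega = 1$) builds the matrix of $\hat{a}$ from $\hat{a}|n\rangle = \sqrt{n}\,|n-1\rangle$ and diagonalizes $\hat{H} = \hat{a}^\dagger \hat{a} + \tfrac{1}{2}$:

```python
import numpy as np

# Truncated matrix representation of the ladder operators (hbar = omega = 1).
# Since H is diagonal in the number basis, the truncated spectrum is exact.
N = 12
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # a|n> = sqrt(n)|n-1>
adag = a.conj().T                            # creation operator a^dagger
H = adag @ a + 0.5 * np.eye(N)               # H = a^dagger a + 1/2

print(np.linalg.eigvalsh(H)[:5])             # [0.5, 1.5, 2.5, 3.5, 4.5]
```

The printed eigenvalues reproduce $E_n = n + \tfrac{1}{2}$ in these units, as the algebra predicts.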
Analytical questions
The preceding analysis is algebraic, using only the commutation relations between the raising and lowering operators. Once the algebraic analysis is complete, one should turn to analytical questions. First, one should find the ground state, that is, the solution of the equation $\hat{a} \psi_0 = 0$. In the position representation, this is the first-order differential equation
$$\left( x + \frac{\hbar}{m\omega} \frac{d}{dx} \right) \psi_0 = 0,$$
whose solution is easily found to be the Gaussian
$$\psi_0(x) = C e^{-\frac{m\omega x^2}{2\hbar}}.$$
Conceptually, it is important that there is only one solution of this equation; if there were, say, two linearly independent ground states, we would get two independent chains of eigenvectors for the harmonic oscillator. Once the ground state is computed, one can show inductively that the excited states are Hermite polynomials times the Gaussian ground state, using the explicit form of the raising operator in the position representation. One can also prove that, as expected from the uniqueness of the ground state, the Hermite functions energy eigenstates constructed by the ladder method form a complete orthonormal set of functions.
Explicitly connecting with the previous section, the ground state |0⟩ in the position representation is determined by $\hat{a} |0\rangle = 0$,
$$\left\langle x \mid \hat{a} \mid 0 \right\rangle = 0 \quad \Rightarrow \quad \left( x + \frac{\hbar}{m\omega} \frac{d}{dx} \right) \langle x \mid 0 \rangle = 0,$$
hence
$$\langle x \mid 0 \rangle = \left( \frac{m\omega}{\pi\hbar} \right)^{1/4} e^{-\frac{m\omega x^2}{2\hbar}},$$
so that $\langle x \mid 1 \rangle = \langle x \mid \hat{a}^\dagger \mid 0 \rangle$, and so on.
Natural length and energy scales
The quantum harmonic oscillator possesses natural scales for length and energy, which can be used to simplify the problem. These can be found by nondimensionalization.
The result is that, if energy is measured in units of $\hbar\omega$ and distance in units of $\sqrt{\hbar/(m\omega)}$, then the Hamiltonian simplifies to
$$H = -\frac{1}{2} \frac{d^2}{dx^2} + \frac{1}{2} x^2,$$
while the energy eigenfunctions and eigenvalues simplify to Hermite functions and integers offset by a half,
$$\psi_n(x) = \langle x \mid n \rangle = \frac{1}{\sqrt{2^n\, n!}}\, \pi^{-1/4} e^{-x^2/2}\, H_n(x), \qquad E_n = n + \frac{1}{2},$$
where $H_n(x)$ are the Hermite polynomials.
To avoid confusion, these "natural units" will mostly not be adopted in this article. However, they frequently come in handy when performing calculations, by bypassing clutter.
For example, the fundamental solution (propagator) of $H - i\hbar\,\partial_t$, the time-dependent Schrödinger operator for this oscillator, simply boils down to the Mehler kernel,
$$\langle x \mid \exp(-itH) \mid y \rangle \equiv K(x, y; t) = \frac{1}{\sqrt{2\pi i \sin t}} \exp\left( \frac{i}{2 \sin t} \left( (x^2 + y^2) \cos t - 2xy \right) \right),$$
where $K(x, y; 0) = \delta(x - y)$. The most general solution for a given initial configuration $\psi(x, 0)$ then is simply
$$\psi(x, t) = \int dy\, K(x, y; t)\, \psi(y, 0).$$
Coherent states
The coherent states (also known as Glauber states) of the harmonic oscillator are special nondispersive wave packets, with minimum uncertainty $\sigma_x \sigma_p = \frac{\hbar}{2}$, whose observables' expectation values evolve like a classical system. They are eigenvectors of the annihilation operator, not the Hamiltonian, and form an overcomplete basis which consequentially lacks orthogonality.
The coherent states are indexed by $\alpha \in \mathbb{C}$ and expressed in the $|n\rangle$ basis as
$$|\alpha\rangle = \sum_{n=0}^{\infty} e^{-\frac{|\alpha|^2}{2}} \frac{\alpha^n}{\sqrt{n!}} |n\rangle.$$
Since coherent states are not energy eigenstates, their time evolution is not a simple shift in wavefunction phase. The time-evolved states are, however, also coherent states but with phase-shifting parameter $\alpha(t) = \alpha(0) e^{-i\omega t}$ instead: $|\alpha(t)\rangle = e^{-\frac{i\omega t}{2}} |\alpha(0) e^{-i\omega t}\rangle$.
Because $\hat{a} |\alpha\rangle = \alpha |\alpha\rangle$ and via the Kermack-McCrae identity, the last form is equivalent to a unitary displacement operator acting on the ground state: $|\alpha\rangle = e^{\alpha \hat{a}^\dagger - \alpha^* \hat{a}} |0\rangle = D(\alpha) |0\rangle$. Calculating the expectation values:
$$\langle \hat{x}(t) \rangle = \sqrt{\frac{2\hbar}{m\omega}}\, |\alpha| \cos(\omega t - \varphi), \qquad \langle \hat{p}(t) \rangle = -\sqrt{2 m \hbar \omega}\, |\alpha| \sin(\omega t - \varphi),$$
where $\varphi$ is the phase contributed by complex $\alpha$. These equations confirm the oscillating behavior of the particle.
The uncertainties calculated using the numeric method are:
$$\sigma_x(t) = \sqrt{\frac{\hbar}{2m\omega}}, \qquad \sigma_p(t) = \sqrt{\frac{\hbar m \omega}{2}},$$
which gives $\sigma_x \sigma_p = \frac{\hbar}{2}$. Since the only wavefunction that can have the lowest position-momentum uncertainty, $\sigma_x \sigma_p = \frac{\hbar}{2}$, is a Gaussian wavefunction, and since the coherent state wavefunction has minimum position-momentum uncertainty, we note that the general Gaussian wavefunction in quantum mechanics has the form (up to a global phase):
$$\psi_\alpha(x) = \left( \frac{m\omega}{\pi\hbar} \right)^{1/4} \exp\left( \frac{i}{\hbar} \langle \hat{p} \rangle\, x - \frac{m\omega}{2\hbar} \left( x - \langle \hat{x} \rangle \right)^2 \right).$$
Substituting the expectation values as a function of time gives the required time-varying wavefunction.
The probability of each energy eigenstate $|n\rangle$ can be calculated to find the energy distribution of the wavefunction:
$$P(n) = |\langle n \mid \alpha \rangle|^2 = e^{-|\alpha|^2} \frac{|\alpha|^{2n}}{n!},$$
which corresponds to a Poisson distribution with mean $|\alpha|^2$.
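A short numerical check of this Poisson form (a sketch; the value of $\alpha$ is an arbitrary choice):

```python
import numpy as np
from scipy.stats import poisson
from scipy.special import factorial

alpha = 1.5                                   # hypothetical coherent-state parameter
n = np.arange(12)
# |<n|alpha>|^2 computed directly from the expansion coefficients
p_n = np.exp(-abs(alpha)**2) * abs(alpha)**(2 * n) / factorial(n)
print(np.allclose(p_n, poisson.pmf(n, mu=abs(alpha)**2)))  # True
```

The agreement with `poisson.pmf` confirms that the photon-number statistics of a coherent state are Poissonian with mean $|\alpha|^2$.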
Highly excited states
When $n$ is large, the eigenstates are localized into the classically allowed region, that is, the region in which a classical particle with energy $E_n$ can move. The eigenstates are peaked near the turning points: the points at the ends of the classically allowed region where the classical particle changes direction. This phenomenon can be verified through asymptotics of the Hermite polynomials, and also through the WKB approximation.
The frequency of oscillation at $x$ is proportional to the momentum $p(x)$ of a classical particle of energy $E_n$ and position $x$. Furthermore, the square of the amplitude (determining the probability density) is inversely proportional to $p(x)$, reflecting the length of time the classical particle spends near $x$. The system behavior in a small neighborhood of the turning point does not have a simple classical explanation, but can be modeled using an Airy function. Using properties of the Airy function, one may estimate the probability of finding the particle outside the classically allowed region, to be approximately
This is also given, asymptotically, by the integral
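This probability can also be obtained by direct quadrature of $|\psi_n|^2$ beyond the turning points $x = \pm\sqrt{2n+1}$ (natural units). A minimal sketch, with arbitrary grid choices:

```python
import numpy as np
from scipy.special import eval_hermite, factorial

# Probability of finding the particle outside the classically allowed
# region |x| <= sqrt(2n+1), in natural units, by direct quadrature.
def p_outside(n):
    x = np.linspace(0.0, 20.0, 200001)
    norm = 1.0 / np.sqrt(2.0**n * factorial(n)) * np.pi**-0.25
    psi2 = (norm * np.exp(-x**2 / 2.0) * eval_hermite(n, x))**2
    turning = np.sqrt(2.0 * n + 1.0)
    return 2.0 * np.trapz(np.where(x > turning, psi2, 0.0), x)

print([round(p_outside(n), 4) for n in (0, 5, 20)])  # ~0.1573 for n = 0
```

The $n = 0$ value matches the closed-form result $\operatorname{erfc}(1) \approx 0.1573$, and the probability decreases slowly as $n$ grows, consistent with the classical limit.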
Phase space solutions
In the phase space formulation of quantum mechanics, eigenstates of the quantum harmonic oscillator in several different representations of the quasiprobability distribution can be written in closed form. The most widely used of these is for the Wigner quasiprobability distribution.
The Wigner quasiprobability distribution for the energy eigenstate $|n\rangle$ is, in the natural units described above,
$$W_n(x, p) = \frac{(-1)^n}{\pi}\, L_n\!\left( 2(x^2 + p^2) \right) e^{-(x^2 + p^2)},$$
where $L_n$ are the Laguerre polynomials. This example illustrates how the Hermite and Laguerre polynomials are linked through the Wigner map.
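A minimal sketch of this Wigner function (natural units with $\hbar = 1$; normalization conventions vary between references), including a check that integrating out the momentum reproduces the ground-state position density:

```python
import numpy as np
from scipy.special import eval_laguerre

# Wigner function of the n-th eigenstate in the natural units of the text.
def wigner(n, x, p):
    r2 = 2.0 * (x**2 + p**2)
    return (-1)**n / np.pi * np.exp(-r2 / 2.0) * eval_laguerre(n, r2)

# Sanity check: the momentum marginal of W_0 equals |psi_0(x)|^2.
x = 0.7                                   # arbitrary test point
p = np.linspace(-10, 10, 20001)
marginal = np.trapz(wigner(0, x, p), p)
print(marginal, np.exp(-x**2) / np.sqrt(np.pi))   # should agree
```

Recovering $|\psi_0(x)|^2$ as a marginal is exactly the defining property of the Wigner map for a pure state.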
Meanwhile, the Husimi Q function of the harmonic oscillator eigenstates has an even simpler form. If we work in the natural units described above, we have
$$Q_n(x, p) = \frac{(x^2 + p^2)^n}{2^n\, n!}\, \frac{e^{-(x^2 + p^2)/2}}{\pi}.$$
This claim can be verified using the Segal–Bargmann transform. Specifically, since the raising operator in the Segal–Bargmann representation is simply multiplication by $z$ and the ground state is the constant function 1, the normalized harmonic oscillator states in this representation are simply $\frac{z^n}{\sqrt{n!}}$. At this point, we can appeal to the formula for the Husimi Q function in terms of the Segal–Bargmann transform.
N-dimensional isotropic harmonic oscillator
The one-dimensional harmonic oscillator is readily generalizable to $N$ dimensions, where $N = 1, 2, 3, \ldots$. In one dimension, the position of the particle was specified by a single coordinate, $x$. In $N$ dimensions, this is replaced by $N$ position coordinates, which we label $x_1, \ldots, x_N$. Corresponding to each position coordinate is a momentum; we label these $p_1, \ldots, p_N$. The canonical commutation relations between these operators are
$$[x_i, p_j] = i\hbar\,\delta_{i,j}, \qquad [x_i, x_j] = 0, \qquad [p_i, p_j] = 0.$$
The Hamiltonian for this system is
$$H = \sum_{i=1}^{N} \left( \frac{p_i^2}{2m} + \frac{1}{2} m \omega^2 x_i^2 \right).$$
As the form of this Hamiltonian makes clear, the $N$-dimensional harmonic oscillator is exactly analogous to $N$ independent one-dimensional harmonic oscillators with the same mass and spring constant. In this case, the quantities $x_1, \ldots, x_N$ would refer to the positions of each of the $N$ particles. This is a convenient property of the $r^2$ potential, which allows the potential energy to be separated into terms depending on one coordinate each.
This observation makes the solution straightforward. For a particular set of quantum numbers $\{n_i\}$, the energy eigenfunctions for the $N$-dimensional oscillator are expressed in terms of the 1-dimensional eigenfunctions as:
$$\langle \mathbf{x} \mid \psi_{n_1 \ldots n_N} \rangle = \prod_{i=1}^{N} \langle x_i \mid \psi_{n_i} \rangle.$$
In the ladder operator method, we define $N$ sets of ladder operators,
$$a_i = \sqrt{\frac{m\omega}{2\hbar}} \left( x_i + \frac{i}{m\omega} p_i \right), \qquad a_i^\dagger = \sqrt{\frac{m\omega}{2\hbar}} \left( x_i - \frac{i}{m\omega} p_i \right).$$
By an analogous procedure to the one-dimensional case, we can then show that each of the $a_i$ and $a_i^\dagger$ operators lower and raise the energy by $\hbar\omega$ respectively. The Hamiltonian is
$$H = \hbar\omega \sum_{i=1}^{N} \left( a_i^\dagger a_i + \frac{1}{2} \right).$$
This Hamiltonian is invariant under the dynamic symmetry group $U(N)$ (the unitary group in $N$ dimensions), defined by
$$U a_i^\dagger U^\dagger = \sum_{j=1}^{N} a_j^\dagger U_{ji} \quad \text{for all } U \in U(N),$$
where $U_{ji}$ is an element in the defining matrix representation of $U(N)$.
The energy levels of the system are
$$E = \hbar\omega \left( n_1 + n_2 + \cdots + n_N + \frac{N}{2} \right) = \hbar\omega \left( n + \frac{N}{2} \right), \qquad n = n_1 + \cdots + n_N.$$
As in the one-dimensional case, the energy is quantized. The ground state energy is $N$ times the one-dimensional ground energy $\frac{\hbar\omega}{2}$, as we would expect using the analogy to $N$ independent one-dimensional oscillators. There is one further difference: in the one-dimensional case, each energy level corresponds to a unique quantum state. In $N$ dimensions, except for the ground state, the energy levels are degenerate, meaning there are several states with the same energy.
The degeneracy can be calculated relatively easily. As an example, consider the 3-dimensional case: define $n = n_1 + n_2 + n_3$. All states with the same $n$ will have the same energy. For a given $n$, we choose a particular $n_1$. Then $n_2 + n_3 = n - n_1$. There are $n - n_1 + 1$ possible pairs $(n_2, n_3)$: $n_2$ can take on the values $0$ to $n - n_1$, and for each $n_2$ the value of $n_3$ is fixed. The degree of degeneracy therefore is:
$$g_n = \sum_{n_1 = 0}^{n} (n - n_1 + 1) = \frac{(n+1)(n+2)}{2}.$$
Formula for general $N$ and $n$ [$g_n$ being the dimension of the symmetric irreducible $n$-th power representation of the unitary group $U(N)$]:
$$g_n = \binom{N + n - 1}{n}.$$
The special case $N = 3$, given above, follows directly from this general equation. This is, however, only true for distinguishable particles, or one particle in $N$ dimensions (as dimensions are distinguishable). For the case of $N$ bosons in a one-dimensional harmonic trap, the degeneracy scales as the number of ways to partition an integer $n$ using integers less than or equal to $N$.
This arises due to the constraint of putting $n$ quanta into a state ket for $N$ bosons, where $\sum_{k=0}^{\infty} k\, n_k = n$ and $\sum_{k=0}^{\infty} n_k = N$, which are the same constraints as in integer partition.
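For the distinguishable case, the degeneracy formula above is just the number of ways to distribute $n$ quanta over $N$ coordinates, which a brute-force count confirms. A minimal sketch:

```python
from math import comb
from itertools import product

# Degeneracy of level n for N distinguishable oscillators: the number of
# ways to write n as an ordered sum of N non-negative integers.
def degeneracy(n, N):
    return comb(n + N - 1, N - 1)

# Brute-force check for the 3-D case discussed above.
for n in range(5):
    brute = sum(1 for t in product(range(n + 1), repeat=3) if sum(t) == n)
    print(n, degeneracy(n, 3), brute)  # both equal (n+1)(n+2)/2
```

Note that $\binom{n+N-1}{N-1} = \binom{N+n-1}{n}$, so this is the same binomial coefficient as in the dimension formula above.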
Example: 3D isotropic harmonic oscillator
The Schrödinger equation for a particle in a spherically-symmetric three-dimensional harmonic oscillator can be solved explicitly by separation of variables. This procedure is analogous to the separation performed in the hydrogen-like atom problem, but with a different spherically symmetric potential
$$V(r) = \frac{1}{2} \mu \omega^2 r^2,$$
where $\mu$ is the mass of the particle. Because $m$ will be used below for the magnetic quantum number, mass is indicated by $\mu$, instead of $m$, as earlier in this article.
The solution to the equation is:
$$\psi_{k\ell m}(r, \theta, \phi) = N_{k\ell}\, r^{\ell}\, e^{-\nu r^2}\, L_k^{(\ell + \frac{1}{2})}(2\nu r^2)\, Y_{\ell m}(\theta, \phi),$$
where
$N_{k\ell}$ is a normalization constant; $\nu \equiv \frac{\mu \omega}{2 \hbar}$;
$L_k^{(\ell + \frac{1}{2})}$ are generalized Laguerre polynomials; the order $k$ of the polynomial is a non-negative integer;
$Y_{\ell m}(\theta, \phi)$ is a spherical harmonic function;
$\hbar$ is the reduced Planck constant: $\hbar \equiv \frac{h}{2\pi}$.
The energy eigenvalue is
$$E = \hbar\omega \left( 2k + \ell + \frac{3}{2} \right).$$
The energy is usually described by the single quantum number
$$n \equiv 2k + \ell.$$
Because $k$ is a non-negative integer, for every even $n$ we have $\ell = 0, 2, \ldots, n-2, n$ and for every odd $n$ we have $\ell = 1, 3, \ldots, n-2, n$. The magnetic quantum number $m$ is an integer satisfying $-\ell \leq m \leq \ell$, so for every $n$ and ℓ there are 2ℓ + 1 different quantum states, labeled by $m$. Thus, the degeneracy at level $n$ is
$$g_n = \sum_{\ell = \ldots, n-2, n} (2\ell + 1) = \frac{(n+1)(n+2)}{2},$$
where the sum starts from 0 or 1, according to whether $n$ is even or odd.
This result is in accordance with the dimension formula above, and amounts to the dimensionality of a symmetric representation of $SU(3)$, the relevant degeneracy group.
Applications
Harmonic oscillators lattice: phonons
The notation of a harmonic oscillator can be extended to a one-dimensional lattice of many particles. Consider a one-dimensional quantum mechanical harmonic chain of N identical atoms. This is the simplest quantum mechanical model of a lattice, and we will see how phonons arise from it. The formalism that we will develop for this model is readily generalizable to two and three dimensions.
As in the previous section, we denote the positions of the masses by $x_1, x_2, \ldots$, as measured from their equilibrium positions (i.e. $x_i = 0$ if the particle $i$ is at its equilibrium position). In two or more dimensions, the $x_i$ are vector quantities. The Hamiltonian for this system is
$$\mathbf{H} = \sum_{i=1}^{N} \frac{p_i^2}{2m} + \frac{1}{2} m \omega^2 \sum_{\{ij\}\,(\mathrm{nn})} (x_i - x_j)^2,$$
where $m$ is the (assumed uniform) mass of each atom, and $x_i$ and $p_i$ are the position and momentum operators for the $i$th atom, and the sum is made over the nearest neighbors (nn). However, it is customary to rewrite the Hamiltonian in terms of the normal modes of the wavevector rather than in terms of the particle coordinates so that one can work in the more convenient Fourier space.
We introduce, then, a set of $N$ "normal coordinates" $Q_k$, defined as the discrete Fourier transforms of the $x_l$, and $N$ "conjugate momenta" $\Pi_k$ defined as the Fourier transforms of the $p_l$,
$$Q_k = \frac{1}{\sqrt{N}} \sum_l e^{ikal} x_l, \qquad \Pi_k = \frac{1}{\sqrt{N}} \sum_l e^{-ikal} p_l.$$
The quantity $k_n$ will turn out to be the wave number of the phonon, i.e. $2\pi$ divided by the wavelength. It takes on quantized values, because the number of atoms is finite.
This preserves the desired commutation relations in either real space or wave vector space
$$[x_l, p_m] = i\hbar\,\delta_{l,m}, \qquad [Q_k, \Pi_{k'}] = i\hbar\,\delta_{k,k'}, \qquad [Q_k, Q_{k'}] = [\Pi_k, \Pi_{k'}] = 0.$$
From the general result
$$\sum_l x_l x_{l+m} = \frac{1}{N} \sum_{k k'} Q_k Q_{k'} \sum_l e^{ial(k + k')} e^{iamk'} = \sum_k Q_k Q_{-k} e^{iamk},$$
it is easy to show, through elementary trigonometry, that the potential energy term is
$$\frac{1}{2} m \omega^2 \sum_j (x_j - x_{j+1})^2 = \frac{1}{2} m \sum_k \omega_k^2\, Q_k Q_{-k},$$
where
$$\omega_k = \sqrt{2 \omega^2 \left( 1 - \cos(ka) \right)}.$$
The Hamiltonian may be written in wave vector space as
$$\mathbf{H} = \frac{1}{2m} \sum_k \left( \Pi_k \Pi_{-k} + m^2 \omega_k^2\, Q_k Q_{-k} \right).$$
Note that the couplings between the position variables have been transformed away; if the $Q$s and $\Pi$s were hermitian (which they are not), the transformed Hamiltonian would describe $N$ uncoupled harmonic oscillators.
The form of the quantization depends on the choice of boundary conditions; for simplicity, we impose periodic boundary conditions, defining the $(N+1)$-th atom as equivalent to the first atom. Physically, this corresponds to joining the chain at its ends. The resulting quantization is
$$k = k_n = \frac{2 \pi n}{N a} \quad \text{for } n = 0, \pm 1, \pm 2, \ldots, \pm \frac{N}{2}.$$
The upper bound to $n$ comes from the minimum wavelength, which is twice the lattice spacing $a$, as discussed above.
The harmonic oscillator eigenvalues or energy levels for the mode $\omega_k$ are
$$E_n = \left( \frac{1}{2} + n \right) \hbar \omega_k, \qquad n = 0, 1, 2, 3, \ldots$$
If we ignore the zero-point energy then the levels are evenly spaced at
$$\hbar \omega_k,\; 2 \hbar \omega_k,\; 3 \hbar \omega_k,\; \ldots$$
So an exact amount of energy $\hbar \omega_k$ must be supplied to the harmonic oscillator lattice to push it to the next energy level. In analogy to the photon case when the electromagnetic field is quantised, the quantum of vibrational energy is called a phonon.
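A minimal sketch of the quantized dispersion relation $\omega_k$ above (the parameter values are arbitrary placeholders, not values from the text):

```python
import numpy as np

# Phonon dispersion for the 1-D chain with periodic boundary conditions.
omega, a, N = 1.0, 1.0, 8                    # assumed placeholder parameters
n = np.arange(-N // 2, N // 2 + 1)           # allowed mode indices
k = 2.0 * np.pi * n / (N * a)                # quantized wavenumbers k_n
omega_k = np.sqrt(2.0 * omega**2 * (1.0 - np.cos(k * a)))
print(np.round(omega_k, 3))                  # equals 2*omega*|sin(k a / 2)|
```

The identity $\omega_k = 2\omega\,|\sin(ka/2)|$ makes explicit that $\omega_k$ vanishes as $k \to 0$ (the long-wavelength acoustic limit) and is maximal at the zone boundary.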
All quantum systems show wave-like and particle-like properties. The particle-like properties of the phonon are best understood using the methods of second quantization and operator techniques described elsewhere.
In the continuum limit, $a \to 0$, $N \to \infty$, while $Na$ is held fixed. The canonical coordinates $Q_k$ devolve to the decoupled momentum modes of a scalar field, $\phi_k$, whilst the location index $l$ (not the displacement dynamical variable) becomes the parameter $x$ argument of the scalar field, $\phi(x, t)$.
Molecular vibrations
The vibrations of a diatomic molecule are an example of a two-body version of the quantum harmonic oscillator. In this case, the angular frequency is given by
$$\omega = \sqrt{\frac{k}{\mu}},$$
where $\mu = \frac{m_1 m_2}{m_1 + m_2}$ is the reduced mass and $m_1$ and $m_2$ are the masses of the two atoms.
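A minimal numerical sketch of this two-body formula, using approximate literature values for carbon monoxide (the force constant here is an assumption for illustration, not a value from the text):

```python
import numpy as np

# Harmonic vibrational frequency of a diatomic molecule (CO as an example).
k = 1857.0                     # force constant, N/m (approximate, CO)
m1, m2 = 12.000, 15.995        # atomic masses of 12C and 16O, in u
u_kg = 1.66053906660e-27       # atomic mass unit in kg
mu = m1 * m2 / (m1 + m2) * u_kg  # reduced mass in kg
omega = np.sqrt(k / mu)        # angular frequency, rad/s

c_cm = 2.99792458e10           # speed of light in cm/s
print(omega / (2 * np.pi * c_cm))  # ~2.14e3 cm^-1, near CO's fundamental
```

The result, around 2140 cm^-1, is close to the observed CO fundamental near 2143 cm^-1, which is why the harmonic model is a useful first approximation for diatomic vibrations.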
The Hooke's atom is a simple model of the helium atom using the quantum harmonic oscillator.
Modelling phonons, as discussed above.
A charge $q$ with mass $m$ in a uniform magnetic field $\mathbf{B}$ is an example of a one-dimensional quantum harmonic oscillator: Landau quantization.
| Physical sciences | Quantum mechanics | Physics |
50748 | https://en.wikipedia.org/wiki/Paris%20M%C3%A9tro | Paris Métro | The Paris Métro (, ), short for Métropolitain (), is a rapid transit system serving the Paris metropolitan area in France. A symbol of the city, it is known for its density within the capital's territorial limits, uniform architecture and historical entrances influenced by Art Nouveau. The system is long, mostly underground. It has 321 stations of which 61 have transfers between lines. Operated by the Régie autonome des transports parisiens (RATP), it has sixteen lines (with an additional four under construction), numbered 1 to 14, with two lines, Line 3bis and Line 7bis, named because they used to be part of Line 3 and Line 7, respectively. Three lines (1, 4 and 14) are automated. Lines are identified on maps by number and colour, with the direction of travel indicated by the terminus.
It is the second-busiest metro system in Europe, after the Moscow Metro, as well as the tenth-busiest in the world. It carried 1.498 billion passengers in 2019, roughly 4.1 million passengers a day, which makes it the most used public transport system in Paris. It is one of the densest metro systems in the world, with 244 stations within the city limits of Paris. Châtelet–Les Halles, with five Métro and three RER commuter rail lines, is one of the world's largest metro stations. The system generally has poor accessibility since most stations were built underground well before ease of access started being taken into consideration.
The first line opened without ceremony on 19 July 1900, during the World's Fair (). The system expanded quickly until World War I and the core was complete by the 1920s; extensions into suburbs were built in the 1930s. The network reached saturation after World War II with new trains to allow higher traffic, but further improvements have been limited by the design of the network and, in particular, the short distances between stations. In 1998, Line 14 was put into service to relieve RER A. Line 11's extension, opened in 2024, is the network's most recent. A large expansion programme known as the Grand Paris Express (GPE) is currently under construction with four new orbital Métro lines (15, 16, 17 and 18) around the Île-de-France region, outside the Paris city limits. Further plans exist for Line 1, Line 7, Line 10, a merger of Line 3bis and Line 7bis, Line 12, as well as a new proposed Line 19 in the city's outer suburbs.
Besides the Métro, central Paris and its urban area are served by five RER lines (602 km or 374 mi with 257 stations), fourteen tramway lines (186.6 km or 115.9 mi with 278 stations), nine Transilien suburban train lines (1,299 km or 807 mi with 392 stations), in addition to three VAL lines at Charles de Gaulle Airport and Orly Airport, making Paris one of the cities in the world best served by public transportation. Despite the network's uniform architecture, several of its stations stand out thanks to their unique designs. The Métro itself has become an icon in popular culture, being frequently featured in cinema and mentioned in music. In 2021, the RATP started offering an umbrella lending service at several Métro and RER stations, featuring the Métro's own rabbit mascot, which warns children to stay away from the closing doors.
Naming
Métro is the abbreviated name of the company that originally operated most of the network: the Empain group subsidiary Compagnie du chemin de fer métropolitain de Paris S.A. ("Paris Metropolitan Railway Company Ltd."), shortened to "Le Métropolitain". It was quickly abbreviated to Métro, which became a common designation and brand name for rapid transit systems in France and in many cities elsewhere.
The Métro is operated by the Régie autonome des transports parisiens (RATP), a public transport authority that also operates part of the RER network, light rail lines and many bus routes. The name Métro was adopted in many languages, making it the most used word for a (generally underground) urban transit system. "Compagnie du chemin de fer métropolitain" may have been adapted from the name of London's pioneering underground railway company, the Metropolitan Railway, which had been in business for almost 40 years prior to the inauguration of Paris's first line.
History
By 1845, Paris and the railway companies were already thinking about an urban railway system to link inner districts of the city. The railway companies and the French government wanted to extend mainline railways into a new underground network, whereas the Parisians favoured a new and independent network and feared national takeover of any system it built. The disagreement lasted from 1856 to 1890. Meanwhile, the population became denser and traffic congestion grew massively. The deadlock put pressure on the authorities and gave the city the green light.
Prior to 1845, the urban transport network consisted primarily of a large number of omnibus lines, consolidated by the French government into a regulated system with fixed and unconflicting routes and schedules. The first concrete proposal for an urban rail system in Paris was put forward by civil engineer Florence de Kérizouet. This plan called for a surface cable car system. In 1855, civil engineers Edouard Brame and Eugène Flachat proposed an underground freight urban railway, due to the high rate of accidents on surface rail lines. On 19 November 1871 the General Council of the Seine commissioned a team of 40 engineers to plan an urban rail network. This team proposed a network with a pattern of routes "resembling a cross enclosed in a circle" with axial routes following large boulevards. On 11 May 1872 the Council endorsed the plan, but the French government turned it down. After this point, a serious debate occurred over whether the new system should consist of elevated lines or of mostly underground lines; this debate involved numerous parties in France, including Victor Hugo, Guy de Maupassant, and the Eiffel Society of Gustave Eiffel, and continued until 1892. Eventually the underground option emerged as the preferred solution because of the high cost of buying land for rights-of-way in central Paris required for elevated lines, estimated at 70,000 francs per metre of line.
The last remaining hurdle was the city's concern about national interference in its urban rail system. The city commissioned renowned engineer Jean-Baptiste Berlier, who designed Paris' postal network of pneumatic tubes, to design and plan its rail system in the early 1890s. Berlier recommended a special track gauge (versus the standard gauge) to protect the system from national takeover, which inflamed the issue substantially. The issue was finally settled when the Minister of Public Works begrudgingly recognised the city's right to build a local system on 22 November 1895, and by the city secretly designing its trains and tunnels to be too narrow for mainline trains, while adopting standard gauge as a compromise with the state.
Fulgence Bienvenüe project
On 20 April 1896, Paris adopted the Fulgence Bienvenüe project, which was to serve only the city proper of Paris. Many Parisians worried that extending lines to industrial suburbs would reduce the safety of the city. Paris forbade lines to the inner suburbs and, as a guarantee, Métro trains were to run on the right, as opposed to existing suburban lines, which ran on the left.
Unlike many other subway systems (such as that of London), this system was designed from the outset as a system of (initially) nine lines. Such a large project required a private-public arrangement right from the outset – the city would build most of the permanent way, while a private concessionaire company would supply the trains and power stations, and lease the system (each line separately, for initially 39-year leases). In July 1897, six bidders competed, and the Compagnie Generale de Traction, owned by the Belgian Baron Édouard Empain, won the contract; this company was then immediately reorganised as the Compagnie du chemin de fer métropolitain.
Construction began in November 1898. The first line, Porte Maillot–Porte de Vincennes, was inaugurated on 19 July 1900 during the Paris World's Fair. Entrances to stations were designed in Art Nouveau style by Hector Guimard. Eighty-six of his entrances are still in existence.
Bienvenüe's project consisted of 10 lines, which correspond to current Lines 1 to 9. Construction was so intense that by 1920, despite a few changes from schedule, most lines had been completed. The shield method of construction was rejected in favor of the cut-and-cover method in order to speed up work. Bienvenüe, a highly regarded engineer, designed a special procedure of building the tunnels to allow the swift repaving of roads, and is credited with a largely swift and relatively uneventful construction through the difficult and heterogeneous soils and rocks.
Line 1 and Line 4 were conceived as central east–west and north–south lines. Two lines, ligne 2 Nord (Line 2 North) and ligne 2 Sud (Line 2 South), were also planned but Line 2 South was merged with Line 5 in 1906. Line 3 was an additional east–west line to the north of line 1 and line 5 an additional north to south line to the east of Line 4. Line 6 would run from Nation to Place d'Italie. Lines 7, 8 and 9 would connect commercial and office districts around the Opéra to residential areas in the north-east and the south-west. Bienvenüe also planned a circular line, the ligne circulaire intérieure, to connect the six mainline stations. A section opened in 1923 between Invalides and the Boulevard Saint-Germain before the plan was abandoned.
Nord-Sud competing network
On 31 January 1904, a second concession was granted to the Société du chemin de fer électrique souterrain Nord-Sud de Paris (Paris North-South underground electrical railway company), abbreviated to the Nord-Sud (North-South) company. It was responsible for building three proposed lines:
Line A would join Montmartre to Montparnasse as an additional north–south line to the west of Line 4.
Line B would serve the north-west of Paris by connecting Saint-Lazare station to Porte de Clichy and Porte de Saint-Ouen.
Line C would serve the south-west by connecting Montparnasse station to Porte de Vanves. The aim was to connect Line B with Line C, but the CMP renamed Line B as Line 13 and Line C as Line 14. Both were connected by the RATP as the current Line 13.
Line A was inaugurated on 4 November 1910, after being postponed because of floods in January that year. Line B was inaugurated on 26 February 1911. Because of the high construction costs, the construction of line C was postponed. Nord-Sud and CMP used compatible trains that could be used on both networks, but CMP trains used 600 volts third rail, and NS −600 volts overhead wire and +600 volts third rail. This was necessary because of steep gradients on NS lines. NS distinguished itself from its competitor with the high-quality decoration of its stations, the trains' extreme comfort and pretty lighting.
Nord-Sud did not become profitable and bankruptcy became unavoidable. By the end of 1930, the CMP bought Nord-Sud. Line A became Line 12 and Line B Line 13. Line C was built and renamed Line 14; that line was reorganised in 1937 with Lines 8 and 10. This partial line is now the south part of Line 13.
The last Nord-Sud train set was decommissioned on 15 May 1972.
1930–1950: first inner suburbs are reached
Bienvenüe's project was nearly completed during the 1920s. Paris planned three new lines and extensions of most lines to the inner suburbs, despite the reluctance of Parisians. Bienvenüe's inner circular line having been abandoned, the already-built portion between Duroc and Odéon was used for the creation of a new east–west line that became Line 10, extended west to Porte de Saint-Cloud and the inner suburbs of Boulogne.
The line C planned by Nord-Sud between Montparnasse station and Porte de Vanves was built as Line 14 (different from the present Line 14). It was extended north, encompassing the already-built portion between Invalides and Duroc initially planned as part of the inner circular. The over-busy Belleville funicular tramway would be replaced by a new line, Line 11, extended to Châtelet. Lines 10, 11 and 14 were thus the three new lines envisaged under this plan.
Most lines would be extended to the inner suburbs. The first to leave the city proper was Line 9, extended in 1934 to Boulogne-Billancourt; more followed in the 1930s. World War II forced authorities to abandon projects such as the extension of Line 4 and Line 12 to the northern suburbs. By 1949, eight lines had been extended: Line 1 to Neuilly-sur-Seine and Vincennes, Line 3 to Levallois-Perret, Line 5 to Pantin, Line 7 to Ivry-sur-Seine, Line 8 to Charenton, Line 9 to Boulogne-Billancourt, Line 11 to Les Lilas and Line 12 to Issy-les-Moulineaux.
World War II had a massive impact on the Métro. Services were limited and many stations closed. The risk of bombing meant the service between Place d'Italie and Étoile was transferred from Line 5 to Line 6, so that most of the elevated portions of the Métro would be on Line 6. As a result, Lines 2 and 6 now form a circle. Most stations were too shallow to be used as bomb shelters. The French Resistance used the tunnels to conduct swift assaults throughout Paris.
The network took a long time to recover after the Liberation in 1944. Many stations had still not reopened by the 1960s, and some closed for good. On 23 March 1948, the CMP (the underground) and the STCRP (buses and tramways) merged to form the RATP, which still operates the Métro.
1960–1990: development of the RER
The network became saturated during the 1950s. Outdated technology limited the number of trains, which led the RATP to stop extending lines and concentrate on modernisation. The MP 51 prototype was built, testing both the rubber-tyred metro and basic automatic driving on the voie navette. The first replacements of the older Sprague trains began with experimental articulated trains, then with the mainstream rubber-tyred MP 55 and MP 59, the latter remaining in service on Line 11 until 2024. Thanks to newer trains and better signalling, trains ran more frequently.
The population boomed from 1950 to 1980. Car ownership became more common and suburbs grew further from the centre of Paris. The main railway stations, termini of the suburban rail lines, were overcrowded during rush hour. The short distance between Métro stations slowed the network and made it unprofitable to build extensions. The solution in the 1960s was to revive a project abandoned at the end of the 19th century: joining suburban lines to new underground portions in the city centre as the Réseau Express Régional (regional express network; RER).
The RER plan initially included one east–west line and two north–south lines. The RATP bought two unprofitable SNCF lines – the Ligne de Saint-Germain (westbound) and the Ligne de Vincennes (eastbound) – with the intention of joining them and serving multiple districts of central Paris through new underground stations. The line created by this merger became Line A. The Ligne de Sceaux, which served the southern suburbs and had been bought by the CMP in the 1930s, was extended north to merge with an SNCF line and reach the new Charles de Gaulle Airport in Roissy, becoming Line B. These new lines were inaugurated in 1977, and their success far outstripped even the most optimistic forecasts: Line A is the most used urban rail line in Europe, with nearly 300 million journeys a year.
Because of the enormous cost of these two lines, the third planned line was abandoned, and the authorities decided that later additions to the RER network could be built more cheaply by the SNCF, alongside its continued management of other suburban lines. The RER lines developed by the SNCF, however, never matched the success of the RATP's two. In 1979, the SNCF created Line C by joining the suburban lines of the Gare d'Austerlitz and the Gare d'Orsay, the latter station being converted into a museum dedicated to impressionist paintings. During the 1980s, it developed Line D, the second line of the initial RER plan, but serving Châtelet instead of République to reduce costs. A huge Métro–RER hub was created at Châtelet–Les Halles, one of the world's largest underground stations.
The same 1960s project also merged Line 13 and Line 14 to create a quick connection between Saint-Lazare and Montparnasse as a new north–south line. Stations on the lengthened Line 13 are spaced farther apart than on other lines, making it more of an "express" line so that it could be extended farther into the suburbs. The new Line 13 was inaugurated on 9 November 1976.
1990–2010: Eole and Météor
In October 1998, Line 14 was inaugurated, the first fully new Métro line in 63 years. Known during its conception as Météor (Métro Est-Ouest Rapide), it was the first of what are now three fully automatic lines in the network, along with Line 1 and Line 4, and the first with platform screen doors to prevent suicides and accidents. It was conceived with suburban extensions in mind, much like the extensions of Line 13 built during the 1970s; as a result, most of its stations are at least a kilometre apart. Like the RER lines designed by the RATP, nearly all of its stations offer connections with multiple Métro lines. The line initially ran between Saint-Lazare and Olympiades and was extended north to Mairie de Saint-Ouen in 2020.
Lines 13 and 7 are the only two on the network split into branches. The RATP would like to eliminate these saturated branches to improve the network's efficiency; a project existed to transfer one branch of each line to Line 14 and to extend them further into the suburbs, but it was abandoned. In 1999, the RER Line E was inaugurated. Known during its conception as Eole (Est-Ouest Liaison Express), it is the fifth RER line. It terminates at , but a new project, financed by EPAD, the public authority managing the La Défense business district, should extend it west to La Défense–Grande Arche and the suburbs beyond.
2010 and beyond: automation
Between 2007 and November 2011, Line 1 was converted to driverless operation, running a mix of driver-operated and driverless trains until the last of its driverless MP 05 trains was delivered in February 2013. The same conversion of Line 4 was completed on 13 January 2022, with the last non-automatic train removed from that line on 17 December 2023, and the RATP would now like to automate Line 13. Line 14 has been automatic since its opening, as Lines 15 to 18, being built as part of the Grand Paris Express, will be.
Several extensions to the suburbs have opened in recent years. Line 8 was extended to Pointe du Lac in 2011, Line 12 to Aubervilliers in 2012, Line 4 to Mairie de Montrouge in 2013, Line 14 by to Mairie de Saint-Ouen in December 2020, and Line 4 to Bagneux in January 2022.
Accidents and incidents
10 August 1903: Couronnes Disaster (fire), 84 killed.
July – October 1995: Paris Métro bombings (terror attack), committed by Algerian extremists – 8 killed and more than 100 injured.
30 August 2000: an MF 67 train derailed at Notre-Dame-de-Lorette due to excessive speed while automatic speed control was unavailable; 24 people were slightly injured.
6 August 2005: fire broke out on a train at Simplon, injuring at least 19 people. Early reports blamed an electrical short circuit as the cause.
29 July 2007: a fire started on a train between Varenne and Invalides. Fifteen people were injured.
Network
Since the Métro was built to comprehensively serve the city inside its walls, stations are very close together: apart on average, ranging from on Line 4 to on the newer Line 14, so that Paris is densely networked with stations. The surrounding suburbs are served by later line extensions, so traffic from one suburb to another must pass through the city (the circular Line 15, now under construction, will enable some journeys that do not need to pass through Paris). The slow average speed effectively prohibits service to the greater Paris area.
The Métro is mostly underground ( of ). Above-ground sections consist of elevated viaducts within Paris (on Lines 1, 2, 5 and 6) and the surface-level suburban ends of Lines 1, 5, 8 and 13. The tunnels are relatively close to the surface because the variable nature of the terrain complicates deep digging; exceptions include parts of Line 12 under the hill of Montmartre and Line 2 under Ménilmontant. The tunnels mostly follow the twists and turns of the streets above. During construction in 1900, a minimum radius of curvature of just was imposed, but even this low standard was not adhered to at Bastille and Notre-Dame-de-Lorette.
Like the New York City Subway, and in contrast with the London Underground, the Paris Métro mostly uses two-way tunnels. As in most French metro and tramway systems, trains drive on the right (SNCF trains run on the left track). The tracks are . Electric power is supplied by a third rail which carries 750 volts DC.
The width of the carriages, , is narrower than that of newer French systems (such as the carriages in Lyon), and trains on Lines 1, 4 and 14 have capacities of 600–700 passengers, compared with 2,600 on the Altéo MI 2N trains of RER A. The City of Paris deliberately chose to build narrow Métro tunnels to prevent the running of mainline trains, the city and the French state having had historically poor relations. In contrast to many other historical metro systems (such as New York, Madrid, London and Boston), all lines have tunnels and operate trains of the same dimensions. Five Paris Métro lines (1, 4, 6, 11 and 14) run on a rubber-tyre system developed by the RATP in the 1950s and exported to the Montreal, Santiago, Mexico City and Lausanne metros.
The number of cars per train varies by line. The shortest trains, on Lines 3bis and 7bis, have three cars. Line 11 ran four-car trains until the summer of 2023, when its MP 59 trains, the oldest type then in service, were gradually replaced by new five-car MP 14 trains (at a pace of three to five new trains each Monday). Lines 1 and 4 run six-car trains. Line 14 currently runs a mix of six- and eight-car trains; in the future it will run only eight-car trains. All other lines run five-car trains. Two lines, 7 and 13, have branches at the end, and Line 10 has a one-way loop. Trains serve every station on each line except those closed for renovation.
Map
Opening hours
The first train leaves each terminus at 5:30 a.m. On some lines additional trains start from an intermediate station. The last train, often called the "balai" (broom) because it sweeps up remaining passengers, arrives at the terminus at 1:15 a.m., except on Fridays (since 7 December 2007), Saturdays and on nights before a holiday, when the service ends at 2:15 a.m.
On New Year's Eve, Fête de la Musique, Nuit Blanche and other events, some stations on Lines 1, 4, 6, 9 and 14 remain open all night.
Tickets
Tickets are sold at staffed counters and automated machines in the station foyer. Entry to platforms is through automated gates, opened by smart cards or paper tickets. The gates return tickets for passengers to retain for the duration of the journey. There is normally no system to collect or check tickets at the end of the journey, but tickets may be inspected at any point. Exits from all stations clearly mark the point beyond which possession of a ticket is no longer required.
Ticket t+
The standard ticket for a single trip is the Ticket t+. It is valid for a multi-transfer journey within 90 minutes from the first validation. It can be used on the Métro (excluding Orly Airport), buses and trams, and in zone 1 of the RER. It allows unlimited transfers between the same mode of transport (i.e. Métro to Métro, bus to bus and tram to tram), between bus and tram, and between Métro and RER zone 1. The ticket is available in paper form, or can be loaded onto a Navigo Easy pass. As of 2024, it costs €2.15 per ticket, and is also available as a pack of ten tickets (a carnet) for €17.35 on Navigo Easy.
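As a worked example of the carnet discount described above (a minimal sketch in Python; the prices are the 2024 figures quoted above, while the function name and structure are illustrative, not any official RATP interface):

```python
# Compare single Ticket t+ purchases with a ten-ticket carnet,
# using the 2024 prices quoted above: EUR 2.15 single, EUR 17.35 per carnet.

SINGLE_PRICE = 2.15   # EUR per single Ticket t+
CARNET_PRICE = 17.35  # EUR per pack of ten on Navigo Easy
CARNET_SIZE = 10

def carnet_saving(trips: int) -> float:
    """Saving in EUR from buying full carnets instead of single tickets.

    Trips beyond the last full carnet cost the single-ticket price in both
    scenarios, so only complete carnets contribute to the saving.
    """
    full_carnets = trips // CARNET_SIZE
    return full_carnets * (CARNET_SIZE * SINGLE_PRICE - CARNET_PRICE)

print(f"Effective price per carnet ticket: EUR {CARNET_PRICE / CARNET_SIZE:.3f}")  # 1.735
print(f"Saving over ten single tickets: EUR {carnet_saving(10):.2f}")  # 4.15
```

At the quoted prices, a carnet works out to about €1.74 per ticket, roughly a 19% saving over ten singles.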
Other tickets
Daily, weekly, and monthly passes are available for users of a Navigo card, an RFID-based contactless smart card. Daily tickets are also available as paper tickets until the end of 2024.
Paris Visite is a paper ticket aimed at visitors offering unlimited trips for a duration of one, two, three or five days, for zones 1–3 covering the centre of Paris, or zones 1–5 covering the whole of the network including the RER to the airports, Versailles and Disneyland Paris.
A single ticket to or from Orly Airport on Métro line 14 costs €10.30.
Facilities
On 26 June 2012, it was announced that the Métro would get Wi-Fi in most stations. Access would be free, with a paid premium option proposed for a faster connection. As of 2020, the entire RATP network was covered by 4G service, including within tunnels. The automated Line 1, Line 4 and Line 14 – as well as some congested stations on Line 13 – have platform edge doors ("portes palières") separating the tracks from the platform.
Accessibility
The vast majority of Métro stations are not fully accessible. The 20 stations of Line 14 (which first opened in 1998) are fully accessible, and all line extensions since 1992 have included lifts at the new stations. By 2025, 23 stations on the Métro will be accessible, following extensions to existing lines. The four new lines of the Grand Paris Express will also be fully accessible from day one.
The does not require the Métro to be made accessible. RATP estimates that retrofitting the network would cost between 4 and 6 billion euros, and that certain stations would remain impossible to retrofit. , there were no plans to retrofit existing stations with lifts. RATP notes that buses and trams in Paris are fully accessible, and many RER & Transilien stations are accessible.
Technical specifications
The Métro has of track and 321 stations, 61 of which offer connections between lines. These figures do not include the RER network. The average distance between stations is . Trains stop at all stations. Lines do not share tracks, even at interchange (transfer) stations.
Trains had a maximum permitted speed of and averaged at peak times as of 2018. The fastest lines were the automated ones: Line 14, which averaged , and Line 1, which averaged . Trains travel on the right. The track is standard gauge but the loading gauge is smaller than the mainline SNCF network. Power is from a lateral third rail, 750 V DC, except on the rubber-tyred lines where the current is from guide bars.
The loading gauge is small compared to those of newer metro systems (but comparable to that of early European metros), with capacities of between about 560 and 720 passengers per train on Lines 1–14. Many other metro systems (such as those of New York and London) adopted expanded tunnel dimensions for their newer lines (or used tunnels of multiple sizes almost from the outset, in the case of Boston), at the cost of operating incompatible fleets of rolling stock. Paris built all lines to the same dimensions as its original lines. Before the introduction of rubber-tire lines in the 1950s, this common shared size theoretically allowed any Métro rolling stock to operate on any line, but in practice each line was assigned a regular roster of trains.
A distinctive feature is the use of rubber-tyred trains on five lines: the technique was developed by the RATP and entered service in 1951, and was exported to many networks around the world (including Montreal, Mexico City and Santiago). Lines 1, 4, 6, 11 and 14 have special adaptations to accommodate rubber-tyred trains. Trains are composed of three to eight cars depending on the line, five being the most common, and all trains on a given line have the same number of cars.
The Métro is designed to provide local, point-to-point service in Paris proper and service into the city from some close suburbs. Stations within Paris are very close together to form a grid structure, ensuring that every point in the city is close to a Métro station (less than ), at the cost of speed, except on Line 14 where the stations are farther apart and the trains travel faster. The system is complemented by the RER, which extends farther out into the suburbs and functions as an express network for the city and its surroundings.
The Paris Métro runs mostly underground; surface sections include viaducts within Paris (Lines 1, 2, 5 and 6) and surface-level track in the suburbs (Lines 1, 5, 8 and 13). In most cases, both tracks are laid in a single tunnel. Almost all lines follow roads, having been built by the cut-and-cover method near the surface (the earliest by hand). Line 1 follows the straight course of the Champs-Elysées, while on other lines some stations (Liège, Commerce) have platforms that do not align, because the street above is too narrow to fit both platforms opposite each other. Many lines have very sharp curves: the specifications established in 1900 required a very low minimum curve radius by railway standards, and even this was often not fully respected, for example near Bastille and Notre-Dame-de-Lorette. Parts of the network are built at depth, in particular a section of Line 12 under Montmartre, the sections under the Seine, and all of Line 14.
Lines 7 and 13 have two terminal branches, while Line 7bis runs in a unidirectional loop at one end. Lines 2 and 5 each have one terminus, and Line 6 both of its termini, on a balloon loop. At one end of Lines 3bis and 7bis, trains essentially operate the same way but reverse instead of looping. At one end of Lines 2, 3bis and 4, trains run out of service around a balloon loop before re-entering service. At all other termini, trains continue a certain distance beyond the terminal before returning to the station on a different platform, headed the other way.
Rolling stock
Rolling stock is divided between trains with steel wheels (MF, for matériel fer) and rubber-tyred trains (MP, for matériel pneu). The different versions of each kind are designated by year of design. Some trains carry suffixes to differentiate them: CC (Conduite Conducteur) for trains operated by a driver and CA (Conduite Automatique) for automatically driven trains.
No longer in service
M1: in service from 1900 until 1931.
Sprague-Thomson: in service from 1908 until 1983.
MA 51: in service on lines 10 and 13 until 1994.
MP 55: in service on Line 11 from 1956 until 1999, replaced by the MP 59.
MP 59: in service from 1963 until 2024, replaced by the MP 14.
Zébulon: a prototype MF 67, used for training operators between 1968 and 2010; it never saw passenger service.
Not yet in service
MF 19: intended to replace the MF 67, MF 77 and MF 88 stocks on Lines 3, 3bis, 7, 7bis, 8, 10, 12 and 13.
MR3V/MR6V: intended to serve on line 15 (MR6V) and on lines 16 and 17 (MR3V).
MRV: intended to serve on line 18.
Lines
Lines under construction
Planned lines
Stations
The typical station comprises two central tracks flanked by two four-metre wide platforms. About 50 stations, generally current or former termini, are exceptions; most have three tracks and two platforms (Porte d'Orléans), or two tracks and a central platform (Porte Dauphine). Some stations are single-track, either due to difficult terrain (Saint-Georges), a narrow street above (Liège) or track loops (Église d'Auteuil).
Station length was originally , enough to accommodate the 5-car trains used on most lines. This was extended to on high-traffic lines (Line 1 and Line 4), which operate six-car trains, and some stations were built at to accommodate seven-car trains (a margin as yet unused).
In general, stations were built near the surface by the cut-and-cover method, and are vaulted. Stations of the former Nord-Sud network (Line 12 and Line 13) have higher ceilings, due to the former presence of a ceiling catenary. There are exceptions to the rule of near-surface vaulting:
Stations particularly close to the surface, generally on Line 1 (Champs-Elysées–Clémenceau), have flat metal ceilings.
Elevated (above street) stations, in particular on Line 2 and Line 6, are built in brick and covered by platform awnings (Line 2) or glass canopies (Line 6).
Stations on the newest sections (Line 14), built at depth, have platforms for eight-car trains, high ceilings and double-width platforms. Since the trains on this line are driverless, the stations have platform screen doors; screen doors have also been introduced on Line 1 and Line 4 since their conversion to automatic operation.
Several ghost stations are no longer served by trains. One of the three platforms at Porte des Lilas station lies on a currently unused section of track and is often used as a backdrop in films.
In 2018, the busiest stations were Saint-Lazare (46.7 million passengers), Gare du Nord (45.8), Gare de Lyon (36.9), Montparnasse – Bienvenüe (30.6), Gare de l'Est (21.4), Bibliothèque François Mitterrand (18.8), République (18.3), Les Halles (17.5), La Défense (16.0) and Bastille (13.2).
Interior decoration
Concourses are decorated in Art Nouveau style defined at the Métro's opening in 1900. The spirit of this aesthetic has generally been respected in renovations.
Standard vaulted stations are lined by small white earthenware tiles, chosen because of the poor efficiency of early twentieth century electric lighting. From the outset walls have been used for advertising; posters in early stations are framed by coloured tiles with the name of the original operator (CMP or Nord Sud). Stations of the former Nord Sud (most of line 12 and parts of line 13) generally have more meticulous decoration. Station names are usually inscribed on metallic plaques in white letters on a blue background or in white tiles on a background of blue tiles.
The first renovations took place after the Second World War, when the installation of fluorescent lighting revealed the poor state of the original tiling. Three main styles of redecoration followed in succession.
Between 1948 and 1967 the RATP installed standardised coloured metallic wall casings in 73 stations.
From the end of the 1960s a new style was rolled out in around 20 stations, known as Mouton-Duvernet after the first station concerned. The white tiles were replaced, up to a height of , with non-bevelled tiles in various shades of orange. Intended to be warm and dynamic, the renovations proved unpopular, and the decoration has been removed as part of the "Renouveau du métro" programme.
From 1975 some stations were redecorated in the Motte style, which emphasised the original white tiling but brought touches of colour to light fixtures, seating and the walls of connecting tunnels. The subsequent Ouï Dire style features audaciously shaped seats and light housings with complementary multicoloured uplighting.
A number of stations have original decorations to reflect the cultural significance of their locations. The first to receive this treatment was Louvre – Rivoli on line 1, which contains copies of the masterpieces on display at the museum. Other notable examples include Bastille (line 1), Saint-Germain-des-Prés (line 4), Cluny – La Sorbonne (line 10) and Arts et Métiers (line 11).
Exterior decoration
The original Art Nouveau entrances are iconic symbols of Paris; 83 of them remain. Designed by Hector Guimard in a style that caused some surprise and controversy in 1900, they come in two main variants:
The most elaborate feature glass canopies. Two original canopies still exist, at Porte Dauphine and Abbesses (originally located at until moved in the 1970s). A replica of the canopy at Abbesses was installed at Châtelet station at the intersection of Rue des Halles and Rue Sainte-Opportune.
A cast-iron balustrade decorated in plant-like motifs, accompanied by a "Métropolitain" sign supported by two orange globes atop ornate cast-iron supports in the form of plant stems.
Several of the iconic Guimard entrances have been given to other cities. The only original one at a metro station outside Paris is at Square-Victoria-OACI station in Montreal, as a monument to the collaboration of RATP engineers. Replicas cast from the original moulds have been given to the Lisbon Metro (Picoas station); the Mexico City Metro (Metro Bellas Artes, with a "Metro" sign), offered in return for a Huichol mural displayed at Palais Royal – Musée du Louvre; and Chicago Metra (Van Buren Street, at South Michigan Avenue and East Van Buren Street, with a "Metra" sign), given in 2001. The Moscow Metro has a Guimard entrance at Kievskaya station, donated by the RATP in 2006. Another is on display at the Sculpture Garden in downtown Washington, D.C.; it is purely decorative and does not lead to a station. Similarly, the Museum of Modern Art has an original, restored Guimard entrance outdoors in the Abby Aldrich Rockefeller Sculpture Garden.
Later stations and redecorations have brought increasingly simple styles to entrances.
Classical stone balustrades were chosen for some early stations in prestigious locations (Franklin D. Roosevelt, République).
Simpler metal balustrades accompany a "Métro" sign crowned by a spherical lamp in other early stations (Saint-Placide).
Minimalist stainless-steel balustrades (Havre-Caumartin) appeared from the 1970s, and signposts with just an "M" have been the norm since the Second World War (Olympiades, opened 2007).
A handful of entrances have original architecture (Saint-Lazare); a number are integrated into residential or standalone buildings (Pelleport).
Future
Under construction
As part of the Grand Paris Express project:
The first (southern) section of future Line 15 between and . This section is long and will have sixteen stations. Opening is currently planned for 2025.
The first (northern) section of future Line 16 between and with seven new stations. Opening is currently planned for 2026.
The first (southern) section of future Line 17 between and with one new station. Opening is currently planned for 2026.
The first (central) section of future Line 18 between and with seven new stations. Opening is currently planned in two phases for 2026 & 2027.
Planned
The original Grand Paris Express plans had a total span of and counted 68 stations, the completion of which forms the major part of the currently planned lines.
Line 15, the longest of the new Grand Paris Express lines, will be a circular line around Paris when completed in 2031.
The second (southern) section of Line 16 between Clichy–Montfermeil and will open in 2028.
Line 17 will be additionally extended in two phases in 2028 & 2030 to , running through Charles de Gaulle Airport.
Line 18 will be extended to the north, to , by 2030.
Proposed
In addition to the projects already under construction or currently being actively studied, there have also been proposals for:
An extension of Line 1 from to , connecting to RER and future Line 15 in 2035.
An extension of Line 3 to .
An extension of Line 5 to (south) and (north), as well as a new infill station ().
An extension of Line 7 to .
An extension of Line 9 to .
An extension of Line 10 from to .
An extension of Line 10 from to or even , the latter connecting to RER and future Line 15 (around 2030 to 2035).
An extension of Line 11 to as part of the Grand Paris Express.
An extension of Line 12 to (south) and (north).
An extension of Line 13's branch to and branch to .
An extension of Line 17 south to , under study for completion after 2030 as part of the Grand Paris Express.
An extension of Line 18 north to , under study for completion after 2030 as part of the Grand Paris Express.
In 2023, the Grand Paris Express plans were extended with the addition of Line 19, serving on a route from to , under study for a completion around 2040.
A merger of Line 3bis and Line 7bis to form a new line.
Cultural significance
The Métro has a cultural significance in the arts that goes well beyond Paris. The term "metro" has become a generic name for subways and urban underground railways.
The station entrance kiosks, designed by Hector Guimard, fostered Art Nouveau building style (once widely known as "le style Métro"); however, some French commentators criticised the Guimard station kiosks, including their green colour and sign lettering, as difficult to read.
The success of rubber-tyred lines led to their export to metro systems around the world, starting with the Montreal Metro. Montreal's success "did much to accelerate the international subway boom" of the 1960s and 1970s and "assure the preeminence of the French in the process". Rubber-tyred systems were adopted in Mexico City, Santiago, Lausanne, Turin, Singapore and other cities. The Japanese applied rubber-tyred metro technology (with their own designs and manufacturing firms) to systems in Kobe and Sapporo, as well as parts of Tokyo.
The "Rabbit of the Paris Métro" is an anthropomorphic rabbit visible on stickers on the doors of the trains since 1977 to advise passengers (especially children) of the risk of getting one's hands trapped when the doors are opening, as well as the risk of injury on escalators or becoming trapped in the closing doors. This rabbit is now a popular icon in Paris similar to the "mind the gap" phrase in London.
Ferry
A ferry is a boat that transports passengers, and occasionally vehicles and cargo, across a body of water. A small passenger ferry with multiple stops, like those in Venice, Italy, is sometimes referred to as a water taxi or water bus.
Ferries form a part of the public transport systems of many waterside cities and islands, allowing direct transit between points at a capital cost much lower than bridges or tunnels. Ship connections of much larger distances (such as over long distances in water bodies like the Baltic Sea) may also be called ferry services, and many carry vehicles.
History
The profession of the ferryman is embodied in Greek mythology in Charon, the boatman who transported souls across the River Styx to the Underworld.
Speculation about a ship propelled by a pair of oxen driving a water wheel appears in the 4th-century Roman treatise Anonymus De Rebus Bellicis. Though impractical, there is no reason the design could not work, and such a ferry, adapted to use horses, operated on Lake Champlain in 19th-century America; see Experiment (horse-powered boat).
In 1850 the roll-on/roll-off (ro-ro) ferry Leviathan, designed to carry freight wagons efficiently across the Firth of Forth in Scotland, started operating between Granton, near Edinburgh, and Burntisland in Fife. The vessel's design was highly innovative, and its ability to move freight in great quantities with minimal labour signalled the way ahead for sea-borne transport, converting the ro-ro ferry from an experimental and marginal ship type into one of central importance in the transport of goods and passengers.
In 1871, the world's first car ferry crossed the Bosphorus in Istanbul. The iron steamship, named Suhulet (meaning 'ease' or 'convenience'), was designed by Giritli Hüseyin Haki Bey, general manager of Şirket-i Hayriye (the Bosporus Steam Navigation Company), and built at the Greenwich shipyard of Maudslay, Sons and Field. It weighed 157 tons, was long, wide and had a draft of . It could travel at up to 6 knots, its side wheel turned by a 450-horsepower, single-cylinder, two-cycle steam engine. Launched in 1872, Suhulet featured a symmetrical entry and exit for horse carriages, along with a dual system of hatchways. The ferry operated on the Üsküdar–Kabataş route, which is still served by modern ferries today.
Notable services
Asia
In Hong Kong, Star Ferry carries passengers across Victoria Harbour. Other carriers ferry travelers between Hong Kong Island and outlying islands like Cheung Chau, Lantau Island and Lamma Island.
In the Philippines, the Philippine Nautical Highway System forms the backbone of the nationwide transport system by integrating ports with highway systems; the system has three main routes. Another well-known service is the Pasig River Ferry Service, the only water-based transportation in Metro Manila, which cruises the Pasig River.
Bangladesh
India
India's ro-ro ferry service between Ghogha and Dahej was inaugurated by Prime Minister Narendra Modi on 22 October 2017. It connects South Gujarat and Saurashtra, replacing a road journey of with a ferry crossing of , and is part of the larger Sagar Mala project.
Water transport in Mumbai consists of ferries, hovercraft and catamarans, operated by various government agencies as well as private entities. The Kerala State Water Transport Department (SWTD), operating under the Ministry of Transport, Government of Kerala, India, regulates the inland navigation systems in the Indian state of Kerala and provides inland water transport facilities. It caters to the passenger and cargo traffic needs of the inhabitants of the waterlogged areas of the districts of Alappuzha, Kottayam, Kollam, Ernakulam, Kannur and Kasargode. SWTD ferry services are also one of the most affordable ways to enjoy the scenic Kerala backwaters.
Indonesia
As the largest archipelagic country, Indonesia has many ferry routes, managed mostly by PT ASDP Indonesia Ferry (Persero) and several private companies. ASDP Indonesia Ferry (ASDP) is a state-owned company engaged in integrated ferry and port services and waterfront tourist destinations. It operates a fleet of more than 160 ferries handling more than 300 routes across 36 ports throughout Indonesia.
Japan
Japan used to rely heavily on ferries for passenger and goods transportation among its four main islands of Hokkaido, Honshu, Shikoku and Kyushu. However, as highway and railway bridges and undersea tunnels (such as the Seikan Tunnel and the Honshū–Shikoku Bridge Project) have been constructed, ferry transportation has largely become the preserve of short-distance sightseeing passengers, with or without cars, and of long-distance truck drivers hauling goods.
Malaysia
The Malaysian state of Penang is home to the oldest ferry service in the country. The first regular ferry service operating across the Penang Strait between George Town and Province Wellesley (now Seberang Perai) was launched in 1894 by Quah Beng Kee and his brothers. The iconic yellow double-deck roll-on/roll-off (RORO) ferries were introduced in 1957. Between 1959 and 2002, a total of 15 vessels were commissioned for the service.
Currently operated by the Penang Port Commission, the ferry service has evolved over the decades. The RORO ferries were retired in 2021, with speedboats temporarily replacing them. In 2023, these speedboats were succeeded by four newly-built catamarans, which now serve only passengers and motorcyclists. These catamarans operate between the Raja Tun Uda Ferry Terminal in George Town and the Sultan Abdul Halim Ferry Terminal in Seberang Perai.
Russian Federation
Due to its geographical features, Russia has a large number of both sea and river ferry crossings. Car ferries operate from the continental part of Russia to Sakhalin, Kamchatka and Japan. The Ust-Luga–Kaliningrad ferry also runs; until February 2022, ferries also ran from St. Petersburg to various cities on the Baltic Sea. Before the construction of the Kerch Bridge there was a ferry across the Kerch Strait, a service resumed after the Kerch Bridge explosion.
There are also more than 100 ferry crossings on different rivers in Russia. These are usually symmetrical through ferries with two ramps for quick entry and exit of cars. For some categories of car owners, these ferries may be free if there is no alternative crossing of the river.
Europe
Great Britain
The busiest seaway in the world, the English Channel, connects Great Britain and mainland Europe, with ships sailing from the UK ports of Dover, Newhaven, Poole, Portsmouth and Plymouth to French ports, such as Calais, Dunkirk, Dieppe, Roscoff, Cherbourg-Octeville, Caen, St Malo and Le Havre. The busiest ferry route to France is the Dover to Calais crossing with approximately 9,168,000 passengers using the service in 2018. Ferries from Great Britain also sail to Belgium, the Netherlands, Norway, Spain and Ireland. Some ferries carry mainly tourist traffic, but most also carry freight, and some are exclusively for the use of freight lorries. In Britain, car-carrying ferries are sometimes referred to as RORO (roll-on, roll-off) for the ease by which vehicles can board and leave.
Denmark
The busiest single ferry route in terms of the number of departures crosses the northern part of the Øresund, between Helsingborg in Scania, Sweden, and Elsinore, Denmark. Before the Øresund Bridge opened in July 2000, car and "car and train" ferries departed up to seven times every hour (roughly every 8.5 minutes). This frequency has since been reduced, but a car ferry still departs from each harbour every 15 minutes during the daytime. The route is around and the crossing takes 22 minutes. Today, all ferries on this route are built so that they do not need to turn around in the harbours; lacking distinct bows and sterns, the vessels sail in both directions, with starboard and port assigned dynamically depending on the direction of travel. Despite the short crossing, the ferries are equipped with restaurants (on three of the four vessels), cafeterias and kiosks. Foot passengers often make a double or triple return journey to use the restaurants, for which a single-journey ticket is sufficient. Passenger and bicycle tickets are inexpensive compared with longer routes.
Baltic Sea
Large cruiseferries sail the Baltic Sea between Finland, Åland, Sweden, Estonia, Latvia and Saint Petersburg, Russia. In many ways these ferries are like cruise ships, but they can also carry hundreds of cars on car decks. Besides providing passenger and car transport across the sea, Baltic Sea cruiseferries are a popular tourist destination in their own right, with multiple restaurants, nightclubs, bars, shops and entertainment on board. Helsinki was the busiest international passenger ferry port in the world in 2017, with over 11.8 million passengers, while the second busiest, Dover, had 11.7 million. The Helsinki–Tallinn route alone accounted for nine million passengers. In 2022 the port of Helsinki had almost 8 million passengers, of whom 6.3 million travelled between Helsinki and Tallinn. Many smaller ferries also operate on domestic routes in Finland, Sweden and Estonia.
The south-western and southern parts of the Baltic Sea have several routes mainly for heavy traffic and cars. The routes Rødby–Puttgarden, Trelleborg–Rostock, Trelleborg–Travemünde, Trelleborg–Świnoujście, Gedser–Rostock, Gdynia–Karlskrona and Ystad–Świnoujście are all typical transport ferries. On the longer of these routes, simple cabins are available. Some of these routes previously also carried trains, but since 2020 those trains have instead been routed around the Baltic via the Great Belt fixed link and Jutland.
Turkey
In Istanbul, ferries connect the European and Asian shores of the Bosphorus, as well as the Princes' Islands and nearby coastal towns. In 2014, İDO transported 47 million passengers, making it the largest ferry operation in the world.
Italy
The largest ferry system in Italy is in Venice, where the city's water taxis (Italian: taxi d'acqua), each carrying up to 10 people, provide service all around the city's canals, operating on a series of lines that stop at different locations around Venice.
Sweden
The world's shortest ferry line is the Ferry Lina in Töreboda, Sweden; the hand-powered crossing takes around 20–25 seconds.
North America
Canada
Due to the number of large freshwater lakes and the length of shoreline in Canada, various provinces and territories have ferry services.
BC Ferries operates the third largest ferry service in the world, carrying travellers between Vancouver Island and the British Columbia mainland on the country's west coast, and serving other islands including the Gulf Islands and Haida Gwaii. In 2015, BC Ferries carried more than 8 million vehicles and 20 million passengers. Vancouver itself is also served by the SeaBus.
Canada's east coast has been home to numerous inter- and intra-provincial ferry and coastal services, including a large network operated by the federal government under CN Marine and later Marine Atlantic. Private and publicly owned ferry operations in eastern Canada include Marine Atlantic, serving the island of Newfoundland, as well as Bay, NFL, CTMA, Coastal Transport and STQ (the Société des traversiers du Québec). Canadian waters in the Great Lakes once hosted numerous ferry services, but these have been reduced to those offered by Owen Sound Transportation and several smaller operations. Several commuter passenger ferry services also operate in major cities, such as Metro Transit in Halifax and the Toronto Island ferries in Toronto.
United States
Due to the North Carolina coast's geography, consisting of numerous sounds, inlets, tidal arms, and islands, ferry transportation is essential in the region. The state operates twelve routes, eight of which are under the oversight of the North Carolina Department of Transportation Ferry Division, three of which are under the direct oversight of the North Carolina Department of Transportation, and one of which is under the oversight of the North Carolina Division of Parks and Recreation. Three of the Ferry Division routes are tolled, and all ferry routes operated by the North Carolina Department of Transportation carry both vehicles and pedestrians, although certain vessels only carry pedestrians and cyclists. The National Park Service additionally works with private companies to offer ferry service to locations such as Cape Lookout and Portsmouth.
Washington State Ferries operates the most extensive ferry system in the continental United States and the second largest in the world by vehicles carried, with ten routes on Puget Sound and the Strait of Juan de Fuca serving terminals in Washington and Vancouver Island. In 2016, Washington State Ferries carried 10.5 million vehicles and 24.2 million riders in total.
The Alaska Marine Highway System provides service between Bellingham, Washington, and various towns and villages throughout Southeast and Southwest Alaska, including crossings of the Gulf of Alaska. AMHS provides affordable access to many small communities with no road connection or airport.
The Staten Island Ferry in New York City, sailing between the boroughs of Manhattan and Staten Island, is the nation's single busiest ferry route by passenger volume. Unlike riders on many other ferry services, Staten Island Ferry passengers pay no fare. New York City also has a network of smaller ferries, or water taxis, that shuttle commuters along the Hudson River from locations in New Jersey and Northern Manhattan down to the Midtown, Downtown and Wall Street business centers. Several ferry companies also offer service linking Midtown and Lower Manhattan with locations in the boroughs of Queens and Brooklyn, crossing the city's East River. Mayor Bill de Blasio announced in February 2015 that the city would begin an expanded citywide ferry service, launched as NYC Ferry in 2017, linking previously relatively isolated communities such as Manhattan's Lower East Side, Soundview in the Bronx, Astoria and the Rockaways in Queens, and the Brooklyn neighborhoods of Bay Ridge, Sunset Park and Red Hook with existing ferry landings in Lower Manhattan and Midtown Manhattan. A second expansion phase connected Staten Island to the West Side of Manhattan and added a stop in Throgs Neck, in the Bronx. NYC Ferry now operates the largest passenger ferry fleet in the United States.
The New Orleans area also has many ferries that carry both vehicles and pedestrians. Most notable is the Algiers Ferry, which has been in continuous operation since 1827 and is one of the oldest operating ferries in North America.
In New England, vehicle-carrying ferry services between mainland Cape Cod and the islands of Martha's Vineyard and Nantucket are operated by the Woods Hole, Martha's Vineyard and Nantucket Steamship Authority, which sails year-round between Woods Hole and Vineyard Haven as well as Hyannis and Nantucket. Seasonal service also operates from Woods Hole to Oak Bluffs during the summer and fall. As there are no bridges or tunnels connecting the islands to the mainland, the Steamship Authority's ferries, in addition to being the only method of transporting private cars to or from the islands, also carry heavy freight and supplies, such as construction materials and fuel, competing with tug and barge companies. Additionally, Hy-Line Cruises operates high-speed catamaran service from Hyannis to both islands, and several smaller operations run seasonal passenger-only service primarily geared towards tourist day-trippers from other mainland ports, including New Bedford (New Bedford Fast Ferry), Falmouth (Island Queen ferry and Falmouth Ferry) and Harwich (Freedom Cruise Line). Ferries also carry riders and vehicles across Long Island Sound to the Connecticut cities of Bridgeport and New London, and to Block Island in Rhode Island from points on Long Island.
Transbay commuting in the San Francisco Bay Area was primarily ferry-based until the advent of automobiles in the 1940s, and most bridges in the area were built to supplant ferry services. By the 1970s, ferries were used primarily by tourists, with Golden Gate Ferry, run by the same governing body as the Golden Gate Bridge, left as the sole commuter operator. The 1989 Loma Prieta earthquake prompted the restoration of service to the East Bay. The modern ferry network is primarily under the authority of San Francisco Bay Ferry, which connects cities as far away as Vallejo. Tourist excursions are also offered by the Blue & Gold Fleet and the Red & White Fleet. A ferry serves Angel Island (which also accepts private craft), while Alcatraz is served exclusively by a ferry service administered by the National Park Service.
Until the completion of the Mackinac Bridge in the 1950s, ferries were used for vehicle transportation between the Lower and Upper Peninsulas of Michigan, across the Straits of Mackinac in the United States. Ferry service for bicycles and passengers continues across the straits to Mackinac Island, where motorized vehicles are almost completely prohibited. The crossing is served by two ferry lines, Shepler's Ferry and the Mackinac Island Ferry Company (formerly Star Line).
Lake Express operates a ferry service between Milwaukee, Wisconsin, and Muskegon, Michigan, while the SS Badger runs between Manitowoc, Wisconsin, and Ludington, Michigan. Both cross Lake Michigan.
Numerous additional inland ferry routes exist in the United States, such as the Cave-In-Rock Ferry across the Ohio River, and the Benton-Houston Ferry across the Tennessee River.
Modernization of ferry system
The FTA announced in September 2024 that it would award $300 million in grants to modernize ferry systems in the United States. These grants will support 18 projects across 14 states, with an emphasis on upgrading to environmentally friendly propulsion systems; eight of the 18 projects will receive funding for that purpose.
One notable project is the San Francisco ferry system, which will receive $11.5 million to improve the connection between Treasure Island and Mission Bay. In Maine, the ferry system will be upgraded in Lincolnville and Islesboro. Additionally, Alaska will receive a significant $106.4 million grant to replace a 60-year-old vessel operating in the southwest. This vessel is a crucial connector for the region.
These grants are part of the FTA's efforts to improve ferry transportation in the United States and promote sustainable transportation options.
Mexico
Mexico has ferry services run by Baja Ferries that connect La Paz, on the Baja California Peninsula, with Mazatlán and Topolobampo. Passenger ferries also run from Playa del Carmen to the island of Cozumel.
South America
There are several ferries in South America; the Chacao Channel in Chile, for example, is served by ferry lines.
Oceania
Australia
In Australia, two Spirit of Tasmania ferries carry passengers and vehicles across Bass Strait, the body of water separating Tasmania from the Australian mainland, often in turbulent sea conditions. The crossings run overnight, with additional daytime sailings at peak times. Both ferries are based in the northern Tasmanian port city of Devonport and sail to Geelong; before Geelong, the service sailed to Melbourne.
The double-ended Freshwater-class ferry cuts an iconic shape as it makes its way up and down Sydney Harbour, New South Wales, Australia, between Manly and Circular Quay.
New Zealand
In New Zealand, ferries connect Wellington in the North Island with Picton in the South Island, linking New Zealand's two main islands. The route is , and is run by two companies – the government-owned Interislander and the independent Bluebridge – which say the trip takes three and a half hours.
Types
Ferry designs depend on the length of the route, the passenger or vehicle capacity required, speed requirements and the water conditions the craft must deal with.
Double-ended
Double-ended ferries have interchangeable bows and sterns, allowing them to shuttle back and forth between two terminals without having to turn around. Well-known double-ended ferry systems include BC Ferries, the Staten Island Ferry, Washington State Ferries, the Star Ferry, several ferries on the North Carolina Ferry System, and the Lake Champlain Transportation Company. Most Norwegian fjord and coastal ferries are double-ended vessels, and all ferries from southern Prince Edward Island to the Canadian mainland were double-ended until that service was discontinued upon completion of the Confederation Bridge. Some ferries in Sydney, Australia, and British Columbia are also double-ended. In 2008, BC Ferries launched the first of the Coastal-class ferries, at the time the world's largest double-enders. They were surpassed when P&O Ferries launched their first double-ender, the P&O Pioneer, which entered service in June 2023, replacing Pride of Kent.
Hydrofoil
Hydrofoils have the advantage of higher cruising speeds, succeeding hovercraft on some English Channel routes where the ferries now compete against the Eurotunnel and Eurostar trains that use the Channel Tunnel. Passenger-only hydrofoils also proved a practical, fast and relatively economical solution in the Canary Islands, but were recently replaced by faster catamaran "high speed" ferries that can carry cars. Their replacement by the larger craft is seen by critics as a retrograde step given that the new vessels use much more fuel and foster the inappropriate use of cars in islands already suffering from the impact of mass tourism.
Hovercraft
Hovercraft were developed in the 1960s and 1970s to carry cars. The largest was the massive SR.N4, which carried cars in its centre section, loaded via ramps at the bow and stern, between England and France. Hovercraft were superseded by catamarans, which are nearly as fast and less affected by sea and weather conditions. Only one service now remains, a foot-passenger service between Portsmouth and the Isle of Wight run by Hovertravel.
Catamaran
Since 1990, high-speed catamarans have revolutionised ferry services, replacing hovercraft, hydrofoils and conventional monohull ferries. In the 1990s there were a variety of builders, but the industry has consolidated around two builders of large vehicular ferries of 60 to 120 metres: Incat of Hobart, Tasmania, favours a wave-piercing hull to deliver a smooth ride, while Austal of Perth, Western Australia, builds ships based on SWATH designs. Both companies also compete in the smaller river-ferry market with a number of other shipbuilders.
Stena Line once operated the largest catamarans in the world, the Stena HSS class, between the United Kingdom and Ireland. These waterjet-powered vessels displaced 19,638 tonnes and accommodated 375 passenger cars and 1,500 passengers. Other examples of these super-size catamarans are found in the Condor Ferries fleet with the Condor Voyager and Rapide.
Roll-on/roll-off
Roll-on/roll-off ferries (RORO) are large conventional ferries named for the ease by which vehicles can board and leave.
Cruiseferry / RoPax
A cruiseferry is a ship that combines the features of a cruise ship with a roll-on/roll-off ferry. They are also known as RoPax for their combined Roll on/Roll Off and passenger design.
Fast RoPax ferry
Fast RoPax ferries are conventional ferries with a large garage intake and a relatively large passenger capacity, using conventional diesel propulsion and propellers, that sail at over . Attica Group pioneered this class of ferry when it introduced Superfast I between Greece and Italy in 1995 through its subsidiary Superfast Ferries. Cabins, where present, are much smaller than those on cruise ships.
Turntable ferry
This type of ferry loads vehicles from the side. The vehicle platform can be rotated: when loading, it is turned sideways to allow vehicles to drive on, then turned back in line with the vessel for the crossing.
Pontoon ferry
Pontoon ferries and flat-bottomed boats such as punts carry passengers and vehicles across rivers and lakes and are widely used in less-developed countries with large rivers where the cost of bridge construction is prohibitive. One or more vehicles are carried on such ferries with ramps at either end for vehicles or animals to board. Cable ferries are usually pontoon ferries. In the Netherlands, Belgium and Germany many such small cable ferries exist and are called püntes.
Train ferry
A train ferry is a ship designed to carry railway vehicles. Typically, one level of the ship is fitted with railway tracks, and the vessel has a door at either or both of the front and rear to give access to the wharves.
Foot ferry
Foot ferries are small craft used to ferry foot passengers, and often also cyclists, over rivers. They are either self-propelled craft or cable ferries, and are found, for example, on the lower River Scheldt in Belgium and particularly the Netherlands. Regular foot ferry service also exists in Prague, the capital of the Czech Republic, and across the Yarra River at Newport in Melbourne, Australia. Restored, expanded ferry service in the Port of New York and New Jersey uses boats for pedestrians only.
The UK has a variety of historic foot ferries such as the Butley Foot Ferry across Butley Creek which dates back to 1383.
Cable ferry
Very short distances may be crossed by a cable or chain ferry, usually a pontoon ferry (see above), propelled along and steered by cables connected to each shore. Sometimes the cable ferry is human-powered by someone on board. Reaction ferries are cable ferries that use the perpendicular force of the current as a source of power; examples are the four Rhine ferries in Basel, Switzerland. Cable ferries may be used in fast-flowing rivers across short distances. With a crossing of approximately 1,900 metres, the cable ferry between Vancouver Island and Denman Island in British Columbia is the longest in the world.
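To illustrate the reaction-ferry principle mentioned above, here is a crude flat-plate sketch (not drawn from any source on these ferries; ρ, v, A, C_N and θ are all illustrative symbols): with the hull held by the cable at an angle θ to the current, the water flowing past generates a cross-stream force of roughly

```latex
% Crude flat-plate model of reaction-ferry thrust (illustrative only).
% \rho: water density, v: current speed, A: submerged lateral hull area,
% C_N: empirical normal-force coefficient, \theta: hull angle to the flow.
F_{\perp} \approx \tfrac{1}{2}\,\rho\,v^{2}\,A\,C_{N}\,\sin\theta\,\cos\theta
```

Under this simple model the cross-stream thrust peaks near θ = 45°; the cable prevents the ferry from drifting downstream, so the lateral component alone carries it across.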
Free ferries operate in some parts of the world, such as at Woolwich in London, England (across the River Thames); in Amsterdam, Netherlands (across the IJ waterway); along the Murray River in South Australia; and across many lakes in British Columbia. Many cable ferries operate on lakes and rivers in Canada, among them a tolled cable ferry on the Rivière des Prairies between Laval-sur-le-Lac and Île Bizard in Quebec. In Finland there were 40 road ferries (cable ferries) in 2009, on lakes, rivers and on the sea between islands.
Air ferries
In the 1950s and 1960s, travel on an "air ferry" was possible: airplanes, often ex-military, specially equipped to carry a small number of cars in addition to foot passengers. These operated on various routes, including between the United Kingdom and Continental Europe. Companies operating such services included Channel Air Bridge, Silver City Airways, and Corsair.
The term is also applied to any "ferrying" by air, and is commonly used when referring to airborne military operations.
Docking
Ferries often dock at specialized facilities designed to position the boat for loading and unloading, called ferry slips. If the ferry transports road vehicles or railway carriages, there will usually be an adjustable ramp called an apron as part of the slip. In other cases, the apron ramp is part of the ferry itself, acting as a wave guard when elevated and lowering to meet a fixed ramp at the terminus – a road segment that extends partially underwater – or the ferry slip.
Records
Gross tonnage
The world's largest ferries are typically those operated in Europe, with different vessels holding the record depending on whether length, gross tonnage or vehicle capacity is the metric.
Oldest
The sole contender as the oldest ferry in continuous operation is the Mersey Ferry between Liverpool and Birkenhead, England. The Benedictine Priory at Birkenhead was established in 1150, and its monks charged a small fare to row passengers across the estuary. In 1330, Edward III granted a charter to the Priory and its successors for ever: "the right of ferry there... for men, horses and goods, with leave to charge reasonable tolls". However, there may have been a short break in service following the Dissolution of the Monasteries after 1536.
On 11 October 1811, inventor John Stevens' ship, the Juliana, began operation as the first steam-powered ferry, with service between New York City and Hoboken, New Jersey.
The Elwell Ferry, a cable ferry in North Carolina, crosses shore to shore with a travel time of five minutes.
Largest networks
Waxholmsbolaget – 21 vessels serving around 300 ports of call in the Stockholm archipelago.
Istanbul Ferry Network – 87 vessels serving 86 ports of call in and around the Bosporus of Istanbul, Turkey.
BC Ferries – 36 vessels serving 47 ports of call along the west coast of British Columbia, Canada, carrying 22.3 million passengers annually.
Caledonian MacBrayne – 31 vessels serving 50 ports of call along the west coast of Scotland, carrying 1.43 million passengers annually.
Sydney Ferries – 31 vessels serving 36 ports of call in Port Jackson (Sydney Harbour), carrying 15.3 million passengers annually.
Washington State Ferries – 21 vessels serving 20 ports of call around Puget Sound of Washington, United States, carrying 24.2 million passengers annually.
Metrolink Queensland – 21 vessels serving 26 ports of call along the Brisbane River in Brisbane, Australia, carrying 2.7 million passengers annually.
Société des traversiers du Québec
Busiest networks
Istanbul Ferry Network – 40 million passengers annually.
Washington State Ferries – 24.2 million passengers annually.
Staten Island Ferry in New York City – 23.9 million passengers annually; busiest single-line ferry in the world.
Amsterdam GVB Ferries – 22.4 million passengers annually.
BC Ferries – 22.3 million passengers annually.
Star Ferry in Hong Kong – 19.7 million passengers annually.
Fastest
The gas-turbine-powered Luciano Federico L, operated by Montevideo-based Buquebus, holds the Guinness World Record for the fastest car ferry in the world, in service between Montevideo, Uruguay, and Buenos Aires, Argentina; its record maximum speed was achieved in sea trials. It can carry 450 passengers and 52 cars along the route.
Sustainability
The contributions of ferry travel to climate change have received less scrutiny than land and air transport, and vary considerably according to factors like speed and the number of passengers carried. Average carbon dioxide emissions by ferries per passenger-kilometre also vary markedly by route: reported figures for ferries between Finland and Sweden, for example, differ substantially from those for ferries between Finland and Estonia.
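As a rough illustration of how such per-passenger figures are derived, the sketch below divides a vessel's total CO2 output for a crossing among its passengers. All numbers are placeholder assumptions, not figures from the text.

```python
# Illustrative ferry CO2 accounting. The distance, passenger count, and
# per-kilometre emission rate below are assumptions; real factors depend
# on the vessel, its speed, and its load.

def per_passenger_co2_kg(distance_km, passengers, vessel_co2_per_km_kg):
    """CO2 attributed to each passenger for one crossing, in kilograms."""
    return vessel_co2_per_km_kg * distance_km / passengers

# Assumed: a 50 km crossing, 1,000 passengers aboard, and a vessel
# emitting 120 kg of CO2 per kilometre sailed.
print(f"{per_passenger_co2_kg(50, 1000, 120.0):.1f} kg CO2 per passenger")  # 6.0
```

The same arithmetic explains why emissions per passenger-kilometre swing so widely between routes: a fast, lightly loaded vessel can emit several times more per passenger than a slower, fuller one.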
Alternative fuels
With the price of oil at high levels, and with increasing pressure from consumers for measures to tackle global warming, a number of innovations for energy and the environment were put forward at the Interferry conference in Stockholm. According to the company Solar Sailor, hybrid marine power and solar wing technology are suitable for use with ferries, private yachts and even tankers.
Alternative fuels are becoming more widespread on ferries. The fastest passenger ferry in the world, operated by Buquebus, runs on LNG, while Sweden's Stena converted one of its ferries to run on both diesel and methanol in 2015. Both LNG and methanol reduce CO2 emissions considerably and replace costly diesel fuel.
Megawatt-class battery-electric ferries operate in Scandinavia, with several more scheduled for operation. As of 2017, the world's biggest purely electric ferry operated on the Helsingør–Helsingborg ferry route across the Øresund between Denmark and Sweden. The ferry weighs 8,414 tonnes and has an electric storage capacity of more than 4 MWh.
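A back-of-the-envelope check shows what a storage capacity of this size means operationally. The per-crossing energy figure and reserve margin below are assumptions for illustration, not data about any particular vessel.

```python
# Rough range estimate for a battery-electric ferry, using the "more than
# 4 MWh" storage figure from the text. Per-crossing energy use and the
# reserve margin are assumed values.

def crossings_per_charge(storage_mwh, energy_per_crossing_mwh, reserve_fraction=0.2):
    """Whole crossings that fit in one charge while keeping a safety reserve."""
    usable_mwh = storage_mwh * (1.0 - reserve_fraction)
    return int(usable_mwh // energy_per_crossing_mwh)

# Assuming ~1 MWh per crossing (hypothetical) and a 20% reserve:
print(crossings_per_charge(4.0, 1.0))  # 3
```

Short, fixed routes with frequent port calls suit battery operation because the vessel can top up its charge at every berth.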
Since 2015, Norwegian ferry company Norled has operated an e-ferry on the Lavik–Oppedal connection on the E39 north of Bergen. Further north on the Norwegian west coast, the connection between Anda and Lote became the world's first route served only by e-ferries. The first of two ships, MF Gloppefjord, was put into service in January 2018, followed by MF Eidsfjord. The owner, Fjord1, has commissioned a further seven battery-powered ferries to be in operation from 2020. A total of 60 battery-powered car ferries are expected to be operational in Norway by 2021.
Since 15 August 2019, Ærø Municipality has operated an electric ferry between the southern Danish ports of Fynshav and Søby, on the island of Ærø. The e-ferry is capable of carrying 30 vehicles and 200 passengers and is powered by a battery described as having "an unprecedented capacity". The vessel can sail seven times further between charges than was previously possible for an e-ferry. It will now need to prove it can provide up to seven return trips per day. The European Union, which supported the project, aims to roll out 100 or more of these ferries by 2030.
A special feature is the Danish Udbyhøj cable ferry in Randers Fjord which has a land-based power supply by means of a retractable submarine cable.
Accidents
The following notable maritime disasters involved ferries:
– (10 April 1968) 53 deaths
MV Namyoung-Ho (15 December 1970) 323–326 deaths
MV George Prince (20 October 1976) 78 deaths
– (6 March 1987) 193 deaths
– (20 December 1987) 4,386 deaths
– (24 October 1988) ≈400 deaths
– (7 April 1990) 159 deaths
– (15 December 1991) 470–850 deaths
– (23 August 1992) 30 deaths
MS Jan Heweliusz – (14 January 1993) 55 deaths
MV Seohae – (10 October 1993) 292 deaths
MS Estonia – (28 September 1994) 852 deaths
– (2 December 1994) 140 deaths
– (21 May 1996) 894 deaths
– (18 September 1998) 150 deaths
– (26 September 2000) 81 deaths
– (26 September 2002) 1,863 deaths
– (21 June 2008) 814 deaths
– (10 September 2011) 1,573 deaths
– (2 February 2012) 88–223 deaths
– (18 July 2012) 150 deaths
– (16 August 2013) 137 deaths
MV Sewol – (16 April 2014) 304 deaths
MV Nyerere – (20 September 2018) 228 deaths
| Technology | Maritime transport | null |
50798 | https://en.wikipedia.org/wiki/Insomnia | Insomnia | Insomnia, also known as sleeplessness, is a sleep disorder where people have trouble sleeping. They may have difficulty falling asleep, or staying asleep for as long as desired. Insomnia is typically followed by daytime sleepiness, low energy, irritability, and a depressed mood. It may result in an increased risk of accidents of all kinds as well as problems focusing and learning. Insomnia can be short term, lasting for days or weeks, or long term, lasting more than a month. The word insomnia can refer to two distinct concepts: insomnia disorder (ID) or insomnia symptoms, and many abstracts of randomized controlled trials and systematic reviews underreport which of the two the word refers to.
Insomnia can occur independently or as a result of another problem. Conditions that can result in insomnia include psychological stress, chronic pain, heart failure, hyperthyroidism, heartburn, restless leg syndrome, menopause, certain medications, and drugs such as caffeine, nicotine, and alcohol. Insomnia is also common in people with ADHD, and children with autism. Other risk factors include working night shifts and sleep apnea. Diagnosis is based on sleep habits and an examination to look for underlying causes. A sleep study may be done to look for underlying sleep disorders. Screening may be done with questions like "Do you experience difficulty sleeping?" or "Do you have difficulty falling or staying asleep?"
Although their efficacy as first line treatments is not unequivocally established, sleep hygiene and lifestyle changes are typically the first treatment for insomnia. Sleep hygiene includes a consistent bedtime, a quiet and dark room, exposure to sunlight during the day and regular exercise. Cognitive behavioral therapy may be added to this. While sleeping pills may help, they are sometimes associated with injuries, dementia, and addiction. These medications are not recommended for more than four or five weeks. The effectiveness and safety of alternative medicine is unclear.
Between 10% and 30% of adults have insomnia at any given point in time and up to half of people have insomnia in a given year. About 6% of people have insomnia that is not due to another problem and lasts for more than a month. People over the age of 65 are affected more often than younger people. Women are more often affected than men. Descriptions of insomnia occur at least as far back as ancient Greece.
Signs and symptoms
Symptoms of insomnia:
Difficulty falling asleep, including difficulty finding a comfortable sleeping position
Waking during the night, being unable to return to sleep and waking up early
Inability to focus on daily tasks and difficulty remembering
Daytime sleepiness, irritability, depression or anxiety
Feeling tired or having low energy during the day
Trouble concentrating
Being irritable, acting aggressive or impulsive
Sleep onset insomnia is difficulty falling asleep at the beginning of the night, often a symptom of anxiety disorders. Delayed sleep phase disorder can be misdiagnosed as insomnia, as sleep onset is delayed to much later than normal while awakening spills over into daylight hours.
It is common for patients who have difficulty falling asleep to also have nocturnal awakenings with difficulty returning to sleep. Two-thirds of these patients wake up in the middle of the night, with more than half having trouble falling back to sleep after a middle-of-the-night awakening.
Early morning awakening is an awakening occurring earlier (more than 30 minutes) than desired with an inability to go back to sleep, and before total sleep time reaches 6.5 hours. Early morning awakening is often a characteristic of depression. Anxiety symptoms may well lead to insomnia. Some of these symptoms include tension, compulsive worrying about the future, feeling overstimulated, and overanalyzing past events.
Poor sleep quality
Poor sleep quality can occur as a result of, for example, restless legs, sleep apnea or major depression. Poor sleep quality is defined as the individual not reaching stage 3 or delta sleep which has restorative properties.
Major depression leads to alterations in the function of the hypothalamic–pituitary–adrenal axis, causing excessive release of cortisol which can lead to poor sleep quality.
Nocturnal polyuria, excessive night-time urination, can also result in a poor quality of sleep.
Subjectivity
Some cases of insomnia are not really insomnia in the traditional sense because people experiencing sleep state misperception often sleep for a normal amount of time. The problem is that, despite sleeping for multiple hours each night and typically not experiencing significant daytime sleepiness or other symptoms of sleep loss, they do not feel like they have slept very much, if at all. Because their perception of their sleep is incomplete, they incorrectly believe it takes them an abnormally long time to fall asleep, and they underestimate how long they stay asleep.
Problematic digital media use
Causes
While insomnia can be caused by a number of conditions, it can also occur without any identifiable cause. This is known as primary insomnia. Primary insomnia may also have an initial identifiable cause but continue after the cause is no longer present. For example, a bout of insomnia may be triggered by a stressful work or life event, yet the condition may continue after the stressful event has been resolved. In such cases, the insomnia is usually perpetuated by the anxiety or fear caused by the sleeplessness itself, rather than by any external factors.
Symptoms of insomnia can be caused by or be associated with:
Sleep breathing disorders, such as sleep apnea or upper airway resistance syndrome
Use of psychoactive drugs (such as stimulants), including certain medications, herbs, caffeine, nicotine, cocaine, amphetamines, methylphenidate, aripiprazole, MDMA, modafinil, or excessive alcohol intake
Use of or withdrawal from alcohol and other sedatives, such as anti-anxiety and sleep drugs like benzodiazepines
Use of or withdrawal from pain-relievers such as opioids
Heart disease
Restless legs syndrome, which can cause sleep onset insomnia due to the discomforting sensations felt and the need to move the legs or other body parts to relieve these sensations
Periodic limb movement disorder (PLMD), which occurs during sleep and can cause arousals of which the sleeper is unaware
Pain: an injury or condition that causes pain can preclude an individual from finding a comfortable position in which to fall asleep, and can also cause awakening.
Hormone shifts such as those that precede menstruation and those during menopause
Life events such as fear, stress, anxiety, emotional or mental tension, work problems, financial stress, birth of a child, and bereavement
Gastrointestinal issues such as heartburn or constipation
Mental, neurobehavioral, or neurodevelopmental disorders such as bipolar disorder, clinical depression, generalized anxiety disorder, post traumatic stress disorder, schizophrenia, obsessive compulsive disorder, autism, dementia, ADHD, and FASD
Disturbances of the circadian rhythm, such as shift work and jet lag, can cause an inability to sleep at some times of the day and excessive sleepiness at other times of the day. Chronic circadian rhythm disorders are characterized by similar symptoms.
Certain neurological disorders such as brain lesions, or a history of traumatic brain injury
Medical conditions such as hyperthyroidism
Abuse of over-the-counter or prescription sleep aids (sedative or depressant drugs) can produce rebound insomnia
Poor sleep hygiene, e.g., noise or over-consumption of caffeine
A rare genetic condition can cause a prion-based, permanent and eventually fatal form of insomnia called fatal familial insomnia
Physical exercise: exercise-induced insomnia is common in athletes in the form of prolonged sleep onset latency
Increased exposure to the blue light from artificial sources, such as phones or computers
Chronic pain
Lower back pain
Asthma
Sleep studies using polysomnography have suggested that people who have sleep disruption have elevated night-time levels of circulating cortisol and adrenocorticotropic hormone. They also have an elevated metabolic rate, which does not occur in people who do not have insomnia but whose sleep is intentionally disrupted during a sleep study. Studies of brain metabolism using positron emission tomography (PET) scans indicate that people with insomnia have higher metabolic rates by night and by day. The question remains whether these changes are the causes or consequences of long-term insomnia.
Genetics
Heritability estimates of insomnia vary from 38% in males to 59% in females. A genome-wide association study (GWAS) identified 3 genomic loci and 7 genes that influence the risk of insomnia, and showed that insomnia is highly polygenic. In particular, a strong positive association was observed for the MEIS1 gene in both males and females. The study showed that the genetic architecture of insomnia strongly overlaps with psychiatric disorders and metabolic traits.
It has been hypothesized that epigenetics might also influence insomnia, through processes controlling both sleep regulation and the brain's stress response, with an impact on brain plasticity as well.
Substance-induced
Alcohol-induced
Alcohol is often used as a form of self-treatment of insomnia to induce sleep. However, alcohol use to induce sleep can be a cause of insomnia. Long-term use of alcohol is associated with a decrease in NREM stage 3 and 4 sleep as well as suppression of REM sleep and REM sleep fragmentation. Frequent moving between sleep stages occurs, with awakenings due to headaches, the need to urinate, dehydration, and excessive sweating. Glutamine rebound also plays a role: while someone is drinking, alcohol inhibits glutamine, one of the body's natural stimulants. When the person stops drinking, the body tries to make up for lost time by producing more glutamine than it needs.
The increase in glutamine levels stimulates the brain while the drinker is trying to sleep, keeping them from reaching the deepest levels of sleep. Stopping chronic alcohol use can also lead to severe insomnia with vivid dreams. During withdrawal, REM sleep is typically exaggerated as part of a rebound effect.
Caffeine
Some people experience sleep disruption or anxiety if they consume caffeine. Doses as low as 100 mg/day, such as a cup of coffee or two to three servings of a caffeinated soft drink, may continue to cause sleep disruption, among other intolerances. Non-regular caffeine users have the least tolerance for caffeine's sleep disruption. Some coffee drinkers develop tolerance to its undesired sleep-disrupting effects, but others apparently do not.
Benzodiazepine-induced
Like alcohol, benzodiazepines, such as alprazolam, clonazepam, lorazepam, and diazepam, are commonly used to treat insomnia in the short term (both prescribed and self-medicated), but worsen sleep in the long term. While benzodiazepines can put people to sleep (i.e., inhibit NREM stage 1 and 2 sleep), the drugs disrupt sleep architecture during sleep: decreasing sleep time, delaying time to REM sleep, and decreasing deep slow-wave sleep (the most restorative part of sleep for both energy and mood).
Opioid-induced
Opioid medications such as hydrocodone, oxycodone, and morphine are used for insomnia that is associated with pain due to their analgesic properties and hypnotic effects. Opioids can fragment sleep and decrease REM and stage 2 sleep. By producing analgesia and sedation, opioids may be appropriate in carefully selected patients with pain-associated insomnia. However, dependence on opioids can lead to long-term sleep disturbances.
Risk factors
Insomnia affects people of all age groups, but people in the following groups have a higher chance of acquiring insomnia:
Individuals older than 60
A history of mental health disorders, including depression
Emotional stress
Working late night shifts
Traveling through different time zones
Having chronic diseases such as diabetes, kidney disease, lung disease, Alzheimer's, or heart disease
Alcohol or drug use disorders
Gastroesophageal reflux disease
Heavy smoking
Work stress
Individuals of low socioeconomic status
Urban neighborhoods
Household stress
Mechanism
Two main models exist as to the mechanism of insomnia, cognitive and physiological. The cognitive model suggests rumination and hyperarousal contribute to preventing a person from falling asleep and might lead to an episode of insomnia.
The physiological model is based upon three major findings in people with insomnia: first, increased urinary cortisol and catecholamines, suggesting increased activity of the HPA axis and arousal; second, increased global cerebral glucose utilization during wakefulness and NREM sleep; and third, increased full-body metabolism and heart rate. Taken together, these findings suggest a dysregulation of the arousal system, cognitive system, and HPA axis, all contributing to insomnia. However, it is unknown whether the hyperarousal is a result or a cause of insomnia. Altered levels of the inhibitory neurotransmitter GABA have been found, but the results have been inconsistent, and the implications of altered levels of such a ubiquitous neurotransmitter are unknown. Studies on whether insomnia is driven by circadian control over sleep or by a wake-dependent process have shown inconsistent results, but some literature suggests a dysregulation of the circadian rhythm based on core temperature. Increased beta activity and decreased delta wave activity have been observed on electroencephalograms; however, the implications of this are unknown.
Around half of post-menopausal women experience sleep disturbances, and in general sleep disturbance is about twice as common in women as in men; this appears to be due in part, but not completely, to changes in hormone levels, especially during and after menopause.
Changes in sex hormones in both men and women as they age may account in part for increased prevalence of sleep disorders in older people.
Diagnosis
In medicine, insomnia is measured using the Athens insomnia scale. It measures eight parameters related to sleep, represented as an overall scale which assesses an individual's sleep quality.
Medical history and a physical examination can identify other conditions that could be the cause of insomnia. A comprehensive sleep history should include sleep habits and sleep environment; medications (prescription and non-prescription, including supplements); alcohol, nicotine, and caffeine intake; and co-morbid illnesses. A sleep diary can be used to track time to bed, total sleep time, time to sleep onset, number of awakenings, use of medications, time of awakening, and subjective feelings in the morning. The sleep diary can be replaced or validated by out-patient actigraphy for a week or more, using a non-invasive device that measures movement.
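One metric commonly derived from such a diary is sleep efficiency: the share of time in bed actually spent asleep. A minimal sketch, with illustrative field names and an invented example night:

```python
# Sleep efficiency from sleep-diary entries. The entries below are a
# made-up example; this is an illustration, not a clinical tool.

def sleep_efficiency_pct(time_in_bed_min, sleep_onset_latency_min,
                         wake_after_sleep_onset_min):
    """Percentage of time in bed actually spent asleep."""
    total_sleep = (time_in_bed_min - sleep_onset_latency_min
                   - wake_after_sleep_onset_min)
    return 100.0 * total_sleep / time_in_bed_min

# Example: 8 h in bed, 45 min to fall asleep, 40 min awake overnight.
print(f"{sleep_efficiency_pct(480, 45, 40):.0f}%")  # 82%
```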
Not everyone who suffers from insomnia should routinely have a polysomnography study to screen for sleep disorders, but it may be indicated for those with risk factors for sleep apnea, including obesity, a thick neck diameter, or fullness of the flesh in the oropharynx. For most people, the test is not needed to make a diagnosis, and insomnia can often be treated by changing their schedule to make time for sufficient sleep and by improving sleep hygiene.
Some patients may need to do an overnight sleep study in a sleep lab. Such a study will commonly involve assessment tools including a polysomnogram and the multiple sleep latency test. Specialists in sleep medicine are qualified to diagnose disorders within the 81 major sleep disorder diagnostic categories of the ICSD. Patients with some disorders, including delayed sleep phase disorder, are often misdiagnosed with primary insomnia; when a person has trouble getting to sleep and awakening at desired times but has a normal sleep pattern once asleep, a circadian rhythm disorder is the likely cause.
In many cases, insomnia is co-morbid with another disease, side-effects from medications, or a psychological problem. Approximately half of all diagnosed insomnia is related to psychiatric disorders. For those who have depression, "insomnia should be regarded as a co-morbid condition, rather than as a secondary one;" insomnia typically predates psychiatric symptoms. "In fact, it is possible that insomnia represents a significant risk for the development of a subsequent psychiatric disorder." Insomnia occurs in between 60% and 80% of people with depression, and can be a side effect from medications that treat depression.
Determination of causation is not necessary for a diagnosis.
DSM-5 criteria
The DSM-5 criteria for insomnia include the following:
"Predominant complaint of dissatisfaction with sleep quantity or quality, associated with one (or more) of the following symptoms:
Difficulty initiating sleep. (In children, this may manifest as difficulty initiating sleep without caregiver intervention.)
Difficulty maintaining sleep, characterized by frequent awakenings or problems returning to sleep after awakenings. (In children, this may manifest as difficulty returning to sleep without caregiver intervention.)
Early-morning awakening with inability to return to sleep.
In addition:
The sleep disturbance causes clinically significant distress or impairment in social, occupational, educational, academic, behavioral, or other important areas of functioning.
The sleep difficulty occurs at least three nights per week.
The sleep difficulty is present for at least three months.
The sleep difficulty occurs despite adequate opportunity for sleep.
The insomnia is not better explained by and does not occur exclusively during the course of another sleep-wake disorder (e.g., narcolepsy, a breathing-related sleep disorder, a circadian rhythm sleep-wake disorder, a parasomnia).
The insomnia is not attributable to the physiological effects of a substance (e.g., a drug of abuse, a medication)."
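Only the quantitative parts of these criteria lend themselves to a mechanical check. The sketch below encodes just the symptom, frequency, and duration thresholds quoted above; the remaining criteria (distress, adequate opportunity, exclusion of other disorders and substances) require clinical judgment and are deliberately omitted.

```python
# Checks only the DSM-5 thresholds quoted above: at least one sleep
# complaint, at least three nights per week, for at least three months.
# Illustrative only; it is not a diagnostic instrument.

def meets_dsm5_thresholds(nights_per_week, duration_months,
                          difficulty_initiating, difficulty_maintaining,
                          early_morning_awakening):
    has_symptom = (difficulty_initiating or difficulty_maintaining
                   or early_morning_awakening)
    return has_symptom and nights_per_week >= 3 and duration_months >= 3

print(meets_dsm5_thresholds(4, 5, True, False, False))  # True
print(meets_dsm5_thresholds(2, 6, True, True, True))    # False
```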
The DSM-IV-TR included insomnia but did not elaborate on the symptoms as fully as the DSM-5. Instead of early-morning waking, the DSM-IV-TR listed "nonrestorative sleep" as a primary symptom. The required duration was also vaguer: the DSM-IV-TR required symptoms to be present for one month, whereas the DSM-5 requires symptoms to be present for three months and to occur at least three nights a week (Gillette).
Types
Insomnia can be classified as transient, acute, or chronic.
Transient insomnia lasts for less than a week. It can be caused by another disorder, by changes in the sleep environment, by the timing of sleep, by severe depression, or by stress. Its consequences – sleepiness and impaired psychomotor performance – are similar to those of sleep deprivation.
Acute insomnia is the inability to consistently sleep well for a period of less than a month. Insomnia is present when there is difficulty initiating or maintaining sleep, or when the sleep that is obtained is non-refreshing or of poor quality. These problems occur despite adequate opportunity and circumstances for sleep, and they must result in problems with daytime function. Hyperarousal can be linked to acute insomnia, since it activates the body's fight-or-flight response. When a person encounters stress or danger, the body naturally becomes more alert, which can interfere with the capacity both to fall asleep and to remain asleep. This heightened state of arousal can be useful in the short term during threatening situations, but if it continues over an extended period, it can result in acute insomnia. Acute insomnia is also known as short-term insomnia or stress-related insomnia.
Chronic insomnia lasts for longer than a month. It can be caused by another disorder, or it can be a primary disorder. Common causes of chronic insomnia include persistent stress, trauma, work schedules, poor sleep habits, medications, and other mental health disorders. When an individual consistently engages in behaviors that disrupt their sleep, such as irregular sleep schedules, spending excessive time awake in bed, or engaging in stimulating activities close to bedtime, it can lead to conditioned wakefulness contributing to chronic insomnia. People with high levels of stress hormones or shifts in the levels of cytokines are more likely than others to have chronic insomnia. Its effects can vary according to its causes. They might include muscular weariness, hallucinations, and/or mental fatigue.
Prevention
Prevention and treatment of insomnia may require a combination of cognitive behavioral therapy, medications, and lifestyle changes.
Among lifestyle practices, going to sleep and waking up at the same time each day can create a steady pattern which may help to prevent insomnia. Avoidance of vigorous exercise and caffeinated drinks a few hours before going to sleep is recommended, while exercise earlier in the day may be beneficial. Other practices to improve sleep hygiene may include:
Avoiding or limiting naps
Treating pain at bedtime
Avoiding large meals, beverages, alcohol, and nicotine before bedtime
Finding soothing ways to relax into sleep, including use of white noise
Making the bedroom suitable for sleep by keeping it dark, cool, and free of devices, such as clocks, cell phones, or televisions
Maintaining regular exercise
Trying relaxing activities before sleeping
Management
It is recommended to rule out medical and psychological causes before deciding on the treatment for insomnia. Cognitive behavioral therapy has been found to be effective for chronic insomnia. The beneficial effects, in contrast to those produced by medications, may last well beyond the stopping of therapy.
Medications have been used mainly to reduce symptoms in insomnia of short duration; their role in the management of chronic insomnia remains unclear. Several different types of medications may be used. Many doctors do not recommend relying on prescription sleeping pills for long-term use. It is also important to identify and treat other medical conditions that may be contributing to insomnia, such as depression, breathing problems, and chronic pain. As of 2022, many people with insomnia were reported as not receiving overall sufficient sleep or treatment for insomnia.
Non-medication based
Non-medication-based strategies have comparable efficacy to hypnotic medication for insomnia, and they may have longer-lasting effects. Hypnotic medication is only recommended for short-term use because dependence (with rebound withdrawal effects upon discontinuation) or tolerance can develop.
Non-medication-based strategies provide long-lasting improvements to insomnia and are recommended as a first-line and long-term strategy of management. Behavioral sleep medicine offers non-medication strategies to address chronic insomnia, including sleep hygiene, stimulus control, behavioral interventions, sleep-restriction therapy, paradoxical intention, patient education, and relaxation therapy. Some examples are keeping a journal, restricting the time spent awake in bed, practicing relaxation techniques, and maintaining a regular sleep schedule and wake-up time. Behavioral therapy can assist a patient in developing new sleep behaviors to improve sleep quality and consolidation. Behavioral therapy may include learning healthy sleep habits to promote sleep relaxation, undergoing light therapy to help with worry-reduction strategies, and regulating the circadian clock.
Music may improve insomnia in adults (see music and sleep). EEG biofeedback has demonstrated effectiveness in the treatment of insomnia with improvements in duration as well as quality of sleep. Self-help therapy (defined as a psychological therapy that can be worked through on one's own) may improve sleep quality for adults with insomnia to a small or moderate degree.
Stimulus control therapy is a treatment for patients who have conditioned themselves to associate the bed, or sleep in general, with a negative response. As stimulus control therapy involves taking steps to control the sleep environment, it is sometimes referred to interchangeably with the concept of sleep hygiene. Examples of such environmental modifications include using the bed for sleep and sex only, not for activities such as reading or watching television; waking up at the same time every morning, including on weekends; going to bed only when sleepy and when there is a high likelihood that sleep will occur; leaving the bed and beginning an activity in another location if sleep does not occur in a reasonably brief period of time after getting into bed (commonly ~20 min); reducing the subjective effort and energy expended trying to fall asleep; avoiding exposure to bright light during night-time hours; and eliminating daytime naps.
A component of stimulus control therapy is sleep restriction, a technique that aims to match the time spent in bed with the actual time spent asleep. This technique involves maintaining a strict sleep-wake schedule, sleeping only at certain times of the day and for specific amounts of time to induce mild sleep deprivation. Complete treatment usually lasts up to three weeks and involves restricting time in bed to the minimum amount of sleep a patient actually achieves on average, and then, when sleep efficiency improves, slowly increasing this amount (by roughly 15 minutes) by going to bed earlier as the body attempts to reset its internal sleep clock. Bright light therapy may be effective for insomnia.
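A minimal sketch of that titration rule, assuming the commonly cited 85% sleep-efficiency threshold and a five-hour floor on prescribed time in bed (both clinical heuristics, not figures from the text):

```python
# Sleep-restriction titration: prescribe only the sleep actually achieved,
# then extend time in bed by ~15 minutes whenever sleep consolidates.
# The 85% threshold and 300-minute floor are assumed heuristics.

def next_time_in_bed(avg_sleep_min, current_tib_min, efficiency_pct,
                     step_min=15, floor_min=300):
    if current_tib_min is None:
        # First prescription: match time in bed to average actual sleep.
        return max(avg_sleep_min, floor_min)
    if efficiency_pct >= 85:
        # Sleep has consolidated: allow ~15 min more, via an earlier bedtime.
        return current_tib_min + step_min
    return current_tib_min  # hold steady until efficiency improves

tib = next_time_in_bed(avg_sleep_min=330, current_tib_min=None, efficiency_pct=0)
print(tib)                             # 330 (5.5 h prescribed)
print(next_time_in_bed(330, tib, 88))  # 345 after a consolidated week
```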
Paradoxical intention is a cognitive reframing technique where the insomniac, instead of attempting to fall asleep at night, makes every effort to stay awake (i.e. essentially stops trying to fall asleep). One theory that may explain the effectiveness of this method is that by not voluntarily making oneself go to sleep, it relieves the performance anxiety that arises from the need or requirement to fall asleep, which is meant to be a passive act. This technique has been shown to reduce sleep effort and performance anxiety and also lower subjective assessment of sleep-onset latency and overestimation of the sleep deficit (a quality found in many insomniacs).
Sleep hygiene
Sleep hygiene is a common term for all of the behaviors which relate to the promotion of good sleep. They include habits which provide a good foundation for sleep and help to prevent insomnia. However, sleep hygiene alone may not be adequate to address chronic insomnia. Sleep hygiene recommendations are typically included as one component of cognitive behavioral therapy for insomnia (CBT-I). Recommendations include reducing caffeine, nicotine, and alcohol consumption, maximizing the regularity and efficiency of sleep episodes, minimizing medication usage and daytime napping, the promotion of regular exercise, and the facilitation of a positive sleep environment. The creation of a positive sleep environment may also be helpful in reducing the symptoms of insomnia.
On the other hand, a systematic review by the AASM concluded that clinicians should not prescribe sleep hygiene for insomnia, due to evidence of the absence of its efficacy and its potential to delay adequate treatment, recommending instead that effective therapies such as CBT-I be preferred.
Cognitive behavioral therapy
There is some evidence that cognitive behavioral therapy for insomnia (CBT-I) is superior in the long-term to benzodiazepines and the nonbenzodiazepines in the treatment and management of insomnia. In this therapy, patients are taught improved sleep habits and relieved of counter-productive assumptions about sleep. Common misconceptions and expectations that can be modified include:
Unrealistic sleep expectations.
Misconceptions about insomnia causes.
Amplifying the consequences of insomnia.
Performance anxiety after trying for so long to have a good night's sleep by controlling the sleep process.
Numerous studies have reported positive outcomes from combining cognitive behavioral therapy for insomnia with treatments such as stimulus control and relaxation therapies. Hypnotic medications are equally effective in the short-term treatment of insomnia, but their effects wear off over time due to tolerance. The effects of CBT-I are sustained and lasting long after therapy has been discontinued. The addition of hypnotic medications to CBT-I adds no benefit in insomnia. The long-lasting benefits of a course of CBT-I show its superiority over pharmacological hypnotic drugs. Even in the short term, when compared to short-term hypnotic medication such as zolpidem, CBT-I still shows significant superiority. Thus CBT-I is recommended as a first-line treatment for insomnia.
Common forms of CBT-I treatments include stimulus control therapy, sleep restriction, sleep hygiene, improved sleeping environments, relaxation training, paradoxical intention, and biofeedback.
CBT is a well-accepted form of therapy for insomnia since it has no known adverse effects, whereas taking medications to alleviate insomnia symptoms has been shown to have adverse side effects. Nevertheless, the downside of CBT is that it may take a lot of time and motivation.
Acceptance and commitment therapy
Treatments based on the principles of acceptance and commitment therapy (ACT) and metacognition have emerged as alternative approaches to treating insomnia. ACT rejects the idea that behavioral changes can help insomniacs achieve better sleep, since they require "sleep efforts" – actions which create more "struggle" and arouse the nervous system, leading to hyperarousal. The ACT approach posits that acceptance of the negative feelings associated with insomnia can, in time, create the right conditions for sleep. Mindfulness practice is a key feature of this approach, although mindfulness is not practised to induce sleep (this in itself is a sleep effort to be avoided) but rather as a longer-term activity to help calm the nervous system and create the internal conditions from which sleep can emerge.
A key distinction between CBT-I and ACT lies in their divergent approaches to time spent awake in bed. Proponents of CBT-I advocate minimizing time spent awake in bed, on the basis that lying awake in bed creates a cognitive association between the bed and wakefulness. The ACT approach proposes that avoiding time in bed may increase the pressure to sleep and arouse the nervous system further.
Research has shown that "ACT has a significant effect on primary and comorbid insomnia and sleep quality, and ... can be used as an appropriate treatment method to control and improve insomnia".
Internet interventions
Despite the therapeutic effectiveness and proven success of CBT, treatment availability is significantly limited by a lack of trained clinicians, poor geographical distribution of knowledgeable professionals, and expense. One way to potentially overcome these barriers is to use the Internet to deliver treatment, making this effective intervention more accessible and less costly. The Internet has already become a critical source of health-care and medical information. Although the vast majority of health websites provide general information, there is growing research literature on the development and evaluation of Internet interventions.
These online programs are typically behaviorally-based treatments that have been operationalized and transformed for delivery via the Internet. They are usually highly structured; automated or human supported; based on effective face-to-face treatment; personalized to the user; interactive; enhanced by graphics, animations, audio, and possibly video; and tailored to provide follow-up and feedback.
There is good evidence for the use of computer-based CBT for insomnia.
Medications
Many people with insomnia use sleeping tablets and other sedatives. In some places, medications are prescribed in over 95% of cases. However, they are a second-line treatment. In 2019, the US Food and Drug Administration stated that it would require warnings for eszopiclone, zaleplon, and zolpidem, due to concerns about serious injuries resulting from abnormal sleep behaviors, including sleepwalking or driving a vehicle while asleep.
The percentage of adults using a prescription sleep aid increases with age. During 2005–2010, about 4% of U.S. adults aged 20 and over reported that they took prescription sleep aids in the past 30 days. Rates of use were lowest among the youngest age group (those aged 20–39) at about 2%, increased to 6% among those aged 50–59, and reached 7% among those aged 80 and over. More adult women (5%) reported using prescription sleep aids than adult men (3%). Non-Hispanic white adults reported higher use of sleep aids (5%) than non-Hispanic black (3%) and Mexican-American (2%) adults. No difference was shown between non-Hispanic black adults and Mexican-American adults in use of prescription sleep aids.
Antihistamines
As an alternative to taking prescription drugs, some evidence shows that an average person seeking short-term help may find relief by taking over-the-counter antihistamines such as diphenhydramine or doxylamine. Diphenhydramine and doxylamine are widely used in nonprescription sleep aids. They are the most effective over-the-counter sedatives currently available, at least in much of Europe, Canada, Australia, and the United States, and are more sedating than some prescription hypnotics. Antihistamine effectiveness for sleep may decrease over time, and anticholinergic side-effects (such as dry mouth) may also be a drawback with these particular drugs. While addiction does not seem to be an issue with this class of drugs, they can induce dependence and rebound effects upon abrupt cessation of use. However, people whose insomnia is caused by restless legs syndrome may have worsened symptoms with antihistamines.
Antidepressants
While insomnia is a common symptom of depression, antidepressants are effective for treating sleep problems whether or not they are associated with depression. While all antidepressants help regulate sleep, some antidepressants, such as amitriptyline, doxepin, mirtazapine, trazodone, and trimipramine, can have an immediate sedative effect and are prescribed to treat insomnia. At the beginning of the 2020s, trazodone was the most-prescribed drug for sleep in the United States, despite not being indicated for sleep.
Amitriptyline, doxepin, and trimipramine all have antihistaminergic, anticholinergic, antiadrenergic, and antiserotonergic properties, which contribute to both their therapeutic effects and side effect profiles, while mirtazapine's actions are primarily antihistaminergic and antiserotonergic and trazodone's effects are primarily antiadrenergic and antiserotonergic. Mirtazapine is known to decrease sleep latency (i.e., the time it takes to fall asleep), promoting sleep efficiency and increasing the total amount of sleeping time in people with both depression and insomnia.
Agomelatine, a melatonergic antidepressant with claimed sleep-improving qualities that does not cause daytime drowsiness, is approved for the treatment of depression though not sleep conditions in the European Union and Australia. After trials in the United States, its development for use there was discontinued in October 2011 by Novartis, who had bought the rights to market it there from the European pharmaceutical company Servier.
A 2018 Cochrane review found the safety of taking antidepressants for insomnia to be uncertain with no evidence supporting long term use.
Melatonin agonists
Melatonin receptor agonists such as melatonin and ramelteon are used in the treatment of insomnia. The evidence for melatonin in treating insomnia is generally poor. There is low-quality evidence that it may speed the onset of sleep by six minutes. Ramelteon does not appear to speed the onset of sleep or increase the amount of sleep a person gets.
Usage of melatonin as a treatment for insomnia in adults has increased from 0.4% between 1999 and 2000 to nearly 2.1% between 2017 and 2018.
While short-term use of melatonin has been shown to be generally safe and non-habit-forming, side effects can still occur.
Most common side effects of melatonin include:
Headache
Dizziness
Nausea
Daytime drowsiness
Prolonged-release melatonin may improve quality of sleep in older people with minimal side effects.
Studies have also shown that children who are on the autism spectrum or have learning disabilities, attention-deficit hyperactivity disorder (ADHD), or related neurological diseases can benefit from the use of melatonin. This is because they often have trouble sleeping due to their disorders. For example, children with ADHD tend to have trouble falling asleep because of their hyperactivity and, as a result, tend to be tired during most of the day. Another cause of insomnia in children with ADHD is the use of stimulants to treat their disorder. Children who have ADHD, as well as the other disorders mentioned, may then be given melatonin before bedtime to help them sleep.
Benzodiazepines
The most commonly used class of hypnotics for insomnia are the benzodiazepines. Benzodiazepines are not significantly better for insomnia than antidepressants. Chronic users of hypnotic medications for insomnia do not have better sleep than chronic insomniacs not taking medications; in fact, chronic users of hypnotic medications have more regular night-time awakenings than insomniacs not taking hypnotic medications. Many have concluded that these drugs cause an unjustifiable risk to the individual and to public health and lack evidence of long-term effectiveness. It is preferred that hypnotics be prescribed for only a few days at the lowest effective dose and avoided altogether wherever possible, especially in the elderly. Between 1993 and 2010, the prescribing of benzodiazepines to individuals with sleep disorders decreased from 24% to 11% in the US, coinciding with the first release of nonbenzodiazepines.
The benzodiazepine and nonbenzodiazepine hypnotic medications also have a number of side effects, such as daytime fatigue, motor vehicle crashes and other accidents, cognitive impairments, and falls and fractures. Elderly people are more sensitive to these side effects. Some benzodiazepines have demonstrated effectiveness in sleep maintenance in the short term, but in the longer term benzodiazepines can lead to tolerance, physical dependence, benzodiazepine withdrawal syndrome upon discontinuation, and long-term worsening of sleep, especially after consistent usage over long periods of time. Benzodiazepines, while inducing unconsciousness, actually worsen sleep as – like alcohol – they promote light sleep while decreasing time spent in deep sleep. A further problem is that, with regular use of short-acting sleep aids for insomnia, daytime rebound anxiety can emerge. Although there is little evidence for the benefit of benzodiazepines in insomnia compared to other treatments, and evidence of major harm, prescriptions have continued to increase. This is likely due to their addictive nature, both from misuse and because – through their rapid action, tolerance, and withdrawal – they can "trick" insomniacs into thinking they are helping with sleep. There is a general awareness that long-term use of benzodiazepines for insomnia in most people is inappropriate, and that gradual withdrawal is usually beneficial and recommended whenever possible, given the adverse effects associated with their long-term use.
Benzodiazepines all bind unselectively to the GABAA receptor. Some theorize that certain benzodiazepines (hypnotic benzodiazepines) have significantly higher activity at the α1 subunit of the GABAA receptor compared to other benzodiazepines (for example, triazolam and temazepam have significantly higher activity at the α1 subunit compared to alprazolam and diazepam, making them superior sedative-hypnotics – alprazolam and diazepam, in turn, have higher activity at the α2 subunit compared to triazolam and temazepam, making them superior anxiolytic agents). Modulation of the α1 subunit is associated with sedation, motor impairment, respiratory depression, amnesia, ataxia, and reinforcing behavior (drug-seeking behavior). Modulation of the α2 subunit is associated with anxiolytic activity and disinhibition. For this reason, certain benzodiazepines may be better suited to treat insomnia than others.
Z-Drugs
Nonbenzodiazepine or Z-drug sedative–hypnotic drugs, such as zolpidem, zaleplon, zopiclone, and eszopiclone, are a class of hypnotic medications that are similar to benzodiazepines in their mechanism of action, and indicated for mild to moderate insomnia. Their effectiveness at improving time to sleeping is slight, and they have similar—though potentially less severe—side effect profiles compared to benzodiazepines. Prescribing of nonbenzodiazepines has seen a general increase since their initial release on the US market in 1992, from 2.3% in 1993 among individuals with sleep disorders to 13.7% in 2010.
Orexin antagonists
Orexin receptor antagonists are a more recently introduced class of sleep medications and include suvorexant, lemborexant, and daridorexant, all of which are FDA-approved for treatment of insomnia characterized by difficulties with sleep onset and/or sleep maintenance. They block signals in the brain that stimulate wakefulness, and are therefore claimed to address insomnia without creating dependence. There are three dual orexin receptor antagonist (DORA) drugs on the market: Belsomra (Merck), Dayvigo (Eisai), and Quviviq (Idorsia).
Antipsychotics
Certain atypical antipsychotics, particularly quetiapine, olanzapine, and risperidone, are used in the treatment of insomnia. However, while common, use of antipsychotics for this indication is not recommended, as the evidence does not demonstrate a benefit and the risk of adverse effects is significant. A major 2022 systematic review and network meta-analysis of medications for insomnia in adults found that quetiapine did not demonstrate any short-term benefits for insomnia. Some of the more serious adverse effects may also occur at the low doses used, such as dyslipidemia and neutropenia. Such concerns about risks at low doses are supported by Danish observational studies that showed an association of use of low-dose quetiapine (excluding prescriptions filled for tablet strengths >50 mg) with an increased risk of major cardiovascular events as compared to use of Z-drugs, with most of the risk being driven by cardiovascular death. Laboratory data from an unpublished analysis of the same cohort also support the lack of dose-dependency of metabolic side effects, as new use of low-dose quetiapine was associated with a risk of increased fasting triglycerides at one-year follow-up. Concerns regarding side effects are greater in the elderly.
Other sedatives
Gabapentinoids like gabapentin and pregabalin have sleep-promoting effects but are not commonly used for the treatment of insomnia. Gabapentin is not effective in treating alcohol-related insomnia.
Barbiturates, while once used, are no longer recommended for insomnia due to the risk of addiction and other side effects.
Comparative effectiveness
Medications for the treatment of insomnia have a wide range of effect sizes. When comparing drugs such as benzodiazepines, Z-drugs, sedative antidepressants and antihistamines, quetiapine, orexin receptor antagonists, and melatonin receptor agonists, the orexin antagonist lemborexant and the Z-drug eszopiclone had the best profiles overall in terms of efficacy, tolerability, and acceptability.
Alternative medicine
Herbal products, such as valerian, kava, chamomile, and lavender, have been used to treat insomnia. However, there is no quality evidence that they are effective and safe. The same is true for cannabis and cannabinoids. It is likewise unclear whether acupuncture is useful in the treatment of insomnia.
Prognosis
A survey of 1.1 million residents in the United States found that those who reported sleeping about 7 hours per night had the lowest rates of mortality, whereas those who slept fewer than 6 hours or more than 8 hours had higher mortality rates. Severe insomnia – sleeping less than 3.5 hours in women and 4.5 hours in men – was associated with a 15% increase in mortality, while getting 8.5 or more hours of sleep per night was likewise associated with a 15% higher mortality rate.
With survey data of this kind, it is difficult to distinguish a lack of sleep caused by a disorder that also causes premature death from a disorder that causes a lack of sleep, with the lack of sleep in turn causing premature death. Most of the increase in mortality from severe insomnia was discounted after controlling for associated disorders. After controlling for sleep duration and insomnia, use of sleeping pills was also found to be associated with an increased mortality rate.
The lowest mortality was seen in individuals who slept between six and a half and seven and a half hours per night. Even sleeping only 4.5 hours per night is associated with very little increase in mortality. Thus, mild to moderate insomnia for most people is associated with increased longevity and severe insomnia is associated only with a very small effect on mortality. It is unclear why sleeping longer than 7.5 hours is associated with excess mortality.
Epidemiology
Between 10% and 30% of adults have insomnia at any given point in time, and up to half of people have insomnia in a given year, making it the most common sleep disorder. About 6% of people have insomnia that is not due to another problem and lasts for more than a month. People over the age of 65 are affected more often than younger people, and women more often than men; insomnia is 40% more common in women than in men.
There are higher rates of insomnia reported among university students compared to the general population.
Society and culture
The word insomnia is from Latin: in ("not") + somnus ("sleep"), i.e. "without sleep", with -ia as a nominalizing suffix.
The popular press have published stories about people who supposedly never sleep, such as that of Thái Ngọc and Al Herpin. Horne writes "everybody sleeps and needs to do so", and generally this appears true. However, he also relates from contemporary accounts the case of Paul Kern, who was shot in 1915 fighting in World War I and then "never slept again" until his death in 1955. Kern appears to be a completely isolated, unique case.
| Biology and health sciences | Mental disorders | Health |
50838 | https://en.wikipedia.org/wiki/Manta%20ray | Manta ray | Manta rays are large rays belonging to the genus Mobula (formerly its own genus Manta). The larger of the two species is M. birostris, and the smaller is M. alfredi. Both have triangular pectoral fins, horn-shaped cephalic fins and large, forward-facing mouths. They are classified among the Myliobatiformes (stingrays and relatives) and are placed in the family Myliobatidae (eagle rays). They have the largest brain-to-body ratio of all fish, and can pass the mirror test.
Mantas are found in warm temperate, subtropical and tropical waters. Both species are pelagic; M. birostris migrates across open oceans, singly or in groups, while M. alfredi tends to be resident and coastal. They are filter feeders and eat large quantities of zooplankton, which they gather with their open mouths as they swim. However, research suggests that the majority of their diet (73%) comes from mesopelagic sources. Gestation lasts over a year and mantas give birth to live pups. Mantas may visit cleaning stations for the removal of parasites. Like whales, they breach for unknown reasons.
Both species are listed as vulnerable by the International Union for Conservation of Nature. Anthropogenic threats include pollution, entanglement in fishing nets, and direct harvesting of their gill rakers for use in Chinese medicine. Manta rays are particularly valued for their gill plates, which are traded internationally. Their slow reproductive rate exacerbates these threats. They are protected in international waters by the Convention on Migratory Species of Wild Animals, but are more vulnerable closer to shore. Areas where mantas congregate are popular with tourists. Only a few public aquariums are large enough to house them.
Etymology
The name "manta" is Portuguese and Spanish for mantle (cloak or blanket), a type of blanket-shaped trap traditionally used to catch rays. Mantas are known as "devilfish" because of their horn-shaped cephalic fins, which are imagined to give them an "evil" appearance.
Taxonomy
Manta rays are members of the order Myliobatiformes, which consists of stingrays and their relatives. The genus Manta is part of the eagle ray family Myliobatidae, where it is grouped in the subfamily Mobulinae along with the smaller Mobula devil rays. In 2018, an analysis of DNA and, to a lesser degree, morphology found that Mobula was paraphyletic with respect to the manta rays; that is, some members of the genus Mobula are more closely related to members of the genus Manta than they are to fellow Mobula species, and the researchers recommended treating Manta as a junior synonym of Mobula.
Mantas evolved from bottom-dwelling stingrays, eventually developing more wing-like pectoral fins; M. birostris still has a vestigial remnant of a sting barb in the form of a caudal spine. The mouths of most rays lie on the underside of the head, while in mantas they are right at the front. In mantas the edges of the jaws line up, while in devil rays the lower jaw shifts back when the mouth closes. Manta rays and devil rays are the only ray species that have evolved into filter feeders. Manta rays have dorsal slit-like spiracles, a trait which they share with the devil fish and the Chilean devil ray.
Species
The scientific naming of mantas has had a convoluted history, during which several names were used for both the genus (Ceratoptera, Brachioptilon, Daemomanta, and Diabolicthys) and species (such as vampyrus, americana, johnii, and hamiltoni). All were eventually treated as synonyms of the single species Manta birostris. The genus name Manta was first published in 1829 by Dr Edward Nathaniel Bancroft of Jamaica. The specific name birostris is ascribed to Johann Julius Walbaum (1792) by some authorities and to Johann August Donndorff (1798) by others. The specific name alfredi was first used by Australian zoologist Gerard Krefft, who named the manta after Prince Alfred.
A 2009 study analyzed the differences in morphology, including color, meristic variation, spine, dermal denticles (tooth-like scales), and teeth of different populations. Two distinct species emerged: the smaller M. alfredi found in the Indo-Pacific and tropical East Atlantic, and the larger M. birostris found throughout tropical, subtropical and warm temperate oceans. The former is more coastal, while the latter is more ocean-going and migratory. A 2010 study on mantas around Japan confirmed the morphological and genetic differences between M. birostris and M. alfredi.
A third possible species, preliminarily called Manta sp. cf. birostris, reaches at least in width, and inhabits the tropical West Atlantic, including the Caribbean.
Fossil record
While some small teeth have been found, few fossilized skeletons of manta rays have been discovered. Their cartilaginous skeletons do not preserve well, as they lack the calcification of bony fish. Only three sedimentary beds bearing manta ray fossils are known: one from the Oligocene in South Carolina and two from the Miocene and Pliocene in North Carolina. M. hynei is a fossil species dating to the Early Pliocene of North America. Remains of an extinct species have been found in the Chandler Bridge Formation of South Carolina. These were originally described as Manta fragilis but were later reclassified as Paramobula fragilis.
Biology
Characteristics
Manta rays have broad heads, triangular pectoral fins, and horn-shaped cephalic fins located on both sides of their mouths. They have horizontally flattened bodies with eyes on the sides of their heads behind the cephalic fins, and gill slits on their ventral surfaces. Their tails lack skeletal support and are shorter than their disc-like bodies. The dorsal fins are small and at the base of the tail. Mantas can reach . In both species, the width is about 2.2 times the length of the body; M. birostris reaches at least in width, while M. alfredi reaches about . Their skin is covered in mucus. Mantas normally have a "chevron" coloration. They are typically black or dark on top with pale markings on their "shoulders". Underneath, they are usually white or pale with distinctive dark markings by which individual mantas can be recognized, as well as some shading. Individuals can also vary from mostly black (melanism) to mostly white (leucism). These color morphs appear to be products of neutral mutations and have no effects on fitness. A pink manta ray has been observed in Australia's Great Barrier Reef and scientists believe this could be due to a genetic mutation causing erythrism. The fish, spotted near Lady Elliot Island, is the world's only known pink manta ray.
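As a rough illustration of the stated width-to-length proportion (the article's absolute measurements were lost in extraction, so the body length below is a purely hypothetical figure, not a value from the source):

$$W \approx 2.2\,L \quad\Longrightarrow\quad L = 2\ \text{m (hypothetical)} \;\Rightarrow\; W \approx 4.4\ \text{m}$$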
The two species of manta differ in color patterns, dermal denticles, and dentition. M. birostris has more angular shoulder markings, ventral dark spots on the abdominal region, charcoal-colored ventral outlines on the pectoral fins, and a dark-colored mouth. The shoulder markings of M. alfredi are more rounded, while its ventral spots are located near the posterior end and between the gill slits, and its mouth is white or pale-colored. The denticles have multiple cusps and overlap in M. birostris, while those of M. alfredi are evenly spaced and lack cusps. Both species have small, square-shaped teeth on the lower jaw, but M. birostris also has enlarged teeth on the upper jaw. Unlike M. alfredi, M. birostris has a caudal spine near its dorsal fin.
Mantas move through the water by the wing-like movements of their pectoral fins. Their large mouths are rectangular and face forward. The spiracles typical of rays are vestigial and concealed by small flaps of skin, so mantas must keep swimming with their mouths open to keep oxygenated water passing over their gills. The cephalic fins are usually spiraled but flatten during foraging. The gill arches bear pinkish-brown gill rakers, made of spongy tissue, that collect food particles. Mantas track down prey using visual and olfactory senses. They have one of the highest brain-to-body mass ratios and the largest brain size of all fish. Their brains have retia mirabilia, which may serve to keep them warm. M. alfredi has been shown to dive to depths over , while the Chilean devil ray, which has a similar structure, dives to nearly .
Lifecycle
Mating takes place at different times of the year in different parts of the manta's range. Courtship is difficult to observe in this fast-swimming fish, although mating "trains" with multiple individuals swimming closely behind each other are sometimes seen in shallow water. The mating sequence may be triggered by a full moon and seems to be initiated by a male following closely behind a female while she travels at around . He makes repeated efforts to grasp her pectoral fin with his mouth, which may take 20 to 30 minutes. Once he has a tight grip, he turns upside-down and presses his ventral side against hers. He then inserts one of his claspers into her cloaca, where it remains for 60–90 seconds. The claspers form a tube and a siphon propels semen from the genital papilla into the oviduct. The male continues to grip the female's pectoral fin with his teeth for a further few minutes as both continue to swim, often followed by up to 20 other males. The pair then parts, the female being left with scars on her fin.
The fertilized eggs develop within the female's oviduct. At first, they are enclosed in an egg case while the developing embryos absorb the yolk. After hatching, the pups remain in the oviduct and receive additional nutrition from milky secretions called histotroph. With no umbilical cord or placenta, the unborn pup relies on buccal pumping to obtain oxygen. Brood size is usually one, or occasionally two. The gestation period is thought to be 12–13 months. When fully developed, the pup resembles a miniature adult and is expelled from the oviduct with no further parental care. In wild populations, an interval of two years between births may be normal, but a few individuals become pregnant in consecutive years, demonstrating an annual ovulatory cycle. The Okinawa Churaumi Aquarium has had some success in breeding M. alfredi, with one female giving birth in three successive years. In one of these pregnancies, the gestation period was 372 days, and at birth the pup had a width of and weight of . In Indonesia, M. birostris males appear to mature at , while females mature around . In the Maldives, males of M. alfredi mature at a width of , while females mature at . In Hawaii, M. alfredi matures at a width of for males and for females. Female mantas appear to mature at 8–10 years. Manta rays may live as long as 50 years.
Behavior and ecology
Swimming behavior in mantas differs across habitats: when travelling over deep water, they swim at a constant rate in a straight line, while further inshore they usually bask or swim idly around. Mantas may travel alone or in groups of up to 50. They may associate with other fish species, as well as sea birds and marine mammals. Mantas sometimes breach, leaping out of the water. Individuals in a group may make aerial jumps in succession. Mantas may leap forward and re-enter head first, tail first, or in a somersault. The reason for breaching is not known; possible explanations include communication, or the removal of parasites and remoras (suckerfish).
Mantas visit cleaning stations on coral reefs for the removal of external parasites. The ray adopts a near-stationary position close to the coral surface for several minutes while the cleaner fish feed. Such visits most frequently occur when the tide is high. Individual mantas may exhibit philopatry, revisiting the same cleaning station or feeding area repeatedly, and appear to have cognitive maps of their environment. Reef manta rays have also been shown to form bonds with specific individuals and to act together.
Mantas may be preyed upon by large sharks, orcas, and false killer whales. They may also harbor parasitic copepods. Mantas can rid themselves of internal parasites by sticking their intestines up to out of the cloaca and squeezing them out, often while defecating. Remoras attach themselves to mantas for transportation and may shelter in a manta's mouth. Though remoras may clean mantas of parasites, they can also damage the gills and skin and increase the swimming load.
In 2016, scientists published a study in which manta rays were shown to exhibit behavior associated with self-awareness. In a modified mirror test, the individuals engaged in contingency checking and unusual self-directed behavior.
Feeding
Manta rays are filter feeders as well as macropredators. At the surface, they consume large quantities of zooplankton in the form of shrimp, krill, and planktonic crabs. At greater depths, mantas consume small to medium-sized fish. Foraging mantas flatten their cephalic fins to channel food into their mouths. During filter feeding, small particles are collected by the tissue between the gill arches. The standard method of feeding for a lone manta is simply swimming horizontally and turning 180 degrees to feed in the other direction. Up-and-down movements, sideways tilting, and 360-degree somersaults are also observed.
Mantas engage in a number of group feeding behaviors. An individual may "piggy-back" on a larger, horizontally feeding individual, placing itself over its back. "Chain-feeding" involves mantas aligning back-to-front and swimming horizontally. Chain-feeding mantas may create a circle, with the lead individual meeting up with the stragglers. More individuals may join, creating a "cyclone" of mantas spiraling upwards. With a diameter of , these cyclones consist of up to 150 mantas and last up to an hour. Studies have shown that around 27% of the diet of M. birostris comes from the surface, while around 73% comes from deeper water. Mantas may forage on the ocean floor with the cephalic fins splayed apart.
During filter feeding, the gills may get clogged up, forcing mantas to cough and create a cloud of gill waste. The rays commonly do this above cleaning stations, providing a feast for the cleaner fish. Mantas defecate dark red fecal matter which is often mistaken for blood.
Distribution and habitat
Mantas are found in tropical and subtropical waters in all the world's major oceans, and also venture into temperate seas. The furthest from the equator they have been recorded is North Carolina in the United States (31°N) and the North Island of New Zealand (36°S). They prefer water temperatures above , and M. alfredi is predominantly found in tropical areas. Both species are pelagic. M. birostris lives mostly in the open ocean, travelling with the currents and migrating to areas where upwellings of nutrient-rich water increase prey concentrations.
Individuals fitted with radio transmitters have traveled as far as from where they were caught and have descended to depths of at least . M. alfredi is a more resident and coastal species. Seasonal migrations do occur, but they are shorter than those of M. birostris. Mantas are common around coasts from spring to fall but travel further offshore during the winter. They keep close to the surface and in shallow water in the daytime, while at night they swim at greater depths.
Conservation issues
Threats
The greatest threat to manta rays is overfishing. M. birostris is not evenly distributed over the oceans, but is concentrated in areas that provide the food resources it requires, while M. alfredi is even more localized. Their distributions are thus fragmented, with little evidence of intermingling of subpopulations. Because of their long lifespans and low reproductive rate, overfishing can severely reduce local populations with little likelihood that individuals from elsewhere will replace them.
Both commercial and artisanal fisheries have targeted mantas for their meat and products. They are typically caught with nets, trawls, and harpoons. Mantas were once captured by fisheries in California and Australia for their liver oil and for their skin, which was made into abrasives. Their flesh is edible and is consumed in some countries, but it is unattractive compared with that of other fish. Their gill rakers, the cartilaginous structures protecting the gills, have recently entered use in Chinese medicine. To fill the growing demand in Asia for gill rakers, targeted fisheries have developed in the Philippines, Indonesia, Mozambique, Madagascar, India, Pakistan, Sri Lanka, Brazil, and Tanzania. Each year, thousands of manta rays, primarily M. birostris, are caught and killed purely for their gill rakers. A fisheries study in Sri Lanka and India estimated that over 1,000 were being sold in the two countries' fish markets each year. By comparison, M. birostris populations at most of the key aggregation sites around the world are estimated to number significantly fewer than 1,000 individuals. Targeted fisheries in the Gulf of California, the west coast of Mexico, India, Sri Lanka, Indonesia, and the Philippines have reduced manta populations in these areas dramatically.
Manta rays are subject to other human impacts. Because mantas must swim constantly to flush oxygen-rich water over their gills, they are vulnerable to entanglement and subsequent suffocation. Mantas cannot swim backwards and, because of their protruding cephalic fins, are prone to entanglement in fishing lines, nets, ghost nets, and even loose mooring lines. When snared, mantas often attempt to free themselves by somersaulting, tangling themselves further. Loose, trailing line can wrap around a manta and cut its way into the flesh, resulting in irreversible injury. Similarly, mantas become bycatch when entangled in gill nets designed for smaller fish. Some mantas are injured by collisions with boats, especially in areas where they congregate and are easily observed. Other threats or factors that may affect manta numbers include climate change, tourism, pollution from oil spills, and the ingestion of microplastics.
Status
The IUCN listed the reef manta as vulnerable in 2019 and the giant manta as endangered in 2020. In 2011, mantas became strictly protected in international waters through their inclusion in the Convention on the Conservation of Migratory Species of Wild Animals (CMS), an international treaty concerned with conserving migratory species and their habitats on a global scale. Although individual nations were already protecting manta rays, the fish often migrate through unregulated waters, putting them at increased risk from overfishing. The Manta Trust is a UK-based charity dedicated to research and conservation efforts for manta rays; the organization's website is also an information resource for manta conservation and biology.
In 2009, Hawaii became the first state in the United States to ban the killing or capturing of manta rays. No fishery for mantas previously existed in the state, but migratory individuals that pass the islands are now protected. In 2010, Ecuador introduced a law prohibiting all fishing for manta and other rays, their retention as bycatch, and their sale.
Relation with humans
The ancient Peruvian Moche people worshipped the sea and its animals, and their art often depicts manta rays. Historically, mantas were feared for their size and power; sailors believed that they were dangerous to humans and could pull ships out to sea by their anchors. This attitude changed around 1976, when divers around the Gulf of California found them to be placid and safe to interact with. Several divers photographed themselves with mantas, including Jaws author Peter Benchley.
Aquariums
The Okinawa Ocean Expo Aquarium acquired mantas in 1978, but they survived for only four days. A male manta ray that entered captivity in 1992 at that aquarium, the predecessor of the Okinawa Churaumi Aquarium, was recorded as living for approximately 23 years. The Okinawa Churaumi Aquarium houses manta rays in the "Kuroshio Sea" tank, one of the largest aquarium tanks in the world. The first manta ray birth in captivity took place there in 2007. Although this pup did not survive, the aquarium has since seen four more manta ray births, in 2008, 2009, 2010, and 2011. A pregnancy in 2012 ended in a stillbirth, and in 2013 a pregnant female died; the pup recovered from her body also died. In August 2024, an all-black female kept in the Kuroshio tank gave birth; the pup was born black all over like its mother, wide, and weighed .
Three mantas currently reside at the Georgia Aquarium. One notable individual is "Nandi", a manta ray that was accidentally caught in shark nets off Durban, South Africa, in 2007. Rehabilitated, but outgrowing her tank at uShaka Marine World, Nandi was moved to the larger Georgia Aquarium in August 2008, where she resides in its 23,848 m3 (6,300,000 US gal) "Ocean Voyager" exhibit. A second manta ray, "Tallulah", joined that aquarium's collection in September 2009, and a third was added in 2010.
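As a quick arithmetic check on the tank figures quoted above (a standard unit conversion, not a claim from the source), the two stated volumes are consistent:

$$6{,}300{,}000\ \text{US gal} \times 3.7854\ \tfrac{\text{L}}{\text{gal}} \approx 2.385 \times 10^{7}\ \text{L} \approx 23{,}848\ \text{m}^3$$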
The Atlantis resort on Paradise Island, Bahamas, hosted a manta named "Zeus" that was used as a research subject for three years until it was released in 2008.
Tourism
Manta ray tourism is estimated to generate over US$73 million per year and to bring US$140 million per year to local economies. The majority of global revenues come from ten countries: Japan, Indonesia, the Maldives, Mozambique, Thailand, Australia, Mexico, the United States, the Federated States of Micronesia, and Palau. Divers may get a chance to watch mantas visiting cleaning stations, and night dives enable viewers to see mantas feeding on plankton attracted by the lights.
Ray tourism benefits locals and visitors by raising awareness of natural resource management and educating them about the animals, and it can also provide funds for research and conservation. However, constant unregulated interactions with tourists can negatively affect the rays by disrupting ecological relationships and increasing disease transmission.
In 2014, Indonesia banned fishing for and export of mantas, as manta ray tourism is more economically beneficial than allowing the rays to be killed. A dead manta is worth $40 to $500, while the economic impact of tourism at a popular dive site can be $1 million per manta over its lifetime; the most famous spot for manta watching is Manta Point, near Labuan Bajo. Indonesia has of ocean, and this is now the world's largest sanctuary for manta rays.
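To make the scale of that comparison explicit (simple arithmetic on the figures above, not an additional estimate from the source), the lifetime tourism value implied per manta exceeds the one-off value of a dead animal by three to four orders of magnitude:

$$\frac{\$1{,}000{,}000}{\$500} = 2{,}000 \qquad\text{and}\qquad \frac{\$1{,}000{,}000}{\$40} = 25{,}000$$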
Feather
Feathers are epidermal growths that form a distinctive outer covering, or plumage, on both avian (bird) and some non-avian dinosaurs and other archosaurs. They are the most complex integumentary structures found in vertebrates and an example of a complex evolutionary novelty. They are among the characteristics that distinguish the extant birds from other living groups.
Although feathers cover most of the bird's body, they arise only from certain well-defined tracts on the skin. They aid in flight, thermal insulation, and waterproofing. In addition, coloration helps in communication and protection. The study of feathers is called plumology (or plumage science).
People use feathers in many ways that are practical, cultural, and religious. Feathers are both soft and excellent at trapping heat; thus, they are sometimes used in high-class bedding, especially pillows, blankets, and mattresses. They are also used as filling for winter clothing and outdoor bedding, such as quilted coats and sleeping bags. Goose and eider down have great loft, the ability to expand from a compressed, stored state to trap large amounts of compartmentalized, insulating air. Feathers of large birds (most often geese) have been and are used to make quill pens. Historically, the hunting of birds for decorative and ornamental feathers has endangered some species and helped to contribute to the extinction of others. Today, feathers used in fashion and in military headdresses and clothes are obtained as a waste product of poultry farming, including chickens, geese, turkeys, pheasants, and ostriches. These feathers are dyed and manipulated to enhance their appearance, as poultry feathers are naturally often dull in appearance compared to the feathers of wild birds.
Etymology
Feather derives from the Old English "feþer", which is of Germanic origin; related to Dutch "veer" and German "Feder", from an Indo-European root shared by Sanskrit's "patra" meaning 'wing', Latin's "penna" meaning 'feather', and Greek's "pteron", "pterux" meaning 'wing'.
Because feathers were an integral part of quills, the early pens used for writing, the word pen itself is derived from the Latin penna, meaning feather. The French word plume can mean feather, quill, or pen.
Structures and characteristics
Feathers are among the most complex integumentary appendages found in vertebrates and are formed in tiny follicles in the epidermis, or outer skin layer, that produce keratin proteins. The β-keratins in feathers, beaks and claws – and the claws, scales and shells of reptiles – are composed of protein strands hydrogen-bonded into β-pleated sheets, which are then further twisted and crosslinked by disulfide bridges into structures even tougher than the α-keratins of mammalian hair, horns and hooves. The exact signals that induce the growth of feathers on the skin are not known, but it has been found that the transcription factor cDermo-1 induces the growth of feathers on skin and scales on the leg.
Classification
There are two basic types of feather: vaned feathers, which cover the exterior of the body, and down feathers, which lie underneath the vaned feathers. The pennaceous feathers are vaned feathers; also called contour feathers, they arise from tracts and cover the entire body. A third, rarer type of feather, the filoplume, is hairlike. Filoplumes are closely associated with pennaceous feathers and are often entirely hidden by them, with one or two filoplumes attached and sprouting from near the same point of the skin as each pennaceous feather, at least on a bird's head, neck, and trunk. Filoplumes are entirely absent in ratites. In some passerines, filoplumes arise exposed beyond the pennaceous feathers on the neck. The remiges, or flight feathers of the wing, and rectrices, or flight feathers of the tail, are the most important feathers for flight. A typical vaned feather features a main shaft, called the rachis. Fused to the rachis are a series of branches, or barbs; the barbs themselves are also branched and form the barbules. These barbules have minute hooks called barbicels for cross-attachment. Down feathers are fluffy because they lack barbicels, so the barbules float free of each other, allowing the down to trap air and provide excellent thermal insulation. At the base of the feather, the rachis expands to form the hollow tubular calamus (or quill), which inserts into a follicle in the skin. The basal part of the calamus lacks vanes; this part is embedded within the skin follicle and has an opening at the base (proximal umbilicus) and a small opening on the side (distal umbilicus).
Hatchling birds of some species have a special kind of natal down feathers (neossoptiles) which are pushed out when the normal feathers (teleoptiles) emerge.
Flight feathers are stiffened so as to work against the air in the downstroke but yield in other directions. It has been observed that the orientation pattern of β-keratin fibers in the feathers of flying birds differs from that in flightless birds: the fibers are better aligned along the shaft axis toward the tip, and the lateral walls of the rachis show a crossed-fiber structure.
Functions
Feathers insulate birds from water and cold temperatures. They may also be plucked to line the nest and provide insulation for the eggs and young. The individual feathers in the wings and tail play important roles in controlling flight. Some species have a crest of feathers on their heads. Although feathers are light, a bird's plumage weighs two or three times more than its skeleton, since many bones are hollow and contain air sacs. Color patterns serve as camouflage against predators for birds in their habitats, and help predatory birds go unnoticed as they seek a meal. As with fish, the top and bottom colors may differ, in order to provide camouflage during flight. Striking differences in feather patterns and colors are part of the sexual dimorphism of many bird species and are particularly important in the selection of mating pairs. In some cases, there are differences in the UV reflectivity of feathers across sexes even though no differences in color are noted in the visible range. The wing feathers of male club-winged manakins Machaeropterus deliciosus have special structures that are used to produce sounds by stridulation.
Some birds have a supply of powder down feathers that grow continuously, with small particles regularly breaking off from the ends of the barbules. These particles produce a powder that sifts through the feathers on the bird's body and acts as a waterproofing agent and a feather conditioner. Powder down has evolved independently in several taxa and can be found in down as well as in pennaceous feathers. They may be scattered in plumage as in the pigeons and parrots or in localized patches on the breast, belly, or flanks, as in herons and frogmouths. Herons use their bill to break the powder down feathers and to spread them, while cockatoos may use their head as a powder puff to apply the powder. Waterproofing can be lost by exposure to emulsifying agents due to human pollution. Feathers can then become waterlogged, causing the bird to sink. It is also very difficult to clean and rescue birds whose feathers have been fouled by oil spills. The feathers of cormorants soak up water and help to reduce buoyancy, thereby allowing the birds to swim submerged.
Bristles are stiff, tapering feathers with a large rachis but few barbs. Rictal bristles are found around the eyes and bill. They may serve a similar purpose to eyelashes and vibrissae in mammals. Although there is as yet no clear evidence, it has been suggested that rictal bristles have sensory functions and may help insectivorous birds to capture prey. In one study, willow flycatchers (Empidonax traillii) were found to catch insects equally well before and after removal of the rictal bristles.
Grebes are peculiar in their habit of ingesting their own feathers and feeding them to their young. Observations on their diet of fish and the frequency of feather eating suggest that ingesting feathers, particularly down from their flanks, aids in forming easily ejectable pellets.
Distribution
Contour feathers are not uniformly distributed on the skin of the bird except in some groups such as the penguins, ratites and screamers. In most birds the feathers grow from specific tracts of skin called pterylae; between the pterylae there are regions which are free of feathers called apterylae (or apteria). Filoplumes and down may arise from the apterylae. The arrangement of these feather tracts, pterylosis or pterylography, varies across bird families and has been used in the past as a means for determining the evolutionary relationships of bird families. Species that incubate their own eggs often lose their feathers on a region of their belly, forming a brooding patch.
Coloration
The colors of feathers are produced by pigments, by microscopic structures that can refract, reflect, or scatter selected wavelengths of light, or by a combination of both.
Most feather pigments are melanins (brown and beige pheomelanins, black and grey eumelanins) and carotenoids (red, yellow, orange); other pigments occur only in certain taxa – the yellow to red psittacofulvins (found in some parrots) and the red turacin and green turacoverdin (porphyrin pigments found only in turacos).
Structural coloration is involved in the production of blue colors, iridescence, most ultraviolet reflectance and in the enhancement of pigmentary colors. Structural iridescence has been reported in fossil feathers dating back 40 million years. White feathers lack pigment and scatter light diffusely; albinism in birds is caused by defective pigment production, though structural coloration will not be affected (as can be seen, for example, in blue-and-white budgerigars).
The blues and bright greens of many parrots are produced by constructive interference of light reflecting from different layers of structures in feathers. In the case of green plumage, a yellow pigment is combined with a specific feather structure that some call the Dyck texture. Melanin is often involved in the absorption of light; in combination with a yellow pigment, it produces a dull olive-green.
In some birds, feather colors may be created, or altered, by secretions from the uropygial gland, also called the preen gland. The yellow bill colors of many hornbills are produced by such secretions. It has been suggested that there are other color differences that may be visible only in the ultraviolet region, but studies have failed to find evidence. The oil secretion from the uropygial gland may also have an inhibitory effect on feather bacteria.
The reds, orange and yellow colors of many feathers are caused by various carotenoids. Carotenoid-based pigments might be honest signals of fitness because they are derived from special diets and hence might be difficult to obtain, and/or because carotenoids are required for immune function and hence sexual displays come at the expense of health.
A bird's feathers undergo wear and tear and are replaced periodically during the bird's life through molting. New feathers, known while developing as blood feathers or pin feathers depending on the stage of growth, are formed in the same follicles from which the old ones were shed. The presence of melanin in feathers increases their resistance to abrasion. One study found that melanin-based feathers degraded more quickly under bacterial action than unpigmented feathers or those with carotenoid pigments, even when compared with unpigmented feathers from the same species. However, another study the same year compared the action of bacteria on the pigmentation of two song sparrow species and observed that the darker-pigmented feathers were more resistant; the authors cited other research, also published in 2004, stating that increased melanin provided greater resistance. They observed that the greater resistance of the darker birds confirmed Gloger's rule.
Although sexual selection plays a major role in the development of feathers, particularly their color, it is not the only factor at work. New studies suggest that the feathers of birds also influence many important aspects of avian behavior, such as the height at which different species build their nests. Since females are the prime caregivers, evolution has favored females with duller colors so that they blend into the nesting environment; the position of the nest, and whether it has a greater chance of being predated, has exerted constraints on female birds' plumage. A species that nests on the ground, rather than in the tree canopy, will need much duller colors in order not to attract attention to the nest. The height study found that birds nesting in tree canopies often suffer many more predator attacks because of the brighter colors the females display. Another evolutionary influence on why birds' feathers are so colorful and patterned may be that birds developed their bright colors from the vegetation and flowers that thrive around them. Most bird species blend into their environment to some degree, so if a species' habitat is full of colors and patterns, the species will tend to evolve to blend in and avoid being eaten. Birds' feathers show a large range of colors, even exceeding the variety of colors found in many plants, leaves, and flowers.
Parasites
The feather surface is the home for some ectoparasites, notably feather lice (Phthiraptera) and feather mites. Feather lice typically live on a single host and can move only from parents to chicks, between mating birds, and, occasionally, by phoresy. This life history has resulted in most of the parasite species being specific to the host and coevolving with the host, making them of interest in phylogenetic studies.
Feather holes are chewing traces of lice (most probably Brueelia spp.) on the wing and tail feathers. They were described on barn swallows, and because they are easy to count, many evolutionary, ecological, and behavioral publications use them to quantify the intensity of infestation.
Parasitic cuckoos which grow up in the nests of other species also have host-specific feather lice and these seem to be transmitted only after the young cuckoos leave the host nest.
Birds maintain their feather condition by preening and bathing in water or dust. It has been suggested that a peculiar behavior of birds, anting, in which ants are introduced into the plumage, helps to reduce parasites, but no supporting evidence has been found.
Human usage
Utilitarian
Bird feathers have long been used for fletching arrows. Colorful feathers such as those belonging to pheasants have been used to decorate fishing lures.
Feathers are also valuable in aiding the identification of species in forensic studies, particularly in bird strikes to aircraft. The ratios of hydrogen isotopes in feathers help in determining the geographic origins of birds. Feathers may also be useful in the non-destructive sampling of pollutants.
The poultry industry produces a large amount of feathers as waste, which, like other forms of keratin, are slow to decompose. Feather waste has been used in a number of industrial applications as a medium for culturing microbes, biodegradable polymers, and production of enzymes. Feather proteins have been tried as an adhesive for wood board.
Some groups of Native people in Alaska have used ptarmigan feathers as temper (non-plastic additives) in pottery manufacture since the first millennium BC in order to promote thermal shock resistance and strength.
In religion and culture
Eagle feathers have great cultural and spiritual value to Native Americans in the United States and First Nations peoples in Canada as religious objects. In the United States, the religious use of eagle and hawk feathers is governed by the eagle feather law, a federal law limiting the possession of eagle feathers to certified and enrolled members of federally recognized Native American tribes.
In South America, brews made from the feathers of condors are used in traditional medications. In India, feathers of the Indian peacock have been used in traditional medicine for snakebite, infertility, and coughs.
Members of Scotland's Clan Campbell are known to wear feathers on their bonnets to signify authority within the clan. Clan chiefs wear three, chieftains wear two and an armiger wears one. Any member of the clan who does not meet the criteria is not authorized to wear feathers as part of traditional garb and doing so is considered presumptuous.
During the 18th, 19th, and early 20th centuries, there was a booming international trade in plumes for extravagant women's hats and other headgear (including in Victorian fashion). Frank Chapman noted in 1886 that feathers of as many as 40 species of birds were used in about three-fourths of the 700 ladies' hats that he observed in New York City. For instance, South American hummingbird feathers were used in the past to dress some of the miniature birds featured in singing bird boxes. This trade caused severe losses to bird populations (for example, egrets and whooping cranes). Conservationists led a major campaign against the use of feathers in hats. This contributed to passage of the Lacey Act in 1900, and to changes in fashion. The ornamental feather market then largely collapsed.
More recently, rooster plumage has become a popular trend as a hairstyle accessory, with feathers formerly used as fishing lures now being used to provide color and style to hair.
Feather products manufacturing in Europe has declined in the last 60 years, mainly due to competition from Asia.
Feathers have adorned hats at many prestigious events such as weddings and Ladies Day at racecourses (Royal Ascot).
Evolution
Functional considerations
The functional view on the evolution of feathers has traditionally focused on insulation, flight, and display. Discoveries of non-flying Late Cretaceous feathered dinosaurs in China, however, suggest that flight could not have been the original primary function, as the feathers simply would not have been capable of providing any form of lift. There have been suggestions that feathers may have had their original function in thermoregulation, waterproofing, or even as sinks for metabolic wastes such as sulphur. Recent discoveries are argued to support a thermoregulatory function, at least in smaller dinosaurs. Some researchers even argue that thermoregulation arose from bristles on the face that were used as tactile sensors.

While feathers have been suggested as having evolved from reptilian scales, there are numerous objections to that idea, and more recent explanations have arisen from the paradigm of evolutionary developmental biology. Theories of the scale-based origins of feathers suggest that the planar scale structure was modified for development into feathers by splitting to form the webbing; however, the actual developmental process involves a tubular structure arising from a follicle, with the tube splitting longitudinally to form the webbing. The number of feathers per unit area of skin is higher in smaller birds than in larger birds, and this trend points to their important role in thermal insulation, since smaller birds lose more heat owing to their relatively larger surface area in proportion to body weight (a consequence of the square-cube law, sketched below). The miniaturization of birds also played a role in the evolution of powered flight.

The coloration of feathers is believed to have evolved primarily in response to sexual selection. In fossil specimens of the paravian Anchiornis huxleyi and the pterosaur Tupandactylus imperator, the feathers are so well preserved that the structure of the melanosomes (pigment-bearing organelles) can be observed. By comparing the shape of the fossil melanosomes to melanosomes from extant birds, the color and pattern of the feathers on Anchiornis and Tupandactylus could be determined. Anchiornis was found to have black-and-white-patterned feathers on the forelimbs and hindlimbs, with a reddish-brown crest. This pattern is similar to the coloration of many extant bird species, which use plumage coloration for display and communication, including sexual selection and camouflage. It is likely that non-avian dinosaur species utilized plumage patterns for similar functions as modern birds before the origin of flight. In many cases, the physiological condition of birds (especially males) is indicated by the quality of their feathers, and this is used (by the females) in mate choice. Additionally, comparison of different Ornithomimus edmontonicus specimens found that older individuals had a pennibrachium (a wing-like structure consisting of elongate feathers), while younger ones did not, suggesting that the pennibrachium was a secondary sexual characteristic with a likely sexual function.
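The surface-area argument above is the familiar square-cube law. As a minimal sketch (the proportionalities are standard geometry, not figures from the article), let $L$ be a characteristic body length:

$$A \propto L^{2}, \qquad m \propto L^{3} \quad\Longrightarrow\quad \frac{A}{m} \propto \frac{1}{L}$$

A smaller bird (smaller $L$) therefore has more heat-losing surface per unit of body mass, which is consistent with its denser feather coverage.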
Molecular evolution
Several genes have been found to determine feather development, and they will be key to understanding the evolution of feathers. For instance, some genes, such as the scale-feather converters Sox2, Zic1, Grem1, Spry2, and Sox18, convert scales into feathers or feather-like structures when expressed or induced in bird feet.
Feathers and scales are made up of two distinct forms of keratin, and it was long thought that each type of keratin was exclusive to each skin structure (feathers and scales). However, feather keratin is also present in the early stages of development of American alligator scales. This type of keratin, previously thought to be specific to feathers, is suppressed during embryological development of the alligator and so is not present in the scales of mature alligators. The presence of this homologous keratin in both birds and crocodilians indicates that it was inherited from a common ancestor.
This may suggest that crocodilian scales, bird and dinosaur feathers, and pterosaur pycnofibres are all developmental expressions of the same primitive archosaur skin structures, and that feathers and pycnofibres could therefore be homologous. Molecular dating in 2011 showed that the subfamily of feather β-keratins found in extant birds started to diverge 143 million years ago, suggesting that the pennaceous feathers of Anchiornis were not made of the feather β-keratins present in extant birds. However, a study of fossil feathers from the dinosaur Sinosauropteryx and other fossils revealed traces of beta-sheet proteins, using infrared spectroscopy and sulfur X-ray spectroscopy. The presence of abundant alpha-proteins in some fossil feathers was shown to be an artefact of the fossilization process, as beta-protein structures are readily altered to alpha-helices during thermal degradation. In 2019, scientists found that the genes for feather production evolved at the base of Archosauria, supporting the presence of feathers in early ornithodirans, which is consistent with the fossil record.
Feathered dinosaurs
Several non-avian dinosaurs had feathers on their limbs that would not have functioned for flight. One theory suggests that feathers originally evolved on dinosaurs due to their insulation properties; then, small dinosaur species which grew longer feathers may have found them helpful in gliding, leading to the evolution of proto-birds like Archaeopteryx and Microraptor zhaoianus. Another theory posits that the original adaptive advantage of early feathers was their pigmentation or iridescence, contributing to sexual preference in mate selection. Dinosaurs that had feathers or protofeathers include Pedopenna daohugouensis and Dilong paradoxus, a tyrannosauroid which is 60 to 70 million years older than Tyrannosaurus rex.
The majority of dinosaurs known to have had feathers or protofeathers are theropods; however, featherlike "filamentous integumentary structures" are also known from the ornithischian dinosaurs Tianyulong and Psittacosaurus. The exact nature of these structures is still under study, but it is believed that stage-1 feathers (see the Evolutionary stages section below) such as those seen in these two ornithischians likely functioned in display. In 2014, the ornithischian Kulindadromeus was reported as having structures resembling stage-3 feathers. The likelihood that the early ancestors of dinosaurs bore scales is high, but this conclusion assumed that primitive pterosaurs were scaly. A 2016 study analyzed the pulp morphology of the tail bristles of Psittacosaurus and found them similar to feathers, but noted that they are also similar to the bristles on the head of the Congo peafowl, the beard of the turkey, and the spine on the head of the horned screamer. A re-estimation of maximum likelihoods by paleontologist Thomas Holtz found that filaments were more likely to be the ancestral state of dinosaurs.
In 2010, a carcharodontosaurid named Concavenator corcovatus was described with bumps on the ulna suggesting the attachment of remiges, and thus that it might have had quill-like structures on the arms. However, Foth et al. (2014) disagreed, pointing out that the bumps on the ulna of Concavenator lie anterolaterally, unlike remiges, which insert posterolaterally on the ulna of some birds; they considered it more likely that the bumps are attachment points for interosseous ligaments. This was disputed in turn by Cuesta Fidalgo and her colleagues, who argued that the bumps are in fact posterolateral, a position inconsistent with interosseous ligaments.
Since the 1990s, dozens of feathered dinosaurs have been discovered in the clade Maniraptora, which includes the clade Avialae and the closest relatives of birds, Oviraptorosauria and Deinonychosauria. In 1998, the discovery of a feathered oviraptorosaurian, Caudipteryx zoui, challenged the notion of feathers as a structure exclusive to Avialae. Buried in the Yixian Formation in Liaoning, China, C. zoui lived during the Early Cretaceous Period. The integumentary structures present on its forelimbs and tail have been accepted as pennaceous, vaned feathers, based on the rachis and the herringbone pattern of the barbs. In the clade Deinonychosauria, the continued divergence of feathers is also apparent in the families Troodontidae and Dromaeosauridae. Branched feathers with rachis, barbs, and barbules were discovered in many members, including Sinornithosaurus millenii, a dromaeosaurid found in the Yixian Formation (124.6 MYA).
Previously, a temporal paradox existed in the evolution of feathers: theropods with highly derived bird-like characteristics occurred later in time than Archaeopteryx, suggesting that the descendants of birds arose before the ancestor. The discovery of Anchiornis huxleyi in the Late Jurassic Tiaojishan Formation (160 MYA) of western Liaoning in 2009 resolved this paradox. By predating Archaeopteryx, Anchiornis demonstrates the existence of a theropod ancestor with modern-style feathers, providing insight into the dinosaur-bird transition. The specimen shows the distribution of large pennaceous feathers on the forelimbs and tail, implying that pennaceous feathers spread to the rest of the body at an earlier stage in theropod evolution. The development of pennaceous feathers did not replace earlier filamentous feathers: filamentous feathers are preserved alongside modern-looking flight feathers – including some with modifications found in the feathers of extant diving birds – in 80-million-year-old amber from Alberta.
Two small wings trapped in amber dating to 100 mya show that plumage existed in some bird predecessors. The wings most probably belonged to enantiornithines, a diverse group of avian dinosaurs.
A large phylogenetic analysis of early dinosaurs by Matthew Baron, David B. Norman, and Paul Barrett (2017) found that Theropoda is actually more closely related to Ornithischia, with which it formed the sister grouping within the clade Ornithoscelida. The study also suggested that if the feather-like structures of theropods and ornithischians are of common evolutionary origin, then it is possible that feathers were restricted to Ornithoscelida. If so, the origin of feathers would likely have occurred as early as the Middle Triassic, though this has been disputed. The lack of feathers in large sauropods and ankylosaurs may be because feather development was suppressed by genomic regulators.
Evolutionary stages
Several studies of feather development in the embryos of modern birds, coupled with the distribution of feather types among various prehistoric bird precursors, have allowed scientists to attempt a reconstruction of the sequence in which feathers first evolved and developed into the types found on modern birds.
Feather evolution was broken down into the following stages by Xu and Guo in 2009:
1. Single filament
2. Multiple filaments joined at their base
3. Multiple filaments joined at their base to a central filament
4. Multiple filaments along the length of a central filament
5. Multiple filaments arising from the edge of a membranous structure
6. Pennaceous feather with vane of barbs and barbules and central rachis
7. Pennaceous feather with an asymmetrical rachis
8. Undifferentiated vane with central rachis
However, Foth (2011) showed that some of these purported stages (stages 2 and 5 in particular) are likely simply artifacts of preservation caused by the way fossil feathers are crushed and the feather remains or imprints are preserved. Foth re-interpreted stage 2 feathers as crushed or misidentified feathers of at least stage 3, and stage 5 feathers as crushed stage 6 feathers.
The following simplified diagram of dinosaur relationships follows these results, and shows the likely distribution of plumaceous (downy) and pennaceous (vaned) feathers among dinosaurs and prehistoric birds. The diagram follows one presented by Xu and Guo (2009) modified with the findings of Foth (2011). The numbers accompanying each name refer to the presence of specific feather stages. Note that 's' indicates the known presence of scales on the body.
In pterosaurs
Pterosaurs were long known to have filamentous fur-like structures covering their bodies, known as pycnofibres, which were generally considered distinct from the "true feathers" of birds and their dinosaur kin. However, a 2018 study of two small, well-preserved pterosaur fossils from the Jurassic of Inner Mongolia, China, indicated that pterosaurs were covered in an array of differently structured pycnofibres (rather than just filamentous ones), with several of these structures displaying diagnostic features of feathers, such as non-veined grouped filaments and bilaterally branched filaments, both of which were originally thought to be exclusive to birds and other maniraptoran dinosaurs. Given these findings, it is possible that feathers have deep evolutionary origins in ancestral archosaurs, though there is also a possibility that these structures independently evolved to resemble bird feathers via convergent evolution. Mike Benton, the study's senior author, lent credence to the former theory, stating: "We couldn't find any anatomical evidence that the four pycnofiber types are in any way different from the feathers of birds and dinosaurs. Therefore, because they are the same, they must share an evolutionary origin, and that was about 250 million years ago, long before the origin of birds." However, as Liliana D'Alba pointed out, the identification of the integumentary structures of the anurognathid specimens is still based on gross morphology, and their pycnofibres might not be homologous with the filamentous appendages of dinosaurs. Paul M. Barrett suspects that, during the integumentary evolution of pterosaurs, scales were primitively lost and pycnofibres then appeared.
Cascocauda was almost entirely covered in an extensive coat of pycnofibres, which appear to have come in two types. The first are simple, curved filaments ranging in length from 3.5 to 12.8 mm; these cover most of the animal, including the head, neck, body, limbs, and tail. The second type consists of tufts of filaments joined near the base, similar to the branching down feathers of birds and other coelurosaurian dinosaurs; these are around 2.5–8.0 mm long and cover only the wing membranes. Studies of sampled pycnofibres revealed the presence of microbodies within the filaments, resembling the melanosome pigments identified in other fossil integuments, specifically phaeomelanosomes. Furthermore, infrared spectral analysis of these pycnofibres shows absorption spectra similar to those of red human hair. These pycnofibres likely provided insulation and may also have helped streamline the body and wings during flight.
The identity of these branching structures as pycnofibres or feathers was challenged by Unwin & Martill (2020), who interpreted them as bunched-up and degraded aktinofibrils (stiffening fibres found in the wing membrane of pterosaurs) and attributed the melanosomes and keratin to skin rather than filaments. These claims were refuted by Yang and colleagues, who argue that Unwin and Martill's interpretations are inconsistent with the specimens' preservation. Namely, they argue that the consistent structure, regular spacing, and extension of the filaments beyond the wing membrane support their identification as pycnofibres. Further, they argue that the restriction of melanosomes and keratin to the fibres, as occurs in fossil dinosaur feathers, supports the case that these are filaments and is not consistent with contamination from preserved skin. Protofeathers likely evolved in early archosaurs, not long after the Permian-Triassic extinction event, at a time when the metabolic rates of early archosaurs and synapsids were increasing, postures were becoming erect, and sustained activity was developing.
Chelicerata
The subphylum Chelicerata (from Neo-Latin, , ) constitutes one of the major subdivisions of the phylum Arthropoda. Chelicerates include the sea spiders, horseshoe crabs, and arachnids (including harvestmen, scorpions, spiders, solifuges, ticks, and mites, among many others), as well as a number of extinct lineages, such as the eurypterids (sea scorpions) and chasmataspidids.
Chelicerata split from Mandibulata by the mid-Cambrian, as evidenced by stem-group chelicerates like Habeliida and Mollisonia present by this time. The surviving marine species include the four species of xiphosurans (horseshoe crabs), and possibly the 1,300 species of pycnogonids (sea spiders), if the latter are indeed chelicerates. On the other hand, there are over 77,000 well-identified species of air-breathing chelicerates, and there may be about 500,000 unidentified species.
Like all arthropods, chelicerates have segmented bodies with jointed limbs, all covered in a cuticle made of chitin and proteins. The chelicerate body plan consists of two tagmata, the prosoma and the opisthosoma – excepting the mites, which have lost any visible division between these sections. The chelicerae, which give the group its name, are the only appendages that appear before the mouth. In most sub-groups, they are modest pincers used to feed. However, spiders' chelicerae form fangs that most species use to inject venom into prey. The group has the open circulatory system typical of arthropods, in which a tube-like heart pumps blood through the hemocoel, which is the major body cavity. Marine chelicerates have gills, while the air-breathing forms generally have both book lungs and tracheae. In general, the ganglia of living chelicerates' central nervous systems fuse into large masses in the cephalothorax, but there are wide variations and this fusion is very limited in the Mesothelae, which are regarded as the oldest and most basal group of spiders. Most chelicerates rely on modified bristles for touch and for information about vibrations, air currents, and chemical changes in their environment. The most active hunting spiders also have very acute eyesight.
Chelicerates were originally predators, but the group has diversified to use all the major feeding strategies: predation, parasitism, herbivory, scavenging and eating decaying organic matter. Although harvestmen can digest solid food, the guts of most modern chelicerates are too narrow for this, and they generally liquidize their food by grinding it with their chelicerae and pedipalps and flooding it with digestive enzymes. To conserve water, air-breathing chelicerates excrete waste as solids that are removed from their blood by Malpighian tubules, structures that also evolved independently in insects.
While the marine horseshoe crabs rely on external fertilization, air-breathing chelicerates use internal but usually indirect fertilization. Many species use elaborate courtship rituals to attract mates. Most lay eggs that hatch as what look like miniature adults, but all scorpions and a few species of mites keep the eggs inside their bodies until the young emerge. In most chelicerate species the young have to fend for themselves, but in scorpions and some species of spider the females protect and feed their young.
The evolutionary origins of chelicerates from the early arthropods have been debated for decades. Although there is considerable agreement about the relationships between most chelicerate sub-groups, the inclusion of the Pycnogonida in this taxon has been questioned, and the exact position of scorpions is still controversial, though they were long considered the most basal of the arachnids.
Venom has evolved three times in the chelicerates, in spiders, scorpions, and pseudoscorpions, or four times if the hematophagous secretions produced by ticks are included. In addition, there have been undocumented reports of venom glands in Solifugae. Chemical defense has been found in whip scorpions, short-tailed whipscorpions, harvestmen, beetle mites, and sea spiders.
Although the venom of a few spider and scorpion species can be very dangerous to humans, medical researchers are investigating the use of these venoms for the treatment of disorders ranging from cancer to erectile dysfunction. The medical industry also uses the blood of horseshoe crabs as a test for the presence of contaminant bacteria. Mites can cause allergies in humans, transmit several diseases to humans and their livestock, and are serious agricultural pests.
Description
Segmentation and cuticle
The Chelicerata are arthropods: they have segmented bodies with jointed limbs, all covered in a cuticle made of chitin and proteins; heads composed of several segments that fuse during the development of the embryo; a much-reduced coelom; and a hemocoel through which the blood circulates, driven by a tube-like heart. Chelicerates' bodies consist of two tagmata, sets of segments that serve similar functions: the foremost, called the prosoma or cephalothorax, and the rear, called the opisthosoma or abdomen. However, in the Acari (mites and ticks) there is no visible division between these sections.
The prosoma is formed in the embryo by fusion of the ocular somite (referred to as the "acron" in earlier literature), which carries the eyes and labrum, with six post-ocular segments (somites 1 to 6), which all have paired appendages. It was previously thought that chelicerates had lost the antenna-bearing somite 1, but later investigations revealed that it is retained and corresponds to a pair of chelicerae or chelifores, small appendages that often form pincers. Somite 2 has a pair of pedipalps that in most sub-groups perform sensory functions, while the remaining four cephalothorax segments (somites 3 to 6) have pairs of legs. In basal forms the ocular somite has a pair of compound eyes on the sides and four pigment-cup ocelli ("little eyes") in the middle. The mouth is between somites 1 and 2 (chelicerae and pedipalps).
The opisthosoma consists of thirteen or fewer segments and may or may not end with a telson. In some taxa, such as scorpions and eurypterids, the opisthosoma is divided into two regions, the mesosoma and the metasoma. The abdominal appendages of modern chelicerates are missing or heavily modified – for example, in spiders the remaining appendages form spinnerets that extrude silk, while those of horseshoe crabs (Xiphosura) form gills.
Like all arthropods, chelicerates' bodies and appendages are covered with a tough cuticle made mainly of chitin and chemically hardened proteins. Since this cannot stretch, the animals must molt to grow: they grow a new, still-soft cuticle, then cast off the old one and wait for the new one to harden. Until the new cuticle hardens the animals are defenseless and almost immobilized.
Chelicerae and pedipalps
Chelicerae and pedipalps are the two pairs of appendages closest to the mouth; they vary widely in form and function and the consistent difference between them is their position in the embryo and corresponding neurons: chelicerae are deutocerebral and arise from somite 1, ahead of the mouth, while pedipalps are tritocerebral and arise from somite 2, behind the mouth.
The chelicerae ("claw horns") that give the sub-phylum its name normally consist of three sections, and the claw is formed by the third section and a rigid extension of the second. However, spiders' have only two sections, and the second forms a fang that folds away behind the first when not in use. The relative sizes of chelicerae vary widely: those of some fossil eurypterids and modern harvestmen form large claws that extended ahead of the body, while scorpions' are tiny pincers that are used in feeding and project only slightly in front of the head.
In basal chelicerates, the pedipalps are unspecialized and subequal to the posterior pairs of walking legs. However, in sea spiders and arachnids the pedipalps are more or less specialized for sensory or prey-catching functions – for example, scorpions have pincers and male spiders have bulbous tips that act as syringes to inject sperm into the females' reproductive openings when mating.
Body cavities and circulatory systems
As in all arthropods, the chelicerate body has a very small coelom restricted to small areas round the reproductive and excretory systems. The main body cavity is a hemocoel that runs most of the length of the body and through which blood flows, driven by a tubular heart that collects blood from the rear and pumps it forward. Although arteries direct the blood to specific parts of the body, they have open ends rather than joining directly to veins, and chelicerates therefore have open circulatory systems as is typical for arthropods.
Respiratory systems
These depend on individual sub-groups' environments. Modern terrestrial chelicerates generally have both book lungs, which deliver oxygen and remove waste gases via the blood, and tracheae, which do the same without using the blood as a transport system. The living horseshoe crabs are aquatic and have book gills that lie in a horizontal plane. For a long time it was assumed that the extinct eurypterids had gills, but the fossil evidence was ambiguous. However, a fossil of the eurypterid Onychopterella, from the Late Ordovician period, has what appear to be four pairs of vertically oriented book gills whose internal structure is very similar to that of scorpions' book lungs.
Feeding and digestion
The guts of most modern chelicerates are too narrow to take solid food. All scorpions and almost all spiders are predators that "pre-process" food in preoral cavities formed by the chelicerae and the bases of the pedipalps. However, one predominantly herbivorous spider species is known, and many supplement their diets with nectar and pollen. Many of the Acari (ticks and mites) are blood-sucking parasites, but there are many predatory, herbivorous and scavenging sub-groups. All the Acari have a retractable feeding assembly that consists of the chelicerae, pedipalps and parts of the exoskeleton, and which forms a preoral cavity for pre-processing food.
Harvestmen are among the minority of living chelicerates that can take solid food, and the group includes predators, herbivores and scavengers. Horseshoe crabs are also capable of processing solid food, and use a distinctive feeding system. Claws at the tips of their legs grab small invertebrates and pass them to a food groove that runs from between the rearmost legs to the mouth, which is on the underside of the head and faces slightly backwards. The bases of the legs form toothed gnathobases that both grind the food and push it towards the mouth. This is how the earliest arthropods are thought to have fed.
Excretion
Horseshoe crabs convert nitrogenous wastes to ammonia and dump it via their gills, and excrete other wastes as feces via the anus. They also have nephridia ("little kidneys"), which extract other wastes for excretion as urine. Ammonia is so toxic that it must be diluted rapidly with large quantities of water. Most terrestrial chelicerates cannot afford to use so much water and therefore convert nitrogenous wastes to other chemicals, which they excrete as dry matter. Extraction is by various combinations of nephridia and Malpighian tubules. The tubules filter wastes out of the blood and dump them into the hindgut as solids, a system that has evolved independently in insects and several groups of arachnids.
Nervous system
Chelicerate nervous systems are based on the standard arthropod model of a pair of nerve cords, each with a ganglion per segment, and a brain formed by fusion of the ganglia just behind the mouth with those ahead of it. If one assumes that chelicerates have lost the first segment, which bears antennae in other arthropods, chelicerate brains include only one pair of pre-oral ganglia instead of two. However, there is evidence that the first segment is indeed retained and bears the chelicerae.
There is a notable but variable trend towards fusion of other ganglia into the brain. The brains of horseshoe crabs include all the ganglia of the prosoma plus those of the first two opisthosomal segments, while the other opisthosomal segments retain separate pairs of ganglia. In most living arachnids, except scorpions if they are true arachnids, all the ganglia, including those that would normally be in the opisthosoma, are fused into a single mass in the prosoma and there are no ganglia in the opisthosoma. However, in the Mesothelae, which are regarded as the most basal living spiders, the ganglia of the opisthosoma and the rear part of the prosoma remain unfused, and in scorpions the ganglia of the cephalothorax are fused but the abdomen retains separate pairs of ganglia.
Senses
As with other arthropods, chelicerates' cuticles would block out information about the outside world, except that they are penetrated by many sensors or connections from sensors to the nervous system. In fact, spiders and other arthropods have modified their cuticles into elaborate arrays of sensors. Various touch and vibration sensors, mostly bristles called setae, respond to different levels of force, from strong contact to very weak air currents. Chemical sensors provide equivalents of taste and smell, often by means of setae.
Living chelicerates have lateral compound eyes (retained only in horseshoe crabs; in the other clades they have been reduced to a cluster of no more than five pairs of ocelli), mounted on the sides of the head, plus median pigment-cup ocelli ("little eyes"), mounted in the middle. These median ocelli-type eyes in chelicerates are assumed to be homologous with the crustacean nauplius eyes and the insect ocelli. The eyes of horseshoe crabs can detect movement but not form images. At the other extreme, jumping spiders have a very wide field of vision, and their main eyes are ten times as acute as those of dragonflies, able to see in both color and ultraviolet light.
Reproduction
Horseshoe crabs use external fertilization; the sperm and ova meet outside the parents' bodies. Despite being aquatic, they spawn on land in the intertidal zone on the beach. The female digs a depression in the wet sand, where she releases her eggs, and one or more males then release their sperm onto them. Their trilobite-like larvae look rather like miniature adults, as they have full sets of appendages and eyes, but initially they have only two pairs of book-gills and gain three more pairs as they molt.
Sea spiders also use external fertilization: the male and female release their sperm and eggs into the water, where fertilization occurs. The male then collects the eggs and carries them around under his body.
Being air-breathing animals (although many mites have become secondarily aquatic), the arachnids use internal fertilization. Except in Opiliones and some mites, where the male has a penis used for direct fertilization, fertilization in arachnids is indirect. Indirect fertilization happens in two ways: the male deposits a spermatophore (package of sperm) on the ground, which is then picked up by the female; or the male stores his sperm in appendages modified into sperm-transfer organs, such as the pedipalps of male spiders, which are inserted into the female's genital openings during copulation. Courtship rituals are common, especially in species where the male risks being eaten before mating. Most arachnids lay eggs, but all scorpions and some mites are viviparous, giving birth to live young (even more mites are ovoviviparous, but most are oviparous). Female pseudoscorpions carry their eggs in a brood pouch on the belly, where the growing embryos feed on a nutritive fluid provided by the mother during development, and are therefore matrotrophic.
Levels of parental care for the young range from zero to prolonged. Scorpions carry their young on their backs until the first molt, and in a few semi-social species the young remain with their mother. Some spiders care for their young; for example, a wolf spider's brood clings to rough bristles on the mother's back, and females of some species respond to the "begging" behavior of their young by giving them their prey, provided it is no longer struggling, or even by regurgitating food.
Evolutionary history
Fossil record
There are large gaps in the chelicerates' fossil record because, like all arthropods, their exoskeletons are organic and hence their fossils are rare except in a few lagerstätten where conditions were exceptionally suited to preserving fairly soft tissues. Burgess Shale animals such as Sidneyia, from about , have been classified as chelicerates because their appendages resemble those of the Xiphosura (horseshoe crabs). However, cladistic analyses that consider wider ranges of characteristics do not place these animals as chelicerates. There is debate about whether Fuxianhuia, from earlier in the Cambrian period, about , was a chelicerate. Another Cambrian fossil, Kodymirus, was originally classified as an aglaspid but may have been a eurypterid and therefore a chelicerate. If any of these was closely related to chelicerates, there is a gap of at least 43 million years in the record between true chelicerates and their nearest not-quite-chelicerate relatives.
Sanctacaris, a member of the family Sanctacarididae from the Burgess Shale of Canada, represents the oldest confirmed occurrence of a chelicerate, Middle Cambrian in age. Although its chelicerate nature has been doubted on the basis of its pattern of tagmosis (how the segments are grouped, especially in the head), a restudy in 2014 confirmed its phylogenetic position as the oldest chelicerate. Another fossil from the site, Mollisonia, is considered a basal chelicerate and has the oldest known chelicerae and proto-book gills.
The eurypterids have left few good fossils. One of the earliest confirmed eurypterids, Pentecopterus decorahensis, appears in the Middle Ordovician period, making it the oldest known eurypterid.
Until recently the earliest known xiphosuran fossil dated from the Late Llandovery stage of the Silurian, but in 2008 an older specimen described as Lunataspis aurora was reported from about in the Late Ordovician.
The oldest known arachnid is the trigonotarbid Palaeotarbus jerami, from about in the Silurian period, which had a triangular cephalothorax and segmented abdomen, as well as eight legs and a pair of pedipalps.
Attercopus fimbriunguis, from in the Devonian period, bears the earliest known silk-producing spigots and was therefore hailed as a spider, but it lacked spinnerets and hence was not a true spider. Rather, it was likely the sister group of the spiders, a clade which has been named Serikodiastida. Close relatives of the group survived to the Cretaceous period. Several Carboniferous spiders were members of the Mesothelae, a basal group now represented only by the Liphistiidae, and fossils suggest that taxa closely related to the spiders, but not true members of the group, were also present during this period.
The Late Silurian Proscorpius has been classified as a scorpion, but differed significantly from modern scorpions: it appears wholly aquatic since it had gills rather than book lungs or tracheae; its mouth was completely under its head and almost between the first pair of legs, as in the extinct eurypterids and living horseshoe crabs. Fossils of terrestrial scorpions with book lungs have been found in Early Devonian rocks from about . The oldest species of scorpion found as of 2021 is Dolichophonus loudonensis, which lived during the Silurian, in present-day Scotland.
Relationships with other arthropods
A recent view of chelicerate phylogeny
A "traditional" view of chelicerate phylogeny
The "traditional" view of the arthropod "family tree" shows chelicerates as less closely related to the other major living groups (crustaceans; hexapods, which includes insects; and myriapods, which includes centipedes and millipedes) than these other groups are to each other. Recent research since 2001, using both molecular phylogenetics (the application of cladistic analysis to biochemistry, especially to organisms' DNA and RNA) and detailed examination of how various arthropods' nervous systems develop in the embryos, suggests that chelicerates are most closely related to myriapods, while hexapods and crustaceans are each other's closest relatives. However, these results are derived from analyzing only living arthropods, and including extinct ones such as trilobites causes a swing back to the "traditional" view, placing trilobites as the sister-group of the Tracheata (hexapods plus myriapods) and chelicerates as least closely related to the other groups.
Major sub-groups
Shultz (2007)'s evolutionary family tree of arachnids – † marks extinct groups.
It is generally agreed that the Chelicerata contain the classes Arachnida (spiders, scorpions, mites, etc.), Xiphosura (horseshoe crabs) and Eurypterida (sea scorpions, extinct). The extinct Chasmataspidida may be a sub-group within Eurypterida. The Pycnogonida (sea spiders) were traditionally classified as chelicerates, but some features suggest they may be representatives of the earliest arthropods from which the well-known groups such as chelicerates evolved.
However, the structure of "family tree" relationships within the Chelicerata has been controversial ever since the late 19th century. An attempt in 2002 to combine analysis of DNA features of modern chelicerates and anatomical features of modern and fossil ones produced credible results for many lower-level groups, but its results for the high-level relationships between major sub-groups of chelicerates were unstable: in other words, minor changes in the inputs caused significant changes in the outputs of the computer program used (POY). An analysis in 2007 using only anatomical features produced the cladogram on the right, but also noted that many uncertainties remain. In recent analyses the clade Tetrapulmonata is reliably recovered, but other ordinal relationships remain in flux.
The position of scorpions is particularly controversial. Some early fossils such as the Late Silurian Proscorpius have been classified by paleontologists as scorpions, but described as wholly aquatic as they had gills rather than book lungs or tracheae. Their mouths are also completely under their heads and almost between the first pair of legs, as in the extinct eurypterids and living horseshoe crabs. This presents a difficult choice: classify Proscorpius and other aquatic fossils as something other than scorpions, despite the similarities; accept that "scorpions" are not monophyletic but consist of separate aquatic and terrestrial groups; or treat scorpions as more closely related to eurypterids and possibly horseshoe crabs than to spiders and other arachnids, so that either scorpions are not arachnids or "arachnids" are not monophyletic. Cladistic analyses have recovered Proscorpius within the scorpions, based on reinterpretation of the species' breathing apparatus. This is reflected also in the reinterpretation of Palaeoscorpius as a terrestrial animal.
A 2013 phylogenetic analysis (the results presented in a cladogram below) on the relationships within the Xiphosura and the relations to other closely related groups (including the eurypterids, which were represented in the analysis by the genera Eurypterus, Parastylonurus, Rhenopterus and Stoermeropterus) concluded that the Xiphosura, as presently understood, was paraphyletic (a group sharing a last common ancestor but not including all descendants of this ancestor) and thus not a valid phylogenetic group. Eurypterids were recovered as closely related to arachnids instead of xiphosurans, forming the group Sclerophorata within the clade Dekatriata (composed of sclerophorates and chasmataspidids). This work suggested it is possible that Dekatriata is synonymous with Sclerophorata, as the reproductive system, the primary defining feature of sclerophorates, has not been thoroughly studied in chasmataspidids. Dekatriata is in turn part of the Prosomapoda, a group including the Xiphosurida (the only monophyletic xiphosuran group) and other stem-genera. A recent phylogenetic analysis of the chelicerates places the Xiphosura within the Arachnida as the sister group of Ricinulei, but others still recover a monophyletic Arachnida.
Diversity
Although well behind the insects, chelicerates are one of the most diverse groups of animals, with over 77,000 living species described in scientific publications. Some estimates suggest that there may be 130,000 undescribed species of spider and nearly 500,000 undescribed species of mites and ticks. While the earliest chelicerates and the living Pycnogonida (if they are chelicerates) and Xiphosura are marine animals that breathe dissolved oxygen, the vast majority of living species are air-breathers, although a few spider species build "diving bell" webs that enable them to live under water. Like their ancestors, most living chelicerates are carnivores, feeding mainly on small invertebrates. However, many species feed as parasites, herbivores, scavengers and detritivores.
Interaction with humans
In the past, Native Americans ate the flesh of horseshoe crabs, and used the tail spines as spear tips and the shells to bail water out of their canoes. More recent attempts to use horseshoe crabs as food for livestock were abandoned when it was found that this gave the meat a bad taste. Horseshoe crab blood contains a clotting agent, limulus amebocyte lysate, which is used to test antibiotics and kidney machines to ensure that they are free of dangerous bacteria, and to detect spinal meningitis and some cancers.
Cooked tarantula spiders are considered a delicacy in Cambodia, and by the Piaroa Indians of southern Venezuela. Spider venoms may be a less polluting alternative to conventional pesticides as they are deadly to insects but the great majority are harmless to vertebrates. Possible medical uses for spider venoms are being investigated, for the treatment of cardiac arrhythmia, Alzheimer's disease, strokes, and erectile dysfunction.
Because spider silk is both light and very strong, but large-scale harvesting from spiders is impractical, work is being done to produce it in other organisms by means of genetic engineering. Spider silk proteins have been successfully produced in transgenic goats' milk,
tobacco leaves,
silkworms,
and bacteria, and recombinant spider silk is now available as a commercial product from some biotechnology companies.
In the 20th century, there were about 100 reliably reported deaths from spider bites, compared with 1,500 from jellyfish stings. Scorpion stings are thought to be a significant danger in less-developed countries; for example, they cause about 1,000 deaths per year in Mexico, but only one every few years in the USA. Most of these incidents are caused by accidental human "invasions" of scorpions' nests. On the other hand, medical uses of scorpion venom are being investigated for treatment of brain cancers and bone diseases.
Ticks are parasitic, and some transmit micro-organisms and parasites that can cause diseases in humans, while the saliva of a few species can directly cause tick paralysis if they are not removed within a day or two.
A few of the closely related mites also infest humans, some causing intense itching by their bites, and others by burrowing into the skin. Species that normally infest other animals such as rodents may infest humans if their normal hosts are eliminated. Three species of mite are a threat to honey bees and one of these, Varroa destructor, has become the largest single problem faced by beekeepers worldwide. Mites cause several forms of allergic diseases, including hay fever, asthma and eczema, and they aggravate atopic dermatitis. Mites are also significant crop pests, although predatory mites may be useful in controlling some of these.
| Biology and health sciences | Chelicerata | null |
50896 | https://en.wikipedia.org/wiki/Space%20station | Space station | A space station (or orbital station) is a spacecraft which remains in orbit and hosts humans for extended periods of time. It therefore is an artificial satellite featuring habitation facilities. The purpose of maintaining a space station varies depending on the program. Most often space stations have been research stations, but they have also served military or commercial uses, such as hosting space tourists.
Space stations have hosted the only continuous human presence in space. The first space station was Salyut 1 (1971), which hosted the first crew, the ill-fated Soyuz 11. Space stations have been operated consecutively since Skylab (1973), and occupied since 1987, beginning with the Salyut successor Mir. Uninterrupted occupation has been sustained since the operational transition from Mir to the International Space Station (ISS), first occupied in 2000.
Currently there are two fully operational space stations – the ISS and China's Tiangong Space Station (TSS), which have been occupied since October 2000 with Expedition 1 and since June 2022 with Shenzhou 14, respectively. The highest number of people on one space station at the same time has been 13, first achieved during the eleven-day docking of the 127th Space Shuttle mission to the ISS in 2009. The record for the most people on all space stations at the same time is 17, first set on May 30, 2023, with 11 people on the ISS and 6 on the TSS.
Space stations are often modular, featuring docking ports through which they are built and maintained, allowing the joining or movement of modules and the docking of other spacecraft for the exchange of people, supplies and tools. While space stations generally do not leave their orbit, they do feature thrusters for station-keeping.
History
Early concepts
The first mention of anything resembling a space station occurred in Edward Everett Hale's 1868 "The Brick Moon". The first to give serious, scientifically grounded consideration to space stations were Konstantin Tsiolkovsky and Hermann Oberth about two decades apart in the early 20th century.
In 1929, Herman Potočnik's The Problem of Space Travel was published, the first to envision a "rotating wheel" space station to create artificial gravity. Conceptualized during the Second World War, the "sun gun" was a theoretical orbital weapon orbiting Earth at a height of . No further research was ever conducted. In 1951, Wernher von Braun published a concept for a rotating wheel space station in Collier's Weekly, referencing Potočnik's idea. However, development of a rotating station was never begun in the 20th century.
First advances and precursors
The first human flew to space and concluded the first orbit on April 12, 1961, with Vostok 1.
In its early planning, the Apollo program had two possible program goals besides a lunar landing: a crewed lunar orbital flight and an orbital laboratory station in Earth orbit, at times called Project Olympus. This lasted until the Kennedy administration sped ahead and focused the Apollo program on what was originally planned to come after it, the lunar landing. The Project Olympus space station, or orbiting laboratory of the Apollo program, was proposed as a structure to be unfolded in space, with the Apollo command and service module docking to it. While the station was never realized, the Apollo command and service module did perform docking maneuvers and eventually served as a lunar orbiting module used for station-like purposes.
Before that, the Gemini program paved the way, achieving the first (undocked) space rendezvous with Gemini 6 and Gemini 7 in 1965. Subsequently, in 1966, Neil Armstrong performed the first space docking on Gemini 8, while in 1967 Kosmos 186 and Kosmos 188 became the first spacecraft to dock automatically.
In January 1969, Soyuz 4 and Soyuz 5 performed the first docked, but not internal, crew transfer, and in March, Apollo 9 performed the first ever internal transfer of astronauts between two docked spaceships.
Salyut, Almaz and Skylab
In 1971, the Soviet Union developed and launched the world's first space station, Salyut 1. The Almaz and Salyut series were eventually joined by Skylab, Mir, and Tiangong-1 and Tiangong-2. The hardware developed during the initial Soviet efforts remains in use, with evolved variants comprising a considerable part of the ISS, orbiting today. Each crew member stays aboard the station for weeks or months but rarely more than a year.
Early stations were monolithic designs that were constructed and launched in one piece, generally containing all their supplies and experimental equipment. A crew would then be launched to join the station and perform research. After the supplies had been consumed, the station was abandoned.
The first space station was Salyut 1, which was launched by the Soviet Union on April 19, 1971. The early Soviet stations were all designated "Salyut", but among these, there were two distinct types: civilian and military. The military stations, Salyut 2, Salyut 3, and Salyut 5, were also known as Almaz stations.
The civilian stations Salyut 6 and Salyut 7 were built with two docking ports, which allowed a second crew to visit, bringing a new spacecraft with them; the Soyuz ferry could spend 90 days in space, at which point it needed to be replaced by a fresh Soyuz spacecraft. This allowed a crew to man the station continuously. The American Skylab (1973–1979) was also equipped with two docking ports, like second-generation stations, but the extra port was never used. The presence of a second port on the new stations allowed Progress supply vehicles to be docked to the station, meaning that fresh supplies could be brought to aid long-duration missions. This concept was expanded on Salyut 7, which "hard docked" with a TKS tug shortly before it was abandoned; this served as a proof of concept for the use of modular space stations. The later Salyuts may reasonably be seen as a transition between the two groups.
Mir
Unlike previous stations, the Soviet space station Mir had a modular design; a core unit was launched, and additional modules, generally with a specific role, were later added. This method allows for greater flexibility in operation, as well as removing the need for a single immensely powerful launch vehicle. Modular stations are also designed from the outset to have their supplies provided by logistical support craft, which allows for a longer lifetime at the cost of requiring regular support launches.
International Space Station
The ISS is divided into two main sections, the Russian Orbital Segment (ROS) and the US Orbital Segment (USOS). The first module of the ISS, Zarya, was launched in 1998.
The Russian Orbital Segment's "second-generation" modules were able to launch on Proton, fly to the correct orbit, and dock themselves without human intervention. Connections are automatically made for power, data, gases, and propellants. The Russian autonomous approach allows the assembly of space stations prior to the launch of crew.
The Russian "second-generation" modules are able to be reconfigured to suit changing needs. As of 2009, RKK Energia was considering the removal and reuse of some modules of the ROS on the Orbital Piloted Assembly and Experiment Complex after the end of mission is reached for the ISS. However, in September 2017, the head of Roscosmos said that the technical feasibility of separating the station to form OPSEK had been studied, and there were now no plans to separate the Russian segment from the ISS.
In contrast, the main US modules launched on the Space Shuttle and were attached to the ISS by crews during EVAs. Connections for electrical power, data, propulsion, and cooling fluids are also made at this time, resulting in an integrated block of modules that is not designed for disassembly and must be deorbited as one mass.
Axiom Station is a planned commercial space station that will begin as a single module docked to the ISS. Axiom Space gained NASA approval for the venture in January 2020. The first module, the Payload Power Transfer Module (PPTM), is expected to be launched to the ISS no earlier than 2027. PPTM will remain at the ISS until the launch of Axiom's Habitat One (Hab-1) module about one year later, after which it will detach from the ISS to join with Hab-1.
Tiangong program
China's first space laboratory, Tiangong-1 was launched in September 2011. The uncrewed Shenzhou 8 then successfully performed an automatic rendezvous and docking in November 2011. The crewed Shenzhou 9 then docked with Tiangong-1 in June 2012, followed by the crewed Shenzhou 10 in 2013.
According to the China Manned Space Engineering Office, Tiangong-1 reentered over the South Pacific Ocean, northwest of Tahiti, on 2 April 2018 at 00:15 UTC.
A second space laboratory, Tiangong-2, was launched in September 2016, and the plan for Tiangong-3 was merged into Tiangong-2. The station made a controlled reentry on 19 July 2019 and burned up over the South Pacific Ocean.
The Tiangong Space Station, the first module of which was launched on 29 April 2021, is in low Earth orbit, 340 to 450 kilometres above the Earth, at an orbital inclination of 42° to 43°. Its planned construction, via 11 launches across 2021–2022, was intended to extend the core module with two laboratory modules, capable of hosting up to six crew.
Planned projects
Architecture
Two types of space stations have been flown: monolithic and modular. Monolithic stations consist of a single vehicle and are launched by one rocket. Modular stations consist of two or more separate vehicles that are launched independently and docked on orbit. Modular stations are currently preferred due to lower costs and greater flexibility.
A space station is a complex vehicle that must incorporate many interrelated subsystems, including structure, electrical power, thermal control, attitude determination and control, orbital navigation and propulsion, automation and robotics, computing and communications, environmental and life support, crew facilities, and crew and cargo transportation. Stations must serve a useful role, which drives the capabilities required.
Orbit and purpose
Materials
Space stations are made from durable materials that must withstand space radiation, internal pressure, micrometeoroids, and the thermal effects of the sun and cold temperatures for long periods of time. They are typically made from stainless steel, titanium and high-quality aluminum alloys, with layers of insulation, such as Kevlar, as ballistic shield protection.
The International Space Station (ISS) has a single inflatable module, the Bigelow Expandable Activity Module, which was installed in April 2016 after being delivered to the ISS on the SpaceX CRS-8 resupply mission. This module, based on NASA research in the 1990s, weighs and was transported while compressed before being attached to the ISS by the space station arm and inflated to provide a volume. Whilst it was initially designed for a two-year lifetime, it was still attached and being used for storage in August 2022.
Construction
Salyut 1 – first space station, launched in 1971
Skylab – launched in a single launch in May 1973
Mir – first modular space station assembled in orbit
International Space Station – modular space station assembled in orbit
Tiangong space station – Chinese space station
Habitability
The space station environment presents a variety of challenges to human habitability, including short-term problems such as the limited supplies of air, water, and food and the need to manage waste heat, and long-term ones such as weightlessness and relatively high levels of ionizing radiation. These conditions can create long-term health problems for space-station inhabitants, including muscle atrophy, bone deterioration, balance disorders, eyesight disorders, and elevated risk of cancer.
Future space habitats may attempt to address these issues, and could be designed for occupation beyond the weeks or months that current missions typically last. Possible solutions include the creation of artificial gravity by a rotating structure, the inclusion of radiation shielding, and the development of on-site agricultural ecosystems. Some designs might even accommodate large numbers of people, becoming essentially "cities in space" where people would reside semi-permanently.
Molds that develop aboard space stations can produce acids that degrade metal, glass, and rubber. Despite an expanding array of molecular approaches for detecting microorganisms, rapid and robust means of assessing the differential viability of the microbial cells, as a function of phylogenetic lineage, remain elusive.
Power
Like uncrewed spacecraft close to the Sun, space stations in the inner Solar System generally rely on solar panels to obtain power.
Life support
Space station air and water is brought up in spacecraft from Earth before being recycled. Supplemental oxygen can be supplied by a solid fuel oxygen generator.
Communications
Military
The last military-use space station was the Soviet Salyut 5, which was launched under the Almaz program and orbited between 1976 and 1977.
Occupation
Space stations have so far harboured the only long-duration direct human presence in space. After the first station, Salyut 1 (1971), and its ill-fated Soyuz 11 crew, space stations have been operated consecutively since Skylab (1973–1974), allowing a progression of long-duration direct human presence in space. Long-duration resident crews have been joined by visiting crews since 1977 (Salyut 6), and stations have been occupied by consecutive crews since 1987, beginning with the Salyut successor Mir. Uninterrupted occupation of stations has been achieved since the operational transition from Mir to the ISS, first occupied in 2000. The ISS has hosted the highest number of people in orbit at the same time, reaching 13 for the first time during the eleven-day docking of STS-127 in 2009.
The duration record for a single spaceflight is 437.75 days, set by Valeri Polyakov aboard Mir from 1994 to 1995. Four cosmonauts have completed single missions of over a year, all aboard Mir.
Operations
Resupply and crew vehicles
Many spacecraft have been used to dock with space stations. Soyuz flight T-15, in March to July 1986, was the first and, as of 2016, the only spacecraft to visit two different space stations, Mir and Salyut 7.
International Space Station
The International Space Station has been supported by many different spacecraft.
Future
Sierra Nevada Corporation Dream Chaser
New Space-Station Resupply Vehicle (HTV-X)
Roscosmos Orel
Current
Northrop Grumman Cygnus (2013–present)
Roscosmos Progress (multiple variants) (2000–present)
Energia Soyuz (multiple variants) (2001–present)
SpaceX Dragon 2 (2020–present)
Retired
Automated Transfer Vehicle (ATV) (2008–2015)
H-II Transfer Vehicle (HTV) (2009–2020)
Space Shuttle (1998–2011)
SpaceX Dragon 1 (2012–2020)
Tiangong space station
The Tiangong space station is supported by the following spacecraft:
Shenzhou (2021–present)
Tianzhou (2021–present)
Tiangong program
The Tiangong program relied on the following spacecraft.
Shenzhou program (2011–2016)
Mir
The Mir space station was in orbit from 1986 to 2001 and was supported and visited by the following spacecraft:
Roscosmos Progress (multiple variants) (1986–2000) – An additional Progress spacecraft was used in 2001 to deorbit Mir.
Energia Soyuz (multiple variants) (1986–2000)
Space Shuttle (1995–1998)
Skylab
Apollo command and service module (1973–1974)
Salyut programme
Energia Soyuz (multiple variants) (1971–1986)
Docking and berthing
Maintenance
Research
Research conducted on Mir included the first long-term space-based ESA research project, EUROMIR 95, which lasted 179 days and included 35 scientific experiments.
During the first 20 years of operation of the International Space Station, there were around 3,000 scientific experiments in the areas of biology and biotech, technology development, educational activities, human research, physical science, and Earth and space science.
Materials research
Space stations provide a useful platform to test the performance, stability, and survivability of materials in space. This research follows on from previous experiments such as the Long Duration Exposure Facility, a free-flying experimental platform which flew from April 1984 until January 1990.
Mir Environmental Effects Payload (1996–1997)
Materials International Space Station Experiment (2001–present)
Human research
Botany
Space tourism
On the International Space Station, guests sometimes pay $50 million to spend a week living as an astronaut. Space tourism is slated to expand once launch costs are lowered sufficiently; by the end of the 2020s, space hotels may become relatively common.
Finance
As it currently costs on average $10,000 to $25,000 per kilogram to launch anything into orbit, space stations remain the exclusive province of government space agencies, which are primarily funded by taxation. In the case of the International Space Station, space tourism makes up a small portion of money to run it.
Legacy
Technology spinoffs
International cooperation and economy
Cultural impact
Space settlement
| Technology | Space | null |
50903 | https://en.wikipedia.org/wiki/Wavelet | Wavelet | A wavelet is a wave-like oscillation with an amplitude that begins at zero, increases or decreases, and then returns to zero one or more times. Wavelets are termed "brief oscillations". A taxonomy of wavelets has been established, based on the number and direction of their pulses. Wavelets are imbued with specific properties that make them useful for signal processing.
For example, a wavelet could be created to have a frequency of middle C and a short duration of roughly one tenth of a second. If this wavelet were to be convolved with a signal created from the recording of a melody, then the resulting signal would be useful for determining when the middle C note appeared in the song. Mathematically, a wavelet correlates with a signal if a portion of the signal is similar. Correlation is at the core of many practical wavelet applications.
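As a rough numerical sketch of this idea (not from the original article; the sample rate, window shape, and toy melody below are illustrative assumptions), one might correlate a short middle-C wavelet with a signal in Python:

```python
import numpy as np

# Hypothetical sketch: correlate a short middle-C wavelet with a toy
# "melody" to find where the note occurs. Sample rate, window width
# and test signal are all assumed for illustration.

fs = 8000                          # assumed sample rate (Hz)
t = np.arange(0, 0.1, 1 / fs)      # ~0.1 s wavelet support

f_c = 261.63                       # middle C (Hz)
window = np.exp(-0.5 * ((t - t.mean()) / 0.015) ** 2)
wavelet = window * np.cos(2 * np.pi * f_c * (t - t.mean()))
wavelet /= np.linalg.norm(wavelet)           # unit energy

sig_t = np.arange(0, 1.0, 1 / fs)            # 1 s toy melody:
melody = np.where((sig_t > 0.5) & (sig_t < 0.7),
                  np.sin(2 * np.pi * f_c * sig_t), 0.0)

# Correlation = convolution with the time-reversed wavelet; its
# magnitude peaks where the signal locally resembles the wavelet.
response = np.convolve(melody, wavelet[::-1], mode="same")
print("middle C detected near t =",
      round(sig_t[np.argmax(np.abs(response))], 2), "s")
```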
As a mathematical tool, wavelets can be used to extract information from many kinds of data, including audio signals and images. Sets of wavelets are needed to analyze data fully. "Complementary" wavelets decompose a signal without gaps or overlaps so that the decomposition process is mathematically reversible. Thus, sets of complementary wavelets are useful in wavelet-based compression/decompression algorithms, where it is desirable to recover the original information with minimal loss.
In formal terms, this representation is a wavelet series representation of a square-integrable function with respect to either a complete, orthonormal set of basis functions, or an overcomplete set or frame of a vector space, for the Hilbert space of square-integrable functions. This is accomplished through coherent states.
In classical physics, the diffraction phenomenon is described by the Huygens–Fresnel principle that treats each point in a propagating wavefront as a collection of individual spherical wavelets. The characteristic bending pattern is most pronounced when a wave from a coherent source (such as a laser) encounters a slit/aperture that is comparable in size to its wavelength. This is due to the addition, or interference, of different points on the wavefront (or, equivalently, each wavelet) that travel by paths of different lengths to the registering surface. Multiple, closely spaced openings (e.g., a diffraction grating), can result in a complex pattern of varying intensity.
Etymology
The word wavelet has been used for decades in digital signal processing and exploration geophysics. The equivalent French word ondelette meaning "small wave" was used by Jean Morlet and Alex Grossmann in the early 1980s.
Wavelet theory
Wavelet theory is applicable to several subjects. All wavelet transforms may be considered forms of time-frequency representation for continuous-time (analog) signals and so are related to harmonic analysis. The discrete wavelet transform (continuous in time) of a discrete-time (sampled) signal, obtained using discrete-time filter banks of dyadic (octave-band) configuration, is a wavelet approximation to that signal. The coefficients of such a filter bank are called the shift and scaling coefficients in wavelet nomenclature. These filter banks may contain either finite impulse response (FIR) or infinite impulse response (IIR) filters. The wavelets forming a continuous wavelet transform (CWT) are subject to the uncertainty principle of Fourier analysis and the corresponding sampling theory: given a signal with some event in it, one cannot assign simultaneously an exact time and frequency response scale to that event. The product of the uncertainties of time and frequency response scale has a lower bound. Thus, in the scaleogram of a continuous wavelet transform of this signal, such an event marks an entire region in the time-scale plane, instead of just one point. Also, discrete wavelet bases may be considered in the context of other forms of the uncertainty principle.
Wavelet transforms are broadly divided into three classes: continuous, discrete and multiresolution-based.
Continuous wavelet transforms (continuous shift and scale parameters)
In continuous wavelet transforms, a given signal of finite energy is projected on a continuous family of frequency bands (or similar subspaces of the Lp function space L2(R)). For instance the signal may be represented on every frequency band of the form [f, 2f] for all positive frequencies f > 0. Then, the original signal can be reconstructed by a suitable integration over all the resulting frequency components.
The frequency bands or subspaces (sub-bands) are scaled versions of a subspace at scale 1. This subspace in turn is in most situations generated by the shifts of one generating function ψ in L2(R), the mother wavelet. For the example of the scale one frequency band [1, 2] this function is

\psi(t) = 2\,\operatorname{sinc}(2t) - \operatorname{sinc}(t) = \frac{\sin(2\pi t) - \sin(\pi t)}{\pi t}

with the (normalized) sinc function \operatorname{sinc}(t) = \sin(\pi t)/(\pi t). This is the Shannon wavelet; Meyer's wavelet and other mother wavelets are further examples.
The subspace of scale a or frequency band [1/a, 2/a] is generated by the functions (sometimes called child wavelets)

\psi_{a,b}(t) = \frac{1}{\sqrt{a}}\,\psi\left(\frac{t-b}{a}\right),

where a is positive and defines the scale and b is any real number and defines the shift. The pair (a, b) defines a point in the right half-plane \mathbb{R}_+ \times \mathbb{R}.
The projection of a function x onto the subspace of scale a then has the form

x_a(t) = \int_{\mathbb{R}} WT_\psi\{x\}(a,b)\,\psi_{a,b}(t)\,db

with wavelet coefficients

WT_\psi\{x\}(a,b) = \langle x, \psi_{a,b} \rangle = \int_{\mathbb{R}} x(t)\,\overline{\psi_{a,b}(t)}\,dt.
For the analysis of the signal x, one can assemble the wavelet coefficients into a scaleogram of the signal.
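A minimal Python sketch of these formulas, under the assumption of a Mexican hat mother wavelet and an arbitrary grid of scales (both illustrative choices, not prescribed by the article), samples the coefficients WT(a, b) by convolution and stacks them into a scaleogram:

```python
import numpy as np

def mexican_hat(t):
    # a common real-valued mother wavelet, chosen here for illustration
    return (1.0 - t**2) * np.exp(-t**2 / 2.0)

def cwt_scaleogram(x, dt, scales):
    rows = []
    for a in scales:
        tau = np.arange(-4.0 * a, 4.0 * a, dt)    # effective support of psi at scale a
        psi = mexican_hat(tau / a) / np.sqrt(a)   # child wavelet psi_ab
        # convolving with the reversed wavelet evaluates <x, psi_ab>
        # for every shift b at once (psi is real, so no conjugate needed)
        rows.append(np.convolve(x, psi[::-1], mode="same") * dt)
    return np.array(rows)

dt = 0.005
t = np.arange(0.0, 8.0, dt)
# test signal: 2 Hz for the first half, 8 Hz for the second
x = np.where(t < 4.0, np.sin(2 * np.pi * 2 * t), np.sin(2 * np.pi * 8 * t))
W = cwt_scaleogram(x, dt, scales=np.linspace(0.05, 0.5, 30))
print(W.shape)   # (30, 1600): one row of wavelet coefficients per scale
```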
See a list of some Continuous wavelets.
Discrete wavelet transforms (discrete shift and scale parameters, continuous in time)
It is computationally impossible to analyze a signal using all wavelet coefficients, so one may wonder if it is sufficient to pick a discrete subset of the upper half-plane to be able to reconstruct a signal from the corresponding wavelet coefficients. One such system is the affine system for some real parameters a > 1, b > 0. The corresponding discrete subset of the half-plane consists of all the points (a^m, n\,b\,a^m) with m, n in \mathbb{Z}. The corresponding child wavelets are now given as

\psi_{m,n}(t) = a^{-m/2}\,\psi(a^{-m} t - n b).
A sufficient condition for the reconstruction of any signal x of finite energy by the formula

x(t) = \sum_{m \in \mathbb{Z}} \sum_{n \in \mathbb{Z}} \langle x, \psi_{m,n} \rangle \cdot \psi_{m,n}(t)

is that the functions \{\psi_{m,n} : m, n \in \mathbb{Z}\} form an orthonormal basis of L2(R).
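As an illustrative check (assuming the Haar mother wavelet and the common choice a = 2, b = 1, neither of which is prescribed above), one can verify numerically that a few members of such an affine system are orthonormal:

```python
import numpy as np

# Sketch of the affine system with a = 2, b = 1 and the Haar mother
# wavelet: numerically check that distinct members of {psi_mn} are
# orthonormal, the sufficient condition just stated.

def haar(t):
    return np.where((t >= 0) & (t < 0.5), 1.0,
                    np.where((t >= 0.5) & (t < 1.0), -1.0, 0.0))

def psi(t, m, n, a=2.0, b=1.0):
    # psi_{m,n}(t) = a^(-m/2) * psi(a^(-m) t - n b)
    return a ** (-m / 2.0) * haar(a ** (-m) * t - n * b)

dt = 1e-4
t = np.arange(-8.0, 8.0, dt)
pairs = [((0, 0), (0, 0)), ((0, 0), (1, 0)), ((0, 0), (0, 1))]
for (m1, n1), (m2, n2) in pairs:
    inner = np.sum(psi(t, m1, n1) * psi(t, m2, n2)) * dt
    print((m1, n1), (m2, n2), "->", round(inner, 4))
# prints approximately 1.0, 0.0, 0.0: an orthonormal family
```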
Multiresolution based discrete wavelet transforms (continuous in time)
In any discretised wavelet transform, there are only a finite number of wavelet coefficients for each bounded rectangular region in the upper halfplane. Still, each coefficient requires the evaluation of an integral. In special situations this numerical complexity can be avoided if the scaled and shifted wavelets form a multiresolution analysis. This means that there has to exist an auxiliary function, the father wavelet φ in L2(R), and that a is an integer. A typical choice is a = 2 and b = 1. The most famous pair of father and mother wavelets is the Daubechies 4-tap wavelet. Note that not every orthonormal discrete wavelet basis can be associated to a multiresolution analysis; for example, the Journe wavelet admits no multiresolution analysis.
From the mother and father wavelets one constructs the subspaces

V_m = \operatorname{span}(\varphi_{m,n} : n \in \mathbb{Z}), \quad \text{where } \varphi_{m,n}(t) = 2^{-m/2}\,\varphi(2^{-m} t - n),

W_m = \operatorname{span}(\psi_{m,n} : n \in \mathbb{Z}), \quad \text{where } \psi_{m,n}(t) = 2^{-m/2}\,\psi(2^{-m} t - n).
The father wavelet keeps the time domain properties, while the mother wavelet keeps the frequency domain properties.
From these it is required that the sequence

\{0\} \subset \cdots \subset V_1 \subset V_0 \subset V_{-1} \subset \cdots \subset L^2(\mathbb{R})

forms a multiresolution analysis of L2 and that the subspaces \ldots, W_1, W_0, W_{-1}, \ldots are the orthogonal "differences" of the above sequence, that is, W_m is the orthogonal complement of V_m inside the subspace V_{m-1}:

V_{m-1} = V_m \oplus W_m.

In analogy to the sampling theorem one may conclude that the space V_m with sampling distance 2^m more or less covers the frequency baseband from 0 to 2^{-m-1}. As orthogonal complement, W_m roughly covers the band [2^{-m-1}, 2^{-m}].
From those inclusions and orthogonality relations, especially V_0 \oplus W_0 = V_{-1}, follows the existence of sequences h = \{h_n\}_{n \in \mathbb{Z}} and g = \{g_n\}_{n \in \mathbb{Z}} that satisfy the identities

h_n = \langle \varphi_{0,0}, \varphi_{-1,n} \rangle \quad \text{so that} \quad \varphi(t) = \sqrt{2} \sum_{n \in \mathbb{Z}} h_n\,\varphi(2t - n)

and

g_n = \langle \psi_{0,0}, \varphi_{-1,n} \rangle \quad \text{so that} \quad \psi(t) = \sqrt{2} \sum_{n \in \mathbb{Z}} g_n\,\varphi(2t - n).

The second identity of the first pair is a refinement equation for the father wavelet φ. Both pairs of identities form the basis for the algorithm of the fast wavelet transform.
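A minimal sketch of one analysis step of the fast wavelet transform that these identities enable, using Haar filters for brevity (longer orthogonal filters such as the Daubechies 4-tap pair slot in the same way):

```python
import numpy as np

# One analysis step: filter with h (scaling) and g (wavelet),
# then downsample by two. Haar filters are used for brevity.

h = np.array([1.0, 1.0]) / np.sqrt(2)    # low-pass (scaling) filter
g = np.array([1.0, -1.0]) / np.sqrt(2)   # high-pass (wavelet) filter

def fwt_step(s):
    # full convolution, then keep every second sample starting at
    # index len(filter) - 1 so each output covers one sample pair
    approx = np.convolve(s, h[::-1])[1::2]   # coefficients in V_m
    detail = np.convolve(s, g[::-1])[1::2]   # coefficients in W_m
    return approx, detail

s = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = fwt_step(s)
print("approximation:", a)   # pairwise sums divided by sqrt(2)
print("detail:       ", d)   # pairwise differences divided by sqrt(2)
```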
From the multiresolution analysis derives the orthogonal decomposition of the space L2 as

L^2(\mathbb{R}) = \overline{\bigoplus_{m \in \mathbb{Z}} W_m}.

For any signal or function S \in L^2(\mathbb{R}) this gives a representation in basis functions of the corresponding subspaces as

S = \sum_{m \in \mathbb{Z}} \sum_{n \in \mathbb{Z}} c_{m,n}\,\psi_{m,n},

where the coefficients are

c_{m,n} = \langle S, \psi_{m,n} \rangle.
Time-causal wavelets
For processing temporal signals in real time, it is essential that the wavelet filters do not access signal values from the future, and that minimal temporal latencies can be obtained. Time-causal wavelet representations have been developed by Szu et al. and by Lindeberg, with the latter method also involving a memory-efficient time-recursive implementation.
Mother wavelet
For practical applications, and for efficiency reasons, one prefers continuously differentiable functions with compact support as mother (prototype) wavelet functions. However, to satisfy analytical requirements (in the continuous WT) and in general for theoretical reasons, one chooses the wavelet functions from a subspace of the space L^1(\mathbb{R}) \cap L^2(\mathbb{R}). This is the space of Lebesgue measurable functions that are both absolutely integrable and square integrable in the sense that

\int_{-\infty}^{\infty} |\psi(t)|\,dt < \infty

and

\int_{-\infty}^{\infty} |\psi(t)|^2\,dt < \infty.
Being in this space ensures that one can formulate the conditions of zero mean and square norm one:

\int_{-\infty}^{\infty} \psi(t)\,dt = 0

is the condition for zero mean, and

\int_{-\infty}^{\infty} |\psi(t)|^2\,dt = 1

is the condition for square norm one.
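These two conditions are easy to check numerically. A sketch, using the standard normalized Mexican hat wavelet as an example (an illustrative choice, not mandated by the text):

```python
import numpy as np

# Numerical check of the two conditions above for the normalized
# Mexican hat wavelet psi(t) = (2/(sqrt(3)*pi^(1/4))) (1 - t^2) e^(-t^2/2).

dt = 1e-4
t = np.arange(-10.0, 10.0, dt)
psi = (2.0 / (np.sqrt(3.0) * np.pi ** 0.25)) * (1.0 - t**2) * np.exp(-t**2 / 2.0)

zero_mean = np.sum(psi) * dt        # integral of psi(t) dt    -> ~0
unit_norm = np.sum(psi**2) * dt     # integral of |psi|^2 dt   -> ~1
print(f"mean = {zero_mean:.6f}, squared norm = {unit_norm:.6f}")
```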
For ψ to be a wavelet for the continuous wavelet transform (see there for exact statement), the mother wavelet must satisfy an admissibility criterion (loosely speaking, a kind of half-differentiability) in order to get a stably invertible transform.
For the discrete wavelet transform, one needs at least the condition that the wavelet series is a representation of the identity in the space L2(R). Most constructions of discrete WT make use of the multiresolution analysis, which defines the wavelet by a scaling function. This scaling function itself is a solution to a functional equation.
In most situations it is useful to restrict ψ to be a continuous function with a higher number M of vanishing moments, i.e. for all integers m < M,

\int_{-\infty}^{\infty} t^m\,\psi(t)\,dt = 0.
The mother wavelet is scaled (or dilated) by a factor of a and translated (or shifted) by a factor of b to give (under Morlet's original formulation):

\psi_{a,b}(t) = \frac{1}{\sqrt{a}}\,\psi\left(\frac{t-b}{a}\right).
For the continuous WT, the pair (a, b) varies over the full half-plane \mathbb{R}_+ \times \mathbb{R}; for the discrete WT this pair varies over a discrete subset of it, which is also called the affine group.
These functions are often incorrectly referred to as the basis functions of the (continuous) transform. In fact, as in the continuous Fourier transform, there is no basis in the continuous wavelet transform. Time-frequency interpretation uses a subtly different formulation (after Delprat).
A further restriction is that the mother wavelet \psi(t) must have a finite time interval.
Comparisons with Fourier transform (continuous-time)
The wavelet transform is often compared with the Fourier transform, in which signals are represented as a sum of sinusoids. In fact, the Fourier transform can be viewed as a special case of the continuous wavelet transform with the choice of the mother wavelet

\psi(t) = e^{-2\pi i t}.
The main difference in general is that wavelets are localized in both time and frequency whereas the standard Fourier transform is only localized in frequency. The short-time Fourier transform (STFT) is similar to the wavelet transform, in that it is also time and frequency localized, but there are issues with the frequency/time resolution trade-off.
In particular, assuming a rectangular window region, one may think of the STFT as a transform with a slightly different kernel

\psi(t) = g(t - u)\,e^{-2\pi i t},

where g(t - u) can often be written as \operatorname{rect}\left(\frac{t-u}{\Delta_t}\right), where \Delta_t and u respectively denote the length and temporal offset of the windowing function. Using Parseval's theorem, one may define the wavelet's energy as

E = \int_{-\infty}^{\infty} |\psi(t)|^2\,dt = \int_{-\infty}^{\infty} |\hat{\psi}(f)|^2\,df.
From this, the square of the temporal support of the window offset by time u is given by

\sigma_t^2 = \frac{1}{E} \int_{-\infty}^{\infty} (t - u)^2\,|\psi(t)|^2\,dt

and the square of the spectral support of the window acting on a frequency \xi by

\sigma_f^2 = \frac{1}{E} \int_{-\infty}^{\infty} (f - \xi)^2\,|\hat{\psi}(f)|^2\,df.
Multiplication with a rectangular window in the time domain corresponds to convolution with a \operatorname{sinc} function in the frequency domain, resulting in spurious ringing artifacts for short/localized temporal windows. With the continuous-time Fourier transform, \Delta_t \to \infty, and this convolution is with a delta function in Fourier space, resulting in the true Fourier transform of the signal x(t). The window function may be some other apodizing filter, such as a Gaussian. The choice of windowing function will affect the approximation error relative to the true Fourier transform.
A given resolution cell's time-bandwidth product may not be exceeded with the STFT. All STFT basis elements maintain a uniform spectral and temporal support for all temporal shifts or offsets, thereby attaining an equal resolution in time for lower and higher frequencies. The resolution is purely determined by the sampling width.
In contrast, the wavelet transform's multiresolutional properties enable large temporal supports for lower frequencies while maintaining short temporal widths for higher frequencies by the scaling properties of the wavelet transform. This property extends conventional time-frequency analysis into time-scale analysis, as the sketch below illustrates.
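A small numerical sketch of this contrast (the Mexican hat wavelet and the scale grid below are illustrative assumptions): the RMS temporal width of a child wavelet grows in proportion to its scale, whereas an STFT window keeps one fixed width at every analysis frequency:

```python
import numpy as np

# The RMS temporal width of a child wavelet grows linearly with the
# scale a (coarse scales = low frequencies); an STFT window would
# report one constant width for all frequencies.

def mexican_hat(t):
    return (1.0 - t**2) * np.exp(-t**2 / 2.0)

dt = 1e-3
t = np.arange(-40.0, 40.0, dt)
for a in (0.5, 1.0, 4.0):
    psi = mexican_hat(t / a) / np.sqrt(a)        # child wavelet at scale a
    energy = np.sum(psi**2) * dt
    width = np.sqrt(np.sum(t**2 * psi**2) * dt / energy)
    print(f"scale a = {a}: RMS temporal width = {width:.3f}")
# the printed widths come out proportional to a
```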
The discrete wavelet transform is less computationally complex, taking O(N) time as compared to O(N log N) for the fast Fourier transform (FFT). This computational advantage is not inherent to the transform, but reflects the choice of a logarithmic division of frequency, in contrast to the equally spaced frequency divisions of the FFT, which uses the same basis functions as the discrete Fourier transform (DFT). This complexity only applies when the filter size has no relation to the signal size. A wavelet without compact support such as the Shannon wavelet would require O(N²). (For instance, a logarithmic Fourier transform also exists with O(N) complexity, but the original signal must be sampled logarithmically in time, which is only useful for certain types of signals.)
Definition of a wavelet
A wavelet (or a wavelet family) can be defined in various ways:
Scaling filter
An orthogonal wavelet is entirely defined by the scaling filter – a low-pass finite impulse response (FIR) filter of length 2N and sum 1. In biorthogonal wavelets, separate decomposition and reconstruction filters are defined.
For analysis with orthogonal wavelets the high pass filter is calculated as the quadrature mirror filter of the low pass, and reconstruction filters are the time reverse of the decomposition filters.
Daubechies and Symlet wavelets can be defined by the scaling filter.
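A short sketch of this construction, using the published Daubechies 4-tap (db2) scaling coefficients, normalized here to sum to 1 to match the convention above; the high-pass filter is obtained as the quadrature mirror of the low-pass:

```python
import numpy as np

# Derive the high-pass (wavelet) filter from an orthogonal low-pass
# scaling filter by reversing it and alternating signs.

h = np.array([1 + np.sqrt(3), 3 + np.sqrt(3),
              3 - np.sqrt(3), 1 - np.sqrt(3)]) / 8.0   # scaling filter, sum 1

L = len(h)
g = np.array([(-1) ** n * h[L - 1 - n] for n in range(L)])  # quadrature mirror

print("low-pass sum :", h.sum())    # 1.0
print("high-pass sum:", g.sum())    # 0.0 (zero mean, as a wavelet filter)
```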
Scaling function
Wavelets are defined by the wavelet function ψ(t) (i.e. the mother wavelet) and scaling function φ(t) (also called father wavelet) in the time domain.
The wavelet function is in effect a band-pass filter, and scaling it for each level halves its bandwidth. This creates the problem that in order to cover the entire spectrum, an infinite number of levels would be required. The scaling function filters the lowest level of the transform and ensures that all the spectrum is covered.
For a wavelet with compact support, φ(t) can be considered finite in length and is equivalent to the scaling filter g.
Meyer wavelets can be defined by scaling functions
Wavelet function
The wavelet only has a time domain representation as the wavelet function ψ(t).
For instance, Mexican hat wavelets can be defined by a wavelet function. See a list of a few continuous wavelets.
History
The development of wavelets can be linked to several separate trains of thought, starting with Alfréd Haar's work in the early 20th century. Later work by Dennis Gabor yielded Gabor atoms (1946), which are constructed similarly to wavelets, and applied to similar purposes.
Notable contributions to wavelet theory since then can be attributed to George Zweig’s discovery of the continuous wavelet transform (CWT) in 1975 (originally called the cochlear transform and discovered while studying the reaction of the ear to sound), Pierre Goupillaud, Alex Grossmann and Jean Morlet's formulation of what is now known as the CWT (1982), Jan-Olov Strömberg's early work on discrete wavelets (1983), the Le Gall–Tabatabai (LGT) 5/3-taps non-orthogonal filter bank with linear phase (1988), Ingrid Daubechies' orthogonal wavelets with compact support (1988), Stéphane Mallat's non-orthogonal multiresolution framework (1989), Ali Akansu's binomial QMF (1990), Nathalie Delprat's time-frequency interpretation of the CWT (1991), Newland's harmonic wavelet transform (1993), and set partitioning in hierarchical trees (SPIHT) developed by Amir Said with William A. Pearlman in 1996.
The JPEG 2000 standard was developed from 1997 to 2000 by a Joint Photographic Experts Group (JPEG) committee chaired by Touradj Ebrahimi (later the JPEG president). In contrast to the DCT algorithm used by the original JPEG format, JPEG 2000 instead uses discrete wavelet transform (DWT) algorithms. It uses the CDF 9/7 wavelet transform (developed by Ingrid Daubechies in 1992) for its lossy compression algorithm, and the Le Gall–Tabatabai (LGT) 5/3 discrete-time filter bank (developed by Didier Le Gall and Ali J. Tabatabai in 1988) for its lossless compression algorithm. JPEG 2000 technology, which includes the Motion JPEG 2000 extension, was selected as the video coding standard for digital cinema in 2004.
Timeline
First wavelet (Haar's wavelet) by Alfréd Haar (1909)
Since the 1970s: George Zweig, Jean Morlet, Alex Grossmann
Since the 1980s: Yves Meyer, Didier Le Gall, Ali J. Tabatabai, Stéphane Mallat, Ingrid Daubechies, Ronald Coifman, Ali Akansu, Victor Wickerhauser
Since the 1990s: Nathalie Delprat, Newland, Amir Said, William A. Pearlman, Touradj Ebrahimi, JPEG 2000
Wavelet transforms
A wavelet is a mathematical function used to divide a given function or continuous-time signal into different scale components. Usually one can assign a frequency range to each scale component. Each scale component can then be studied with a resolution that matches its scale. A wavelet transform is the representation of a function by wavelets. The wavelets are scaled and translated copies (known as "daughter wavelets") of a finite-length or fast-decaying oscillating waveform (known as the "mother wavelet"). Wavelet transforms have advantages over traditional Fourier transforms for representing functions that have discontinuities and sharp peaks, and for accurately deconstructing and reconstructing finite, non-periodic and/or non-stationary signals.
Wavelet transforms are classified into discrete wavelet transforms (DWTs) and continuous wavelet transforms (CWTs). Note that both DWT and CWT are continuous-time (analog) transforms. They can be used to represent continuous-time (analog) signals. CWTs operate over every possible scale and translation whereas DWTs use a specific subset of scale and translation values or representation grid.
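The practical difference shows up directly in library interfaces. The hedged Python sketch below, assuming the PyWavelets package (pywt) is installed and using a test signal of our own construction, evaluates a CWT on an arbitrary dense grid of scales and a DWT on a dyadic subset of scales and translations:

```python
import numpy as np
import pywt  # PyWavelets; assumed to be installed

t = np.linspace(0, 1, 1024)
signal = np.sin(2 * np.pi * 16 * t) + np.sin(2 * np.pi * 64 * t)

# CWT: evaluated on an arbitrary, densely sampled grid of scales.
scales = np.arange(1, 128)
cwt_coeffs, freqs = pywt.cwt(signal, scales, 'mexh')

# DWT: a dyadic subset of scales/translations, one coefficient
# array per decomposition level.
dwt_coeffs = pywt.wavedec(signal, 'db4', level=5)
```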
There are a large number of wavelet transforms, each suitable for different applications. For a full list see the list of wavelet-related transforms, but the common ones are listed below:
Continuous wavelet transform (CWT)
Discrete wavelet transform (DWT)
Fast wavelet transform (FWT)
Lifting scheme and generalized lifting scheme
Wavelet packet decomposition (WPD)
Stationary wavelet transform (SWT)
Fractional Fourier transform (FRFT)
Fractional wavelet transform (FRWT)
Generalized transforms
There are a number of generalized transforms of which the wavelet transform is a special case. For example, Yosef Joseph introduced scale into the Heisenberg group, giving rise to a continuous transform space that is a function of time, scale, and frequency. The CWT is a two-dimensional slice through the resulting 3D time-scale-frequency volume.
Another example of a generalized transform is the chirplet transform, of which the CWT is also a two-dimensional slice.
An important application area for generalized transforms involves systems in which high frequency resolution is crucial. For example, darkfield electron optical transforms intermediate between direct and reciprocal space have been widely used in the harmonic analysis of atom clustering, i.e. in the study of crystals and crystal defects. Now that transmission electron microscopes are capable of providing digital images with picometer-scale information on atomic periodicity in nanostructure of all sorts, the range of pattern recognition and strain/metrology applications for intermediate transforms with high frequency resolution (like brushlets and ridgelets) is growing rapidly.
Fractional wavelet transform (FRWT) is a generalization of the classical wavelet transform in the fractional Fourier transform domains. This transform is capable of providing the time- and fractional-domain information simultaneously and representing signals in the time-fractional-frequency plane.
Applications
Generally, an approximation to DWT is used for data compression if a signal is already sampled, and the CWT for signal analysis. Thus, DWT approximation is commonly used in engineering and computer science, and the CWT in scientific research.
Like some other transforms, wavelet transforms can be used to transform data, then encode the transformed data, resulting in effective compression. For example, JPEG 2000 is an image compression standard that uses biorthogonal wavelets. This means that although the frame is overcomplete, it is a tight frame (see types of frames of a vector space), and the same frame functions (except for conjugation in the case of complex wavelets) are used for both analysis and synthesis, i.e., in both the forward and inverse transform. For details see wavelet compression.
A related use is for smoothing/denoising data based on wavelet coefficient thresholding, also called wavelet shrinkage. By adaptively thresholding the wavelet coefficients that correspond to undesired frequency components, smoothing and/or denoising operations can be performed.
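A minimal sketch of this idea in Python, assuming the PyWavelets package (pywt) is installed; the universal threshold and the median-based noise estimate used here are common choices rather than the only ones, and the function name denoise is ours:

```python
import numpy as np
import pywt  # PyWavelets; assumed to be installed

def denoise(noisy, wavelet='db8', level=4):
    """Wavelet-shrinkage sketch: transform, soft-threshold the
    detail coefficients, inverse transform."""
    coeffs = pywt.wavedec(noisy, wavelet, level=level)
    # Noise level estimated from the median absolute deviation of the
    # finest detail coefficients; threshold is the "universal" choice
    # sigma * sqrt(2 log N) of Donoho and Johnstone.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(noisy)))
    shrunk = [coeffs[0]] + [pywt.threshold(c, thresh, mode='soft')
                            for c in coeffs[1:]]
    return pywt.waverec(shrunk, wavelet)
```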
Wavelet transforms are also starting to be used for communication applications. Wavelet OFDM is the basic modulation scheme used in HD-PLC (a power line communications technology developed by Panasonic), and in one of the optional modes included in the IEEE 1901 standard. Wavelet OFDM can achieve deeper notches than traditional FFT OFDM, and wavelet OFDM does not require a guard interval (which usually represents significant overhead in FFT OFDM systems).
As a representation of a signal
Often, signals can be represented well as a sum of sinusoids. However, consider a non-continuous signal with an abrupt discontinuity; this signal can still be represented as a sum of sinusoids, but requires an infinite number, which is an observation known as Gibbs phenomenon. This, then, requires an infinite number of Fourier coefficients, which is not practical for many applications, such as compression. Wavelets are more useful for describing these signals with discontinuities because of their time-localized behavior (both Fourier and wavelet transforms are frequency-localized, but wavelets have an additional time-localization property). Because of this, many types of signals in practice may be non-sparse in the Fourier domain, but very sparse in the wavelet domain. This is particularly useful in signal reconstruction, especially in the recently popular field of compressed sensing. (Note that the short-time Fourier transform (STFT) is also localized in time and frequency, but there are often problems with the frequency-time resolution trade-off. Wavelets are better signal representations because of multiresolution analysis.)
This motivates why wavelet transforms are now being adopted for a vast number of applications, often replacing the conventional Fourier transform. Many areas of physics have seen this paradigm shift, including molecular dynamics, chaos theory, ab initio calculations, astrophysics, gravitational wave transient data analysis, density-matrix localisation, seismology, optics, turbulence and quantum mechanics. This change has also occurred in image processing, EEG, EMG, ECG analyses, brain rhythms, DNA analysis, protein analysis, climatology, human sexual response analysis, general signal processing, speech recognition, acoustics, vibration signals, computer graphics, multifractal analysis, and sparse coding. In computer vision and image processing, the notion of scale space representation and Gaussian derivative operators is regarded as a canonical multi-scale representation.
Wavelet denoising
Suppose we measure a noisy signal x = s + v, where s represents the signal and v represents the noise. Assume s has a sparse representation in a certain wavelet basis, and v ~ N(0, σ²I).
Let the wavelet transform of x be y = Wᵀx = Wᵀs + Wᵀv = p + z, where p is the wavelet transform of the signal component and z is the wavelet transform of the noise component.
Most elements in p are 0 or close to 0, and z ~ N(0, σ²I).
Since W is orthogonal, the estimation problem amounts to the recovery of a signal in i.i.d. Gaussian noise. As p is sparse, one method is to apply a Gaussian mixture model for p.
Assume a prior p ~ aN(0, σ₁²) + (1 − a)N(0, σ₂²), where σ₁² is the variance of "significant" coefficients and σ₂² is the variance of "insignificant" coefficients.
Then p̃ = E(p | y) = τ(y)y, where τ(y) is called the shrinkage factor and depends on the prior variances σ₁² and σ₂². By setting coefficients that fall below a shrinkage threshold to zero, once the inverse transform is applied, an expectedly small amount of signal is lost due to the sparsity assumption. The larger coefficients are expected to consist primarily of signal, while, by the same sparsity assumption, the lower-magnitude coefficients are expected to contain very little signal and the majority of the noise; the zeroing-out operation is therefore expected to remove most of the noise and not much signal. Typically, the above-threshold coefficients are not modified during this process. Some algorithms for wavelet-based denoising may attenuate larger coefficients as well, based on a statistical estimate of the amount of noise expected to be removed by such an attenuation.
Finally, apply the inverse wavelet transform to obtain the estimate s̃ = Wp̃.
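Under the two-component Gaussian prior above, the shrinkage factor has a closed form: each mixture component shrinks y by σᵢ²/(σᵢ² + σ²), weighted by that component's posterior probability. The following Python sketch is our own illustration of this computation; the function name shrinkage_factor and the parameter values are assumptions.

```python
import numpy as np

def shrinkage_factor(y, a, var1, var2, noise_var):
    """tau(y) = E(p | y) / y for the mixture prior
    p ~ a*N(0, var1) + (1 - a)*N(0, var2), observed in N(0, noise_var) noise."""
    def gauss(x, v):
        # Density of N(0, v) evaluated at x.
        return np.exp(-x ** 2 / (2 * v)) / np.sqrt(2 * np.pi * v)
    # Posterior weight of the "significant" component given y.
    w1 = a * gauss(y, var1 + noise_var)
    w2 = (1 - a) * gauss(y, var2 + noise_var)
    r1 = w1 / (w1 + w2)
    # Each component shrinks y by var_i / (var_i + noise_var).
    return (r1 * var1 / (var1 + noise_var)
            + (1 - r1) * var2 / (var2 + noise_var))

y = np.linspace(-5, 5, 11)
tau = shrinkage_factor(y, a=0.1, var1=4.0, var2=0.01, noise_var=1.0)
p_hat = tau * y  # shrunken coefficient estimates
```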
Multiscale climate network
Agarwal et al. proposed wavelet-based advanced linear and nonlinear methods to construct and investigate climate networks at different timescales. Climate networks constructed from sea surface temperature (SST) datasets at different timescales suggested that wavelet-based multi-scale analysis of climatic processes holds promise for better understanding system dynamics that may be missed when processes are analyzed at a single timescale only.
List of wavelets
Discrete wavelets
Beylkin (18)
Biorthogonal nearly coiflet (BNC) wavelets
Coiflet (6, 12, 18, 24, 30)
Cohen–Daubechies–Feauveau wavelet (sometimes referred to as CDF N/P or Daubechies biorthogonal wavelets)
Daubechies wavelet (2, 4, 6, 8, 10, 12, 14, 16, 18, 20, etc.)
Binomial QMF (Also referred to as Daubechies wavelet)
Haar wavelet
Mathieu wavelet
Legendre wavelet
Villasenor wavelet
Symlet
Continuous wavelets
Real-valued
Beta wavelet
Hermitian wavelet
Meyer wavelet
Mexican hat wavelet
Poisson wavelet
Shannon wavelet
Spline wavelet
Strömberg wavelet
Complex-valued
Complex Mexican hat wavelet
fbsp wavelet
Morlet wavelet
Shannon wavelet
Modified Morlet wavelet
| Mathematics | Harmonic analysis | null |
50943 | https://en.wikipedia.org/wiki/Light%20rail | Light rail | Light rail (or light rail transit, abbreviated to LRT) is a form of passenger urban rail transit that uses rolling stock derived from tram technology while also having some features from heavy rapid transit.
The term was coined in 1972 in the United States as an English equivalent for the German word Stadtbahn, meaning "city railway". Different definitions exist in some countries, but in the United States, light rail operates primarily along exclusive rights-of-way and uses either individual tramcars or multiple units coupled together, with a lower capacity and speed than a long heavy rail passenger train or rapid transit system.
Narrowly defined, light rail transit uses rolling stock that is similar to that of a traditional tram, while operating at a higher capacity and speed, often on an exclusive right-of-way. In broader use, it includes tram-like operations mostly on streets. A few light rail networks have characteristics closer to rapid transit or even commuter rail, yet only when these systems are fully grade-separated are they referred to as light metros.
Definition
The term light rail was coined in 1972 by the U.S. Urban Mass Transportation Administration (UMTA; the precursor to the Federal Transit Administration) to describe new streetcar transformations that were taking place in Europe and the United States. In Germany, the term Stadtbahn (to be distinguished from S-Bahn, which stands for Stadtschnellbahn) was used to describe the concept, and many in UMTA wanted to adopt the direct translation, which is city rail (the Norwegian term, bybane, means the same). However, UMTA finally adopted the term light rail instead. Light in this context is used in the sense of "intended for light loads and fast movement", rather than referring to physical weight. The infrastructure investment is also usually lighter than would be found for a heavy rail system.
The American Public Transportation Association (APTA), in its Glossary of Transit Terminology, defines light rail as:
"...a mode of transit service (also called streetcar, tramway, or trolley) operating passenger rail cars singly (or in short, usually two-car or three-car, trains) on fixed rails in the right-of-way that is often separated from other traffic for part or much of the way. Light rail vehicles are typically driven electrically with power being drawn from an overhead electric line via a trolley [pole] or a pantograph; driven by an operator onboard the vehicle; and may have either high platform loading or low-level boarding using steps."
However, some diesel-powered transit is designated light rail, such as the O-Train Trillium Line in Ottawa, Ontario, Canada, the River Line in New Jersey, United States, and the Sprinter in California, United States, which use diesel multiple unit (DMU) cars.
Light rail is different from the British English term light railway, long-used to distinguish railway operations carried out under a less rigorous set of regulations using lighter equipment at lower speeds from mainline railways. Light rail is a generic international English phrase for types of rail systems using modern streetcars/trams, which means more or less the same thing throughout the English-speaking world. Light rail systems can range from trams running in streets along with other traffic, to semi-metro systems having portions of grade separated track.
People movers are even "lighter", in terms of capacity. Monorail is a separate technology that has been more successful in specialized services than in a commuter transit role.
British English versus American English
The use of the generic term light rail avoids some serious incompatibilities between British and American English. The word tram, for instance, is generally used in the UK and many former British colonies to refer to what is known in North America as a streetcar, but in North America tram can instead refer to an aerial tramway, or, in the case of the Disney amusement parks, even a land train. (The usual British term for an aerial tramway is cable car, which in the US usually refers to a ground-level car pulled along by subterranean cables.) The word trolley is often used as a synonym for streetcar in the United States but is usually taken to mean a cart, particularly a shopping cart, in the UK and elsewhere. Many North American transportation planners reserve streetcar for traditional vehicles that operate exclusively in mixed traffic on city streets, while they use light rail to refer to more modern vehicles operating mostly in exclusive rights-of-way, since the two modes may operate side by side, targeted at different passenger groups.
The difference between British English and American English terminology arose in the late 19th century when Americans adopted the term "street railway", rather than "tramway", with the vehicles being called "streetcars" rather than "trams". Some have suggested that the Americans' preference for the term "street railway" at that time was influenced by German emigrants to the United States (who were more numerous than British immigrants in the industrialized Northeast), as it is the same as the German term for the mode, Straßenbahn (meaning "street railway"). A further difference arose because, while Britain abandoned all of its trams after World War II except in Blackpool, eight major North American cities (Toronto, Boston, Philadelphia, San Francisco, Pittsburgh, Newark, Cleveland, and New Orleans) continued to operate large streetcar systems. When these cities upgraded to new technology, they called it light rail to differentiate it from their existing streetcars since some continued to operate both the old and new systems. Since the 1980s, Portland, Oregon, has built all three types of system: a high-capacity light rail system in dedicated lanes and rights-of-way, a low-capacity streetcar system integrated with street traffic, and an aerial tram system.
The opposite phrase heavy rail, used for higher-capacity, higher-speed systems, also avoids some incompatibilities in terminology between British and American English, for instance in comparing the London Underground and the New York City Subway. Conventional rail technologies including high-speed, freight, commuter, and rapid transit urban transit systems are considered "heavy rail". The main difference between light rail and heavy rail rapid transit is the ability for a light rail vehicle to operate in mixed traffic if the routing requires it.
History
The world's first electric tram operated in Sestroretsk near Saint Petersburg, Russia, invented and operated on an experimental basis by Fyodor Pirotsky in 1880. The first electric tramway in regular service was the Gross-Lichterfelde tramway in Lichterfelde near Berlin, Germany, which opened in 1881. It was built by Werner von Siemens, who had been in contact with Pirotsky. It initially drew current from the rails, with overhead wire being installed in 1883. The first interurban to emerge in the United States was the Newark and Granville Street Railway in Ohio, which opened in 1889. An early example of the light rail concept was the "Shaker Heights Rapid Transit", which started in the 1920s, was renovated in 1980–81, and is now part of RTA Rapid Transit.
Postwar
Many original tram and streetcar systems in the United Kingdom, United States, and elsewhere were decommissioned starting in the 1950s as subsidies for the car increased. Britain abandoned its tram systems, except for Blackpool, with the closure of Glasgow Corporation Tramways (one of the largest in Europe) in 1962.
Emergence
Although some traditional trolley or tram systems continued to exist in San Francisco and elsewhere, the term "light rail" has come to mean a different type of rail system, as modern light rail technology has primarily post-WWII West German origins. An attempt by Boeing Vertol to introduce a new American light rail vehicle in the 1970s proved a technical failure by the following decade. After World War II, the Germans retained many of their streetcar networks and evolved them into model light rail systems (Stadtbahnen). With the exception of Hamburg, all large and most medium-sized German cities maintain light rail networks.
The concept of a "limited tramway" was proposed by American transport planner H. Dean Quinby in 1962. Quinby distinguished this new concept in rail transportation from historic streetcar or tram systems as:
having the capacity to carry more passengers
operating with "three-section, articulated" transit vehicles
having more doors to facilitate full utilization of the space
faster and quieter in operation
The term light rail transit was introduced in North America in 1972 to describe this new concept of rail transportation. Prior to that time the abbreviation "LRT" was used for "Light Rapid Transit" and "Light Rail Rapid Transit".
The first of the new light rail systems in North America began operation in 1978, when the Canadian city of Edmonton, Alberta, adopted the German Siemens–Duewag U2 system, followed three years later by Calgary, Alberta, and San Diego, California. The concept proved popular, and there are now numerous light rail systems across the United States and the rest of North America.
In Britain, modern light rail systems began to appear in the 1980s, starting with the Tyne and Wear Metro from 1980 and followed by the Docklands Light Railway (DLR) in London in 1987, continuing into the 1990s including the establishment of the Manchester Metrolink in 1992 and the Sheffield Supertram from 1994.
Types
Due to varying definitions, it is hard to distinguish between what is called light rail, and other forms of urban and commuter rail. A system described as a light rail in one city may be considered to be a streetcar or tram system in another. Conversely, some lines that are called "light rail" are very similar to rapid transit; in recent years, new terms such as light metro have been used to describe these medium-capacity systems. Some "light rail" systems, such as Sprinter, bear little similarity to urban rail, and could alternatively be classified as commuter rail or even inter-city rail. In the United States, "light rail" has become a catch-all term to describe a wide variety of passenger rail systems.
Light rail corridors may constitute a fully segregated corridor, a dedicated right-of-way on a street, an on-street corridor shared with other traffic, a corridor shared with other public transport, or a corridor shared with pedestrians.
Lower capacity
The most difficult distinction to draw is that between low-floor light rail and streetcar or tram systems. There is a significant amount of overlap between the technologies; similar rolling stock may be used for either, and it is common to classify streetcars or trams as a subcategory of light rail rather than as a distinct type of transportation. However, some distinctions can be made, though systems may combine elements of both. Low-floor light rail lines tend to follow a reserved right-of-way, with trains receiving priority at intersections, and tend not to operate in mixed traffic, enabling higher operating speeds. Light rail lines tend to have less frequent stops than tramways and operate over longer distances. Light rail cars are often coupled into multiple units of two to four cars.
Higher capacity
Light rail systems may also exhibit attributes of heavy rail systems, including having downtown subways, as in San Francisco and Seattle. Light rail is designed to address a gap in interurban transportation between heavy rail and bus services, carrying high passenger numbers more quickly than local buses and more cheaply than heavy rail. It serves corridors in which heavy rail is impractical. Light metro systems are essentially hybrids of light rail and rapid transit.
Metro trains are larger and faster than light rail trains, with stops being further apart.
Mixed systems
Many systems have mixed characteristics. Indeed, with proper engineering, a rail line could run along a street, then go underground, and then run along an elevated viaduct. For example, the Los Angeles Metro Rail's A Line "light rail" has sections that could alternatively be described as a tramway, a light metro, and, in a narrow sense, rapid transit. This is especially common in the United States, where there is not a popularly perceived distinction between these different types of urban rail systems. The development of technology for low-floor and catenary-free trams facilitates the construction of such mixed systems with only short and shallow underground sections below critical intersections as the required clearance height can be reduced significantly compared to conventional light rail vehicles.
Speed and stop frequency
(Table of reference speeds from major light rail systems, including station stop time.)
However, low top speed is not always a differentiating characteristic between light rail and other systems. For example, the Siemens S70 LRVs used in the Houston METRORail and other North American LRT systems have a top speed of depending on the system, while the trains on the all-underground Montreal Metro can only reach a top speed of . LACMTA light rail vehicles have higher top and average speeds than Montreal Metro or New York City Subway trains.
System-wide considerations
Many light rail systems—even fairly old ones—have a combination of both on- and off-road sections. In some countries (especially in Europe), only the latter is described as light rail. In those places, trams running on mixed rights-of-way are not regarded as light rail but are considered distinctly as streetcars or trams. However, the requirement for saying that a rail line is "separated" can be quite low—sometimes just with concrete "buttons" to discourage automobile drivers from getting onto the tracks. Some systems, such as Seattle's Link, had on-street sections closed to regular road traffic, with light rail vehicles and buses operating along a common right-of-way (Link converted to full separation in 2019).
Some systems, such as the AirTrain JFK in New York City, the DLR in London, and Kelana Jaya Line in Kuala Lumpur, have dispensed with the need for an operator. The Vancouver SkyTrain was an early adopter of driverless vehicles, while the Toronto Scarborough rapid transit operated the same trains as Vancouver, but used drivers. In most discussions and comparisons, these specialized systems are generally not considered light rail but as light metro systems.
Variations
Light rail operating on mainline railways
Around Karlsruhe, Kassel, and Saarbrücken in Germany, dual-voltage light rail trains partly use mainline railroad tracks, sharing these tracks with heavy rail trains. In the Netherlands, this concept was first applied on the RijnGouweLijn. This allows commuters to ride directly into the city center, rather than taking a mainline train only as far as a central station and then having to change to a tram. In France, similar tram-trains are planned for Paris, Mulhouse, and Strasbourg; further projects exist. In some cases, tram trains use previously abandoned or lightly used heavy rail lines in addition to or instead of still in use mainline tracks. In 2022, Spain opened the Cádiz TramBahia, where trams share track with commuter and long-distance trains from the main terminus in the city and curve off to serve cities without a railway connection.
Some of the issues involved in such schemes are:
compatibility of the safety systems
matching the power supply of the track to the power used by the vehicles (frequently different voltages, rarely third rail vs overhead wires)
matching the width of the vehicles to the position of the platforms
height of the platforms
There is a history of what would now be considered light rail vehicles operating on heavy rail rapid transit tracks in the US, especially in the case of interurban streetcars. Notable examples are Lehigh Valley Transit trains running on the Philadelphia and Western Railroad high-speed third rail line (now the Norristown High-Speed Line). Such arrangements are almost impossible now, due to the Federal Railroad Administration refusing (for crash safety reasons) to allow non-FRA compliant railcars (i.e., subway and light rail vehicles) to run on the same tracks at the same times as compliant railcars, which includes locomotives and standard railroad passenger and freight equipment. Notable exceptions in the US are the NJ Transit River Line from Camden to Trenton and Austin's Capital MetroRail, which have received exemptions to the provision that light rail operations occur only during daytime hours and Conrail freight service only at night, with several hours separating one operation from the other. The O-Train Trillium Line in Ottawa also has freight service at certain hours.
Comparison to other rail transit modes
With its mix of right-of-way types and train control technologies, LRT offers the widest range of latitude of any rail system in the design, engineering, and operating practices. The challenge in designing light rail systems is to realize the potential of LRT to provide fast, comfortable service while avoiding the tendency to overdesign that results in excessive capital costs beyond what is necessary to meet the public's needs.
Typical rolling stock
(Comparison table of typical rolling stock. The BART railcar in the table is a heavy rail vehicle, not generally considered "light rail", and is included only for comparison purposes.)
Floor height
Low-floor LRVs have the advantage of a low-floor design, allowing them to load passengers directly from low-rise platforms that can be little more than raised curbs. High-floor light rail systems also exist, featuring larger stations.
Infrastructure
Track gauge
Historically, the track gauge has had considerable variations, with narrow gauge common in many early systems. However, most light rail systems are now standard gauge. Older standard-gauge vehicles could not negotiate sharp turns as easily as narrow-gauge ones, but modern light rail systems achieve tighter turning radii by using articulated cars. An important advantage of the standard gauge is that standard railway maintenance equipment can be used on it, rather than custom-built machinery. Using standard gauges also allows light rail vehicles to be conveniently moved around using the same tracks as freight railways. Additionally, wider gauges (e.g. standard gauge) provide more floor clearance on low-floor trams that have constricted pedestrian areas at the wheels, which is especially important for wheelchair access, as narrower gauges (e.g. metre gauge) can make it challenging or impossible to pass the tram's wheels. Furthermore, standard-gauge rolling stock can be switched between networks either temporarily or permanently, and both newly built and used standard-gauge rolling stock tends to be cheaper to buy, as more companies offer such vehicles.
Power sources
Overhead lines supply electricity to the vast majority of light rail systems. This avoids the danger potentially presented by an electrified third rail. The Docklands Light Railway uses an inverted third rail for its electrical power, which allows the electrified rail to be covered and the power drawn from the underside. Trams in Bordeaux, France, use a special third-rail configuration where the power is only switched on beneath the trams, making it safe on city streets. Several systems in Europe and a few recently opened systems in North America use diesel-powered trains.
Ground-level power supply for trams
When electric streetcars were introduced in the late 19th century, conduit current collection was one of the first ways of supplying power, but it proved to be much more expensive, complicated, and trouble-prone than overhead wires. When electric street railways became ubiquitous, conduit power was used in those cities that did not permit overhead wires. In Europe, it was used in London, Paris, Berlin, Marseille, Budapest, and Prague. In the United States, it was used in parts of New York City and Washington, D.C. Third rail technology was investigated for use on the Gold Coast of Australia for the G:link light rail, though power from overhead lines was ultimately utilized for that system.
In the French city of Bordeaux, the tramway network is powered by a third rail in the city center, where the tracks are not always segregated from pedestrians and cars. The third rail (actually two closely spaced rails) is placed in the middle of the track and divided into eight-metre sections, each of which is powered only while it is completely covered by a tram. This minimizes the risk of a person or animal coming into contact with a live rail. In outer areas, the trams switch to conventional overhead wires. The Bordeaux power system costs about three times as much as a conventional overhead wire system and took 24 months to achieve acceptable levels of reliability, requiring the replacement of all the main cables and power supplies. Operating and maintenance costs of the innovative power system still remain high. However, despite numerous service outages, the system was a success with the public, gaining up to 190,000 passengers per day.
Automatic train operation
Automatic train operation is employed on light rail networks, tracking the position and speed of a train and hence adjusting its movement for safety and efficiency.
Comparison to road traffic
Comparison with high capacity roads
One light rail line (requiring a 7.6 m (25 ft) right-of-way) has a theoretical peak capacity of up to eight times that of one 3.7 m (12 ft) freeway lane, excluding buses. Roads have ultimate capacity limits that can be determined by traffic engineering, and they usually experience a chaotic breakdown in flow and a dramatic drop in speed (a traffic jam) if they exceed about 2,000 vehicles per hour per lane (each car roughly two seconds behind another). Since most people who drive to work or on business trips do so alone, studies show that the average car occupancy on many roads carrying commuters is only about 1.5 people per car during the high-demand rush hour periods of the day.
This combination of factors limits roads carrying only automobile commuters to a maximum observed capacity of about 3,000 passengers per hour per lane. The problem can be mitigated by introducing high-occupancy vehicle (HOV) lanes and ride-sharing programs, but in most cases, policymakers have chosen to add more lanes to the roads, despite a small risk that in unfavorable situations an extension of the road network might lead to increased travel times (Downs–Thomson paradox, Braess's paradox).
By contrast, light rail vehicles can travel in multi-car trains carrying a theoretical ridership up to 20,000 passengers per hour in much narrower rights-of-way, not much more than two car lanes wide for a double track system. They can often be run through existing city streets and parks, or placed in the medians of roads. If run in streets, trains are usually limited by city block lengths to about four 180-passenger vehicles (720 passengers). Operating on two-minute headways using traffic signal progression, a well-designed two-track system can handle up to 30 trains per hour per track, achieving peak rates of over 20,000 passengers per hour in each direction. More advanced systems with separate rights-of-way using moving block signaling can exceed 25,000 passengers per hour per track.
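These figures combine into a simple back-of-the-envelope comparison. The following Python snippet only restates the arithmetic implied by the numbers quoted above; the variable names are our own.

```python
# Back-of-the-envelope corridor capacity, using only figures quoted above.
cars_per_lane_hour = 2000        # breakdown threshold per freeway lane
occupants_per_car = 1.5          # average rush-hour occupancy
freeway_lane = cars_per_lane_hour * occupants_per_car  # = 3,000 passengers/h

passengers_per_vehicle = 180
vehicles_per_train = 4           # limited by city block lengths
trains_per_track_hour = 30       # two-minute headways
lrt_track = (passengers_per_vehicle * vehicles_per_train
             * trains_per_track_hour)
# = 21,600 passengers/h, matching the "over 20,000 per hour" figure above.
```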
Practical considerations
Most light rail systems in the United States are limited by demand rather than capacity (by and large, most American LRT systems carry fewer than 4,000 persons per hour per direction), but Boston's and San Francisco's light rail lines carry 9,600 and 13,100 passengers per hour per track during rush hour. Elsewhere in North America, the Calgary C-Train and Monterrey Metro have higher light rail ridership than Boston or San Francisco. Systems outside North America often have much higher passenger volumes. The Manila Light Rail Transit System is one of the highest-capacity systems, having been upgraded in a series of expansions to handle 40,000 passengers per hour per direction, and having carried as many as 582,989 passengers in a single day on its Line 1. It achieves this volume by running four-car trains with a capacity of up to 1,350 passengers each at a frequency of up to 30 trains per hour. However, the Manila light rail system has full grade separation and as a result has many of the operating characteristics of a metro system rather than a light rail system. A capacity of 1,350 passengers per train is more similar to heavy rail than to light rail.
Bus rapid transit (BRT) is an alternative to LRT, and many planning studies undertake a comparison of the two modes when considering appropriate investments in transit corridor development. BRT systems can exhibit a more diverse range of design characteristics than LRT, depending on the demand and constraints that exist, and BRT using dedicated lanes can have a theoretical capacity of over 30,000 passengers per hour per direction (for example, the Guangzhou Bus Rapid Transit system operates up to 350 buses per hour per direction). For the effective operation of a bus or BRT system, buses must have priority at traffic lights and have their own dedicated lanes, especially as bus frequencies exceed 30 buses per hour per direction. The higher theoretical capacity of BRT relates to the ability of buses to travel closer to each other than rail vehicles and to overtake each other at designated locations, allowing express services to bypass those that have stopped at stations. However, to achieve capacities this high, BRT station footprints need to be significantly larger than that of a typical LRT station. In terms of cost of operation, each bus requires a single driver, whereas a light rail train may have three to four cars of much larger capacity under the control of one driver, or no driver at all in fully automated systems, increasing the labor costs of BRT systems compared to LRT systems. BRT systems are also usually less fuel-efficient, as they use non-electrified vehicles.
The peak passenger capacity per lane per hour depends on which types of vehicles are allowed on the road. A typical roadway lane carries about 1,900 passenger cars per lane per hour (pcplph); if only cars are allowed, passenger capacity stays at this level and does not increase as traffic volume increases.
When buses also operate on the route, the passenger capacity of the lane is higher and increases with the traffic level. And because the capacity of a light rail vehicle is higher than that of a bus, a lane combining cars and light rail offers even more capacity.
Construction and operation costs
The cost of light rail construction varies widely, largely depending on the amount of tunneling and elevated structures required. A survey of North American light rail projects shows that costs of most LRT systems range from $15 million to over $100 million per mile. Seattle's new light rail system is by far the most expensive in the US, at $179 million per mile, since it includes extensive tunneling in poor soil conditions, elevated sections, and stations as deep as below ground level. This results in costs more typical of subways or rapid transit systems than light rail. At the other end of the scale, four systems (Baltimore, Maryland; Camden, New Jersey; Sacramento, California; and Salt Lake City, Utah) incurred construction costs of less than $20 million per mile. Over the US as a whole, excluding Seattle, new light rail construction costs average about $35 million per mile.
By comparison, a freeway lane expansion typically costs $1.0 million to $8.5 million per lane mile for two directions, with an average of $2.3 million. However, freeways are frequently built in suburbs or rural areas, whereas light rail tends to be concentrated in urban areas, where right of way and property acquisition is expensive. Similarly, the most expensive US highway expansion project was the "Big Dig" in Boston, Massachusetts, which cost $200 million per lane mile for a total cost of $14.6 billion. A light rail track can carry up to 20,000 people per hour as compared with 2,000–2,200 vehicles per hour for one freeway lane. For example, in Boston and San Francisco, light rail lines carry 9,600 and 13,100 passengers per hour, respectively, in the peak direction during rush hour.
Combining highway expansion with LRT construction can save costs by doing both highway improvements and rail construction at the same time. As an example, Denver's Transportation Expansion Project rebuilt Interstate Highways 25 and 225 and added a light rail expansion for a total cost of $1.67 billion over five years. The cost of the highway improvements and of the double-track light rail worked out to $19.3 million per highway lane-mile and $27.6 million per LRT track-mile. The project came in under budget and 22 months ahead of schedule.
LRT cost efficiency improves dramatically as ridership increases, as can be seen from the numbers above: the same rail line, with similar capital and operating costs, is far more efficient if it is carrying 20,000 people per hour than if it is carrying 2,400. The Calgary, Alberta, C-Train used many common light rail techniques to keep costs low, including minimizing underground and elevated trackage, sharing transit malls with buses, leasing rights-of-way from freight railroads, and combining LRT construction with freeway expansion. As a result, Calgary ranks toward the less expensive end of the scale with capital costs of around $24 million per mile.
However, Calgary's LRT ridership is much higher than any comparable US light rail system, at 300,000 passengers per weekday, and as a result, its capital efficiency is also much higher. Its capital costs were one-third those of the San Diego Trolley, a comparably sized US system built at the same time, while by 2009 its ridership was approximately three times as high. Thus, Calgary's capital cost per passenger was much lower than that of San Diego. Its operating cost per passenger was also much lower because of its higher ridership. A typical C-Train vehicle costs only per hour to operate, and since it averages 600 passengers per operating hour, Calgary Transit estimates that its LRT operating costs are only 27 cents per ride, versus $1.50 per ride on its buses.
Compared to buses, costs can be lower due to lower labor costs per passenger mile, higher ridership (observations show that light rail attracts more riders than a comparable bus service), and faster average speed (reducing the number of vehicles needed for the same service frequency). While light rail vehicles are more expensive to buy, they have a longer useful life than buses, sometimes making for lower life-cycle costs. Compared to heavy rail, investment costs are lower, though operating costs are higher.
Efficiency
Energy efficiency for light rail may be 120 passenger miles per gallon of fuel (or equivalent), but variation is great, depending on circumstances.
Effects
Safety
An analysis of data from the 505-page National Transportation Statistics report published by the US Department of Transportation shows that light rail fatality rates are higher than those of all other forms of transportation except motorcycle travel (31.5 fatalities per 100 million miles).
However, the National Transportation Statistics report published by the US Department of Transportation states that:Caution must be exercised in comparing fatalities across modes because significantly different definitions are used. In particular, Rail and Transit fatalities include incident-related (as distinct from accident-related) fatalities, such as fatalities from falls in transit stations or railroad employee fatalities from a fire in a workshed. Equivalent fatalities for the Air and Highway modes (fatalities at airports not caused by moving aircraft or fatalities from accidents in automobile repair shops) are not counted toward the totals for these modes. Thus, fatalities not necessarily directly related to in-service transportation are counted for the transit and rail modes, potentially overstating the risk for these modes.
Health impact
Studies have attributed light rail with a number of health impacts. Research has associated light rail positively with increased walking and decreased obesity. Additionally, one electric light rail train produces nearly 99 percent less carbon monoxide and hydrocarbon emissions per mile than one automobile does.
Tram and light rail transit systems worldwide
Around the world, there are many extant tram and streetcar systems. Some, such as the Toronto streetcar system, date from the beginning of the 20th century or earlier, but many of the original tram and streetcar systems were closed down in the mid-20th century, except in many Eastern European countries. Even though many systems closed over the years, several tram systems still operate much as they did when they were first built over a century ago. Some cities (such as Los Angeles and Jersey City) that once closed down their streetcar networks are now restoring, or have already rebuilt, at least some of their former streetcar/tram systems. Most light rail services are currently operated with articulated vehicles such as modern LRVs, i.e. trams, in contrast to the larger trains of underground metro or rapid transit systems.
Several UK cities have substantial light rail networks, including Nottingham Express Transit, Sheffield Supertram, and Manchester Metrolink.
A smaller network, the West Midlands Metro, runs between Birmingham and the Black Country, with plans to add six new lines and extend to Stourbridge, Birmingham Airport, and Walsall. Edinburgh Trams is also a single-line route, currently looking to add other lines.
| Technology | Rail and cable transport | null |
50952 | https://en.wikipedia.org/wiki/Swan | Swan | Swans are birds of the genus Cygnus within the family Anatidae. The swans' closest relatives include the geese and ducks. Swans are grouped with the closely related geese in the subfamily Anserinae where they form the tribe Cygnini. Sometimes, they are considered a distinct subfamily, Cygninae.
There are six living and many extinct species of swan; in addition, there is a species known as the coscoroba swan which is no longer considered one of the true swans. Swans usually mate for life, although separation sometimes occurs, particularly following nesting failure, and if a mate dies, the remaining swan will take up with another. The number of eggs in each clutch ranges from three to eight.
Taxonomy and terminology
The genus Cygnus was introduced in 1764 by the French naturalist François Alexandre Pierre de Garsault. The English word swan, akin to the German Schwan, Dutch zwaan and Swedish svan, is derived from an Indo-European root meaning "to sound, to sing".
Young swans are known as cygnets, from the Old French cigne or cisne (with the diminutive suffix -et), from the Latin word cygnus, a variant form of cycnus, itself from the Greek kýknos, a word of the same meaning. An adult male is a cob, from Middle English cobbe (leader of a group); an adult female is a pen. A group of swans is called a bevy or a wedge.
Description
Swans are the largest extant members of the waterfowl family Anatidae and are among the largest flying birds. The largest living species, including the mute swan, trumpeter swan, and whooper swan, can reach a length of over 1.5 m (59 in) and weigh over 15 kg (33 lb). Their wingspans can be over 3.1 m (10 ft). Compared to the closely related geese, they are much larger and have proportionally larger feet and necks. Adults also have a patch of unfeathered skin between the eyes and bill. The sexes are alike in plumage, but males are generally bigger and heavier than females. The biggest species of swan ever was the extinct Cygnus falconeri, a flightless giant swan known from fossils found on the Mediterranean islands of Malta and Sicily. Its disappearance is thought to have resulted from extreme climate fluctuations or the arrival of superior predators and competitors.
The Northern Hemisphere species of swan have pure white plumage, but the Southern Hemisphere species are mixed black and white. The Australian black swan (Cygnus atratus) is completely black except for the white flight feathers on its wings; the chicks of black swans are light grey. The South American black-necked swan has a white body with a black neck.
The legs of most swans are typically a dark blackish-grey colour, except for the South American black-necked swan, which has pink legs. Bill colour varies: the four subarctic species have black bills with varying amounts of yellow, and all the others are patterned red and black. Although birds do not have teeth, swans, like other Anatidae, have beaks with serrated edges that look like small jagged "teeth", which they use for catching and eating aquatic plants and algae, but also molluscs, small fish, frogs, and worms. In the mute swan and black-necked swan, both sexes have a fleshy lump at the base of the bill on the upper mandible, known as the knob, which is larger in males and is condition-dependent, changing seasonally.
Distribution and movements
Swans are generally found in temperate environments, rarely occurring in the tropics. Four (or five) species occur in the Northern Hemisphere, one species is found in Australia, one extinct species was found in New Zealand and the Chatham Islands, and one species is distributed in southern South America. They are absent from tropical Asia, Central America, northern South America and the entirety of Africa. One species, the mute swan, has been introduced to North America, Australia and New Zealand.
Several species are migratory, either wholly or partly so. The mute swan is a partial migrant, being resident over areas of Western Europe but wholly migratory in Eastern Europe and Asia. The whooper swan and tundra swan are wholly migratory, and the trumpeter swans are almost entirely migratory. There is some evidence that the black-necked swan is migratory over part of its range, but detailed studies have not established whether these movements are long or short-range migration.
Behaviour
Swans feed in water and on land. They are almost entirely herbivorous, although they may eat small amounts of aquatic animals. In the water, food is obtained by up-ending or dabbling, and their diet is composed of the roots, tubers, stems and leaves of aquatic and submerged plants.
A familiar behaviour of swans is that they mate for life, and typically bond even before they reach sexual maturity. Trumpeter swans, for example, can live as long as 24 years and only start breeding at the age of 4–7, forming monogamous pair bonds as early as 20 months. "Divorce", though rare, does occur; one study of mute swans shows a 3% rate for pairs that breed successfully and 9% for pairs that do not. The pair bonds are maintained year-round, even in gregarious and migratory species like the tundra swan, which congregate in large flocks in the wintering grounds.
Swans' nests are on the ground near water and about a metre (3 ft) across. Unlike many other ducks and geese, the male helps with the nest construction and will also take turns incubating the eggs; alongside the whistling ducks, swans are the only anatids that do so. The average egg size (for the mute swan) is 113 × 74 mm (4.5 × 3 in), weighing 340 g (12 oz), in a clutch size of 4 to 7, with an incubation period of 34–45 days. Swans are highly protective of their nests and will viciously attack anything that they perceive as a threat to their chicks, including humans; one man was suspected to have drowned in such an attack. Swans show intraspecific aggression over food and shelter more frequently than interspecific aggression; aggression toward other species is shown most often by tundra swans.
Systematics and evolution
Evidence suggests that the genus Cygnus evolved in Europe or western Eurasia during the Miocene, spreading all over the Northern Hemisphere until the Pliocene. When the southern species branched off is not known. The mute swan is closest to the Southern Hemisphere Cygnus; its habits of carrying the neck curved (not straight) and the wings fluffed (not flush), as well as its bill colour and knob, indicate that its closest living relative is the black swan. Given the biogeography and appearance of the subgenus Olor, it seems likely that these are of a more recent origin, as evidenced by their modern ranges (which were mostly uninhabitable during the last ice age) and the great similarity between the taxa.
Phylogeny
Species
Genus Cygnus
The coscoroba swan (Coscoroba coscoroba) from South America, the only species in its genus, is not a true swan. Its phylogenetic position is not fully resolved; it is in some aspects more similar to geese and shelducks.
Fossil record
The fossil record of the genus Cygnus is quite impressive, although allocation to the subgenera is often tentative; as indicated above, at least the early forms probably belong to the C. olor – Southern Hemisphere lineage, whereas the Pleistocene taxa from North America would be placed in Olor. Several prehistoric species have been described, mostly from the Northern Hemisphere. In the Mediterranean, the leg bones of the giant swan (C. falconeri) were found on the islands of Malta and Sicily; it may have been over 2 metres from tail to bill, which was taller (though not heavier) than the contemporary local dwarf elephants (Palaeoloxodon falconeri).
Subgenus Chenopis
†New Zealand swan, Cygnus sumnerensis, an extinct species related to the black swan of Australia
Other subgenera (see above):
†Cygnus csakvarensis Lambrecht 1933 [Cygnus csákvárensis Lambrecht 1931a nomen nudum; Cygnanser csakvarensis (Lambrecht 1933) Kretzoi 1957; Olor csakvarensis (Lambrecht 1933) Mlíkovský 1992b] (Late Miocene of Hungary)
†Cygnus mariae Bickart 1990 (Early Pliocene of Wickieup, U.S.)
†Cygnus verae Boev 2000 (Early Pliocene of Sofia, Bulgaria)
†Cygnus liskunae (Kuročkin 1976) [Anser liskunae Kuročkin 1976] (Middle Pliocene of western Mongolia)
†Cygnus hibbardi Brodkorb 1958 (?Early Pleistocene of Idaho, U.S.)
†Cygnus sp. Louchart et al. 1998 (Early Pleistocene of Dursunlu, Turkey)
†Giant swan (Cygnus falconeri) Parker 1865 sensu Livezey 1997a [Cygnus melitensis Falconer 1868; Palaeocygnus falconeri (Parker 1865) Oberholser 1908] (Middle Pleistocene of Malta and Sicily, Mediterranean)
†Cygnus paloregonus Cope 1878 [Anser condoni Schufeldt 1892; Cygnus matthewi Schufeldt 1913] (Middle Pleistocene of west-central U.S.)
†Dwarf swan (Cygnus equitum) Bate 1916 sensu Livezey 1997 [Anser equitum (Bate 1916) Brodkorb 1964; Cygnus (Olor) equitum Bate 1916 sensu Northcote 1988a] (Middle – Late Pleistocene of Malta and Sicily, Mediterranean)
†Cygnus lacustris (De Vis 1905) [Archaeocycnus lacustris De Vis 1905] (Late Pleistocene of the Lake Eyre region, Australia)
†Cygnus sp. (Pleistocene of Australia)
†Cygnus atavus (Fraas 1870) Mlíkovský 1992 [Anas atava Fraas 1870; Anas cygniformis Fraas 1870; Palaelodus steinheimensis Fraas 1870; Anser atavus (Fraas 1870) Lambrecht 1933; Anser cygniformis (Fraas 1870) Lambrecht 1933]
Other genera
† Annakacygna
The supposed fossil swans "Cygnus" bilinicus and "Cygnus" herrenthalsi were, respectively, a stork and some large bird of unknown affinity (due to the bad state of preservation of the referred material).
In culture
European motifs
Many of the cultural aspects refer to the mute swan of Europe. Perhaps the best-known story about a swan is the fairy tale "The Ugly Duckling". Swans are often a symbol of love or fidelity because of their long-lasting, apparently monogamous relationships. See Wagner's famous swan-related operas Lohengrin and Parsifal.
As food
Swan meat was regarded as a luxury food in England during the reign of Elizabeth I. A recipe for baked swan survives from that time: "To bake a Swan Scald it and take out the bones, and parboil it, then season it very well with Pepper, Salt and Ginger, then lard it, and put it in a deep Coffin of Rye Paste with store of Butter, close it and bake it very well, and when it is baked, fill up the Vent-hole with melted Butter, and so keep it; serve it in as you do the Beef-Pie." Swans being raised for food were sometimes kept in swan pits.
The Illustrious Brotherhood of Our Blessed Lady, a religious confraternity which existed in 's-Hertogenbosch in the late Middle Ages, had "sworn members", also called "swan-brethren" because they used to donate a swan for the yearly banquet.
Based on a mistaken belief that the British monarch owns all the swans in Britain, it is popularly believed the British monarch is the only person allowed to eat swans in the United Kingdom.
Heraldics
Ancient Greece and Rome
Swans feature strongly in mythology. In Greek mythology, the story of Leda and the Swan recounts that Helen of Troy was conceived in a union of Zeus disguised as a swan and Leda, Queen of Sparta.
Other references in classical literature include the belief that, upon death, the mute swan would sing beautifully—hence the phrase swan song.
The mute swan is also one of the sacred birds of Apollo, whose associations stem both from the nature of the bird as a symbol of light, as well as the notion of a "swan song". The god is often depicted riding a chariot pulled by or composed of swans in his ascension from Delos.
In the second century, the Roman poet Juvenal made a sarcastic reference to a good woman being a "rare bird, as rare on earth as a black swan" (black swans being completely unknown in the Northern Hemisphere until Dutch explorers reached Australia in the 1600s), from which comes the Latin phrase (rare bird).
Irish lore and poetry
The Irish legend of the Children of Lir is about a stepmother who transformed her children into swans for 900 years.
In the legend The Wooing of Etain the king of the Sidhe (subterranean-dwelling, supernatural beings) transforms himself and the most beautiful woman in Ireland, Etain, into swans to escape from the king of Ireland and Ireland's armies. The swan has recently been depicted on an Irish commemorative coin.
Swans are also present in Irish literature in the poetry of W. B. Yeats. "The Wild Swans at Coole" has a heavy focus on the mesmerising characteristics of the swan. Yeats also recounts the myth of Leda and the Swan in the poem of the same name.
Nordic lore
In Norse mythology, two swans drink from the sacred Well of Urd in the realm of Asgard, home of the gods. According to the Prose Edda, the water of this well is so pure and holy that all things that touch it turn white, including this original pair of swans and all others descended from them. The poem Volundarkvida, or the Lay of Volund, part of the Poetic Edda, also features swan maidens.
In the Finnish epic Kalevala, a swan lives in the Tuoni River located in Tuonela, the underworld realm of the dead. According to the story, whoever killed a swan would perish as well. Jean Sibelius composed the Lemminkäinen Suite based on the Kalevala, with the second piece entitled Swan of Tuonela (Tuonelan joutsen). Today, five flying swans are the symbol of the Nordic countries; the whooper swan (Cygnus cygnus) is the national bird of Finland; and the mute swan is the national bird of Denmark.
Swan Lake ballet
The ballet Swan Lake is among the most canonic of classical ballets. Based on the 1875–76 score by Pyotr Ilyich Tchaikovsky, the most promulgated choreographic version was created by Marius Petipa and Lev Ivanov (1895), the premiere of which was danced by the Imperial Ballet at the Mariinsky Theater in St. Petersburg. The ballet's lead dual roles of Odette (white swan)/Odile (black swan) represent good and evil and are among the most challenging roles created in Romantic classical ballet. The ballet is in the repertories of ballet companies around the world.
Christianity
A swan is one of the attributes of St. Hugh of Lincoln, based on the story of a swan who was devoted to him.
Spanish language literature
In Latin American literature, the Nicaraguan poet Rubén Darío (1867–1916) consecrated the swan as a symbol of artistic inspiration by drawing attention to the constancy of swan imagery in Western culture, beginning with the rape of Leda and ending with Wagner's Lohengrin. Darío's most famous poem in this regard is Blasón – "Coat of Arms" (1896), and his use of the swan made it a symbol for the Modernismo poetic movement that dominated Spanish language poetry from the 1880s until the First World War. Such was the dominance of Modernismo in Spanish language poetry that the Mexican poet Enrique González Martínez attempted to announce the end of Modernismo with a sonnet provocatively entitled, Tuércele el cuello al cisne – "Wring the Swan's Neck" (1910).
Hinduism
Swans are revered in Hinduism and are compared to saintly persons whose chief characteristic is to be in the world without getting attached to it, just as a swan's feather does not get wet although it is in water. The Sanskrit word for swan is hamsa and the "Raja Hamsam" or the Royal Swan is the vehicle of Devi Saraswati, which symbolizes the Sattva Guna or purity par excellence. The swan, if offered a mixture of milk and water, is said to be able to drink the milk alone. Therefore, Saraswati, the goddess of knowledge, is seen riding the swan because the swan thus symbolizes Viveka, i.e. prudence and discrimination between the good and the bad or between the eternal and the transient. This is seen as a great quality, praised in Sanskrit verse.
The swan is mentioned several times in Vedic literature, and persons who have attained great spiritual capabilities are sometimes called Paramahamsa ("Supreme Swan") on account of their spiritual grace and ability to travel between various spiritual worlds. In the Vedas, swans are said to reside in the summer on Lake Manasarovar and migrate to Indian lakes for the winter. They are believed to possess some powers, such as the ability to eat pearls.
Indo-European religions
Swans are intimately associated with the divine twins in Indo-European religions, and it is thought that in Proto-Indo-European times, swans were a solar symbol associated with the divine twins and the original Indo-European sun goddess.
| Biology and health sciences | Anseriformes | null |
50958 | https://en.wikipedia.org/wiki/Sulfur%20dioxide | Sulfur dioxide | Sulfur dioxide (IUPAC-recommended spelling) or sulphur dioxide (traditional Commonwealth English) is the chemical compound with the formula . It is a colorless gas with a pungent smell that is responsible for the odor of burnt matches. It is released naturally by volcanic activity and is produced as a by-product of copper extraction and the burning of sulfur-bearing fossil fuels.
Sulfur dioxide is somewhat toxic to humans, although only when inhaled in relatively large quantities for a period of several minutes or more. It was known to medieval alchemists as "volatile spirit of sulfur".
Structure and bonding
SO2 is a bent molecule belonging to the C2v symmetry point group.
A valence bond theory approach considering just s and p orbitals would describe the bonding in terms of resonance between two resonance structures.
The sulfur–oxygen bond has a bond order of 1.5. There is support for this simple approach that does not invoke d orbital participation.
In terms of electron-counting formalism, the sulfur atom has an oxidation state of +4 and a formal charge of +1.
Occurrence
Sulfur dioxide is found on Earth and exists in very small concentrations in the atmosphere at about 15 ppb.
On other planets, sulfur dioxide can be found in various concentrations, the most significant being the atmosphere of Venus, where it is the third-most abundant atmospheric gas at 150 ppm. There, it reacts with water to form clouds of sulfurous acid (SO2 + H2O ⇌ H2SO3), is a key component of the planet's global atmospheric sulfur cycle and contributes to global warming. It has been implicated as a key agent in the warming of early Mars, with estimates of concentrations in the lower atmosphere as high as 100 ppm, though it only exists in trace amounts. On both Venus and Mars, as on Earth, its primary source is thought to be volcanic. The atmosphere of Io, a natural satellite of Jupiter, is 90% sulfur dioxide and trace amounts are thought to also exist in the atmosphere of Jupiter. The James Webb Space Telescope has observed the presence of sulfur dioxide on the exoplanet WASP-39b, where it is formed through photochemistry in the planet's atmosphere.
As an ice, it is thought to exist in abundance on the Galilean moons—as subliming ice or frost on the trailing hemisphere of Io, and in the crust and mantle of Europa, Ganymede, and Callisto, possibly also in liquid form and readily reacting with water.
Production
Sulfur dioxide is primarily produced for sulfuric acid manufacture (see contact process; other processes have predated it since at least the 16th century). In the United States in 1979, 23.6 million metric tons (26 million U.S. short tons) of sulfur dioxide were used in this way, compared with 150,000 metric tons (165,347 U.S. short tons) used for other purposes. Most sulfur dioxide is produced by the combustion of elemental sulfur. Some sulfur dioxide is also produced by roasting pyrite and other sulfide ores in air.
Combustion routes
Sulfur dioxide is the product of the burning of sulfur or of burning materials that contain sulfur:
S8 + 8 O2 → 8 SO2, ΔH = −297 kJ/mol
To aid combustion, liquified sulfur is sprayed through an atomizing nozzle to generate fine drops of sulfur with a large surface area. The reaction is exothermic, and the combustion produces high temperatures. The significant amount of heat produced is recovered by steam generation that can subsequently be converted to electricity.
The combustion of hydrogen sulfide and organosulfur compounds proceeds similarly. For example:
2 H2S + 3 O2 → 2 SO2 + 2 H2O
The roasting of sulfide ores such as pyrite, sphalerite, and cinnabar (mercury sulfide) also releases SO2:
4 FeS2 + 11 O2 → 2 Fe2O3 + 8 SO2
2 ZnS + 3 O2 → 2 ZnO + 2 SO2
HgS + O2 → Hg + SO2
4 FeS + 7 O2 → 2 Fe2O3 + 4 SO2
A combination of these reactions is responsible for the largest source of sulfur dioxide, volcanic eruptions. These events can release millions of tons of SO2.
Reduction of higher oxides
Sulfur dioxide can also be a byproduct in the manufacture of calcium silicate cement; CaSO4 is heated with coke and sand in this process:
2 CaSO4 + 2 SiO2 + C → 2 CaSiO3 + 2 SO2 + CO2
Until the 1970s, commercial quantities of sulfuric acid and cement were produced by this process in Whitehaven, England. Upon being mixed with shale or marl and roasted, the sulfate liberated sulfur dioxide gas, used in sulfuric acid production; the reaction also produced calcium silicate, a precursor in cement production.
On a laboratory scale, the action of hot concentrated sulfuric acid on copper turnings produces sulfur dioxide.
Cu + 2 H2SO4 → CuSO4 + SO2 + 2 H2O
Tin also reacts with concentrated sulfuric acid but it produces tin(II) sulfate which can later be pyrolyzed at 360 °C into tin dioxide and dry sulfur dioxide.
Sn + 2 H2SO4 → SnSO4 + SO2 + 2 H2O
SnSO4 → SnO2 + SO2
From sulfites
The reverse reaction occurs upon acidification:
SO32− + 2 H+ → SO2 + H2O
Reactions
Sulfites result by the action of aqueous base on sulfur dioxide:
SO2 + 2 NaOH → Na2SO3 + H2O
Sulfur dioxide is a mild but useful reducing agent. It is oxidized by halogens to give the sulfuryl halides, such as sulfuryl chloride:
SO2 + Cl2 → SO2Cl2
Sulfur dioxide is the oxidising agent in the Claus process, which is conducted on a large scale in oil refineries. Here, sulfur dioxide is reduced by hydrogen sulfide to give elemental sulfur:
SO2 + 2 H2S → 3 S + 2 H2O
The sequential oxidation of sulfur dioxide followed by its hydration is used in the production of sulfuric acid.
2 SO2 + O2 + 2 H2O → 2 H2SO4
Sulfur dioxide dissolves in water to give "sulfurous acid", which cannot be isolated and is instead an acidic solution of bisulfite, and possibly sulfite, ions.
SO2 + H2O ⇌ H+ + HSO3−, Ka = 1.54 × 10−2; pKa = 1.81
Laboratory reactions
Sulfur dioxide is one of the few common acidic yet reducing gases. It turns moist litmus pink (being acidic), then white (due to its bleaching effect). It may be identified by bubbling it through a dichromate solution, turning the solution from orange to green (Cr3+ (aq)). It can also reduce ferric ions to ferrous.
Sulfur dioxide can react with certain 1,3-dienes in a cheletropic reaction to form cyclic sulfones. This reaction is exploited on an industrial scale for the synthesis of sulfolane, which is an important solvent in the petrochemical industry.
Sulfur dioxide can bind to metal ions as a ligand to form metal sulfur dioxide complexes, typically where the transition metal is in oxidation state 0 or +1. Many different bonding modes (geometries) are recognized, but in most cases, the ligand is monodentate, attached to the metal through sulfur, in a geometry that can be either planar or pyramidal η1. As an η1-SO2 (S-bonded planar) ligand, sulfur dioxide functions as a Lewis base using the lone pair on S. SO2 functions as a Lewis acid in its η1-SO2 (S-bonded pyramidal) bonding mode with metals and in its 1:1 adducts with Lewis bases such as dimethylacetamide and trimethylamine. When bonding to Lewis bases, the acid parameters of SO2 are EA = 0.51 and CA = 1.56.
Uses
The overarching, dominant use of sulfur dioxide is in the production of sulfuric acid.
Precursor to sulfuric acid
Sulfur dioxide is an intermediate in the production of sulfuric acid, being converted to sulfur trioxide, and then to oleum, which is made into sulfuric acid. Sulfur dioxide for this purpose is made when sulfur combines with oxygen. The method of converting sulfur dioxide to sulfuric acid is called the contact process. Several million tons are produced annually for this purpose.
Food preservative
Sulfur dioxide is sometimes used as a preservative for dried apricots, dried figs, and other dried fruits, owing to its antimicrobial properties and ability to prevent oxidation, and is called E220 when used in this way in Europe. As a preservative, it maintains the colorful appearance of the fruit and prevents rotting. Historically, molasses was "sulfured" as a preservative and also to lighten its color. Treatment of dried fruit was usually done outdoors, by igniting sublimed sulfur and burning it in an enclosed space with the fruit. Fruits may also be sulfured by dipping them into a solution of sodium bisulfite, sodium sulfite, or sodium metabisulfite.
Winemaking
Sulfur dioxide was first used in winemaking by the Romans, when they discovered that burning sulfur candles inside empty wine vessels kept them fresh and free from vinegar smell.
It is still an important compound in winemaking, and is measured in parts per million (ppm) in wine. It is present even in so-called unsulfurated wine at concentrations of up to 10 mg/L. It serves as an antibiotic and antioxidant, protecting wine from spoilage by bacteria and oxidation – a phenomenon that leads to the browning of the wine and a loss of cultivar specific flavors. Its antimicrobial action also helps minimize volatile acidity. Wines containing sulfur dioxide are typically labeled with "containing sulfites".
Sulfur dioxide exists in wine in free and bound forms, and the combinations are referred to as total SO2. Binding, for instance to the carbonyl group of acetaldehyde, varies with the wine in question. The free form exists in equilibrium between molecular SO2 (as a dissolved gas) and bisulfite ion, which is in turn in equilibrium with sulfite ion. These equilibria depend on the pH of the wine. Lower pH shifts the equilibrium towards molecular (gaseous) SO2, which is the active form, while at higher pH more SO2 is found in the inactive sulfite and bisulfite forms. The molecular SO2 is active as an antimicrobial and antioxidant, and this is also the form which may be perceived as a pungent odor at high levels. Wines with total SO2 concentrations below 10 ppm do not require "contains sulfites" on the label by US and EU laws. The upper limit of total SO2 allowed in wine in the US is 350 ppm; in the EU it is 160 ppm for red wines and 210 ppm for white and rosé wines. In low concentrations, SO2 is mostly undetectable in wine, but at free SO2 concentrations over 50 ppm, SO2 becomes evident in the smell and taste of wine.
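The pH dependence described above can be made concrete with a small calculation. The sketch below is illustrative only: it assumes the first dissociation of dissolved SO2 (pKa ≈ 1.81, the dissociation constant given earlier) dominates in wine, and applies the Henderson–Hasselbalch relation; the function names are hypothetical.

```python
# Minimal sketch: fraction of free SO2 present in the active molecular form
# at a given wine pH, assuming the first dissociation (pKa ~ 1.81) dominates.
def molecular_so2_fraction(ph: float, pka: float = 1.81) -> float:
    """Henderson-Hasselbalch: [SO2] / ([SO2] + [HSO3-])."""
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

def molecular_so2_ppm(free_so2_ppm: float, ph: float) -> float:
    """Molecular SO2 (ppm) given measured free SO2 and wine pH."""
    return free_so2_ppm * molecular_so2_fraction(ph)

# Lower pH shifts the equilibrium toward the active molecular form:
print(molecular_so2_ppm(30, 3.0))  # ~1.8 ppm molecular SO2
print(molecular_so2_ppm(30, 3.6))  # ~0.5 ppm -- less protection at higher pH
```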
SO2 is also a very important compound in winery sanitation. Wineries and equipment must be kept clean, and because bleach cannot be used in a winery due to the risk of cork taint, a mixture of SO2, water, and citric acid is commonly used to clean and sanitize equipment. Ozone (O3) is now used extensively for sanitizing in wineries due to its efficacy, and because it does not affect the wine or most equipment.
As a reducing agent
Sulfur dioxide is also a good reductant. In the presence of water, sulfur dioxide is able to decolorize substances. Specifically, it is a useful reducing bleach for papers and delicate materials such as clothes. This bleaching effect normally does not last very long. Oxygen in the atmosphere reoxidizes the reduced dyes, restoring the color. In municipal wastewater treatment, sulfur dioxide is used to treat chlorinated wastewater prior to release. Sulfur dioxide reduces free and combined chlorine to chloride.
Sulfur dioxide is fairly soluble in water, but by both IR and Raman spectroscopy, the hypothetical sulfurous acid, H2SO3, is not present to any extent. However, such solutions do show spectra of the hydrogen sulfite ion, HSO3−, formed by reaction with water, and this is in fact the actual reducing agent present:
SO2 + H2O ⇌ HSO3− + H+
As a fumigant
At the beginning of the 20th century, sulfur dioxide was used in Buenos Aires as a fumigant to kill rats that carried the Yersinia pestis bacterium, which causes bubonic plague. The application was successful, and the method was extended to other areas in South America. In Buenos Aires, where these apparatuses were known as Sulfurozador, and later also in Rio de Janeiro, New Orleans and San Francisco, the sulfur dioxide treatment machines were brought into the streets to enable extensive disinfection campaigns, with effective results.
Biochemical and biomedical roles
Sulfur dioxide or its conjugate base bisulfite is produced biologically as an intermediate both in sulfate-reducing organisms and in sulfur-oxidizing bacteria. The role of sulfur dioxide in mammalian biology is not yet well understood. Sulfur dioxide blocks nerve signals from the pulmonary stretch receptors and abolishes the Hering–Breuer inflation reflex.
It is considered that endogenous sulfur dioxide plays a significant physiological role in regulating cardiac and blood vessel function, and aberrant or deficient sulfur dioxide metabolism can contribute to several different cardiovascular diseases, such as arterial hypertension, atherosclerosis, pulmonary arterial hypertension, and stenocardia.
It was shown that in children with pulmonary arterial hypertension due to congenital heart diseases, the level of homocysteine is higher and the level of endogenous sulfur dioxide is lower than in normal control children. Moreover, these biochemical parameters strongly correlated with the severity of pulmonary arterial hypertension. The authors considered homocysteine to be one of the useful biochemical markers of disease severity and sulfur dioxide metabolism to be a potential therapeutic target in those patients.
Endogenous sulfur dioxide has also been shown to lower the proliferation rate of endothelial smooth muscle cells in blood vessels, via lowering MAPK activity and activating adenylyl cyclase and protein kinase A. Smooth muscle cell proliferation is one of the important mechanisms of hypertensive remodeling of blood vessels and their stenosis, so it is an important pathogenetic mechanism in arterial hypertension and atherosclerosis.
Endogenous sulfur dioxide in low concentrations causes endothelium-dependent vasodilation. In higher concentrations it causes endothelium-independent vasodilation and has a negative inotropic effect on cardiac output function, thus effectively lowering blood pressure and myocardial oxygen consumption. The vasodilating and bronchodilating effects of sulfur dioxide are mediated via ATP-dependent calcium channels and L-type ("dihydropyridine") calcium channels. Endogenous sulfur dioxide is also a potent antiinflammatory, antioxidant and cytoprotective agent. It lowers blood pressure and slows hypertensive remodeling of blood vessels, especially thickening of their intima. It also regulates lipid metabolism.
Endogenous sulfur dioxide also diminishes myocardial damage, caused by isoproterenol adrenergic hyperstimulation, and strengthens the myocardial antioxidant defense reserve.
As a reagent and solvent in the laboratory
Sulfur dioxide is a versatile inert solvent widely used for dissolving highly oxidizing salts. It is also used occasionally as a source of the sulfonyl group in organic synthesis: treatment of aryl diazonium salts with sulfur dioxide and cuprous chloride yields the corresponding aryl sulfonyl chloride.
As a result of its very low Lewis basicity, it is often used as a low-temperature solvent/diluent for superacids like magic acid (FSO3H/SbF5), allowing for highly reactive species like tert-butyl cation to be observed spectroscopically at low temperature (though tertiary carbocations do react with SO2 above about −30 °C, and even less reactive solvents like SO2ClF must be used at these higher temperatures).
As a refrigerant
Being easily condensed and possessing a high heat of evaporation, sulfur dioxide is a candidate material for refrigerants. Before the development of chlorofluorocarbons, sulfur dioxide was used as a refrigerant in home refrigerators.
As an indicator of volcanic activity
Sulfur dioxide content in naturally-released geothermal gasses is measured by the Icelandic Meteorological Office as an indicator of possible volcanic activity.
Safety
Ingestion
In the United States, the Center for Science in the Public Interest lists the two food preservatives, sulfur dioxide and sodium bisulfite, as being safe for human consumption except for certain asthmatic individuals who may be sensitive to them, especially in large amounts. Symptoms of sensitivity to sulfiting agents, including sulfur dioxide, manifest as potentially life-threatening trouble breathing within minutes of ingestion. Sulfites may also cause symptoms in non-asthmatic individuals, namely dermatitis, urticaria, flushing, hypotension, abdominal pain and diarrhea, and even life-threatening anaphylaxis.
Inhalation
Incidental exposure to sulfur dioxide is routine, e.g. the smoke from matches, coal, and sulfur-containing fuels like bunker fuel. Relative to other chemicals, it is only mildly toxic and requires high concentrations to be actively hazardous. However, its ubiquity makes it a major air pollutant with significant impacts on human health.
In 2008, the American Conference of Governmental Industrial Hygienists reduced the short-term exposure limit to 0.25 parts per million (ppm). In the US, the OSHA set the PEL at 5 ppm (13 mg/m3) time-weighted average. Also in the US, NIOSH set the IDLH at 100 ppm. In 2010, the EPA "revised the primary SO2 NAAQS by establishing a new one-hour standard at a level of 75 parts per billion (ppb). EPA revoked the two existing primary standards because they would not provide additional public health protection given a one-hour standard at 75 ppb."
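The limits above mix ppm and mg/m3 units. For a gas, the two interconvert through the molar mass and the molar volume of air; a minimal sketch, assuming the conventional molar volume of 24.45 L/mol at 25 °C and 1 atm (the function name is illustrative):

```python
# Sketch: converting gas concentrations between ppm and mg/m3, assuming
# the conventional molar volume of 24.45 L/mol (25 degC, 1 atm).
MW_SO2 = 64.07     # g/mol, molar mass of SO2
MOLAR_VOL = 24.45  # L/mol of air at 25 degC and 1 atm

def ppm_to_mg_m3(ppm: float, mw: float = MW_SO2) -> float:
    return ppm * mw / MOLAR_VOL

print(ppm_to_mg_m3(5))     # ~13.1 -- matches the OSHA PEL of 5 ppm (13 mg/m3)
print(ppm_to_mg_m3(0.25))  # ~0.66 mg/m3, the ACGIH short-term limit
```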
Environmental role
Air pollution
Major volcanic eruptions have an overwhelming effect on sulfate aerosol concentrations in the years when they occur: eruptions ranking 4 or greater on the Volcanic Explosivity Index inject SO2 and water vapor directly into the stratosphere, where they react to create sulfate aerosol plumes. Volcanic emissions vary significantly in composition, and have complex chemistry due to the presence of ash particulates and a wide variety of other elements in the plume. Only stratovolcanoes containing primarily felsic magmas are responsible for these fluxes, as mafic magma erupted in shield volcanoes doesn't result in plumes which reach the stratosphere. However, before the Industrial Revolution, the dimethyl sulfide pathway was the largest contributor to sulfate aerosol concentrations in a more average year with no major volcanic activity. According to the IPCC First Assessment Report, published in 1990, volcanic emissions usually amounted to around 10 million tons per year in the 1980s, while dimethyl sulfide amounted to 40 million tons. Yet, by that point, the global human-caused emissions of sulfur into the atmosphere had become "at least as large" as all natural emissions of sulfur-containing compounds combined: they were at less than 3 million tons per year in 1860, and then increased to 15 million tons in 1900, 40 million tons in 1940 and about 80 million tons in 1980. The same report noted that "in the industrialized regions of Europe and North America, anthropogenic emissions dominate over natural emissions by about a factor of ten or even more". In the eastern United States, sulfate particles were estimated to account for 25% or more of all air pollution. Exposure to sulfur dioxide emissions by coal power plants (coal PM2.5) in the US was associated with 2.1 times greater mortality risk than exposure to PM2.5 from all sources.
Meanwhile, the Southern Hemisphere had much lower concentrations due to being much less densely populated, with an estimated 90% of the human population in the north. In the early 1990s, anthropogenic sulfur dominated in the Northern Hemisphere, where only 16% of annual sulfur emissions were natural, yet accounted for less than half of the emissions in the Southern Hemisphere.
Such an increase in sulfate aerosol emissions had a variety of effects. At the time, the most visible one was acid rain, caused by precipitation from clouds carrying high concentrations of sulfate aerosols in the troposphere.
At its peak, acid rain eliminated brook trout and some other fish species and insect life from lakes and streams in geographically sensitive areas, such as the Adirondack Mountains in the United States. Acid rain worsens soil function as some of its microbiota is lost and heavy metals like aluminium are mobilized (spread more easily), while essential nutrients and minerals such as magnesium can leach away for the same reason. Ultimately, plants unable to tolerate lowered pH are killed, with montane forests being some of the worst-affected ecosystems due to their regular exposure to sulfate-carrying fog at high altitudes. While acid rain was too dilute to affect human health directly, breathing smog or even any air with elevated sulfate concentrations is known to contribute to heart and lung conditions, including asthma and bronchitis. Further, this form of pollution is linked to preterm birth and low birth weight, with a study of 74,671 pregnant women in Beijing finding that every additional 100 μg/m3 of SO2 in the air reduced infants' weight by 7.3 g, making it and other forms of air pollution the largest attributable risk factor for low birth weight ever observed.
Control measures
Due largely to the US EPA's Acid Rain Program, the U.S. had a 33% decrease in SO2 emissions between 1983 and 2002. This improvement resulted in part from flue-gas desulfurization, a technology that enables SO2 to be chemically bound in power plants burning sulfur-containing coal or petroleum.
In particular, calcium oxide (lime) reacts with sulfur dioxide to form calcium sulfite:
CaO + SO2 → CaSO3
Aerobic oxidation of the CaSO3 gives CaSO4, anhydrite. Most gypsum sold in Europe comes from flue-gas desulfurization.
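The 1:1 molar stoichiometry of this capture reaction allows a back-of-the-envelope estimate of lime consumption. The sketch below uses standard molar masses and ideal stoichiometry only; real scrubbers run with excess sorbent, and the function names are illustrative:

```python
# Back-of-the-envelope stoichiometry for flue-gas desulfurization:
# CaO + SO2 -> CaSO3, i.e. one mole of lime binds one mole of SO2.
M_CAO, M_SO2, M_CASO3 = 56.08, 64.07, 120.15  # g/mol

def lime_needed(so2_tonnes: float) -> float:
    """Tonnes of CaO required to bind a given tonnage of SO2 (ideal 1:1 molar)."""
    return so2_tonnes * M_CAO / M_SO2

def sulfite_produced(so2_tonnes: float) -> float:
    """Tonnes of CaSO3 formed from that SO2."""
    return so2_tonnes * M_CASO3 / M_SO2

print(lime_needed(1.0))       # ~0.88 t CaO per tonne of SO2
print(sulfite_produced(1.0))  # ~1.88 t CaSO3 per tonne of SO2
```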
To control sulfur emissions, dozens of methods with relatively high efficiencies have been developed for fitting to coal-fired power plants. Sulfur can be removed from coal during burning by using limestone as a bed material in fluidized bed combustion.
Sulfur can also be removed from fuels before burning, preventing formation of SO2 when the fuel is burnt. The Claus process is used in refineries to produce sulfur as a byproduct. The Stretford process has also been used to remove sulfur from fuel. Redox processes using iron oxides can also be used, for example, Lo-Cat or Sulferox.
Fuel additives such as calcium additives and magnesium carboxylate may be used in marine engines to lower the emission of sulfur dioxide gases into the atmosphere.
Effects on ozone layer
Sulfur dioxide aerosols in the stratosphere can contribute to ozone depletion in the presence of chlorofluorocarbons and other halogenated ozone-depleting substances. The effects of volcanic eruptions containing sulfur dioxide aerosols on the ozone layer are complex, however. In the absence of anthropogenic or biogenic halogenated compounds in the lower stratosphere, depletion of dinitrogen pentoxide in the middle stratosphere associated with its reactivity to the aerosols can promote ozone formation. Injection of sulfur dioxide and large amounts of water vapor into the stratosphere following the 2022 eruption of Hunga Tonga-Hunga Haʻapai resulted in altered atmospheric circulation that promoted a decrease in ozone in the southern latitudes but an increase in the tropics. The additional presence of hydrochloric acid in eruptions can result in net ozone depletion.
Impact on climate change
Projected impacts
Solar geoengineering
Properties
[Table: thermal and physical properties of saturated liquid sulfur dioxide]
| Physical sciences | Covalent oxides | Chemistry |
50982 | https://en.wikipedia.org/wiki/CT%20scan | CT scan | A computed tomography scan (CT scan), formerly called computed axial tomography scan (CAT scan), is a medical imaging technique used to obtain detailed internal images of the body. The personnel that perform CT scans are called radiographers or radiology technologists.
CT scanners use a rotating X-ray tube and a row of detectors placed in a gantry to measure X-ray attenuations by different tissues inside the body. The multiple X-ray measurements taken from different angles are then processed on a computer using tomographic reconstruction algorithms to produce tomographic (cross-sectional) images (virtual "slices") of a body. CT scans can be used in patients with metallic implants or pacemakers, for whom magnetic resonance imaging (MRI) is contraindicated.
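As an illustration of the reconstruction step (a minimal sketch, not any scanner's actual algorithm), filtered back-projection can be demonstrated with scikit-image's Radon-transform utilities, assumed here to be available in a recent version of the library:

```python
# Minimal sketch of tomographic reconstruction by filtered back-projection,
# using scikit-image (assumed available); clinical scanners use far more
# elaborate, vendor-specific algorithms.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

image = shepp_logan_phantom()                          # standard test "patient"
theta = np.linspace(0.0, 180.0, 180, endpoint=False)   # projection angles (deg)
sinogram = radon(image, theta=theta)                   # simulated X-ray measurements
reconstruction = iradon(sinogram, theta=theta, filter_name="ramp")

rms_error = np.sqrt(np.mean((reconstruction - image) ** 2))
print(f"RMS reconstruction error: {rms_error:.4f}")
```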
Since its development in the 1970s, CT scanning has proven to be a versatile imaging technique. While CT is most prominently used in medical diagnosis, it can also be used to form images of non-living objects. The 1979 Nobel Prize in Physiology or Medicine was awarded jointly to South African-American physicist Allan MacLeod Cormack and British electrical engineer Godfrey Hounsfield "for the development of computer-assisted tomography".
Types
On the basis of image acquisition and procedures, various types of scanners are available on the market.
Sequential CT
Sequential CT, also known as step-and-shoot CT, is a scanning method in which the CT table moves stepwise. The table increments to a particular location and then stops, which is followed by the X-ray tube rotation and acquisition of a slice. The table then increments again, and another slice is taken. The table movement stops while slices are taken, which results in an increased scanning time.
Spiral CT
Spinning tube, commonly called spiral CT, or helical CT, is an imaging technique in which an entire X-ray tube is spun around the central axis of the area being scanned. These are the dominant type of scanners on the market because they have been manufactured longer and offer a lower cost of production and purchase. The main limitation of this type of CT is the bulk and inertia of the equipment (X-ray tube assembly and detector array on the opposite side of the circle) which limits the speed at which the equipment can spin. Some designs use two X-ray sources and detector arrays offset by an angle, as a technique to improve temporal resolution.
Electron beam tomography
Electron beam tomography (EBT) is a specific form of CT in which a sufficiently large X-ray tube is constructed so that only the path of the electrons, travelling between the cathode and anode of the X-ray tube, is swept using deflection coils. This type has a major advantage: sweep speeds can be much faster, allowing for less blurry imaging of moving structures, such as the heart and arteries. Fewer scanners of this design have been produced compared with spinning tube types, mainly due to the higher cost associated with building a much larger X-ray tube and detector array, and limited anatomical coverage.
Dual Energy CT
Dual energy CT, also known as spectral CT, is an advancement of computed tomography in which two energies are used to create two sets of data. A dual energy CT may employ a dual source, a single source with a dual detector layer, or a single source with energy switching to obtain two different sets of data.
Dual source CT is an advanced scanner with a two X-ray tube detector system, unlike conventional single-tube systems. The two detector systems are mounted on a single gantry at 90° in the same plane. Dual source CT scanners allow fast scanning with higher temporal resolution by acquiring a full CT slice in only half a rotation. Fast imaging reduces motion blurring at high heart rates and potentially allows for a shorter breath-hold time. This is particularly useful for ill patients who have difficulty holding their breath or who are unable to take heart-rate-lowering medication.
Single Source with Energy switching is another mode of Dual energy CT in which a single tube is operated at two different energies by switching the energies frequently.
CT perfusion imaging
CT perfusion imaging is a specific form of CT to assess flow through blood vessels whilst injecting a contrast agent. Blood flow, blood transit time, and organ blood volume, can all be calculated with reasonable sensitivity and specificity. This type of CT may be used on the heart, although sensitivity and specificity for detecting abnormalities are still lower than for other forms of CT. This may also be used on the brain, where CT perfusion imaging can often detect poor brain perfusion well before it is detected using a conventional spiral CT scan. This is better for stroke diagnosis than other CT types.
PET CT
Positron emission tomography–computed tomography is a hybrid CT modality which combines, in a single gantry, a positron emission tomography (PET) scanner and an X-ray computed tomography (CT) scanner, to acquire sequential images from both devices in the same session, which are combined into a single superposed (co-registered) image. Thus, functional imaging obtained by PET, which depicts the spatial distribution of metabolic or biochemical activity in the body can be more precisely aligned or correlated with anatomic imaging obtained by CT scanning.
PET-CT gives both anatomical and functional details of an organ under examination and is helpful in detecting different types of cancer.
Medical use
Since its introduction in the 1970s, CT has become an important tool in medical imaging to supplement conventional X-ray imaging and medical ultrasonography. It has more recently been used for preventive medicine or screening for disease, for example, CT colonography for people with a high risk of colon cancer, or full-motion heart scans for people with a high risk of heart disease. Several institutions offer full-body scans for the general population although this practice goes against the advice and official position of many professional organizations in the field primarily due to the radiation dose applied.
The use of CT scans has increased dramatically over the last two decades in many countries. An estimated 72 million scans were performed in the United States in 2007 and more than 80 million in 2015.
Head
CT scanning of the head is typically used to detect infarction (stroke), tumors, calcifications, haemorrhage, and bone trauma. Of the above, hypodense (dark) structures can indicate edema and infarction, hyperdense (bright) structures indicate calcifications and haemorrhage and bone trauma can be seen as disjunction in bone windows. Tumors can be detected by the swelling and anatomical distortion they cause, or by surrounding edema. CT scanning of the head is also used in CT-guided stereotactic surgery and radiosurgery for treatment of intracranial tumors, arteriovenous malformations, and other surgically treatable conditions using a device known as the N-localizer.
Neck
Contrast CT is generally the initial study of choice for neck masses in adults. CT of the thyroid plays an important role in the evaluation of thyroid cancer. CT scans often find thyroid abnormalities incidentally, and CT is therefore often the preferred investigation modality for them.
Lungs
A CT scan can be used for detecting both acute and chronic changes in the lung parenchyma, the tissue of the lungs. It is particularly relevant here because normal two-dimensional X-rays do not show such defects. A variety of techniques are used, depending on the suspected abnormality. For evaluation of chronic interstitial processes such as emphysema and fibrosis, thin sections with high spatial frequency reconstructions are used; often scans are performed both on inspiration and expiration. This special technique, called high-resolution CT, produces a sampling of the lung rather than continuous images.
Bronchial wall thickening can be seen on lung CTs and generally (but not always) implies inflammation of the bronchi.
An incidentally found nodule in the absence of symptoms (sometimes referred to as an incidentaloma) may raise concerns that it might represent a tumor, either benign or malignant. Perhaps persuaded by fear, patients and doctors sometimes agree to an intensive schedule of CT scans, sometimes up to every three months and beyond the recommended guidelines, in an attempt to do surveillance on the nodules. However, established guidelines advise that patients without a prior history of cancer and whose solid nodules have not grown over a two-year period are unlikely to have any malignant cancer. For this reason, and because no research provides supporting evidence that intensive surveillance gives better outcomes, and because of risks associated with having CT scans, patients should not receive CT screening in excess of those recommended by established guidelines.
Angiography
Computed tomography angiography (CTA) is a type of contrast CT to visualize the arteries and veins throughout the body. This ranges from arteries serving the brain to those bringing blood to the lungs, kidneys, arms and legs. An example of this type of exam is CT pulmonary angiogram (CTPA) used to diagnose pulmonary embolism (PE). It employs computed tomography and an iodine-based contrast agent to obtain an image of the pulmonary arteries. CT scans can reduce the risk of angiography by providing clinicians with more information about the positioning and number of clots prior to the procedure.
Cardiac
A CT scan of the heart is performed to gain knowledge about cardiac or coronary anatomy. Traditionally, cardiac CT scans are used to detect, diagnose, or follow up coronary artery disease. More recently CT has played a key role in the fast-evolving field of transcatheter structural heart interventions, more specifically in the transcatheter repair and replacement of heart valves.
The main forms of cardiac CT scanning are:
Coronary CT angiography (CCTA): the use of CT to assess the coronary arteries of the heart. The subject receives an intravenous injection of radiocontrast, and then the heart is scanned using a high-speed CT scanner, allowing radiologists to assess the extent of occlusion in the coronary arteries, usually to diagnose coronary artery disease.
Coronary CT calcium scan: also used for the assessment of severity of coronary artery disease. Specifically, it looks for calcium deposits in the coronary arteries that can narrow arteries and increase the risk of a heart attack. A typical coronary CT calcium scan is done without the use of radiocontrast, but it can possibly be done from contrast-enhanced images as well.
To better visualize the anatomy, post-processing of the images is common. Most common are multiplanar reconstructions (MPR) and volume rendering. For more complex anatomies and procedures, such as heart valve interventions, a true 3D reconstruction or a 3D print is created based on these CT images to gain a deeper understanding.
Abdomen and pelvis
CT is an accurate technique for diagnosis of abdominal diseases like Crohn's disease, GIT bleeding, and diagnosis and staging of cancer, as well as follow-up after cancer treatment to assess response. It is commonly used to investigate acute abdominal pain.
Non-contrast-enhanced CT scans are the gold standard for diagnosing kidney stone disease. They allow clinicians to estimate the size, volume, and density of stones, helping to guide further treatment; with size being especially important in predicting the time to spontaneous passage of a stone.
Axial skeleton and extremities
For the axial skeleton and extremities, CT is often used to image complex fractures, especially ones around joints, because of its ability to reconstruct the area of interest in multiple planes. Fractures, ligamentous injuries, and dislocations can easily be recognized with a 0.2 mm resolution. With modern dual-energy CT scanners, new areas of use have been established, such as aiding in the diagnosis of gout.
Biomechanical use
CT is used in biomechanics to quickly reveal the geometry, anatomy, density and elastic moduli of biological tissues.
Other uses
Industrial use
Industrial CT scanning (industrial computed tomography) is a process which uses X-ray equipment to produce 3D representations of components both externally and internally. Industrial CT scanning has been used in many areas of industry for internal inspection of components. Some of the key uses for CT scanning have been flaw detection, failure analysis, metrology, assembly analysis, image-based finite element methods and reverse engineering applications. CT scanning is also employed in the imaging and conservation of museum artifacts.
Aviation security
CT scanning has also found an application in transport security (predominantly airport security), where it is currently used in a materials analysis context for explosives detection with CTX (an explosive-detection device), and is also under consideration for automated baggage/parcel security scanning using computer vision based object recognition algorithms that target the detection of specific threat items based on 3D appearance (e.g. guns, knives, liquid containers). Its use in airport security was pioneered at Shannon Airport in March 2022, ending the ban there on liquids over 100 ml; Heathrow Airport planned a full roll-out on 1 December 2022, and the TSA spent $781.2 million on an order for over 1,000 scanners, ready to go live in the summer.
Geological use
X-ray CT is used in geological studies to quickly reveal materials inside a drill core. Dense minerals such as pyrite and barite appear brighter and less dense components such as clay appear dull in CT images.
Paleontological use
Traditional methods of studying fossils are often destructive, such as the use of thin sections and physical preparation. X-ray CT is used in paleontology to non-destructively visualize fossils in 3D. This has many advantages. For example, fragile structures that might otherwise never be studied can be examined. In addition, models of fossils can be freely moved around in virtual 3D space and inspected without damaging the fossil.
Cultural heritage use
X-ray CT and micro-CT can also be used for the conservation and preservation of objects of cultural heritage. For many fragile objects, direct research and observation can be damaging and can degrade the object over time. Using CT scans, conservators and researchers are able to determine the material composition of the objects they are exploring, such as the position of ink along the layers of a scroll, without any additional harm. These scans have been optimal for research focused on the workings of the Antikythera mechanism or the text hidden inside the charred outer layers of the En-Gedi Scroll. However, they are not optimal for every object subject to these kinds of research questions, as there are certain artifacts like the Herculaneum papyri in which the material composition has very little variation along the inside of the object. After scanning these objects, computational methods can be employed to examine the insides of these objects, as was the case with the virtual unwrapping of the En-Gedi scroll and the Herculaneum papyri. Micro-CT has also proved useful for analyzing more recent artifacts such as still-sealed historic correspondence that employed the technique of letterlocking (complex folding and cuts) that provided a "tamper-evident locking mechanism". Further examples of use cases in archaeology are imaging the contents of sarcophagi or ceramics.
CWI in Amsterdam has recently collaborated with the Rijksmuseum to investigate the interior details of art objects, in a framework called IntACT.
Micro organism research
Various types of fungi can degrade wood to different degrees. One Belgian research group used three-dimensional X-ray CT with sub-micron resolution to show that fungi can penetrate micropores of 0.6 μm under certain conditions.
Timber sawmill
Sawmills use industrial CT scanners to detect internal defects, for instance knots, to improve the total value of timber production. Many sawmills plan to incorporate this detection tool to improve productivity in the long run, although the initial investment cost is high.
Interpretation of results
Presentation
The result of a CT scan is a volume of voxels, which may be presented to a human observer by various methods, which broadly fit into the following categories:
Slices (of varying thickness). Thin slices are generally regarded as planes representing a thickness of less than 3 mm; thick slices, as planes representing a thickness between 3 mm and 5 mm.
Projection, including maximum intensity projection and average intensity projection
Volume rendering (VR)
Technically, all volume renderings become projections when viewed on a 2-dimensional display, making the distinction between projections and volume renderings somewhat vague. Typical volume-rendering models combine, for example, coloring and shading to create realistic and observable representations.
Two-dimensional CT images are conventionally rendered so that the view is as though looking up at it from the patient's feet. Hence, the left side of the image is to the patient's right and vice versa, while anterior in the image also is the patient's anterior and vice versa. This left-right interchange corresponds to the view that physicians generally have in reality when positioned in front of patients.
Grayscale
Pixels in an image obtained by CT scanning are displayed in terms of relative radiodensity. The pixel itself is displayed according to the mean attenuation of the tissue(s) that it corresponds to on a scale from +3,071 (most attenuating) to −1,024 (least attenuating) on the Hounsfield scale. A pixel is a two-dimensional unit based on the matrix size and the field of view. When the CT slice thickness is also factored in, the unit is known as a voxel, which is a three-dimensional unit. Water has an attenuation of 0 Hounsfield units (HU), while air is −1,000 HU, cancellous bone is typically +400 HU, and cranial bone can reach 2,000 HU. The attenuation of metallic implants depends on the atomic number of the element used: titanium usually measures about +1,000 HU, while iron or steel can completely block the X-ray beam and is therefore responsible for the well-known line artifacts in computed tomograms. Artifacts are caused by abrupt transitions between low- and high-density materials, which results in data values that exceed the dynamic range of the processing electronics.
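The Hounsfield scale is a linear rescaling of the measured attenuation coefficient μ, fixed at 0 HU for water and −1,000 HU for air. A minimal sketch of this definition; the attenuation values used are illustrative rather than calibrated:

```python
# The Hounsfield unit is a linear rescaling of the linear attenuation
# coefficient mu, fixed at 0 HU for water and -1000 HU for air.
def hounsfield(mu: float, mu_water: float, mu_air: float) -> float:
    return 1000.0 * (mu - mu_water) / (mu_water - mu_air)

# Illustrative attenuation coefficients (cm^-1) at a typical CT beam energy;
# the exact numbers depend on the effective energy of the scanner.
MU_WATER, MU_AIR = 0.206, 0.0004
print(hounsfield(MU_WATER, MU_WATER, MU_AIR))  # 0.0     (water)
print(hounsfield(MU_AIR, MU_WATER, MU_AIR))    # -1000.0 (air)
```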
Windowing
CT data sets have a very high dynamic range which must be reduced for display or printing. This is typically done via a process of "windowing", which maps a range (the "window") of pixel values to a grayscale ramp. For example, CT images of the brain are commonly viewed with a window extending from 0 HU to 80 HU. Pixel values of 0 and lower are displayed as black; values of 80 and higher are displayed as white; values within the window are displayed as a gray intensity proportional to position within the window. The window used for display must be matched to the X-ray density of the object of interest, in order to optimize the visible detail. Window width and window level parameters are used to control the windowing of a scan.
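A minimal sketch of windowing as described above, mapping the 0–80 HU brain window (window level 40, window width 80) onto 8-bit grayscale; the function name is illustrative:

```python
import numpy as np

def window(hu: np.ndarray, level: float, width: float) -> np.ndarray:
    """Map Hounsfield values to 0-255 grayscale: values below the window
    render black, values above it white, and values in between scale
    linearly with position in the window."""
    lo, hi = level - width / 2.0, level + width / 2.0
    scaled = (np.clip(hu, lo, hi) - lo) / (hi - lo)
    return (scaled * 255).astype(np.uint8)

hu = np.array([-1000, 0, 40, 80, 2000])  # air, water, brain, blood, bone
print(window(hu, level=40, width=80))    # [  0   0 127 255 255]
```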
Multiplanar reconstruction and projections
Multiplanar reconstruction (MPR) is the process of converting data from one anatomical plane (usually transverse) to other planes. It can be used for thin slices as well as projections. Multiplanar reconstruction is possible as present CT scanners provide almost isotropic resolution.
MPR is used in almost every scan. The spine is frequently examined with it: an image of the spine in the axial plane can only show one vertebral bone at a time and cannot show its relation to other vertebral bones. By reformatting the data in other planes, visualization of the relative positions can be achieved in the sagittal and coronal planes.
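With near-isotropic voxels, multiplanar reformatting amounts to reslicing the volume along different axes. A minimal sketch, assuming a NumPy volume indexed as [axial slice, row, column]; the toy data is random and purely illustrative:

```python
import numpy as np

# Toy volume indexed as [z (axial), y (anterior-posterior), x (left-right)].
volume = np.random.rand(50, 128, 128)

axial    = volume[25, :, :]   # the plane natively acquired
coronal  = volume[:, 64, :]   # reformatted along y
sagittal = volume[:, :, 64]   # reformatted along x

print(axial.shape, coronal.shape, sagittal.shape)
```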
New software allows the reconstruction of data in non-orthogonal (oblique) planes, which help in the visualization of organs which are not in orthogonal planes. It is better suited for visualization of the anatomical structure of the bronchi as they do not lie orthogonal to the direction of the scan.
Curved-plane reconstruction (or curved planar reformation = CPR) is performed mainly for the evaluation of vessels. This type of reconstruction helps to straighten the bends in a vessel, thereby helping to visualize a whole vessel in a single image or in multiple images. After a vessel has been "straightened", measurements such as cross-sectional area and length can be made. This is helpful in preoperative assessment of a surgical procedure.
For 2D projections used in radiation therapy for quality assurance and planning of external beam radiotherapy, including digitally reconstructed radiographs, see Beam's eye view.
Volume rendering
A threshold value of radiodensity is set by the operator (e.g., a level that corresponds to bone). With the help of edge detection image processing algorithms, a 3D model can be constructed from the initial data and displayed on screen. Various thresholds can be used to get multiple models; each anatomical component, such as muscle, bone and cartilage, can be differentiated on the basis of the different colours given to them. However, this mode of operation cannot show interior structures.
Surface rendering is a limited technique, as it displays only the surfaces that meet a particular threshold density and that face the viewer. In volume rendering, however, transparency, colours and shading are used, which makes it easy to present a volume in a single image. For example, pelvic bones could be displayed as semi-transparent, so that, even when viewing at an oblique angle, one part of the image does not hide another.
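A minimal sketch of the thresholding step behind surface rendering, extracting an isosurface with scikit-image's marching cubes (assumed available); the toy volume and the 200 HU threshold are illustrative, and real workstations use more sophisticated pipelines:

```python
import numpy as np
from skimage.measure import marching_cubes

# Toy CT-like volume: a bright sphere (bone-like, ~400 HU) in soft tissue.
z, y, x = np.mgrid[-32:32, -32:32, -32:32]
volume = np.where(z**2 + y**2 + x**2 < 20**2, 400.0, 40.0)

# Extract the isosurface at an operator-chosen threshold (e.g. 200 HU).
verts, faces, normals, values = marching_cubes(volume, level=200.0)
print(f"{len(verts)} vertices, {len(faces)} triangular faces")
```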
Image quality
Dose versus image quality
An important issue within radiology today is how to reduce the radiation dose during CT examinations without compromising the image quality. In general, higher radiation doses result in higher-resolution images, while lower doses lead to increased image noise and unsharp images. However, increased dosage raises the adverse side effects, including the risk of radiation-induced cancer – a four-phase abdominal CT gives the same radiation dose as 300 chest X-rays. Several methods that can reduce the exposure to ionizing radiation during a CT scan exist.
New software technology can significantly reduce the required radiation dose. New iterative tomographic reconstruction algorithms (e.g., iterative Sparse Asymptotic Minimum Variance) could offer super-resolution without requiring higher radiation dose.
Individualize the examination and adjust the radiation dose to the body type and body organ examined. Different body types and organs require different amounts of radiation.
Higher resolution is not always suitable, such as detection of small pulmonary masses.
Artifacts
Although images produced by CT are generally faithful representations of the scanned volume, the technique is susceptible to a number of artifacts, such as the following:
Streak artifact: Streaks are often seen around materials that block most X-rays, such as metal or bone. Numerous factors contribute to these streaks: undersampling, photon starvation, motion, beam hardening, and Compton scatter. This type of artifact commonly occurs in the posterior fossa of the brain, or if there are metal implants. The streaks can be reduced using newer reconstruction techniques. Approaches such as metal artifact reduction (MAR) can also reduce this artifact. MAR techniques include spectral imaging, where CT images are taken with photons of different energy levels, and then synthesized into monochromatic images with special software such as GSI (Gemstone Spectral Imaging).
Partial volume effect: This appears as "blurring" of edges. It is due to the scanner being unable to differentiate between a small amount of high-density material (e.g., bone) and a larger amount of lower density (e.g., cartilage). The reconstruction assumes that the X-ray attenuation within each voxel is homogeneous; this may not be the case at sharp edges. This is most commonly seen in the z-direction (craniocaudal direction), due to the conventional use of highly anisotropic voxels, which have a much lower out-of-plane resolution than in-plane resolution. This can be partially overcome by scanning using thinner slices, or an isotropic acquisition on a modern scanner.
Ring artifact: Probably the most common mechanical artifact, the image of one or many "rings" appears within an image. Rings are usually caused by variations in the response of individual elements in a two-dimensional X-ray detector due to defect or miscalibration. Ring artifacts can largely be reduced by intensity normalization, also referred to as flat-field correction (a minimal sketch of this correction follows this list). Remaining rings can be suppressed by a transformation to polar space, where they become linear stripes. A comparative evaluation of ring artifact reduction on X-ray tomography images showed that the method of Sijbers and Postnov can effectively suppress ring artifacts.
Noise: This appears as grain on the image and is caused by a low signal-to-noise ratio. It occurs more commonly when a thin slice thickness is used. It can also occur when the power supplied to the X-ray tube is insufficient to penetrate the anatomy.
Windmill: Streaking appearances can occur when the detectors intersect the reconstruction plane. This can be reduced with filters or a reduction in pitch.
Beam hardening: This can give a "cupped appearance" when grayscale is visualized as height. It occurs because conventional sources, like X-ray tubes, emit a polychromatic spectrum. Photons of higher photon energy levels are typically attenuated less. Because of this, the mean energy of the spectrum increases when passing through the object, often described as the beam getting "harder". This leads to an effect that increasingly underestimates material thickness if not corrected. Many algorithms exist to correct for this artifact. They can be divided into mono- and multi-material methods.
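The flat-field correction mentioned under ring artifacts normalizes each detector element against reference frames. A minimal sketch (illustrative, not any vendor's implementation); the dark and flat frames are assumed to be pre-acquired calibration images:

```python
import numpy as np

def flat_field_correct(raw: np.ndarray, dark: np.ndarray,
                       flat: np.ndarray) -> np.ndarray:
    """Normalize detector response: (raw - dark) / (flat - dark).
    Per-element gain variations that would otherwise produce rings
    divide out of the corrected projection."""
    denom = np.clip(flat - dark, 1e-6, None)  # avoid division by zero
    return (raw - dark) / denom
```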
Advantages
CT scanning has several advantages over traditional two-dimensional medical radiography. First, CT eliminates the superimposition of images of structures outside the area of interest. Second, CT scans have greater image resolution, enabling examination of finer details. CT can distinguish between tissues that differ in radiographic density by 1% or less. Third, CT scanning enables multiplanar reformatted imaging: scan data can be visualized in the transverse (or axial), coronal, or sagittal plane, depending on the diagnostic task.
The improved resolution of CT has permitted the development of new investigations. For example, CT angiography avoids the invasive insertion of a catheter. CT scanning can perform a virtual colonoscopy with greater accuracy and less discomfort for the patient than a traditional colonoscopy. Virtual colonography is far more accurate than a barium enema for detection of tumors and uses a lower radiation dose.
CT is a moderate-to-high radiation diagnostic technique. The radiation dose for a particular examination depends on multiple factors: volume scanned, patient build, number and type of scan protocol, and desired resolution and image quality. Two helical CT scanning parameters, tube current and pitch, can be adjusted easily and have a profound effect on radiation. CT scanning is more accurate than two-dimensional radiographs in evaluating anterior interbody fusion, although they may still over-read the extent of fusion.
Adverse effects
Cancer
The radiation used in CT scans can damage body cells, including DNA molecules, which can lead to radiation-induced cancer. The radiation doses received from CT scans is variable. Compared to the lowest dose X-ray techniques, CT scans can have 100 to 1,000 times higher dose than conventional X-rays. However, a lumbar spine X-ray has a similar dose as a head CT. Articles in the media often exaggerate the relative dose of CT by comparing the lowest-dose X-ray techniques (chest X-ray) with the highest-dose CT techniques. In general, a routine abdominal CT has a radiation dose similar to three years of average background radiation.
Large scale population-based studies have consistently demonstrated that low dose radiation from CT scans has impacts on cancer incidence in a variety of cancers. For example, in a large population-based Australian cohort it was found that up to 3.7% of brain cancers were caused by CT scan radiation. Some experts project that in the future, between three and five percent of all cancers would result from medical imaging. An Australian study of 10.9 million people reported that the increased incidence of cancer after CT scan exposure in this cohort was mostly due to irradiation. In this group, one in every 1,800 CT scans was followed by an excess cancer. If the lifetime risk of developing cancer is 40% then the absolute risk rises to 40.05% after a CT. The risks of CT scan radiation are especially important in patients undergoing recurrent CT scans within a short time span of one to five years.
Some experts note that CT scans are known to be "overused," and "there is distressingly little evidence of better health outcomes associated with the current high rate of scans." On the other hand, a recent paper analyzing the data of patients who received high cumulative doses showed a high degree of appropriate use. This creates an important issue of cancer risk to these patients. Moreover, a highly significant finding that was previously unreported is that some patients received >100 mSv dose from CT scans in a single day, which counteracts existing criticisms some investigators may have on the effects of protracted versus acute exposure.
There are contrarian views and the debate is ongoing. Some studies have shown that publications indicating an increased risk of cancer from typical doses of body CT scans are plagued with serious methodological limitations and several highly improbable results, concluding that no evidence indicates such low doses cause any long-term harm.
One study estimated that as many as 0.4% of cancers in the United States resulted from CT scans, and that this may have increased to as much as 1.5 to 2% based on the rate of CT use in 2007. Others dispute this estimate, as there is no consensus that the low levels of radiation used in CT scans cause damage. Lower radiation doses are used in many cases, such as in the investigation of renal colic.
A person's age plays a significant role in the subsequent risk of cancer. The estimated lifetime cancer mortality risk from an abdominal CT of a one-year-old is 0.1%, or 1:1000 scans. The risk for someone who is 40 years old is half that of someone who is 20 years old, with substantially less risk in the elderly. The International Commission on Radiological Protection estimates that exposing a fetus to 10 mGy (a unit of radiation exposure) increases the rate of cancer before 20 years of age from 0.03% to 0.04% (for reference, a CT pulmonary angiogram exposes a fetus to 4 mGy). A 2012 review did not find an association between medical radiation and cancer risk in children, noting however the existence of limitations in the evidence on which the review is based. CT scans can be performed with different settings for lower exposure in children, with most manufacturers of CT scanners as of 2007 having this function built in. Furthermore, certain conditions can require children to be exposed to multiple CT scans.
Current recommendations are to inform patients of the risks of CT scanning. However, employees of imaging centers tend not to communicate such risks unless patients ask.
Contrast reactions
In the United States, half of CT scans are contrast CTs using intravenously injected radiocontrast agents. The most common reactions from these agents are mild, including nausea, vomiting, and an itching rash. Severe life-threatening reactions may rarely occur. Overall, reactions occur in 1 to 3% of people with nonionic contrast and 4 to 12% with ionic contrast. Skin rashes may appear within a week in 3% of people.
The old radiocontrast agents caused anaphylaxis in 1% of cases while the newer, low-osmolar agents cause reactions in 0.01–0.04% of cases. Death occurs in about 2 to 30 people per 1,000,000 administrations, with newer agents being safer.
There is a higher risk of mortality in those who are female, elderly or in poor health, usually secondary to either anaphylaxis or acute kidney injury.
The contrast agent may induce contrast-induced nephropathy. This occurs in 2 to 7% of people who receive these agents, with greater risk in those who have preexisting kidney failure, preexisting diabetes, or reduced intravascular volume. People with mild kidney impairment are usually advised to ensure full hydration for several hours before and after the injection. For moderate kidney failure, the use of iodinated contrast should be avoided; this may mean using an alternative technique instead of CT. Those with severe kidney failure requiring dialysis require less strict precautions, as their kidneys have so little function remaining that any further damage would not be noticeable and the dialysis will remove the contrast agent; it is normally recommended, however, to arrange dialysis as soon as possible following contrast administration to minimize any adverse effects of the contrast.
In addition to the use of intravenous contrast, orally administered contrast agents are frequently used when examining the abdomen. These are frequently the same as the intravenous contrast agents, merely diluted to approximately 10% of the concentration. However, oral alternatives to iodinated contrast exist, such as very dilute (0.5–1% w/v) barium sulfate suspensions. Dilute barium sulfate has the advantage that it does not cause allergic-type reactions or kidney failure, but cannot be used in patients with suspected bowel perforation or suspected bowel injury, as leakage of barium sulfate from damaged bowel can cause fatal peritonitis.
Side effects from contrast agents, administered intravenously in some CT scans, might impair kidney performance in patients with kidney disease, although this risk is now believed to be lower than previously thought.
Scan dose
The table reports average radiation exposures; however, there can be wide variation in radiation doses between similar scan types, where the highest dose can be as much as 22 times higher than the lowest. A typical plain film X-ray involves a radiation dose of 0.01 to 0.15 mGy, while a typical CT can involve 10–20 mGy for specific organs and can go up to 80 mGy for certain specialized CT scans.
For purposes of comparison, the world average dose rate from naturally occurring sources of background radiation is 2.4 mSv per year, equal for practical purposes in this application to 2.4 mGy per year. While there is some variation, most people (99%) receive less than 7 mSv per year of background radiation. Medical imaging as of 2007 accounted for half of the radiation exposure of those in the United States, with CT scans making up two thirds of this amount. In the United Kingdom, it accounts for 15% of radiation exposure. The average radiation dose from medical sources is ≈0.6 mSv per person globally as of 2007. Those in the nuclear industry in the United States are limited to doses of 50 mSv a year and 100 mSv every 5 years.
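Such comparisons reduce to simple arithmetic: divide an effective dose by the yearly background rate. The minimal sketch below (Python) uses the 2.4 mSv/year world-average figure from the text; the example scan doses are typical round numbers, not authoritative reference values.

```python
# Express effective doses as years of average background exposure.
# 2.4 mSv/year is the world-average background figure cited above;
# the scan doses below are illustrative round numbers.

WORLD_AVG_BACKGROUND_MSV_PER_YEAR = 2.4

def years_of_background(dose_msv: float) -> float:
    return dose_msv / WORLD_AVG_BACKGROUND_MSV_PER_YEAR

for name, dose_msv in [("chest X-ray", 0.1), ("head CT", 2.0), ("abdominal CT", 8.0)]:
    print(f"{name}: {dose_msv} mSv ~ {years_of_background(dose_msv):.1f} years of background")
```

For the 8 mSv abdominal CT this gives about 3.3 years, consistent with the "three years of average background radiation" figure quoted earlier.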
Lead is the main material used by radiography personnel for shielding against scattered X-rays.
Radiation dose units
The radiation dose reported in the gray or mGy unit is proportional to the amount of energy that the irradiated body part is expected to absorb, and the physical effect (such as DNA double strand breaks) on the cells' chemical bonds by X-ray radiation is proportional to that energy.
The sievert unit is used to report the effective dose. In the context of CT scans, the sievert does not correspond to the actual radiation dose absorbed by the scanned body part; instead, it refers to a hypothetical whole-body dose estimated to carry the same probability of inducing cancer as the CT scan. Thus, as shown in the table above, the actual radiation absorbed by a scanned body part is often much larger than the effective dose suggests. A specific measure, termed the computed tomography dose index (CTDI), is commonly used as an estimate of the radiation absorbed dose for tissue within the scan region, and is automatically computed by medical CT scanners.
The equivalent dose, also reported in sieverts, is the effective dose for the case in which the whole body actually absorbs the same radiation dose. For non-uniform radiation, or radiation given to only part of the body, which is common for CT examinations, using the local equivalent dose alone would overstate the biological risks to the entire organism.
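To make the distinction concrete, the sketch below shows how an effective dose is assembled as a tissue-weighted sum of organ equivalent doses. The weights are a small illustrative subset in the spirit of the ICRP tissue-weighting scheme, and the organ doses are hypothetical; neither should be read as reference values.

```python
# Effective dose as a weighted sum of organ equivalent doses:
#     E = sum_T ( w_T * H_T )
# The weights below are an illustrative subset, not a complete table.

TISSUE_WEIGHTS = {"lung": 0.12, "stomach": 0.12, "colon": 0.12, "liver": 0.04}

def effective_dose_msv(organ_doses_msv: dict[str, float]) -> float:
    return sum(TISSUE_WEIGHTS[tissue] * dose for tissue, dose in organ_doses_msv.items())

# Hypothetical abdominal scan: high local organ doses, modest effective dose.
print(effective_dose_msv({"stomach": 20.0, "colon": 20.0, "liver": 15.0}))  # 5.4
```

Note how organ doses of 15–20 mGy collapse to an effective dose of a few mSv, which is why the absorbed dose in the scanned region exceeds the effective dose.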
Effects of radiation
Most adverse health effects of radiation exposure may be grouped in two general categories:
deterministic effects (harmful tissue reactions) due in large part to the killing/malfunction of cells following high doses;
stochastic effects, i.e., cancer and heritable effects involving either cancer development in exposed individuals owing to mutation of somatic cells or heritable disease in their offspring owing to mutation of reproductive (germ) cells.
The added lifetime risk of developing cancer from a single abdominal CT of 8 mSv is estimated to be 0.05%, or one in 2,000.
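The arithmetic behind figures like this is a linear no-threshold (LNT) extrapolation: a nominal lifetime risk coefficient of roughly 5% per sievert multiplied by the dose. Both the model and the coefficient are debated, as the surrounding text makes clear; this sketch only reproduces the bookkeeping.

```python
# Linear no-threshold (LNT) back-of-envelope: excess lifetime cancer risk
# ~ dose (Sv) * nominal risk coefficient (~5% per Sv). A contested model,
# shown here only to reproduce the ~0.05% / 8 mSv figure quoted above.

NOMINAL_RISK_PER_SV = 0.05

def excess_lifetime_risk(dose_msv: float) -> float:
    return (dose_msv / 1000.0) * NOMINAL_RISK_PER_SV

print(f"{excess_lifetime_risk(8.0):.2%}")  # 0.04%, i.e. roughly 1 in 2,000
```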
Because of increased susceptibility of fetuses to radiation exposure, the radiation dosage of a CT scan is an important consideration in the choice of medical imaging in pregnancy.
Excess doses
In October 2009, the US Food and Drug Administration (FDA) initiated an investigation of brain perfusion CT (PCT) scans, based on radiation burns caused by incorrect settings at one particular facility for this particular type of CT scan. Over 200 patients were exposed to radiation at approximately eight times the expected dose over an 18-month period; more than 40% of them lost patches of hair. The event prompted a call for increased CT quality assurance programs; it was noted that "while unnecessary radiation exposure should be avoided, a medically needed CT scan obtained with appropriate acquisition parameter has benefits that outweigh the radiation risks." Similar problems have been reported at other centers. These incidents are believed to be due to human error.
Procedure
The CT scan procedure varies according to the type of study and the organ being imaged. The patient is made to lie on the CT table, and the table is centered according to the body part. An IV line is established in the case of contrast-enhanced CT. After the proper volume and rate of contrast are selected on the pressure injector, the scout is taken to localize and plan the scan. Once the plan is selected, the contrast is given. The raw data are processed according to the study, and proper windowing is applied to make the scans easier to interpret.
Preparation
Patient preparation may vary according to the type of scan. General patient preparation includes:
Signing the informed consent.
Removal of metallic objects and jewelry from the region of interest.
Changing to the hospital gown according to hospital protocol.
Checking of kidney function, especially creatinine and urea levels (in case of CECT).
Mechanism
Computed tomography operates by using an X-ray generator that rotates around the object; X-ray detectors are positioned on the opposite side of the circle from the X-ray source. As the X-rays pass through the patient, they are attenuated differently by various tissues according to the tissue density. A visual representation of the raw data obtained is called a sinogram, yet it is not sufficient for interpretation. Once the scan data has been acquired, the data must be processed using a form of tomographic reconstruction, which produces a series of cross-sectional images. These cross-sectional images are made up of small units of pixels or voxels.
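As a toy illustration of the sinogram-to-image step (not a clinical pipeline), the sketch below uses scikit-image's Radon transform to simulate projections of a standard test phantom and recovers the slice by filtered back projection, one classical reconstruction method; modern scanners typically use more sophisticated, often iterative, algorithms.

```python
# Toy CT reconstruction: simulate a sinogram with the Radon transform,
# then invert it with filtered back projection (FBP).
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

image = shepp_logan_phantom()                       # synthetic "slice"
angles = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(image, theta=angles)               # what the detector ring records
reconstruction = iradon(sinogram, theta=angles, filter_name="ramp")

print(f"mean absolute error: {np.abs(reconstruction - image).mean():.4f}")
```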
Pixels in an image obtained by CT scanning are displayed in terms of relative radiodensity. The pixel itself is displayed according to the mean attenuation of the tissue(s) that it corresponds to on a scale from +3,071 (most attenuating) to −1,024 (least attenuating) on the Hounsfield scale. A pixel is a two dimensional unit based on the matrix size and the field of view. When the CT slice thickness is also factored in, the unit is known as a voxel, which is a three-dimensional unit.
Water has an attenuation of 0 Hounsfield units (HU), while air is −1,000 HU; cancellous bone is typically +400 HU, and cranial bone can reach 2,000 HU or more (os temporale) and can cause artifacts. The attenuation of metallic implants depends on the atomic number of the element used: titanium usually has a value of about +1,000 HU, while iron or steel can completely extinguish the X-ray beam and is therefore responsible for well-known line artifacts in computed tomograms. Artifacts are caused by abrupt transitions between low- and high-density materials, which result in data values that exceed the dynamic range of the processing electronics. Two-dimensional CT images are conventionally rendered so that the view is as though looking up at the slice from the patient's feet. Hence, the left side of the image is the patient's right and vice versa, while anterior in the image is also the patient's anterior and vice versa. This left-right interchange corresponds to the view that physicians generally have in reality when positioned in front of patients.
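A minimal sketch of the rescaling behind these numbers: measured linear attenuation coefficients are mapped to the Hounsfield scale so that water sits at 0 HU and air at −1,000 HU. The attenuation values below are illustrative stand-ins, not measured coefficients.

```python
# Hounsfield rescaling: HU = 1000 * (mu - mu_water) / (mu_water - mu_air).
# Illustrative attenuation coefficients (per cm) at a typical CT energy.

MU_WATER, MU_AIR = 0.19, 0.0

def to_hounsfield(mu: float) -> float:
    return 1000.0 * (mu - MU_WATER) / (MU_WATER - MU_AIR)

print(to_hounsfield(0.19))  # water ->     0 HU
print(to_hounsfield(0.00))  # air   -> -1000 HU
print(to_hounsfield(0.27))  # denser tissue -> ~ +421 HU (cancellous-bone range)
```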
Initially, the images generated in CT scans were in the transverse (axial) anatomical plane, perpendicular to the long axis of the body. Modern scanners allow the scan data to be reformatted as images in other planes. Digital geometry processing can generate a three-dimensional image of an object inside the body from a series of two-dimensional radiographic images taken by rotation around a fixed axis. These cross-sectional images are widely used for medical diagnosis and therapy.
Contrast
Contrast media used for X-ray CT, as well as for plain film X-ray, are called radiocontrasts. Radiocontrasts for CT are, in general, iodine-based. This is useful to highlight structures such as blood vessels that otherwise would be difficult to delineate from their surroundings. Using contrast material can also help to obtain functional information about tissues. Often, images are taken both with and without radiocontrast.
History
The history of X-ray computed tomography goes back to at least 1917 with the mathematical theory of the Radon transform. In October 1963, William H. Oldendorf received a U.S. patent for a "radiant energy apparatus for investigating selected areas of interior objects obscured by dense material". The first commercially viable CT scanner was invented by Godfrey Hounsfield in 1972.
It is often claimed that revenues from the sales of The Beatles' records in the 1960s helped fund the development of the first CT scanner at EMI. The first production X-ray CT machines were in fact called EMI scanners.
Etymology
The word tomography is derived from the Greek tomos ('slice') and graphein ('to write'). Computed tomography was originally known as the "EMI scan", as it was developed in the early 1970s at a research branch of EMI, a company best known today for its music and recording business. It was later known as computed axial tomography (CAT or CT scan) and body section röntgenography.
The term CAT scan is no longer in technical use because current CT scans enable multiplanar reconstructions, making CT scan the more appropriate term; it is used by radiologists in common vernacular as well as in textbooks and scientific papers.
In Medical Subject Headings (MeSH), computed axial tomography was used from 1977 to 1979, but the current indexing explicitly includes X-ray in the title.
The term sinogram was introduced by Paul Edholm and Bertil Jacobson in 1975.
Society and culture
Campaigns
In response to increased concern by the public and the ongoing progress of best practices, the Alliance for Radiation Safety in Pediatric Imaging was formed within the Society for Pediatric Radiology. In concert with the American Society of Radiologic Technologists, the American College of Radiology and the American Association of Physicists in Medicine, the Society for Pediatric Radiology developed and launched the Image Gently Campaign which is designed to maintain high-quality imaging studies while using the lowest doses and best radiation safety practices available on pediatric patients. This initiative has been endorsed and applied by a growing list of various professional medical organizations around the world and has received support and assistance from companies that manufacture equipment used in Radiology.
Following upon the success of the Image Gently campaign, the American College of Radiology, the Radiological Society of North America, the American Association of Physicists in Medicine and the American Society of Radiologic Technologists have launched a similar campaign to address this issue in the adult population called Image Wisely.
The World Health Organization and International Atomic Energy Agency (IAEA) of the United Nations have also been working in this area and have ongoing projects designed to broaden best practices and lower patient radiation dose.
Prevalence
Use of CT has increased dramatically over the last two decades. An estimated 72 million scans were performed in the United States in 2007, accounting for close to half of the total per-capita dose rate from radiologic and nuclear medicine procedures. Of the CT scans, six to eleven percent are done in children, an increase of seven- to eightfold from 1980. Similar increases have been seen in Europe and Asia. In Calgary, Canada, 12.1% of people who presented to the emergency department with an urgent complaint received a CT scan, most commonly of the head or abdomen. The percentage who received CT, however, varied markedly by the emergency physician who saw them, from 1.8% to 25%. In emergency departments in the United States, CT or MRI imaging was done in 15% of people who presented with injuries as of 2007 (up from 6% in 1998).
The increased use of CT scans has been the greatest in two fields: screening of adults (screening CT of the lung in smokers, virtual colonoscopy, CT cardiac screening, and whole-body CT in asymptomatic patients) and CT imaging of children. Shortening of the scanning time to around 1 second, eliminating the strict need for the subject to remain still or be sedated, is one of the main reasons for the large increase in the pediatric population (especially for the diagnosis of appendicitis). As of 2007, in the United States a proportion of CT scans are performed unnecessarily. Some estimates place this number at 30%. There are a number of reasons for this including: legal concerns, financial incentives, and desire by the public. For example, some healthy people avidly pay to receive full-body CT scans as screening. In that case, it is not at all clear that the benefits outweigh the risks and costs. Deciding whether and how to treat incidentalomas is complex, radiation exposure is not negligible, and the money for the scans involves opportunity cost.
Manufacturers
Major manufacturers of CT scanning devices and equipment are:
Canon Medical Systems Corporation
Fujifilm Healthcare
GE HealthCare
Neusoft Medical Systems
Philips
Siemens Healthineers
United Imaging
Research
Photon-counting computed tomography is a CT technique currently under development. Typical CT scanners use energy-integrating detectors: photons are measured as a voltage on a capacitor that is proportional to the X-rays detected. However, this technique is susceptible to noise and other factors which can affect the linearity of the voltage-to-X-ray-intensity relationship. Photon-counting detectors (PCDs) are still affected by noise, but noise does not change the measured counts of photons. PCDs have several potential advantages, including improving signal (and contrast) to noise ratios, reducing doses, improving spatial resolution, and, through the use of several energies, distinguishing multiple contrast agents. PCDs have only recently become feasible in CT scanners due to improvements in detector technologies that can cope with the volume and rate of data required. As of February 2016, photon-counting CT was in use at three sites. Some early research has found the dose reduction potential of photon-counting CT for breast imaging to be very promising. In view of recent findings of high cumulative doses to patients from recurrent CT scans, there has been a push for scanning technologies and techniques that reduce ionising radiation doses to patients to sub-millisievert (sub-mSv in the literature) levels during the CT scan process, a goal that has long proved elusive.
| Technology | Imaging | null |
51025 | https://en.wikipedia.org/wiki/Electroplating | Electroplating | Electroplating, also known as electrochemical deposition or electrodeposition, is a process for producing a metal coating on a solid substrate through the reduction of cations of that metal by means of a direct electric current. The part to be coated acts as the cathode (negative electrode) of an electrolytic cell; the electrolyte is a solution of a salt whose cation is the metal to be coated, and the anode (positive electrode) is usually either a block of that metal, or of some inert conductive material. The current is provided by an external power supply.
Electroplating is widely used in industry and decorative arts to improve the surface qualities of objects—such as resistance to abrasion and corrosion, lubricity, reflectivity, electrical conductivity, or appearance. It is used to build up thickness on undersized or worn-out parts and to manufacture metal plates with complex shape, a process called electroforming. It is used to deposit copper and other conductors in forming printed circuit boards and copper interconnects in integrated circuits. It is also used to purify metals such as copper.
The aforementioned electroplating of metals uses an electroreduction process (that is, a negative or cathodic current is on the working electrode). The term "electroplating" is also used occasionally for processes that occur under electro-oxidation (i.e., a positive or anodic current on the working electrode), although such processes are more commonly referred to as anodizing rather than electroplating. One such example is the formation of silver chloride on silver wire in chloride solutions to make silver/silver-chloride (AgCl) electrodes.
Electropolishing, a process that uses an electric current to selectively remove the outermost layer from the surface of a metal object, is the reverse of the process of electroplating.
Throwing power is an important parameter that provides a measure of the uniformity of electroplating current, and consequently the uniformity of the electroplated metal thickness, on regions of the part that are near to the anode compared to regions that are far from it. It depends mostly on the composition and temperature of the electroplating solution, as well as on the operating current density. A higher throwing power of the plating bath results in a more uniform coating.
Process
The electrolyte in the electrolytic plating cell should contain positive ions (cations) of the metal to be deposited. These cations are reduced at the cathode to the metal in the zero valence state. For example, the electrolyte for copper electroplating can be a solution of copper(II) sulfate, which dissociates into Cu2+ cations and anions. At the cathode, the Cu2+ is reduced to metallic copper by gaining two electrons.
When the anode is made of the metal that is intended for coating onto the cathode, the opposite reaction may occur at the anode, turning it into dissolved cations. For example, copper would be oxidized at the anode to Cu2+ by losing two electrons. In this case, the rate at which the anode is dissolved will equal the rate at which the cathode is plated, and thus the ions in the electrolyte bath are continuously replenished by the anode. The net result is the effective transfer of metal from the anode to the cathode.
The anode may instead be made of a material that resists electrochemical oxidation, such as lead or carbon. Oxygen, hydrogen peroxide, and some other byproducts are then produced at the anode instead. In this case, ions of the metal to be plated must be replenished (continuously or periodically) in the bath as they are drawn out of the solution.
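In either arrangement, the quantity of metal deposited is governed by Faraday's law of electrolysis: the plated mass is proportional to the charge passed. A minimal sketch for copper from a Cu2+ bath, assuming 100% current efficiency (real baths fall somewhat short):

```python
# Faraday's law of electrolysis: m = (I * t / F) * (M / z),
# assuming 100% current efficiency (an idealization).

FARADAY_C_PER_MOL = 96485.0
MOLAR_MASS_CU = 63.55   # g/mol
ELECTRONS_PER_ION = 2   # Cu2+ + 2e- -> Cu

def plated_mass_g(current_a: float, time_s: float) -> float:
    moles_of_electrons = current_a * time_s / FARADAY_C_PER_MOL
    return (moles_of_electrons / ELECTRONS_PER_ION) * MOLAR_MASS_CU

print(f"{plated_mass_g(2.0, 3600.0):.2f} g")  # ~2.37 g of copper in one hour at 2 A
```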
The plating is most commonly a single metallic element, not an alloy. However, some alloys can be electrodeposited, notably brass and solder. Plated "alloys" are not "true alloys" (solid solutions), but rather they are tiny crystals of the elemental metals being plated. In the case of plated solder, it is sometimes deemed necessary to have a true alloy, and the plated solder is melted to allow the tin and lead to combine into a true alloy. The true alloy is more corrosion-resistant than the as-plated mixture.
Many plating baths include cyanides of other metals (such as potassium cyanide) in addition to cyanides of the metal to be deposited. These free cyanides facilitate anode corrosion, help to maintain a constant metal ion level, and contribute to conductivity. Additionally, non-metal chemicals such as carbonates and phosphates may be added to increase conductivity.
When plating is not desired on certain areas of the substrate, stop-offs are applied to prevent the bath from coming in contact with the substrate. Typical stop-offs include tape, foil, lacquers, and waxes.
Strike
Initially, a special plating deposit called a strike or flash may be used to form a very thin (typically less than 0.1 μm thick) plating with high quality and good adherence to the substrate. This serves as a foundation for subsequent plating processes. A strike uses a high current density and a bath with a low ion concentration. The process is slow, so more efficient plating processes are used once the desired strike thickness is obtained.
The striking method is also used in combination with the plating of different metals. If it is desirable to plate one type of deposit onto a metal to improve corrosion resistance but this metal has inherently poor adhesion to the substrate, then a strike can be first deposited that is compatible with both. One example of this situation is the poor adhesion of electrolytic nickel on zinc alloys, in which case a copper strike is used, which has good adherence to both.
Pulse electroplating
The pulse electroplating or pulse electrodeposition (PED) process involves the swift alternating of the electrical potential or current between two different values, resulting in a series of pulses of equal amplitude, duration, and polarity, separated by zero current. By changing the pulse amplitude and width, it is possible to change the deposited film's composition and thickness.
The experimental parameters of pulse electroplating usually consist of peak current/potential, duty cycle, frequency, and effective current/potential. Peak current/potential is the maximum setting of electroplating current or potential. Duty cycle is the effective portion of time in a certain electroplating period with the current or potential applied. The effective current/potential is calculated by multiplying the duty cycle and peak value of the current or potential. Pulse electroplating could help to improve the quality of electroplated film and release the internal stress built up during fast deposition. A combination of the short duty cycle and high frequency could decrease surface cracks. However, in order to maintain the constant effective current or potential, a high-performance power supply may be required to provide high current/potential and a fast switch. Another common problem of pulse electroplating is that the anode material could get plated and contaminated during the reverse electroplating, especially for a high-cost, inert electrode such as platinum.
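A minimal sketch of that bookkeeping, with hypothetical pulse parameters:

```python
# Pulse-plating bookkeeping: effective current = peak current * duty cycle,
# where duty cycle = t_on / (t_on + t_off). Parameter values are hypothetical.

def duty_cycle(t_on_ms: float, t_off_ms: float) -> float:
    return t_on_ms / (t_on_ms + t_off_ms)

def effective_current_a(peak_a: float, t_on_ms: float, t_off_ms: float) -> float:
    return peak_a * duty_cycle(t_on_ms, t_off_ms)

# 10 A peaks, 1 ms on / 4 ms off -> 20% duty cycle, 2 A effective current.
print(effective_current_a(10.0, 1.0, 4.0))  # 2.0
```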
Other factors that affect the pulse electroplating include temperature, anode-to-cathode gap, and stirring. Sometimes, pulse electroplating can be performed in a heated electroplating bath to increase the deposition rate, since the rate of most chemical reactions increases exponentially with temperature per the Arrhenius law. The anode-to-cathode gap is related to the current distribution between anode and cathode. A small gap-to-sample-area ratio may cause uneven distribution of current and affect the surface topology of the plated sample. Stirring may increase the transfer/diffusion rate of metal ions from the bulk solution to the electrode surface. The ideal stirring setting varies for different metal electroplating processes.
Brush electroplating
A closely related process is brush electroplating, in which localized areas or entire items are plated using a brush saturated with plating solution. The brush, typically a graphite body wrapped with an absorbent cloth material that both holds the plating solution and prevents direct contact with the item being plated, is connected to the anode of a low-voltage direct-current power source delivering 3–4 amperes, and the item to be plated (the cathode) is grounded. The operator dips the brush in plating solution and then applies it to the item, moving the brush continually to get an even distribution of the plating material.
Brush electroplating has several advantages over tank plating, including portability, the ability to plate items that for some reason cannot be tank plated (one application was the plating of portions of very large decorative support columns in a building restoration), low or no masking requirements, and comparatively low plating-solution volume requirements. It is mainly used industrially for part repair, such as applying a nickel or silver deposit to worn bearing surfaces; with technological advancement, uniform deposits up to .025" thick have been achieved. Disadvantages compared to tank plating can include greater operator involvement (tank plating can frequently be done with minimal attention, and the solutions used are often toxic) and inconsistency in achieving as great a plate thickness.
Barrel plating
This technique of electroplating is one of the most commonly used in industry for large numbers of small objects. The objects are placed in a barrel-shaped non-conductive cage and then immersed in a chemical bath containing dissolved ions of the metal that is to be plated onto them. The barrel is then rotated, and electrical currents are run through the various pieces in the barrel, which complete circuits as they touch one another. The result is a very uniform and efficient plating process, though the finish on the end products will likely suffer from abrasion during plating. It is unsuitable for highly ornamental or precisely engineered items.
Cleanliness
Cleanliness is essential to successful electroplating, since molecular layers of oil can prevent adhesion of the coating. ASTM B322 is a standard guide for cleaning metals prior to electroplating. Cleaning includes solvent cleaning, hot alkaline detergent cleaning, electrocleaning, ultrasonic cleaning and acid treatment. The most common industrial test for cleanliness is the waterbreak test, in which the surface is thoroughly rinsed and held vertical. Hydrophobic contaminants such as oils cause the water to bead and break up, allowing the water to drain rapidly. Perfectly clean metal surfaces are hydrophilic and will retain an unbroken sheet of water that does not bead up or drain off. ASTM F22 describes a version of this test. This test does not detect hydrophilic contaminants, but electroplating can displace these easily, since the solutions are water-based. Surfactants such as soap reduce the sensitivity of the test and must be thoroughly rinsed off.
Test cells and characterization
Throwing power
Throwing power (or macro throwing power) is an important parameter that provides a measure of the uniformity of electroplating current, and consequently the uniformity of the electroplated metal thickness, on regions of the part that are near the anode compared to regions that are far from it. It depends mostly on the composition and temperature of the electroplating solution. Micro throwing power refers to the extent to which a process can fill or coat small recesses such as through-holes. Throwing power can be characterized by the dimensionless Wagner number:
Wa = RTκ / (αFL|i|)

where R is the universal gas constant, T is the operating temperature, κ is the ionic conductivity of the plating solution, F is the Faraday constant, L is the equivalent size of the plated object, α is the transfer coefficient, and i is the surface-averaged total (including hydrogen evolution) current density. The Wagner number quantifies the ratio of kinetic to ohmic resistances. A higher Wagner number produces a more uniform deposition. This can be achieved in practice by decreasing the size (L) of the plated object, reducing the current density |i|, adding chemicals that lower α (making the electric current less sensitive to voltage), and raising the solution conductivity (e.g., by adding acid). Concurrent hydrogen evolution usually improves the uniformity of electroplating by increasing |i|; however, this effect can be offset by blockage due to hydrogen bubbles and hydroxide deposits.
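A small sketch of that calculation, with illustrative (not measured) bath parameters; the formula is the one given above:

```python
# Wagner number Wa = R*T*kappa / (alpha * F * L * |i|), as defined above.
# All parameter values below are illustrative, not measured.

R_GAS = 8.314        # J/(mol*K), universal gas constant
FARADAY = 96485.0    # C/mol, Faraday constant

def wagner_number(T_k: float, kappa_s_per_m: float, alpha: float,
                  L_m: float, i_a_per_m2: float) -> float:
    return (R_GAS * T_k * kappa_s_per_m) / (alpha * FARADAY * L_m * abs(i_a_per_m2))

# Hypothetical acid-copper bath: 25 cm part, 50 S/m conductivity, 200 A/m^2.
print(f"Wa = {wagner_number(298.0, 50.0, 0.5, 0.25, 200.0):.3f}")  # ~0.051
```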
The Wagner number is rather difficult to measure accurately; therefore, other related parameters that are easier to obtain experimentally with standard cells are usually used instead. These parameters are derived from two ratios: the metal distribution ratio M, which is the plating thickness in a specified region of the cathode "close" to the anode divided by the thickness in a region "far" from the anode, and the ratio K of the distances of these regions through the electrolyte to the anode. In a Haring–Blum cell, for example, K = 5 for its two independent cathodes, so a cell yielding a plating thickness ratio of M = 6 has a Haring–Blum throwing power of 100(K − M)/K = −20%. Other conventions include the Heatley, Field, and Luke throwing powers. A more uniform thickness is obtained by making the throwing power larger (less negative) according to any of these definitions; Luke's throwing power has the advantage of having a minimum of 0 and a maximum of 100.
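A short sketch of the Haring–Blum convention as reconstructed above (the formula should be treated as a plausible reconstruction from the worked example rather than a quoted definition):

```python
# Haring-Blum throwing power, TP = 100 * (K - M) / K percent, where K is the
# far/near distance ratio and M the near/far thickness ratio (reconstruction).

def haring_blum_throwing_power(k: float, m: float) -> float:
    return 100.0 * (k - m) / k

print(haring_blum_throwing_power(5.0, 6.0))  # -20.0 percent, the example above
```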
Parameters that describe cell performance such as throwing power are measured in small test cells of various designs that aim to reproduce conditions similar to those found in the production plating bath.
Haring–Blum cell
The Haring–Blum cell is used to determine the macro throwing power of a plating bath. The cell consists of two parallel cathodes with a fixed anode in the middle. The cathodes are at distances from the anode in the ratio of 1:5. The macro throwing power is calculated from the thickness of plating at the two cathodes when a direct current is passed for a specific period of time. The cell is fabricated out of perspex or glass.
Hull cell
The Hull cell is a type of test cell used to semi-quantitatively check the condition of an electroplating bath. It allows assessment of the usable current density range, optimization of additive concentration, recognition of impurity effects, and indication of macro throwing power capability. The Hull cell replicates the plating bath on a lab scale. It is filled with a sample of the plating solution and an appropriate anode, which is connected to a rectifier. The "work" is replaced with a Hull cell test panel that will be plated to show the "health" of the bath.
The Hull cell is a trapezoidal container that holds 267 milliliters of plating bath solution. Its shape allows the test panel to be placed at an angle to the anode. As a result, the deposit is plated at a range of current densities along its length, which can be measured with a Hull cell ruler. The solution volume allows for a semi-quantitative measurement of additive concentration: a 1 gram addition to 267 mL is equivalent to 0.5 oz/gal in the plating tank.
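Plating handbooks commonly quote an empirical relation for the primary current density along a standard 267 mL Hull cell panel, i ≈ I(5.10 − 5.24 log10 x), with I the total cell current in amperes, x the distance in centimeters from the high-current-density edge, and i in A/dm². The sketch below uses those commonly quoted coefficients, which should be treated as approximate and valid only over the middle of the panel.

```python
import math

# Empirical Hull cell relation (267 mL cell): i(x) ~ I * (5.10 - 5.24*log10(x)),
# i in A/dm^2, I in A, x in cm from the high-current-density edge.
# Coefficients are the commonly quoted handbook values (approximate).

def hull_current_density(total_current_a: float, x_cm: float) -> float:
    if not 0.6 <= x_cm <= 8.3:
        raise ValueError("relation holds only over the middle of the panel")
    return total_current_a * (5.10 - 5.24 * math.log10(x_cm))

print(f"{hull_current_density(2.0, 2.0):.1f} A/dm^2")  # ~7.0 at 2 cm with a 2 A cell
```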
Effects
Electroplating changes the chemical, physical, and mechanical properties of the workpiece. An example of a chemical change is when nickel plating improves corrosion resistance. An example of a physical change is a change in the outward appearance. An example of a mechanical change is a change in tensile strength or surface hardness, which is a required attribute in the tooling industry. Electroplating of acid gold on underlying copper- or nickel-plated circuits reduces contact resistance as well as surface hardness. Copper-plated areas of mild steel act as a mask if case-hardening of such areas are not desired. Tin-plated steel is chromium-plated to prevent dulling of the surface due to oxidation of tin.
Specific metals
Alternatives to electroplating
There are a number of alternative processes to produce metallic coatings on solid substrates that do not involve electrolytic reduction:
Electroless deposition uses a bath containing metal ions and chemicals that will reduce them to the metal by redox reactions. The reaction should be autocatalytic, so that new metal will be deposited over the growing coating, rather than precipitated as a powder through the whole bath at once. Electroless processes are widely used to deposit nickel-phosphorus or nickel-boron alloys for wear and corrosion resistance, silver for mirror-making, copper for printed circuit boards, and many more. A major advantage of these processes over electroplating is that they can produce coatings of uniform thickness over surfaces of arbitrary shape, even inside holes, and the substrate need not be electrically conducting. Another major benefit is that they do not need power sources or specially-shaped anodes. Disadvantages include lower deposition speed, consumption of relatively expensive chemicals, and a limited choice of coating metals.
Immersion coating processes exploit displacement reactions in which the substrate metal is oxidized to soluble ions while ions of the coating metal get reduced and deposited in its place. This process is limited to very thin coatings, since the reaction stops after the substrate has been completely covered. Nevertheless, it has some important applications, such as the electroless nickel immersion gold (ENIG) process used to obtain gold-plated electrical contacts on printed circuit boards.
Sputtering uses an electron beam or a plasma to eject microscopic particles of the metal onto the substrate in a vacuum.
Physical vapor deposition transfers the metal onto the substrate by evaporating it.
Chemical vapor deposition uses a gas containing a volatile compound of the metal, which gets deposited onto the substrate as a result of a chemical reaction.
Gilding is a traditional way to attach a gold layer onto metals by applying a very thin sheet of gold held in place by an adhesive.
History
Electroplating was invented by Italian chemist Luigi Valentino Brugnatelli in 1805. Brugnatelli used his colleague Alessandro Volta's invention of five years earlier, the voltaic pile, to facilitate the first electrodeposition. Brugnatelli's inventions were suppressed by the French Academy of Sciences and were not used in general industry for the following thirty years. By 1839, scientists in Britain and Russia had independently devised metal-deposition processes similar to Brugnatelli's for the copper electroplating of printing press plates.
Research from the 1930s had theorized that electroplating might have been performed in the Parthian Empire using a device resembling a Baghdad Battery, but this has since been refuted; the items were fire-gilded using mercury.
Boris Jacobi in Russia not only rediscovered galvanoplastics, but developed electrotyping and galvanoplastic sculpture. Galvanoplastics quickly came into fashion in Russia, with such people as inventor Peter Bagration, scientist Heinrich Lenz, and science-fiction author Vladimir Odoyevsky all contributing to further development of the technology. Among the most notable cases of electroplating usage in mid-19th-century Russia were the gigantic galvanoplastic sculptures of St. Isaac's Cathedral in Saint Petersburg and the gold-electroplated dome of the Cathedral of Christ the Saviour in Moscow, the third-tallest Orthodox church in the world.
Soon after, John Wright of Birmingham, England, discovered that potassium cyanide was a suitable electrolyte for gold and silver electroplating. Wright's associates, George Elkington and Henry Elkington, were awarded the first patents for electroplating in 1840. The two then founded the electroplating industry in Birmingham, from where it spread around the world. The Woolrich Electrical Generator of 1844, now in Thinktank, Birmingham Science Museum, is the earliest electrical generator used in industry. It was used by the Elkingtons.
The Norddeutsche Affinerie in Hamburg was the first modern electroplating plant starting its production in 1876.
As the science of electrochemistry grew, its relationship to electroplating became understood and other types of non-decorative metal electroplating were developed. Commercial electroplating of nickel, brass, tin, and zinc were developed by the 1850s. Electroplating baths and equipment based on the patents of the Elkingtons were scaled up to accommodate the plating of numerous large-scale objects and for specific manufacturing and engineering applications.
The plating industry received a big boost with the advent of the development of electric generators in the late 19th century. With the higher currents available, metal machine components, hardware, and automotive parts requiring corrosion protection and enhanced wear properties, along with better appearance, could be processed in bulk.
The two World Wars and the growing aviation industry gave impetus to further developments and refinements, including such processes as hard chromium plating, bronze alloy plating, sulfamate nickel plating, and numerous other plating processes. Plating equipment evolved from manually-operated tar-lined wooden tanks to automated equipment capable of processing thousands of kilograms per hour of parts.
One of the American physicist Richard Feynman's first projects was to develop technology for electroplating metal onto plastic. Feynman developed the original idea of his friend into a successful invention, allowing his employer (and friend) to keep commercial promises he had made but could not have fulfilled otherwise.
| Technology | Metallurgy | null |
51042 | https://en.wikipedia.org/wiki/Paleoclimatology | Paleoclimatology | Paleoclimatology (British spelling, palaeoclimatology) is the scientific study of climates predating the invention of meteorological instruments, when no direct measurement data were available. As instrumental records only span a tiny part of Earth's history, the reconstruction of ancient climate is important to understand natural variation and the evolution of the current climate.
Paleoclimatology uses a variety of proxy methods from Earth and life sciences to obtain data previously preserved within rocks, sediments, boreholes, ice sheets, tree rings, corals, shells, and microfossils. Combined with techniques to date the proxies, the paleoclimate records are used to determine the past states of Earth's atmosphere.
The scientific field of paleoclimatology came to maturity in the 20th century. Notable periods studied by paleoclimatologists include the frequent glaciations that Earth has undergone, rapid cooling events like the Younger Dryas, and the rapid warming during the Paleocene–Eocene Thermal Maximum. Studies of past changes in the environment and biodiversity often reflect on the current situation, specifically the impact of climate on mass extinctions and biotic recovery and current global warming.
History
Notions of a changing climate most likely evolved in ancient Egypt, Mesopotamia, the Indus Valley, and China, where prolonged periods of drought and flooding were experienced. In the seventeenth century, Robert Hooke postulated that fossils of giant turtles found in Dorset could only be explained by a once warmer climate, which he thought could be explained by a shift in Earth's axis. Fossils were, at that time, often explained as a consequence of a biblical flood. Systematic observations of sunspots were begun by amateur astronomer Heinrich Schwabe in the early 19th century, prompting discussion of the Sun's influence on Earth's climate.
The scientific study of paleoclimatology began to take shape in the early 19th century, when discoveries about glaciations and natural changes in Earth's past climate helped to understand the greenhouse effect. It was only in the 20th century that paleoclimatology became a unified scientific field. Before, different aspects of Earth's climate history were studied by a variety of disciplines. At the end of the 20th century, the empirical research into Earth's ancient climates started to be combined with computer models of increasing complexity. A new objective also developed in this period: finding ancient analog climates that could provide information about current climate change.
Reconstructing ancient climates
Paleoclimatologists employ a wide variety of techniques to deduce ancient climates. The techniques used depend on which variable has to be reconstructed (this could be temperature, precipitation, or something else) and how long ago the climate of interest occurred. For instance, the deep marine record, the source of most isotopic data, exists only on oceanic plates, which are eventually subducted; the oldest remaining material is about 200 million years old. Older sediments are also more prone to corruption by diagenesis, owing to the millions of years of disruption experienced by the rock formations, such as pressure, tectonic activity, and fluid flow. These factors often reduce the quality or quantity of data, causing the resolution of, and confidence in, the data to decrease with age.
Specific techniques used to infer ancient climate conditions include the analysis of lake sediment cores and speleothems, which examine sediment layers and rock growth formations respectively, along with dating methods utilizing oxygen, carbon, and uranium isotopes.
Proxies for climate
Direct quantitative measurements
Direct quantitative measurement is the most direct approach to understanding change in a climate. Comparing recent data with older data allows a researcher to gain a basic understanding of weather and climate changes within an area. The method has a disadvantage: instrumental climate records only began in the mid-1800s, so researchers can draw on only about 150 years of data. That is not helpful when trying to map the climate of an area 10,000 years ago, which is where more complex methods come in.
Ice
Mountain glaciers and the polar ice caps/ice sheets provide much data in paleoclimatology. Ice-coring projects in the ice caps of Greenland and Antarctica have yielded data going back several hundred thousand years, over 800,000 years in the case of the EPICA project.
Air trapped within fallen snow becomes encased in tiny bubbles as the snow is compressed into ice in the glacier under the weight of later years' snow. The trapped air has proven a tremendously valuable source for direct measurement of the composition of air from the time the ice was formed.
Layering can be observed because of seasonal pauses in ice accumulation and can be used to establish chronology, associating specific depths of the core with ranges of time.
Changes in the layering thickness can be used to determine changes in precipitation or temperature.
Changes in the quantity of oxygen-18 (δ18O) in ice layers represent changes in average ocean surface temperature. Water molecules containing the heavier O-18 evaporate at a higher temperature than water molecules containing the normal oxygen-16 isotope. The ratio of O-18 to O-16 will be higher as temperature increases, but it also depends on factors such as water salinity and the volume of water locked up in ice sheets. Various cycles in isotope ratios have been detected (a worked example of the δ18O calculation follows this list).
Pollen has been observed in the ice cores and can be used to understand which plants were present as the layer formed. Pollen is produced in abundance and its distribution is typically well understood. A pollen count for a specific layer can be produced by observing the total amount of pollen categorized by type (shape) in a controlled sample of that layer. Changes in plant frequency over time can be plotted through statistical analysis of pollen counts in the core. Knowing which plants were present leads to an understanding of precipitation and temperature, and types of fauna present. Palynology includes the study of pollen for these purposes.
Volcanic ash is contained in some layers, and can be used to establish the time of the layer's formation. Volcanic events distribute ash with a unique set of properties (shape and color of particles, chemical signature). Establishing the ash's source will give a time period to associate with the layer of ice.
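Returning to the oxygen-isotope item above: δ18O values are conventionally reported in delta notation relative to a reference standard (VSMOW for water and ice), δ18O = (R_sample/R_standard − 1) × 1000, in per mil. A minimal sketch of that calculation, with a hypothetical sample ratio:

```python
# Delta notation for the oxygen-isotope proxy:
#     delta18O = (R_sample / R_standard - 1) * 1000   [per mil]
# where R is the 18O/16O ratio; VSMOW is the usual standard for water/ice.

R_VSMOW = 2005.2e-6  # 18O/16O of the VSMOW reference water

def delta_o18_permil(r_sample: float, r_standard: float = R_VSMOW) -> float:
    return (r_sample / r_standard - 1.0) * 1000.0

print(f"{delta_o18_permil(1935.0e-6):.1f} per mil")  # ~ -35, typical of polar ice
```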
A multinational consortium, the European Project for Ice Coring in Antarctica (EPICA), has drilled an ice core in Dome C on the East Antarctic ice sheet and retrieved ice from roughly 800,000 years ago. The international ice core community has, under the auspices of International Partnerships in Ice Core Sciences (IPICS), defined a priority project to obtain the oldest possible ice core record from Antarctica, an ice core record reaching back to or towards 1.5 million years ago.
Dendroclimatology
Climatic information can be obtained through an understanding of changes in tree growth. Generally, trees respond to changes in climatic variables by speeding up or slowing down growth, which in turn is generally reflected by a greater or lesser thickness in growth rings. Different species, however, respond to changes in climatic variables in different ways. A tree-ring record is established by compiling information from many living trees in a specific area. This is done by comparing the number, thickness, ring boundaries, and pattern matching of tree growth rings.
The differences in thickness displayed in the growth rings in trees can often indicate the quality of conditions in the environment, and the fitness of the tree species evaluated. Different species of trees will display different growth responses to the changes in the climate. An evaluation of multiple trees within the same species, along with one of trees in different species, will allow for a more accurate analysis of the changing variables within the climate and how they affected the surrounding species.
Older intact wood that has escaped decay can extend the time covered by the record by matching the ring depth changes to contemporary specimens. By using that method, some areas have tree-ring records dating back a few thousand years. Older wood not connected to a contemporary record can be dated generally with radiocarbon techniques. A tree-ring record can be used to produce information regarding precipitation, temperature, hydrology, and fire corresponding to a particular area.
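The pattern matching described above can be illustrated with a toy cross-dating sketch: slide an undated ring-width sequence along a dated master chronology and take the offset with the highest correlation. The series here are random stand-ins, not real tree-ring data.

```python
import numpy as np

# Toy cross-dating: find where an undated ring-width series best matches a
# dated master chronology by maximizing the sliding correlation.

rng = np.random.default_rng(0)
master = rng.normal(size=300)                               # dated master series
sample = master[180:230] + rng.normal(scale=0.2, size=50)   # noisy undated piece

def best_offset(master: np.ndarray, sample: np.ndarray) -> int:
    n = len(sample)
    scores = [np.corrcoef(master[i:i + n], sample)[0, 1]
              for i in range(len(master) - n + 1)]
    return int(np.argmax(scores))

print(best_offset(master, sample))  # 180 -> the specimen's rings align there
```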
Sedimentary content
On a longer time scale, geologists must refer to the sedimentary record for data.
Sediments, sometimes lithified to form rock, may contain remnants of preserved vegetation, animals, plankton, or pollen, which may be characteristic of certain climatic zones.
Biomarker molecules such as the alkenones may yield information about their temperature of formation.
Chemical signatures, particularly Mg/Ca ratio of calcite in Foraminifera tests, can be used to reconstruct past temperature.
Isotopic ratios can provide further information. Specifically, the δ18O record responds to changes in temperature and ice volume, while the δ13C record reflects a range of factors that are often difficult to disentangle.
Sedimentary facies
On a longer time scale, the rock record may show signs of sea level rise and fall, and features such as "fossilised" sand dunes can be identified. Scientists can get a grasp of long-term climate by studying sedimentary rock going back billions of years. The division of Earth history into separate periods is largely based on visible changes in sedimentary rock layers that demarcate major changes in conditions. Often, they include major shifts in climate.
Sclerochronology
Corals (see also sclerochronology)
Coral "rings" share similar evidence of growth to that of tree rings and thus can be dated in similar ways. A primary difference lies in the environments in which corals grow and the conditions they respond to, which include water temperature, freshwater influx, changes in pH, and wave disturbances. From there, specialized equipment, such as the Advanced Very High Resolution Radiometer (AVHRR) instrument, can be used to derive the sea surface temperature and water salinity of the past few centuries. The δ18O of coralline red algae provides a useful proxy of the combined sea surface temperature and sea surface salinity at high latitudes and in the tropics, where many traditional techniques are limited.
Landscapes and landforms
Within climatic geomorphology, one approach is to study relict landforms to infer ancient climates. Because it is often concerned with past climates, climatic geomorphology is sometimes considered a theme of historical geology. Evidence of these past climates can be found in the landforms they leave behind, such as glacial landforms (moraines, striations), desert features (dunes, desert pavements), and coastal landforms (marine terraces, beach ridges). Climatic geomorphology is of limited use for studying recent (Quaternary, Holocene) large climate changes, since they are seldom discernible in the geomorphological record.
Timing of proxies
The field of geochronology has scientists working on determining how old certain proxies are. For recent proxy archives such as tree rings and corals, the individual year rings can be counted and an exact year determined. Radiometric dating uses the properties of radioactive elements in proxies: in older material, more of the radioactive material will have decayed, so the proportion of different elements will differ from that in newer proxies. One example of radiometric dating is radiocarbon dating. In the air, cosmic rays constantly convert nitrogen into a specific radioactive carbon isotope, 14C. When plants use this carbon to grow and then die, the isotope is no longer replenished and starts to decay. The proportion of 'normal' carbon to carbon-14 gives information on how long the plant material has been out of contact with the atmosphere.
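The age calculation itself follows directly from exponential decay: with half-life T½, a remaining fraction f of the original 14C implies an age t = −(T½/ln 2)·ln f. A minimal sketch using the conventional 5,730-year half-life (real dates additionally require calibration against tree rings and other records):

```python
import math

# Radiocarbon age from the remaining 14C fraction:
#     t = -(T_half / ln 2) * ln(N / N0)
# using the conventional 5,730-year half-life; gives an uncalibrated age.

T_HALF_C14_YEARS = 5730.0

def radiocarbon_age_years(fraction_remaining: float) -> float:
    return -(T_HALF_C14_YEARS / math.log(2)) * math.log(fraction_remaining)

print(f"{radiocarbon_age_years(0.5):.0f}")   # 5730  (one half-life)
print(f"{radiocarbon_age_years(0.25):.0f}")  # 11460 (two half-lives)
```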
Notable climate events in Earth history
Knowledge of precise climatic events decreases as the record goes back in time, but some notable climate events are known:
Faint young Sun paradox (start)
Huronian glaciation (~2400 Mya Earth completely covered in ice probably due to Great Oxygenation Event)
Later Neoproterozoic Snowball Earth (~600 Mya, precursor to the Cambrian Explosion)
Andean-Saharan glaciation (~450 Mya)
Carboniferous Rainforest Collapse (~300 Mya)
Permian–Triassic extinction event (251.9 Mya)
Oceanic anoxic events (~120 Mya, 93 Mya, and others)
Cretaceous–Paleogene extinction event (66 Mya)
Paleocene–Eocene Thermal Maximum (Paleocene–Eocene, 55 Mya)
Last Glacial Maximum (~23,000 BCE)
Younger Dryas/Big Freeze (~11,000 BCE)
Holocene climatic optimum (~7000–3000 BCE)
Extreme weather events of 535–536 (535–536 CE)
Medieval Warm Period (900–1300)
Little Ice Age (1300–1800)
Year Without a Summer (1816)
History of the atmosphere
Earliest atmosphere
The first atmosphere would have consisted of gases in the solar nebula, primarily hydrogen. In addition, there would probably have been simple hydrides such as those now found in gas giants like Jupiter and Saturn, notably water vapor, methane, and ammonia. As the solar nebula dissipated, the gases would have escaped, partly driven off by the solar wind.
Second atmosphere
The next atmosphere, consisting largely of nitrogen, carbon dioxide, and inert gases, was produced by outgassing from volcanism, supplemented by gases produced during the late heavy bombardment of Earth by huge asteroids. A major part of the carbon dioxide emissions was soon dissolved in water and built up carbonate sediments.
Water-related sediments have been found dating from as early as 3.8 billion years ago. About 3.4 billion years ago, nitrogen was the major part of the then stable "second atmosphere". An influence of life has to be taken into account rather soon in the history of the atmosphere, because hints of early life forms have been dated to as early as 3.5 to 4.3 billion years ago. The fact that such evidence of liquid water and life is not perfectly in line with the 30% lower solar radiance (compared to today) of the early Sun has been described as the "faint young Sun paradox".
The geological record, however, shows a continuously and relatively warm surface during Earth's complete early temperature record, with the exception of one cold glacial phase about 2.4 billion years ago. In the late Archaean eon, an oxygen-containing atmosphere began to develop, apparently produced by photosynthesizing cyanobacteria (see Great Oxygenation Event), which have been found as stromatolite fossils from 2.7 billion years ago. The early basic carbon isotopy (isotope ratio proportions) was very much in line with what is found today, suggesting that the fundamental features of the carbon cycle were established as early as 4 billion years ago.
Third atmosphere
The constant rearrangement of continents by plate tectonics influences the long-term evolution of the atmosphere by transferring carbon dioxide to and from large continental carbonate stores. Free oxygen did not exist in the atmosphere until about 2.4 billion years ago, during the Great Oxygenation Event, and its appearance is indicated by the end of the banded iron formations. Until then, any oxygen produced by photosynthesis was consumed by oxidation of reduced materials, notably iron. Molecules of free oxygen did not start to accumulate in the atmosphere until the rate of production of oxygen began to exceed the availability of reducing materials. That point was a shift from a reducing atmosphere to an oxidizing atmosphere. O2 showed major variations until reaching a steady state of more than 15% by the end of the Precambrian. The following time span was the Phanerozoic eon, during which oxygen-breathing metazoan life forms began to appear.
The amount of oxygen in the atmosphere has fluctuated over the last 600 million years, reaching a peak of 35% during the Carboniferous period, significantly higher than today's 21%. Two main processes govern changes in the atmosphere: plants using carbon dioxide from the atmosphere and releasing oxygen, and the breakdown of pyrite and volcanic eruptions, which release sulfur into the atmosphere; the sulfur oxidizes and hence reduces the amount of oxygen in the atmosphere. However, volcanic eruptions also release carbon dioxide, which plants can convert to oxygen. The exact cause of the variation in the amount of oxygen in the atmosphere is not known. Periods with much oxygen in the atmosphere are associated with the rapid development of animals. Today's atmosphere contains 21% oxygen, which is high enough for the rapid development of animals.
Climate during geological ages
The Huronian glaciation is the first known glaciation in Earth's history and lasted from 2400 to 2100 million years ago.
The Cryogenian glaciation lasted from 720 to 635 million years ago.
The Andean-Saharan glaciation lasted from 450 to 420 million years ago.
The Karoo glaciation lasted from 360 to 260 million years ago.
The Quaternary glaciation is the current glaciation period and began 2.58 million years ago.
In 2020, scientists published a continuous, high-fidelity record of variations in Earth's climate during the past 66 million years and identified four climate states, separated by transitions that include changing greenhouse gas levels and polar ice sheet volumes. They integrated data from various sources. The warmest climate state since the time of the dinosaur extinction, "Hothouse", endured from 56 Mya to 47 Mya and was ~14 °C warmer than average modern temperatures.
Precambrian climate
The Precambrian spans the time between Earth's formation 4.6 billion years (Ga) ago and 542 million years ago. It can be split into two eons, the Archean and the Proterozoic, which can be further subdivided into eras. Reconstructing the Precambrian climate is difficult for various reasons, including the low number of reliable indicators and a generally poorly preserved, sparse fossil record (especially when compared to the Phanerozoic eon). Despite these issues, there is evidence for a number of major climate events throughout the Precambrian: The Great Oxygenation Event (GOE), which started around 2.3 Ga ago (the beginning of the Proterozoic), is indicated by biomarkers that demonstrate the appearance of photosynthetic organisms. Due to the resulting high levels of oxygen in the atmosphere, CH4 levels fell rapidly, cooling the atmosphere and causing the Huronian glaciation. For about 1 Ga after the glaciation (2–0.8 Ga ago), the Earth likely experienced warmer temperatures, indicated by microfossils of photosynthetic eukaryotes and oxygen levels between 5 and 18% of the Earth's current level. At the end of the Proterozoic, there is evidence of global glaciation events of varying severity, causing a "Snowball Earth". The Snowball Earth hypothesis is supported by several indicators, such as glacial deposits, significant continental erosion called the Great Unconformity, and sedimentary rocks called cap carbonates that form after a deglaciation episode.
Phanerozoic climate
Major climate drivers in the preindustrial ages were variations of the Sun, volcanic ash and exhalations, relative movements of the Earth with respect to the Sun, and tectonically induced effects on major ocean currents, watersheds, and ocean oscillations. In the early Phanerozoic, increased atmospheric carbon dioxide concentrations have been linked to driving or amplifying increased global temperatures. Royer et al. (2004) found a climate sensitivity for the rest of the Phanerozoic calculated to be similar to the modern range of values.
The difference in global mean temperatures between a fully glacial Earth and an ice free Earth is estimated at 10 °C, though far larger changes would be observed at high latitudes and smaller ones at low latitudes. One requirement for the development of large scale ice sheets seems to be the arrangement of continental land masses at or near the poles. The constant rearrangement of continents by plate tectonics can also shape long-term climate evolution. However, the presence or absence of land masses at the poles is not sufficient to guarantee glaciations or exclude polar ice caps. Evidence exists of past warm periods in Earth's climate when polar land masses similar to Antarctica were home to deciduous forests rather than ice sheets.
The relatively warm local minimum between Jurassic and Cretaceous goes along with an increase of subduction and mid-ocean ridge volcanism due to the breakup of the Pangea supercontinent.
Superimposed on the long-term evolution between hot and cold climates have been many short-term fluctuations in climate similar to, and sometimes more severe than, the varying glacial and interglacial states of the present ice age. Some of the most severe fluctuations, such as the Paleocene-Eocene Thermal Maximum, may be related to rapid climate changes due to sudden collapses of natural methane clathrate reservoirs in the oceans.
A similar single event of severe climate change following a meteorite impact has been proposed as the cause of the Cretaceous–Paleogene extinction event. Other major thresholds are the Permian–Triassic and Ordovician–Silurian extinction events, for which various causes have been suggested.
Quaternary climate
The Quaternary geological period includes the current climate. There has been a cycle of ice ages for the past 2.2–2.1 million years (starting before the Quaternary in the late Neogene Period).
Glacial cycles during this period show a strong 120,000-year periodicity and a striking asymmetry: ice ages deepen by progressive steps, but the recovery to interglacial conditions occurs in one big step. This asymmetry is believed to result from complex interactions of feedback mechanisms. Temperature reconstructions from various sources, averaged together, also record the change over the past 12,000 years.
Climate forcings
Climate forcing is the difference between the radiant energy (sunlight) received by the Earth and the outgoing longwave radiation back to space. Such radiative forcing is quantified at the tropopause, in units of watts per square meter of the Earth's surface. Depending on the radiative balance of incoming and outgoing energy, the Earth either warms up or cools down. Changes in Earth's radiative balance originate from changes in solar insolation and in the concentrations of greenhouse gases and aerosols. Climate change may be due to internal processes in Earth's spheres and/or to external forcings.
One example of how this can be applied in climatology is analyzing how varying concentrations of CO2 affect the overall climate. Researchers use various proxies to estimate past greenhouse gas concentrations, compare them with present-day values, and from that assess the role of CO2 in the progression of climate change throughout Earth's history.
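As a minimal illustrative sketch of one such comparison: the logarithmic formula and its 5.35 W/m2 coefficient are the widely used simplified expression of Myhre et al. (1998), and the 280 ppm preindustrial baseline is a conventional choice; neither is stated in this article.

    import math

    def co2_radiative_forcing(c_ppm, c0_ppm=280.0):
        """Approximate radiative forcing (W/m^2) of CO2 relative to a baseline,
        using the simplified logarithmic expression of Myhre et al. (1998)."""
        return 5.35 * math.log(c_ppm / c0_ppm)

    # A proxy-derived past concentration can be compared with today's ~420 ppm;
    # doubling CO2 from the preindustrial level gives ~3.71 W/m^2:
    print(round(co2_radiative_forcing(560.0), 2))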
Internal processes and forcings
The Earth's climate system involves the atmosphere, biosphere, cryosphere, hydrosphere, and lithosphere, and the sum of the processes in these spheres is what affects the climate. Greenhouse gases act as an internal forcing of the climate system. Particular interest in climate science and paleoclimatology focuses on the study of Earth's climate sensitivity in response to the sum of forcings. Analyzing the sum of these forcings allows scientists to make broad estimates about the Earth's climate system, including evidence for long-term climate variability (eccentricity, obliquity, precession), feedback mechanisms (such as the ice-albedo effect), and anthropogenic influence.
Examples:
Thermohaline circulation (Hydrosphere)
Life (Biosphere)
External forcings
The Milankovitch cycles determine the Earth's distance from, and orientation relative to, the Sun. Solar insolation is the total amount of solar radiation received by Earth (see the sketch after this list).
Volcanic eruptions are considered an internal forcing.
Human changes to the composition of the atmosphere or to land use.
Human activities causing anthropogenic greenhouse gas emissions leading to global warming and associated climate changes.
Large asteroid impacts, which can have cataclysmic effects on Earth's climate, are considered external forcings.
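As a back-of-the-envelope sketch of how solar insolation enters Earth's energy budget (the solar constant and albedo below are rough, commonly cited values, not figures from this article):

    SOLAR_CONSTANT = 1361.0  # W/m^2, total solar irradiance at Earth's distance
    ALBEDO = 0.3             # Earth's planetary (Bond) albedo, approximately

    # A sphere intercepts sunlight over a disc (pi*R^2) but has a surface of
    # 4*pi*R^2, so the global mean insolation is a quarter of the solar constant:
    mean_insolation = SOLAR_CONSTANT / 4.0       # ~340 W/m^2
    absorbed = mean_insolation * (1.0 - ALBEDO)  # ~238 W/m^2 absorbed on average
    print(round(mean_insolation), round(absorbed))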
Mechanisms
On timescales of millions of years, the uplift of mountain ranges, the subsequent weathering of rocks and soils, and the subduction of tectonic plates are an important part of the carbon cycle. Weathering sequesters CO2 through the reaction of minerals with chemicals (especially silicate weathering with carbonic acid), thereby removing CO2 from the atmosphere and reducing the radiative forcing. The opposite effect is volcanism, responsible for the natural greenhouse effect by emitting CO2 into the atmosphere, thus affecting glaciation (Ice Age) cycles. Jim Hansen suggested that humans emit CO2 10,000 times faster than natural processes have done in the past.
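The net effect of silicate weathering followed by carbonate burial is often summarized by the idealized Urey reaction, a standard textbook simplification not stated explicitly in this article:

    CaSiO3 + CO2 → CaCO3 + SiO2

Each unit of calcium silicate weathered in this way ultimately locks one molecule of CO2 into carbonate rock, which subduction and volcanism can later return to the atmosphere.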
Ice sheet dynamics and continental positions (and linked vegetation changes) have been important factors in the long-term evolution of the Earth's climate. There is also a close correlation between CO2 and temperature, with CO2 exerting a strong control over global temperatures throughout Earth's history.
| Physical sciences | Paleoclimate | Earth science |
51054 | https://en.wikipedia.org/wiki/American%20black%20bear | American black bear | The American black bear (Ursus americanus), or simply black bear, is a species of medium-sized bear endemic to North America. It is the continent's smallest and most widely distributed bear species. It is an omnivore, with a diet varying greatly depending on season and location. It typically lives in largely forested areas but will leave forests in search of food and is sometimes attracted to human communities due to the immediate availability of food.
The International Union for Conservation of Nature (IUCN) lists the American black bear as a least-concern species because of its widespread distribution and a large population, estimated to be twice that of all other bear species combined. Along with the brown bear (Ursus arctos), it is one of only two modern bear species not considered by the IUCN to be globally threatened with extinction.
Taxonomy and evolution
The American black bear is not closely related to the brown bear or polar bear, though all three species are found in North America; genetic studies reveal that they split from a common ancestor 5.05 million years ago (mya). American and Asian black bears are considered sister taxa and are more closely related to each other than to the other modern species of bears. According to recent studies, the sun bear is also a relatively recent split from this lineage.
A small primitive bear called Ursus abstrusus is the oldest known North American fossil member of the genus Ursus, dated to 4.95 mya. This suggests that U. abstrusus may be the direct ancestor of the American black bear, which evolved in North America. Although Wolverton and Lyman still consider U. vitabilis an "apparent precursor to modern black bears", it has also been placed within U. americanus.
The ancestors of American black bears and Asian black bears diverged from sun bears 4.58 mya. The American black bear then split from the Asian black bear 4.08 mya. The earliest American black bear fossils, which were located in Port Kennedy, Pennsylvania, greatly resemble the Asian species, though later specimens grew to sizes comparable to grizzly bears. From the Holocene to the present, American black bears seem to have shrunk in size, but this has been disputed because of problems with dating these fossil specimens.
The American black bear lived during the same period as the giant and lesser short-faced bears (Arctodus simus and A. pristinus, respectively) and the Florida spectacled bear (Tremarctos floridanus). These tremarctine bears evolved from bears that had emigrated from Asia to the Americas 7–8 mya. The giant and lesser short-faced bears are thought to have been heavily carnivorous and the Florida spectacled bear more herbivorous, while the American black bears remained arboreal omnivores, like their Asian ancestors.
The American black bear's generalist behavior allowed it to exploit a wider variety of foods and has been given as a reason why, of these three genera, it alone survived climate and vegetative changes through the last Ice Age while the other, more specialized North American predators became extinct. However, both Arctodus and Tremarctos had survived several other, previous ice ages. After these prehistoric ursids became extinct during the last glacial period 10,000 years ago, American black bears were probably the only bear present in much of North America until the migration of brown bears to the rest of the continent.
Hybrids
American black bears are reproductively compatible with several other bear species and occasionally produce hybrid offspring. According to Jack Hanna's Monkeys on the Interstate, a bear captured in Sanford, Florida, was thought to have been the offspring of an escaped female Asian black bear and a male American black bear. In 1859, an American black bear and a Eurasian brown bear were bred together in the London Zoological Gardens, but the three cubs that were born died before they reached maturity. In The Variation of Animals and Plants Under Domestication, Charles Darwin noted:
A bear shot in autumn 1986 in Michigan was thought by some to be an American black bear/grizzly bear hybrid, because of its unusually large size and its proportionately larger brain case and skull. DNA testing was unable to determine whether it was a large American black bear or a grizzly bear.
Subspecies
Sixteen subspecies are traditionally recognized; however, a recent genetic study does not support designating some of these, such as the Florida black bear, as distinct subspecies. Listed alphabetically according to subspecific name:
Distribution and population
Historically, American black bears occupied the majority of North America's forested regions. Today, they are primarily limited to sparsely settled, forested areas. American black bears currently inhabit much of their original Canadian range, though they seldom occur in the southern farmlands of Alberta, Saskatchewan and Manitoba; they have been extirpated on Prince Edward Island since 1937. Surveys taken in the mid-1990s found the Canadian black bear population to be between 396,000 and 476,000 in seven provinces; this estimate excludes populations in New Brunswick, the Northwest Territories, Nova Scotia and Saskatchewan. All provinces indicated stable populations of American black bears over the last decade.
The current range in the United States is constant throughout most of the Northeast and within the Appalachian Mountains almost continuously from Maine to northern Georgia, the northern Midwest, the Rocky Mountain region, the West Coast and Alaska. However, it becomes increasingly fragmented or absent in other regions. Despite this, American black bears in those areas seem to have expanded their range in recent decades, such as with recent sightings in Ohio, Illinois, southern Indiana, and western Nebraska. Sightings of itinerant black bears in the Driftless Area of southeastern Minnesota, northeastern Iowa, and southwestern Wisconsin are common. In 2019, biologists with the Iowa Department of Natural Resources confirmed documentation of an American black bear living year-round in woodlands near the town of Decorah in northeastern Iowa, believed to be the first instance of a resident black bear in Iowa since the 1880s.
Surveys taken from 35 states in the early 1990s indicated that American black bear populations were either stable or increasing, except in Idaho and New Mexico. The population in the United States was estimated to range between 339,000 and 465,000 in 2011, though this estimate does not include data from Alaska, Idaho, South Dakota, Texas or Wyoming, whose populations were not recorded in the survey. In California there were an estimated 25,000-35,000 black bears in 2017, making it the largest population of the species in any of the 48 contiguous United States. In 2020 there were about 1,500 bears in Great Smoky Mountains National Park, where the population density is about two per square mile. In western North Carolina, the black bear population has dramatically increased in recent decades, from about 3,000 in the early 2000s to over 8,000 in the 2020s.
As of 1993, known black bear populations in Mexico existed in four areas, though knowledge on the distribution of populations outside those areas has not been updated since 1959. Mexico is the only country where the species is classified as "endangered".
Habitat
Throughout their range, habitats preferred by American black bears have a few shared characteristics. They are often found in areas with relatively inaccessible terrain, thick understory vegetation and large quantities of edible material (especially masts). The adaptation to woodlands and thick vegetation in this species may have originally been because the bear evolved alongside larger, more aggressive bear species, such as the extinct giant short-faced bear and the grizzly bear, that monopolized more open habitats and the historic presence of larger predators, such as Smilodon and the American lion, that could have preyed on black bears. Although found in the largest numbers in wild, undisturbed areas and rural regions, American black bears can adapt to surviving in some numbers in peri-urban regions, as long as they contain easily accessible foods and some vegetative coverage.
In most of the contiguous United States, American black bears today are usually found in heavily vegetated mountainous areas, from in elevation. For American black bears living in the American Southwest and Mexico, habitat usually consists of stands of chaparral and pinyon juniper woods. In this region, bears occasionally move to more open areas to feed on prickly pear cactus. At least two distinct, prime habitat types are inhabited in the Southeastern United States. American black bears in the southern Appalachian Mountains survive in predominantly oak-hickory and mixed mesophytic forests. In the coastal areas of the southeast (such as Florida, the Carolinas and Louisiana), bears inhabit a mixture of flatwoods, bays and swampy hardwood sites.
In the northeastern part of the range (the United States and Canada), prime habitat consists of a forest canopy of hardwoods such as beech, maple, birch and coniferous species. Corn crops and oak-hickory mast are also common sources of food in some sections of the northeast; small, thick swampy areas provide excellent refuge cover largely in stands of white cedar. Along the Pacific coast, redwood, Sitka spruce and hemlocks predominate as overstory cover. Within these northern forest types are early successional areas important for American black bears, such as fields of brush, wet and dry meadows, high tidelands, riparian areas and a variety of mast-producing hardwood species. The spruce-fir forest dominates much of the range of the American black bear in the Rockies. Important non-forested areas here are wet meadows, riparian areas, avalanche chutes, roadsides, burns, sidehill parks and subalpine ridgetops.
In areas where human development is relatively low, such as stretches of Canada and Alaska, American black bears tend to be found more regularly in lowland regions. In parts of northeastern Canada, especially Labrador, American black bears have adapted exclusively to semi-open areas that are more typical habitat in North America for brown bears (likely due to the absence there of brown and polar bears, as well as other large carnivore species).
Description
Build
The skulls of American black bears are broad, with narrow muzzles and large jaw hinges. In Virginia, the length of adult bear skulls was found to average . Across its range, the greatest skull length for the species has been reportedly measured from . Females tend to have slenderer and more pointed faces than males.
Their claws are typically black or grayish-brown. The claws are short and rounded, being thick at the base and tapering to a point. Claws from both hind and front legs are almost identical in length, though the foreclaws tend to be more sharply curved. The paws of the species are relatively large, with a rear foot length of , which is proportionately larger than other medium-sized bear species, but much smaller than the paws of large adult brown, and especially polar bears. The soles of the feet are black or brownish and are naked, leathery and deeply wrinkled.
The hind legs are relatively longer than those of Asian black bears. The typically small tail is long. The ears are small and rounded and are set well back on the head.
American black bears are highly dexterous, being capable of opening screw-top jars and manipulating door latches. They also have great physical strength; a bear weighing was observed turning flat rocks weighing by flipping them over with a single foreleg. They move in a rhythmic, sure-footed way and can run at speeds of . American black bears have good eyesight and have been proven experimentally to be able to learn visual color discrimination tasks faster than chimpanzees and just as fast as domestic dogs. They are also capable of rapidly learning to distinguish different shapes such as small triangles, circles and squares.
Size
Adults typically range from in head-and-body length, and in shoulder height. Although they are the smallest bear species in North America, large males exceed the size of other bear species, except the brown bear and the polar bear.
Weight tends to vary according to age, sex, health and season. Seasonal variation in weight is very pronounced: in autumn, their pre-den weight tends to be 30% higher than in spring, when black bears emerge from their dens. Bears on the East Coast tend to be heavier on average than those on the West Coast, although they typically follow Bergmann's rule, and bears from the northwest are often slightly heavier than the bears from the southeast. Adult males typically weigh between , while females weigh 33% less at .
In California, studies indicate that the average mass is in adult males and in adult females. Adults in Yukon Flats National Wildlife Refuge in east-central Alaska were found to average in males and in females, whereas on Kuiu Island in southeastern Alaska (where nutritious salmon are readily available) adults averaged . In Great Smoky Mountains National Park, adult males averaged and adult females averaged per one study.
In one of the largest studies on regional body mass, bears in British Columbia averaged in 89 females and in 243 males. In Yellowstone National Park, a study found that adult males averaged and adult females averaged . Black bears in north-central Minnesota averaged in 163 females and in 77 males. In New York, the males average and females . It was found in Nevada and the Lake Tahoe region that bears closer to urban regions were significantly heavier than their arid-country dwelling counterparts, with males near urban areas averaging against wild-land males which averaged whereas peri-urban females averaged against the average of in wild-land ones. In Waterton Lakes National Park, Alberta, adults averaged .
The biggest wild American black bear ever recorded was a male from New Brunswick, shot in November 1972, that weighed after it had been dressed, meaning it weighed an estimated in life and measured long. Another notably outsized wild American black bear, weighing in at , was the cattle-killer shot in December 1921 on the Moqui Reservation in Arizona. The record-sized American black bear from New Jersey was shot in Morris County in December 2011 and scaled . The Pennsylvania state record weighed and was shot in November 2010 in Pike County. The North American Bear Center, located in Ely, Minnesota, is home to the world's largest captive male and female American black bears. Ted, the male, weighed in the fall of 2006. Honey, the female, weighed in the fall of 2007.
Pelage
The fur is soft, with dense underfur and long, coarse, thick guard hairs. The fur is not as shaggy or coarse as that of brown bears. American black bear skins can be distinguished from those of Asian black bears by the lack of a white blaze on the chest and hairier footpads.
Despite their name, black bears show a great deal of color variation. Individual coat colors can range from white, blonde, cinnamon, light brown or dark chocolate brown to jet black, with many intermediate variations existing. Silvery-gray American black bears with a blue luster (this is found mostly on the flanks) occur along a portion of coastal Alaska and British Columbia. White to cream-colored American black bears occur in the coastal islands and the adjacent mainland of southwestern British Columbia. Albino individuals have also been recorded. Black coats tend to predominate in humid areas, such as Maine, New England, New York, Tennessee, Michigan and western Washington. Approximately 70% of all American black bears are black, though only 50% in the Rocky Mountains are black. Many in northwestern North America are cinnamon, blonde or light brown in color and thus may sometimes be mistaken for grizzly bears. Grizzly (and other types of brown) bears can be distinguished by their shoulder hump, larger size and broader, more concave skull.
In his book The Great Bear Almanac, Gary Brown summarized the predominance of black or brown/blonde specimens by location:
Behavior and life history
American black bears have eyesight and hearing comparable to that of humans. Their keenest sense is smell, which is about seven times more sensitive than a domestic dog's. They are excellent and strong swimmers, swimming for pleasure and to feed (largely on fish). They regularly climb trees to feed, escape enemies and hibernate. Four of the eight modern bear species are habitually arboreal (the most arboreal species, the American and Asian black bears and the sun bear, being fairly closely related). Their arboreal abilities tend to decline with age. They may be active at any time of the day or night, although they mainly forage by night. Bears living near human habitations tend to be more extensively nocturnal, while those living near brown bears tend to be more often diurnal.
American black bears tend to be territorial and non-gregarious in nature. However, at abundant food sources (e.g. spawning salmon or garbage dumps), they may congregate and dominance hierarchies form, with the largest, most powerful males dominating the most fruitful feeding spots. They mark their territories by rubbing their bodies against trees and clawing at the bark. Annual ranges held by mature male bears tend to be very large, though there is some variation. On Long Island off the coast of Washington, ranges average , whereas on the Ungava Peninsula in Canada ranges can average up to , with some male bears traveling as far as at times of food shortages.
Bears may communicate with various vocal and non-vocal sounds. Tongue-clicking and grunting are the most common sounds and are made in cordial situations to conspecifics, offspring and occasionally humans. When at ease, they produce a loud rumbling hum. During times of fear or nervousness, bears may moan, huff or blow air. Warning sounds include jaw-clicking and lip-popping. In aggressive interactions, black bears produce guttural pulsing calls that may sound like growling. Cubs squeal, bawl or scream when anxious and make a motor-like humming sound when comfortable or nursing. American black bears often mark trees using their teeth and claws as a form of communication with other bears, a behavior common to many species of bears.
Reproduction and development
Sows usually produce their first litter at the age of 3 to 5 years, with those living in more developed areas tending to get pregnant at younger ages. The breeding period usually occurs in the June–July period, though it can extend to August in the species' northern range. The breeding period lasts for two to three months. Both sexes are promiscuous. Males try to mate with several females, but large, dominant ones may violently claim a female if another mature male comes near. Copulation can last 20–30 minutes. Sows tend to be short-tempered with their mates after copulating.
The fertilized eggs undergo delayed development and do not implant in the female's womb until November. The gestation period lasts 235 days, and litters are usually born in late January to early February. Litter size is between one and six cubs, typically two or three. At birth, cubs weigh and measure in length. They are born with fine, gray, down-like hair and their hind quarters are underdeveloped. They typically open their eyes after 28–40 days and begin walking after 5 weeks. Cubs are dependent on their mother's milk for 30 weeks and will reach independence at 16–18 months. At 6 weeks, they attain , by 8 weeks they reach and by 6 months they weigh . They reach sexual maturity at 3 years and attain their full growth at 5 years.
Longevity and mortality
The average lifespan in the wild is 18 years, though it is quite possible for wild individuals to survive for more than 23 years. The record age of a wild individual was 39 years, while that in captivity was 44 years. The average annual survival rate is variable, ranging from 86% in Florida to 73% in Virginia and North Carolina. In Minnesota, 99% of wintering adult bears were able to survive the hibernation cycle in one study. Remarkably, a study of American black bears in Nevada found that the amount of annual mortality of a population of bears in wilderness areas was 0%, whereas in developed areas in the state this figure rose to 83%. Survival in subadults is generally less assured. In Alaska, only 14–17% of subadult males and 30–48% of subadult females were found in a study to survive to adulthood. Across the range, the estimated number of cubs who survive past their first year is 60%.
With the exception of the rare confrontation with an adult brown bear or a gray wolf pack, adult black bears are not usually subject to natural predation. However, as evidenced by scats with fur inside of them and the recently discovered carcass of an adult sow with puncture marks in the skull, black bears may occasionally fall prey to jaguars in the southern parts of their range. In such scenarios, the big cat would have the advantage if it ambushed the bear, killing it with a crushing bite to the back of the skull. Cubs tend to be more vulnerable to predation than adults, with known predators including bobcats, coyotes, cougars, gray wolves, brown bears and other bears of their own species. Many of these will stealthily snatch small cubs right from under the sleeping mother. There is record of a golden eagle snatching a yearling cub. Once out of hibernation, mother bears may be able to fight off most potential predators. Even cougars will be displaced by an angry mother bear if they are discovered stalking the cubs. Flooding of dens after birth may also occasionally kill newborn cubs. However, in current times, bear fatalities are mainly attributable to human activities. Seasonally, thousands of black bears are hunted legally across North America to control their numbers, while some are illegally poached or trapped unregulated. Auto collisions also may claim many black bears annually.
Hibernation
American black bears were once not considered true or "deep" hibernators, but because of discoveries about the metabolic changes that allow black bears to remain dormant for months without eating, drinking, urinating or defecating, most biologists have redefined mammalian hibernation as "specialized, seasonal reduction in metabolism concurrent with scarce food and cold weather". American black bears are now considered highly efficient hibernators. The physiology of American black bears in the wild is closely related to that of bears in captivity. Understanding the physiology of bears in the wild is vital to the bear's success in captivity.
The bears enter their dens in October and November, although in the southernmost areas of their range (i.e. Florida, Mexico, the southeastern United States), only pregnant females and mothers with yearling cubs will enter hibernation. Prior to that time, they can put on up to of body fat to get them through the several months during which they fast. Hibernation typically lasts 3–8 months, depending on regional climate.
Hibernating bears spend their time in hollowed-out dens in tree cavities, under logs or rocks, in banks, caves, or culverts, and in shallow depressions. Although naturally-made dens are occasionally used, most dens are dug out by the bear. During their time in hibernation, an American black bear's heart rate drops from 40 to 50 beats per minute to 8 beats per minute, and the metabolic rate can drop to a quarter of the bear's (nonhibernating) basal metabolic rate. These reductions in metabolic rate and heart rate do not appear to decrease the bear's ability to heal injuries during hibernation. Their circadian rhythm stays intact during hibernation. This allows the bear to sense the changes in the day based on the ambient temperature caused by the sun's position in the sky. It has also been shown that ambient light exposure and low disturbance levels (that is to say, wild bears in ambient light conditions) directly correlate with their activity levels. The bear keeping track of the changing days allows it to awaken from hibernation at the appropriate time of year to conserve as much energy as possible.
The hibernating bear does not display the same rate of muscle and bone atrophy relative to other nonhibernatory animals that are subject to long periods of inactivity due to ailment or old age. A hibernating bear only loses approximately half the muscular strength compared to that of a well-nourished, inactive human. The bear's bone mass does not change in geometry or mineral composition during hibernation, which implies that the bear's conservation of bone mass during hibernation is caused by a biological mechanism. During hibernation American black bears retain all excretory waste, leading to the development of a hardened mass of fecal material in the colon known as a fecal plug. Leptin is released into the bear's systems to suppress appetite. The retention of waste during hibernation (specifically in minerals such as calcium) may play a role in the bear's resistance to atrophy.
Unlike other mammalian hibernators, their body temperature does not drop significantly (staying around ), and they remain somewhat alert and active. If the winter is mild enough, they may wake up and forage for food. Females also give birth in February and nurture their cubs until the snow melts. During winter, American black bears consume 25–40% of their body weight. The footpads peel off while they sleep, making room for new tissue.
Many of the physiological changes an American black bear exhibits during hibernation are retained slightly post-hibernation. Upon exiting hibernation, bears retain a reduced heart rate and basal metabolic rate. The metabolic rate of a hibernating bear will remain at a reduced level for up to 21 days after hibernation. After emerging from their winter dens in spring, they wander their home ranges for two weeks so that their metabolism accustoms itself to the activity. In mountainous areas, they seek southerly slopes at lower elevations for forage and move to northerly and easterly slopes at higher elevations as summer progresses.
The time that American black bears emerge from hibernation varies. Factors affecting this include temperature, flooding, and hunger. In southern areas, they may wake up in midwinter. Further north, they may not be seen until late March, April, or even early May. Altitude also has an effect. Bears at lower altitudes tend to emerge earlier. Mature males tend to come out earliest, followed by immature males and females, and lastly mothers with cubs. Mothers with yearling cubs are seen before those with newborns.
Dietary habits
Generally, American black bears are largely crepuscular in foraging activity, though they may actively feed at any time. Up to 85% of their diet consists of vegetation, though they tend to dig less than brown bears, eating far fewer roots, bulbs, corms and tubers than the latter species. When initially emerging from hibernation, they will seek to feed on carrion from winter-killed animals and newborn ungulates. As the spring temperature warms, American black bears seek new shoots of many plant species, especially new grasses, wetland plants and forbs. Young shoots and buds from trees and shrubs during the spring period are important to bears emerging from hibernation, as they assist in rebuilding muscle and strengthening the skeleton and are often the only digestible foods available at that time. During summer, the diet largely comprises fruits, especially berries and soft masts such as buds and drupes.
During the autumn hyperphagia, feeding becomes virtually the full-time task. Hard masts become the most important part of the diet in autumn and may even partially dictate the species' distribution. Favored masts such as hazelnuts, oak acorns and whitebark pine nuts may be consumed by the hundreds each day by a single bear during the fall. During the fall period, bears may also habitually raid the nut caches of tree squirrels. Also extremely important in fall are berries such as huckleberries and buffalo berries. Bears living in areas near human settlements or around a considerable influx of recreational human activity often come to rely on foods inadvertently provided by humans, especially during summertime. These include refuse, birdseed, agricultural products and honey from apiaries.
The majority of the animal diet consists of insects, such as bees, yellow jackets, ants, beetles and their larvae. American black bears are also fond of honey and will gnaw through trees if hives are too deeply set into the trunks for them to reach it with their paws. Once the hive is breached, the bears will scrape the honeycombs together with their paws and eat them, regardless of stings from the bees. Bears that live in northern coastal regions (especially the Pacific Coast) will fish for salmon during the night, as their black fur is easily spotted by salmon in the daytime. Other bears, such as the white-furred Kermode bears of the islands of western Canada, have a 30% greater success rate in catching salmon than their black-furred counterparts. Other fish, including suckers, trout and catfish, are readily caught whenever possible. Although American black bears do not often engage in active predation of other large animals for much of the year, the species will regularly prey on mule and white-tailed deer fawns in spring, given the opportunity. Bears may catch the scent of hiding fawns when foraging for something else and then sniff them out and pounce on them. As the fawns reach 10 days of age, they can outmaneuver the bears, and their scent is soon ignored until the next year. American black bears have also been recorded similarly preying on elk calves in Idaho and moose calves in Alaska.
Predation on adult deer is rare, but it has been recorded. They may even hunt prey up to the size of adult female moose, which are considerably larger than themselves, by ambushing them. There is at least one record of a male American black bear killing two bull elk over the course of six days by chasing them into deep snow banks, which impeded their movements. In Labrador, American black bears are exceptionally carnivorous, living largely off caribou, usually young, injured, old, sickly or dead specimens, and rodents such as voles. This is believed to be due to a paucity of edible plant life in this sub-Arctic region and a local lack of competing large carnivores (including other bear species). Like brown bears, American black bears try to use surprise to ambush their prey and target the weak, injured, sickly or dying animals in the herds. Once a deer fawn is captured, it is frequently torn apart alive while feeding. If it is able to capture a mother deer in spring, the bear frequently begins feeding on the udder of lactating females, but generally prefers meat from the viscera. Bears often drag their prey to cover, preferring to feed in seclusion. The skin of large prey is stripped back and turned inside out, with the skeleton usually left largely intact. Unlike gray wolves and coyotes, bears rarely scatter the remains of their kills. Vegetation around the carcass is usually matted down, and their droppings are frequently found nearby. Bears may attempt to cover remains of larger carcasses, though they do not do so with the same frequency as cougars and grizzly bears. They will readily consume eggs and nestlings of various birds and can easily access many tree nests, even the huge nests of bald eagles. Bears have been reported stealing deer and other game from human hunters.
Interspecific predatory relationships
Over much of their range, American black bears are assured scavengers that can intimidate, using their large size and considerable strength, and if necessary dominate other predators in confrontations over carcasses. However, on occasions where they encounter Kodiak or grizzly bears, the larger two brown subspecies dominate them. American black bears tend to escape competition from brown bears by being more active in the daytime and living in more densely forested areas. Violent interactions, resulting in the deaths of American black bears, have been recorded in Yellowstone National Park.
American black bears do occasionally compete with cougars over carcasses. Like brown bears, they will sometimes steal kills from cougars. One study found that both bear species visited 24% of cougar kills in Yellowstone and Glacier National Parks, usurping 10% of the carcasses. Another study found that American black bears visited 48% of cougar kills in summer in Colorado and 77% of kills in California. As a result, the cats spend more time killing and less time feeding on each kill.
American black bear interactions with gray wolves are much rarer than with brown bears, due to differences in habitat preferences. The majority of American black bear encounters with wolves occur in the species' northern range, with no interactions being recorded in Mexico. Despite the American black bear being more powerful on a one-to-one basis, packs of wolves have been recorded to kill black bears on numerous occasions without eating them. Unlike brown bears, American black bears frequently lose against wolves in disputes over kills. Wolf packs typically kill American black bears when the larger animals are in their hibernation cycle.
There is at least one record of an American black bear killing a wolverine (Gulo gulo) in a dispute over food in Yellowstone National Park. Anecdotal cases of alligator predation on American black bears have been reported, though such cases may involve assaults on cubs. At least one jaguar (Panthera onca) has been recorded to have attacked and eaten a black bear: "El Jefe", the jaguar famous for being the first jaguar seen in the United States in over a century.
Relationships with humans
In folklore, mythology and culture
Indigenous
Black bears feature prominently in the stories of some of North America's indigenous peoples. One tale tells of how the black bear was a creation of the Great Spirit, while the grizzly bear was created by the Evil Spirit. In the mythology of the Haida, Tlingit and Tsimshian people of the northwest coast, mankind first learned to respect bears when a girl married the son of a black bear chieftain. In Kwakwa̱ka̱ʼwakw mythology, black and brown bears became enemies when Grizzly Bear Woman killed Black Bear Woman for being lazy. Black Bear Woman's children, in turn, killed Grizzly Bear Woman's children. The Navajo believed that the Big Black Bear was chief among the bears of the four directions surrounding Sun's house and would pray to it in order to be granted its protection during raids.
Sleeping Bear Dunes in Michigan is named after a Native American legend, where a female bear and her two cubs swam across Lake Michigan to escape a fire on the Wisconsin shore. The mother bear reached the shore and waited for her cubs, but they did not make it across. Two islands mark where the cubs drowned, while the dune marks the spot where the mother bear waited.
Anglo-American
Morris Michtom, the creator of the teddy bear, was inspired to make the toy when he came across a cartoon of Theodore Roosevelt refusing to shoot a black bear cub tied to a tree. The fictional character Winnie-the-Pooh was named after Winnipeg, a female cub that lived at the London Zoo from 1915 until her death in 1934. A cub, who in the spring of 1950 was caught in the Capitan Gap Fire, was made into the living representative of Smokey Bear, the mascot of the United States Forest Service.
Terrible Ted was a de-toothed and de-clawed bear who was forced to perform as a pro wrestler and whose "career" lasted from the 1950s to the 1970s. The American black bear is the mascot of the University of Maine and Baylor University, where the university houses two live bears on campus.
Attacks on humans
Although an adult bear is quite capable of killing a human, American black bears typically avoid confronting humans. Unlike grizzly bears, which became a subject of fearsome legend among the European settlers of North America, black bears were rarely considered overly dangerous, even though they lived in areas where the pioneers had settled.
American black bears rarely attack when confronted by humans and usually only make mock charges, emit blowing noises and swat the ground with their forepaws. The number of attacks on humans is higher than those by brown bears in North America, but this is largely because black bears considerably outnumber brown bears. Compared to brown bear attacks, aggressive encounters with black bears rarely lead to serious injury. Most attacks tend to be motivated by hunger rather than territoriality and thus victims have a higher probability of surviving by fighting back rather than submitting. Unlike female brown bears, female American black bears are not as protective of their cubs and rarely attack humans in the vicinity of the cubs. However, occasionally such attacks do occur. The worst recorded attack occurred in May 1978, in which a bear killed three teenagers fishing in Algonquin Park in Ontario. Another exceptional attack occurred in August 1997 in Liard River Hot Springs Provincial Park in British Columbia, when an emaciated bear attacked a mother and child, killing the mother and a man who intervened. The bear was shot while mauling a fourth victim.
The majority of attacks happened in national parks, usually near campgrounds, where the bears had habituated to close human proximity and food. Of 1,028 incidents of aggressive acts toward humans, recorded from 1964 to 1976 in the Great Smoky Mountains National Park, 107 resulted in injury and occurred mainly in tourist hot spots where people regularly fed the bears handouts. In almost every case where open garbage dumps that attracted bears were closed and handouts ceased, the number of aggressive encounters dropped. However, in the Liard River Hot Springs case, the bear was apparently dependent on a local garbage dump that had closed and so was starving to death. Attempts to relocate bears are typically unsuccessful, as the bears seem able to return to their home range, even without familiar landscape cues.
Livestock and crop predation
A limitation of food sources in early spring and wild berry and nut crop failures in summer may contribute to bears regularly feeding from human-based food sources. These bears often eat crops, especially during autumn hyperphagia when natural foods are scarce. Favored crops include apples, oats and corn. American black bears can do extensive damage in areas of the northwestern United States by stripping the bark from trees and feeding on the cambium. Livestock depredations occur mostly in spring.
Although they occasionally hunt adult cattle and horses, they seem to prefer smaller prey such as sheep, goats, pigs and young calves. They usually kill by biting the neck and shoulders, though they may break the neck or back of the prey with blows with the paws. Evidence of a bear attack includes claw marks and is often found on the neck, back and shoulders of larger animals. Surplus killing of sheep and goats is common. American black bears have been known to frighten livestock herds over cliffs, causing injuries and death to many animals; whether this is intentional is not known. Occasionally bears kill pets, especially domestic dogs, which are most prone to harass a bear. It is not recommended to use unleashed dogs to deter bear attacks. Although large, aggressive dogs can sometimes cause a bear to run, if pressed, angry bears often turn the tables and end up chasing the dogs in return. A bear in pursuit of a pet dog can threaten both canid and human lives.
Hunting
The hunting of American black bears has taken place since the initial settlement of the Americas. The first piece of evidence dates to a Clovis site at Lehner Ranch, Arizona. Partially calcined teeth of a 3-month old black bear cub came from a roasting pit, suggesting the bear cub was eaten. The surrounding charcoal was dated to the Early Holocene (10,940 BP). Black bear remains also appear to be associated with early peoples in Tlapacoya, Mexico. Native Americans increasingly utilized black bears during the Holocene, particularly in the late Holocene upper Midwest, e.g., Hopewell and Mississippian cultures.
Some Native American tribes, in admiration for the American black bear's intelligence, would decorate the heads of bears they killed with trinkets and place them on blankets. Tobacco smoke would be wafted into the disembodied head's nostrils by the hunter that dealt the killing blow, who would compliment the animal for its courage. The Kutchin typically hunted American black bears during their hibernation cycle. Unlike the hunting of hibernating grizzly bears, which was fraught with danger, hibernating American black bears took longer to awaken and hunting them was thus safer and easier. During the European colonisation of eastern North America, thousands of bears were hunted for their meat, fat and fur. Theodore Roosevelt wrote extensively on black bear hunting in his Hunting the Grisly and other sketches, in which he stated,
He wrote that black bears were difficult to hunt by stalking, due to their habitat preferences, though they were easy to trap. Roosevelt described how, in the southern states, planters regularly hunted bears on horseback with hounds. General Wade Hampton was known to have been present at 500 successful bear hunts, two-thirds of which he killed personally. He killed 30 or 40 bears with only a knife, which he would use to stab the bears between the shoulder blades while they were distracted by his hounds. Unless well trained, horses were often useless in bear hunts, as they often bolted when the bears stood their ground. In 1799, 192,000 American black bear skins were exported from Quebec. In 1822, 3,000 skins were exported from the Hudson's Bay Company. In 1992, untanned, fleshed and salted hides were sold for an average of $165.
In Canada, black bears are considered as both a big game and furbearer species in all provinces, save for New Brunswick and the Northwest Territories, where they are only classed as a big game species. There are around 80,900 licensed bear hunters in Canada. Canadian black bear hunts take place in the fall and spring, and both male and female bears can be legally taken, though some provinces prohibit the hunting of females with cubs, or yearlings.
Currently, 28 of the U.S. states have American black bear hunting seasons. Nineteen states require a bear hunting license, with some also requiring a big game license. In eight states, only a big game license is required. Overall, over 481,500 American black bear hunting licenses are sold per year. The hunting methods and seasons vary greatly according to state, with some bear hunting seasons including fall only, spring and fall, or year-round. New Jersey, in November 2010, approved a six-day bear-hunting season in early December 2010 to slow the growth of the population. Bear hunting had been banned in New Jersey for five years before that time. A Fairleigh Dickinson University PublicMind poll found that 53% of New Jersey voters approved of the new season if scientists concluded that bears were leaving their usual habitats and destroying private property. Men, older voters and those living in rural areas were more likely to approve of a bear hunting season in New Jersey than women, younger voters and those living in more developed parts of the state. In the western states, where there are large American black bear populations, there are spring and year-round seasons. Approximately 18,000 American black bears were killed annually in the U.S. between 1988 and 1992. Within this period, annual kills ranged from six bears in South Carolina to 2,232 in Maine. According to Dwight Schuh in his Bowhunter's Encyclopedia, American black bears are the third most popular quarry of bowhunters, behind deer and elk.
Meat
Bear meat had historically been held in high esteem among North America's indigenous people and colonists. American black bears were the only bear species the Kutchin hunted for their meat, though this constituted only a small part of their diet. According to the second volume of Frank Forester's Field Sports of the United States, and British Provinces, of North America:
Theodore Roosevelt likened the flesh of young American black bears to that of pork, and not as coarse or flavorless as the meat of grizzly bears. The most favored cuts of meat are concentrated in the legs and loins. Meat from the neck, front legs and shoulders is usually ground into minced meat or used for stews and casseroles. Keeping the fat on tends to give the meat a strong flavor. As American black bears can have trichinellosis, cooking temperatures need to be high in order to kill the parasites.
Bear fat was once valued as a cosmetic article that promoted hair growth and gloss. The fat most favored for this purpose was the hard white fat found in the body's interior. As only a small portion of this fat could be harvested for this purpose, the oil was often mixed with large quantities of hog lard. However, animal rights activism over the last decade has slowed the harvest of these animals; therefore the lard from bears has not been used in recent years for the purpose of cosmetics.
| Biology and health sciences | Bears | Animals |
51079 | https://en.wikipedia.org/wiki/Magnet | Magnet | A magnet is a material or object that produces a magnetic field. This magnetic field is invisible but is responsible for the most notable property of a magnet: a force that pulls on other ferromagnetic materials, such as iron, steel, nickel, cobalt, etc. and attracts or repels other magnets.
A permanent magnet is an object made from a material that is magnetized and creates its own persistent magnetic field. An everyday example is a refrigerator magnet used to hold notes on a refrigerator door. Materials that can be magnetized, which are also the ones that are strongly attracted to a magnet, are called ferromagnetic (or ferrimagnetic). These include the elements iron, nickel and cobalt and their alloys, some alloys of rare-earth metals, and some naturally occurring minerals such as lodestone. Although ferromagnetic (and ferrimagnetic) materials are the only ones attracted to a magnet strongly enough to be commonly considered magnetic, all other substances respond weakly to a magnetic field, by one of several other types of magnetism.
Ferromagnetic materials can be divided into magnetically "soft" materials like annealed iron, which can be magnetized but do not tend to stay magnetized, and magnetically "hard" materials, which do. Permanent magnets are made from "hard" ferromagnetic materials such as alnico and ferrite that are subjected to special processing in a strong magnetic field during manufacture to align their internal microcrystalline structure, making them very hard to demagnetize. To demagnetize a saturated magnet, a certain magnetic field must be applied, and this threshold depends on coercivity of the respective material. "Hard" materials have high coercivity, whereas "soft" materials have low coercivity. The overall strength of a magnet is measured by its magnetic moment or, alternatively, the total magnetic flux it produces. The local strength of magnetism in a material is measured by its magnetization.
An electromagnet is made from a coil of wire that acts as a magnet when an electric current passes through it but stops being a magnet when the current stops. Often, the coil is wrapped around a core of "soft" ferromagnetic material such as mild steel, which greatly enhances the magnetic field produced by the coil.
Discovery and development
Ancient people learned about magnetism from lodestones (or magnetite) which are naturally magnetized pieces of iron ore. The word magnet was adopted in Middle English from Latin magnetum "lodestone", ultimately from Greek μαγνῆτις λίθος (magnētis [lithos]) meaning "[stone] from Magnesia", a place in Anatolia where lodestones were found (today Manisa in modern-day Turkey). Lodestones, suspended so they could turn, were the first magnetic compasses. The earliest known surviving descriptions of magnets and their properties are from Anatolia, India, and China around 2,500 years ago. The properties of lodestones and their affinity for iron were written of by Pliny the Elder in his encyclopedia Naturalis Historia in the 1st century AD.
In 11th century China, it was discovered that quenching red hot iron in the Earth's magnetic field would leave the iron permanently magnetized. This led to the development of the navigational compass, as described in Dream Pool Essays in 1088. By the 12th to 13th centuries AD, magnetic compasses were used in navigation in China, Europe, the Arabian Peninsula and elsewhere.
A straight iron magnet tends to demagnetize itself by its own magnetic field. To overcome this, the horseshoe magnet was invented by Daniel Bernoulli in 1743. A horseshoe magnet avoids demagnetization by returning the magnetic field lines to the opposite pole.
In 1820, Hans Christian Ørsted discovered that a compass needle is deflected by a nearby electric current. In the same year André-Marie Ampère showed that iron can be magnetized by inserting it in an electrically fed solenoid. This led William Sturgeon to develop an iron-cored electromagnet in 1824. Joseph Henry further developed the electromagnet into a commercial product in 1830–1831, giving people access to strong magnetic fields for the first time. In 1831 he built an ore separator with an electromagnet capable of lifting .
Physics
Magnetic field
The magnetic flux density (also called magnetic B field or just magnetic field, usually denoted by B) is a vector field. The magnetic B field vector at a given point in space is specified by two properties:
Its direction, which is along the orientation of a compass needle.
Its magnitude (also called strength), which is proportional to how strongly the compass needle orients along that direction.
In SI units, the strength of the magnetic B field is given in teslas.
Magnetic moment
A magnet's magnetic moment (also called magnetic dipole moment and usually denoted μ) is a vector that characterizes the magnet's overall magnetic properties. For a bar magnet, the direction of the magnetic moment points from the magnet's south pole to its north pole, and the magnitude relates to how strong and how far apart these poles are. In SI units, the magnetic moment is specified in terms of A·m² (amperes times meters squared).
A magnet both produces its own magnetic field and responds to magnetic fields. The strength of the magnetic field it produces is at any given point proportional to the magnitude of its magnetic moment. In addition, when the magnet is put into an external magnetic field, produced by a different source, it is subject to a torque tending to orient the magnetic moment parallel to the field. The amount of this torque is proportional both to the magnetic moment and the external field. A magnet may also be subject to a force driving it in one direction or another, according to the positions and orientations of the magnet and source. If the field is uniform in space, the magnet is subject to no net force, although it is subject to a torque.
A wire in the shape of a circle with area A and carrying current I has a magnetic moment of magnitude equal to IA.
Magnetization
The magnetization of a magnetized material is the local value of its magnetic moment per unit volume, usually denoted M, with units A/m. It is a vector field, rather than just a vector (like the magnetic moment), because different areas in a magnet can be magnetized with different directions and strengths (for example, because of domains, see below). A good bar magnet may have a magnetic moment of magnitude 0.1 A·m² and a volume of 1 cm³, or 1×10⁻⁶ m³, and therefore has an average magnetization magnitude of 100,000 A/m. Iron can have a magnetization of around a million amperes per meter. Such a large value explains why iron magnets are so effective at producing magnetic fields.
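The arithmetic of this worked example is simple enough to check directly. The following is a minimal Python sketch, using only the figures quoted above, computing the average magnetization as moment divided by volume:

```python
# Average magnetization M = m / V for the bar magnet described above.

moment = 0.1    # magnetic moment, A*m^2 (value from the example)
volume = 1e-6   # volume, m^3 (1 cm^3)

magnetization = moment / volume
print(f"average magnetization: {magnetization:,.0f} A/m")  # 100,000 A/m
```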
Modelling magnets
Two different models exist for magnets: magnetic poles and atomic currents.
Although for many purposes it is convenient to think of a magnet as having distinct north and south magnetic poles, the concept of poles should not be taken literally: it is merely a way of referring to the two different ends of a magnet. The magnet does not have distinct north or south particles on opposing sides. If a bar magnet is broken into two pieces, in an attempt to separate the north and south poles, the result will be two bar magnets, each of which has both a north and south pole. However, a version of the magnetic-pole approach is used by professional magneticians to design permanent magnets.
In this approach, the divergence of the magnetization ∇·M inside a magnet is treated as a distribution of magnetic monopoles. This is a mathematical convenience and does not imply that there are actually monopoles in the magnet. If the magnetic-pole distribution is known, then the pole model gives the magnetic field H. Outside the magnet, the field B is proportional to H, while inside the magnetization must be added to H. An extension of this method that allows for internal magnetic charges is used in theories of ferromagnetism.
Another model is the Ampère model, where all magnetization is due to the effect of microscopic, or atomic, circular bound currents, also called Ampèrian currents, throughout the material. For a uniformly magnetized cylindrical bar magnet, the net effect of the microscopic bound currents is to make the magnet behave as if there is a macroscopic sheet of electric current flowing around the surface, with local flow direction normal to the cylinder axis. Microscopic currents in atoms inside the material are generally canceled by currents in neighboring atoms, so only the surface makes a net contribution; shaving off the outer layer of a magnet will not destroy its magnetic field, but will leave a new surface of uncancelled currents from the circular currents throughout the material. The right-hand rule gives the direction in which positive current flows; in practice, however, the current is far more often carried by negatively charged electrons moving in the opposite direction.
Polarity
The north pole of a magnet is defined as the pole that, when the magnet is freely suspended, points towards the Earth's North Magnetic Pole in the Arctic (the magnetic and geographic poles do not coincide, see magnetic declination). Since opposite poles (north and south) attract, the North Magnetic Pole is actually the south pole of the Earth's magnetic field. As a practical matter, to tell which pole of a magnet is north and which is south, it is not necessary to use the Earth's magnetic field at all. For example, one method would be to compare it to an electromagnet, whose poles can be identified by the right-hand rule. The magnetic field lines of a magnet are considered by convention to emerge from the magnet's north pole and reenter at the south pole.
Magnetic materials
The term magnet is typically reserved for objects that produce their own persistent magnetic field even in the absence of an applied magnetic field. Only certain classes of materials can do this. Most materials, however, produce a magnetic field in response to an applied magnetic field – a phenomenon known as magnetism. There are several types of magnetism, and all materials exhibit at least one of them.
The overall magnetic behavior of a material can vary widely, depending on the structure of the material, particularly on its electron configuration. Several forms of magnetic behavior have been observed in different materials, including:
Ferromagnetic and ferrimagnetic materials are the ones normally thought of as magnetic; they are attracted to a magnet strongly enough that the attraction can be felt. These materials are the only ones that can retain magnetization and become magnets; a common example is a traditional refrigerator magnet. Ferrimagnetic materials, which include ferrites and the longest used and naturally occurring magnetic materials magnetite and lodestone, are similar to but weaker than ferromagnetics. The difference between ferro- and ferrimagnetic materials is related to their microscopic structure, as explained in Magnetism.
Paramagnetic substances, such as platinum, aluminum, and oxygen, are weakly attracted to either pole of a magnet. This attraction is hundreds of thousands of times weaker than that of ferromagnetic materials, so it can only be detected by using sensitive instruments or using extremely strong magnets. Magnetic ferrofluids, although they are made of tiny ferromagnetic particles suspended in liquid, are sometimes considered paramagnetic since they cannot be magnetized.
Diamagnetic means repelled by both poles. Compared to paramagnetic and ferromagnetic substances, diamagnetic substances, such as carbon, copper, water, and plastic, are even more weakly repelled by a magnet. The permeability of diamagnetic materials is less than the permeability of a vacuum. All substances not possessing one of the other types of magnetism are diamagnetic; this includes most substances. Although force on a diamagnetic object from an ordinary magnet is far too weak to be felt, using extremely strong superconducting magnets, diamagnetic objects such as pieces of lead and even mice can be levitated, so they float in mid-air. Superconductors repel magnetic fields from their interior and are strongly diamagnetic.
There are various other types of magnetism, such as spin glass, superparamagnetism, superdiamagnetism, and metamagnetism.
Shape
The shape of a permanent magnet has a large influence on its magnetic properties. When a magnet is magnetized, a demagnetizing field will be created inside it. As the name suggests, the demagnetizing field will work to demagnetize the magnet, decreasing its magnetic properties. The strength of the demagnetizing field is proportional to the magnet's magnetization and shape, according to
Hd = −NdM.
Here, Nd is called the demagnetizing factor, and has a different value depending on the magnet's shape. For example, if the magnet is a sphere, then Nd = 1/3.
The value of the demagnetizing factor also depends on the direction of the magnetization in relation to the magnet's shape. Since a sphere is symmetrical from all angles, the demagnetizing factor only has one value. But a magnet that is shaped like a long cylinder will yield two different demagnetizing factors, depending on if it's magnetized parallel to or perpendicular to its length.
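To make the shape dependence concrete, here is a minimal Python sketch of the demagnetizing relation Hd = −NdM reconstructed above; the magnetization value is an assumed, iron-like figure, and Nd = 1/3 for a sphere is the standard value:

```python
# Demagnetizing field H_d = -N_d * M inside a uniformly magnetized body.

def demagnetizing_field(magnetization, demag_factor):
    """Return H_d (A/m); the minus sign shows it opposes the magnetization."""
    return -demag_factor * magnetization

M = 1.0e6             # magnetization, A/m (assumed, iron-like)
N_SPHERE = 1.0 / 3.0  # demagnetizing factor for a sphere

print(demagnetizing_field(M, N_SPHERE))  # about -333,333 A/m
```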
Common uses
Magnetic recording media: VHS tapes contain a reel of magnetic tape. The information that makes up the video and sound is encoded on the magnetic coating on the tape. Common audio cassettes also rely on magnetic tape. Similarly, in computers, floppy disks and hard disks record data on a thin magnetic coating.
Credit, debit, and automatic teller machine cards: All of these cards have a magnetic strip on one side. This strip encodes the information to contact an individual's financial institution and connect with their account(s).
Older types of televisions (non flat screen) and older large computer monitors: TV and computer screens containing a cathode-ray tube employ an electromagnet to guide electrons to the screen.
Speakers and microphones: Most speakers employ a permanent magnet and a current-carrying coil to convert electric energy (the signal) into mechanical energy (movement that creates the sound). The coil is wrapped around a bobbin attached to the speaker cone and carries the signal as changing current that interacts with the field of the permanent magnet. The voice coil feels a magnetic force and in response, moves the cone and pressurizes the neighboring air, thus generating sound. Dynamic microphones employ the same concept, but in reverse. A microphone has a diaphragm or membrane attached to a coil of wire. The coil rests inside a specially shaped magnet. When sound vibrates the membrane, the coil is vibrated as well. As the coil moves through the magnetic field, a voltage is induced across the coil. This voltage drives a current in the wire that is characteristic of the original sound.
Electric guitars use magnetic pickups to transduce the vibration of guitar strings into electric current that can then be amplified. This is different from the principle behind the speaker and dynamic microphone because the vibrations are sensed directly by the magnet, and a diaphragm is not employed. The Hammond organ used a similar principle, with rotating tonewheels instead of strings.
Electric motors and generators: Some electric motors rely upon a combination of an electromagnet and a permanent magnet, and, much like loudspeakers, they convert electric energy into mechanical energy. A generator is the reverse: it converts mechanical energy into electric energy by moving a conductor through a magnetic field.
Medicine: Hospitals use magnetic resonance imaging to spot problems in a patient's organs without invasive surgery.
Chemistry: Chemists use nuclear magnetic resonance to characterize synthesized compounds.
Chucks are used in the metalworking field to hold objects. Magnets are also used in other types of fastening devices, such as the magnetic base, the magnetic clamp and the refrigerator magnet.
Compasses: A compass (or mariner's compass) is a magnetized pointer free to align itself with a magnetic field, most commonly Earth's magnetic field.
Art: Vinyl magnet sheets may be attached to paintings, photographs, and other ornamental articles, allowing them to be attached to refrigerators and other metal surfaces. Objects and paint can be applied directly to the magnet surface to create collage pieces of art. Metal magnetic boards, strips, doors, microwave ovens, dishwashers, cars, metal I beams, and any other metal surface can be used for magnetic vinyl art.
Science projects: Many topic questions are based on magnets, including the repulsion of current-carrying wires, the effect of temperature, and motors involving magnets.
Toys: Given their ability to counteract the force of gravity at close range, magnets are often employed in children's toys, such as the Magnet Space Wheel and Levitron, to amusing effect.
Refrigerator magnets are used to adorn kitchens, as a souvenir, or simply to hold a note or photo to the refrigerator door.
Magnets can be used to make jewelry. Necklaces and bracelets can have a magnetic clasp, or may be constructed entirely from a linked series of magnets and ferrous beads.
Magnets can pick up magnetic items (iron nails, staples, tacks, paper clips) that are either too small, too hard to reach, or too thin for fingers to hold. Some screwdrivers are magnetized for this purpose.
Magnets can be used in scrap and salvage operations to separate magnetic metals (iron, cobalt, and nickel) from non-magnetic metals (aluminum, non-ferrous alloys, etc.). The same idea can be used in the so-called "magnet test", in which a car chassis is inspected with a magnet to detect areas repaired using fiberglass or plastic putty.
Magnets are found in process industries, food manufacturing especially, in order to remove metal foreign bodies from materials entering the process (raw materials) or to detect a possible contamination at the end of the process and prior to packaging. They constitute an important layer of protection for the process equipment and for the final consumer.
Magnetic levitation transport, or maglev, is a form of transportation that suspends, guides and propels vehicles (especially trains) through electromagnetic force. Eliminating rolling resistance increases efficiency. The maximum recorded speed of a maglev train is .
Magnets may be used to serve as a fail-safe device for some cable connections. For example, the power cords of some laptops are magnetic to prevent accidental damage to the port when tripped over. The MagSafe power connection to the Apple MacBook is one such example.
Medical issues and safety
Because human tissues have a very low level of susceptibility to static magnetic fields, there is little mainstream scientific evidence showing a health effect associated with exposure to static fields. Dynamic magnetic fields may be a different issue, however; correlations between electromagnetic radiation and cancer rates have been postulated due to demographic correlations (see Electromagnetic radiation and health).
If a ferromagnetic foreign body is present in human tissue, an external magnetic field interacting with it can pose a serious safety risk.
A different type of indirect magnetic health risk exists involving pacemakers. If a pacemaker has been embedded in a patient's chest (usually for the purpose of monitoring and regulating the heart for steady electrically induced beats), care should be taken to keep it away from magnetic fields. It is for this reason that a patient with the device installed cannot be tested with the use of a magnetic resonance imaging device.
Children sometimes swallow small magnets from toys, and this can be hazardous if two or more magnets are swallowed, as the magnets can pinch or puncture internal tissues.
Magnetic imaging devices (e.g. MRIs) generate enormous magnetic fields, and therefore rooms intended to hold them exclude ferrous metals. Bringing objects made of ferrous metals (such as oxygen canisters) into such a room creates a severe safety risk, as those objects may be powerfully thrown about by the intense magnetic fields.
Magnetizing ferromagnets
Ferromagnetic materials can be magnetized in the following ways:
Heating the object higher than its Curie temperature, allowing it to cool in a magnetic field and hammering it as it cools. This is the most effective method and is similar to the industrial processes used to create permanent magnets.
Placing the item in an external magnetic field will result in the item retaining some of the magnetism on removal. Vibration has been shown to increase the effect. Ferrous materials aligned with the Earth's magnetic field that are subject to vibration (e.g., frame of a conveyor) have been shown to acquire significant residual magnetism. Likewise, striking a steel nail held by fingers in a N-S direction with a hammer will temporarily magnetize the nail.
Stroking: An existing magnet is moved from one end of the item to the other repeatedly in the same direction (single touch method) or two magnets are moved outwards from the center of a third (double touch method).
Electric current: The magnetic field produced by passing an electric current through a coil can get domains to line up. Once all of the domains are lined up, increasing the current will not increase the magnetization.
Demagnetizing ferromagnets
Magnetized ferromagnetic materials can be demagnetized (or degaussed) in the following ways:
Heating a magnet past its Curie temperature; the molecular motion destroys the alignment of the magnetic domains, completely demagnetizing it
Placing the magnet in an alternating magnetic field with intensity above the material's coercivity and then either slowly drawing the magnet out or slowly decreasing the magnetic field to zero. This is the principle used in commercial demagnetizers to demagnetize tools, erase credit cards, hard disks, and degaussing coils used to demagnetize CRTs.
Some demagnetization or reverse magnetization will occur if any part of the magnet is subjected to a reverse field above the magnetic material's coercivity.
Demagnetization progressively occurs if the magnet is subjected to cyclic fields sufficient to move the magnet away from the linear part on the second quadrant of the B–H curve of the magnetic material (the demagnetization curve).
Hammering or jarring: mechanical disturbance tends to randomize the magnetic domains and reduce magnetization of an object, but may cause unacceptable damage.
Types of permanent magnets
Magnetic metallic elements
Many materials have unpaired electron spins, and the majority of these materials are paramagnetic. When the spins interact with each other in such a way that the spins align spontaneously, the materials are called ferromagnetic (what is often loosely termed as magnetic). Because of the way their regular crystalline atomic structure causes their spins to interact, some metals are ferromagnetic when found in their natural states, as ores. These include iron ore (magnetite or lodestone), cobalt and nickel, as well as the rare earth metals gadolinium and dysprosium (when at a very low temperature). Such naturally occurring ferromagnets were used in the first experiments with magnetism. Technology has since expanded the availability of magnetic materials to include various man-made products, all based, however, on naturally magnetic elements.
Composites
Ceramic, or ferrite, magnets are made of a sintered composite of powdered iron oxide and barium/strontium carbonate ceramic. Given the low cost of the materials and manufacturing methods, inexpensive magnets (or non-magnetized ferromagnetic cores, for use in electronic components such as portable AM radio antennas) of various shapes can be easily mass-produced. The resulting magnets are non-corroding but brittle and must be treated like other ceramics.
Alnico magnets are made by casting or sintering a combination of aluminium, nickel and cobalt with iron and small amounts of other elements added to enhance the properties of the magnet. Sintering offers superior mechanical characteristics, whereas casting delivers higher magnetic fields and allows for the design of intricate shapes. Alnico magnets resist corrosion and have physical properties more forgiving than ferrite, but not quite as desirable as a metal. Trade names for alloys in this family include: Alni, Alcomax, Hycomax, Columax, and Ticonal.
Injection-molded magnets are a composite of various types of resin and magnetic powders, allowing parts of complex shapes to be manufactured by injection molding. The physical and magnetic properties of the product depend on the raw materials, but are generally lower in magnetic strength and resemble plastics in their physical properties.
Flexible magnet
Flexible magnets are composed of a high-coercivity ferromagnetic compound (usually ferric oxide) mixed with a resinous polymer binder. This is extruded as a sheet and passed over a line of powerful cylindrical permanent magnets. These magnets are arranged in a stack with alternating magnetic poles facing up (N, S, N, S...) on a rotating shaft. This impresses the plastic sheet with the magnetic poles in an alternating line format. No electromagnetism is used to generate the magnets. The pole-to-pole distance is on the order of 5 mm, but varies with manufacturer. These magnets are lower in magnetic strength but can be very flexible, depending on the binder used.
For magnetic compounds (e.g. Nd2Fe14B) that are vulnerable to a grain boundary corrosion problem, the polymer binder also gives additional protection.
Rare-earth magnets
Rare earth (lanthanoid) elements have a partially occupied f electron shell (which can accommodate up to 14 electrons). The spin of these electrons can be aligned, resulting in very strong magnetic fields, and therefore, these elements are used in compact high-strength magnets where their higher price is not a concern. The most common types of rare-earth magnets are samarium–cobalt and neodymium–iron–boron (NIB) magnets.
Single-molecule magnets (SMMs) and single-chain magnets (SCMs)
In the 1990s, it was discovered that certain molecules containing paramagnetic metal ions are capable of storing a magnetic moment at very low temperatures. These are very different from conventional magnets that store information at a magnetic domain level and theoretically could provide a far denser storage medium than conventional magnets. In this direction, research on monolayers of SMMs is currently under way. Very briefly, the two main attributes of an SMM are:
a large ground state spin value (S), which is provided by ferromagnetic or ferrimagnetic coupling between the paramagnetic metal centres
a negative value of the anisotropy of the zero field splitting (D)
Most SMMs contain manganese but can also be found with vanadium, iron, nickel and cobalt clusters. More recently, it has been found that some chain systems can also display a magnetization that persists for long times at higher temperatures. These systems have been called single-chain magnets.
Nano-structured magnets
Some nano-structured materials exhibit energy waves, called magnons, that coalesce into a common ground state in the manner of a Bose–Einstein condensate.
Rare-earth-free permanent magnets
The United States Department of Energy has identified a need to find substitutes for rare-earth metals in permanent-magnet technology, and has begun funding such research. The Advanced Research Projects Agency-Energy (ARPA-E) has sponsored a Rare Earth Alternatives in Critical Technologies (REACT) program to develop alternative materials. In 2011, ARPA-E awarded $31.6 million to fund Rare-Earth Substitute projects. Iron nitrides are promising materials for rare-earth free magnets.
Costs
The cheapest permanent magnets, relative to their field strengths, are flexible and ceramic magnets, but these are also among the weakest types. Ferrite magnets are mainly low-cost magnets because they are made from cheap raw materials: iron oxide and Ba- or Sr-carbonate. However, a new low-cost magnet, the Mn–Al alloy, has been developed and is now dominating the low-cost magnet field. It has a higher saturation magnetization than ferrite magnets. It also has more favorable temperature coefficients, although it can be thermally unstable.
Neodymium–iron–boron (NIB) magnets are among the strongest. These cost more per kilogram than most other magnetic materials but, owing to their intense field, are smaller and cheaper in many applications.
Temperature
Temperature sensitivity varies, but when a magnet is heated to a temperature known as the Curie point, it loses all of its magnetism, even after cooling below that temperature. The magnets can often be remagnetized, however.
Additionally, some magnets are brittle and can fracture at high temperatures.
The maximum usable temperature is highest for alnico magnets at over , around for ferrite and SmCo, about for NIB and lower for flexible ceramics, but the exact numbers depend on the grade of material.
Electromagnets
An electromagnet, in its simplest form, is a wire that has been coiled into one or more loops, known as a solenoid. When electric current flows through the wire, a magnetic field is generated. It is concentrated near (and especially inside) the coil, and its field lines are very similar to those of a magnet. The orientation of this effective magnet is determined by the right hand rule. The magnetic moment and the magnetic field of the electromagnet are proportional to the number of loops of wire, to the cross-section of each loop, and to the current passing through the wire.
If the coil of wire is wrapped around a material with no special magnetic properties (e.g., cardboard), it will tend to generate a very weak field. However, if it is wrapped around a soft ferromagnetic material, such as an iron nail, then the net field produced can result in a several hundred- to thousandfold increase of field strength.
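As a rough illustration of that enhancement, the following Python sketch uses the standard long-solenoid approximation B ≈ μrμ0NI/L, which is consistent with, though not stated in, the text; the relative permeability μr ≈ 1000 for soft iron is an assumed order-of-magnitude value, and saturation of the core is ignored:

```python
import math

MU0 = 4 * math.pi * 1e-7  # permeability of free space, T*m/A

def solenoid_field(turns, length_m, current_a, mu_r=1.0):
    """Approximate flux density B (tesla) inside a long solenoid."""
    return mu_r * MU0 * turns * current_a / length_m

B_air  = solenoid_field(500, 0.1, 1.0)             # cardboard core
B_iron = solenoid_field(500, 0.1, 1.0, mu_r=1000)  # soft iron core (assumed mu_r)
print(B_air, B_iron, B_iron / B_air)  # roughly a thousandfold increase
```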
Uses for electromagnets include particle accelerators, electric motors, junkyard cranes, and magnetic resonance imaging machines. Some applications involve configurations more than a simple magnetic dipole; for example, quadrupole and sextupole magnets are used to focus particle beams.
Units and calculations
For most engineering applications, MKS (rationalized) or SI (Système International) units are commonly used. Two other sets of units, Gaussian and CGS-EMU, are the same for magnetic properties and are commonly used in physics.
In all units, it is convenient to employ two types of magnetic field, B and H, as well as the magnetization M, defined as the magnetic moment per unit volume.
The magnetic induction field B is given in SI units of teslas (T). B is the magnetic field whose time variation produces, by Faraday's Law, circulating electric fields (which the power companies sell). B also produces a deflection force on moving charged particles (as in TV tubes). The tesla is equivalent to the magnetic flux (in webers) per unit area (in meters squared), thus giving B the unit of a flux density. In CGS, the unit of B is the gauss (G). One tesla equals 10⁴ G.
The magnetic field H is given in SI units of ampere-turns per meter (A-turn/m). The turns appear because when H is produced by a current-carrying wire, its value is proportional to the number of turns of that wire. In CGS, the unit of H is the oersted (Oe). One A-turn/m equals 4π×10⁻³ Oe.
The magnetization M is given in SI units of amperes per meter (A/m). In CGS, the unit of M is the oersted (Oe). One A/m equals 10⁻³ emu/cm³. A good permanent magnet can have a magnetization as large as a million amperes per meter.
In SI units, the relation B = μ0(H + M) holds, where μ0 is the permeability of space, which equals 4π×10⁻⁷ T·m/A. In CGS, it is written as B = H + 4πM. (The pole approach gives μ0H in SI units. A μ0M term must then supplement this μ0H to give the correct field B within the magnet. The result agrees with the field B calculated using Ampèrian currents.)
Materials that are not permanent magnets usually satisfy the relation M = χH in SI, where χ is the (dimensionless) magnetic susceptibility. Most non-magnetic materials have a relatively small χ (on the order of a millionth), but soft magnets can have χ on the order of hundreds or thousands. For materials satisfying M = χH, we can also write B = μ0(1 + χ)H = μ0μrH = μH, where μr = 1 + χ is the (dimensionless) relative permeability and μ = μ0μr is the magnetic permeability. Both hard and soft magnets have a more complex, history-dependent behavior described by what are called hysteresis loops, which give either B vs. H or M vs. H. In CGS, M = χH, but χSI = 4πχCGS, and μ = μr.
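For linear materials these relations reduce to a few lines of arithmetic. The sketch below, with illustrative susceptibility values, shows how B follows from H via B = μ0(1 + χ)H:

```python
import math

MU0 = 4 * math.pi * 1e-7  # T*m/A

def flux_density(H, chi):
    """Return B (tesla) for applied field H (A/m) and susceptibility chi."""
    M = chi * H           # induced magnetization, M = chi * H
    return MU0 * (H + M)  # equivalently MU0 * (1 + chi) * H

H = 1000.0                    # applied field, A/m (illustrative)
print(flux_density(H, 1e-6))  # typical non-magnetic material
print(flux_density(H, 1000))  # soft magnet: about a thousandfold larger B
```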
Caution: in part because there are not enough Roman and Greek symbols, there is no commonly agreed-upon symbol for magnetic pole strength and magnetic moment. The symbol m has been used for both pole strength (unit A·m, where here the upright m is for meter) and for magnetic moment (unit A·m²). The symbol μ has been used in some texts for magnetic permeability and in other texts for magnetic moment. We will use μ for magnetic permeability and m for magnetic moment. For pole strength, we will employ qm. For a bar magnet of cross-section A with uniform magnetization M along its axis, the pole strength is given by qm = MA, so that M can be thought of as a pole strength per unit area.
Fields of a magnet
Far away from a magnet, the magnetic field created by that magnet is almost always described (to a good approximation) by a dipole field characterized by its total magnetic moment. This is true regardless of the shape of the magnet, so long as the magnetic moment is non-zero. One characteristic of a dipole field is that the strength of the field falls off inversely with the cube of the distance from the magnet's center.
Closer to the magnet, the magnetic field becomes more complicated and more dependent on the detailed shape and magnetization of the magnet. Formally, the field can be expressed as a multipole expansion: A dipole field, plus a quadrupole field, plus an octupole field, etc.
At close range, many different fields are possible. For example, for a long, skinny bar magnet with its north pole at one end and south pole at the other, the magnetic field near either end falls off inversely with the square of the distance from that pole.
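The inverse-cube falloff of the far field is easy to demonstrate numerically. This sketch uses the standard on-axis dipole formula B = μ0m / (2πz³), which is not given in the text but is consistent with the stated behavior; the moment is the 0.1 A·m² bar magnet from the magnetization example:

```python
import math

MU0 = 4 * math.pi * 1e-7

def on_axis_dipole_field(moment, z):
    """On-axis B (tesla) of a dipole of moment m (A*m^2) at distance z (m)."""
    return MU0 * moment / (2 * math.pi * z**3)

m = 0.1  # A*m^2
for z in (0.1, 0.2, 0.4):
    print(z, on_axis_dipole_field(m, z))  # doubling z divides B by 8
```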
Calculating the magnetic force
Pull force of a single magnet
The strength of a given magnet is sometimes given in terms of its pull force — its ability to pull ferromagnetic objects. The pull force exerted by either an electromagnet or a permanent magnet with no air gap (i.e., the ferromagnetic object is in direct contact with the pole of the magnet) is given by the Maxwell equation:
F = B²A / (2μ0),
where:
F is force (SI unit: newton)
A is the cross section of the area of the pole (in square meters)
B is the magnetic induction exerted by the magnet.
This result can be easily derived using the Gilbert model, which assumes that the pole of the magnet is charged with magnetic monopoles that induce the same in the ferromagnetic object.
If a magnet is acting vertically, it can lift a mass m in kilograms given by the simple equation:
m = B²A / (2μ0g),
where g is the gravitational acceleration.
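A minimal Python sketch of these two relations, with an assumed flux density and pole area, gives a feel for the magnitudes involved:

```python
import math

MU0 = 4 * math.pi * 1e-7
G = 9.81  # gravitational acceleration, m/s^2

def pull_force(B, area):
    """Maxwell pull force F = B^2 * A / (2*mu_0), in newtons (no air gap)."""
    return B**2 * area / (2 * MU0)

B = 1.0   # tesla, typical of a strong magnet face (assumed)
A = 1e-4  # pole area, m^2 (1 cm^2)
F = pull_force(B, A)
print(F, F / G)  # about 40 N, i.e. roughly a 4 kg lift
```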
Force between two magnetic poles
Classically, the force between two magnetic poles is given by:
F = μ qm1 qm2 / (4πr²),
where
F is force (SI unit: newton)
qm1 and qm2 are the magnitudes of magnetic poles (SI unit: ampere-meter)
μ is the permeability of the intervening medium (SI unit: tesla meter per ampere, henry per meter or newton per ampere squared)
r is the separation (SI unit: meter).
The pole description is useful to the engineers designing real-world magnets, but real magnets have a pole distribution more complex than a single north and south. Therefore, implementation of the pole idea is not simple. In some cases, one of the more complex formulae given below will be more useful.
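For illustration, here is a minimal sketch of the pole–pole force law above; the pole strengths and separation are arbitrary assumed values, and the medium is taken to be vacuum:

```python
import math

MU0 = 4 * math.pi * 1e-7  # permeability of vacuum, T*m/A

def pole_force(qm1, qm2, r, mu=MU0):
    """Force (N) between poles of strength qm1, qm2 (A*m) separated by r (m)."""
    return mu * qm1 * qm2 / (4 * math.pi * r**2)

print(pole_force(10.0, 10.0, 0.05))  # two 10 A*m poles, 5 cm apart: ~4 mN
```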
Force between two nearby magnetized surfaces of area A
The mechanical force between two nearby magnetized surfaces can be calculated with the following equation. The equation is valid only for cases in which the effect of fringing is negligible and the volume of the air gap is much smaller than that of the magnetized material:
F = μ0H²A / 2 = B²A / (2μ0),
where:
A is the area of each surface, in m2
H is their magnetizing field, in A/m
μ0 is the permeability of space, which equals 4π×10⁻⁷ T·m/A
B is the flux density, in T.
Force between two bar magnets
The force between two identical cylindrical bar magnets placed end to end at large distance is approximately:
F ≈ [B0²A²(L² + R²) / (πμ0L²)] [1/z² + 1/(z + 2L)² − 2/(z + L)²],
where:
B0 is the magnetic flux density very close to each pole, in T,
A is the area of each pole, in m2,
L is the length of each magnet, in m,
R is the radius of each magnet, in m, and
z is the separation between the two magnets, in m.
B0 = (μ0/2)M relates the flux density at the pole to the magnetization of the magnet.
Note that all these formulations are based on the Gilbert model, which is usable at relatively large distances. In other models (e.g., Ampère's model), a more complicated formulation is used that sometimes cannot be solved analytically. In these cases, numerical methods must be used.
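The bar-magnet formula reconstructed above translates directly into code. In this sketch the magnet dimensions and the pole flux density B0 are illustrative assumptions, and the result is only meaningful at separations where the Gilbert model holds:

```python
import math

MU0 = 4 * math.pi * 1e-7

def bar_magnet_force(B0, A, L, R, z):
    """Approximate force (N) between two identical coaxial bar magnets."""
    prefactor = B0**2 * A**2 * (L**2 + R**2) / (math.pi * MU0 * L**2)
    return prefactor * (1/z**2 + 1/(z + 2*L)**2 - 2/(z + L)**2)

R = 0.005           # magnet radius, m (assumed)
A = math.pi * R**2  # pole area, m^2
print(bar_magnet_force(B0=0.5, A=A, L=0.02, R=R, z=0.05))
```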
Force between two cylindrical magnets
For two cylindrical magnets with radius R and length L, with their magnetic dipoles aligned, the force can be asymptotically approximated at large distance by
F(x) = (πμ0/4)M²R⁴ [1/x² + 1/(x + 2L)² − 2/(x + L)²],
where M is the magnetization of the magnets and x is the gap between them. A measurement of the magnetic flux density very close to the magnet, B0, is related to M approximately by the formula
B0 = μ0M.
The effective magnetic dipole can be written as
m = MV,
where V is the volume of the magnet. For a cylinder, this is V = πR²L.
When x ≫ R and x ≫ L, the point dipole approximation is obtained,
F(x) = 3πμ0M²R⁴L² / (2x⁴) = 3μ0m1m2 / (2πx⁴),
which matches the expression for the force between two magnetic dipoles.
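The convergence of the full expression to the point-dipole limit can be checked numerically. In the sketch below the magnetization and dimensions are assumed values; as the gap x grows much larger than R and L, the two estimates agree:

```python
import math

MU0 = 4 * math.pi * 1e-7

def cylinder_force(M, R, L, x):
    """Asymptotic force (N) between two coaxial cylindrical magnets."""
    return (math.pi * MU0 / 4) * M**2 * R**4 * (
        1/x**2 + 1/(x + 2*L)**2 - 2/(x + L)**2)

def dipole_force(M, R, L, x):
    """Point-dipole limit, with m = M * V and V = pi * R^2 * L."""
    m = M * math.pi * R**2 * L
    return 3 * MU0 * m**2 / (2 * math.pi * x**4)

M, R, L = 1e6, 0.005, 0.01  # A/m, m, m (assumed)
for x in (0.05, 0.2):
    print(x, cylinder_force(M, R, L, x), dipole_force(M, R, L, x))
```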
| Physical sciences | Basics_9 | null |
51108 | https://en.wikipedia.org/wiki/Poison | Poison | A poison is any chemical substance that is harmful or lethal to living organisms. The term is used in a wide range of scientific fields and industries, where it is often specifically defined. It may also be applied colloquially or figuratively, with a broad sense.
Whether something is considered a poison or not may depend on the amount, the circumstances, and what living things are present. Poisoning could be accidental or deliberate, and if the cause can be identified there may be ways to neutralise the effects or minimise the symptoms.
In biology, a poison is a chemical substance causing death, injury or harm to organisms or their parts. In medicine, poisons are a kind of toxin that are delivered passively, not actively. In industry the term may be negative, something to be removed to make a thing safe, or positive, an agent to limit unwanted pests. In ecological terms, poisons introduced into the environment can later cause unwanted effects elsewhere, or in other parts of the food chain.
Modern definitions
In broad metaphorical (colloquial) usage of the term, "poison" may refer to anything deemed harmful.
In biology, poisons are substances that can cause death, injury, or harm to organs, tissues, cells, and DNA usually by chemical reactions or other activity on the molecular scale, when an organism is exposed to a sufficient quantity.
Medicinal fields (particularly veterinary medicine) and zoology often distinguish poisons from toxins and venoms. Both poisons and venoms are toxins, which are toxicants produced by organisms in nature. The difference between venom and poison is the delivery method of the toxin. Venoms are toxins that are actively delivered by being injected via a bite or sting through a venom apparatus, such as fangs or a stinger, in a process called envenomation, whereas poisons are toxins that are passively delivered by being swallowed, inhaled, or absorbed through the skin. Unantidoteable refers to toxins that cannot be neutralized by modern medical technology, regardless of their type.
Uses
Industry, agriculture, and other sectors employ many poisonous substances, usually for reasons other than their toxicity to humans. Examples include medicines (e.g. anthelmintics used on chickens), solvents (e.g. rubbing alcohol, turpentine), cleaners (e.g. bleach, ammonia), coatings (e.g. arsenic wallpaper), and feedstocks. The toxicity itself sometimes has economic value, when it serves agricultural purposes of weed control and pest control. Most poisonous industrial compounds have associated material safety data sheets and are classified as hazardous substances. Hazardous substances are subject to extensive regulation on production, procurement, and use in overlapping domains of occupational safety and health, public health, drinking water quality standards, air pollution, and environmental protection. Due to the mechanics of molecular diffusion, many poisonous compounds rapidly diffuse into biological tissues, air, water, or soil on a molecular scale. By the principle of entropy, chemical contamination is typically costly or infeasible to reverse, unless specific chelating agents or micro-filtration processes are available. Chelating agents are often broader in scope than the acute target, and therefore their ingestion necessitates careful medical or veterinarian supervision.
Pesticides are one group of substances whose prime purpose is their toxicity to various insects and other animals deemed to be pests (e.g., rats and cockroaches). Natural pesticides have been used for this purpose for thousands of years (e.g. concentrated table salt is toxic to many slugs and snails). Bioaccumulation of chemically-prepared agricultural insecticides is a matter of concern for the many species, especially birds, which consume insects as a primary food source. Selective toxicity, controlled application, and controlled biodegradation are major challenges in herbicide and pesticide development and in chemical engineering generally, as all lifeforms on earth share an underlying biochemistry; organisms exceptional in their environmental resilience are classified as extremophiles, these for the most part exhibiting radically different susceptibilities.
Ecological lifetime
A poison which enters the food chain—whether of industrial, agricultural, or natural origin—might not be immediately toxic to the first organism that ingests the toxin, but can become further concentrated in predatory organisms further up the food chain, particularly carnivores and omnivores, especially concerning fat soluble poisons which tend to become stored in biological tissue rather than excreted in urine or other water-based effluents.
Apart from food, many poisons readily enter the body through the skin and lungs. Hydrofluoric acid is a notorious contact poison, in addition to its corrosive damage. Naturally occurring sour gas is a fast-acting atmospheric poison, which can be released by volcanic activity or drilling rigs. Plant-based contact irritants, such as that possessed by poison ivy, are often classed as allergens rather than poisons; the effect of an allergen being not a poison as such, but to turn the body's natural defenses against itself. Poison can also enter the body through faulty medical implants, or by injection (which is the basis of lethal injection in the context of capital punishment).
In 2013, 3.3 million cases of unintentional human poisonings occurred. This resulted in 98,000 deaths worldwide, down from 120,000 deaths in 1990. In modern society, cases of suspicious death elicit the attention of the Coroner's office and forensic investigators.
Of increasing concern since the isolation of natural radium by Marie and Pierre Curie in 1898—and the subsequent advent of nuclear physics and nuclear technologies—are radiological poisons. These are associated with ionizing radiation, a mode of toxicity quite distinct from chemically active poisons. In mammals, chemical poisons are often passed from mother to offspring through the placenta during gestation, or through breast milk during nursing. In contrast, radiological damage can be passed from mother or father to offspring through genetic mutation, which—if not fatal in miscarriage or childhood, or a direct cause of infertility—can then be passed along again to a subsequent generation. Atmospheric radon is a natural radiological poison of increasing impact since humans moved from hunter-gatherer lifestyles and cave dwelling to increasingly enclosed structures able to contain radon in dangerous concentrations. The 2006 poisoning of Alexander Litvinenko was a notable use of radiological assassination, presumably meant to evade the normal investigation of chemical poisons.
Poisons widely dispersed into the environment are known as pollution. These are often of human origin, but pollution can also include unwanted biological processes such as toxic red tide, or acute changes to the natural chemical environment attributed to invasive species, which are toxic or detrimental to the prior ecology (especially if the prior ecology was associated with human economic value or an established industry such as shellfish harvesting).
The scientific disciplines of ecology and environmental resource management study the environmental life cycle of toxic compounds and their complex, diffuse, and highly interrelated effects.
Etymology
The word "poison" was first used in 1200 to mean "a deadly potion or substance"; the English term comes from the "...Old French poison, puison (12c., Modern French poison) "a drink", especially a medical drink, later "a (magic) potion, poisonous drink" (14c.), from Latin potionem (nominative potio) "a drinking, a drink", also "poisonous drink" (Cicero), from potare "to drink". The use of "poison" as an adjective ("poisonous") dates from the 1520s. Using the word "poison" with plant names dates from the 18th century. The term "poison ivy", for example, was first used in 1784 and the term "poison oak" was first used in 1743. The term "poison gas" was first used in 1915.
Terminology
The term "poison" is often used colloquially to describe any harmful substance—particularly corrosive substances, carcinogens, mutagens, teratogens and harmful pollutants, and to exaggerate the dangers of chemicals. Paracelsus (1493–1541), the father of toxicology, once wrote: "Everything is poison, there is poison in everything. Only the dose makes a thing not a poison"
(see median lethal dose). The term "poison" is also used in a figurative sense: "His brother's presence poisoned the atmosphere at the party". The law defines "poison" more strictly. Substances not legally required to carry the label "poison" can also cause a medical condition of poisoning.
Some poisons are also toxins, which is any poison produced by an organism, such as the bacterial proteins that cause tetanus and botulism. A distinction between the two terms is not always observed, even among scientists. The derivative forms "toxic" and "poisonous" are synonymous. Animal poisons delivered subcutaneously (e.g., by sting or bite) are also called venom. In normal usage, a poisonous organism is one that is harmful to consume, but a venomous organism uses venom to kill its prey or defend itself while still alive. A single organism can be both poisonous and venomous, but it is rare.
All living things produce substances to protect themselves from being eaten, so the term "poison" is usually reserved for substances that are poisonous to humans, while substances that are mainly poisonous to a pathogen common to both the organism and humans are considered antibiotics. Bacteria, for example, are a common adversary of both the Penicillium chrysogenum mold and humans; because the mold's poison targets only bacteria, humans use it to rid their bodies of them. Human antimicrobial peptides, which are toxic to viruses, fungi, bacteria, and cancerous cells, are considered a part of the immune system.
In nuclear physics, a poison is a substance that obstructs or inhibits a nuclear reaction.
Environmentally hazardous substances are not necessarily poisons, and vice versa. For example, food-industry wastewater—which may contain potato juice or milk—can be hazardous to the ecosystems of streams and rivers by consuming oxygen and causing eutrophication, but is nonhazardous to humans and not classified as a poison.
Biologically speaking, any substance, if given in large enough amounts, is poisonous and can cause death. For instance, several kilograms of water would constitute a lethal dose. Many substances used as medications—such as fentanyl—have an LD50 only one order of magnitude greater than the ED50. An alternative classification distinguishes between lethal substances that provide a therapeutic value and those that do not.
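One way to make this dose-dependence concrete is the ratio of the median lethal dose to the median effective dose (the therapeutic index). The sketch below uses purely illustrative numbers, not pharmacological data:

```python
def therapeutic_index(ld50, ed50):
    """Ratio of median lethal dose (LD50) to median effective dose (ED50)."""
    return ld50 / ed50

print(therapeutic_index(ld50=300.0, ed50=10.0))  # wide safety margin (~30x)
print(therapeutic_index(ld50=30.0, ed50=10.0))   # narrow margin (~3x)
```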
Poisoning
Poisoning can be either acute or chronic, and caused by a variety of natural or synthetic substances. Substances that destroy tissue but do not absorb, such as lye, are classified as corrosives rather than poisons.
Acute
Acute poisoning is exposure to a poison on one occasion or during a short period of time. Symptoms develop in close relation to the exposure. Absorption of a poison is necessary for systemic poisoning. Furthermore, many common household medications are not labeled with skull and crossbones, although they can cause severe illness or even death. Poisoning can be caused by excessive consumption of generally safe substances, as in the case of water intoxication.
Agents that act on the nervous system can paralyze in seconds or less, and include both biologically derived neurotoxins and so-called nerve gases, which may be synthesized for warfare or industry.
Inhaled or ingested cyanide, used as a method of execution in gas chambers, or as a suicide method, almost instantly starves the body of energy by inhibiting the enzymes in mitochondria that make ATP. Intravenous injection of an unnaturally high concentration of potassium chloride, such as in the execution of prisoners in parts of the United States, quickly stops the heart by eliminating the cell potential necessary for muscle contraction.
Most biocides, including pesticides, are created to act as acute poisons to target organisms, although acute or less observable chronic poisoning can also occur in non-target organisms (secondary poisoning), including the humans who apply the biocides and other beneficial organisms. For example, the herbicide 2,4-D imitates the action of a plant hormone, which makes its lethal toxicity specific to plants. Indeed, 2,4-D is not a poison, but classified as "harmful" (EU).
Many substances regarded as poisons are toxic only indirectly, by toxication. An example is "wood alcohol" or methanol, which is not poisonous itself, but is chemically converted to toxic formaldehyde and formic acid in the liver. Many drug molecules are made toxic in the liver, and the genetic variability of certain liver enzymes makes the toxicity of many compounds differ between individuals.
Exposure to radioactive substances can produce radiation poisoning, an unrelated phenomenon.
Two common cases of acute natural poisoning are theobromine poisoning of dogs and cats, and mushroom poisoning in humans. Dogs and cats are not natural herbivores, but a chemical defense developed by Theobroma cacao can be incidentally fatal nevertheless. Many omnivores, including humans, readily consume edible fungi, and thus many fungi have evolved to become decisively inedible, in this case as a direct defense.
Chronic
Chronic poisoning is long-term repeated or continuous exposure to a poison where symptoms do not occur immediately or after each exposure. The person gradually becomes ill, or becomes ill after a long latent period. Chronic poisoning most commonly occurs following exposure to poisons that bioaccumulate, or are biomagnified, such as mercury, gadolinium, and lead.
Management
Initial management for all poisonings includes ensuring adequate cardiopulmonary function and providing treatment for any symptoms such as seizures, shock, and pain.
Injected poisons (e.g., from the sting of animals) can be treated by binding the affected body part with a pressure bandage and placing the affected body part in hot water (with a temperature of 50 °C). The pressure bandage prevents the poison being pumped throughout the body, and the hot water breaks it down. This treatment, however, only works with poisons composed of protein-molecules.
In the majority of poisonings the mainstay of management is providing supportive care for the patient, i.e., treating the symptoms rather than the poison.
Decontamination
Treatment of a recently ingested poison may involve gastric decontamination to decrease absorption. Gastric decontamination can involve activated charcoal, gastric lavage, whole bowel irrigation, or nasogastric aspiration. Routine use of emetics (syrup of Ipecac), cathartics or laxatives are no longer recommended.
Activated charcoal is the treatment of choice to prevent poison absorption. It is usually administered when the patient is in the emergency room or by a trained emergency healthcare provider such as a Paramedic or EMT. However, charcoal is ineffective against metals such as sodium, potassium, and lithium, and alcohols and glycols; it is also not recommended for ingestion of corrosive chemicals such as acids and alkalis.
Cathartics were postulated to decrease absorption by increasing the expulsion of the poison from the gastrointestinal tract. There are two types of cathartics used in poisoned patients; saline cathartics (sodium sulfate, magnesium citrate, magnesium sulfate) and saccharide cathartics (sorbitol). They do not appear to improve patient outcome and are no longer recommended.
Emesis (i.e. induced by ipecac) is no longer recommended in poisoning situations, because vomiting is ineffective at removing poisons.
Gastric lavage, commonly known as a stomach pump, is the insertion of a tube into the stomach, followed by administration of water or saline down the tube. The liquid is then removed along with the contents of the stomach. Lavage has been used for many years as a common treatment for poisoned patients. However, a recent review of the procedure in poisonings suggests no benefit. It is still sometimes used if it can be performed within 1 hour of ingestion and the exposure is potentially life-threatening.
Nasogastric aspiration involves the placement of a tube via the nose down into the stomach, the stomach contents are then removed by suction. This procedure is mainly used for liquid ingestions where activated charcoal is ineffective, e.g. ethylene glycol poisoning.
Whole bowel irrigation cleanses the bowel. This is achieved by giving the patient large amounts of a polyethylene glycol solution. The osmotically balanced polyethylene glycol solution is not absorbed into the body, having the effect of flushing out the entire gastrointestinal tract. Its major uses are to treat ingestion of sustained release drugs, toxins not absorbed by activated charcoal (e.g., lithium, iron), and for removal of ingested drug packets (body packing/smuggling).
Enhanced excretion
In some situations elimination of the poison can be enhanced using diuresis, hemodialysis, hemoperfusion, hyperbaric medicine, peritoneal dialysis, exchange transfusion or chelation. However, this may actually worsen the poisoning in some cases, so it should always be verified based on what substances are involved.
Epidemiology
In 2010, poisoning resulted in about 180,000 deaths, down from 200,000 in 1990. There were approximately 727,500 emergency department visits in the United States involving poisonings—3.3% of all injury-related encounters.
Applications
Poisonous compounds may be useful either for their toxicity, or, more often, because of another chemical property, such as specific chemical reactivity. Poisons are widely used in industry and agriculture, as chemical reagents, solvents or complexing reagents, e.g. carbon monoxide, methanol and sodium cyanide, respectively. They are less common in household use, with occasional exceptions such as ammonia and methanol. For instance, phosgene is a highly reactive nucleophile acceptor, which makes it an excellent reagent for polymerizing diols and diamines to produce polycarbonate and polyurethane plastics. For this use, millions of tons are produced annually. However, the same reactivity makes it also highly reactive towards proteins in human tissue and thus highly toxic. In fact, phosgene has been used as a chemical weapon. It can be contrasted with mustard gas, which has only been produced for chemical weapons uses, as it has no particular industrial use.
Biocides need not be poisonous to humans, because they can target metabolic pathways absent in humans, leaving only incidental toxicity. For instance, the herbicide 2,4-dichlorophenoxyacetic acid is a mimic of a plant growth hormone, which causes uncontrollable growth leading to the death of the plant. Humans and animals, lacking this hormone and its receptor, are unaffected by this, and need to ingest relatively large doses before any toxicity appears. Human toxicity is, however, hard to avoid with pesticides targeting mammals, such as rodenticides.
The risk from toxicity is also distinct from toxicity itself. For instance, the preservative thiomersal used in vaccines is toxic, but the quantity administered in a single shot is negligible.
History
Throughout human history, intentional application of poison has been used as a method of murder, pest-control, suicide, and execution. As a method of execution, poison has been ingested, as the ancient Athenians did (see Socrates), inhaled, as with carbon monoxide or hydrogen cyanide (see gas chamber), injected (see lethal injection), or even as an enema. Poison's lethal effect can be combined with its allegedly magical powers; an example is the Chinese gu poison. Poison was also employed in gunpowder warfare. For example, the 14th-century Chinese text of the Huolongjing written by Jiao Yu outlined the use of a poisonous gunpowder mixture to fill cast iron grenade bombs.
While arsenic is a naturally occurring environmental poison, its artificial concentrate was once nicknamed inheritance powder. In Medieval Europe, it was common for monarchs to employ personal food tasters to thwart royal assassination, in the dawning age of the Apothecary.
Figurative use
The term poison is also used in a figurative sense. The slang sense of alcoholic drink is first attested 1805, American English (e.g., a bartender might ask a customer "what's your poison?" or "Pick your poison").
Figurative use of the term dates from the late 15th century. Figuratively referring to persons as poison dates from 1910. The figurative term poison-pen letter became well known in 1913 by a notorious criminal case in Pennsylvania, U.S.; the phrase dates to 1898.
| Biology and health sciences | Miscellaneous | null |
51111 | https://en.wikipedia.org/wiki/Pipeline | Pipeline | A pipeline is a system of pipes for long-distance transportation of a liquid or gas, typically to a market area for consumption. The latest data from 2014 gives a total of slightly less than of pipeline in 120 countries around the world. The United States had 65%, Russia had 8%, and Canada had 3%, thus 76% of all pipeline were in these three countries. Pollution from pipelines is mainly caused by corrosion and leakage.
Pipeline and Gas Journal's worldwide survey figures indicate that of pipelines are planned and under construction. Of these, represent projects in the planning and design phase; reflect pipelines in various stages of construction. Liquids and gases are transported in pipelines, and any chemically stable substance can be sent through a pipeline.
Pipelines exist for the transport of crude and refined petroleum, fuels – such as oil, natural gas and biofuels – and other fluids including sewage, slurry, water, beer, hot water or steam for shorter distances and even pneumatic systems which allow for the generation of suction pressure for useful work and in transporting solid objects. Pipelines are useful for transporting water for drinking or irrigation over long distances when it needs to move over hills, or where canals or channels are poor choices due to considerations of evaporation, pollution, or environmental impact. Oil pipelines are made from steel or plastic tubes which are usually buried. The oil is moved through the pipelines by pump stations along the pipeline. Natural gas (and similar gaseous fuels) are pressurized into liquids known as natural gas liquids (NGLs). Natural gas pipelines are constructed of carbon steel. Hydrogen pipeline transport is the transportation of hydrogen through a pipe. Pipelines are one of the safest ways of transporting materials as compared to road or rail, and hence in war, pipelines are often the target of military attacks.
Oil and natural gas
It is uncertain when the first crude oil pipeline was built, and credit for the development of pipeline transport is disputed. The Oil Transport Association first constructed a wrought iron pipeline over a track from an oil field in Pennsylvania to a railroad station in Oil Creek in the 1860s. Pipelines are generally the most economical way to transport large quantities of oil, refined oil products or natural gas over land. For example, in 2014, pipeline transport of crude oil cost about $5 per barrel, while rail transport cost about $10 to $15 per barrel. Trucking has even higher costs due to the additional labor required; employment on completed pipelines represents only "1% of that of the trucking industry".
In the United States, 70% of crude oil and petroleum products are shipped by pipeline (23% by ship, 4% by truck, and 3% by rail). In Canada, 97% of natural gas and petroleum products are shipped by pipeline.
Natural gas (and similar gaseous fuels) is lightly pressurized into liquids known as natural gas liquids (NGLs). Small NGL processing facilities can be located in oil fields so that butane and propane liquid, under light pressure of , can be shipped by rail, truck or pipeline. Propane can be used as a fuel in oil fields to heat various facilities used by the oil drillers, or equipment and trucks used in the oil patch. For example, propane converts from a gas to a liquid under light pressure, 100 psi, give or take depending on temperature, and is pumped into cars and trucks at less than at retail stations. Pipelines and rail cars use about double that pressure to pump at .
The distance to ship propane to markets is much shorter, as thousands of natural-gas processing plants are located in or near oil fields. Many Bakken Basin oil companies in North Dakota, Montana, Manitoba and Saskatchewan gas fields separate the NGLs in the field, allowing the drillers to sell propane directly to small wholesalers, eliminating the large refinery control of product and prices for propane or butane.
The most recent major pipeline to start operating in North America is a TransCanada natural gas line going north across the Niagara region bridges. This gas line carries Marcellus shale gas from Pennsylvania and other tied in methane or natural gas sources into the Canadian province of Ontario. It began operations in the fall of 2012, supplying 16 percent of all the natural gas used in Ontario.
This new US-supplied natural gas displaces the natural gas formerly shipped to Ontario from western Canada (Alberta and Manitoba), dropping the government-regulated pipeline shipping charges because of the significantly shorter distance from gas source to consumer. To avoid delays and US government regulation, many small, medium and large oil producers in North Dakota have decided to run an oil pipeline north to Canada to meet up with a Canadian oil pipeline shipping oil from west to east. This allows the Bakken Basin and Three Forks oil producers to get higher negotiated prices for their oil, because they will not be restricted to a single wholesale market in the US. The biggest oil patch in North Dakota, at Williston, is only about 85 miles (137 km) from the Canada–US border and Manitoba. Mutual funds and joint ventures are the largest investors in new oil and gas pipelines. In the fall of 2012, the US began exporting propane (known as LPG) to Europe, as wholesale prices there are much higher than in North America. Additionally, a pipeline is currently being constructed from North Dakota to Illinois, commonly known as the Dakota Access Pipeline.
As more North American pipelines are built, even more exports of LNG, propane, butane, and other natural gas products occur on all three US coasts. For perspective, oil production in North Dakota's Bakken region grew by 600% from 2007 to 2015. North Dakota oil companies ship huge amounts of oil by tanker rail car, since rail lets them direct the oil to whichever market gives the best price, and rail cars can be used to avoid a congested oil pipeline, to reach a different pipeline that gets the oil to market faster, or to reach a different, less busy oil refinery. However, pipelines provide a cheaper means of transport per unit volume.
Enbridge in Canada is applying to reverse an oil pipeline running from east to west (Line 9), expand it, and use it to ship western Canadian bitumen oil eastward. From a presently rated 250,000 barrels equivalent per day, the pipeline will be expanded to carry between 1.0 and 1.3 million barrels per day, bringing western oil to refineries in Ontario, Michigan, Ohio, Pennsylvania, Quebec and New York by early 2014. New Brunswick will also refine some of this western Canadian crude and export some crude and refined oil to Europe from its deep-water ULCC oil loading port.
Although pipelines can be built under the sea, that process is economically and technically demanding, so the majority of oil at sea is transported by tanker ships. Similarly, it is often more economically feasible to transport natural gas in the form of LNG; however, the break-even point between LNG and pipelines depends on the volume of natural gas and the distance it travels.
Growth of market
The market size for oil and gas pipeline construction experienced tremendous growth prior to the economic downturn in 2008. After faltering in 2009, demand for pipeline expansion and updating increased the following year as energy production grew. By 2012, almost 32,000 miles (51,500 km) of North American pipeline were being planned or under construction. When pipelines are constrained, additional product transportation options may include the use of drag-reducing agents or moving product by truck or rail.
Construction and operation
Oil pipelines are made from steel or plastic tubes with inner diameter typically from . Most pipelines are typically buried at a depth of about . To protect pipes from impact, abrasion, and corrosion, a variety of methods are used. These can include wood lagging (wood slats), concrete coating, rockshield, high-density polyethylene, imported sand padding, sacrificial cathodes and padding machines.
Crude oil contains varying amounts of paraffin wax and in colder climates wax buildup may occur within a pipeline. Often these pipelines are inspected and cleaned using pigging, the practice of using devices known as "pigs" to perform various maintenance operations on a pipeline. The devices are also known as "scrapers" or "Go-devils". "Smart pigs" (also known as "intelligent" or "intelligence" pigs) are used to detect anomalies in the pipe such as dents, metal loss caused by corrosion, cracking or other mechanical damage. These devices are launched from pig-launcher stations and travel through the pipeline to be received at any other station down-stream, either cleaning wax deposits and material that may have accumulated inside the line or inspecting and recording the condition of the line.
For natural gas, pipelines are constructed of carbon steel and vary in size from in diameter, depending on the type of pipeline. The gas is pressurized by compressor stations and is odorless unless mixed with a mercaptan odorant where required by a regulating authority.
Ammonia
A major ammonia pipeline is the Ukrainian Transammiak line connecting the TogliattiAzot facility in Russia to the exporting Black Sea port of Odesa.
Alcohol fuels
Pipelines have been used for transportation of ethanol in Brazil, and there are several ethanol pipeline projects in Brazil and the United States. The main problems related to the transport of ethanol by pipeline are its corrosive nature and tendency to absorb water and impurities in pipelines, which are not problems with oil and natural gas. Insufficient volumes and cost-effectiveness are other considerations limiting construction of ethanol pipelines.
In the US minimal amounts of ethanol are transported by pipeline. Most ethanol is shipped by rail, the main alternatives being truck and barge. Delivering ethanol by pipeline is the most desirable option, but ethanol's affinity for water and solvent properties require the use of a dedicated pipeline, or significant cleanup of existing pipelines.
Coal and ore
Slurry pipelines are sometimes used to transport coal or ore from mines. The material to be transported is closely mixed with water before being introduced to the pipeline; at the far end, the material must be dried. One example is a slurry pipeline which is planned to transport iron ore from the Minas-Rio mine (producing 26.5 million tonnes per year) to the Port of Açu in Brazil. An existing example is the Savage River Slurry pipeline in Tasmania, Australia, possibly the world's first when it was built in 1967. It includes a bridge span at above the Savage River.
Hydrogen
Hydrogen pipeline transport is the transportation of hydrogen through a pipe as part of the hydrogen infrastructure. It is used to connect the point of hydrogen production or delivery with the point of demand; with transport costs similar to those of CNG, the technology is proven. Most hydrogen is produced at the place of demand, with an industrial production facility every 50 to . The 1938 Rhine-Ruhr hydrogen pipeline is still in operation. , there are of low-pressure hydrogen pipelines in the US and in Europe.
Water
Two millennia ago, the ancient Romans made use of large aqueducts to transport water from higher elevations by building the aqueducts in graduated segments that allowed gravity to push the water along until it reached its destination. Hundreds of these were built throughout Europe and elsewhere, and along with flour mills were considered the lifeline of the Roman Empire. The ancient Chinese also made use of channels and pipe systems for public works. The famous Han dynasty court eunuch Zhang Rang (d. 189 AD) once ordered the engineer Bi Lan to construct a series of square-pallet chain pumps outside the capital city of Luoyang. These chain pumps serviced the imperial palaces and living quarters of the capital city as the water lifted by the chain pumps was brought in by a stoneware pipe system.
Pipelines are useful for transporting water for drinking or irrigation over long distances when it needs to move over hills, or where canals or channels are poor choices due to considerations of evaporation, pollution, or environmental impact.
The Goldfields Water Supply Scheme in Western Australia, completed in 1903 using 750 mm (30-inch) pipe, was the largest water supply scheme of its time.
Examples of significant water pipelines in South Australia are the Morgan-Whyalla (completed 1944) and Mannum-Adelaide (completed 1955) pipelines.
Two Los Angeles, California aqueducts, the Owens Valley aqueduct (completed 1913) and the Second Los Angeles Aqueduct (completed 1970), include extensive use of pipelines.
The Great Manmade River of Libya supplies of water each day to Tripoli, Benghazi, Sirte, and several other cities in Libya. The pipeline is over long, and is connected to wells tapping an aquifer over underground.
Other systems
District heating
District heating or teleheating systems consist of a network of insulated feed and return pipes which transport heated water, pressurized hot water, or sometimes steam to the customer. While steam may be used in industrial processes that require its higher temperature, it is less efficient to produce and transport due to greater heat losses. Heat transfer oils are generally not used, for economic and ecological reasons. The typical annual loss of thermal energy through distribution is around 10%, as seen in Norway's district heating network.
District heating pipelines are normally installed underground, with some exceptions. Within the system, heat storage may be installed to even out peak load demands. Heat is transferred into the central heating of the dwellings through heat exchangers at heat substations, without mixing of the fluids in either system.
Beer
Bars in the Veltins-Arena, a major football ground in Gelsenkirchen, Germany, are interconnected by a long beer pipeline. In the city of Randers, Denmark, the so-called Thor Beer pipeline once operated: originally, copper pipes ran directly from the brewery, but when the brewery moved out of the city in the 1990s, Thor Beer replaced the pipeline with a giant tank.
A three-kilometer beer pipeline was completed in Bruges, Belgium in September 2016 to reduce truck traffic on the city streets.
Brine
The village of Hallstatt in Austria, which is known for its long history of salt mining, claims to contain "the oldest industrial pipeline in the world", dating back to 1595. It was constructed from 13,000 hollowed-out tree trunks to transport brine from Hallstatt to Ebensee.
Milk
Between 1978 and 1994, a 15 km milk pipeline ran between the Dutch island of Ameland and Holwerd on the mainland, of which 8 km was beneath the Wadden Sea. Every day, 30,000 litres of milk produced on the island were transported to be processed on the mainland. In 1994, the pipeline was abandoned.
Pneumatic transport
Rather than transporting fluids, pneumatic tubes are usually used to transport solids in a cylindrical container by compressed air or by partial vacuum. They were most popular in the late 19th and early 20th centuries, and were used to transport small solid objects within a building, e.g. documents in an office or money in a bank. By the 21st century, pneumatic tube transport had been mostly superseded by digital solutions for transporting information, but is still used in cases where convenience and speed in a local environment are important. Hospitals, for example, use them to deliver drugs and specimens.
Marine pipelines
In places, a pipeline may have to cross water expanses, such as small seas, straits and rivers. In many instances, they lie entirely on the seabed. These pipelines are referred to as "marine" pipelines (also, "submarine" or "offshore" pipelines). They are used primarily to carry oil or gas, but transportation of water is also important. In offshore projects, a distinction is made between a "flowline" and a pipeline. The former is an intrafield pipeline, in the sense that it is used to connect subsea wellheads, manifolds and the platform within a particular development field. The latter, sometimes referred to as an "export pipeline", is used to bring the resource to shore. The construction and maintenance of marine pipelines imply logistical challenges that are different from those on land, mainly because of wave and current dynamics, along with other geohazards.
Functions
In general, pipelines can be classified in three categories depending on purpose:
Gathering pipelines Group of smaller interconnected pipelines forming complex networks with the purpose of bringing crude oil or natural gas from several nearby wells to a treatment plant or processing facility. In this group, pipelines are usually short (a couple of hundred metres) and have small diameters. Sub-sea pipelines for collecting product from deep water production platforms are also considered gathering systems.
Transportation pipelines Mainly long pipes with large diameters, moving products (oil, gas, refined products) between cities, countries and even continents. These transportation networks include several compressor stations in gas lines or pump stations for crude and multi-products pipelines.
Distribution pipelines Composed of several interconnected pipelines with small diameters, used to take the products to the final consumer. Feeder lines distributing gas to homes and businesses downstream, and pipelines at terminals for distributing products to tanks and storage facilities, are included in this group.
Development and planning
When a pipeline is built, the construction project not only covers the civil engineering work to lay the pipeline and build the pump/compressor stations, it also has to cover all the work related to the installation of the field devices that will support remote operation.
The pipeline is routed along what is known as a "right of way". Pipelines are generally developed and built using the following stages:
Open season to determine market interest: Potential customers are given the chance to sign up for part of the new pipeline's capacity rights.
Route (right of way) selection including land acquisition (eminent domain)
Pipeline design: The pipeline project may take a number of forms, including the construction of a new pipeline, conversion of existing pipeline from one fuel type to another, or improvements to facilities on a current pipeline route.
Obtaining approval: Once the design is finalized and the first pipeline customers have purchased their share of capacity, the project must be approved by the relevant regulatory agencies.
Surveying the route
Clearing the route
Trenching – Main Route and Crossings (roads, rail, other pipes, etc.)
Installing the pipe
Installing valves, intersections, etc.
Covering the pipe and trench
Testing: Once construction is completed, the new pipeline is subjected to tests to ensure its structural integrity. These may include hydrostatic testing and line packing.
Russia has "Pipeline Troops" as part of the Rear Services, who are trained to build and repair pipelines. Russia is the only country to have Pipeline Troops.
The U.S. government, mainly through the EPA, the FERC and others, reviews proposed pipeline projects in order to comply with the Clean Water Act, the National Environmental Policy Act, other laws and, in some cases, municipal laws. The Biden administration has sought to permit the respective states and tribal groups to appraise and potentially block the proposed projects.
Operation
Field devices are instrumentation, data gathering units and communication systems. The field instrumentation includes flow, pressure, and temperature gauges/transmitters, and other devices to measure the relevant data required. These instruments are installed along the pipeline on some specific locations, such as injection or delivery stations, pump stations (liquid pipelines) or compressor stations (gas pipelines), and block valve stations.
The information measured by these field instruments is then gathered in local remote terminal units (RTUs) that transfer the field data to a central location in real time using communication systems such as satellite channels, microwave links, or cellular phone connections.
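As an illustration of the RTU role just described, the sketch below (in Python, with all station names, field names, and values hypothetical) reduces one polling cycle to its essentials: read the local instruments, timestamp the readings, and package them for transmission to the control centre.

    import json
    import time

    def read_instruments():
        # Hypothetical values; in a real RTU these come from flow, pressure
        # and temperature transmitters wired to the unit.
        return {"flow_bbl_per_h": 2400.0, "pressure_kPa": 6200.0, "temp_C": 18.5}

    def build_message(station_id, readings):
        # Package readings with a timestamp for transfer to the central
        # database over satellite, microwave or cellular links.
        return json.dumps({
            "station": station_id,
            "timestamp": time.time(),
            "readings": readings,
        })

    # One polling cycle for a hypothetical pump station:
    print(build_message("PS-07", read_instruments()))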
Pipelines are controlled and operated remotely, from what is usually known as the "Main Control Room". In this center, all the data related to field measurement is consolidated in one central database. The data is received from multiple RTUs along the pipeline. It is common to find RTUs installed at every station along the pipeline.
The SCADA system at the Main Control Room receives all the field data and presents it to the pipeline operator through a set of screens or Human Machine Interface, showing the operational conditions of the pipeline. The operator can monitor the hydraulic conditions of the line, as well as send operational commands (open/close valves, turn on/off compressors or pumps, change setpoints, etc.) through the SCADA system to the field.
To optimize and secure the operation of these assets, some pipeline companies use what are called "Advanced Pipeline Applications": software tools installed on top of the SCADA system that provide extended functionality to perform leak detection, leak location, batch tracking (liquid lines), pig tracking, composition tracking, predictive modeling, look-ahead modeling, and operator training.
Technology
Components
Pipeline networks are composed of several pieces of equipment that operate together to move products from location to location. The main elements of a pipeline system are:
Initial injection station Known also as "supply" or "inlet" station, is the beginning of the system, where the product is injected into the line. Storage facilities, pumps or compressors are usually located at these locations.
Compressor/pump stations Pumps for liquid pipelines and compressors for gas pipelines, are located along the line to move the product through the pipeline. The location of these stations is defined by the topography of the terrain, the type of product being transported, or operational conditions of the network.
Partial delivery station Known also as "intermediate stations", these facilities allow the pipeline operator to deliver part of the product being transported.
Block valve station These are the first line of protection for pipelines. With these valves the operator can isolate any segment of the line for maintenance work or isolate a rupture or leak. Block valve stations are usually located every 20 to , depending on the type of pipeline. Even though it is not a design rule, this is very common practice in liquid pipelines. The location of these stations depends exclusively on the nature of the product being transported, the trajectory of the pipeline, and the operational conditions of the line.
Regulator station This is a special type of valve station, where the operator can release some of the pressure from the line. Regulators are usually located at the downhill side of a peak.
Final delivery station Known also as "outlet" stations or terminals, this is where the product will be distributed to the consumer. It could be a tank terminal for liquid pipelines or a connection to a distribution network for gas pipelines.
Leak detection systems
Because oil and gas pipelines are important assets for the economic development of almost any country, government regulations and internal policies require pipeline operators to ensure the safety of these assets, and of the population and environment where the pipelines run.
Pipeline companies face government regulation, environmental constraints and social situations. Government regulations may define minimum staff to run the operation, operator training requirements, pipeline facilities, technology and applications required to ensure operational safety. For example, in the State of Washington it is mandatory for pipeline operators to be able to detect and locate leaks of 8 percent of maximum flow within fifteen minutes or less. Social factors also affect the operation of pipelines. Product theft is sometimes also a problem for pipeline companies. In this case, the detection levels should be under two percent of maximum flow, with a high expectation for location accuracy.
Various technologies and strategies have been implemented for monitoring pipelines, from physically walking the lines to satellite surveillance. The most common technology to protect pipelines from occasional leaks is Computational Pipeline Monitoring or CPM. CPM takes information from the field related to pressures, flows, and temperatures to estimate the hydraulic behavior of the product being transported. Once the estimation is completed, the results are compared to other field references to detect the presence of an anomaly or unexpected situation, which may be related to a leak.
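The comparison step at the heart of CPM can be illustrated with a minimal volume-balance check, sketched below in Python. All numbers are hypothetical except the 8-percent-of-maximum-flow, fifteen-minute threshold taken from the Washington State requirement cited earlier; real CPM software uses full hydraulic models that also account for line pack, temperature, and product properties.

    # Minimal volume-balance leak check, the simplest form of Computational
    # Pipeline Monitoring (CPM). Hypothetical figures; real systems model
    # the full hydraulics of the line.

    MAX_FLOW = 1000.0   # maximum flow in barrels per minute (assumed)
    THRESHOLD = 0.08    # 8% of maximum flow (Washington State rule)
    WINDOW = 15         # detection window in minutes (same rule)

    def leak_suspected(inflow, outflow):
        # One metered reading per minute at each end of the segment.
        imbalance = sum(inflow) - sum(outflow)
        return imbalance > THRESHOLD * MAX_FLOW * WINDOW

    # A sustained 10% shortfall at the outlet trips the alarm:
    print(leak_suspected([1000.0] * WINDOW, [900.0] * WINDOW))  # True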
The American Petroleum Institute has published several articles related to the performance of CPM in liquids pipelines. The API Publications are:
API 1130 – Computational pipeline monitoring for liquids pipelines
API 1149 – Pipeline variable uncertainties & their effects on leak detectability
Where a pipeline containing flammable or corrosive product passes under a road or railway, it is usually enclosed in a protective casing. This casing is vented to the atmosphere to prevent the build-up of flammable gases or corrosive substances, and to allow the air inside the casing to be sampled to detect leaks. The casing vent, a pipe protruding from the ground, often doubles as a warning marker called a casing vent marker.
Implementation
Pipelines are generally laid underground because temperature there is less variable. Because pipelines are usually metal, this helps to reduce the expansion and contraction that can occur with weather changes. However, in some cases it is necessary to cross a valley or a river on a pipeline bridge. Pipelines for centralized heating systems are often laid on the ground or overhead. Petroleum pipelines running through permafrost areas, such as the Trans-Alaska Pipeline, are often run overhead to prevent the hot petroleum from melting the frozen ground, which would cause the pipeline to sink.
Maintenance
Maintenance of pipelines includes checking cathodic protection levels for the proper range, surveillance for construction, erosion, or leaks by foot, land vehicle, boat, or air, and running cleaning pigs when there is anything carried in the pipeline that is corrosive.
US pipeline maintenance rules are covered in Code of Federal Regulations (CFR) sections 49 CFR 192 for natural gas pipelines and 49 CFR 195 for petroleum liquid pipelines.
Regulation
In the US, onshore and offshore pipelines used to transport oil and gas are regulated by the Pipeline and Hazardous Materials Safety Administration (PHMSA). Certain offshore pipelines used to produce oil and gas are regulated by the Minerals Management Service (MMS). In Canada, pipelines are regulated by either the provincial regulators or, if they cross provincial boundaries or the Canada–US border, by the National Energy Board (NEB). Government regulations in Canada and the United States require that buried fuel pipelines must be protected from corrosion. Often, the most economical method of corrosion control is by use of pipeline coating in conjunction with cathodic protection and technology to monitor the pipeline. Above ground, cathodic protection is not an option. The coating is the only external protection.
Pipelines and geopolitics
Pipelines for major energy resources (petroleum and natural gas) are not merely an element of trade. They connect to issues of geopolitics and international security as well, and the construction, placement, and control of oil and gas pipelines often figure prominently in state interests and actions. A notable example of pipeline politics occurred at the beginning of the year 2009, wherein a dispute between Russia and Ukraine ostensibly over pricing led to a major political crisis. Russian state-owned gas company Gazprom cut off natural gas supplies to Ukraine after talks between it and the Ukrainian government fell through. In addition to cutting off supplies to Ukraine, Russian gas flowing through Ukraine—which included nearly all supplies to Southeastern Europe and some supplies to Central and Western Europe—was cut off, creating a major crisis in several countries heavily dependent on Russian gas as fuel. Russia was accused of using the dispute as leverage in its attempt to keep other powers, and particularly the European Union, from interfering in its "near abroad".
Oil and gas pipelines also figure prominently in the politics of Central Asia and the Caucasus.
Hazard identification
Because the solvent fraction of dilbit typically comprises volatile aromatics such as naphtha and benzene, reasonably rapid carrier vaporization can be expected to follow an above-ground spill—ostensibly enabling timely intervention by leaving only a viscous residue that is slow to migrate. Effective protocols to minimize exposure to petrochemical vapours are well-established, and oil spilled from the pipeline would be unlikely to reach the aquifer unless incomplete remediation were followed by the introduction of another carrier (e.g. a series of torrential downpours).
The introduction of benzene and other volatile organic compounds (collectively BTEX) to the subterranean environment compounds the threat posed by a pipeline leak. Particularly if followed by rain, a pipeline breach would result in BTEX dissolution and equilibration of benzene in water, followed by percolation of the admixture into the aquifer. Benzene can cause many health problems and is carcinogenic, with an EPA Maximum Contaminant Level (MCL) of 5 μg/L for potable water. Although not well studied, single benzene exposure events have been linked to acute carcinogenesis. Additionally, the exposure of livestock, mainly cattle, to benzene has been shown to cause many health issues, such as neurotoxicity, fetal damage and fatal poisoning.
The entire surface of an above-ground pipeline can be directly examined for material breach. Pooled petroleum is unambiguous, readily spotted, and indicates the location of required repairs. Because the effectiveness of remote inspection is limited by the cost of monitoring equipment, gaps between sensors, and data that requires interpretation, small leaks in buried pipe can sometimes go undetected.
Pipeline developers do not always prioritize effective surveillance against leaks. Buried pipes draw fewer complaints. They are insulated from extremes in ambient temperature, they are shielded from ultraviolet rays, and they are less exposed to photodegradation. Buried pipes are isolated from airborne debris, electrical storms, tornadoes, hurricanes, hail, and acid rain. They are protected from nesting birds, rutting mammals, and stray buckshot. Buried pipe is less vulnerable to accident damage (e.g. automobile collisions) and less accessible to vandals, saboteurs, and terrorists.
Exposure
Previous work has shown that a 'worst-case exposure scenario' can be limited to a specific set of conditions. Based on the advanced detection methods and pipeline shut-off SOP developed by TransCanada, a substantive or large release over a short period of time contaminating groundwater with benzene is unlikely; detection, shutoff, and remediation procedures would limit the dissolution and transport of benzene. Benzene exposure would therefore be limited to leaks that are below the limit of detection and go unnoticed for extended periods of time. Leak detection is monitored through a SCADA system that assesses pressure and volume flow every 5 seconds. A pinhole leak that releases small quantities that cannot be detected by the SCADA system (<1.5% of flow) could accumulate into a substantive spill. Detection of pinhole leaks would come from a visual or olfactory inspection, aerial surveying, or mass-balance inconsistencies. It is assumed that pinhole leaks are discovered within the 14-day inspection interval; however, snow cover and location (e.g. remote, deep) could delay detection. Benzene typically makes up 0.1–1.0% of oil and will have varying degrees of volatility and dissolution based on environmental factors.
Even with pipeline leak volumes within SCADA detection limits, sometimes pipeline leaks are misinterpreted by pipeline operators to be pump malfunctions, or other problems. The Enbridge Line 6B crude oil pipeline failure in Marshall, Michigan, on July 25, 2010, was thought by operators in Edmonton to be from column separation of the dilbit in that pipeline. The leak in wetlands along the Kalamazoo River was only confirmed 17 hours after it happened by a local gas company employee.
Spill frequency-volume
Although the Pipeline and Hazardous Materials Safety Administration (PHMSA) has standard baseline incident frequencies to estimate the number of spills, TransCanada altered these assumptions based on improved pipeline design, operation, and safety. Whether these adjustments are justified is debatable, as they resulted in a nearly 10-fold decrease in spill estimates. Given that the pipeline crosses 247 miles of the Ogallala Aquifer, or 14.5% of the entire pipeline length, and that the 50-year life of the entire pipeline is expected to produce between 11 and 91 spills, approximately 1.6 to 13.2 spills can be expected to occur over the aquifer. An estimate of 13.2 spills over the aquifer, each lasting 14 days, results in 184 days of potential exposure over the 50-year lifetime of the pipeline.
In the reduced-scope worst-case exposure scenario, the volume of a pinhole leak at 1.5% of max flow-rate for 14 days has been estimated at 189,000 barrels or 7.9 million gallons of oil. According to PHMSA's incident database, only 0.5% of all spills in the last 10 years were >10,000 barrels.
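The arithmetic behind these figures can be reproduced directly, as in the Python sketch below; note that the 900,000 barrels-per-day maximum flow rate is inferred from the stated result (189,000 barrels over 14 days at 1.5%) rather than given in the text.

    # Reproducing the spill-exposure arithmetic stated above.
    aquifer_fraction = 0.145           # 247 mi of the route crosses the aquifer
    low, high = 11, 91                 # expected spills over the 50-year life
    print(round(low * aquifer_fraction, 1), round(high * aquifer_fraction, 1))
    # -> 1.6 13.2 spills expected over the aquifer
    print(round(high * aquifer_fraction * 14))
    # -> about 185 days of potential exposure (the text rounds to 184)

    # Pinhole-leak volume: 1.5% of maximum flow for 14 days.
    max_flow = 900_000                 # bbl/day, inferred from the stated result
    leak_bbl = 0.015 * max_flow * 14   # 189,000 barrels
    print(leak_bbl, round(leak_bbl * 42 / 1e6, 1))
    # -> 189000.0 barrels, about 7.9 million US gallons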
Benzene fate and transport
Benzene is considered a light aromatic hydrocarbon with high solubility and high volatility. It is unclear how temperature and depth would impact the volatility of benzene, so assumptions have been made that benzene in oil (1% weight by volume) would not volatilize before equilibrating with water.
Using the octanol-water partition coefficient and a 100-year precipitation event for the area, a worst-case estimate of 75 mg/L of benzene is anticipated to flow toward the aquifer. The actual movement of the plume through groundwater systems is not well described, although one estimate is that up to 4.9 billion gallons of water in the Ogallala Aquifer could become contaminated with benzene at concentrations above the MCL. The Final Environmental Impact Statement from the State Department does not include a quantitative analysis because it assumed that most benzene will volatilize.
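For a sense of scale, the back-of-the-envelope check below (assuming only the figures quoted in this section) compares the worst-case concentration with the MCL and computes the minimum benzene mass needed to bring the quoted volume of groundwater to the MCL.

    # Scale check on the benzene figures quoted above (illustrative only).
    mcl = 5.0          # EPA MCL for benzene, micrograms per litre
    worst_case = 75.0  # worst-case estimate, milligrams per litre
    print((worst_case * 1000) / mcl)   # -> 15000.0, i.e. 15,000 times the MCL

    # Minimum benzene mass to contaminate 4.9 billion US gallons at the MCL:
    litres = 4.9e9 * 3.785             # US gallons to litres
    mass_kg = litres * mcl / 1e9       # micrograms to kilograms
    print(round(mass_kg, 1))           # -> about 92.7 kg of benzene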
Previous dilbit spill remediation difficulties
One of the major concerns over dilbit is the difficulty in cleaning it up. When the aforementioned Enbridge Line 6B crude oil pipeline ruptured in Marshall, Michigan in 2010, at least 843,000 gallons of dilbit were spilled. After detection of the leak, booms and vacuum trucks were deployed. Heavy rains caused the river to overtop existing dams, and carried dilbit 30 miles downstream before the spill was contained. Remediation work collected over 1.1 million gallons of oil and almost 200,000 cubic yards of oil-contaminated sediment and debris from the Kalamazoo River system. However, oil was still being found in affected waters in October 2012.
Accidents and dangers
Pipelines can help ensure a country's economic well-being and as such present a likely target of terrorists or wartime adversaries.
Fossil fuels can be transported by pipeline, rail, truck or ship, though natural gas requires compression or liquefaction to make vehicle transport economical. For transport of crude oil via these four modes, various reports rank pipelines as proportionately causing less human death and property damage than rail and truck and spilling less oil than truck.
Accidents
Pipelines conveying flammable or explosive material, such as natural gas or oil, pose special safety concerns. While corrosion, pressure, and equipment failure are common causes, excavation damage is also a leading accident type that can be avoided by calling 811 before digging near pipelines.
1965 – A 32-inch gas transmission pipeline, north of Natchitoches, Louisiana, belonging to the Tennessee Gas Pipeline exploded and burned from stress corrosion cracking failure on March 4, killing 17 people. At least 9 others were injured, and 7 homes 450 feet from the rupture were destroyed. This accident, and others of the era, led then-President Lyndon B. Johnson to call for the formation of a national pipeline safety agency in 1967. The same pipeline had also had an explosion on May 9, 1955, just 930 feet (280 m) from the 1965 failure.
June 16, 1976 – A gasoline pipeline was ruptured by a road construction crew in Los Angeles, California. Gasoline sprayed across the area, and soon ignited, killing 9, and injuring at least 14 others. Confusion over the depth of the pipeline in the construction area seemed to be a factor in the accident.
June 4, 1989 – The Ufa train disaster: Sparks from two passing trains detonated gas leaking from an LPG pipeline near Ufa, Russia. At least 575 people were reported killed.
October 17, 1998 – 1998 Jesse pipeline explosion: A petroleum pipeline exploded at Jesse on the Niger Delta in Nigeria, killing about 1,200 villagers, some of whom were scavenging gasoline.
June 10, 1999 – A pipeline rupture in a Bellingham, Washington park led to the release of 277,200 gallons of gasoline. The gasoline was ignited, causing an explosion that killed two children and one adult. Misoperation of the pipeline and a previously damaged section of the pipe that was not detected before were identified as causing the failure.
August 19, 2000 – A natural gas pipeline rupture and fire near Carlsbad, New Mexico; this explosion and fire killed 12 members of an extended family. The cause was due to severe internal corrosion of the pipeline.
July 30, 2004 – A major natural gas pipeline exploded in Ghislenghien, Belgium near Ath (thirty kilometres southwest of Brussels), killing at least 24 people and leaving 132 wounded, some critically.
May 12, 2006 – An oil pipeline ruptured outside Lagos, Nigeria. Up to 200 people may have been killed. See Nigeria oil blast.
November 1, 2007 – A propane pipeline exploded near Carmichael, Mississippi, about south of Meridian, Mississippi. Two people were killed instantly and an additional four were injured. Several homes were destroyed and sixty families were displaced. The pipeline is owned by Enterprise Products Partners LP, and runs from Mont Belvieu, Texas, to Apex, North Carolina. The inability to detect flaws in pre-1971 ERW seam-welded pipe was a contributing factor in the accident.
September 9, 2010 – 2010 San Bruno pipeline explosion: A 30-inch diameter high pressure natural gas pipeline owned by the Pacific Gas and Electric Company exploded in the Crestmoor residential neighborhood 2 mi (3.2 km) west of San Francisco International Airport, killing 8, injuring 58, and destroying 38 homes. Poor quality control of the pipe used & of the construction were cited as factors in the accident.
June 27, 2014 – An explosion occurred after a natural gas pipe line ruptured in Nagaram village, East Godavari district, Andhra Pradesh, India causing 16 deaths and destroying "scores of homes".
July 31, 2014 – On the night of July 31, a series of explosions originating in underground gas pipelines occurred in the city of Kaohsiung, Taiwan. Leaking gas filled the sewers along several major thoroughfares and the resulting explosions turned several kilometers of road surface into deep trenches, sending vehicles and debris high into the air and igniting fires over a large area. At least 32 people were killed and 321 injured.
As targets
Pipelines can be the target of vandalism, sabotage, or even terrorist attacks. For example, between early 2011 and July 2012, a natural gas pipeline connecting Egypt to Israel and Jordan was attacked 15 times. In 2019, a fuel pipeline north of Mexico City exploded after fuel thieves tapped into the line. At least sixty-six people were reported to have been killed. In war, pipelines are often the target of military attacks, as destruction of pipelines can seriously disrupt enemy logistics. On 26 September 2022, a series of explosions and subsequent major gas leaks occurred on the Nord Stream 1 and Nord Stream 2 pipelines that run to Europe from Russia under the Baltic Sea. The leaks are believed to have been caused by an act of sabotage.
Fluid control in pipeline transport
In pipeline transport systems, the efficient and safe movement of fluids—whether gas, oil, water, or chemicals—relies on effective fluid control mechanisms. These mechanisms regulate the flow, pressure, and direction of the fluids within the pipeline, preventing blockages and backflow and ensuring smooth transportation over long distances.
Valves
Industrial valves are integral to controlling fluid flow in pipelines. They are used for starting, stopping, or regulating fluid movement. Different types of valves play distinct roles in pipeline systems, including:
Ball valves allow for quick on/off flow control and are often used in pipeline systems where rapid changes in flow are required.
Gate valves provide full flow control, making them suitable for pipelines that require complete shutoff or consistent flow.
Check valves are used to prevent backflow, protecting the pipeline from potential damage caused by reverse fluid movement.
Flow meters
Flow meters are critical for monitoring and adjusting the flow of liquids and gases in pipelines. These devices ensure that the correct flow rate is maintained, preventing both under and over-supply of fluid in the pipeline.
Pressure regulators
Pressure regulators are designed to maintain stable pressure in pipelines by automatically adjusting the flow of fluid. This helps to prevent damage caused by pressure surges or drops, ensuring that the pipeline system operates within safe and efficient parameters.
Actuators
Actuators are used in conjunction with valves to control their opening and closing. Powered by electric, pneumatic, or hydraulic sources, actuators allow for automated fluid control, making them essential for pipelines that require continuous or remote monitoring.
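As a closing illustration (all class names, signals, and limits hypothetical), the Python sketch below ties these elements together: a pressure reading drives an actuator, which closes a valve when the line exceeds an assumed safe limit, mimicking a simple protective interlock.

    # Hypothetical sketch of automated fluid control: an actuator drives a
    # valve in response to a pressure reading.

    class Valve:
        def __init__(self, name):
            self.name = name
            self.is_open = True

    class Actuator:
        # Electric, pneumatic or hydraulic in practice; here a method call.
        def __init__(self, valve):
            self.valve = valve

        def set_open(self, should_open):
            self.valve.is_open = should_open

    MAX_SAFE_PRESSURE_KPA = 7000.0     # assumed safe operating limit

    def control_step(pressure_kpa, actuator):
        # Close the valve on overpressure, reopen when pressure is safe.
        actuator.set_open(pressure_kpa <= MAX_SAFE_PRESSURE_KPA)

    valve = Valve("block-valve-12")
    control_step(7450.0, Actuator(valve))
    print(valve.is_open)               # False: closed on overpressure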
| Technology | Other | null |
51124 | https://en.wikipedia.org/wiki/Pangolin | Pangolin | Pangolins, sometimes known as scaly anteaters, are mammals of the order Pholidota (). The one extant family, the Manidae, has three genera: Manis, Phataginus, and Smutsia. Manis comprises four species found in Asia, while Phataginus and Smutsia include two species each, all found in sub-Saharan Africa. These species range in size from . Several extinct pangolin species are also known. In September 2023, nine species were reported.
Pangolins have large, protective keratin scales, similar in material to fingernails and toenails, covering their skin; they are the only known mammals with this feature. Depending on the species, they live in hollow trees or burrows. Pangolins are nocturnal, and their diet consists of mainly ants and termites, which they capture using their long tongues. They tend to be solitary animals, meeting only to mate and produce a litter of one to three offspring, which they raise for about two years. Pangolins superficially resemble armadillos, though the two are not closely related; they have merely undergone convergent evolution.
Pangolins are threatened by poaching (for their meat and scales, which are used in traditional medicine) and heavy deforestation of their natural habitats, and are the most trafficked mammals in the world. , there are eight species of pangolin whose conservation status is listed in the threatened tier. Three (Manis culionensis, M. pentadactyla and M. javanica) are critically endangered, three (Phataginus tricuspis, Manis crassicaudata and Smutsia gigantea) are endangered and two (Phataginus tetradactyla and Smutsia temminckii) are vulnerable on the Red List of Threatened Species of the International Union for Conservation of Nature.
Etymology
The name of order Pholidota comes from Ancient Greek ϕολιδωτός – "clad in scales" from pholís "scale".
The name "pangolin" comes from the Malay word pengguling meaning "one who rolls up" from guling or giling "to roll"; it was used for the Sunda pangolin (Manis javanica). However, the modern name is tenggiling. In Javanese, it is terenggiling; and in the Philippine languages, it is goling, tanggiling, or balintong (with the same meaning).
In ancient India, according to Aelian, it was known as the phattáges (φαττάγης).
Description
The physical appearance of a pangolin is marked by large, hardened, overlapping, plate-like scales, which are soft on newborn pangolins, but harden as the animal matures. They are made of keratin, the same material from which human fingernails and tetrapod claws are made, and are structurally and compositionally very different from the scales of reptiles. The pangolin's scaled body is comparable in appearance to a pine cone. It can curl up into a ball when threatened, with its overlapping scales acting as armor, while it protects its face by tucking it under its tail. The scales are sharp, providing extra defense from predators.
Pangolins can emit a noxious-smelling chemical from glands near the anus, similar to the spray of a skunk. They have short legs, with sharp claws which they use for burrowing into ant and termite mounds and for climbing.
The tongues of pangolins are extremely long, and like those of the giant anteater and the tube-lipped nectar bat, the root of the tongue is not attached to the hyoid bone but is in the thorax between the sternum and the trachea. Large pangolins can extend their tongues as much as , with a diameter of only about .
Behaviour
Most pangolins are nocturnal animals which use their well-developed sense of smell to find insects. The long-tailed pangolin is also active by day, while other species of pangolins spend most of the daytime sleeping, curled up into a ball ("volvation").
Arboreal pangolins live in hollow trees, whereas the ground-dwelling species dig tunnels to a depth of .
Some pangolins walk with their front claws bent under the foot pad, although they use the entire foot pad on their rear limbs. Furthermore, some exhibit a bipedal stance for some behavior, and may walk a few steps bipedally. Pangolins are also good swimmers.
Diet
Pangolins are insectivorous. Most of their diet consists of various species of ants and termites and may be supplemented by other insects, especially larvae. They are somewhat particular and tend to consume only one or two species of insects, even when many species are available. A pangolin can consume of insects per day. Pangolins are an important regulator of termite populations in their natural habitats.
Pangolins have very poor vision. They also lack teeth. They rely heavily on smell and hearing, and they have other physical characteristics to help them eat ants and termites. Their skeletal structure is sturdy and they have strong front legs used for tearing into termite mounds. They use their powerful front claws to dig into trees, soil, and vegetation to find prey, then proceed to use their long tongues to probe inside the insect tunnels and to retrieve their prey.
The structure of their tongue and stomach is key to aiding pangolins in obtaining and digesting insects. Their saliva is sticky, causing ants and termites to stick to their long tongues when they are hunting through insect tunnels. Lacking teeth, pangolins also cannot chew; instead, while foraging they ingest small stones (gastroliths), which accumulate in their stomachs and help to grind up ants. This part of the stomach is called the gizzard, and it is also covered in keratinous spines, which further aid in the grinding and digestion of the pangolin's prey.
Some species, such as the tree pangolin, use their strong, prehensile tails to hang from tree branches and strip away bark from the trunk, exposing insect nests inside.
Reproduction
Pangolins are solitary and meet only to mate, with mating typically taking place at night after the male and female pangolin meet near a watering hole. Males are larger than females, weighing up to 40% more. While the mating season is not defined, they typically mate once each year, usually during the summer or autumn. Rather than the males seeking out the females, males mark their location with urine or feces and the females find them. If competition over a female occurs, the males use their tails as clubs to fight for the opportunity to mate with her.
Gestation periods differ by species, ranging from roughly 70 to 140 days. African pangolin females usually give birth to a single offspring at a time, but the Asiatic species may give birth to from one to three. Weight at birth is , and the average length is . At the time of birth, the scales are soft and white. After several days, they harden and darken to resemble those of an adult pangolin. During the vulnerable stage, the mother stays with her offspring in the burrow, nursing it, and wraps her body around it if she senses danger. The young cling to the mother's tail as she moves about, although, in burrowing species, they remain in the burrow for the first two to four weeks of life. At one month, they first leave the burrow riding on the mother's back. Weaning takes place around three months of age, when the young begin to eat insects in addition to nursing. At two years of age, the offspring are sexually mature and are abandoned by the mother.
Classification and phylogeny
Taxonomy
Order: Pholidota Weber, 1904
Genus: †Euromanis Gaudin, Emry & Wible, 2009
Family: †Eurotamanduidae Szalay & Schrenk, 1994
Suborder: Eupholidota Gaudin, Emry & Wible, 2009
Superfamily: Manoidea Gaudin, Emry & Wible, 2009
Family: Manidae Gray, 1821
Family: †Patriomanidae Szalay & Schrenk 1998 [sensu Gaudin, Emry & Pogue, 2006]
Incertae sedis
Genus: †Necromanis Filhol, 1893
Superfamily: †Eomanoidea Gaudin, Emry & Wible, 2009
Family: †Eomanidae Storch, 2003
Phylogeny
Among placentals
The order Pholidota was long considered to be the sister taxon to Xenarthra (neotropical anteaters, sloths, and armadillos), but recent genetic evidence indicates their closest living relatives are the carnivorans, with which they form a clade, the Ferae. Palaeanodonts are even closer relatives to pangolins, being classified with pangolins in the clade Pholidotamorpha. The split between carnivorans and pangolins is estimated to have occurred 79.47 Ma (million years) ago.
Among Manidae
The first dichotomy in the phylogeny of extant Manidae separates Asian pangolins (Manis) from African pangolins (Smutsia and Phataginus). Within the former, Manis pentadactyla is the sister group to a clade comprising M. crassicaudata and M. javanica. Within the latter, a split separates the large terrestrial African pangolins of the genus Smutsia from the small arboreal African pangolins of the genus Phataginus.
Asian and African pangolins are thought to have diverged about 41.37 Ma ago. Moreover, the basal position of Manis within Pholidota suggests the group originated in Eurasia, consistent with their laurasiatherian phylogeny.
Threats
Pangolins are in high demand in southern China and Vietnam because their scales are believed to have medicinal properties in traditional Chinese and Vietnamese medicine. Their meat is also considered a delicacy. An estimated 100,000 are trafficked each year to China and Vietnam, amounting to over one million over the past decade; this makes them the most trafficked animal in the world. This demand, coupled with deforestation, has led to a large decrease in the numbers of pangolins. Some species, such as Manis pentadactyla, have become commercially extinct in certain ranges as a result of overhunting. In November 2010, pangolins were added to the Zoological Society of London's list of evolutionarily distinct and endangered mammals. All eight species of pangolin are assessed as threatened by the IUCN, while three are classified as critically endangered. All pangolin species are currently listed under Appendix I of CITES, which prohibits international trade except when the product is intended for non-commercial purposes and a permit has been granted.
China had been the main destination country for pangolins until 2018, when it was surpassed by Vietnam. In 2019, Vietnam was reported to have seized the largest volumes of pangolin scales, surpassing Nigeria that year.
Pangolins are also hunted and eaten in Ghana and are one of the more popular types of bushmeat, while local healers use the pangolin as a source of traditional medicine.
Though pangolins are protected by an international ban on their trade, populations have suffered from illegal trafficking due to beliefs in East Asia that their ground-up scales can stimulate lactation or cure cancer or asthma. In the past decade, numerous seizures of illegally trafficked pangolin and pangolin meat have taken place in Asia. In one such incident in April 2013, of pangolin meat were seized from a Chinese vessel that ran aground in the Philippines. In another case in August 2016, an Indonesian man was arrested after police raided his home and found over 650 pangolins in freezers on his property. The same threat is reported in Nigeria, where the animal is on the verge of extinction due to overexploitation, driven by hunting pangolins for game meat and the reduction of their forest habitats through deforestation caused by timber harvesting. Pangolins are hunted as game meat for both medicinal purposes and food consumption.
Virology
COVID-19 infection
The nucleic acid sequence of a specific receptor-binding domain of the spike protein belonging to coronaviruses taken from pangolins was found to be a 99% match with SARS coronavirus 2 (SARS-CoV-2), the virus which causes COVID-19 and is responsible for the COVID-19 pandemic. Researchers in Guangzhou, China, hypothesized that SARS-CoV-2 had originated in bats and, prior to infecting humans, was circulating among pangolins. The illicit Chinese trade of pangolins for use in traditional Chinese medicine was suggested as a vector for human transmission. However, whole-genome comparison found that the pangolin and human coronaviruses share only up to 92% of their RNA. Ecologists worried that the early speculation about pangolins being the source may have led to mass slaughters, endangering the animals further, similar to what happened to Asian palm civets during the SARS outbreak. The testing that suggested pangolins were a potential host was later shown to be flawed: genetic analysis indicated that the pangolin spike protein and its receptor binding were unlikely to be mechanisms for COVID-19 infection in humans.
Pestivirus and Coltivirus
In 2020, two novel RNA viruses distantly related to pestiviruses and coltiviruses were detected in the genomes of dead Manis javanica and Manis pentadactyla. Referring to both the sampling sites and the hosts, they were named Dongyang pangolin virus (DYPV) and Lishui pangolin virus (LSPV). The DYPV pestivirus was also identified in Amblyomma javanense nymph ticks from a diseased pangolin.
Folk medicine
Pangolin scales and flesh are used as ingredients for various traditional Chinese medicine preparations. While no scientific evidence exists for the efficacy of those practices, and they have no logical mechanism of action, their popularity still drives the black market for animal body parts, despite concerns about toxicity, transmission of diseases from animals to humans, and species extermination. The ongoing demand for parts as ingredients continues to fuel pangolin poaching, hunting and trading.
The first record of pangolin scales occurs in Ben Cao Jinji Zhu ("Variorum of Shennong's Classic of Materia Medica", 500 CE), which recommends pangolin scales for protection against ant bites, and burning the scales as a cure for people crying hysterically during the night. During the Tang dynasty, a recipe for expelling evil spirits with a formulation of scales, herbs, and minerals appeared in 682, and in 752 CE the idea that pangolin scales could also stimulate milk secretion in lactating women, one of the main uses today, was recommended in the Wai Tai Mi Yao ("Arcane Essentials from the Imperial Library"). In the Song dynasty, the notion of penetrating and clearing blockages was emphasized in the Taiping sheng hui fang ("Formulas from Benevolent Sages Compiled During the Era of Peace and Tranquility"), compiled by Wang Huaiyin in 992.
In the 21st century, the main uses of pangolin scales are quackery practices based on unproven claims the scales dissolve blood clots, promote blood circulation, or help lactating women secrete milk. The supposed health effects of pangolin meat and scales claimed by folk medicine practitioners are based on their consumption of ants, long tongues, and protective scales.
The official pharmacopoeia of the People's Republic of China included Chinese pangolin scales as an ingredient in TCM formulations. Pangolins were removed from the pharmacopoeia starting from the first half of 2020. Although pangolin scales have been removed from the list of raw ingredients, the scales are still listed as a key ingredient in various medicines.
Pangolin parts are also used for medicinal purposes in other Asian countries such as India, Nepal and Pakistan. In some parts of India and Nepal, locals believe that wearing the scales of a pangolin can help prevent pneumonia. Pangolin scales have also been used for medicinal purposes in Malaysia, Indonesia and northern Myanmar. Indigenous people in southern Palawan, Philippines, have held the belief that elders could avoid prostate illnesses by wearing belts made with the scales.
Conservation
As a result of increasing threats to pangolins, mainly in the form of illegal, international trade in pangolin skin, scales, and meat, these species have received increasing conservation attention in recent years. , the IUCN considered all eight species of pangolin on its Red List of Threatened Species as threatened. The IUCN SSC Pangolin Specialist Group launched a global action plan to conserve pangolins, dubbed "Scaling up Pangolin Conservation", in July 2014. This action plan aims to improve all aspects of pangolin conservation with an added emphasis on combating poaching and trafficking of the animal while educating communities on its importance. Another suggested approach to fighting pangolin (and general wildlife) trafficking consists in "following the money" rather than "the animal", which aims to disrupt smugglers' profits by interrupting money flows. Financial intelligence gathering could thus become a key tool in protecting these animals, although this opportunity is often overlooked. In 2018, a Chinese NGO launched the Counting Pangolins movement, calling for joint efforts to save the mammals from trafficking. Wildlife conservation group TRAFFIC has identified 159 smuggling routes used by pangolin traffickers and aims to shut these down.
Many attempts have been made to breed pangolins in captivity, but due to their reliance on wide-ranging habitats and very particular diets, these attempts are often unsuccessful. Pangolins have significantly decreased immune responses due to a genetic dysfunction, making them extremely fragile. They are susceptible to diseases such as pneumonia and the development of ulcers in captivity, complications that can lead to an early death. In addition, pangolins rescued from illegal trade often have a higher chance of being infected with parasites such as intestinal worms, further lessening their chance for rehabilitation and reintroduction to the wild.
The idea of farming pangolins to reduce the number being illegally trafficked is being explored, so far with little success. The third Saturday in February is promoted as World Pangolin Day by the conservation NPO Annamiticus. World Pangolin Day has been noted for its effectiveness in generating awareness about pangolins.
In 2017, Jackie Chan made a public service announcement called WildAid: Jackie Chan & Pangolins (Kung Fu Pangolin).
In December 2020, a study found that it is "not too late" to establish conservation efforts for Philippine pangolins (Manis culionensis), a species that is only found on the island province of Palawan.
Taiwan
Taiwan is one of the few conservation grounds for pangolins in the world, following the country's enactment of the 1989 Wildlife Conservation Act. Wildlife rehabilitation centers introduced in places such as Luanshan (Yanping Township) in Taitung and Xiulin Township in Hualien have become important communities for protecting pangolins and their habitats and have greatly improved pangolin survival. These centers work with local aboriginal tribes and forest police in the National Police Agency to prevent poaching, trafficking, and smuggling of pangolins, especially to black markets in China. They have also helped to reveal the causes of death and injury among Taiwan's pangolin population. Today, Taiwan has the highest population density of pangolins in the world.
| Biology and health sciences | Mammals | null |
51138 | https://en.wikipedia.org/wiki/Mail | Mail | The mail or post is a system for physically transporting postcards, letters, and parcels. A postal service can be private or public, though many governments place restrictions on private systems. Since the mid-19th century, national postal systems have generally been established as a government monopoly, with a fee on the article prepaid. Proof of payment is usually in the form of an adhesive postage stamp, but a postage meter is also used for bulk mailing.
Postal authorities often have functions aside from transporting letters. In some countries, a postal, telegraph and telephone (PTT) service oversees the postal system, in addition to telephone and telegraph systems. Some countries' postal systems allow for savings accounts and handle applications for passports.
The Universal Postal Union (UPU), established in 1874, includes 192 member countries and sets the rules for international mail exchanges as a Specialized Agency of the United Nations.
Etymology
The word mail comes from the Middle English word male, referring to a travelling bag or pack. It was spelled in that manner until the 17th century and is unrelated to the word male in the sense of gender. The French have a similar word, malle, for a trunk or large box, and mála is the Irish term for a bag. In the 17th century, the word mail began to appear as a reference for a bag that contained letters: "bag full of letter" (1654). Over the next hundred years the word mail came to be applied strictly to the letters themselves, while the sack became the mailbag. In the 19th century, the British typically used mail to refer to letters being sent abroad (i.e. on a ship) and post to refer to letters for domestic delivery. The word post is derived from Old French poste, which ultimately stems from the past participle of the Latin verb ponere, 'to lay down or place'. So in the U.K., the Royal Mail delivers the post, while in North America both the U.S. Postal Service and Canada Post deliver the mail.
The term email, short for "electronic mail", first appeared in the 1970s. The term snail mail is a retronym to distinguish it from the quicker email. Various dates have been given for its first use.
History
The practice of communication by written documents carried by an intermediary from one person or place to another almost certainly dates back nearly to the invention of writing. However, the development of formal postal systems occurred much later. The first documented use of an organized courier service for the dissemination of written documents is in Egypt, where Pharaohs used couriers to send out decrees throughout the territory of the state (2400 BCE). The earliest surviving piece of mail is also Egyptian, dating to 255 BCE.
Persia (Iran)
The first credible claim for the development of a real postal system comes from Ancient Persia. The best-documented claim (Xenophon) attributes the invention to the Persian King Cyrus the Great (550 BCE), who mandated that every province in his kingdom would organize reception and delivery of post to each of its citizens. Other writers credit his successor Darius I of Persia (521 BCE). Other sources claim much earlier dates for an Assyrian postal system, with credit given to Hammurabi (1700 BCE) and Sargon II (722 BCE). Mail may not have been the primary mission of this postal service, however. The role of the system as an intelligence gathering apparatus is well documented, and the service was (later) called angariae, a term that in time came to indicate a tax system. The Old Testament (Esther, VIII) makes mention of this system: Ahasuerus, king of Medes, used couriers for communicating his decisions.
The Persian system worked using stations (called Chapar-Khaneh), whence the message carrier (called Chapar) would ride to the next post, whereupon he would swap his horse with a fresh one for maximum performance and delivery speed. Herodotus described the system in this way: "It is said that as many days as there are in the whole journey, so many are the men and horses that stand along the road, each horse and man at the interval of a day's journey; and these are stayed neither by snow nor rain nor heat nor darkness from accomplishing their appointed course with all speed". The verse prominently features on New York's James Farley Post Office, although it uses the translation "Neither snow nor rain nor heat nor gloom of night stays these couriers from the swift completion of their appointed rounds".
India
The economic growth and political stability under the Mauryan Empire (322–185 BCE) stimulated sustained development of civil infrastructure in ancient India. The Mauryans developed early Indian mail service as well as public wells, rest houses, and other facilities for the public. Common chariots called Dagana were sometimes used as mail chariots in ancient India. Couriers were used militarily by kings and local rulers to deliver information through runners and other carriers. The postmaster, the head of the intelligence service, was responsible for ensuring the maintenance of the courier system. Couriers were also used to deliver personal letters.
In South India, the Wodeyar dynasty (1399–1947) of the Kingdom of Mysore used the mail service for espionage purposes, thereby acquiring knowledge of matters that took place at great distances.
By the end of the 18th century, a postal system in India was in operation. Later this system underwent complete modernization when the British Raj established its control over most of India. The Post Office Act XVII of 1837 provided that the Governor-General of India in Council had the exclusive right of conveying letters by post for hire within the territories of the East India Company. The mails were available to certain officials without charge, which became a controversial privilege as the years passed. On this basis the Indian Post Office was established on October 1, 1837.
Rome
The first well-documented postal service was that of Rome. Organized at the time of Augustus Caesar (63 BCE – 14 CE), the service was called cursus publicus and was provided with light carriages (rhedæ) pulled by fast horses. By the time of Diocletian, a parallel service was established with two-wheeled carts (birotæ) pulled by oxen. This service was reserved for government correspondence. Yet another service for citizens was later added.
Vietnam
In 1802, the first Vietnamese postal service was established under the Nguyen dynasty, administered by the Ministry of Rites. During the Nguyen dynasty, official documents were transported by horse and other primitive means to stations built about 25–30 kilometers apart. In 1904, three wireless communication offices were established, and in early 1906 they were merged with the postal service to form the Post and Wireless Office. In 1945, after the August Revolution, the Post and Wireless Office was renamed the Post Office under the Ministry of Transportation. In 1955, the Post Office was upgraded to the Ministry of Post.
China
Some Chinese sources claim mail or postal systems dating back to the Xia or Shang dynasties, which would make it the oldest mailing service in the world. The earliest credible system of couriers was initiated by the Han dynasty (206 BCE – 220 CE), which had relay stations every 30 li (about 15 km) along major routes.
The Tang dynasty (618 to 907 AD) operated a recorded 1,639 posthouses, including maritime offices, employing around 20,000 people. The system was administered by the Ministry of War, and private correspondence was forbidden from the network. The Ming dynasty (1368 to 1644) sought a postal system to deliver mail quickly, securely, and cheaply; adequate speed remained a persistent problem because of the slow overland transportation system and underfunding. Its network had 1,936 posthouses every 60 li along major routes, with fresh horses available every 10 li between them. The Qing operated 1,785 posthouses throughout their lands. More efficient, however, was the system linking the international settlements, centered on Shanghai and the treaty ports, which was the main communication system for China's international trade.
Mongol Empire
Genghis Khan installed an empire-wide messenger and postal station system named Örtöö within the Mongol Empire. During the Yuan dynasty under Kublai Khan, this system also covered the territory of China. Postal stations were used not only for the transmission and delivery of official mail but were also available for travelling officials, military men, and foreign dignitaries. These stations aided and facilitated the transport of foreign and domestic tribute specifically and the conduct of trade in general.
By the end of Kublai Khan's rule, there were more than 1400 postal stations in China alone, which in turn had at their disposal about 50,000 horses, 1,400 oxen, 6,700 mules, 400 carts, 6,000 boats, more than 200 dogs, and 1,150 sheep.
The stations were spaced at regular intervals and had reliable attendants working for the mail service. Foreign observers, such as Marco Polo, attested to the efficiency of this early postal system.
Each station was maintained by up to twenty-five families. Work for the postal service counted as military service. The system was still operational in the 18th century, when 64 stations were required for a message to cross Mongolia from the Altai Mountains to China.
Japan
The modern Japanese system was developed in the mid-19th century, closely copying European models. Japan was highly innovative in developing the world's largest and most successful postal savings system, and later a postal life insurance system as well. Postmasters play a key role in linking the Japanese political system to local politics; the position carries high prestige and is often hereditary. To a large extent, the postal system generated the enormous funding necessary to rapidly industrialize Japan in the late 19th century.
Korea
The postal service was one of Korea's first attempts at modernization. The Joseon Post Office was established in 1884.
Other systems
Another important postal service was created in the Islamic world by the caliph Mu'awiyya; the service was called barid, after the name of the towers built to protect the roads by which couriers travelled.
By 3000 BC, Egypt was using homing pigeons for pigeon post, taking advantage of a singular quality of this bird, which when taken far from its nest is able to find its way home due to a particularly developed sense of orientation. Messages were then tied around the legs of the pigeon, which was freed and could reach its original nest. By the 19th century, homing pigeons were used extensively for military communications.
Charlemagne extended to the whole territory of his empire the system used by Franks in northern Gaul and connected this service with that of missi dominici.
In the mid-11th century, flax traders known as the Cairo Geniza Merchants from Fustat, Egypt wrote about using a postal service known as the kutubi. The kutubi system managed routes between the cities of Jerusalem, Ramla, Tyre, Ascalon, Damascus, Aleppo, and Fustat with year-round, regular mail delivery.
Many religious orders had a private mail service. Notably, the Cistercians had one which connected more than 6,000 abbeys, monasteries, and churches. The best organization, however, was created by the Knights Templar.
In 1716, Correos y Telégrafos was established in Spain as a public mail service, available to all citizens. Delivery postmen were first employed in 1756, and post boxes were first installed in 1762.
Thurn und Taxis
In 1505, Holy Roman Emperor Maximilian I established a postal system in the Empire, appointing Franz von Taxis to run it. This system, originally the Kaiserliche Reichspost, is often considered the first modern postal service in the world and initiated a revolution in communication in Europe. It combined contemporary technical and organizational means to create a stable transcontinental service that was also the first to offer (fee-based) public access.
The Thurn und Taxis family, then known as Tassis, had operated postal services between Italian city-states from 1290 onward. For 500 years the postal business, based in Brussels and in Frankfurt, was passed from one generation to another. Following the abolition of the Empire in 1806, the Thurn-und-Taxis Post system continued as a private organization into the postage stamp era before being absorbed into the postal system of the new German Empire after 1871. On 1 July 1867, the State of Prussia had to make a compensation payment of 3,000,000 thalers, which Helene von Thurn und Taxis, daughter-in-law of the last postmaster, Maximilian Karl, 6th Prince of Thurn and Taxis, reinvested in real estate, much of which still exists today.
The Phone Book of the World directory traces its roots to the Thurn und Taxis family's long history in telecommunications: Johannes, 11th Prince of Thurn und Taxis, passed on the family's postal and telecommunications tradition to a student and helped open a small telephone boutique next to a historic postal mansion his ancestors had frequented centuries earlier. Several European postal carriers, such as Deutsche Post and Austrian Post, continue to use the Thurn und Taxis post horn in their logos, as does the Phone Book of the World, based in the old postal mansion of King Louis XIV in Paris.
Postal reforms
In the United Kingdom, prior to 1840 letters were paid for by the recipient and the cost was determined by the distance from sender to recipient and the number of sheets of paper rather than by a countrywide flat rate with weight restrictions. Sir Rowland Hill reformed the postal system based on the concepts of penny postage and prepayment. In his proposal, Hill also called for official pre-printed envelopes and adhesive postage stamps as alternative ways of getting the sender to pay for the postage, at a time when prepayment was optional, which led to the invention of the postage stamp, the Penny Black.
Modern transport and technology
The postal system was important in the development of modern transportation. Railways carried railway post offices. During the 20th century, air mail became the transport of choice for inter-continental mail. Postmen started to use mail trucks. The handling of mail became increasingly automated.
The Internet came to change the conditions for physical mail. Email (and in recent years social networking sites) became a fierce competitor to physical mail systems, but online auctions and Internet shopping opened new business opportunities as people often get items bought online through the mail.
Modern mail
Modern mail is organized by national and privatized services, which are reciprocally connected by international regulations, organizations and international agreements. Paper letters and parcels can be sent to almost any country in the world relatively easily and cheaply. The Internet has made the process of sending letter-like messages nearly instantaneous, and in many cases and situations correspondents use email where they previously would have used letters. The volume of paper mail sent through the U.S. Postal Service has declined by more than 15% since its peak at 213 billion pieces per annum in 2006.
Organization
Some countries have organized their mail services as public limited liability corporations without a legal monopoly.
The worldwide postal system constituting the individual national postal systems of the world's self-governing states is coordinated by the Universal Postal Union, which among other things sets international postage rates, defines standards for postage stamps and operates the system of international reply coupons.
In most countries a system of codes has been created (referred to as ZIP codes in the United States, postcodes in the United Kingdom and Australia, eircodes in Ireland and postal codes in most other countries) in order to facilitate the automation of operations. This also includes placing additional marks on the address portion of the letter or mailed object, called "bar coding". Bar coding of mail for delivery is typically expressed either as a series of vertical bars, usually called POSTNET coding, or as a block of dots forming a two-dimensional barcode. The "block of dots" method allows for the encoding of proof of payment of postage, exact routing for delivery, and other features.
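To make the bar-coding idea concrete, the following minimal Python sketch generates the (now-retired) POSTNET pattern for a ZIP code. It illustrates the published POSTNET scheme, not any postal service's internal software: each digit maps to five bars of which exactly two are full-height, and a check digit is appended so that the digit sum is a multiple of 10.

# Each digit maps to five bars; '1' is a full bar, '0' a half bar.
POSTNET_DIGITS = {
    "0": "11000", "1": "00011", "2": "00101", "3": "00110", "4": "01001",
    "5": "01010", "6": "01100", "7": "10001", "8": "10010", "9": "10100",
}

def postnet_bars(zip_code: str) -> str:
    """Render a POSTNET bar pattern: '|' for a full bar, '.' for a half bar."""
    digits = [c for c in zip_code if c.isdigit()]
    check = (10 - sum(map(int, digits)) % 10) % 10   # brings the digit sum to a multiple of 10
    body = "".join(POSTNET_DIGITS[d] for d in digits + [str(check)])
    return ("1" + body + "1").replace("1", "|").replace("0", ".")  # frame bars at both ends

print(postnet_bars("20500"))  # a five-digit ZIP plus its computed check digit

The same check-digit idea (padding the sum to a round number) recurs in many barcode and account-number schemes; POSTNET itself was phased out in favor of the Intelligent Mail barcode mentioned later in this article.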
The ordinary mail service was improved in the 20th century with the use of planes for quicker delivery. The world's first scheduled airmail post service took place in the United Kingdom between the London suburb of Hendon and Windsor, Berkshire, on 9 September 1911. Some methods of airmail proved ineffective, however, including the United States Postal Service's experiment with rocket mail.
Receipt services were made available in order to grant the sender a confirmation of effective delivery.
Payment
Before about the mid-nineteenth century, payment models varied in regions where postal systems existed, but most mail was sent unpaid, requiring the recipient to pay the postage fee. In some regions a partial payment was made by the sender. Today, worldwide, the most common method of prepaying postage is buying an adhesive postage stamp to be applied to the envelope before mailing; a much less common method is to use a postage-prepaid envelope. Franking is a method of creating postage-prepaid envelopes under licence using a special machine. It is used by companies with large mail programs, such as banks and direct mail companies.
In 1998, the U.S. Postal Service authorised the first tests of a secure system of sending digital franks via the Internet to be printed out on a PC printer, obviating the necessity to license a dedicated franking machine and allowing companies with smaller mail programs to make use of the option; this was later expanded to test the use of personalized postage. The service provided by the U.S. Postal Service in 2003 allows the franks to be printed out on special adhesive-backed labels.
In 2004 the Royal Mail in the United Kingdom introduced its SmartStamp Internet-based system, allowing printing on ordinary adhesive labels or envelopes. Similar systems are being considered by postal administrations around the world.
When the pre-paid envelope or package is accepted into the mail by an agent of the postal service, the agent usually indicates by means of a cancellation that it is no longer valid for pre-payment of postage. The exceptions are when the agent forgets or neglects to cancel the mailpiece; stamps that are pre-cancelled and thus do not require cancellation; and, in most cases, metered mail. (The "personalized stamps" authorized by the USPS and manufactured by Zazzle and other companies are in fact a form of meter label and thus do not need to be cancelled.)
Privacy and censorship
Documents should generally not be read by anyone other than the addressee; for example, in the United States of America it is a violation of federal law for anyone other than the addressee and the government to open mail. There are exceptions however: executives often assign secretaries or assistants the task of handling their mail; and postcards do not require opening and can be read by anyone. For mail contained within an envelope, there are legal provisions in some jurisdictions allowing the recording of identities of sender and recipient.
The privacy of correspondence is guaranteed by the constitutions of Mexico, Colombia, Brazil and Venezuela, and is alluded to in the European Convention on Human Rights and the Universal Declaration of Human Rights. The control of the contents inside private citizens' mail is censorship and concerns social, political, and legal aspects of civil rights. International mail and packages are subject to customs control, with the mail and packages often surveyed and their contents sometimes edited out (or even in).
There have been cases over the millennia of governments opening and copying or photographing the contents of private mail. Subject to the laws in the relevant jurisdiction, correspondence may be openly or covertly opened, or its contents determined via some other method, by the police or other authorities in some cases relating to a suspected criminal conspiracy, although black chambers (largely a practice of the past, though apparently still in some use today) opened mail extralegally.
The mail service may be allowed to open the mail if neither addressee nor sender can be located, in order to attempt to locate either. The mail service may also open the mail to inspect whether it contains materials that are hazardous to transport or that violate local laws.
While in most cases mail censorship is exceptional, military mail to and from soldiers is often subject to surveillance. The mail is censored to prevent leaking tactical secrets, such as troop movements or weather conditions. Depending on the country, civilian mail containing military secrets can also be monitored and censored.
Mail sent to and from inmates in jails or prisons within the United States is subject to opening and review by jail or prison staff, to determine whether it discusses criminal activity or provides the means for an escape. The only mail that may not be read is attorney–client mail, which is protected under attorney–client confidentiality laws in the United States.
Rise of electronic correspondence
Modern alternatives, such as the telegraph, telephone, telex, facsimile, and email, have reduced the attractiveness of paper mail for many applications. These modern alternatives have some advantages: in addition to their speed, they may be more secure, e.g., because the general public cannot learn the address of the sender or recipient from the envelope; and traditional items of mail occasionally fail to arrive, e.g. due to vandalism of mailboxes, unfriendly pets, or adverse weather conditions. Mail carriers, due to perceived hazards or inconveniences, may refuse, officially or otherwise, to deliver mail to a particular address (for instance, if there is no clear path to the door or mailbox). On the other hand, traditional mail avoids the possibility of computer malfunctions and malware, and the recipient does not need to print it out to have a paper copy, though scanning is required to make a digital copy.
Physical mail is still widely used in business and personal communications for such reasons as legal requirements for signatures, requirements of etiquette, and the requirement to enclose small physical objects.
Since the advent of email, which is almost always much faster, the postal system has come to be referred to in Internet slang by the retronym "snail mail". Occasionally, the term "white mail" has also been used as a neutral term for postal mail.
Mainly during the 20th century, experimentation with hybrid mail combined electronic and paper delivery. Electronic mechanisms include the telegram, telex, facsimile (fax), email, and short message service (SMS). Some methods combined mail with these newer media, such as services that paired facsimile transmission with overnight delivery. These media commonly use standardised mechanical or electro-mechanical writing (typing), which makes for more efficient communication but rules out characteristics and practices traditional to conventional mail, such as calligraphy.
This epoch is mainly dominated by mechanical writing, with general use of no more than half a dozen standard typographic fonts from standard keyboards. However, the increased use of typewritten or computer-printed letters for personal communication and the advent of email have sparked renewed interest in calligraphy, as a letter has become more of a "special event". Long before email and computer-printed letters, however, decorated envelopes, rubber stamps and artistamps formed part of the medium of mail art.
In the 2000s, with the advent of eBay and other online auction sites and online stores, postal services in industrialized nations have seen a major shift to item shipping. This has been seen as a boost to the system's usage in the wake of lower paper mail volume due to the accessibility of email.
Online post offices have emerged to give recipients a means of receiving traditional correspondence mail in a scanned electronic format.
Collecting
Postage stamps are also the object of a particular form of collecting. Stamp collecting has been a very popular hobby. In some cases, when demand greatly exceeds supply, stamps' commercial value on this specific market may become enormously greater than their face value, even after use. For some postal services the sale of stamps to collectors who will never use them is a significant source of revenue; for example, stamps from Tokelau, South Georgia & the South Sandwich Islands, Tristan da Cunha, Niuafoʻou and many others. Stamp collecting is commonly known as philately, although strictly the latter term refers to the study of stamps.
Another form of collecting concerns postcards: documents written on a single robust sheet of paper, usually decorated with photographic pictures or artistic drawings on one side, with a short message on a small part of the other side, which also contains space for the address. In strict philatelic usage, the postcard is to be distinguished from the postal card, which has pre-printed postage on the card. The fact that the communication is visible to others besides the receiver often causes messages to be written in jargon.
Letters are often studied as an example of literature, and also in biography in the case of a famous person. A portion of the New Testament of the Bible is composed of the Apostle Paul's epistles to Christian congregations in various parts of the Roman Empire. See below for a list of famous letters.
A style of writing, called epistolary, tells a fictional story in the form of the correspondence between two or more characters.
A makeshift mail method after stranding on a deserted island is a message in a bottle.
Deregulation
Numerous countries, including Sweden (1 January 1993), New Zealand (1998 and 2003), Germany (2005 and 2007), Argentina and Chile, have opened up the postal services market to new entrants. In the case of New Zealand Post Limited, this included (from 2003) its right to be the sole New Zealand postal administration member of the Universal Postal Union, thus ending its monopoly on stamps bearing the name New Zealand.
Types
Letters
Letter-sized mail constitutes the bulk of the contents sent through most postal services. These are usually documents printed on A4 (210×297 mm), Letter-sized (8.5×11 inches), or smaller paper and placed in envelopes.
Handwritten correspondence, while once a major means of communications between distant people, is now used less frequently due to the advent of more immediate forms of communication, such as the telephone or email. Traditional letters, however, are often considered to hark back to a "simpler time" and are still used when someone wishes to be deliberate and thoughtful about their communication. An example would be a letter of sympathy to a bereaved person.
Bills and invoices are often sent through the mail, like regular billing correspondence from utility companies and other service providers. These letters often contain a self-addressed envelope that allows the receiver to remit payment back to the company easily. While still very common, many people now opt to use online bill payment services, which eliminate the need to receive bills through the mail. Paperwork for the confirmation of large financial transactions is often sent through the mail. Many tax documents are as well.
New credit cards and their corresponding personal identification numbers are sent to their owners through the mail. The card and number are usually mailed separately several days or weeks apart for security reasons.
Bulk mail is mail that is prepared for bulk mailing, often by presorting, and processing at reduced rates. It is often used in direct marketing and other advertising mail, although it has other uses as well. The senders of these messages sometimes purchase lists of addresses (which are sometimes targeted towards certain demographics) and then send letters advertising their product or service to all recipients. Other times, commercial solicitations are sent by local companies advertising local products, like a restaurant delivery service advertising to their delivery area or a retail store sending their weekly advertising circular to a general area. Bulk mail is also often sent to companies' existing subscriber bases, advertising new products or services.
First-Class
First-Class Mail in the U.S. includes postcards, letters, large envelopes (flats), and small packages, provided each piece weighs 13 ounces or less. Delivery is given priority over second-class (newspapers and magazines), third-class (bulk advertisements), and fourth-class mail (books and media packages). First-Class Mail prices are based on both the shape and weight of the item being mailed. Pieces over 13 ounces can be sent as Priority Mail. As of 2011, 42% of First-Class Mail arrived the next day, 27% in two days, and 31% in three. The USPS expected that changes to the service in 2012 would cause about 51% to arrive in two days and most of the rest in three.
The Canada Post counterpart is Lettermail.
The British Royal Mail's 1st Class, as it is styled, is simply a priority option over 2nd Class, at a slightly higher cost. Royal Mail aims (but does not guarantee) to deliver all 1st Class letters the day after posting.
In Austria priority delivery mail is called Prio, in Switzerland A-Post.
Registered and recorded mail
Registered mail allows the location and in particular the correct delivery of a letter to be tracked. It is usually considerably more expensive than regular mail, and is typically used for valuable items. Registered mail is constantly tracked through the system.
Recorded mail is handled just like ordinary mail with the exception that it has to be signed for on receipt. This is useful for legal documents where proof of delivery is required.
In the United Kingdom, recorded delivery mail (branded as "Signed For" by the Royal Mail) is covered by the Recorded Delivery Services Act 1962. Under this legislation, any document that the relevant law requires to be served by registered post can also be lawfully served by recorded delivery.
Repositionable notes
The United States Postal Service introduced a test allowing "repositionable notes" (for example, 3M's Post-it notes) to be attached to the outside of envelopes and bulk mailings, afterwards extending the test for an unspecified period. The repositionable note may be fixed directly to the address side of First-Class Mail and Standard Mail letter-size mailpieces. These mailpieces must meet the standards in 7.2 through 7.6. The note is included as an integral part of the mailpiece for weight and postage-rate purposes and must be accounted for in pricing.
Postal cards and postcards
Postal cards and postcards are small message cards that are sent by mail unenveloped; the distinction often, though not invariably and reliably, drawn between them is that "postal cards" are issued by the postal authority or entity with the "postal indicia" (or "stamp") preprinted on them, while postcards are privately issued and require affixing an adhesive stamp (though there have been some cases of a postal authority's issuing non-stamped postcards). Postcards are often printed to promote tourism, with pictures of resorts, tourist attractions or humorous messages on the front and allowing for a short message from the sender to be written on the back. The postage required for postcards is generally less than postage required for standard letters; however, certain technicalities such as their being oversized or having cut-outs, may result in payment of the first-class rate being required.
Postcards are also used by magazines for new subscriptions. Inside many magazines are postage-paid subscription cards that a reader can fill out and mail back to the publishing company to be billed for a subscription to the magazine. In this fashion, magazines also use postcards for other purposes, including reader surveys, contests or information requests.
Postcards are sometimes sent by charities to their members with a message to be signed and sent to a politician (e.g. to promote fair trade or third world debt cancellation).
Other mail services
Small packets are usually less than 2 kg (4 lb).
Larger envelopes are also sent through the mail. These are often composed of a stronger material than standard envelopes and are often used by businesses to transport documents that may not be folded or damaged, such as legal documents and contracts. Due to their size, larger envelopes are sometimes charged additional postage.
Packages are often sent through some postal services, usually requiring additional postage compared with an average letter or postcard. Many postal services have limitations on what a package may or may not contain, usually placing limits or bans on perishable, hazardous or flammable materials. Some hazardous materials in limited quantities may be shipped with appropriate markings and packaging, such as an ORM-D label. Additionally, as a result of terrorism concerns, the U.S. Postal Service subjects packages to numerous security tests, often scanning or x-raying them for contents such as biological agents or mail bombs.
Newspapers and magazines are also sent through postal services. Many magazines are simply deposited in the mail like any other mailpiece. In the U.S., they are printed with a special Intelligent Mail barcode that acts as prepaid postage. Other magazines are now shipped in shrinkwrap to protect loose contents such as blow-in cards. During the second half of the 19th century and the first half of the 20th century, newspapers and magazines were normally posted using wrappers with a stamp imprint.
Hybrid mail, sometimes referred to as L-mail, is the electronic lodgement of mail from the mail generator's computer directly to a Postal Service provider. The Postal Service provider is then able to use electronic means to have the mail piece sorted, routed and physically produced at a site closest to the delivery point. It is a type of mail growing in popularity with some Post Office operations and individual businesses venturing into this market. In some countries, these services are available to print and deliver emails to those who are unable to receive email, such as the elderly or infirm. Services provided by Hybrid mail providers are closely related to that of mail forwarding service providers.
Business model
The business model of modern postal operators can be broken down to four stages: (1) collection, (2) sorting, (3) transportation, and (4) delivery.
Collection is the gathering of mailpieces from various locations such as customer premises, post boxes, and post offices. Newly collected mail is normally not sorted immediately upon receipt and is instead taken directly in its unsorted state to sorting centers.
Sorting is the process of segregating mailpieces into groups based on their type and destination, so that they can be loaded onto an appropriate mode of transportation headed in the general direction of their final destinations. Traditionally, mail was manually sorted by hand, but it is increasingly sorted by automatic sorting machines. The main dilemma faced by postal operators when organizing the sorting stage is whether to have a smaller number of large, centralized sorting centers (a spoke–hub distribution paradigm) or a larger number of smaller sorting centers along with a larger number of direct connections between all of them (point-to-point transit).
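As a toy illustration of that trade-off (not drawn from any operator's actual network data), the Python sketch below counts the links each topology needs: full point-to-point connectivity among n sorting centers requires n(n-1)/2 direct links, while a single-hub design needs only n, at the price of longer average routes and a congested hub.

def point_to_point_links(n: int) -> int:
    """Direct links needed so every pair of sorting centers is connected."""
    return n * (n - 1) // 2

def single_hub_links(n: int) -> int:
    """Links needed when all mail is routed through one central hub."""
    return n

for n in (10, 50, 200):
    print(n, point_to_point_links(n), single_hub_links(n))
# 10 -> 45 vs 10; 50 -> 1225 vs 50; 200 -> 19900 vs 200

The quadratic growth of direct links is why real networks usually settle on a small number of regional hubs rather than either extreme.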
Transportation is the process of carrying mail from one place to another. A mailpiece usually has to be transported from one sorting center to another sorting center, where it is often sorted to another transportation segment headed towards its destination address, until it reaches the sorting center that directly serves that address.
Delivery is the process of carrying mail to final destinations such as letter boxes. Sorting centers sort mailpieces destined for addresses in their immediate vicinity to carriers serving those addresses. Transporting mail to final destinations is the most labor-intensive stage and accounts for up to 50% of postal operators' expenses. Depending upon the final destination, carriers often use vehicles, their own feet, or a combination of both. Postal operators try to control costs by presorting mail for carriers, so that they receive mail already arranged in the correct sequence for their designated routes; reducing the frequency of deliveries; or retiming deliveries so that they are spread throughout the day.
| Technology | Media and communication | null |
51143 | https://en.wikipedia.org/wiki/Giant-impact%20hypothesis | Giant-impact hypothesis | The giant-impact hypothesis, sometimes called the Theia Impact, is an astrogeology hypothesis for the formation of the Moon first proposed in 1946 by Canadian geologist Reginald Daly. The hypothesis suggests that the Early Earth collided with a Mars-sized protoplanet of the same orbit approximately 4.5 billion years ago in the early Hadean eon (about 20 to 100 million years after the Solar System coalesced), and the ejecta of the impact event later accreted to form the Moon. The impactor planet is sometimes called Theia, named after the mythical Greek Titan who was the mother of Selene, the goddess of the Moon.
Analysis of lunar rocks published in a 2016 report suggests that the impact might have been a direct hit, causing a fragmentation and thorough mixing of both parent bodies.
The giant-impact hypothesis is currently the favored hypothesis for lunar formation among astronomers. Evidence that supports this hypothesis includes:
The Moon's orbit has a similar orientation to Earth's rotation, both of which are at a similar angle to the ecliptic plane of the Solar System.
The stable isotope ratios of lunar and terrestrial rock are identical, implying a common origin.
The Earth–Moon system contains an anomalously high angular momentum, meaning the momentum contained in Earth's rotation, the Moon's rotation and the Moon revolving around Earth is significantly higher than the other terrestrial planets. A giant impact might have supplied this excess momentum.
Moon samples indicate that the Moon was once molten to a substantial, but unknown, depth. This might have required much more energy than predicted to be available from the accretion of a celestial body of the Moon's size and mass. An extremely energetic process, such as a giant impact, could provide this energy.
The Moon has a relatively small iron core, which gives it a much lower density than Earth. Computer models of a giant impact of a Mars-sized body with Earth indicate that the impactor's core would likely have penetrated deep into Earth and fused with Earth's core. This would leave the Moon, formed from the ejecta of lighter crust and mantle fragments that went beyond the Roche limit and were not pulled back by gravity to re-fuse with Earth, with less metallic iron than other planetary bodies.
The Moon is depleted in volatile elements compared to Earth. Vaporizing at comparably lower temperatures, they could be lost in a high-energy event, with the Moon's smaller gravity unable to recapture them while Earth did.
There is evidence in other star systems of similar collisions, resulting in debris discs.
Giant collisions are consistent with the leading theory of the formation of the Solar System.
However, several questions remain concerning the best current models of the giant-impact hypothesis. The energy of such a giant impact is predicted to have heated Earth to produce a global magma ocean, and evidence of the resultant planetary differentiation of the heavier material sinking into Earth's mantle has been documented. However, there is no self-consistent model that starts with the giant-impact event and follows the evolution of the debris into a single moon. Other remaining questions include when the Moon lost its share of volatile elements and why Venus, which experienced giant impacts during its formation, does not host a similar moon.
History
In 1898, George Darwin made the suggestion that Earth and the Moon were once a single body. Darwin's hypothesis was that a molten Moon had been spun from Earth because of centrifugal forces, and this became the dominant academic explanation. Using Newtonian mechanics, he calculated that the Moon had orbited much more closely in the past and was drifting away from Earth. This drifting was later confirmed by American and Soviet experiments, using laser ranging targets placed on the Moon.
Nonetheless, Darwin's calculations could not resolve the mechanics required to trace the Moon back to the surface of Earth. In 1946, Reginald Aldworth Daly of Harvard University challenged Darwin's explanation, adjusting it to postulate that the creation of the Moon was caused by an impact rather than centrifugal forces. Little attention was paid to Professor Daly's challenge until a conference on satellites in 1974, during which the idea was reintroduced and later published and discussed in Icarus in 1975 by William K. Hartmann and Donald R. Davis. Their models suggested that, at the end of the planet formation period, several satellite-sized bodies had formed that could collide with the planets or be captured. They proposed that one of these objects might have collided with Earth, ejecting refractory, volatile-poor dust that could coalesce to form the Moon. This collision could potentially explain the unique geological and geochemical properties of the Moon.
A similar approach was taken by Canadian astronomer Alastair G. W. Cameron and American astronomer William R. Ward, who suggested that the Moon was formed by the tangential impact upon Earth of a body the size of Mars. It is hypothesized that most of the outer silicates of the colliding body would be vaporized, whereas a metallic core would not. Hence, most of the collisional material sent into orbit would consist of silicates, leaving the coalescing Moon deficient in iron. The more volatile materials that were emitted during the collision probably would escape the Solar System, whereas silicates would tend to coalesce.
Eighteen months prior to an October 1984 conference on lunar origins, Bill Hartmann, Roger Phillips, and Jeff Taylor challenged fellow lunar scientists: "You have eighteen months. Go back to your Apollo data, go back to your computer, and do whatever you have to, but make up your mind. Don't come to our conference unless you have something to say about the Moon's birth." At the 1984 conference at Kona, Hawaii, the giant-impact hypothesis emerged as the most favored hypothesis.
Theia
The name of the hypothesised protoplanet is derived from the mythical Greek titan Theia, who gave birth to the Moon goddess Selene. This designation was proposed initially by the English geochemist Alex N. Halliday in 2000 and has become accepted in the scientific community. According to modern theories of planet formation, Theia was part of a population of Mars-sized bodies that existed in the Solar System 4.5 billion years ago. One of the attractive features of the giant-impact hypothesis is that the formation of the Moon and Earth align: during the course of its formation, Earth is thought to have experienced dozens of collisions with planet-sized bodies. The Moon-forming collision would have been only one such "giant impact", but certainly the last significant impact event. The Late Heavy Bombardment by much smaller asteroids may have occurred later, approximately 3.9 billion years ago.
Basic model
Astronomers think the collision between Earth and Theia happened about 4.4 to 4.45 billion years ago, about 0.1 billion years after the Solar System began to form. In astronomical terms, the impact would have been of moderate velocity. Theia is thought to have struck Earth at an oblique angle when Earth was nearly fully formed. Computer simulations of this "late-impact" scenario suggest a low initial impactor velocity "at infinity" (far enough away that Earth's gravitational attraction is not yet a factor), increasing under Earth's gravity as Theia approached, and an impact angle of about 45°. However, oxygen isotope abundance in lunar rock suggests "vigorous mixing" of Theia and Earth, indicating a steep impact angle. Theia's iron core would have sunk into the young Earth's core, and most of Theia's mantle accreted onto Earth's mantle. However, a significant portion of the mantle material from both Theia and Earth would have been ejected into orbit around Earth (if ejected with velocities between orbital velocity and escape velocity) or into individual orbits around the Sun (if ejected at higher velocities).
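The relationship between approach speed and final impact speed follows from energy conservation: the impact speed is sqrt(v_inf² + v_esc²), where v_esc ≈ 11.2 km/s is Earth's escape velocity, so even a slow approach arrives at least at escape speed. The short Python sketch below illustrates this; the 4 km/s approach speed is an illustrative assumption, not a figure from this article.

import math

EARTH_ESCAPE_KM_S = 11.2  # escape velocity at Earth's surface, km/s

def impact_speed(v_infinity_km_s: float) -> float:
    """Impact speed from energy conservation: v_imp^2 = v_inf^2 + v_esc^2."""
    return math.sqrt(v_infinity_km_s**2 + EARTH_ESCAPE_KM_S**2)

print(round(impact_speed(4.0), 1))  # 11.9 km/s, only slightly above escape speed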
Modelling has hypothesised that material in orbit around Earth may have accreted to form the Moon in three consecutive phases, accreting first from the bodies initially present outside Earth's Roche limit, which acted to confine the inner disk material within the Roche limit. The inner disk slowly and viscously spread back out to Earth's Roche limit, pushing along outer bodies via resonant interactions. After several tens of years, the disk spread beyond the Roche limit and started producing new objects that continued the growth of the Moon, until the inner disk was depleted in mass after several hundreds of years. Material in stable Kepler orbits was thus likely to hit the Earth–Moon system sometime later (because the Earth–Moon system's Kepler orbit around the Sun also remains stable). Estimates based on computer simulations of such an event suggest that some twenty percent of the original mass of Theia would have ended up as an orbiting ring of debris around Earth, and about half of this matter coalesced into the Moon. Earth would have gained significant amounts of angular momentum and mass from such a collision. Regardless of the speed and tilt of Earth's rotation before the impact, it would have experienced a day some five hours long after the impact, and Earth's equator and the Moon's orbit would have become coplanar.
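The Roche limit invoked above is the distance inside which Earth's tidal forces pull a satellite apart faster than its own gravity can hold it together. For a rigid satellite it is approximately

d \approx R_{\oplus} \left( 2\,\frac{\rho_{\oplus}}{\rho_{m}} \right)^{1/3}

where R_⊕ and ρ_⊕ are Earth's radius and mean density and ρ_m is the satellite's density. Inserting standard lunar and terrestrial densities gives a limit of roughly 9,500 km from Earth's center; these are textbook illustrative values rather than figures taken from this article.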
Not all of the ring material need have been swept up right away: the thickened crust of the Moon's far side suggests the possibility that a second, smaller moon formed in a Lagrange point of the Moon's orbit. The smaller moon may have remained in orbit for tens of millions of years. As the two moons migrated outward from Earth, solar tidal effects would have made the Lagrange orbit unstable, resulting in a slow-velocity collision that "pancaked" the smaller moon onto what is now the far side of the Moon, adding material to its crust.
Lunar magma cannot pierce through the thick crust of the far side, causing fewer lunar maria, while the near side has a thin crust displaying the large maria visible from Earth.
Above a high resolution threshold for simulations, a study published in 2022 finds that giant impacts can immediately place a satellite with similar mass and iron content to the Moon into orbit far outside Earth's Roche limit. Even satellites that initially pass within the Roche limit can reliably and predictably survive, by being partially stripped and then torqued onto wider, stable orbits. Furthermore, the outer layers of these directly formed satellites are molten over cooler interiors and are composed of around 60% proto-Earth material. This could alleviate the tension between the Moon's Earth-like isotopic composition and the different signature expected for the impactor. Immediate formation opens up new options for the Moon's early orbit and evolution, including the possibility of a highly tilted orbit to explain the lunar inclination, and offers a simpler, single-stage scenario for the origin of the Moon.
Composition
In 2001, a team at the Carnegie Institution of Washington reported that the rocks from the Apollo program carried an isotopic signature that was identical with rocks from Earth, and were different from almost all other bodies in the Solar System.
In 2014, a team in Germany reported that the Apollo samples had a slightly different isotopic signature from Earth rocks. The difference was slight, but statistically significant. One possible explanation is that Theia formed near Earth.
This empirical data showing close similarity of composition is difficult to explain under the standard giant-impact hypothesis, as it is extremely unlikely that the two bodies had such similar compositions prior to the collision.
Equilibration hypothesis
In 2007, researchers from the California Institute of Technology showed that the likelihood of Theia having an identical isotopic signature as Earth was very small (less than 1 percent). They proposed that in the aftermath of the giant impact, while Earth and the proto-lunar disc were molten and vaporised, the two reservoirs were connected by a common silicate vapor atmosphere and that the Earth–Moon system became homogenised by convective stirring while the system existed in the form of a continuous fluid. Such an "equilibration" between the post-impact Earth and the proto-lunar disc is the only proposed scenario that explains the isotopic similarities of the Apollo rocks with rocks from Earth's interior. For this scenario to be viable, however, the proto-lunar disc would have to endure for about 100 years. Work is ongoing to determine whether or not this is possible.
Direct collision hypothesis
According to 2012 research based on simulations at the University of Bern by physicist Andreas Reufer and his colleagues, intended to explain the similar compositions of the Earth and the Moon, Theia collided directly with Earth instead of barely grazing it. The collision speed may have been higher than originally assumed, and this higher velocity may have totally destroyed Theia. According to this modification, the composition of Theia is not so restricted, making a composition of up to 50% water ice possible.
Synestia hypothesis
One effort, in 2018, to homogenise the products of the collision was to energise the primary body by way of a greater pre-collision rotational speed. This way, more material from the primary body would be spun off to form the Moon. Further computer modelling determined that the observed result could be obtained by having the pre-Earth body spinning very rapidly, so much so that it formed a new celestial object which was given the name 'synestia'. This is an unstable state that could have been generated by yet another collision to get the rotation spinning fast enough. Further modelling of this transient structure has shown that the primary body spinning as a doughnut-shaped object (the synestia) existed for about a century (a very short time) before it cooled down and gave birth to Earth and the Moon.
Terrestrial magma ocean hypothesis
Another model, in 2019, to explain the similarity of Earth and the Moon's compositions posits that shortly after Earth formed, it was covered by a sea of hot magma, while the impacting object was likely made of solid material. Modelling suggests that this would lead to the impact heating the magma much more than solids from the impacting object, leading to more material being ejected from the proto-Earth, so that about 80% of the Moon-forming debris originated from the proto-Earth. Many prior models had suggested 80% of the Moon coming from the impactor.
Evidence
Indirect evidence for the giant impact scenario comes from rocks collected during the Apollo Moon landings, which show oxygen isotope ratios nearly identical to those of Earth. The highly anorthositic composition of the lunar crust, as well as the existence of KREEP-rich samples, suggest that a large portion of the Moon once was molten; and a giant impact scenario could easily have supplied the energy needed to form such a magma ocean. Several lines of evidence show that if the Moon has an iron-rich core, it must be a small one. In particular, the mean density, moment of inertia, rotational signature, and magnetic induction response of the Moon all suggest that the radius of its core is less than about 25% the radius of the Moon, in contrast to about 50% for most of the other terrestrial bodies. Appropriate impact conditions satisfying the angular momentum constraints of the Earth–Moon system yield a Moon formed mostly from the mantles of Earth and the impactor, while the core of the impactor accretes to Earth. Earth has the highest density of all the planets in the Solar System; the absorption of the core of the impactor body explains this observation, given the proposed properties of the early Earth and Theia.
Comparison of the zinc isotopic composition of lunar samples with that of Earth and Mars rocks provides further evidence for the impact hypothesis. Zinc is strongly fractionated when volatilised in planetary rocks, but not during normal igneous processes, so zinc abundance and isotopic composition can distinguish the two geological processes. Moon rocks contain more heavy isotopes of zinc, and overall less zinc, than corresponding igneous Earth or Mars rocks, which is consistent with zinc being depleted from the Moon through evaporation, as expected for the giant impact origin.
Collisions between ejecta escaping Earth's gravity and asteroids would have left impact heating signatures in stony meteorites; analysis based on assuming the existence of this effect has been used to date the impact event to 4.47 billion years ago, in agreement with the date obtained by other means.
Warm silica-rich dust and abundant SiO gas, products of high-velocity impacts between rocky bodies, have been detected by the Spitzer Space Telescope around the nearby (29 pc distant) young (~12 My old) star HD 172555 in the Beta Pictoris moving group. A belt of warm dust in a zone between 0.25 AU and 2 AU from the young star HD 23514 in the Pleiades cluster appears similar to the predicted results of Theia's collision with the embryonic Earth, and has been interpreted as the result of planet-sized objects colliding with each other. A similar belt of warm dust was detected around the star BD+20°307 (HIP 8920, SAO 75016).
On 1 November 2023, scientists reported that, according to computer simulations, remnants of Theia could be still visible inside the Earth as two giant anomalies of the Earth's mantle.
Difficulties
This lunar origin hypothesis has some difficulties that have yet to be resolved. For example, the giant-impact hypothesis implies that a surface magma ocean would have formed following the impact. Yet there is no evidence that Earth ever had such a magma ocean and it is likely there exists material that has never been processed in a magma ocean.
Composition
A number of compositional inconsistencies need to be addressed.
The ratios of the Moon's volatile elements are not explained by the giant-impact hypothesis. If the giant-impact hypothesis is correct, these ratios must be due to some other cause.
The presence of volatiles such as water trapped in lunar basalts and carbon emissions from the lunar surface is more difficult to explain if the Moon was caused by a high-temperature impact.
The iron oxide (FeO) content of the Moon (13%), intermediate between that of Mars (18%) and the terrestrial mantle (8%), rules out the derivation of most of the proto-lunar material from Earth's mantle.
If the bulk of the proto-lunar material had come from an impactor, the Moon should be enriched in siderophile elements, when, in fact, it is deficient in them.
The Moon's oxygen isotopic ratios are essentially identical to those of Earth. Oxygen isotopic ratios, which may be measured very precisely, yield a unique and distinct signature for each Solar System body. If a separate proto-planet Theia had existed, it probably would have had a different oxygen isotopic signature than Earth, as would the ejected mixed material.
The Moon's titanium isotope ratio (50Ti/47Ti) appears so close to Earth's (within 4 ppm), that little if any of the colliding body's mass could likely have been part of the Moon.
Lack of a Venusian moon
If the Moon was formed by such an impact, it is possible that other inner planets may also have been subjected to comparable impacts. A moon that formed around Venus by this process would have been unlikely to escape. If such a moon-forming event had occurred there, a possible explanation of why the planet does not have such a moon might be that a second collision occurred that countered the angular momentum from the first impact. Another possibility is that the strong tidal forces from the Sun would tend to destabilise the orbits of moons around close-in planets. For this reason, if Venus's slow rotation rate began early in its history, any satellites larger than a few kilometers in diameter would likely have spiraled inwards and collided with Venus.
Simulations of the chaotic period of terrestrial planet formation suggest that impacts like those hypothesised to have formed the Moon were common. For typical terrestrial planets with a mass of 0.5 to 1 Earth masses, such an impact typically results in a single moon containing 4% of the host planet's mass. The inclination of the resulting moon's orbit is random, but this tilt affects the subsequent dynamic evolution of the system. For example, some orbits may cause the moon to spiral back into the planet. Likewise, the proximity of the planet to the star will also affect the orbital evolution. The net effect is that it is more likely for impact-generated moons to survive when they orbit more distant terrestrial planets and are aligned with the planetary orbit.
Possible origin of Theia
In 2004, Princeton University mathematician Edward Belbruno and astrophysicist J. Richard Gott III proposed that Theia coalesced at the L4 or L5 Lagrangian point relative to Earth (in about the same orbit and about 60° ahead or behind), similar to a trojan asteroid. Two-dimensional computer models suggest that the stability of Theia's proposed trojan orbit would have been affected when its growing mass exceeded a threshold of approximately 10% of Earth's mass (about the mass of Mars). In this scenario, gravitational perturbations by planetesimals caused Theia to depart from its stable Lagrangian location, and subsequent interactions with proto-Earth led to a collision between the two bodies.
In 2008, evidence was presented that suggests that the collision might have occurred later than the accepted value of 4.53 Gya, at approximately 4.48 Gya. A 2014 comparison of computer simulations with elemental abundance measurements in Earth's mantle indicated that the collision occurred approximately 95 My after the formation of the Solar System.
It has been suggested that other significant objects might have been created by the impact, which could have remained in orbit between Earth and the Moon, stuck in Lagrangian points. Such objects might have stayed within the Earth–Moon system for as long as 100 million years, until the gravitational tugs of other planets destabilised the system enough to free the objects. A study published in 2011 suggested that a subsequent collision between the Moon and one of these smaller bodies caused the notable differences in physical characteristics between the two hemispheres of the Moon. Simulations support that this collision would have been at a low enough velocity not to form a crater; instead, the material from the smaller body would have spread out across the Moon (in what would become its far side), adding a thick layer of highlands crust. The resulting mass irregularities would subsequently produce a gravity gradient that resulted in tidal locking of the Moon so that today, only the near side remains visible from Earth. However, mapping by the GRAIL mission has ruled out this scenario.
In 2019, a team at the University of Münster reported that the molybdenum isotopic composition in Earth's primitive mantle originates from the outer Solar System, hinting at the source of water on Earth. One possible explanation is that Theia originated in the outer Solar System.
Alternative hypotheses
Other mechanisms that have been suggested at various times for the Moon's origin are that the Moon was spun off from Earth's molten surface by centrifugal force; that it was formed elsewhere and was subsequently captured by Earth's gravitational field; or that Earth and the Moon formed at the same time and place from the same accretion disk. None of these hypotheses can account for the high angular momentum of the Earth–Moon system.
Another hypothesis attributes the formation of the Moon to the impact of a large asteroid with Earth much later than previously thought, creating the satellite primarily from debris from Earth. In this hypothesis, the formation of the Moon occurs 60–140 million years after the formation of the Solar System (as compared to the hypothesized Theia impact at 4.527 ± 0.010 billion years). The asteroid impact in this scenario would have created a magma ocean on Earth and the proto-Moon, with both bodies sharing a common plasma metal vapor atmosphere. The shared metal vapor bridge would have allowed material from Earth and the proto-Moon to exchange and equilibrate into a more common composition.
Yet another hypothesis proposes that the Moon and Earth formed together, not from the collision of once-distant bodies. This model, published in 2012 by Robin M. Canup, suggests that the Moon and Earth formed from a massive collision of two planetary bodies, each larger than Mars, which then re-collided to form what is now called Earth. After the re-collision, Earth was surrounded by a disk of material which accreted to form the Moon.
| Physical sciences | Solar System | Astronomy |
51215 | https://en.wikipedia.org/wiki/Airliner | Airliner | An airliner is a type of airplane for transporting passengers and air cargo. Such aircraft are most often operated by airlines. The modern and most common variant of the airliner is a long, tube-shaped, jet-powered aircraft. The largest of them are wide-body jets, which are also called twin-aisle because they generally have two separate aisles running from the front to the back of the passenger cabin. These are usually used for long-haul flights between airline hubs and major cities. A smaller, more common class of airliners is the narrow-body or single-aisle. These are generally used for short to medium-distance flights with fewer passengers than their wide-body counterparts.
Regional airliners typically seat fewer than 100 passengers and may be powered by turbofans or turboprops. These airliners are the non-mainline counterparts to the larger aircraft operated by the major carriers, legacy carriers, and flag carriers, and are used to feed traffic into the large airline hubs. These regional routes then form the spokes of a hub-and-spoke air transport model.
The lightest short-haul regional feeder airliners, which carry a small number of passengers, are called commuter aircraft, commuterliners, feederliners, and air taxis, depending on their size, engines, how they are marketed, region of the world, and seating configuration. The Beechcraft 1900, for example, has only 19 seats.
History
Emergence
When the Wright brothers made the world's first sustained heavier-than-air flight, they laid the foundation for what would become a major transport industry. Their flight, performed in the Wright Flyer during 1903, was just 11 years before what is often defined as the world's first airliner. By the 1960s, airliners had expanded capabilities, making a significant impact on global society, economics, and politics.
During 1913, Igor Sikorsky developed the first large multi-engine airplane, the Russky Vityaz. This aircraft was subsequently refined into the more practical Ilya Muromets, being furnished with dual controls for a pilot and copilot and a comfortable cabin with a lavatory, cabin heating and lighting.
This large four-engine biplane was further adapted into an early bomber aircraft, preceding subsequent transport and bomber aircraft.
It first flew on 10 December 1913 and took off for its first demonstration flight with 16 passengers aboard on 25 February 1914.
However, it was never used as a commercial airliner due to the onset of the First World War which led to military applications being prioritised.
Interwar period
In 1919, shortly after the end of the First World War, large numbers of ex-military aircraft flooded the market. One such aircraft was the French Farman F.60 Goliath, which had originally been designed as a long-range heavy bomber; a number were converted for commercial use into passenger airliners starting in 1919, able to accommodate a maximum of 14 seated passengers, and around 60 were built. Initially, several publicity flights were made, including one on 8 February 1919, when the Goliath flew 12 passengers from Toussus-le-Noble to RAF Kenley, near Croydon, despite having no permission from the British authorities to land. Dozens of early airlines subsequently procured the type. One high-profile flight, made on 11 August 1919, involved an F.60 flying eight passengers and a ton of supplies from Paris via Casablanca and Mogador to Koufa, north of Saint-Louis, Senegal.
Another important airliner built in 1919 was the Airco DH.16, a redesigned Airco DH.9A with a wider fuselage to accommodate an enclosed cabin seating four passengers, plus a pilot in an open cockpit. In March 1919, the prototype first flew at Hendon Aerodrome. Nine aircraft were built, all but one being delivered to the nascent airline Aircraft Transport and Travel, which used the first aircraft for pleasure flying, and on 25 August 1919 inaugurated the first scheduled international airline service, from London to Paris. One aircraft was sold to the River Plate Aviation Company in Argentina, to operate a cross-river service between Buenos Aires and Montevideo.
Meanwhile, the competing Vickers converted its successful First World War era bomber, the Vickers Vimy, into a civilian version, the Vimy Commercial. It was redesigned with a larger-diameter fuselage (largely of spruce plywood), and first flew from the Joyce Green airfield in Kent on 13 April 1919.
The world's first all-metal transport aircraft was the Junkers F.13, which also made its first flight in 1919. Junkers marketed the aircraft towards business travellers and commercial operators, and European entrepreneurs bought examples for their private use and business trips. Over 300 Junkers F 13s were built between 1919 and 1932.
The Dutch Fokker company produced the Fokker F.II, then the enlarged F.III. These were used by the Dutch airline KLM, including on its Amsterdam-London service in 1921. A relatively reliable aircraft for the era, the Fokkers were flying to destinations across Europe, including Bremen, Brussels, Hamburg, and Paris.
The Handley Page company in Britain produced the Handley Page Type W, its first civil transport aircraft. It housed two crew in an open cockpit and 15 passengers in an enclosed cabin. Powered by two Napier Lion engines, the prototype first flew on 4 December 1919, shortly after it was displayed at the 1919 Paris Air Show at Le Bourget. It was ordered by the Belgian firm Sabena, and a further ten Type Ws were produced under license in Belgium by SABCA. In 1921, the Air Ministry ordered three aircraft, built as the W.8b, for use by Handley Page Transport, and later by Imperial Airways, on services to Paris and Brussels.
In France, the Bleriot-SPAD S.33 was introduced during the early 1920s. It was commercially successful, initially serving the Paris-London route, and later on continental routes. The enclosed cabin could carry four passengers with an extra seat in the cockpit. It was further developed into the Blériot-SPAD S.46. Throughout the 1920s, companies in Britain and France were at the forefront of the civil airliner industry.
By 1921, the capacity of airliners needed to be increased to achieve more favourable economics. The English company de Havilland built the 10-passenger DH.29 monoplane, while starting work on the design of the DH.32, an eight-seat biplane with a more economical but less powerful Rolls-Royce Eagle engine. For more capacity, DH.32 development was dropped in favour of the DH.34 biplane, accommodating 10 passengers. It proved a commercially successful aircraft; Daimler Airway ordered a batch of nine.
The Ford Trimotor, which had two engines mounted on the wings, one in the nose, and a slab-sided body, carried eight passengers and was produced from 1925 to 1933. It was an important early airliner in America. It was used by the predecessor to Trans World Airlines, and by other airlines long after production ceased. The Trimotor helped to popularise numerous aspects of modern aviation infrastructure, including paved runways, passenger terminals, hangars, airmail, and radio navigation. Pan Am opened up transoceanic service in the late 1920s and early 1930s, based on a series of large seaplanes – the Sikorsky S-38 through Sikorsky S-42.
By the 1930s, the airliner industry had matured and large consolidated national airlines were established with regular international services that spanned the globe, including Imperial Airways in Britain, Lufthansa in Germany, KLM in the Netherlands, and United Airlines in America. Multi-engined aircraft were now capable of transporting dozens of passengers in comfort.
During the 1930s, the British de Havilland Dragon emerged as a short-haul, low-capacity airliner. Its relatively simple design could carry six passengers and their luggage on the London-Paris route on a fuel consumption of 13 gal (49 L) per hour. The DH.84 Dragon entered worldwide service. During early August 1934, one performed the first non-stop flight between the Canadian mainland and Britain in 30 hours 55 minutes, although the intended destination had originally been Baghdad in Iraq. British production of the Dragon ended in favour of the de Havilland Dragon Rapide, a faster and more comfortable successor.
By November 1934, series production of the Dragon Rapide had commenced. De Havilland incorporated advanced features, including elongated rear windows, cabin heating, thickened wing tips, and a strengthened airframe for a higher gross weight. Later aircraft were amongst the first airliners to be fitted with flaps for improved landing performance, along with a downwards-facing recognition light and metal propellers, which were often retrofitted to older aircraft. It was also used in military roles; civil Dragon Rapides were impressed into military service during the Second World War.
Metal airliners came into service in the 1930s. In the United States, the Boeing 247 and the 14-passenger Douglas DC-2 flew during the first half of the decade, while the more powerful, faster, 21–32 passenger Douglas DC-3 first appeared in 1935. DC-3s were produced in quantity for the Second World War and were sold as surplus afterward, becoming widespread within the commercial sector. It was one of the first airliners to be profitable without the support of postal or government subsidies.
Long-haul flights were expanded during the 1930s as Pan American Airways and Imperial Airways competed on transatlantic travel using fleets of flying boats, such as the British Short Empire and the American Boeing 314. Imperial Airways' order for 28 Empire flying boats was viewed by some as a bold gamble. At the time, flying boats were the only practical means of building aircraft of such size and weight as land-based aircraft would have unfeasibly poor field performance. One Boeing 314, registration NC18602, became the first commercial plane to circumnavigate the globe during December 1941 and January 1942.
The postwar era
United Kingdom
In the United Kingdom, the Brabazon Committee was formed in 1942 under John Moore-Brabazon, 1st Baron Brabazon of Tara, to forecast advances in aviation technology and the air transport needs of the postwar British Empire (in South Asia, Africa, and the Near and Far East) and Commonwealth (Australia, Canada, New Zealand). Multi-engine aircraft development was allegedly split between the US, which took military transport aircraft, and the UK, which took heavy bombers. That such a policy was suggested or implemented has been disputed, at least by Sir Peter Masefield. British aircraft manufacturers were tied up fulfilling military requirements and had no free capacity to address other matters through the war.
The committee's final report pushed four designs for the state-owned airlines British Overseas Airways Corporation (BOAC) and later British European Airways (BEA): three piston-powered aircraft of varying sizes, and a jet-powered 100-seat design requested by Geoffrey de Havilland, who was involved in the development of the first jet fighters.
After a brief contest, the Type I design was given to the Bristol Aeroplane Company, building on its "100 ton bomber" submission. This evolved into the Bristol Brabazon, but the project folded in 1951 as BOAC lost interest and the first aircraft needed a costly wing redesign to accommodate the Bristol Proteus engine.
The Type II was split between the de Havilland Dove and Airspeed Ambassador conventional piston designs, and the Vickers model powered by newly developed turboprops: first flown in 1948, the VC.2 Viceroy was the first turboprop design to enter service and a commercial success, with 445 Viscounts built. The Type III requirement led to the conventional Avro Tudor and the more ambitious Bristol Britannia; both aircraft suffered protracted development, with the latter entering service with BOAC in February 1957, over seven years after its order.
The jet-powered Type IV became the de Havilland Comet in 1949. It featured an aerodynamically clean design with four de Havilland Ghost turbojet engines buried in the wings, a pressurised fuselage, and large square windows. On 2 May 1952, the Comet took off on the world's first jetliner flight carrying fare-paying passengers and simultaneously inaugurated scheduled service between London and Johannesburg. However, roughly one year after introduction, three Comets broke up mid-flight due to airframe metal fatigue, not well understood at the time. The Comet was grounded and tested to discover the cause, while rival manufacturers heeded the lessons learned while developing their own aircraft. The improved Comet 2 and the prototype Comet 3 culminated in the redesigned Comet 4 series which debuted in 1958 and had a productive career over 30 years, but sales never fully recovered.
By the 1960s, the UK had lost the airliner market to the US due to the Comet disaster and a smaller domestic market, not regained by later designs like the BAC 1-11, Vickers VC10, and Hawker Siddeley Trident. The STAC committee was formed to consider supersonic designs and worked with Bristol to create the Bristol 223, a 100-passenger transatlantic airliner. The effort was later merged with similar efforts in France to create the Concorde supersonic airliner to share the cost.
United States
The first batch of Douglas DC-4s went to the U.S. Army Air Forces as the C-54 Skymaster. Some ex-military C-54s were later converted into airliners, with both passenger and cargo versions flooding the market shortly after the war's end. Douglas also developed a pressurized version of the DC-4, which it designated the Douglas DC-6. Rival company Lockheed produced the Constellation, a triple-tailed aircraft with a wider fuselage than the DC-4.
The Boeing 377 Stratocruiser was based on the C-97 Stratofreighter military transport; it had a double deck and a pressurized fuselage.
Convair produced the Convair 240, a 40-person pressurized airplane; 566 examples flew. Convair later developed the Convair 340, which was slightly larger and could accommodate between 44 and 52 passengers, of which 311 were produced. The firm also commenced work on the Convair 37, a relatively large double-deck airliner that would have served transcontinental routes; however, the project was abandoned due to a lack of customer demand and its high development costs.
Rival designs included the Martin 2-0-2 and Martin 4-0-4, but the 2-0-2 had safety concerns and was unpressurized, while the 4-0-4 sold only around 100 units.
During the postwar years, engines became much larger and more powerful, and safety features such as deicing, navigation, and weather information systems were added to the planes. American planes were reportedly more comfortable and had superior flight decks compared with those produced in Europe.
France
In 1936, the French Air Ministry requested transatlantic flying boats that could hold at least 40 passengers, leading to three Latécoère 631s introduced by Air France in July 1947. However, two crashed and the third was removed from service over safety concerns. The SNCASE Languedoc was the first French post-war airliner. Accommodating up to 44 seats, 40 aircraft were completed for Air France between October 1945 and April 1948. Air France withdrew the last Languedoc from its domestic routes in 1954, being replaced by later designs. First flying in February 1949, the four-engined Breguet Deux-Ponts was a double-decker transport for passengers and cargo. Air France used it on its busiest routes, including from Paris to the Mediterranean area and to London.
The Sud-Aviation Caravelle, developed during the late 1950s, was the first short-range jet airliner. The nose and cockpit layout were licensed from the de Havilland Comet, along with some fuselage elements. Entering service in mid-1959, 172 Caravelles had been sold within four years, and six versions were in production by 1963. Sud Aviation then focused its design team on a Caravelle successor.
The Super-Caravelle was a supersonic transport project of similar size and range to the Caravelle. It was merged with the similar Bristol Aeroplane Company project into the Anglo-French Concorde. The Concorde entered service in January 1976 as the second and last commercial supersonic transport, after large overruns and delays, costing £1.3 billion. All subsequent French airliner efforts were part of the Airbus pan-European initiative.
USSR
Soon after the war, most of the Soviet fleet of airliners consisted of DC-3s or Lisunov Li-2s. These planes were in desperate need of replacement, and in 1946 the Ilyushin Il-12 made its first flight. The Il-12 was very similar in design to the American Convair 240, except that it was unpressurized. In 1953, the Ilyushin Il-14 made its first flight, equipped with much more powerful engines. The main Soviet contribution to airliners was the Antonov An-2. Unlike most other airliners, this plane is a biplane, and it has sold more units than any other transport plane.
Types
Narrow-body airliners
The most common airliners are the narrow-body aircraft, or single-aisles.
The earliest jet airliners were narrowbodies: the initial de Havilland Comet, the Boeing 707 and its competitor the Douglas DC-8.
They were followed by smaller models: the Douglas DC-9 and its MD-80/MD-90/Boeing 717 derivatives; the Boeing 727, 737 and 757 using the 707 cabin cross-section; or the Tupolev Tu-154, Ilyushin Il-18, and the Ilyushin Il-62.
Currently produced narrow-body airliners include the Airbus A220, A320 family, Boeing 737, Embraer E-Jet family and Comac C919, generally used for medium-haul flights with 100 to 240 passengers.
They could be joined by the in-development Irkut MC-21.
Wide-body airliners
The larger wide-body aircraft, or twin-aisle as they have two separate aisles in the cabin, are used for long-haul flights.
The first was the Boeing 747 quadjet, followed by the trijets: the Lockheed L-1011 and the Douglas DC-10, then its MD-11 stretch.
Then other quadjets were introduced: the Ilyushin Il-86 and Il-96, the Airbus A340 and the double-deck A380.
Twinjets were also put into service: the Airbus A300/A310, A330 and A350; the 767, 777 and 787.
Regional aircraft
Regional airliners seat fewer than 100 passengers.
These smaller aircraft are often used to feed traffic at large airline hubs to larger aircraft operated by the major mainline carriers, legacy carriers, or flag carriers; often sharing the same livery.
Regional jets include the Bombardier CRJ100/200 and Bombardier CRJ700 series, or the Embraer ERJ family.
Currently produced turboprop regional airliners include the Dash-8 series, and the ATR 42/72.
Commuter aircraft
Light aircraft can be used as small commuter airliners, or as air taxis.
Twin turboprops carrying up to 19 passengers include the Beechcraft 1900, Fairchild Metro, Jetstream 31, DHC-6 Twin Otter and Embraer EMB 110 Bandeirante.
Smaller airliners include the single-engined turboprops like the Cessna Caravan and Pilatus PC-12; or twin piston-powered aircraft made by Cessna, Piper, Britten-Norman, and Beechcraft.
They often lack lavatories, stand-up cabins, pressurization, galleys, overhead storage bins, reclining seats, or a flight attendant.
Engines
Until the beginning of the Jet Age, piston engines were common on propliners such as the Douglas DC-3. Nearly all modern airliners are now powered by turbine engines, either turbofans or turboprops. Gas turbine engines operate efficiently at much higher altitudes, are more reliable than piston engines, and produce less vibration and noise. The use of a common fuel type – kerosene-based jet fuel – is another advantage.
Airliner variants
Some variants of airliners have been developed for carrying freight or for luxury corporate use. Many airliners have also been modified for government use as VIP transports and for military functions such as airborne tankers (for example, the Vickers VC10, Lockheed L-1011, Boeing 707), air ambulance (USAF/USN McDonnell Douglas DC-9), reconnaissance (Embraer ERJ 145, Saab 340, and Boeing 737), as well as for troop-carrying roles.
Configuration
Modern jetliners are usually low-wing designs with two engines mounted underneath the swept wings, while turboprop aircraft are slow enough to use straight wings. Smaller airliners sometimes have their engines mounted on either side of the rear fuselage. Numerous advantages and disadvantages exist due to this arrangement. Perhaps the most important advantage to mounting the engines under the wings is that the total aircraft weight is more evenly distributed across the wingspan, which imposes less bending moment on the wings and allows for a lighter wing structure. This factor becomes more important as aircraft weight increases, and no in-production airliners have both a maximum takeoff weight more than 50 tons and engines mounted on the fuselage. The Antonov An-148 is the only in-production jetliner with high-mounted wings (usually seen in military transport aircraft), which reduces the risk of damage from unpaved runways.
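To make the bending-relief argument concrete, the following Python sketch compares the wing-root bending moment with and without an engine hung under the wing. Every number here is an assumed, illustrative value for a generic narrow-body, not data for any real aircraft.

    # Simplified wing-root bending comparison (all values assumed).
    g = 9.81                # m/s^2
    mass = 79_000           # aircraft mass, kg (generic narrow-body)
    semi_span = 17.0        # m
    engine_mass = 2_800     # kg per engine
    engine_pos = 0.35       # engine station as a fraction of semi-span

    # Each semi-wing carries half the lift; place its resultant at a
    # representative centroid of ~45% semi-span (elliptic-like loading).
    m_lift = 0.5 * mass * g * 0.45 * semi_span      # upward bending at root

    # An engine on the wing pulls down, relieving root bending; a
    # fuselage-mounted engine provides no such relief.
    m_relief = engine_mass * g * engine_pos * semi_span

    print(f"Root bending from lift : {m_lift / 1e6:.2f} MN*m")
    print(f"Relief from wing engine: {m_relief / 1e6:.2f} MN*m")
    print(f"Net, wing-mounted      : {(m_lift - m_relief) / 1e6:.2f} MN*m")

Even in this crude model, hanging the engine on the wing offsets a noticeable fraction of the lift-induced root bending, and the relief scales with engine weight, which is why the arrangement matters more as aircraft grow heavier.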
Except for a few experimental or military designs, all aircraft built to date have had all of their weight lifted off the ground by airflow across the wings. In terms of aerodynamics, the fuselage has been a mere burden. NASA and Boeing are currently developing a blended wing body design in which the entire airframe, from wingtip to wingtip, contributes lift. This promises a significant gain in fuel efficiency.
Current manufacturers
The major manufacturers with large aircraft airliners currently in production include:
Airbus (France/Germany/Spain/United Kingdom/Canada)
Antonov (Ukraine)
ATR Aircraft (France/Italy)
Boeing (United States)
Britten-Norman (United Kingdom)
Comac (China)
De Havilland Canada (Canada)
Embraer (Brazil)
Irkut Corporation (UAC, Russia, includes Sukhoi)
Let Kunovice (Czech Republic)
Xi'an Aircraft Industrial Corporation (China)
The narrow-body and wide-body airliner market is dominated by Airbus and Boeing, and the regional airliner market is shared between ATR Aircraft, De Havilland Canada, and Embraer.
Setting up a reliable customer support network that ensures uptime, availability, and support around the clock and anywhere in the world is critical for the success of airliner manufacturers.
Boeing and Airbus are ranked 1 and 2 in customer satisfaction for aftermarket support by a survey by Inside MRO and Air Transport World, and this is a reason why Mitsubishi Aircraft Corporation purchased the Bombardier CRJ program.
It is an entry barrier for new entrants such as the Xian MA700 and Comac C919, whose manufacturers have no credible previous support experience beyond the MA60, or the Irkut MC-21 following the Sukhoi Superjet 100.
Notable airliners
Boeing 247 – the first modern airliner, with all-metal construction and retractable landing gear
Douglas DC-3 – very widespread, still serving
Boeing 307 Stratoliner – the first with a pressurized cabin
Boeing 377 Stratocruiser - popularized multiple passenger decks
Vickers Viscount – the first turboprop airliner
Lockheed Constellation – popularized the pressurized cabin
Antonov An-2 – a single engine biplane, a widespread large utility aircraft
De Havilland Comet – the first operational jetliner, grounded by early crashes
Tupolev Tu-104 – the first twinjet, developed into the first turbofan-powered airliner, the Tupolev Tu-124
Boeing 707 – the most successful early jetliner, alongside the less widespread Douglas DC-8
Sud Aviation Caravelle – the first jetliner with rear podded engines, the configuration of the more widespread Douglas DC-9
Boeing 737 – the most successful jet airliner by deliveries as of 2022
Tupolev Tu-144 – the first operational supersonic transport in 1975, with passenger service 1977-78
Concorde – the first supersonic airliner in passenger service, operating from 1976 to 2003; the first airliner with fly-by-wire flight controls
Boeing 747 – the first wide-body aircraft and first high-bypass turbofan-powered airliner, the largest passenger airliner until the A380
McDonnell Douglas DC-10 – the first trijet wide-body, alongside the later Lockheed L-1011 TriStar
Airbus A300 – the first twinjet wide-body, followed by the Boeing 767
Airbus A320 – the first airliner with digital fly-by-wire flight controls, the most ordered jet airliner
Boeing 777 – the largest twinjet
Airbus A380 – full double-deck aircraft, the largest passenger airliner
Boeing 787 – the first airliner mostly constructed with composite materials
In production aircraft
Fleet
The airliner fleet went from 13,500 in 2000 to 25,700 in 2017: 16% to 30.7% in Asia/Pacific (2,158 to 7,915), 34.7% to 23.6% in USA (4,686 to 6,069) and 24% to 20.5% in Europe (3,234 to 5,272).
In 2018, there were 29,398 airliners in service: 26,935 passenger transports and 2,463 freighters, while 2,754 others were stored.
The largest fleet was in Asia-Pacific with 8,808 (5% stored), followed by 8,572 in North America (10% stored), 7,254 in Europe (9% stored), 2,027 in Latin America, 1,510 in Middle East and 1,347 in Africa.
Narrowbodies are dominant with 16,235, followed by 5,581 widebodies, 3,743 turboprops, 3,565 regional jets, and 399 others.
By the end of 2018, there were 1,826 parked or in storage jetliners out of 29,824 in service (6.1%): 1,434 narrowbodies and 392 widebodies, down from 9.8% of the fleet at the end of 2012 and 11.3% at the end of 2001.
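The regional shares quoted above follow directly from the fleet counts. This short Python sketch reproduces them from the figures given in this section; small differences from the quoted percentages are rounding.

    # Regional fleet shares computed from the counts quoted above.
    fleet_2000, fleet_2017 = 13_500, 25_700
    regions = {
        "Asia/Pacific": (2_158, 7_915),
        "USA":          (4_686, 6_069),
        "Europe":       (3_234, 5_272),
    }
    for name, (n2000, n2017) in regions.items():
        share_2000 = 100 * n2000 / fleet_2000
        share_2017 = 100 * n2017 / fleet_2017
        print(f"{name:13s} {share_2000:.1f}% -> {share_2017:.1f}%")
    # Asia/Pacific  16.0% -> 30.8%
    # USA           34.7% -> 23.6%
    # Europe        24.0% -> 20.5%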
Market
Since it began, the jet airliner market has shown a recurring pattern of seven years of growth followed by three years of deliveries falling 30–40%, except for steady growth from 2004, driven by the economic rise of China (from 3% of the world market in 2001 to 22% in 2015), expensive jet fuel until 2014 stimulating the replacement of old jets, low interest rates since 2008 enabling that replacement, and strong airline passenger demand since.
In 2004, 718 Airbus and Boeing airliners were delivered, worth $39.3 billion; 1,466 were expected in 2017, worth $104.4 billion. Such growth, by a factor of 3.5 from 2004 to 2020, is unprecedented and highly unusual for any mature market.
In 2016, deliveries went 38% to Asia-Pacific, 25% to Europe, 22% to North America, 7% to the Middle East, 6% to South America, and 2% to Africa. 1,020 narrowbodies were delivered, and their backlog reached 4,991 A320neos and A320ceos, 3,593 737 MAXs, 835 737NGs, 348 CSeries, 305 C919s, and 175 MC-21s; meanwhile, 398 widebodies were delivered: 137 Dreamliners and 99 777s for Boeing (65%), against 63 A330s and 49 A350s for Airbus. More than 2,400 widebodies were in backlog, led by the A350 with 753 (31%), then the Boeing 787 with 694 (28%).
The most important driver of orders is airline profitability, itself driven mainly by world GDP growth but also supply and demand balance and oil prices, while new programmes by Airbus and Boeing help to stimulate aircraft demand.
In 2016, 38% of 25-year-old airliners had been retired, and 50% of 28-year-old ones: 523 aircraft were due to reach 25 years of age in 2017, 1,127 in 2026, and 1,628 in 2041.
Deliveries rose by 80% from 2004 to 2016; they represented 4.9% of the fleet in 2004 and 5.9% in 2016, down from 8% previously.
Oil prices and airshow orders are trending together.
In 2020, deliveries were down by more than 50% compared to 2019 due to the impact of the COVID-19 pandemic on aviation, after 10 years of growth.
Storage, scrapping and recycling
Storage can be an adjustment variable for the airliner fleet: with January–April 2018 revenue passenger kilometres up 7% year-over-year and freight tonne kilometres up 5.1%, the IATA reported that a net 81 aircraft returned from storage (132 recalled and 51 stored) in April.
It is the second month of storage contraction after eight of expansion and the largest in four years, while new aircraft deliveries fell slightly to 448 from 454 due to supply-chain issues and in-service issues grounding others.
Retirements were down by 8% and utilization up by 2%, according to Canaccord Genuity, driving used aircraft and engines values up while MRO shops have unexpected demand for legacy products like the PW4000 and GE CF6.
Cabin configurations and features
An airliner will usually have several classes of seating: first class, business class, and/or economy class (which may be referred to as coach class or tourist class, and sometimes has a separate "premium" economy section with more legroom and amenities). The seats in more expensive classes are wider, more comfortable, and have more amenities such as "lie flat" seats for more comfortable sleeping on long flights. Generally, the more expensive the class, the better the beverage and meal service.
Domestic flights generally have a two-class configuration, usually first or business class and coach class, although many airlines instead offer all-economy seating. International flights generally have either a two-class configuration or a three-class configuration, depending on the airline, route and aircraft type. Many airliners offer movies or audio/video on demand (this is standard in first and business class on many international flights and may be available on economy). Cabins of all classes have lavatory facilities, reading lights, and air vents. Some larger airliners have a rest compartment reserved for crew use during breaks.
Seats
The types of seats that are provided and how much legroom is given to each passenger are decisions made by the individual airlines, not the aircraft manufacturers. Seats are mounted in "tracks" on the floor of the cabin and can be moved back and forth by the maintenance staff or removed altogether. One driver of airline profitability is how many passengers can be seated in economy class cabins, meaning that airline companies have an incentive to place seats close together to fit as many passengers in as possible. In contrast, ‘premium class’ seat configurations provide more space for travelers.
Passengers seated in an exit row (the row of seats adjacent to an emergency exit) usually have substantially more legroom than those seated in the remainder of the cabin, while the seats directly in front of the exit row may have less legroom and may not even recline (for evacuation safety reasons). However, passengers seated in an exit row may be required to assist the cabin crew during an emergency evacuation of the aircraft by opening the emergency exit and helping fellow passengers to the exit. As a precaution, many airlines prohibit young people under the age of 15 from being seated in the exit row.
The seats are designed to withstand strong forces so as not to break or come loose from their floor tracks during turbulence or accidents. The backs of seats are often equipped with a fold-down tray for eating, writing, or as a place to set up a portable computer, or a music or video player. Seats without another row of seats in front of them have a tray that is either folded into the armrest or that clips into brackets on the underside of the armrests. However, seats in premium cabins generally have trays in the armrests or clip-on trays, regardless of whether there is another row of seats in front of them. Seatbacks now often feature small colour LCD screens for videos, television and video games. Controls for this display as well as an outlet to plug in audio headsets are normally found in the armrest of each seat.
Overhead bins
The overhead bins, also known as overhead lockers or pivot bins, are used for stowing carry-on baggage and other items. While the airliner manufacturer will normally specify a standard version of the product to supply, airlines can choose to have bins of differing size, shape, or color installed. Over time, overhead bins evolved out of what were originally overhead shelves that were used for little more than coat and briefcase storage. As concerns about falling debris during turbulence or in accidents increased, enclosed bins became the norm. Bins have increased in size to accommodate the larger carry-on baggage passengers can bring onto the aircraft. Newer bin designs have included a handrail, useful when moving through the cabin.
Passenger service units
Above the passenger seats are Passenger Service Units (PSU). These typically contain reading lights, air vents, and a flight attendant call light. On most narrowbody aircraft (and some Airbus A300s and A310s), the flight attendant call button and the buttons to control the reading lights are located directly on the PSU, while on most widebody aircraft, the flight attendant call button and the reading light control buttons are usually part of the in-flight entertainment system. The units frequently have small "Fasten Seat Belt" and "No Smoking" illuminated signage and may also contain a speaker for the cabin public address system. On some newer aircraft, a "Turn off electronic devices" sign is used instead of the "No Smoking" sign, as smoking is not permitted on board the aircraft in any case.
The PSU will also normally contain the drop-down oxygen masks which are activated if there is a sudden drop in cabin pressure. These are supplied with oxygen by means of a chemical oxygen generator. By using a chemical reaction rather than a connection to an oxygen tank, these devices supply breathing oxygen for long enough for the airliner to descend to thicker, more breathable air. Oxygen generators do generate considerable heat in the process. Because of this, the oxygen generators are thermally shielded and are only allowed in commercial airliners when properly installed – they are not permitted to be loaded as freight on passenger-carrying flights. ValuJet Flight 592 crashed on May 11, 1996, as a result of improperly loaded chemical oxygen generators.
Cabin pressurization
Airliners developed since the 1940s have had pressurized cabins (or, more accurately, pressurized hulls including baggage holds) to enable them to carry passengers safely at high altitudes where low oxygen levels and air pressure would otherwise cause sickness or death. High altitude flight enabled airliners to fly above most weather systems that cause turbulent or dangerous flying conditions, and also to fly faster and further as there is less drag due to the lower air density. Pressurization is applied using compressed air, in most cases bled from the engines, and is managed by an environmental control system which draws in clean air, and vents stale air out through a valve.
Pressurization presents design and construction challenges to maintain the structural integrity and sealing of the cabin and hull and to prevent rapid decompression. Some of the consequences include small round windows, doors that open inwards and are larger than the door hole, and an emergency oxygen system.
To maintain a cabin pressure equivalent to an altitude close to sea level at a typical cruising altitude of around 35,000 ft (about 11,000 m) would create a pressure difference between the inside and outside of the aircraft that would require greater hull strength and weight. Most people do not suffer ill effects up to a cabin altitude of about 8,000 ft (2,400 m), and maintaining cabin pressure at this equivalent altitude significantly reduces the pressure difference and therefore the required hull strength and weight. A side effect is that passengers experience some discomfort as the cabin pressure changes during ascent and descent to the majority of airports, which are at low altitudes.
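The trade-off can be quantified with the International Standard Atmosphere barometric formula. The Python sketch below is an illustrative calculation assuming a 35,000 ft cruise and an 8,000 ft cabin altitude, typical figures rather than values for any particular aircraft.

    # ISA troposphere model (valid below ~11 km).
    P0 = 101_325.0   # sea-level pressure, Pa
    T0 = 288.15      # sea-level temperature, K
    LAPSE = 0.0065   # temperature lapse rate, K/m
    G = 9.80665      # gravity, m/s^2
    M = 0.0289644    # molar mass of air, kg/mol
    R = 8.31446      # universal gas constant, J/(mol*K)

    def pressure(h_m: float) -> float:
        """Static pressure (Pa) at geopotential altitude h_m metres."""
        return P0 * (1 - LAPSE * h_m / T0) ** (G * M / (R * LAPSE))

    FT = 0.3048
    cruise = pressure(35_000 * FT)   # ambient at cruise, ~23.8 kPa
    cabin = pressure(8_000 * FT)     # typical cabin altitude, ~75.3 kPa

    print(f"8,000 ft cabin differential : {(cabin - cruise) / 1e3:.1f} kPa")
    print(f"Sea-level cabin differential: {(P0 - cruise) / 1e3:.1f} kPa")

With an 8,000 ft cabin the hull carries a differential of roughly 51 kPa (about 7.5 psi); holding the cabin at sea level would raise that to roughly 78 kPa, around half again as much structural load.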
Cabin climate control
The air bled from the engines is hot and requires cooling by air conditioning units. It is also extremely dry at cruising altitude, which causes sore eyes, dry skin, and dry mucosa on long flights. Although humidification technology could raise the relative humidity to comfortable middle levels, humidity promotes corrosion inside the hull and risks condensation that could short out electrical systems, so for safety reasons it is deliberately kept to a low value, around 10%. Another problem with bleed air, which is drawn from engines whose lubrication system uses synthetic oils, is that fumes from oil components can sometimes enter the cabin air, exposing passengers, pilots, and crew to contamination. The illness attributed to this is called aerotoxic syndrome.
Baggage holds
Airliners must have space on board to store "checked" baggage – that which will not safely fit in the passenger cabin.
Designed to hold baggage as well as freight, these compartments are called "cargo bins", "baggage holds", "luggage holds", or occasionally "pits". Occasionally baggage holds may be referred to as cargo decks on the largest of aircraft. These compartments can be accessed through doors on the outside of the aircraft.
Depending on the aircraft, baggage holds are normally inside the hull and are therefore pressurized just like the passenger cabin although they may not be heated. While lighting is normally installed for use by the loading crew, typically the compartment is unlit when the door is closed.
Baggage holds on modern airliners are equipped with fire detection equipment and larger aircraft have automated or remotely activated fire-fighting devices installed.
Narrow-body airliners
Most "narrow-body" airliners with more than 100 seats have space below the cabin floor, while smaller aircraft often have a special compartment separate from the passenger area but on the same level.
Baggage is normally stacked within the bin by hand, sorted by destination category. Netting that fits across the width of the bin is secured to limit movement of the bags. Airliners often carry items of freight and mail. These may be loaded separately from the baggage or mixed in if they are bound for the same destination. For securing bulky items "hold down" rings are provided to tie items into place.
Wide-body airliners
"Wide-body" airliners frequently have a compartment like the ones described above, typically called a "bulk bin". It is normally used for late arriving luggage or bags which may have been checked at the gate.
However, most baggage and loose freight items are loaded into containers called Unit Load Devices (ULDs), often referred to as "cans". ULDs come in a variety of sizes and shapes, but the most common model is the LD3. This particular container has approximately the same height as the cargo compartment and fits across half of its width.
ULDs are loaded with baggage and are transported to the aircraft on dolly carts and loaded into the baggage hold by a loader designed for the task. By means of belts and rollers an operator can maneuver the ULD from the dolly cart, up to the aircraft baggage hold door, and into the aircraft. Inside the hold, the floor is also equipped with drive wheels and rollers that an operator inside can use to move the ULD properly into place. Locks in the floor are used to hold the ULD in place during flight.
For consolidated freight loads, like a pallet of boxes or an item too oddly shaped to fit into a container, flat metal pallets that resemble large baking sheets that are compatible with the loading equipment are used.
| Technology | Types of aircraft | null |
51257 | https://en.wikipedia.org/wiki/Peach | Peach | The peach (Prunus persica) is a deciduous tree first domesticated and cultivated in China. It bears edible, juicy fruits with various characteristics; most are called peaches, while the glossy-skinned, non-fuzzy varieties are called nectarines. Peaches and nectarines are the same species, though they are regarded commercially as different fruits.
The tree is regarded as handsome and is planted in gardens for its springtime blooms in addition to fruit production. The peach tree is relatively short-lived, usually not exceeding twenty years of age. However, the peach fruit is regarded as a symbol of longevity in several East Asian cultures.
The specific name persica refers to its widespread cultivation in Persia (modern-day Iran), from where it was transplanted to Europe and in the 16th century to the Americas. It belongs to the genus Prunus, which includes the cherry, apricot, almond, and plum, and which is part of the rose family.
The peach is very popular; among temperate fruits, only the apple and pear have higher production. In 2023, China produced 65% of the world total of peaches and nectarines. Other leading countries, such as Spain, Turkey, Italy, the U.S., and Iran, lag far behind China, with none producing more than 5% of the world total.
Description
The peach is a deciduous tree or tree-like shrub. The spread of the crown is similar to the height, typically ranging from 3 to 4 meters, though trees may very rarely grow much taller. They never produce suckers or have thorns. Unlike with apples, the size of peach trees is not generally controlled by dwarfing rootstocks in commercial orchards. A great variety of growth habits have been selected, including columnar, dwarf, spreading, and weeping. In order to have a single trunk, trees must be pruned; likewise, the branches have a tendency to droop over time and must be trained to allow for access under the tree. The bark on the trunk and branches is dark gray with horizontal lenticels. It becomes more scaly and rough as the tree becomes older. The root system is deep, and the roots continue to grow during the winter season.
Twigs on peach trees have a smooth, hairless surface; the bark is usually red but may be green on the sides not exposed to the sun. As they become older, branchlets weather to gray in color. Twigs have true terminal buds at their ends.
Peach leaves are oblong to lanceolate, with sides nearly parallel before tapering at the end and base, or shaped like the head of a spear. The widest portion of the leaf is midway or further towards the leaf tip. Each leaf folds along its central rib and is often also curved. The surface of the leaves is smooth and hairless, but the leaf stem sometimes has glands. The leaf margins are serrated with blunt teeth, each tooth with a reddish-brown gland at the tip. Leaves are attached to the twigs by strong petioles (leaf stems) measuring 1 to 2 cm, which can also bear one or more extrafloral nectaries.
Flowering
Flowers on peach trees are either solitary or in groups of two and usually bloom before the leaves begin to grow. They may range in shades from white to red, but pink or red flowers 2–3.5 cm in width are typical of cultivars selected for their fruit. Trees grown as ornamentals may also have double flowers, semi-double flowers, or bicolored forms. Each flower has four or five petals and is somewhat cup-shaped, with the petals curving to shelter the flower's center. Each flower has 20 to 30 stamens with purple-red anthers at their ends. The single style is nearly as long as the stamens. The flowers are self-fertile and outcross at a rate of about 5%.
The bloom period is in the early spring, often cut short by frosts, falling in February, March, April, or May depending on location; in the Southern Hemisphere, correspondingly, New Zealand trees bloom in August or October.
Fruit
Trees can begin producing fruit two or three years after sprouting. Because of the hardness of the seed casing, peaches are called stone fruits, like the others in the Prunus genus, but are more formally called drupes. Fruits range in color from greenish white to orange yellow, usually with a blush of red on the side most exposed to the sun. Their shape varies widely, from a flattened sphere resembling a doughnut, to egg-shaped, to a slightly compressed sphere, usually with a seam on one side. Fruit size also varies considerably between cultivars.
The flesh of the peach is quite variable in color, from greenish-white to white to yellow to dark red. The texture can also differ: melting, nonmelting, and stony-hard types are all possible.
The growth of the fruit follows a double-sigmoid curve: an initial period of quick development, followed by a resting period of little growth, and then a second period of rapid growth.
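A double-sigmoid curve of this kind can be modelled as the sum of two logistic functions. The Python sketch below is purely illustrative: its parameters are invented to produce the characteristic shape and are not measured peach-growth data.

    import math

    def logistic(t: float, amplitude: float, rate: float, midpoint: float) -> float:
        """One S-shaped (logistic) growth phase."""
        return amplitude / (1 + math.exp(-rate * (t - midpoint)))

    def fruit_size(t_days: float) -> float:
        # First rapid phase, a lag, then the final swell before ripening;
        # all parameters are arbitrary illustrative choices.
        early = logistic(t_days, amplitude=30.0, rate=0.25, midpoint=20.0)
        late = logistic(t_days, amplitude=50.0, rate=0.20, midpoint=80.0)
        return early + late

    for day in range(0, 121, 20):
        print(f"day {day:3d}: size index {fruit_size(day):5.1f}")

Plotted against time, the output rises quickly, flattens through the middle of the season, and rises again, the two "S" shapes that give the curve its name.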
The seed of the peach is much larger and less round than the seeds of its closest living relatives. Unlike the pit of an almond, which is only pitted, the peach pit's stony exterior is both pitted and deeply furrowed.
Taxonomy
The peach tree was given the name Amygdalus persica by Carl Linnaeus in 1753 in his book Species Plantarum. The accepted species name of Prunus persica was published by August Batsch in 1801, though the classification was far from settled until the 20th century, with many different placements of the peach and even divisions of nectarines and flat peaches into different species. The botanist Ulysses Prentiss Hedrick argued persuasively in 1917 that these differences are merely simple mutations and not species or even varieties, beginning the consensus towards the modern classification. This was supported by breeding experiments as early as 1906 showing that the hairlessness of nectarines is a recessive trait. Even so, alternative names have continued to be used into the 21st century, with Amygdalus persica appearing as recently as 2003 in an authoritative scientific publication. More than 200 scientific names have been published that are considered synonyms of Prunus persica by Plants of the World Online (POWO). Though the majority of sources agree on its classification as Prunus persica, there is division on the correct author citation for the name. Most sources, such as POWO, World Flora Online, and the Flora of North America, give August Batsch credit. However, a few sources, such as World Plants maintained by the botanist Michael Hassler, instead credit Jonathan Stokes with priority dated to 1812.
Prunus persica is classified in Prunus with other stone fruits within the rose family, Rosaceae. The further classification into a subgenus or section is disputed. The work of Alfred Rehder, published in 1940, has been widely used to group the species of Prunus. Rehder based his system largely on that of Bernhard Adalbert Emil Koehne, with the peach placed alongside the almond in subgenus Amygdalus because of similarities in their rough and pitted stones. However, since 2000, studies of nuclear and chloroplast DNA have shown that the five subgenera accepted by Rehder do not form groups of species more closely related to each other than to other species in Prunus. In 2013, Shuo Shi and collaborators published research proposing that the peach be placed in subgenus Prunus together with the plums and cherries, in a section named Persicae, now corrected to Persica. However, these groupings are not yet widely accepted.
The greatest genetic diversity in peaches is found in China, where the species is generally agreed to have been domesticated. The species is often thought to be a cultigen, a taxon that has its origins in cultivation rather than as a wild species.
The closest relatives of the peach are the Chinese bush peach (Prunus kansuensis), the Chinese wild peach (Prunus davidiana), and the smooth-stone peach (Prunus mira). Though Charles Darwin speculated that the peach might be a marvelous modification of the almond (Prunus amygdalus), research into the divergence of peach relatives shows this not to be the case. Quite the opposite: the almond, while in the same genus, is confirmed to be a more distant relative.
In April 2010, an international consortium, the International Peach Genome Initiative, which includes researchers from the United States, Italy, Chile, Spain, and France, announced they had sequenced the peach tree genome (doubled haploid Lovell). In 2013 they published the peach genome sequence and related analyses. The sequence is composed of 227 million nucleotides arranged in eight pseudomolecules representing the eight peach chromosomes (2n = 16). In addition, 27,852 protein-coding genes and 28,689 protein-coding transcripts were predicted.
Particular emphasis in this study is reserved for the analysis of the genetic diversity in peach germplasm and how it was shaped by human activities such as domestication and breeding. Major historical bottlenecks were found, one related to the putative original domestication that is supposed to have taken place in China about 4,000–5,000 years ago, the second is related to the western germplasm and is due to the early dissemination of the peach in Europe from China and the more recent breeding activities in the United States and Europe. These bottlenecks highlighted the substantial reduction of genetic diversity associated with domestication and breeding activities.
Though not a separate grouping genetically, nectarines are regarded as different fruits commercially. The difference is the lack of fuzz, the trichomes, on the skin of the fruits. Research into the cause of this trait found the transcription factor gene PpeMYB25 regulates the formation of trichomes on peach fruits. A mutation can cause a loss of function resulting in the changed fruit type.
Fossil record
Fossil endocarps with characteristics indistinguishable from those of modern peaches have been recovered from late Pliocene deposits in Kunming, dating to 2.6 million years ago. In the absence of evidence that the plants were in other ways identical to the modern peach, the name Prunus kunmingensis has been assigned to these fossils. Genetic evidence supports a very early emergence of edibility in the wild ancestors of the peach.
Names
The genus name Prunus is from Latin for plum. The specific name persica was given by Linnaeus because European botanists of the 1700s and 1800s continued to believe the Roman accounts of peaches originating in Persia to be correct.
The modern English word – and its cognates in many European languages such as the German Pfirsich and Finnish persikka – also have Latin origins. In ancient Rome the peach was called persicum malum or simply persicum meaning "Persian apple". This became the Late Latin pessica and in turn the medieval pesca. In Old French it was variously the peche, pesche, or peske. The first usage in England was as the surname Pecche in about 1184–1185. The French word was directly adopted into English to mean the fruit and spelled either pechis or peches around the year 1400. In 1605 the first known instance of the modern spelling of peach was published. Peach trees are also, less frequently, called common peaches.
The various cultivars of peach with smooth-skinned fruits are called nectarines. This word was coined in English, originally as an adjective meaning "nectar-like", formed from "nectar" and the suffix "-ine", with its first use in print in 1611.
Distribution
The exact place of origin of the domestic peach is unknown. Based on archaeological work from the 2010s, East China near the Yangtze Delta has emerged as a likely candidate, contradicting the theory of domestication in northwestern China; many sources since the 1980s have listed North China as the likely place of origin. Peaches are now naturalized in many other parts of Asia. They grow throughout eastern China and into Inner Mongolia. To the east they are found on the Korean Peninsula and in Japan, and to the south in Vietnam and Laos. In the Indian subcontinent they are reported in the eastern Himalayas and nearby Assam (but not Nepal), parts of central India, Pakistan, and the western Himalayas. Westwards they are an introduced species in Afghanistan, Iran, and all the countries of Central Asia. Toward Europe they also grow in the North Caucasus, Transcaucasus, and Turkey.
In Europe, peach trees are partly naturalized. In western Europe they are found in Portugal, Spain, France, Ireland, and the United Kingdom. In central Europe they are reported as escapes from cultivation in Germany, Hungary, and Switzerland, and in the south in Corsica, Sardinia, Italy, Cyprus, and Greece. In the southeast they grow as introduced plants in Slovenia, Croatia, Romania, and Bulgaria. To the east they are found in parts of European Russia, Ukraine, and Crimea.
They have also escaped from cultivation in the African nations of Libya, Ethiopia, Kenya, and South Africa, and in the Cape Verde Islands off the continent's west coast. Specific areas of South Africa include the biogeographic areas of the Northern Provinces, Orange Free State, and KwaZulu-Natal.
In North America, in addition to cultivation, peach saplings are often found growing wherever pits have been discarded. Most of these feral trees are short-lived, but some have established naturalized populations. Such escapes are reported in the Canadian provinces of Ontario and Nova Scotia. Trees outside of cultivation have been found in all of the United States east of the Mississippi except Minnesota, Vermont, and New Hampshire. In the northwest they are found in Oregon and Idaho. In the Southwestern United States they are to some extent naturalized from California to Texas, with the exception of Nevada. Similar occurrences are also found in northwestern Mexico and in El Salvador in Central America.
In South America, escapes are reported only from Ecuador and northeastern Argentina.
In Australia it is naturalized in the states of New South Wales, Queensland, Victoria, South Australia, and Western Australia. In New Zealand it can be found as an escape from cultivation on both the North Island and the South Island, especially around Auckland, Christchurch, and the Otago region. It is also naturalized on many oceanic islands, including the Mariana Islands, Mauritius, Rodrigues, Réunion, and Saint Helena.
Cultivation
History
Which peaches might be wild types rather than feral escapes from cultivation is still an open scientific question. The authors of the Flora of China wrote in 2003 that completely wild peach trees no longer exist, and this view is widely accepted. Although its botanical name, Prunus persica, refers to Persia, the peach originated in China, where it has been cultivated since the Neolithic period. From the 1980s to the 2010s it was believed that cultivation started around 2000 BCE. New research published in 2014 showed that domestication occurred as early as 6000 BCE in Zhejiang Province on the central east coast of China. The oldest archaeological peach stones are from the Kuahuqiao site near Hangzhou. Archaeologists point to the Yangtze River valley as the place where early selection for favorable peach varieties probably took place.
A domesticated peach appeared very early in Japan, in 4700–4400 BCE, during the Jōmon period. It was already similar to modern cultivated forms, with peach stones significantly larger and more compressed than earlier stones. This domesticated type of peach was brought into Japan from China; nevertheless, in China itself this variety is currently attested only at a later date, around 3300–2300 BCE.
In India, the peach first appeared sometime between 2500 and 1700 BCE, during the Harappan period, in Kashmir.
It was also found elsewhere in West Asia in ancient times. Peach cultivation reached Greece by 300 BC. Alexander the Great is sometimes said to have introduced the fruit into Greece after conquering Persia, but no historical evidence for this claim has been found. Peaches were, however, well known to the Romans in the first century AD; the oldest known artistic representations of the fruit are two fragments of wall paintings from Herculaneum, dated to the first century AD, preserved by the Vesuvius eruption of 79 AD and now held in the National Archaeological Museum in Naples. Archaeological finds show that peaches were cultivated widely in Roman northwestern continental Europe, but production collapsed around the sixth century; some revival followed with the Carolingian Renaissance of the ninth century.
An article on peach tree cultivation in Spain appears in Ibn al-'Awwam's 12th-century agricultural work, the Book on Agriculture. The peach was brought to the Americas by Spanish explorers in the 16th century, and eventually reached England and France in the 17th century, where it was a prized and expensive treat. Although Thomas Jefferson had peach trees at Monticello, American farmers did not begin commercial production until the 19th century, in Maryland, Delaware, Georgia, South Carolina, and finally Virginia.
The Shanghai honey nectar peach was a key component of both the food culture and the agrarian economy of the area where the modern megacity of Shanghai stands, and peaches were the cornerstone of early Shanghai's garden culture. As modernization and westernization swept through the city, the Shanghai honey nectar peach nearly disappeared; much of modern Shanghai is built over these gardens and peach orchards.
The first European botanist to argue that the peach did not originate in Persia was Augustin Pyramus de Candolle in 1855. He argued, on the basis that the fruit is not mentioned by Xenophon in 401 BCE or by other early sources, that it could not have arrived in Persia much before it was imported to Rome in the 100s BCE. An important western botanist to argue for a Chinese origin of the species was Ulysses Prentiss Hedrick in 1917; Chinese literature records the fruit at least 1,000 years before its appearance in Europe.
Peaches in the Americas
Peaches were introduced into the Americas in the 16th century by the Spanish. By 1580, peaches were being grown in Latin America and were cultivated by the remnants of the Inca Empire in Argentina.
In the United States, the peach was soon adopted as a crop by American Indians, and in the eastern U.S. it also became naturalized and abundant as a feral species. Peaches were being grown in Virginia as early as 1629; those grown by Indians in Virginia were said to be "of greater variety and finer sorts" than those of the English colonists. Also in 1629, peaches were listed as a crop in New Mexico. William Penn noted the existence of wild peaches in Pennsylvania in 1683. Peaches may already have spread to the American Southeast by the early to mid-1600s, actively cultivated by indigenous communities such as the Muscogee before permanent Spanish settlement of the region.
Peach plantations became a target of American military campaigns against the Indians. In 1779, the Sullivan Expedition destroyed the livelihood of many of the Iroquois people of New York; among the crops destroyed were plantations of peach trees. In 1864, Kit Carson led a U.S. Army expedition to Canyon de Chelly in Arizona to destroy the livelihood of the Navajo, razing thousands of peach trees. A soldier called them the "best peach trees I have ever seen in the country, every one of them bearing fruit." The Navajo signed a treaty with the US government in 1868 and were able to return to the canyon. They had saved peach pits, and some trees resprouted from stumps, so by the 1870s and 1880s many peach orchards had been restored.
Growing conditions
Peaches are easiest to grow in dry continental or temperate climates; high humidity greatly increases diseases and pests in the subtropics and tropics. In addition, the trees have a chilling requirement: most cultivars require 600 to 1,000 hours of chilling at temperatures between . During the chilling period, key chemical reactions occur, but the plant appears dormant. Temperatures under are ineffective for fulfilling the chilling requirement. Once the chilling period is fulfilled, the plant enters a second type of dormancy, the quiescence period, during which buds break and grow once sufficient warm weather favorable to growth has accumulated. The chilling requirement is not satisfied in tropical or subtropical areas except at high altitudes or with low-chill cultivars, some of which require less than 100 hours of suitable temperatures.
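Chilling-hour accumulation is simple to model from hourly temperature data. The sketch below is a minimal illustration: the 0–7.2 °C effective band is a commonly used chill-hour window assumed here (the article's exact temperature figures were lost in extraction), while the 600-hour requirement comes from the range quoted above.

```python
# Minimal sketch: accumulate winter chill hours from hourly temperature
# readings. The 0-7.2 degC effective band is an assumption (a commonly
# used chill-hour window); the 600-1,000 hour requirement range is
# taken from the text above.

def chill_hours(hourly_temps_c, low=0.0, high=7.2):
    """Count hours whose temperature falls inside the effective band.

    Temperatures at or below `low` are ineffective, as the text notes.
    """
    return sum(1 for t in hourly_temps_c if low < t <= high)

def requirement_met(hourly_temps_c, required_hours=600):
    """True once the cultivar's chilling requirement is satisfied."""
    return chill_hours(hourly_temps_c) >= required_hours

# Example with synthetic data: 700 in-band hours out of 1,200 readings.
temps = [-2.0] * 200 + [5.0] * 700 + [12.0] * 300
print(chill_hours(temps))      # 700
print(requirement_met(temps))  # True
```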
The trees themselves can usually tolerate temperatures to around , although the following season's flower buds are usually killed at these temperatures, preventing a crop that summer. Flower bud death begins to occur between , depending on the cultivar and the timing of the cold, with the buds becoming less cold-tolerant in late winter. Another climate constraint is spring frost: the trees flower fairly early, and the blossom is damaged or killed if temperatures drop below about . If the flowers are not fully open, though, they can tolerate a few degrees colder. The flowers are also vulnerable to daytime temperatures higher than .
Climates with significant winter rainfall at temperatures below are also unsuitable for peach cultivation, as the rain promotes peach leaf curl, which is the most serious fungal disease for peaches. In practice, fungicides are extensively used for peach cultivation in such climates, with more than 1% of European peaches exceeding legal pesticide limits in 2013.
Finally, summer heat is required to mature the crop, with mean temperatures of the hottest month between .
Peach trees are grown in well-draining soils, as they are vulnerable to disease in wet soils. They are most productive in topsoils of approximately with a sandy loam character.
Most peach trees sold by nurseries are cultivars budded or grafted onto a suitable rootstock. Common rootstocks are 'Lovell Peach', 'Nemaguard Peach', Prunus besseyi, and 'Citation'. The rootstock provides hardiness and budding is done to improve predictability of the fruit quality.
Typical peach cultivars begin bearing fruit in their third year. Their lifespan in the U.S. varies by region; the University of California at Davis gives a lifespan of about 15 years while the University of Maine gives a lifespan of 7 years there.
Peach trees need full sun, and a layout that allows good natural air flow to assist the thermal environment for the tree. Peaches are planted in early winter. During the growth season, they need a regular and reliable supply of water, with higher amounts just before harvest.
Peaches need nitrogen-rich fertilizers more than other fruit trees. Without regular fertilizer supply, peach tree leaves start turning yellow or exhibit stunted growth. Blood meal, bone meal, and calcium ammonium nitrate are suitable fertilizers.
The flowers on a peach tree are typically thinned out because if the full number of peaches mature on a branch, they are undersized and lack flavor. Fruits are thinned midway in the season by commercial growers. Fresh peaches are easily bruised, so do not store well. They are most flavorful when they ripen on the tree and are eaten the day of harvest.
The peach tree can be grown in an espalier shape. The Baldassari palmette is a design created around 1950 used primarily for training peaches. In walled gardens constructed from stone or brick, which absorb and retain solar heat and then slowly release it, raising the temperature against the wall, peaches can be grown as espaliers against south-facing walls as far north as southeast Great Britain and southern Ireland.
Storage
Peaches and nectarines are best stored at temperatures of 0 °C (32 °F) and in high humidity. They are highly perishable, so are typically consumed or canned within two weeks of harvest.
Peaches are climacteric fruits and continue to ripen after being picked from the tree. Nutritional quality, however, may not improve after picking; studies show vitamin C content to be higher in peaches ripened on the tree. Both ethylene and the plant hormone auxin are involved in regulating the ripening process. Though the ethylene antagonist 1-methylcyclopropene can be used to delay the ripening of peaches, its use negatively affects the aroma of the fruit.
Insects
The European earwig (Forficula auricularia) can be a minor to significant pest of peach fruits, particularly when they are tightly clustered or have splits in the skin. The earwigs feed on the fruits and soil them with their waste.
The larvae of many moth species are of concern to peach growers. Frequently noted are the peachtree borer (Synanthedon exitiosa), the peach twig borer (Anarsia lineatella), the yellow peach moth (Conogethes punctiferalis), the fruit tree leafroller (Archips argyrospila), the oriental fruit moth (Grapholita molesta), and the lesser peachtree borer (Synanthedon pictipes). Other moths reported to feed on P. persica include the well-marked cutworm (Abagrotis orbis), the climbing cutworm (Abagrotis barnesi), Lyonetia prunifoliella, the grey dagger (Acronicta psi), the ghost moth (Aenetus virescens), the March moth (Alsophila aescularia), Phyllonorycter hostis, the fruit tree borer (Maroga melanostigma), Parornix finitimella, Caloptilia zachrysa, Phyllonorycter crataegella, Trifurcula sinica, Suzuki's promolactis moth (Promalactis suzukiella), the white-spotted tussock moth (Orgyia thyellina), the catapult moth (Serrodes partita), the wood groundling (Parachronistis albiceps), and the omnivorous leafroller (Platynota stultana). The flatid planthopper (Metcalfa pruinosa) also causes damage to fruit trees.
The tree is also a host plant for species such as the Japanese beetle (Popillia japonica), the shothole borer (Scolytus rugulosus), the plum curculio (Conotrachelus nenuphar), the unmonsuzume (Callambulyx tatarinovii), the promethea silkmoth (Callosamia promethea), the orange oakleaf (Kallima inachus), Langia zenzeroides, the speckled emperor (Gynanisa maja), and the brown playboy (Deudorix antalus). The European red mite (Panonychus ulmi) and the yellow mite (Lorryia formosa) are also found on the peach tree.
Green peach aphids (Myzus persicae) can be a significant problem on peach trees. They overwinter as eggs on the trees and feed upon them in the spring before moving to other host species during the summer. Two scale insects can cause serious damage to peach trees, the white peach scale (Pseudaulacaspis pentagona) and the San Jose scale (Comstockaspis perniciosa).
At best, the peach is a poor nectar and pollen source for honey bees, with double-flowering varieties particularly noted for not producing any usable resources for bees. Some fruiting cultivars also produce no pollen, and nectar flow is often reduced by early frosts.
Though not native to North America, peach trees have become a host for caterpillars of the eastern tiger swallowtail butterfly (Papilio glaucus), though these are not a significant pest.
Diseases
Peach trees are prone to a disease called leaf curl, which usually does not directly affect the fruit, but does reduce the crop yield by partially defoliating the tree. Several fungicides can be used to combat the disease, including Bordeaux mixture and other copper-based products (the University of California considers these organic treatments), ziram, chlorothalonil, and dodine. The fruit is susceptible to brown rot or a dark reddish spot.
Cultivars
Hundreds of peach and nectarine cultivars are known. These are classified into two categories—freestones and clingstones. Freestones are those whose flesh separates readily from the pit. Clingstones are those whose flesh clings tightly to the pit. Some cultivars are partially freestone and clingstone, so are called semifree. Freestone types are preferred for eating fresh, while clingstone types are for canning. The fruit flesh may be creamy white to deep yellow, to dark red; the hue and shade of the color depend on the cultivar. The genetic diversity of peach cultivars is highest in China with 495 recognized cultivars.
Peach breeding has favored cultivars with more firmness, more red color, and shorter fuzz on the fruit surface. These characteristics ease shipping and supermarket sales by improving eye appeal. This selection process has not necessarily led to increased flavor, though. Peaches have a short shelf life, so commercial growers typically plant a mix of different cultivars to have fruit to ship all season long.
Nectarines
The cultivars commonly called nectarines have smooth skin. The fruit is on occasion referred to as a "shaved peach" or "fuzzless peach", due to its lack of fuzz or short hairs. Though fuzzy peaches and nectarines are regarded commercially as different fruits, with nectarines often erroneously believed to be a crossbreed between peaches and plums, or a "peach with a plum skin", nectarines belong to the same species as peaches. Several genetic studies have concluded that nectarines are produced by a recessive allele, whereas fuzzy peach skin is dominant.
As with peaches, nectarines can be white or yellow, and clingstone or freestone. On average, nectarines are slightly smaller and sweeter than peaches, but with much overlap. The lack of skin fuzz can make nectarine skins appear more reddish than those of peaches, contributing to the fruit's plum-like appearance. The lack of down on nectarines' skin also means their skin is more easily bruised than peaches.
The history of the nectarine is unclear; the first recorded mention in English is from 1611, but they had probably been grown much earlier within the native range of the peach in central and eastern Asia. A number of colonial-era newspaper articles make reference to nectarines being grown in the United States prior to the Revolutionary War. The 28 March 1768 edition of the New York Gazette (p. 3), for example, mentions a farm in Jamaica, Long Island, New York, where nectarines were grown. Later, cultivars of higher quality with better shipping qualities were introduced to the United States by David Fairchild of the Department of Agriculture in 1906.
Peacherines
Peacherines are claimed to be a cross between a peach and a nectarine, but as the two are the same species there can be no true cross (hybrid); they are sometimes marketed in Australia and New Zealand. The linguist Louise Pound wrote in 1920 that the term peacherine is an example of a language stunt.
Flat peaches
Flat peaches, or pan-tao, have a flattened shape, in contrast to ordinary near-spherical peaches.
Ornamentals
Peach trees are also grown for ornamental value in gardens, but trees specifically selected for this purpose have small, inedible fruits.
Production
In 2023, world production of peaches (combined with nectarines for reporting) was 27.1 million tonnes, led by China with 65% of the total. Spain, the next most productive country, only produced about 5% of the total (table). Peaches rank third in total production of temperate fruits after the apple and pear.
The U.S. state of Georgia is known as the "Peach State" due to its significant production and shipping of peaches in the 1870s and 1880s, with the first export to New York occurring around 1853 and significant amounts being sold there by 1858. In 2014, Georgia was third in US peach production behind California and South Carolina. The largest peach-producing countries in Latin America are Argentina, Brazil, Chile, and Mexico.
Nutrition
Raw peach flesh is 88% water, 10% carbohydrates, 1% protein, and contains negligible fat (table). A medium-sized raw peach, weighing , supplies 46 calories and contains no micronutrients in amounts that are a significant percentage of the Daily Value (DV, table). A raw nectarine has a similarly low content of nutrients. The glycemic load of an average peach (120 grams) is 5, similar to that of other low-sugar fruits.
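The stated glycemic load is consistent with these composition figures. The calculation below assumes a glycemic index of about 42 for raw peach, a typical published value used here only for illustration; a 120 g fruit at about 10% carbohydrate carries roughly 12 g:

\[
\mathrm{GL} \;=\; \frac{\mathrm{GI} \times \text{available carbohydrate (g)}}{100} \;\approx\; \frac{42 \times 12}{100} \;\approx\; 5.
\]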
Phytochemicals
Total polyphenols, in mg per 100 g of fresh weight, were 14–113 in white-flesh nectarines, 17–78 in yellow-flesh nectarines, 20–113 in white-flesh peaches, and 16–93 in yellow-flesh peaches. The major phenolic compounds identified in peach are chlorogenic acid, catechins, and epicatechins, with other compounds identified by HPLC including gallic acid and ellagic acid. Rutin and isoquercetin are the primary flavonols found in clingstone peaches. The levels of flavonols and cyanidins are highest in the skins, and phenol content varies by cultivar and with the growing conditions in a given season. Anthocyanins are concentrated in the skins and in red-fleshed varieties, with malvin glycosides identified in clingstone peaches.
As with many other members of the rose family, peach seeds contain cyanogenic glycosides, primarily amygdalin. Amygdalin decomposes into a sugar molecule, hydrogen cyanide gas, and benzaldehyde. Hydrogen cyanide poisons a critical enzyme in the cellular use of oxygen, resulting in death in severe cases. While peach seeds are not the most toxic within the rose family (see bitter almond), large consumption of these chemicals from any source is potentially hazardous to animal and human health.
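The decomposition just described is a hydrolysis whose overall stoichiometry can be written as a single balanced equation (a simplification: in the seed it proceeds enzymatically via intermediates such as prunasin and mandelonitrile):

\[
\underbrace{\mathrm{C_{20}H_{27}NO_{11}}}_{\text{amygdalin}} + 2\,\mathrm{H_2O} \longrightarrow 2\,\underbrace{\mathrm{C_6H_{12}O_6}}_{\text{glucose}} + \underbrace{\mathrm{C_6H_5CHO}}_{\text{benzaldehyde}} + \mathrm{HCN}.
\]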
Peach allergy or intolerance is a relatively common form of hypersensitivity to proteins contained in peaches and related fruits (such as almonds). Symptoms range from local effects (e.g. oral allergy syndrome, contact urticaria) to more severe systemic reactions, including anaphylaxis (e.g. urticaria, angioedema, gastrointestinal and respiratory symptoms). Adverse reactions are related to the "freshness" of the fruit: peeled or canned fruit may be tolerated.
Due to their close relatedness, the kernel of a peach stone tastes similar to almond, and peach stones are used to make a cheap version of marzipan, known as persipan.
Aroma
The attractive smell of a ripe peach comes from a combination of 110 different volatile molecules, including alcohols, ketones, aldehydes, esters, polyphenols, and terpenoids. The proportions vary significantly between different cultivars of peach.
In culture
Peaches are not only a popular fruit, but also are symbolic in many cultural traditions, such as in art, paintings, and folk tales such as the Peaches of Immortality.
China
Peach blossoms are highly prized in Chinese culture. The ancient Chinese believed the peach to possess more vitality than any other tree, because its blossoms appear before its leaves sprout. When early rulers of China visited their territories, they were preceded by sorcerers armed with peach rods to protect them from spectral evils. On New Year's Eve, local magistrates would cut peach-wood branches and place them over their doors to protect against evil influences. Peach wood was also used for the earliest known door gods during the Han dynasty. Another author writes:
Peachwood seals or figurines guarded gates and doors, and, as one Han account recites, "the buildings in the capital are made tranquil and pure; everywhere a good state of affairs prevails". Writes the author, further:
Similarly, peach trees would often be planted near the front door of a house to bring good fortune.
Peach kernels, tao ren (), are a common ingredient used in traditional Chinese medicine to dispel blood stasis and unblock bowels.
In an orchard of flowering peach trees, Liu Bei, Guan Yu, and Zhang Fei took an oath of brotherhood in the opening chapter of the classic Chinese novel Romance of the Three Kingdoms. Another peach orchard, in "The Peach Blossom Spring" by the poet Tao Yuanming, is the setting of a favourite Chinese fable and a metaphor for utopias. A peach tree growing on a precipice was where the Taoist master Zhang Daoling tested his disciples.
The deity Shòu Xīng (), a god of longevity, is usually depicted with a very large forehead, holding a staff in his left hand and a large peach in his right, due to the fruit's associations with long life. A long-standing traditional birthday food for seniors is the symbolic longevity peach (shòutáo bao, 寿桃包), a type of lotus-seed bun shaped like a peach, common in Taiwanese and Cantonese culture.
The term fēntáo (), variously translated as "half-eaten peach", "divided peach", or "sharing a peach", was first used by Han Fei, a Legalist philosopher, in his work Han Feizi. The book records the incident in which the courtier Mizi Xia bit into an especially delicious peach and gave the remainder to his lover, Duke Ling of Wei, as a gift so that he could taste it as well. From this story the term became a byword for homosexuality.
Korea
In Korea, peaches have been cultivated from ancient times. According to Samguk Sagi, peach trees were planted during the Three Kingdoms of Korea period, and Sallim gyeongje also mentions cultivation skills of peach trees. The peach is seen as the fruit of happiness, riches, honours, and longevity. The rare peach with double seeds is seen as a favorable omen of a mild winter. It is one of the 10 immortal plants and animals, so peaches appear in many minhwa (folk paintings). Peaches and peach trees are believed to chase away spirits, so peaches are not placed on tables for jesa (ancestor veneration), unlike other fruits.
An important piece of Korean art refers to peaches: Dream Journey to the Peach Blossom Land, the only existing signed and dated work by An Kyŏn. It depicts the imagined utopian Peach Blossom Land from a fable by the Chinese poet Tao Yuanming.
Japan
The world's sweetest peaches are claimed to be grown in Fukushima, Japan. The Guinness World Record for the sweetest peach is currently held by a peach grown in Kanechika, Japan, with a sugar content of 22.2%, but a fruit farm in rural Fukushima, Koji, grew a much sweeter peach, with a Brix score of 32°. Degrees Brix measure the sugar content of the fruit, which is usually between 11 and 15 for a typical supermarket peach.
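Degrees Brix correspond to grams of sucrose per 100 g of solution, so the figures above can be read directly as mass fractions:

\[
32\ ^{\circ}\mathrm{Bx} \;\approx\; \frac{32\ \mathrm{g\ sugar}}{100\ \mathrm{g\ juice}},
\]

roughly one-third sugar by weight, against 11–15 g per 100 g for a typical supermarket peach.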
Momotarō, whose name literally means "peach child", is a folktale character named after the giant peach from which he was born.
Two traditional Japanese words for the color pink correspond to blossoming trees: one for peach blossoms (), and one for cherry blossoms ().
Vietnam
A Vietnamese mythic history states that in the spring of 1789, after marching to Ngọc Hồi and then winning a great victory against invaders from the Qing dynasty of China, Emperor Quang Trung ordered a messenger to gallop to Phú Xuân citadel (now Huế) and deliver a flowering peach branch to the Empress Ngọc Hân. This took place on the fifth day of the first lunar month, two days before the predicted end of the battle. The branch of peach flowers that was sent from the north to the centre of Vietnam was not only a message of victory from the Emperor to his consort, but also the start of a new spring of peace and happiness for all the Vietnamese people. In addition, since the land of Nhật Tân had freely given that very branch of peach flowers to the Emperor, it became the loyal garden of his dynasty.
The protagonists of The Tale of Kieu fell in love by a peach tree, and in Vietnam, the blossoming peach flower is the signal of spring. Finally, peach bonsai trees are used as decoration during Vietnamese New Year (Tết) in northern Vietnam.
Europe
Many famous artists have painted works in which peach fruits are placed in prominence. Caravaggio, Vincenzo Campi, Pierre-Auguste Renoir, Claude Monet, Édouard Manet, Henri Fantin-Latour, Severin Roesen, Peter Paul Rubens, and Van Gogh are among the many influential artists who painted peaches and peach trees in various settings. Scholars suggest that many compositions are symbolic, while some are an effort to introduce realism. For example, Tresidder claims that Renaissance artists used the peach symbolically to represent the heart, with a leaf attached to the fruit as the symbol for the tongue, thereby implying speaking truth from one's heart; a ripe peach was also a symbol of a ripe state of good health. Caravaggio's paintings introduce realism by depicting peach leaves that are mottled, discolored, or in some cases wormholed – conditions common in modern peach cultivation.
In literature, Roald Dahl decided on a peach for his children's fantasy novel James and the Giant Peach after considering many other fruits, including an apple, a pear, and a cherry; he thought the flavor and flesh of the peach more exciting.
United States
South Carolina named the peach its official fruit in 1984. Georgia has been nicknamed the "Peach State" since 1935, though the peach did not officially become the state fruit of Georgia until 1995. The state is only the third-largest producer of peaches in the United States, and the peach is not its leading crop; the association stems from a strong campaign for a more positive image. When Georgia reached peak production in the 1920s, elaborate festivals celebrated the fruit. Alabama named the peach its state tree fruit in 2006, in addition to the blackberry, designated the state fruit in 2004. Delaware's state flower has been the peach blossom since 1895, and peach pie became its official dessert in 2009.
| Biology and health sciences | Rosales | null |
51258 | https://en.wikipedia.org/wiki/Onion | Onion | An onion (Allium cepa L., from Latin cepa meaning "onion"), also known as the bulb onion or common onion, is a vegetable that is the most widely cultivated species of the genus Allium. The shallot is a botanical variety of the onion which was classified as a separate species until 2011. The onion's close relatives include garlic, scallion, leek, and chives.
The genus contains several other species variously called onions and cultivated for food, such as the Japanese bunching onion Allium fistulosum, the tree onion Allium × proliferum, and the Canada onion Allium canadense. The name wild onion is applied to a number of Allium species, but A. cepa is exclusively known from cultivation. Its ancestral wild form is not known, although escapes from cultivation have become established in some regions. The onion is most frequently a biennial or a perennial plant, but is usually treated as an annual and harvested in its first growing season.
The onion plant has a fan of hollow, bluish-green leaves, and its bulb at the base of the plant begins to swell when a certain day length is reached. The bulbs are composed of shortened, compressed, underground stems surrounded by fleshy modified scales (leaves) that envelop a central bud at the tip of the stem. In the autumn (or in spring, in the case of overwintering onions), the foliage dies down and the outer layers of the bulb become dry and brittle. The crop is harvested and dried, and the onions are ready for use or storage. The crop is prone to attack by a number of pests and diseases, particularly the onion fly, the onion eelworm, and various fungi that can cause rotting. Some varieties of A. cepa, such as shallots and potato onions, produce multiple bulbs.
Onions are cultivated and used around the world. As a food item, they are often served raw as a vegetable or part of a prepared savoury dish, but can be eaten cooked or used to make pickles or chutneys. They are pungent when chopped and contain certain chemical substances which may irritate the eyes.
Taxonomy and etymology
The onion plant (Allium cepa), also known as the bulb onion or common onion, is the most widely cultivated species of the genus Allium. It was first officially described by Carl Linnaeus in his 1753 work Species Plantarum. Synonyms during its taxonomic history are:
Allium cepa var. aggregatum – G. Don
Allium cepa var. bulbiferum – Regel
Allium cepa var. cepa – Linnaeus
Allium cepa var. multiplicans – L.H. Bailey
Allium cepa var. proliferum – (Moench) Regel
Allium cepa var. solaninum – Alef
Allium cepa var. viviparum – (Metz) Mansf.
A. cepa is known exclusively from cultivation, but related wild species occur in Central Asia and Iran. The most closely related include A. vavilovii from Turkmenistan and A. asarense from Iran. The genus Allium contains other species variously called onions and cultivated for food, such as the Japanese bunching onion (A. fistulosum), Egyptian onion (A. × proliferum), and Canada onion (A. canadense). The vast majority of cultivars of A. cepa belong to the common onion group (A. cepa var. cepa) and are usually referred to simply as onions. The Aggregatum Group of cultivars (A. cepa var. aggregatum) includes both shallots, formerly classed as a separate species, and potato onions. Related species include garlic, leek, and chives.
Cepa is commonly accepted as Latin for "onion"; the generic name Allium is the classical Latin name for garlic.
Cepa has an affinity with the Spanish cebolla, Italian cipolla, Polish cebula, and German Zwiebel (this last altered by folk etymology). The English word "chive" is from the Old French chive, in turn from cepa.
Description
The onion is a biennial plant but is usually grown as an annual. Modern varieties typically grow to a height of . The leaves are yellowish- to bluish green and grow alternately in a flattened, fan-shaped swathe. They are fleshy, hollow, and cylindrical, with one flattened side. They are at their broadest about a quarter of the way up, beyond which they taper to blunt tips. The base of each leaf is a flattened, usually white sheath that grows out of the basal plate of a bulb. From the underside of the plate, a bundle of fibrous roots extends for a short way into the soil. As the onion matures, food reserves accumulate in the leaf bases, and the bulb of the onion swells.
In the autumn, the leaves die back, and the outer scales of the bulb become dry and brittle, so the crop is normally harvested. If left in the soil over winter, the growing point in the middle of the bulb begins to develop in the spring. New leaves appear, and a long, stout, hollow stem expands, topped by a bract protecting a developing inflorescence. The inflorescence takes the form of a rounded umbel of white flowers with parts in sixes. The seeds are glossy black and triangular in cross-section. The average pH of an onion is around 5.5.
History
Humans have grown and selectively bred onions in cultivation for at least 7,000 years. The geographic origin of the onion is uncertain; ancient records of onion use span both eastern and western Asia. Domestication likely took place in West or Central Asia. Onions have been variously described as having originated in Iran, western Pakistan and Central Asia. The onion species Allium fistulosum (spring onion, bunching onion) and Allium tuberosum (Chinese leek) were domesticated in China around 6000 BC alongside other vegetables, grains, and fruits.
Recipes using onions and other Allium species were recorded in cuneiform script on clay tablets in ancient Mesopotamia around 2000 BC; the tablets are held in Yale University's Babylonian collection. The Assyriologist and "gourmet cook" Jean Bottéro stated this was "a cuisine of striking richness, refinement, sophistication and artistry".
Ancient Egyptians revered the onion bulb, viewing its spherical shape and concentric rings as symbols of eternal life. Onions were used in Egyptian burials, as evidenced by onion traces found in the eye sockets of Ramesses IV. Pliny the Elder of the first century AD wrote about the use of onions and cabbage in Pompeii. He documented Roman beliefs about the onion's ability to improve ocular ailments, aid in sleep, and heal everything from oral sores and toothaches to dog bites, lumbago, and even dysentery. Archaeologists unearthing Pompeii long after its 79 AD volcanic burial have found gardens resembling those in Pliny's detailed narratives. According to texts collected in the fifth/sixth century AD under the authorial aegis of "Apicius" (said to have been a gourmet), onions were used in many Roman recipes.
In the Age of Discovery, onions were taken to North America by the first European settlers as part of the Columbian exchange. The settlers found close relatives of the plant, such as Allium tricoccum, readily available and widely used in Native American gastronomy. According to diaries kept by some of the first English colonists, the bulb onion was one of the first crops planted in North America by the Pilgrim fathers. Between 1883 and 1939, inventors in the United States patented 97 inventions meant to make onion-growing more efficient through automation.
Uses
Culinary
Three colour varieties of onions offer different possibilities for the cook:
Yellow or brown onions are sweet, with many cultivars bred specifically to accentuate this sweetness, such as Vidalia, Walla Walla, Cévennes, and Bermuda. Yellow onions turn a rich, dark brown when caramelised and are used to add a sweet flavour to various dishes, such as French onion soup.
Red or purple onions, known for their sharp pungent flavour, are commonly cooked in many cuisines, and used raw and in grilling.
White onions are mild in flavour; they have a golden colour when cooked and a particularly sweet flavour when sautéed.
While the large, mature onion bulb is most often eaten, onions can be eaten at immature stages. Young plants may be harvested before bulbing occurs and used whole as spring onions or scallions. When an onion is harvested after bulbing has begun but before maturity, the plants are sometimes referred to as "summer" onions. Onions may be bred and grown to mature at smaller sizes, known as pearl, boiler, or pickler onions; these are not true pearl onions, which are a different species. Pearl and boiler onions may be cooked as a vegetable rather than as an ingredient, while pickler onions are often preserved in vinegar as a long-lasting relish. Onions pickled in vinegar are eaten as a side serving with traditional pub food such as a ploughman's lunch.
Onions are commonly chopped and used as an ingredient in various hearty warm dishes, and may be used as a main ingredient in their own right, for example in French onion soup, creamed onions, and onion chutney. They are versatile and can be baked, boiled, braised, grilled, fried, roasted, sautéed, or eaten raw in salads. Onions are a major ingredient of some curries; the Persian-style dopiaza's name means "double onion", and it is used both in the dish's sour curry sauce and as a garnish. Onion powder is a seasoning made from finely ground, dehydrated onions; it is often included in seasoned salt and spice mixes.
Other uses
Onions have particularly large cells that are easy to observe under low magnification. Forming a single layer of cells, the bulb epidermis is easy to separate for educational, experimental, and breeding purposes. Onions are therefore commonly used in science education to teach the use of a microscope for observing cell structure. Onion skins can be boiled to make an orange-brown dye.
Composition
Nutrients
Most onion cultivars are about 89% water, 9% carbohydrates (including 4% sugar and 2% dietary fibre), 1% protein, and negligible fat (table). Onions contain low amounts of essential nutrients and have an energy value of 166 kJ (40 kilocalories) in a 100 g (3.5 oz) amount. Onions contribute savoury flavour to dishes without contributing significant caloric content.
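The two energy figures quoted are the same value expressed in different units; the standard conversion (1 kcal = 4.184 kJ) checks out with rounding:

\[
40\ \mathrm{kcal} \times 4.184\ \frac{\mathrm{kJ}}{\mathrm{kcal}} \approx 167\ \mathrm{kJ} \approx 166\ \mathrm{kJ}.
\]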
Phytochemicals
Onion varieties vary widely in phytochemical content, particularly for polyphenols, with shallots having the highest level, six times the amount found in Vidalia onions. Yellow onions have the highest total flavonoid content, an amount 11 times higher than in white onions. Red onions have considerable content of anthocyanin pigments, with at least 25 different compounds identified representing 10% of total flavonoid content. Like garlic, onions can show an additional colour – pink-red – after cutting, an effect caused by reactions of amino acids with sulfur compounds. Onion polyphenols are under basic research to determine their possible biological properties in humans.
Adverse effects and toxicity
Some people suffer from allergic reactions after handling onions. Symptoms can include contact dermatitis, intense itching, rhinoconjunctivitis, blurred vision, bronchial asthma, sweating, and anaphylaxis. Allergic reactions may not occur when eating cooked onions, possibly due to the denaturing of the proteins from cooking.
Eye irritation
Freshly cut onions can produce a stinging sensation in the eyes of people nearby and often uncontrollable tears. This is caused by the release of a volatile liquid, syn-propanethial-S-oxide and its aerosol, which stimulates nerves in the eye. This gas is produced by a chain of reactions which serve as a defence mechanism: chopping an onion causes damage to cells which releases enzymes called alliinases. These break down amino acid sulfoxides and generate sulfenic acids. A specific sulfenic acid, 1-propenesulfenic acid, is rapidly acted on by a second enzyme, the lacrimatory factor synthase (LFS), producing the syn-propanethial-S-oxide. This gas diffuses through the air and soon reaches the eyes, where it activates sensory neurons. Lacrimal glands produce tears to dilute and flush out the irritant. Eye irritation can be minimised by cutting onions under running water or submerged in a basin of water. Leaving the root end intact also reduces irritation as the onion base has a higher concentration of sulphur compounds than the rest of the bulb.
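The two-enzyme chain described above can be summarized schematically. The substrate name, isoalliin (the onion's main cysteine sulfoxide), is supplied here for context and is an addition; the text itself names only the enzymes:

\[
\text{isoalliin} \xrightarrow{\ \text{alliinase}\ } \text{1-propenesulfenic acid} \xrightarrow{\ \text{LFS}\ } \textit{syn}\text{-propanethial-}S\text{-oxide}.
\]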
The amount of sulfenic acids and lacrimal factor released, and the resulting irritation, differ among Allium species. In 2008, the New Zealand Institute for Crop and Food Research created "no tears" onions by genetic modification to prevent the synthesis of lachrymatory factor synthase in onions. One study suggests that consumers prefer the flavour of onions with lower LFS content, though, since the modification impedes sulfur uptake by the plant, some find LFS− onions inferior in flavour.
A method for efficiently differentiating LFS− and LFS+ onions has been developed based on mass spectrometry, with potential application in high-volume production; gas chromatography is also used to measure lachrymatory factor in onions. In early 2018, Bayer released the first crop yield of commercially available LFS-silenced onions under the name "Sunions". They were the product of 30 years of cross-breeding; genetic modification was not employed.
Guinea hen weed and honey garlic contain a similar lachrymatory factor. Synthetic onion lachrymatory factor has been used in a study related to tear production, and has been proposed as a nonlethal deterrent against thieves and intruders.
Onions are toxic to animals including dogs, cats, and guinea pigs.
Producing onions
Cultivation
Onions are best cultivated in fertile, well-drained soils. Sandy loams are good as they are low in sulphur, while clayey soils usually have a high sulphur content and produce pungent bulbs. Onions require a high level of nutrients in the soil. Phosphorus is often present in sufficient quantities, but may be applied before planting because of its low level of availability in cold soils. Nitrogen and potash can be applied at regular intervals during the growing season, the last application of nitrogen being at least four weeks before harvesting.
Bulbing onions are day-length sensitive; their bulbs begin growing only after the number of daylight hours has surpassed some minimal quantity. Most traditional European onions are referred to as "long-day" onions, producing bulbs only after 14 hours or more of daylight. Southern European and North African varieties are often known as "intermediate-day" types, requiring only 12–13 hours of daylight to stimulate bulb formation. "Short-day" onions, developed in more recent times, are planted in mild-winter areas in the autumn and form bulbs in the early spring, requiring only 11–12 hours of daylight to stimulate bulb formation. Onions are a cool-weather crop and can be grown in USDA zones 3 to 9. Hot temperatures or other stressful conditions cause them to "bolt", meaning that a flower stem begins to grow.
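The photoperiod thresholds above reduce to a simple lookup. The sketch below is a deliberate simplification: the cutoff hours are the lower bounds of the ranges quoted in the text, and the single ">= threshold" trigger stands in for real photoperiod physiology.

```python
# Minimal sketch of the day-length thresholds described above.
# Threshold values are the lower bounds of the ranges quoted in the
# text; the simple comparison is an illustrative simplification.

BULBING_THRESHOLD_HOURS = {
    "long-day": 14.0,          # traditional European onions
    "intermediate-day": 12.0,  # southern European / North African types
    "short-day": 11.0,         # autumn-planted, mild-winter areas
}

def will_bulb(cultivar_class: str, daylight_hours: float) -> bool:
    """Return True if the day length is long enough to trigger bulbing."""
    return daylight_hours >= BULBING_THRESHOLD_HOURS[cultivar_class]

print(will_bulb("long-day", 13.5))   # False: needs 14 h or more
print(will_bulb("short-day", 11.5))  # True: 11-12 h suffices
```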
Onions are grown from seeds or from partially grown bulbs called "sets" or starter bulbs. Onion seeds are short-lived and fresh seeds germinate more effectively when sown in shallow rows, or "drills," with each drill 12" to 18" apart. In suitable climates, certain cultivars can be sown in late summer and autumn to overwinter in the ground and produce early crops the following year.
Onion sets are produced by sowing seeds densely in early summer, then harvesting in the autumn while the bulbs are still small, followed by drying and storage. These bulbs are planted the following spring and grow into mature bulbs later in the growing season. Certain cultivars grown from sets may not have as good storage characteristics as those grown directly from seed.
Routine care during the growing season involves keeping the rows free of competing weeds, especially when the plants are young. The plants are shallow-rooted and do not need much water when established. Bulbing usually takes place after 12 to 18 weeks. The bulbs can be gathered when needed to eat fresh, but if stored, they are harvested after the leaves have died back naturally. In dry weather, they may be left on the surface of the soil for a few days for drying, then are placed in nets, roped into strings, or laid in layers in shallow boxes to be stored in a cool, well-ventilated place.
Pests and diseases
Onions suffer from several pests and diseases. The most serious for the home gardener are likely to be the onion fly, stem and bulb eelworm, white rot, and neck rot. Diseases affecting the foliage include rust and smut, downy mildew, and white tip disease. The bulbs may be affected by splitting, white rot, and neck rot. Shanking is a condition in which the central leaves turn yellow and the inner part of the bulb collapses into an unpleasant-smelling slime. Most of these disorders are best treated by removing and burning affected plants. The larvae of the onion leaf miner or leek moth (Acrolepiopsis assectella) sometimes attack the foliage and may burrow down into the bulb.
The onion fly (Delia antiqua) lays eggs on the leaves and stems and on the ground close to onion, shallot, leek, and garlic plants. The fly is attracted to the crop by the smell of damaged tissue and is liable to occur after thinning. Plants grown from sets are less prone to attack. The larvae tunnel into the bulbs and the foliage wilts and turns yellow. The bulbs are disfigured and rot, especially in wet weather. Control measures may include crop rotation, the use of seed dressings, early sowing or planting, and the removal of infested plants.
The onion eelworm (Ditylenchus dipsaci), a tiny parasitic soil-living nematode, causes swollen, distorted foliage. Young plants are killed and older ones produce soft bulbs. No cure is known and affected plants should be uprooted and burned. The site should not be used for growing onions again for several years and should also be avoided for growing carrots, parsnips, and beans, which are also susceptible to the eelworm.
White rot of onions, leeks, and garlic is caused by the soil-borne fungus Sclerotium cepivorum. As the roots rot, the foliage turns yellow and wilts. The bases of the bulbs are attacked and become covered by a fluffy white mass of mycelia, which later produces small, globular black structures called sclerotia. These resting structures remain in the soil to reinfect a future crop. No cure for this fungal disease exists, so affected plants should be removed and destroyed and the ground used for unrelated crops in subsequent years.
Neck rot is a fungal disease affecting onions in storage. It is caused by Botrytis allii, which attacks the neck and upper parts of the bulb, causing a grey mould to develop. The symptoms often first occur where the bulb has been damaged and spread down the affected scales. Large quantities of spores are produced and crust-like sclerotia may also develop. In time, a dry rot sets in and the bulb becomes a dry, mummified structure. This disease may be present throughout the growing period, but only manifests itself when the bulb is in storage. Antifungal seed dressings are available and the disease can be minimised by preventing physical damage to the bulbs at harvesting, careful drying and curing of the mature onions, and correct storage in a cool, dry place with plenty of circulating air.
Onion oil is authorised for use in the European Union for use as a pesticide against carrot fly in umbelliferous crops (carrots, parsnips, parsley, celery, celeriac).
Production
Onions are a widely cultivated vegetable crop, produced in the second largest quantity after tomatoes. In 2021, the top global producers of onions were China, India, the United States, and Turkey. In 2022, world production of onions and shallots (as green produce) was 5.0 million tonnes, led by China with 17% of the total, and Mali, Angola, and Japan as secondary producers.
Storage
In the home, cooking onions and sweet onions are best stored at room temperature, optimally in a single layer, in large mesh bags in a dry, cool, dark, well-ventilated location. In this environment, cooking onions have a shelf life of three to four weeks and sweet onions one to two weeks. Cooking onions will absorb odours from apples and pears. Additionally, they draw moisture from vegetables with which they are stored which may cause them to decay.
Sweet onions have a greater water and sugar content than cooking onions. This makes them sweeter and milder tasting, but reduces their shelf life. Sweet onions can be stored refrigerated; they have a shelf life of around one month. Irrespective of type, any cut pieces of onion are best tightly wrapped, stored away from other produce, and used within two to three days.
Varieties
Common onion group (var. cepa)
Most of the diversity within A. cepa occurs within this group, the most economically important Allium crop. Plants within this group form large single bulbs, and are grown from seed or seed-grown sets. The majority of cultivated varieties grown for dry bulbs, salad onions, and pickling onions belong to this group. The range of diversity found among these cultivars includes variation in photoperiod (length of day that triggers bulbing), storage life, flavour, and skin colour.
Aggregatum group (var. aggregatum)
This group contains shallots and potato onions, also referred to as multiplier onions. The bulbs are smaller than those of common onions, and a single plant forms an aggregate cluster of several bulbs from a master. They are propagated almost exclusively from daughter bulbs, although reproduction from seed is possible. Shallots are the most important subgroup within this group and comprise the only cultivars cultivated commercially. They form aggregate clusters of small, narrowly ovoid to pear-shaped bulbs. Potato onions differ from shallots in forming larger bulbs with fewer bulbs per cluster, and having a flattened (onion-like) shape. Intermediate forms exist.
I'itoi onion is a prolific multiplier onion cultivated in the Baboquivari Peak Wilderness, Arizona area. This small-bulb type has a shallot-like flavour and is easy to grow and ideal for hot, dry climates. Bulbs are separated, and planted in the fall below the surface and apart. Bulbs will multiply into clumps and can be harvested throughout the cooler months. Tops die back in the heat of summer and may return with heavy rains; bulbs can remain in the ground or be harvested and stored in a cool dry place for planting in the fall. The plants rarely flower; propagation is by division.
Hybrids with A. cepa parentage
Some hybrids are cultivated that have A. cepa parentage, such as the diploid tree onion or Egyptian onion (A. ×proliferum), and the triploid onion (A. ×cornutum).
The tree onion or Egyptian onion produces bulblets in the umbel instead of flowers, and is now known to be a hybrid of A. cepa and A. fistulosum. It has previously been treated as a variety of A. cepa, for example A. cepa var. proliferum, A. cepa var. bulbiferum, and A. cepa var. viviparum. It has been grown for centuries in Japan and China for use as a salad onion.
The triploid onion is a hybrid species with three sets of chromosomes, two sets from A. cepa and the third set from an unknown parent. Various clones of the triploid onion are grown locally in different regions, such as 'Ljutika' in Croatia, and 'Pran', 'Poonch', and 'Srinagar' in the India-Kashmir region. 'Pran' is grown extensively in the northern Indian provinces of Jammu and Kashmir. There are very small genetic differences between 'Pran' and the Croatian clone 'Ljutika', implying a monophyletic origin for this species.
Spring onions or salad onions may be grown from the Welsh onion (A. fistulosum), as well as from A. cepa. Young plants of A. fistulosum and A. cepa look very similar, but may be distinguished by their leaves, which are circular in cross-section in A. fistulosum rather than flattened on one side.
In popular culture
The name 'the Big Onion' was formerly used for New York City; after New York became 'the Big Apple', Chicago became 'the Big Onion'.
The 10th century Exeter Book, written in Old English, contains a riddle which seems to be about an onion, with sexual overtones. The "wondrous creature, a joy to women" stands "in a bed"; "My column is erect and tall"; a woman "rubs me to redness" but at once "she feels my meeting"; the riddle ends "Wet will be that eye."
In the Odyssey, Homer included the lines "I saw the shining tunic about his skin, like the skin of a dried onion, so soft was it, and it shone in the sun". R. Drew Griffith comments that the double comparison of the tunic that Penelope gave to the disguised Odysseus to onion and sun "risks being funny", and notes that Theopompus indeed found it "ridiculous". Griffith suggests that Homer included the onion because of its capacity to produce tears, hinting at Penelope's sorrow at Odysseus's long absence.
Onion Johnnies were Breton farmers and agricultural labourers who travelled from Roscoff in Brittany, originally on foot and later on bicycles, selling strings of distinctive pink onions door to door in Britain.
In India, when the price of onions became very high in 2015, the Hindustan Times recorded that people shared many onion jokes, such as the punning (, "take love, give me onions").
| Biology and health sciences | Monocots | null |
51260 | https://en.wikipedia.org/wiki/Bulb | Bulb | In botany, a bulb is a short underground stem with fleshy leaves or leaf bases that function as food storage organs during dormancy. In gardening, plants with other kinds of storage organ are also called ornamental bulbous plants or just bulbs.
Description
The bulb's leaf bases, also known as scales, generally do not support leaves, but contain food reserves to enable the plant to survive adverse conditions. At the center of the bulb is a vegetative growing point or an unexpanded flowering shoot. The base is formed by a reduced stem, and plant growth occurs from this basal plate. Roots emerge from the underside of the base, and new stems and leaves from the upper side. Tunicate bulbs have dry, membranous outer scales that protect the continuous lamina of fleshy scales. Species in the genera Allium, Hippeastrum, Narcissus, and Tulipa all have tunicate bulbs. Non-tunicate bulbs, such as Lilium and Fritillaria species, lack the protective tunic and have looser scales.
Bulbous plant species cycle through vegetative and reproductive growth stages; the bulb grows to flowering size during the vegetative stage and the plant flowers during the reproductive stage. Certain environmental conditions are needed to trigger the transition from one stage to the next, such as the shift from a cold winter to spring. Once the flowering period is over, the plant enters a foliage period of about six weeks during which time the plant absorbs nutrients from the soil and energy from the sun for setting flowers for the next year. Bulbs dug up before the foliage period is completed will not bloom the following year but then should flower normally in subsequent years.
Plants that form bulbs
Plants that form underground storage organs, including bulbs as well as tubers and corms, are called geophytes. Some epiphytic orchids (family Orchidaceae) form above-ground storage organs called pseudobulbs, which superficially resemble bulbs.
Nearly all plants that form true bulbs are monocotyledons, and include:
Amaryllis, Crinum, Hippeastrum, Narcissus, and several other members of the amaryllis family Amaryllidaceae. This includes onion, garlic, and other alliums, members of the Amaryllid subfamily Allioideae.
Lily, tulip, and many other members of the lily family Liliaceae.
Two groups of Iris species, family Iridaceae: subgenus Xiphium (the "Dutch" irises) and subgenus Hermodactyloides (the miniature "rock garden" irises).
The only eudicots that produce true bulbs are a few species in the genus Oxalis, such as Oxalis latifolia.
Bulbil
A bulbil is a small bulb, and may also be called a bulblet, bulbet, or bulbel.
Small bulbs can develop from, and serve to propagate, a larger bulb. If one or several moderate-sized bulbs form to replace the original bulb, they are called renewal bulbs. Increase bulbs are small bulbs that develop either on each of the leaves inside a bulb, or else on the end of small underground stems connected to the original bulb.
Some lilies, such as the tiger lily Lilium lancifolium, form small bulbs, called bulbils, in their leaf axils. Several members of the onion family, Alliaceae, including Allium sativum (garlic), form bulbils in their flower heads, sometimes as the flowers fade, or even instead of the flowers (which is a form of apomixis). The so-called tree onion (Allium × proliferum) forms small onions which are large enough for pickling.
Some ferns, such as the hen-and-chicken fern, produce new plants at the tips of the fronds' pinnae that are sometimes referred to as bulbils.
| Biology and health sciences | Plant anatomy and morphology: General | Biology |
51303 | https://en.wikipedia.org/wiki/Cowpox | Cowpox | Cowpox is an infectious disease caused by the cowpox virus (CPXV). It presents with large blisters in the skin, a fever, and swollen glands, historically following contact with an infected cow, though in the last several decades more often (though overall rarely) from infected cats. The hands and face are most frequently affected and the spots are generally very painful.
The virus, part of the genus Orthopoxvirus, is closely related to the vaccinia virus. The virus is zoonotic, meaning that it is transferable between species, such as from cat to human. The transferral of the disease was first observed in dairy workers who touched the udders of infected cows and consequently developed the signature pustules on their hands. Cowpox is more commonly found in animals other than bovines, such as rodents. Cowpox is similar to, but much milder than, the highly contagious and often deadly smallpox disease. Its close resemblance to the mild form of smallpox and the observation that dairy farmers were immune to smallpox inspired the modern smallpox vaccine, created and administered by English physician Edward Jenner.
The first description of cowpox was given by Jenner in 1798. "Vaccination" is derived from the Latin adjective vaccinus, meaning "of or from the cow". Once vaccinated, a patient develops antibodies that make them immune to cowpox, but they also develop immunity to the smallpox virus, or Variola virus. The cowpox vaccinations and later incarnations proved so successful that in 1980, the World Health Organization announced that smallpox was the first disease to be eradicated by vaccination efforts worldwide. Other orthopox viruses remain prevalent in certain communities and continue to infect humans, such as the cowpox virus in Europe and monkeypox virus in Central and West Africa.
Medical use
Naturally occurring cases of cowpox were not common, but it was discovered that the vaccine could be "carried" in humans and reproduced and disseminated human-to-human. Jenner's original vaccination used lymph from the cowpox pustule on a milkmaid, and subsequent "arm-to-arm" vaccinations applied the same principle. As this transfer of human fluids came with its own set of complications, a safer manner of producing the vaccine was first introduced in Italy. The new method used cows to manufacture the vaccine using a process called "retrovaccination", in which a heifer was inoculated with humanized cowpox virus, and it was passed from calf to calf to produce massive quantities efficiently and safely. This then led to the next incarnation, "true animal vaccine", which used the same process but began with naturally-occurring cowpox virus, and not the humanized form.
This method of production proved to be lucrative and was taken advantage of by many entrepreneurs needing only calves and seed lymph from an infected cow to manufacture crude versions of the vaccine. W. F. Elgin of the National Vaccine Establishment presented his slightly refined technique to the Conference of State and Provincial Boards of Health of North America. A tuberculosis-free calf, stomach shaved, would be bound to an operating table, where incisions would be made on its lower body. Glycerinated lymph from a previously inoculated calf was spread along the cuts. After a few days, the cuts would have scabbed or crusted over. The crust was softened with sterilized water and mixed with glycerin, which disinfected it, then stored hermetically sealed in capillary tubes for later use.
At some point, the virus in use was no longer cowpox, but vaccinia. Scientists have not determined exactly when the change or mutation occurred, but the effects of vaccinia and cowpox virus as vaccine are nearly the same.
Origin
Discovery
In the years from 1770 to 1790, at least six people who had contact with cows independently tested the possibility of using cowpox as an immunization against smallpox in humans. Among them were the English farmer Benjamin Jesty, in Dorset in 1774, and the German teacher Peter Plett in 1791. Jesty inoculated his wife and two young sons with cowpox, in a successful effort to immunize them against smallpox, an epidemic of which had arisen in their town. People who had contracted and recovered from the similar but milder cowpox, mainly milkmaids, seemed to be immune not only to further cases of cowpox, but also to smallpox. By scratching fluid from cowpox lesions into the skin of healthy individuals, it was possible to immunize those people against smallpox.
Reportedly, farmers and people working regularly with cattle and horses were often spared during smallpox outbreaks. Investigations by the British Army in 1790 showed that horse-mounted troops were less infected by smallpox than infantry, due to probable exposure to the similar horsepox virus (Variola equina). By the early 19th century, more than 100,000 people in Great Britain had been vaccinated. The arm-to-arm method of transfer of the cowpox vaccine was also used to distribute Jenner's vaccine throughout the Spanish Empire. Spanish king Charles IV's daughter had been stricken with smallpox in 1798, and after she recovered, he arranged for the rest of his family to be vaccinated.
In 1803, the king, convinced of the benefits of the vaccine, ordered his personal physician Francis Xavier de Balmis, to deliver it to the Spanish dominions in North and South America. To maintain the vaccine in an available state during the voyage, the physician recruited 22 young boys who had never had cowpox or smallpox before, aged three to nine years, from the orphanages of Spain. During the trip across the Atlantic, de Balmis vaccinated the orphans in a living chain. Two children were vaccinated immediately before departure, and when cowpox pustules had appeared on their arms, material from these lesions was used to vaccinate two more children.
In 1796, English medical practitioner Edward Jenner tested the theory that cowpox could protect someone from being infected by smallpox. There had long been speculation regarding the origins of Jenner's variolae vaccinae, until DNA sequencing data showed close similarities between horsepox and cowpox viruses. Jenner noted that farriers sometimes milked cows and that material from the equine disease could produce a vesicular disease in cows from which variolae vaccinae was derived. Contemporary accounts provide support for Jenner's speculation that the vaccine probably originated as an equine disease called "grease".
Although cowpox originates on the udder of cows, Jenner took his sample from a milkmaid, Sarah Nelmes.
Jenner transferred pus from one of the cowpox lesions on Nelmes to James Phipps, an eight-year-old boy who had never had smallpox. Phipps developed a scab and a manageable fever. Approximately six weeks later, Jenner introduced an active sample of the smallpox virus into Phipps to test the theory. After an extended period of observation, it was recorded that Phipps had no reaction to it. Although Jenner was not the first person to conceive the notion of cowpox protecting against the smallpox virus, his experiment proved the theory.
In later years, Jenner popularized the procedure, calling it vaccination, from the Latin for cow, vacca. The number of vaccinations among people of that era increased drastically, as it was widely considered a relatively safer procedure than mainstream inoculation. Although the popularity of vaccination propelled Jenner into the spotlight, he mainly focused on the science behind why cowpox prevented infection by smallpox. The honour of the discovery is sometimes attributed to Benjamin Jesty, but he was no scientist and did not repeat or publish his findings. He is considered to be the first to use cowpox as a vaccination, though the term had not yet been coined.
During a smallpox outbreak, Jesty transferred pieces of cow udder which he knew had been infected with cowpox into the skin of his family members in the hope of protecting them. Jesty did not publicize his findings, and Jenner, who performed his first inoculation 22 years later and published his results, assumed credit. Jenner is said to have made the discovery independently, possibly without knowledge of those accounts from 20 years earlier. Although Jesty may have been first, Jenner made vaccination widely accessible and has therefore been credited with its invention.
Life cycle
The CPXV genome is over 220 kbp, the largest of any orthopoxvirus, and can be divided into three regions: two end regions, R1 and R2, and a central region roughly half the size of the genome. Inverted terminal repeats of around 10 kbp sit at the ends of the genome; each can be divided into two distinct sections. The first section, around 7.5 kbp long, includes a coding region; the other is a terminal region composed of 50 nucleotides that can be repeated up to thirty times. Only 30–40% of the products encoded by the CPXV genome are involved in the pathogenesis of the virus. The CPXV genome has the most complete set of genes of all the orthopoxviruses, a feature that makes it well suited to mutating into different strains. CPXV is a double-stranded DNA virus whose virion is surrounded by an envelope. The genome allows the virus to encode its own transcription machinery along with its own DNA-replication machinery. Replication takes place in the cytoplasm after the virus enters the cell and the virion is uncoated; new virions are then assembled and released from the host cell.
The genome is arranged so that both ends contain the genes responsible for evading the host's immune defenses, which act only extracellularly: the virus secretes proteins that intercept cytokines and chemokines outside the cell before they can reach their receptors. Because of the large size of its genome, the virus is especially capable of counteracting the host's immune defenses, and of all the poxviruses, CPXV mounts the most cytokine-directed countermeasures against the immune system. It encodes cytokine receptor homologs, such as the TNF-binding CrmB, CrmC, CrmD, and CrmE proteins, as well as binding proteins for IL-1β, IFN-γ, type I IFN, β-chemokines, and IL-18; not all of the receptors of CPXV are yet known. The four tumor necrosis factor (TNF) receptors and the lymphotoxin receptor form the virus's largest group of homologous receptors and play a crucial role in modulating the host's immune response.
CPXV produces two different types of inclusion bodies. All poxviruses form basophilic inclusions, also called B-type inclusion bodies, which contain the "virus factory" where the elements needed for replication and maturation of the virion are produced. CPXV also forms a second type, unique to only some chordopoxviruses: acidophilic inclusions, also called A-type inclusion bodies (ATIs). ATIs are encoded by the cpxv158 gene, whose product is the late protein ATIP. The importance of ATIs in the life cycle is still not well understood, and research to clarify their role is ongoing. Replication can continue without the cpxv158 gene, and the replication cycle shows no difference between the full virus and a virion whose cpxv158 gene has been deleted. However, in studies on mice, lesions caused by the deletion mutant CPXV-BRΔati healed faster and lost less tissue than lesions caused by CPXV-BR, which took longer to heal and lost more tissue. This suggests that ATIs are partly involved in how the host responds to the virus infection.
The virus also controls and infects the host by regulating cellular signaling pathways. During infection, CPXV is known to use the MEK/ERK1/2/Egr-1, JNK1/2, and PI3K/Akt pathways. Some of these pathways are not unique to CPXV, but how they function in response to the host is unique to this virus.
One notable CPXV protein is p28. It is made up of 242 amino acids and contains two domains: an N-terminal KilA-N domain and a C-terminal RING domain. The KilA-N domain allows DNA to bind to the protein. p28 is translated early in the replication cycle and remains in the cytoplasm for the rest of the viral life cycle. Research is ongoing to determine whether p28 acts as an essential factor needed for viral DNA replication in macrophages.
Opposition to vaccination
The majority of the population at the time accepted the up-and-coming vaccination, but there was still opposition from individuals reluctant to move away from inoculation. In addition, concern grew among those worried about the unknown repercussions of infecting a human with an animal disease. One way individuals expressed their discontent was through comics, which sometimes depicted small cows growing from the sites of vaccination. Others publicly advocated the continuance of inoculation, though not out of discontent with vaccination itself; rather, some of their reluctance stemmed from an apprehensiveness toward change. They had become so familiar with the process, outcome, and pros and cons of inoculation that they did not want to be surprised by the effects of vaccination. Jenner eased their minds after extensive trials, though others continued to oppose vaccination for different reasons. Because of the high price of inoculation, Jenner encountered very few common folk who were unwilling to accept vaccination, and he was thus able to find many subjects for his tests. He published his results in a pamphlet in 1798: An Inquiry into the Causes and Effects of Variolae Vaccinae, a Disease, Discovered in some of the Western Counties of England particularly Gloucestershire, and known by the Name of Cow Pox.
Historical use
After inoculation, vaccination using the cowpox virus became the primary defense against smallpox. After infection by the cowpox virus, the body (usually) gains the ability to recognize the similar smallpox virus from its antigens and is able to fight the smallpox disease much more efficiently.
The cowpox virus contains 222 thousand base pairs of DNA, which contains the information for about 203-204 genes. This makes cowpox one of the most complicated viruses known. A significant number of these genes give instructions for key parts of the human immune system, giving a clue as to why the closely related smallpox is so lethal. The vaccinia virus now used for smallpox vaccination is sufficiently different from the cowpox virus found in the wild as to be considered a separate virus.
British Parliament
While the vaccination's popularity increased exponentially, so did its monetary value. This was recognized by the British Parliament, which awarded Jenner 10,000 pounds for the vaccination and later granted him an additional 20,000 pounds. In the following years, Jenner continued to advocate his vaccination over the still-popular inoculation. Eventually, in 1840, inoculation was banned in England and replaced with the cowpox vaccination as the main medical means of combating smallpox. The cowpox vaccination saved the British Army thousands of soldiers by making them immune to smallpox in the wars that followed, and it also saved the United Kingdom thousands of pounds.
Kinepox
Kinepox is an alternative term for the smallpox vaccine used in early 19th-century America. Popularized by Jenner in the late 1790s, kinepox was a far safer method for inoculating people against smallpox than the previous method, variolation, which had a 3% fatality rate.
In a famous letter to Meriwether Lewis in 1803, Thomas Jefferson instructed the Lewis and Clark Expedition to "carry with you some matter of the kine-pox; inform those of them with whom you may be, of its efficacy as a preservative from the smallpox; & encourage them in the use of it..." Jefferson had developed an interest in protecting American Indians from smallpox, having been aware of epidemics along the Missouri River during the previous century. A year before his special instructions to Lewis, Jefferson had persuaded a visiting delegation of North American Indian chieftains to be vaccinated with kinepox during the winter of 1801–1802. Unfortunately, Lewis never got the opportunity to use kinepox during the pair's expedition, as it had become inadvertently inactive—a common occurrence in a time before vaccines were stabilized with preservatives such as glycerol or kept at refrigeration temperatures.
Prevention
Today, the virus is found in Europe, mainly in the UK. Human cases are very rare (though in 2010 a laboratory worker contracted cowpox) and most often contracted from domestic cats. Human infections usually remain localized and self-limiting, but can become fatal in immunosuppressed patients. The virus is not commonly found in cattle; the reservoir hosts for the virus are woodland rodents, particularly voles. Domestic cats contract the virus from these rodents. Symptoms in cats include lesions on the face, neck, forelimbs, and paws, and, less commonly, upper respiratory tract infections. Symptoms of infection with cowpox virus in humans are localized, pustular lesions generally found on the hands and limited to the site of introduction. The incubation period is nine to ten days. The virus is most prevalent in late summer and autumn.
Immunity to cowpox is gained when the smallpox vaccine is administered. Although the vaccine now uses vaccinia virus, the poxviruses are similar enough that the body becomes immune to both cow- and smallpox.
| Biology and health sciences | Viral diseases | Health
51346 | https://en.wikipedia.org/wiki/Coconut | Coconut | The coconut tree (Cocos nucifera) is a member of the palm tree family (Arecaceae) and the only living species of the genus Cocos. The term "coconut" (or the archaic "cocoanut") can refer to the whole coconut palm, the seed, or the fruit, which botanically is a drupe, not a nut. They are ubiquitous in coastal tropical regions and are a cultural icon of the tropics.
The coconut tree provides food, fuel, cosmetics, folk medicine and building materials, among many other uses. The inner flesh of the mature seed, as well as the coconut milk extracted from it, forms a regular part of the diets of many people in the tropics and subtropics. Coconuts are distinct from other fruits because their endosperm contains a large quantity of an almost clear liquid, called "coconut water" or "coconut juice". Mature, ripe coconuts can be used as edible seeds, or processed for oil and plant milk from the flesh, charcoal from the hard shell, and coir from the fibrous husk. Dried coconut flesh is called copra, and the oil and milk derived from it are commonly used in cooking (frying in particular) as well as in soaps and cosmetics. Sweet coconut sap can be made into drinks or fermented into palm wine or coconut vinegar. The hard shells, fibrous husks and long pinnate leaves can be used as material to make a variety of products for furnishing and decoration.
The coconut has cultural and religious significance in certain societies, particularly in the Austronesian cultures of the Western Pacific, where it features in mythologies, songs, and oral traditions. The fall of its mature fruit has led to a preoccupation with death by coconut. It also had ceremonial importance in pre-colonial animistic religions. The coconut has acquired religious significance in South Asian cultures as well, where it is used in Hindu rituals and forms the basis of wedding and worship ceremonies. It also plays a central role in the Coconut Religion founded in 1963 in Vietnam.
Coconuts were first domesticated by the Austronesian peoples in Island Southeast Asia and were spread during the Neolithic via their seaborne migrations as far east as the Pacific Islands, and as far west as Madagascar and the Comoros. They played a critical role in the long sea voyages of Austronesians by providing a portable source of food and water, as well as providing building materials for Austronesian outrigger boats. Coconuts were also later spread in historic times along the coasts of the Indian and Atlantic Oceans by South Asian, Arab, and European sailors. Based on these separate introductions, coconut populations can still be divided into Pacific coconuts and Indo-Atlantic coconuts, respectively. Coconuts were introduced by Europeans to the Americas during the colonial era in the Columbian exchange, but there is evidence of a possible pre-Columbian introduction of Pacific coconuts to Panama by Austronesian sailors. The evolutionary origin of the coconut is under dispute, with theories stating that it may have evolved in Asia, South America, or Pacific islands.
Trees can grow up to tall and can yield up to 75 fruits per year, though fewer than 30 is more typical. Plants are intolerant to cold and prefer copious precipitation and full sunlight. Many insect pests and diseases affect the species and are a nuisance for commercial production. In 2022, about 73% of the world's supply of coconuts was produced by Indonesia, India, and the Philippines.
Description
Cocos nucifera is a large palm, growing up to tall, with pinnate leaves long, and pinnae long; old leaves break away cleanly, leaving the trunk smooth. On fertile soil, a tall coconut palm tree can yield up to 75 fruits per year, but more often yields less than 30. Given proper care and growing conditions, coconut palms produce their first fruit in six to ten years, taking 15 to 20 years to reach peak production.
True-to-type dwarf varieties of Pacific coconuts have been cultivated by the Austronesian peoples since ancient times. These varieties were selected for slower growth, sweeter coconut water, and often brightly colored fruits. Many modern varieties are also grown, including the Maypan, King, and Macapuno. These vary by the taste of the coconut water and color of the fruit, as well as other genetic factors.
Fruit
Botanically, the coconut fruit is a drupe, not a true nut. Like other fruits, it has three layers: the exocarp, mesocarp, and endocarp. The exocarp is the glossy outer skin, usually yellow-green to yellow-brown in color. The mesocarp is composed of a fiber, called coir, which has many traditional and commercial uses. Both the exocarp and the mesocarp make up the "husk" of the coconut, while the endocarp makes up the hard coconut "shell". The endocarp is around thick and has three distinctive germination pores (micropyles) on the distal end. Two of the pores are plugged (the "eyes"), while one is functional.
The interior of the endocarp is hollow and is lined with a thin brown seed coat around thick. The endocarp is initially filled with a multinucleate liquid endosperm (the coconut water). As development continues, cellular layers of endosperm deposit along the walls of the endocarp, up to thick, starting at the distal end. They eventually form the edible solid endosperm (the "coconut meat" or "coconut flesh"), which hardens over time. The small cylindrical embryo is embedded in the solid endosperm directly below the functional pore of the endocarp. During germination, the embryo pushes out of the functional pore and forms a haustorium (the coconut sprout) inside the central cavity. The haustorium absorbs the solid endosperm to nourish the seedling.
Coconut fruits have two distinctive forms, depending on whether they are wild or domesticated. Wild coconuts feature an elongated triangular fruit with a thicker husk and a smaller amount of endosperm. These features make the fruits more buoyant and help them lodge in sandy shorelines, making their shape ideal for ocean dispersal.
Domesticated Pacific coconuts, on the other hand, are rounded in shape with a thinner husk and a larger amount of endosperm. Domesticated coconuts also contain more coconut water.
These two forms are referred to by the Samoan terms niu kafa for the elongated wild coconuts, and niu vai for the rounded domesticated Pacific coconuts.
A full-sized coconut fruit weighs about . Coconuts sold domestically in coconut-producing countries are typically not de-husked. Immature coconuts (6 to 8 months from flowering), in particular, are sold for coconut water and softer jelly-like coconut meat (known as "green coconuts", "young coconuts", or "water coconuts"), where the original coloration of the fruit is more aesthetically pleasing.
Whole mature coconuts (11 to 13 months from flowering) sold for export, however, typically have the husk removed to reduce weight and volume for transport. This results in the naked coconut "shell" with three pores, more familiar in countries where coconuts are not grown locally. De-husked coconuts typically weigh around . They are also easier for consumers to open, but have a shorter postharvest storage life of around two to three weeks at temperatures of , or up to two months at . In comparison, mature coconuts with the husk intact can be stored for three to five months at normal room temperature.
Roots
Unlike some other plants, the palm tree has neither a taproot nor root hairs, but has a fibrous root system. The root system consists of an abundance of thin roots that grow outward from the plant near the surface. Only a few of the roots penetrate deep into the soil for stability. This type of root system is known as fibrous or adventitious, and is a characteristic of grass species. Other types of large trees produce a single downward-growing tap root with a number of feeder roots growing from it. 2,000–4,000 adventitious roots may grow, each about large. Decayed roots are replaced regularly as the tree grows new ones.
Inflorescence
The palm produces both the female and male flowers on the same inflorescence; thus, the palm is monoecious. However, there is some evidence that it may be polygamomonoecious and may occasionally have bisexual flowers. The female flower is much larger than the male flower. Flowering occurs continuously. Coconut palms are believed to be largely cross-pollinated, although most dwarf varieties are self-pollinating.
Taxonomy
Phylogeny
The evolutionary history and fossil distribution of Cocos nucifera and other members of the tribe Cocoseae is more ambiguous than modern-day dispersal and distribution, with its ultimate origin and pre-human dispersal still unclear. There are currently two major viewpoints on the origins of the genus Cocos, one in the Indo-Pacific, and another in South America. The vast majority of Cocos-like fossils have been recovered generally from only two regions in the world: New Zealand and west-central India. However, like most palm fossils, Cocos-like fossils are still putative, as they are usually difficult to identify.
The earliest Cocos-like fossil found was Cocos zeylandica, a fossil species described from small fruits, around × in size, recovered from the Miocene (~23 to 5.3 million years ago) of New Zealand in 1926. Since then, numerous other fossils of similar fruits have been recovered throughout New Zealand from the Eocene, Oligocene, and possibly the Holocene, though research to determine their phylogenetic affinities is still ongoing. Endt & Hayward (1997) noted their resemblance to members of the South American genus Parajubaea, rather than Cocos, and proposed a South American origin. Conran et al. (2015), however, suggested that their diversity in New Zealand indicates that they evolved endemically, rather than being introduced to the islands by long-distance dispersal. In west-central India, numerous fossils of Cocos-like fruits, leaves, and stems have been recovered from the Deccan Traps. They include morphotaxa like Palmoxylon sundaran, Palmoxylon insignae, and Palmocarpon cocoides, as well as Cocos-like fossil fruits including Cocos intertrappeansis, Cocos pantii, and Cocos sahnii. They also include fossil fruits that have been tentatively identified as modern Cocos nucifera, among them two specimens named Cocos palaeonucifera and Cocos binoriensis, both dated by their authors to the Maastrichtian–Danian of the early Tertiary (70 to 62 million years ago). C. binoriensis has been claimed by its authors to be the earliest known fossil of Cocos nucifera.
Outside of New Zealand and India, only two other regions have reported Cocos-like fossils, namely Australia and Colombia. In Australia, a Cocos-like fossil fruit, measuring , was recovered from the Chinchilla Sand Formation, dated to the latest Pliocene or basal Pleistocene. Rigby (1995) assigned it to modern Cocos nucifera based on its size. In Colombia, a single Cocos-like fruit was recovered from the middle to late Paleocene Cerrejón Formation. The fruit, however, was compacted during fossilization, and it was not possible to determine whether it had the diagnostic three pores that characterize members of the tribe Cocoseae. Nevertheless, Gomez-Navarro et al. (2009) assigned it to Cocos based on the size and ridged shape of the fruit.
Further complicating measures to determine the evolutionary history of Cocos is the genetic diversity present within C. nucifera as well as its relatedness to other palms. Phylogenetic evidence supports the closest relatives of Cocos being either Syagrus or Attalea, both of which are found in South America. However, Cocos is not thought to be indigenous to South America, and the highest genetic diversity is present in Asian Cocos, indicating that at least the modern species Cocos nucifera is native to there. In addition, fossils of potential Cocos ancestors have been recovered from both Colombia and India. In order to resolve this enigma, a 2014 study proposed that the ancestors of Cocos had likely originated on the Caribbean coast of what is now Colombia, and during the Eocene the ancestral Cocos performed a long-distance dispersal across the Atlantic Ocean to North Africa. From here, island-hopping via coral atolls lining the Tethys Sea, potentially boosted by ocean currents at the time, would have proved crucial to dispersal, eventually allowing ancestral coconuts to reach India. The study contended that an adaptation to coral atolls would explain the prehistoric and modern distributions of Cocos, would have provided the necessary evolutionary pressures, and would account for morphological factors such as a thick husk to protect against ocean degradation and provide a moist medium in which to germinate on sparse atolls.
Etymology
The name coconut is derived from the 16th-century Portuguese word coco, meaning 'head' or 'skull' after the three indentations on the coconut shell that resemble facial features. Coco and coconut apparently came from 1521 encounters by Portuguese and Spanish explorers with Pacific Islanders, with the coconut shell reminding them of a ghost or witch in Portuguese folklore called coco (also côca). In the West it was originally called nux indica, a name used by Marco Polo in 1280 while in Sumatra. He took the term from the Arabs, who called it جوز هندي jawz hindī, translating to 'Indian nut'. Thenga, its Tamil/Malayalam name, was used in the detailed description of coconut found in Itinerario by Ludovico di Varthema published in 1510 and also in the later Hortus Indicus Malabaricus.
Carl Linnaeus initially wanted to name the coconut genus Coccus, a Latinization of the Portuguese word coco, because he had seen other botanists use the name in the middle of the 17th century. He consulted the catalogue Herbarium Amboinense by Georg Eberhard Rumphius, in which Rumphius noted that coccus was a homonym of coccum and of coccus from the Greek kokkos, meaning "grain" or "berry", but that the Romans identified coccus with kermes insects; Rumphius preferred the word cocus as a replacement. However, cocus could also mean "cook", like coquus in Latin, so Linnaeus instead chose Cocos, directly from the Portuguese word coco.
The specific name nucifera is derived from the Latin words nux (nut) and fera (bearing), for 'nut-bearing'.
Distribution and habitat
Coconuts have a nearly cosmopolitan distribution due to human cultivation and dispersal. However, their original distribution was in the Central Indo-Pacific, in the regions of Maritime Southeast Asia and Melanesia.
Origin
Modern genetic studies have identified the center of origin of coconuts as being the Central Indo-Pacific, the region between western Southeast Asia and Melanesia, where it shows greatest genetic diversity. Their cultivation and spread was closely tied to the early migrations of the Austronesian peoples who carried coconuts as canoe plants to the islands they settled. The similarities of the local names in the Austronesian region is also cited as evidence that the plant originated in the region. For example, the Polynesian and Melanesian term niu; Tagalog and Chamorro term niyog; and the Malay word nyiur or nyior. Other evidence for a Central Indo-Pacific origin is the native range of the coconut crab; and the higher amounts of C. nucifera-specific insect pests in the region (90%) in comparison to the Americas (20%), and Africa (4%).
A study in 2011 identified two highly genetically differentiated subpopulations of coconuts, one originating from Island Southeast Asia (the Pacific group) and the other from the southern margins of the Indian subcontinent (the Indo-Atlantic group). The Pacific group is the only one to display clear genetic and phenotypic indications of domestication, including dwarf habit, self-pollination, and the round "niu vai" fruit morphology with larger endosperm-to-husk ratios. The distribution of the Pacific coconuts corresponds to regions settled by Austronesian voyagers, indicating that its spread was largely the result of human introductions. This is most strikingly displayed in Madagascar, an island settled by Austronesian sailors around 2000 to 1500 BP. The coconut populations on the island show genetic admixture between the two subpopulations, indicating that Pacific coconuts were first brought by the Austronesian settlers and then interbred with the later Indo-Atlantic coconuts brought by Europeans from India.
Genetic studies of coconuts have also confirmed pre-Columbian populations of coconuts in Panama. However, the coconut is not native there, and the population displays a genetic bottleneck resulting from a founder effect. A study in 2008 showed that coconuts in the Americas are most closely related genetically to coconuts in the Philippines, and not to any other nearby coconut population (including Polynesia). Such an origin indicates that the coconuts were not introduced naturally, such as by sea currents. The researchers concluded that they were brought by early Austronesian sailors to the Americas by at least 2,250 BP, which may be proof of pre-Columbian contact between Austronesian and South American cultures. This is further strengthened by similar botanical evidence of contact, such as the pre-colonial presence of the sweet potato in Oceanian cultures. During the colonial era, Pacific coconuts were further introduced to Mexico from the Spanish East Indies via the Manila galleons.
In contrast to the Pacific coconuts, Indo-Atlantic coconuts were largely spread by Arab and Persian traders into the East African coast. Indo-Atlantic coconuts were also introduced into the Atlantic Ocean by Portuguese ships from their colonies in coastal India and Sri Lanka; first introduced to coastal West Africa, then onwards into the Caribbean and the east coast of Brazil. All of these introductions are within the last few centuries, relatively recent in comparison to the spread of Pacific coconuts.
Natural habitat
The coconut palm thrives on sandy soils and is highly tolerant of salinity. It prefers areas with abundant sunlight and regular rainfall ( annually), which makes colonizing shorelines of the tropics relatively straightforward. Coconuts also need high humidity (at least 70–80%) for optimum growth, which is why they are rarely seen in areas with low humidity. However, they can be found in humid areas with low annual precipitation such as in Karachi, Pakistan, which receives only about of rainfall per year, but is consistently warm and humid.
Coconut palms require warm conditions for successful growth, and are intolerant of cold weather. Some seasonal variation is tolerated, with good growth where mean summer temperatures are between , and survival as long as winter temperatures are above ; they will survive brief drops to . Severe frost is usually fatal, although they have been known to recover from temperatures of . Due to this, there are not many coconut palms in California. They may grow but not fruit properly in areas with insufficient warmth or sunlight, such as Bermuda.
The conditions required for coconut trees to grow without any care are:
Mean daily temperature above every day of the year
Mean annual rainfall above
No or very little overhead canopy, since even small trees require direct sun
The main limiting factor for most locations which satisfy the rainfall and temperature requirements is canopy growth, except those locations near coastlines, where the sandy soil and salt spray limit the growth of most other trees.
Domestication
Wild coconuts are naturally restricted to coastal areas in sandy, saline soils. The fruit is adapted for ocean dispersal. Coconuts could not reach inland locations without human intervention (to carry seednuts, plant seedlings, etc.) and early germination on the palm (vivipary) was important.
Coconuts today can be grouped into two highly genetically distinct subpopulations: the Indo-Atlantic group, originating from southern India and nearby regions (including Sri Lanka, the Laccadives, and the Maldives); and the Pacific group, originating from the region between maritime Southeast Asia and Melanesia. Linguistic, archaeological, and genetic evidence all point to the early domestication of Pacific coconuts by the Austronesian peoples in maritime Southeast Asia during the Austronesian expansion (c. 3000 to 1500 BCE). Although archaeological remains dating to 1000 to 500 BCE suggest that Indo-Atlantic coconuts were later independently cultivated by the Dravidian peoples, only Pacific coconuts show clear signs of domestication traits like dwarf habits, self-pollination, and rounded fruits. Indo-Atlantic coconuts, in contrast, all have the ancestral traits of tall habits and elongated triangular fruits.
The coconut played a critical role in the migrations of the Austronesian peoples. They provided a portable source of both food and water, allowing Austronesians to survive long sea voyages to colonize new islands as well as establish long-range trade routes. Based on linguistic evidence, the absence of words for coconut in the Taiwanese Austronesian languages makes it likely that the Austronesian coconut culture developed only after Austronesians started colonizing the Philippine islands. The importance of the coconut in Austronesian cultures is evidenced by shared terminology of even very specific parts and uses of coconuts, which were carried outwards from the Philippines during the Austronesian migrations. Indo-Atlantic type coconuts were also later spread by Arab and South Asian traders along the Indian Ocean basin, resulting in limited admixture with Pacific coconuts introduced earlier to Madagascar and the Comoros via the ancient Austronesian maritime trade network.
Coconuts can be broadly divided into two fruit types: the ancestral niu kafa form, with a thick-husked, angular fruit, and the niu vai form, with a thin-husked, spherical fruit with a higher proportion of endosperm. The terms are derived from the Samoan language and were adopted into scientific usage by Harries (1978).
The niu kafa form is the wild ancestral type, with thick husks to protect the seed, an angular, highly ridged shape to promote buoyancy during ocean dispersal, and a pointed base that allowed fruits to dig into the sand, preventing them from being washed away during germination on a new island. It is the dominant form in the Indo-Atlantic coconuts. However, they may have also been partially selected for thicker husks for coir production, which was also important in Austronesian material culture as a source for cordage in building houses and boats.
The niu vai form is the domesticated form dominant in Pacific coconuts. They were selected for by the Austronesian peoples for their larger endosperm-to-husk ratio as well as higher coconut water content, making them more useful as food and water reserves for sea voyages. The decreased buoyancy and increased fragility of this spherical, thin-husked fruit would not matter for a species that had started to be dispersed by humans and grown in plantations. Niu vai endocarp fragments have been recovered in archaeological sites in the St. Matthias Islands of the Bismarck Archipelago. The fragments are dated to approximately 1000 BCE, suggesting that cultivation and artificial selection of coconuts were already practiced by the Austronesian Lapita people.
Coconuts can also be broadly divided into two general types based on habit: the "Tall" (var. typica) and "Dwarf" (var. nana) varieties. The two groups are genetically distinct, with the dwarf variety showing a greater degree of artificial selection for ornamental traits and for early germination and fruiting. The tall variety is outcrossing while dwarf palms are self-pollinating, which has led to a much greater degree of genetic diversity within the tall group.
The dwarf coconut cultivars are fully domesticated, in contrast to tall cultivars, which display greater diversity in terms of domestication (and lack thereof). The fact that all dwarf coconuts share three genetic markers out of thirteen (markers only present at low frequencies in tall cultivars) makes it likely that they all originate from a single domesticated population. Philippine and Malayan dwarf coconuts diverged early into two distinct types. They usually remain genetically isolated when introduced to new regions, making it possible to trace their origins. Numerous other dwarf cultivars developed as the initial dwarf cultivar was introduced to other regions and hybridized with various tall cultivars. The dwarf varieties originated in Southeast Asia, which contains the tall cultivars genetically closest to dwarf coconuts.
Sequencing of the genome of the tall and dwarf varieties revealed that they diverged 2 to 8 million years ago and that the dwarf variety arose through alterations in genes involved in the metabolism of the plant hormone gibberellin.
Another ancestral variety is the niu leka of Polynesia (sometimes called the "Compact Dwarfs"). Although it shares similar characteristics to dwarf coconuts (including slow growth), it is genetically distinct and is thus believed to be independently domesticated, likely in Tonga. Other cultivars of niu leka may also exist in other islands of the Pacific, and some are probably descendants of advanced crosses between Compact Dwarfs and Southeast Asian Dwarf types.
Dispersal
Coconut fruit in the wild is light, buoyant, and highly water resistant. It is claimed that they evolved to disperse significant distances via marine currents. However, it can also be argued that the placement of the vulnerable eye of the nut (down when floating), and the site of the coir cushion are better positioned to ensure that the water-filled nut does not fracture when dropping on rocky ground, rather than for flotation.
It is also often stated that coconuts can travel 110 days, or , by sea and still be able to germinate. This figure has been questioned based on the extremely small sample size that forms the basis of the paper that makes this claim. Thor Heyerdahl provides an alternative, and much shorter, estimate based on his first-hand experience crossing the Pacific Ocean on the raft Kon-Tiki:
The nuts we had in baskets on deck remained edible and capable of germinating the whole way to Polynesia. But we had laid about half among the special provisions below deck, with the waves washing around them. Every single one of these was ruined by the sea water. And no coconut can float over the sea faster than a balsa raft moves with the wind behind it.
He also notes that several of the nuts began to germinate by the time they had been ten weeks at sea, precluding an unassisted journey of 100 days or more.
Drift models based on wind and ocean currents have shown that coconuts could not have drifted across the Pacific unaided. If they were naturally distributed and had been in the Pacific for a thousand years or so, then we would expect the eastern shore of Australia, with its own islands sheltered by the Great Barrier Reef, to have been thick with coconut palms, since the currents run directly into and down along this coast. However, neither James Cook nor William Bligh (put adrift after the Bounty mutiny) found any sign of the nuts along this stretch, even when Bligh needed water for his crew. Nor were there coconuts on the east side of the African coast until Vasco da Gama arrived, nor in the Caribbean when it was first visited by Christopher Columbus. Coconuts were commonly carried by Spanish ships as a source of fresh water.
These observations provide substantial circumstantial evidence that deliberate Austronesian voyagers carried coconuts across the Pacific Ocean, and that the species could not have dispersed worldwide without human agency. More recently, genomic analysis of cultivated coconut (C. nucifera L.) has shed light on these movements; admixture, the transfer of genetic material, evidently occurred between the two populations.
Given that coconuts are ideally suited for inter-island group ocean dispersal, obviously some natural distribution did take place. However, the locations of the admixture events are limited to Madagascar and coastal east Africa, and exclude the Seychelles. This pattern coincides with the known trade routes of Austronesian sailors. Additionally, a genetically distinct subpopulation of coconut on the Pacific coast of Latin America has undergone a genetic bottleneck resulting from a founder effect; however, its ancestral population is the Pacific coconut from the Philippines. This, together with their use of the South American sweet potato, suggests that Austronesian peoples may have sailed as far east as the Americas. In the Hawaiian Islands, the coconut is regarded as a Polynesian introduction, first brought to the islands by early Polynesian voyagers (also Austronesians) from their homelands in the southern islands of Polynesia.
Specimens have been collected from the sea as far north as Norway (but it is not known where they entered the water). They have been found in the Caribbean and the Atlantic coasts of Africa and South America for less than 500 years (the Caribbean native inhabitants do not have a dialect term for them, but use the Portuguese name), but evidence of their presence on the Pacific coast of South America antedates Columbus's arrival in the Americas. They are now almost ubiquitous between 26° N and 26° S except for the interiors of Africa and South America.
The 2014 coral atoll origin hypothesis proposed that the coconut had dispersed in an island hopping fashion using the small, sometimes transient, coral atolls. It noted that by using these small atolls, the species could easily island-hop. Over the course of evolutionary time-scales the shifting atolls would have shortened the paths of colonization, meaning that any one coconut would not have to travel very far to find new land.
Ecology
Coconuts are susceptible to the phytoplasma disease lethal yellowing. One recently selected cultivar, the 'Maypan', has been bred for resistance to this disease. Yellowing diseases affect plantations in Africa, India, Mexico, the Caribbean and the Pacific region. Konan et al. (2007) attribute much of this resistance to a few alleles at a few microsatellite loci, finding that 'Vanuatu Tall' and 'Sri-Lanka Green Dwarf' are the most resistant, while 'West African Tall' breeds are especially susceptible.
The coconut palm is damaged by the larvae of many Lepidoptera (butterfly and moth) species which feed on it, including the African armyworm (Spodoptera exempta) and Batrachedra spp.: B. arenosella, B. atriloqua (feeds exclusively on C. nucifera), B. mathesoni (feeds exclusively on C. nucifera), and B. nuciferae.
Brontispa longissima (coconut leaf beetle) feeds on young leaves, and damages both seedlings and mature coconut palms. In 2007, the Philippines imposed a quarantine in Metro Manila and 26 provinces to stop the spread of the pest and protect the Philippine coconut industry managed by some 3.5 million farmers.
The fruit may also be damaged by eriophyid coconut mites (Eriophyes guerreronis). This mite infests coconut plantations, and is devastating; it can destroy up to 90% of coconut production. The immature seeds are infested and desapped by larvae staying in the portion covered by the perianth of the immature seed; the seeds then drop off or survive deformed. Spraying with wettable sulfur 0.4% or with Neem-based pesticides can give some relief, but is cumbersome and labor-intensive.
In Kerala, India, the main coconut pests are the coconut mite, the rhinoceros beetle, the red palm weevil, and the coconut leaf caterpillar. Research into countermeasures to these pests has yielded no results; researchers from the Kerala Agricultural University and the Central Plantation Crop Research Institute, Kasaragode, continue to work on countermeasures. The Krishi Vigyan Kendra, Kannur under Kerala Agricultural University has developed an innovative extension approach called the compact area group approach to combat coconut mites.
Cultivation
Coconut palms are normally cultivated in hot and wet tropical climates. They need year-round warmth and moisture to grow well and fruit. Coconut palms are hard to establish in dry climates, and cannot grow there without frequent irrigation; in drought conditions, the new leaves do not open well, and older leaves may become desiccated; fruit also tends to be shed.
The extent of cultivation in the tropics is threatening a number of habitats, such as mangroves; an example of such damage to an ecoregion is in the Petenes mangroves of the Yucatán.
Uniquely among plants, coconut trees can be irrigated with sea water, although this is recommended only for coconut palms that are over two years old.
Cultivars
Coconut has a number of commercial and traditional cultivars. They can be sorted mainly into tall cultivars, dwarf cultivars, and hybrid cultivars (hybrids between talls and dwarfs). Some of the dwarf cultivars such as 'Malayan dwarf' have shown some promising resistance to lethal yellowing, while other cultivars such as 'Jamaican tall' are highly affected by the same plant disease. Some cultivars are more drought resistant such as 'West coast tall' (India) while others such as 'Hainan Tall' (China) are more cold tolerant. Other aspects such as seed size, shape and weight, and copra thickness are also important factors in the selection of new cultivars. Some cultivars such as 'Fiji dwarf' form a large bulb at the lower stem and others are cultivated to produce very sweet coconut water with orange-colored husks (king coconut) used entirely in fruit stalls for drinking (Sri Lanka, India).
Harvesting
The two most common harvesting methods are the climbing method and the pole method. Climbing is the most widespread, but it is also more dangerous and requires skilled workers. Manually climbing trees is traditional in most countries and requires a specific posture that exerts pressure on the trunk with the feet. Climbers employed on coconut plantations often develop musculoskeletal disorders and risk severe injury or death from falling.
To avoid this, coconut workers in the Philippines and Guam traditionally use bolos tied with a rope to the waist to cut grooves at regular intervals into coconut trunks. This essentially turns the trunk of the tree into a ladder, though it reduces the value of coconut timber recovered from the trees and can create an entry point for infection. Other manual methods to make climbing easier include using a system of pulleys and ropes; using pieces of vine, rope, or cloth tied to both hands or feet; using spikes attached to the feet or legs; or attaching coconut husks to the trunk with ropes. Modern methods use hydraulic elevators mounted on tractors or ladders. Mechanical coconut-climbing devices and even automated robots have also recently been developed in countries like India, Sri Lanka, and Malaysia.
The pole method uses a long pole with a cutting device at the end. In the Philippines, the traditional tool is known as the halabas and is made from a long bamboo pole with a sickle-like blade mounted at the tip. Though safer and faster than the climbing method, its main disadvantage is that it does not allow workers to examine and clean the crown of coconuts for pests and diseases.
Determining when to harvest is also important. Gatchalian et al. (1994) developed a sonometry technique for precisely determining the stage of ripeness of young coconuts.
A system of bamboo bridges and ladders directly connecting the tree canopies is also utilized in the Philippines on coconut plantations that harvest coconut sap (rather than fruits) for coconut vinegar and palm wine production. In other areas, like Papua New Guinea, coconuts are simply collected when they fall to the ground.
A more controversial method, employed by a small number of coconut farmers in Thailand and Malaysia, uses trained pig-tailed macaques to harvest coconuts. Thailand has been raising and training pig-tailed macaques to pick coconuts for around 400 years. Training schools for pig-tailed macaques still exist both in southern Thailand and in the Malaysian state of Kelantan.
The practice of using macaques to harvest coconuts was exposed in Thailand by the People for the Ethical Treatment of Animals (PETA) in 2019, resulting in calls for boycotts on coconut products. PETA later clarified that the use of macaques is not practiced in the Philippines, India, Brazil, Colombia, Hawaii, and other major coconut-producing regions.
Substitutes for cooler climates
In cooler climates (but not less than USDA Zone 9), a similar palm, the queen palm (Syagrus romanzoffiana), is used in landscaping. Its fruits are similar to coconut, but smaller. The queen palm was originally classified in the genus Cocos along with the coconut, but was later reclassified in Syagrus. A recently discovered palm, Beccariophoenix alfredii from Madagascar, is nearly identical to the coconut, more so than the queen palm, and can also be grown in slightly cooler climates than the coconut palm. Coconuts can only be grown in temperatures above and need a daily temperature above to produce fruit.
Production
In 2022, world production of coconuts was 62 million tonnes, led by Indonesia, India, and the Philippines, which together accounted for 73% of the total.
Indonesia
Indonesia is the world's largest producer of coconuts, with a gross production of 15 million tonnes.
Philippines
The Philippines is the world's second-largest producer of coconuts. It was the world's largest producer for decades until a decline in production due to aging trees and typhoon damage; Indonesia overtook it in 2010. It is still the largest producer of coconut oil and copra, accounting for 64% of global production. The production of coconuts plays an important role in the economy, with 25% of cultivated land (around 3.56 million hectares) used for coconut plantations and approximately 25 to 33% of the population reliant on coconuts for their livelihood.
Two important coconut products, macapuno and nata de coco, were first developed in the Philippines. Macapuno is a coconut variety with jelly-like coconut meat. Its meat is sweetened, cut into strands, and sold in glass jars as coconut strings, sometimes labeled as "coconut sport". Nata de coco, also called coconut gel, is another jelly-like coconut product, made from fermented coconut water.
India
Traditional areas of coconut cultivation in India are the states of Kerala, Tamil Nadu, Karnataka, Puducherry, Andhra Pradesh, Goa, Maharashtra, Odisha, West Bengal, and Gujarat, and the islands of Lakshadweep and Andaman and Nicobar. As per 2014–15 statistics from the Coconut Development Board of the Government of India, four southern states combined account for almost 90% of the total production in the country: Tamil Nadu (33.8%), Karnataka (25.2%), Kerala (24.0%), and Andhra Pradesh (7.2%). Other states, such as Goa, Maharashtra, Odisha, West Bengal, and those in the northeast (Tripura and Assam), account for the remaining production. Though Kerala has the largest number of coconut trees, Tamil Nadu leads all other states in production per hectare. In Tamil Nadu, the Coimbatore and Tirupur regions top the production list. The coconut tree is the official state tree of Kerala, India.
In Goa, the coconut tree has been reclassified by the government as a palm (rather than a tree), enabling farmers and developers to clear land with fewer restrictions and without needing permission from the forest department before cutting a coconut tree.
Middle East
The main coconut-producing area in the Middle East is the Dhofar region of Oman, but they can be grown all along the Persian Gulf, Arabian Sea, and Red Sea coasts, because these seas are tropical and provide enough humidity (through seawater evaporation) for coconut trees to grow. The young coconut plants need to be nursed and irrigated with drip pipes until they are old enough (stem bulb development) to be irrigated with brackish water or seawater alone, after which they can be replanted on the beaches. In particular, the area around Salalah maintains large coconut plantations similar to those found across the Arabian Sea in Kerala. The reason why coconuts are cultivated only in Yemen's Al Mahrah and Hadramaut governorates and in the Sultanate of Oman, but not in other suitable areas of the Arabian Peninsula, may be that Oman and Hadramaut had long dhow trade relations with Burma, Malaysia, Indonesia, East Africa, and Zanzibar, as well as southern India and China. Omani people needed the coir rope from coconut fiber to stitch together their traditional seagoing dhow vessels, in which nails were never used. The know-how of coconut cultivation and the necessary soil fixation and irrigation may have found its way into Omani, Hadrami, and Al-Mahra culture by people who returned from those overseas areas.
The ancient coconut groves of Dhofar were mentioned by the medieval Moroccan traveller Ibn Battuta in his writings, known as Al Rihla. The annual rainy season known locally as khareef or monsoon makes coconut cultivation easy on the Arabian east coast.
Coconut trees are also increasingly grown for decorative purposes along the coasts of the United Arab Emirates and Saudi Arabia with the help of irrigation. The UAE has, however, imposed strict laws on mature coconut tree imports from other countries to reduce the spread of pests to other native palm trees, as the mixing of date and coconut trees poses a risk of cross-species palm pests, such as rhinoceros beetles and red palm weevils. The artificial landscaping may have been the cause of lethal yellowing, a phytoplasma disease of coconut palms that leads to the death of the tree. It is spread by host insects that thrive on heavy turf grasses. Therefore, heavy turf grass environments (beach resorts and golf courses) also pose a major threat to local coconut trees. Traditionally, dessert banana plants and local wild beach flora such as Scaevola taccada and Ipomoea pes-caprae were used as humidity-supplying green undergrowth for coconut trees, mixed with sea almond and sea hibiscus. Due to growing sedentary lifestyles and heavy-handed landscaping, these traditional farming and soil-fixing techniques have declined.
Sri Lanka
Sri Lanka is the world's fourth-largest producer of coconuts and the second-largest producer of coconut oil and copra, accounting for 15% of global production. Coconut production is a mainstay of the Sri Lankan economy, with 12% of cultivated land (409,244 hectares as of 2017) used for coconut growing. Sri Lanka established its Coconut Development Authority, Coconut Cultivation Board, and Coconut Research Institute in the early British Ceylon period.
United States
In the United States, coconut palms can be grown and reproduced outdoors without irrigation in Hawaii, southern and central Florida, and the territories of Puerto Rico, Guam, American Samoa, the U.S. Virgin Islands, and the Northern Mariana Islands. Coconut palms are also periodically successful in the Lower Rio Grande Valley region of southern Texas and in other microclimates in the southwest.
In Florida, wild populations of coconut palms extend up the East Coast from Key West to Jupiter Inlet, and up the West Coast from Marco Island to Sarasota. Many of the smallest coral islands in the Florida Keys are known to have abundant coconut palms sprouting from coconuts that have drifted or been deposited by ocean currents. Coconut palms are cultivated north of South Florida to roughly Cocoa Beach on the East Coast and Clearwater on the West Coast.
Australia
Coconuts are commonly grown around the northern coast of Australia, and in some warmer parts of New South Wales. However, they are mainly present as decoration, and the Australian coconut industry is small; Australia is a net importer of coconut products. Australian cities put much effort into de-fruiting decorative coconut trees to ensure that mature coconuts do not fall and injure people.
Allergens
Food
Coconut oil is increasingly used in the food industry. Proteins from coconut may cause allergic reactions, including anaphylaxis, in some people.
In the United States, the Food and Drug Administration declared that coconut must be disclosed as an ingredient on package labels as a "tree nut" with potential allergenicity.
Topical
Cocamidopropyl betaine (CAPB) is a surfactant manufactured from coconut oil that is increasingly used as an ingredient in personal hygiene products and cosmetics, such as shampoos, liquid soaps, cleansers and antiseptics, among others. CAPB may cause mild skin irritation, but allergic reactions to CAPB are rare and probably related to impurities generated during the manufacturing process (which include amidoamine and dimethylaminopropylamine) rather than to CAPB itself.
Uses
The coconut palm is grown throughout the tropics for decoration, as well as for its many culinary and nonculinary uses; virtually every part of the coconut palm can be used by humans in some manner and has significant economic value. Coconuts' versatility is sometimes noted in its naming. In Sanskrit, it is kalpa vriksha ("the tree which provides all the necessities of life"). In the Malay language, it is pokok seribu guna ("the tree of a thousand uses"). In the Philippines, the coconut is commonly called the "tree of life".
It is one of the most useful trees in the world.
Culinary
Nutrition
A reference serving of raw coconut flesh supplies of food energy and a high amount of total fat (33 grams), especially saturated fat (89% of total fat), along with a moderate quantity of carbohydrates (15 g) and protein (3 g). Micronutrients in significant content (more than 10% of the Daily Value) include the dietary minerals manganese, copper, iron, phosphorus, selenium, and zinc.
Coconut meat
The edible white, fleshy part of the seed (the endosperm) is known as the "coconut meat", "coconut flesh", or "coconut kernel". In the coconut industry, coconut meat can be classified loosely into three different types depending on maturity, namely "Malauhog", "Malakanin", and "Malakatad". The terminology is derived from the Tagalog language. Malauhog (literally "mucus-like") refers to very young coconut meat (around 6 to 7 months old), which has a translucent appearance and a gooey texture that disintegrates easily. Malakanin (literally "cooked rice-like") refers to young coconut meat (around 7 to 8 months old), which has a more opaque white appearance and a soft texture similar to cooked rice, and can still be easily scraped off the coconut shell. Malakatad (literally "leather-like") refers to fully mature coconut meat (around 8 to 9 months old), which has an opaque white appearance and a tough, rubbery to leathery texture, and is difficult to separate from the shell.
Maturity is difficult to assess on an unopened coconut, and there is no technically proven method for determining it. Based on color and size, younger coconuts tend to be smaller with brighter colors, while more mature coconuts are larger with browner colors. Maturity can also be determined traditionally by tapping the coconut fruit: Malauhog gives a "solid" sound when tapped, while Malakanin and Malakatad produce a "hollow" sound. Another method is to shake the coconut: immature coconuts produce a sloshing sound when shaken (the sharper the sound, the younger the coconut), while fully mature coconuts do not.
Both "Malauhog" and "Malakanin" meats of immature coconuts can be eaten as is or used in salads, drinks, desserts, and pastries such as buko pie and es kelapa muda. Because of their soft textures, they are unsuitable for grating. Mature Malakatad coconut meat has a tough texture and thus is processed before consumption or made into copra. Freshly shredded mature coconut meat, known as "grated coconut", "shredded coconut", or "coconut flakes", is used in the extraction of coconut milk. They are also used as a garnish for various dishes, as in klepon and puto bumbong. They can also be cooked in sugar and eaten as a dessert in the Philippines known as bukayo.
Grated coconut that is dehydrated by drying or baking is known as "desiccated coconut". It contains less than 3% of the original moisture content of coconut meat. It is predominantly used in the bakery and confectionery industries (especially in non-coconut-producing countries) because of its longer shelf life compared to freshly grated coconut. Desiccated coconut is used in confections and desserts such as macaroons. Dried coconut is also used as the filling for many chocolate bars. Some dried coconut is purely coconut, but others are manufactured with other ingredients, such as sugar, propylene glycol, salt, and sodium metabisulfite.
Coconut meat can also be cut into larger pieces or strips, dried, and salted to make "coconut chips" or "coco chips". These can be toasted or baked to make bacon-like fixings.
Macapuno
A special cultivar of coconut known as macapuno produces a large amount of jelly-like coconut meat. Its meat fills the entire interior of the coconut shell, rather than just the inner surfaces. It was first developed for commercial cultivation in the Philippines and is used widely in Philippine cuisine for desserts, drinks, and pastries. It is also popular in Indonesia (where it is known as kopyor) for making beverages.
Coconut milk
Coconut milk, not to be confused with coconut water, is obtained by pressing the grated coconut meat, usually with hot water added, which extracts the coconut oil, proteins, and aromatic compounds. It is used for cooking various dishes. Coconut milk contains 5% to 20% fat, while coconut cream contains around 20% to 50% fat. Most of the fat is saturated (89%), with lauric acid as the major fatty acid. Coconut milk can be diluted to create coconut milk beverages, which have a much lower fat content and are suitable as milk substitutes.
Coconut milk powder, rich in protein, can be produced from coconut milk following centrifugation, separation, and spray drying.
Coconut milk and coconut cream extracted from grated coconut are often added to various desserts and savory dishes, as well as to curries and stews. They can also be diluted into a beverage. Various other products made from thickened coconut milk with sugar and/or eggs, like coconut jam and coconut custard, are also widespread in Southeast Asia. In the Philippines, sweetened reduced coconut milk is marketed as coconut syrup and is used for various desserts. Coconut oil extracted from coconut milk or copra is also used for frying, cooking, and making margarine, among other uses.
Coconut water
Coconut water serves as a suspension for the endosperm of the coconut during its nuclear phase of development. Later, the endosperm matures and deposits onto the coconut rind during the cellular phase. The water is consumed throughout the humid tropics, and has been introduced into the retail market as a processed sports drink. Mature fruits have significantly less liquid than young, immature coconuts, barring spoilage. Coconut water can be fermented to produce coconut vinegar.
Per 100-gram serving, coconut water contains 19 calories and no significant content of essential nutrients.
Coconut water can be drunk fresh or used in cooking as in binakol. It can also be fermented to produce a jelly-like dessert known as nata de coco.
Coconut flour
Coconut flour has also been developed for use in baking, to combat malnutrition.
Sprouted coconut
Newly germinated coconuts contain a spherical edible mass known as the sprouted coconut or coconut sprout. It has a crunchy watery texture and a slightly sweet taste. It is eaten as is or used as an ingredient in various dishes. It is produced as the endosperm nourishes the developing embryo. It is a haustorium, a spongy absorbent tissue formed from the distal part of embryo during coconut germination, which facilitates absorption of nutrients for the growing shoot and root.
Heart of palm
Apical buds of adult plants are edible, and are known as "palm cabbage" or heart of palm. They are considered a rare delicacy, as harvesting the buds kills the palms. Hearts of palm are eaten in salads, sometimes called "millionaire's salad".
Toddy and sap
The sap derived from incising the flower clusters of the coconut is drunk as toddy, also known as tubâ in the Philippines (both fermented and fresh), tuak (Indonesia and Malaysia), karewe (fresh and not fermented, collected twice a day, for breakfast and dinner) in Kiribati, and neera in South Asia. When left to ferment on its own, it becomes palm wine. Palm wine is distilled to produce arrack. In the Philippines, this alcoholic drink is called lambanog (historically also called vino de coco in Spanish) or "coconut vodka".
The sap can be reduced by boiling to create a sweet syrup or candy such as te kamamai in Kiribati or dhiyaa hakuru and addu bondi in the Maldives. It can be reduced further to yield coconut sugar also referred to as palm sugar or jaggery. A young, well-maintained tree can produce around of toddy per year, while a 40-year-old tree may yield around .
Coconut sap, usually extracted from cut inflorescence stalks, is sweet when fresh and can be drunk as is, as in the tuba fresca of Mexico (derived from the Philippine tubâ). It can also be processed to extract palm sugar. When fermented, the sap can be made into coconut vinegar or various palm wines (which can be further distilled to make arrack).
Coconut vinegar
Coconut vinegar, made from fermented coconut water or sap, is used extensively in Southeast Asian cuisine (notably the Philippines, where it is known as sukang tuba), as well as in some cuisines of India and Sri Lanka, especially Goan cuisine. A cloudy white liquid, it has a particularly sharp, acidic taste with a slightly yeasty note.
Coconut oil
Coconut oil is commonly used in cooking, especially for frying. It can be used in liquid form as would other vegetable oils, or in solid form similar to butter or lard.
Long-term consumption of coconut oil may have negative health effects similar to those from consuming other sources of saturated fats, including butter, beef fat, and palm oil. Its chronic consumption may increase the risk of cardiovascular diseases by raising total blood cholesterol levels through elevated blood levels of LDL cholesterol and lauric acid.
Coconut butter
Coconut butter is often used to describe solidified coconut oil, but has also been adopted as an alternate name for creamed coconut, a specialty product made of coconut milk solids or puréed coconut meat and oil. It has a creamy, spreadable consistency reminiscent of peanut butter, albeit a little richer.
Copra
Copra is the dried meat of the seed; after processing, it yields coconut oil and coconut meal. Coconut oil, aside from being used in cooking as an ingredient and for frying, is used in soaps, cosmetics, hair oil, and massage oil. Coconut oil is also a main ingredient in Ayurvedic oils. In Vanuatu, coconut palms for copra production are generally spaced apart, allowing a tree density of .
It takes around 6,000 full-grown coconuts to produce one tonne of copra.
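Taken at face value, this figure implies an average copra yield per nut of roughly 1000 kg ÷ 6000 ≈ 0.17 kg, or about 170 grams of copra per coconut (a back-of-the-envelope estimate derived from the figure above, not a number given in the source).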
Husks and shells
The husk and shells can be used for fuel and are a source of charcoal. Activated carbon manufactured from coconut shell is considered extremely effective for the removal of impurities. The coconut's obscure origin in foreign lands led to the notion of using cups made from the shell to neutralise poisoned drinks. Coconut cups were frequently carved with scenes in relief and mounted with precious metals.
The husks can be used as flotation devices. As an abrasive, a dried half coconut shell with husk can be used to buff floors. It is known as a bunot in the Philippines and simply a "coconut brush" in Jamaica. The fresh husk of a brown coconut may serve as a dish sponge or body sponge.
Coconut cups, often with highly decorated mounts in precious metals, were an exotic luxury in medieval and Early Modern Europe, that were also thought to have medical benefits. A coco chocolatero was a simpler type of cup used to serve small quantities of beverages (such as chocolate drinks) between the 17th and 19th centuries in countries such as Mexico, Guatemala, and Venezuela.
In Asia, coconut shells are also used as bowls and in the manufacture of various handicrafts, including buttons carved from the dried shell. Coconut buttons are often used for Hawaiian aloha shirts. Tempurung, as the shell is called in the Malay language, can be used as a soup bowl and, if fixed with a handle, a ladle. In Thailand, the coconut husk is used as a potting medium to produce healthy forest tree saplings. The process of extracting coir from the husk bypasses retting, using a custom-built coconut-husk extractor designed by the ASEAN–Canada Forest Tree Seed Centre in 1986. Fresh husks contain more tannin than old husks, and tannin has negative effects on sapling growth. The shell and husk can be burned for smoke to repel mosquitoes, and are used in parts of South India for this purpose.
Half coconut shells are used in theatre Foley sound effects work, struck together to create the sound effect of a horse's hoofbeats. Dried half shells are used as the bodies of musical instruments, including the Chinese yehu and banhu, along with the Vietnamese đàn gáo and Arabo-Turkic rebab. In the Philippines, dried half shells are also used as a musical instrument in a folk dance called maglalatik.
The shell, freed from the husk, and heated on warm ashes, exudes an oily material that is used to soothe dental pains in traditional medicine of Cambodia.
In World War II, coastwatcher scout Biuku Gasa was the first of two from the Solomon Islands to reach the shipwrecked and wounded crew of Motor Torpedo Boat PT-109 commanded by future U.S. president John F. Kennedy. Gasa suggested, for lack of paper, delivering by dugout canoe a message inscribed on a husked coconut shell, reading "Nauru Isl commander / native knows posit / he can pilot / 11 alive need small boat / Kennedy." This coconut was later kept on the president's desk, and is now in the John F. Kennedy Library.
The Philippine Coast Guard used an unconventional coconut-husk boom to clean up the oil slick in the 2024 Manila Bay oil spill.
Coir
Coir (the fiber from the husk of the coconut) is used in ropes, mats, doormats, brushes, and sacks, as caulking for boats, and as stuffing fiber for mattresses. It is used in horticulture in potting compost, especially in orchid mix. The coir is used to make brooms in Cambodia.
Leaves
The stiff midribs of coconut leaves are used for making brooms in India, Indonesia (sapu lidi), Malaysia, the Maldives, and the Philippines (walis tingting). The green of the leaves (lamina) is stripped away, leaving the veins (long, thin, woodlike strips) which are tied together to form a broom or brush. A long handle made from some other wood may be inserted into the base of the bundle and used as a two-handed broom.
The leaves also provide material for baskets that can draw well water and for roofing thatch; they can be woven into mats, cooking skewers, and kindling arrows as well. Leaves are also woven into small pouches that are filled with rice and cooked to make pusô and ketupat.
Dried coconut leaves can be burned to ash, which can be harvested for lime. In India, the woven coconut leaves are used to build wedding marquees, especially in the states of Kerala, Karnataka, and Tamil Nadu.
The leaves are used for thatching houses, or for decorating climbing frames and meeting rooms in Cambodia, where the plant is known as dôô:ng.
Timber
Coconut trunks are used for building small bridges and huts; they are preferred for their straightness, strength, and salt resistance. In Kerala, coconut trunks are used for house construction. Coconut timber comes from the trunk, and is increasingly being used as an ecologically sound substitute for endangered hardwoods. It has applications in furniture and specialized construction, as notably demonstrated in Manila's Coconut Palace.
Hawaiians hollowed out the trunk to form drums, containers, or small canoes. The "branches" (leaf petioles) are strong and flexible enough to make a switch. The use of coconut branches in corporal punishment was revived in the Gilbertese community on Choiseul in the Solomon Islands in 2005.
Roots
The roots are used as a dye, a mouthwash, and a folk medicine for diarrhea and dysentery. A frayed piece of root can also be used as a toothbrush. In Cambodia, the roots are used in traditional medicine as a treatment for dysentery.
Other uses
The leftover fiber from coconut oil and coconut milk production, coconut meal, is used as livestock feed. The dried calyx is used as fuel in wood-fired stoves. Coconut water is traditionally used as a growth supplement in plant tissue culture and micropropagation. The smell of coconuts comes from the 6-pentyloxan-2-one molecule, known as δ-decalactone in the food and fragrance industries.
Tool and shelter for animals
Researchers from the Melbourne Museum in Australia observed the octopus species Amphioctopus marginatus using tools, specifically coconut shells, for defense and shelter. This behavior was observed in Bali and North Sulawesi in Indonesia between 1998 and 2008. Amphioctopus marginatus is the first invertebrate known to use tools.
A coconut can be hollowed out and used as a home for a rodent or a small bird. Halved, drained coconuts can also be hung up as bird feeders, and after the flesh has gone, can be filled with fat in winter to attract tits.
In culture
The coconut was a critical food item for the people of Polynesia, and the Polynesians brought it with them as they spread to new islands.
In the Ilocos region of the northern Philippines, the Ilocano people fill two halved coconut shells with diket (cooked sweet rice), and place liningta nga itlog (halved boiled egg) on top of it. This ritual, known as niniyogan, is an offering made to the deceased and one's ancestors. This accompanies the palagip (prayer to the dead).
A coconut is an essential element of rituals in Hindu tradition. Often it is decorated with bright metal foils and other symbols of auspiciousness. It is offered during worship to a Hindu god or goddess. Narali Purnima is celebrated on a full moon day which usually signifies the end of monsoon season in India. The word Narali is derived from naral implying "coconut" in Marathi. Fishermen give an offering of coconut to the sea to celebrate the beginning of a new fishing season. Irrespective of their religious affiliations, fishermen of India often offer it to the rivers and seas in the hopes of having bountiful catches. Hindus often initiate the beginning of any new activity by breaking a coconut to ensure the blessings of the gods and successful completion of the activity. The Hindu goddess of well-being and wealth, Lakshmi, is often shown holding a coconut. In the foothills of the temple town of Palani, before going to worship Murugan for the Ganesha, coconuts are broken at a place marked for the purpose. Every day, thousands of coconuts are broken, and some devotees break as many as 108 coconuts at a time as per the prayer. They are also used in Hindu weddings as a symbol of prosperity.
The flowers are used sometimes in wedding ceremonies in Cambodia.
The Zulu Social Aid and Pleasure Club of New Orleans traditionally throws hand-decorated coconuts, one of the most valuable Mardi Gras souvenirs, to parade revelers. The tradition began in the 1910s, and has continued since. In 1987, a "coconut law" was signed by Governor Edwin Edwards exempting from insurance liability any decorated coconut "handed" from a Zulu float.
The coconut is also used as a target and prize in the traditional British fairground game coconut shy. The player buys some small balls which are then thrown as hard as possible at coconuts balanced on sticks. The aim is to knock a coconut off the stand and win it.
It was the main food of adherents of the now discontinued Vietnamese religion Đạo Dừa.
Myths and legends
Some South Asian, Southeast Asian, and Pacific Ocean cultures have origin myths in which the coconut plays the main role. In the Hainuwele myth from Maluku, a girl emerges from the blossom of a coconut tree. In Maldivian folklore, one of the main myths of origin reflects the dependence of the Maldivians on the coconut tree. In the story of Sina and the Eel, the origin of the coconut is related as the beautiful woman Sina burying an eel, which eventually became the first coconut.
According to urban legend, more deaths are caused by falling coconuts than by sharks annually.
Historical records
Literary evidence from the Ramayana and Sri Lankan chronicles indicates that the coconut was present in the Indian subcontinent before the 1st century BCE. The earliest direct description is given by Cosmas Indicopleustes in his Topographia Christiana, written around 545, where it is referred to as "the great nut of India". Another early mention of the coconut dates back to the One Thousand and One Nights story of Sinbad the Sailor, wherein he bought and sold a coconut during his fifth voyage.
In March 1521, a description of the coconut was given by Antonio Pigafetta writing in Italian and using the words "cocho"/"cochi", as recorded in his journal after the first European crossing of the Pacific Ocean during the Magellan circumnavigation and meeting the inhabitants of what would become known as Guam and the Philippines. He explained how at Guam "they eat coconuts" ("mangiano cochi") and that the natives there also "anoint the body and the hair with coconut and beniseed oil" ("ongieno el corpo et li capili co oleo de cocho et de giongioli").
In politics
United States Vice President Kamala Harris said during a May 2023 White House ceremony, "You think you just fell out of a coconut tree?", which became a meme among her supporters during her unsuccessful 2024 presidential campaign.
| Biology and health sciences | Monocots | null |
51399 | https://en.wikipedia.org/wiki/Buckingham%20%CF%80%20theorem | Buckingham π theorem | In engineering, applied mathematics, and physics, the Buckingham π theorem is a key theorem in dimensional analysis. It is a formalisation of Rayleigh's method of dimensional analysis. Loosely, the theorem states that if there is a physically meaningful equation involving a certain number n of physical variables, then the original equation can be rewritten in terms of a set of p = n − k dimensionless parameters π1, π2, ..., πp constructed from the original variables, where k is the number of physical dimensions involved; it is obtained as the rank of a particular matrix.
The theorem provides a method for computing sets of dimensionless parameters from the given variables, or nondimensionalization, even if the form of the equation is still unknown.
The Buckingham π theorem indicates that the validity of the laws of physics does not depend on a specific unit system. A statement of this theorem is that any physical law can be expressed as an identity involving only dimensionless combinations (ratios or products) of the variables linked by the law (for example, pressure and volume are linked by Boyle's law – they are inversely proportional). If the dimensionless combinations' values changed with the systems of units, then the equation would not be an identity, and the theorem would not hold.
History
Although named for Edgar Buckingham, the theorem was first proved by the French mathematician Joseph Bertrand in 1878. Bertrand considered only special cases of problems from electrodynamics and heat conduction, but his article contains, in distinct terms, all the basic ideas of the modern proof of the theorem and clearly indicates the theorem's utility for modelling physical phenomena. The technique of using the theorem ("the method of dimensions") became widely known due to the works of Rayleigh. The first application of the theorem in the general case (to the dependence of pressure drop in a pipe upon governing parameters) probably dates back to 1892; a heuristic proof with the use of series expansions followed in 1894.
Formal generalization of the theorem for the case of arbitrarily many quantities was given first by A. Vaschy in 1892, then in 1911—apparently independently—by both A. Federman and D. Riabouchinsky, and again in 1914 by Buckingham. It was Buckingham's article that introduced the use of the symbol "π" for the dimensionless variables (or parameters), and this is the source of the theorem's name.
Statement
More formally, the number p of dimensionless terms that can be formed is equal to the nullity of the dimensional matrix, and k is its rank. For experimental purposes, different systems that share the same description in terms of these dimensionless numbers are equivalent.
In mathematical terms, if we have a physically meaningful equation such as
F(q1, q2, ..., qn) = 0,
where the qi are any n physical variables, and there is a maximal dimensionally independent subset of them of size k, then the above equation can be restated as
f(π1, π2, ..., πp) = 0,
where the πi are dimensionless parameters constructed from the qi by p = n − k dimensionless equations — the so-called Pi groups — of the form
πi = q1^a1 q2^a2 ... qn^an,
where the exponents ai are rational numbers. (They can always be taken to be integers by redefining πi as being raised to a power that clears all denominators.) If there are k fundamental units in play, then p = n − k.
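To make the rank and nullspace computation concrete, here is a short Python sketch using sympy. The example system (drag on a sphere, with variable names chosen here for illustration; it is not taken from the article) has n = 5 variables and k = 3 dimensions, so it yields p = 2 pi groups:

```python
from sympy import Matrix

def pi_groups(dim_matrix):
    """Return the rank and a basis of exponent vectors for the pi groups.

    Rows of dim_matrix are fundamental dimensions, columns are variables;
    each kernel basis vector gives the exponents of one dimensionless group.
    """
    M = Matrix(dim_matrix)
    k = M.rank()            # number of independent dimensions
    basis = M.nullspace()   # p = n - k independent exponent vectors
    return k, basis

# Assumed variables: drag force F [M L / T^2], speed v [L/T], diameter d [L],
# density rho [M/L^3], viscosity mu [M/(L T)].
# Rows: M, L, T; columns: F, v, d, rho, mu.
dim = [[1, 0, 0, 1, 1],
       [1, 1, 1, -3, -1],
       [-2, -1, 0, 0, -1]]
k, basis = pi_groups(dim)
print(k, len(basis))   # 3 dimensions, p = 5 - 3 = 2 pi groups
for vec in basis:
    print(vec.T)       # exponents of one dimensionless group each
```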
Significance
The Buckingham π theorem provides a method for computing sets of dimensionless parameters from given variables, even if the form of the equation remains unknown. However, the choice of dimensionless parameters is not unique; Buckingham's theorem only provides a way of generating sets of dimensionless parameters and does not indicate the most "physically meaningful" ones.
Two systems for which these parameters coincide are called similar (as with similar triangles, they differ only in scale); they are equivalent for the purposes of the equation, and the experimentalist who wants to determine the form of the equation can choose the most convenient one. Most importantly, Buckingham's theorem describes the relation between the number of variables and fundamental dimensions.
Proof
For simplicity, it will be assumed that the space of fundamental and derived physical units forms a vector space over the real numbers, with the fundamental units as basis vectors, and with multiplication of physical units as the "vector addition" operation, and raising to powers as the "scalar multiplication" operation:
represent a dimensional variable as the set of exponents needed for the fundamental units (with a power of zero if the particular fundamental unit is not present). For instance, the standard gravity g has units of length/time² (length over time squared), so it is represented as the vector (1, −2) with respect to the basis of fundamental units (length, time). We could also require that exponents of the fundamental units be rational numbers and modify the proof accordingly, in which case the exponents in the pi groups can always be taken as rational numbers or even integers.
Rescaling units
Suppose we have quantities q1, q2, ..., qn, where the units of qi contain length raised to the power ci. If we originally measure length in meters but later switch to centimeters, then the numerical value of qi would be rescaled by a factor of 100^ci. Any physically meaningful law should be invariant under an arbitrary rescaling of every fundamental unit; this is the fact that the pi theorem hinges on.
Formal proof
Given a system of n dimensional variables in k fundamental (basis) dimensions, the dimensional matrix is the k × n matrix M whose rows correspond to the fundamental dimensions and whose columns are the dimensions of the variables: the (i, j)th entry (where 1 ≤ i ≤ k and 1 ≤ j ≤ n) is the power of the ith fundamental dimension in the jth variable.
The matrix can be interpreted as taking in a combination of the variable quantities and giving out the dimensions of the combination in terms of the fundamental dimensions. So the (column) vector that results from the multiplication
M a, with a = (a1, ..., an),
consists of the units of
q1^a1 q2^a2 ... qn^an
in terms of the fundamental independent (basis) units.
If we rescale the ith fundamental unit by a factor of αi, then qj gets rescaled by αi^mij, where mij is the (i, j)th entry of the dimensional matrix. In order to convert this into a linear algebra problem, we take logarithms (the base is irrelevant), turning rescaling into translation: log qj ↦ log qj + Σi mij log αi, which is an action of R^k on R^n. We define a physical law to be an arbitrary function f: R^n → R such that (q1, ..., qn) is a permissible set of values for the physical system when f(q1, ..., qn) = 0. We further require f to be invariant under this action. Hence it descends to a function F: R^n / im(M^T) → R. All that remains is to exhibit an isomorphism between R^p and R^n / im(M^T), the (log) space of pi groups (log π1, ..., log πp).
We construct an n × p matrix K whose columns are a basis for the kernel of M. It tells us how to embed R^p into R^n as the kernel of M. That is, we have an exact sequence
0 → R^p → R^n → R^k,
where the first map is K and the second is M. Taking transposes yields another exact sequence
R^k → R^n → R^p → 0,
with maps M^T and K^T. The first isomorphism theorem produces the desired isomorphism, which sends the coset v + im(M^T) to K^T v. This corresponds to rewriting the tuple (log q1, ..., log qn) into the pi groups (log π1, ..., log πp) coming from the columns of K.
The International System of Units defines seven base units, which are the ampere, kelvin, second, metre, kilogram, candela and mole. It is sometimes advantageous to introduce additional base units to refine the technique of dimensional analysis (see orientational analysis).
Examples
Speed
This example is elementary but serves to demonstrate the procedure.
Suppose a car is driving at 100 km/h; how long does it take to go 200 km?
This question considers n = 3 dimensioned variables: distance d, time t, and speed v, and we are seeking some law of the form t = Duration(v, d). Any two of these variables are dimensionally independent, but the three taken together are not. Thus there is p = n − k = 3 − 2 = 1 dimensionless quantity.
The dimensional matrix is
M =
| 1  0  1 |
| 0  1 -1 |
in which the rows correspond to the basis dimensions L and T, and the columns to the considered dimensions d, t, and v, where the latter stands for the speed dimension. The elements of the matrix correspond to the powers to which the respective dimensions are to be raised. For instance, the third column states that v, represented by the column vector (1, −1), is expressible in terms of the basis dimensions as L^1 T^−1, since v = d/t.
For a dimensionless constant π = d^a1 t^a2 v^a3, we are looking for vectors a = (a1, a2, a3) such that the matrix-vector product M a equals the zero vector (0, 0). In linear algebra, the set of vectors with this property is known as the kernel (or nullspace) of the dimensional matrix. In this particular case its kernel is one-dimensional. The dimensional matrix as written above is in reduced row echelon form, so one can read off a non-zero kernel vector to within a multiplicative constant:
a = (−1, 1, 1).
If the dimensional matrix were not already reduced, one could perform Gauss–Jordan elimination on the dimensional matrix to more easily determine the kernel. It follows that the dimensionless constant, replacing the dimensions by the corresponding dimensioned variables, may be written:
π = d^−1 t^1 v^1 = t v / d.
Since the kernel is only defined to within a multiplicative constant, the above dimensionless constant raised to any arbitrary power yields another (equivalent) dimensionless constant.
Dimensional analysis has thus provided a general equation relating the three physical variables:
F(π) = 0,
or, letting C denote a zero of the function F,
π = t v / d = C,
which can be written in the desired form (which recall was t = Duration(v, d)) as
t = C d / v.
The actual relationship between the three variables is simply d = v t, so that t = d/v. In other words, in this case F has one physically relevant root, and it is unity. The fact that only a single value of C will do and that it is equal to 1 is not revealed by the technique of dimensional analysis.
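The kernel computation in this example is small enough to check mechanically; a minimal sympy sketch (illustrative, not part of the original presentation):

```python
from sympy import Matrix

# Rows: dimensions L, T; columns: variables d, t, v.
M = Matrix([[1, 0, 1],
            [0, 1, -1]])
print(M.rank())       # 2, so p = 3 - 2 = 1 dimensionless group
print(M.nullspace())  # [Matrix([[-1], [1], [1]])], i.e. pi = t*v/d
```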
The simple pendulum
We wish to determine the period T of small oscillations in a simple pendulum. It will be assumed that it is a function of the length L, the mass M, and the acceleration due to gravity g on the surface of the Earth, which has dimensions of length divided by time squared. The model is of the form
f(T, M, L, g) = 0.
(Note that it is written as a relation, not as a function: T is not written here as a function of M, L, and g.)
Period, mass, and length are dimensionally independent, but acceleration can be expressed in terms of time and length, which means the four variables taken together are not dimensionally independent. Thus we need only p = 4 − 3 = 1 dimensionless parameter, denoted by π, and the model can be re-expressed as
F(π) = 0,
where π is given by
π = T^a1 M^a2 L^a3 g^a4
for some values of the exponents a1, a2, a3, a4.
The dimensions of the dimensional quantities are:
T = t, M = m, L = ℓ, g = ℓ/t².
The dimensional matrix is:
| 1  0  0 -2 |
| 0  1  0  0 |
| 0  0  1  1 |
(The rows correspond to the dimensions t, m, and ℓ, and the columns to the dimensional variables T, M, L, g. For instance, the 4th column, (−2, 0, 1), states that the variable g has dimensions of t^−2 m^0 ℓ^1, i.e. ℓ/t².)
We are looking for a kernel vector a = (a1, a2, a3, a4) such that the matrix product of the dimensional matrix on a yields the zero vector (0, 0, 0). The dimensional matrix as written above is in reduced row echelon form, so one can read off a kernel vector within a multiplicative constant:
a = (2, 0, −1, 1).
Were it not already reduced, one could perform Gauss–Jordan elimination on the dimensional matrix to more easily determine the kernel. It follows that the dimensionless constant may be written:
π = T² M⁰ L^−1 g¹ = g T² / L.
In fundamental terms:
[π] = (t)² (m)⁰ (ℓ)^−1 (ℓ/t²)¹ = 1,
which is dimensionless. Since the kernel is only defined to within a multiplicative constant, if the above dimensionless constant is raised to any arbitrary power, it will yield another equivalent dimensionless constant.
In this example, three of the four dimensional quantities are fundamental units, so the last (which is g) must be a combination of the previous.
Note that if a2 (the exponent of M) had been non-zero, there would be no way to cancel the mass dimension; therefore a2 must be zero. Dimensional analysis has allowed us to conclude that the period of the pendulum is not a function of its mass. (In the 3D space of powers of mass, time, and distance, we can say that the vector for mass is linearly independent from the vectors for the three other variables. Up to a scaling factor, π = g T²/L is the only nontrivial way to construct a vector of a dimensionless parameter.)
The model can now be expressed as:
F(g T² / L) = 0.
Then this implies that g T²/L = C for some zero C of the function F. If there is only one zero, call it C, then T = √(C L / g). It requires more physical insight or an experiment to show that there is indeed only one zero and that the constant is in fact given by C = 4π².
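The same check can be run for the pendulum matrix; note the zero in the mass slot of the kernel vector, which is the linear-algebra form of the argument above (again an illustrative sketch):

```python
from sympy import Matrix

# Rows: dimensions t, m, l; columns: variables T (period), M (mass),
# L (length), g (gravity).
M = Matrix([[1, 0, 0, -2],
            [0, 1, 0,  0],
            [0, 0, 1,  1]])
print(M.nullspace())  # [Matrix([[2], [0], [-1], [1]])], i.e. pi = g*T**2/L
```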
For large oscillations of a pendulum, the analysis is complicated by an additional dimensionless parameter, the maximum swing angle. The above analysis is a good approximation as the angle approaches zero.
Electric power
To demonstrate the application of the theorem, consider the power consumption of a stirrer with a given shape.
The power, P, in dimensions [M · L2/T3], is a function of the density, ρ [M/L3], and the viscosity of the fluid to be stirred, μ [M/(L · T)], as well as the size of the stirrer given by its diameter, D [L], and the angular speed of the stirrer, n [1/T]. Therefore, we have a total of n = 5 variables representing our example. Those n = 5 variables are built up from k = 3 independent dimensions, e.g., length: L (SI units: m), time: T (s), and mass: M (kg).
According to the π-theorem, the n = 5 variables can be reduced by the k = 3 dimensions to form p = n − k = 5 − 3 = 2 independent dimensionless numbers. Usually, these quantities are chosen as Re = ρ n D² / μ, commonly named the Reynolds number, which describes the fluid flow regime, and Np = P / (ρ n³ D⁵), the power number, which is the dimensionless description of the stirrer.
Note that the two dimensionless quantities are not unique and depend on which of the n = 5 variables are chosen as the k = 3 dimensionally independent basis variables, which, in this example, appear in both dimensionless quantities. The Reynolds number and power number fall from the above analysis if ρ, n, and D are chosen to be the basis variables. If, instead, μ, n, and D are selected, the Reynolds number is recovered while the second dimensionless quantity becomes P / (μ n² D³). We note that this is the product of the Reynolds number and the power number.
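A quick numerical illustration of the two groups (all the physical numbers below are assumptions of this sketch, not values from the article):

```python
# Water stirred by a 0.1 m impeller at 5 revolutions per second (assumed).
rho = 1000.0   # fluid density [kg/m^3]
mu = 1.0e-3    # dynamic viscosity [Pa*s]
D = 0.1        # impeller diameter [m]
n = 5.0        # rotation rate [1/s]
P = 2.0        # power draw [W] (assumed measurement)

Re = rho * n * D**2 / mu      # Reynolds number
Np = P / (rho * n**3 * D**5)  # power number
print(Re, Np, Np * Re)        # Np*Re equals P/(mu*n**2*D**3)
```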
Other examples
An example of dimensional analysis can be found for the case of the mechanics of a thin, solid and parallel-sided rotating disc. There are five variables involved which reduce to two non-dimensional groups. The relationship between these can be determined by numerical experiment using, for example, the finite element method.
The theorem has also been used in fields other than physics, for instance in sports science.
| Physical sciences | Basics | Basics and measurement |
51414 | https://en.wikipedia.org/wiki/Fundamental%20theorem%20of%20algebra | Fundamental theorem of algebra | The fundamental theorem of algebra, also called d'Alembert's theorem or the d'Alembert–Gauss theorem, states that every non-constant single-variable polynomial with complex coefficients has at least one complex root. This includes polynomials with real coefficients, since every real number is a complex number with its imaginary part equal to zero.
Equivalently (by definition), the theorem states that the field of complex numbers is algebraically closed.
The theorem is also stated as follows: every non-zero, single-variable, degree n polynomial with complex coefficients has, counted with multiplicity, exactly n complex roots. The equivalence of the two statements can be proven through the use of successive polynomial division.
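Numerically, the "exactly n roots" form of the theorem is easy to exhibit; a small sketch using numpy (illustrative only):

```python
import numpy as np

# z^5 - 1 = 0: a degree-5 polynomial, so exactly 5 complex roots.
roots = np.roots([1, 0, 0, 0, 0, -1])
print(len(roots))                     # 5
print(np.allclose(np.abs(roots), 1))  # True: the fifth roots of unity
```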
Despite its name, it is not fundamental for modern algebra; it was named when algebra was synonymous with the theory of equations.
History
Peter Roth, in his book Arithmetica Philosophica (published in 1608, at Nürnberg, by Johann Lantzenberger), wrote that a polynomial equation of degree n (with real coefficients) may have n solutions. Albert Girard, in his book L'invention nouvelle en l'Algèbre (published in 1629), asserted that a polynomial equation of degree n has n solutions, but he did not state that they had to be real numbers. Furthermore, he added that his assertion holds "unless the equation is incomplete", by which he meant that no coefficient is equal to 0. However, when he explains in detail what he means, it is clear that he actually believes that his assertion is always true; for instance, he shows that the equation x⁴ = 4x − 3, although incomplete, has four solutions (counting multiplicities): 1 (twice), −1 + i√2, and −1 − i√2.
As will be mentioned again below, it follows from the fundamental theorem of algebra that every non-constant polynomial with real coefficients can be written as a product of polynomials with real coefficients whose degrees are either 1 or 2. However, in 1702 Leibniz erroneously said that no polynomial of the type x⁴ + a⁴ (with a real and distinct from 0) can be written in such a way. Later, Nikolaus Bernoulli made the same assertion concerning the polynomial x⁴ − 4x³ + 2x² + 4x + 4, but he got a letter from Euler in 1742 in which it was shown that this polynomial is equal to
(x² − (2 + α)x + 1 + √7 + α)(x² − (2 − α)x + 1 + √7 − α),
with
α = √(4 + 2√7).
Also, Euler pointed out that
x⁴ + a⁴ = (x² + a√2·x + a²)(x² − a√2·x + a²).
A first attempt at proving the theorem was made by d'Alembert in 1746, but his proof was incomplete. Among other problems, it assumed implicitly a theorem (now known as Puiseux's theorem), which would not be proved until more than a century later and using the fundamental theorem of algebra. Other attempts were made by Euler (1749), de Foncenex (1759), Lagrange (1772), and Laplace (1795). These last four attempts assumed implicitly Girard's assertion; to be more precise, the existence of solutions was assumed and all that remained to be proved was that their form was a + bi for some real numbers a and b. In modern terms, Euler, de Foncenex, Lagrange, and Laplace were assuming the existence of a splitting field of the polynomial p(z).
At the end of the 18th century, two new proofs were published which did not assume the existence of roots, but neither of which was complete. One of them, due to James Wood and mainly algebraic, was published in 1798 and it was totally ignored. Wood's proof had an algebraic gap. The other one was published by Gauss in 1799 and it was mainly geometric, but it had a topological gap, only filled by Alexander Ostrowski in 1920, as discussed in Smale (1981).
The first rigorous proof was published by Argand, an amateur mathematician, in 1806 (and revisited in 1813); it was also here that, for the first time, the fundamental theorem of algebra was stated for polynomials with complex coefficients, rather than just real coefficients. Gauss produced two other proofs in 1816 and another incomplete version of his original proof in 1849.
The first textbook containing a proof of the theorem was Cauchy's Cours d'analyse de l'École Royale Polytechnique (1821). It contained Argand's proof, although Argand is not credited for it.
None of the proofs mentioned so far is constructive. It was Weierstrass who raised for the first time, in the middle of the 19th century, the problem of finding a constructive proof of the fundamental theorem of algebra. He presented his solution, which amounts in modern terms to a combination of the Durand–Kerner method with the homotopy continuation principle, in 1891. Another proof of this kind was obtained by Hellmuth Kneser in 1940 and simplified by his son Martin Kneser in 1981.
Without using countable choice, it is not possible to constructively prove the fundamental theorem of algebra for complex numbers based on the Dedekind real numbers (which are not constructively equivalent to the Cauchy real numbers without countable choice). However, Fred Richman proved a reformulated version of the theorem that does work.
Equivalent statements
There are several equivalent formulations of the theorem:
Every univariate polynomial of positive degree with real coefficients has at least one complex root.
Every univariate polynomial of positive degree with complex coefficients has at least one complex root.
This implies immediately the previous assertion, as real numbers are also complex numbers. The converse results from the fact that one gets a polynomial with real coefficients by taking the product of a polynomial and its complex conjugate (obtained by replacing each coefficient with its complex conjugate). A root of this product is either a root of the given polynomial, or of its conjugate; in the latter case, the conjugate of this root is a root of the given polynomial.
Every univariate polynomial of positive degree n with complex coefficients can be factorized as
p(z) = a(z − z1)(z − z2) ··· (z − zn),
where a, z1, ..., zn are complex numbers.
The n complex numbers z1, ..., zn are the roots of the polynomial. If a root appears in several factors, it is a multiple root, and the number of its occurrences is, by definition, the multiplicity of the root.
The proof that this statement results from the previous ones is done by recursion on n: when a root z1 has been found, the polynomial division by z − z1 provides a polynomial of degree n − 1 whose roots are the other roots of the given polynomial.
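This recursion can be mirrored numerically by deflation: find one root, divide out the corresponding linear factor, and repeat on the quotient. A minimal sketch with numpy (illustrative; naive deflation can be numerically delicate, and np.roots is used here just to supply one root per step):

```python
import numpy as np

def roots_by_deflation(coeffs):
    """Peel off roots one at a time: find a root r, divide by (z - r), repeat."""
    coeffs, found = list(coeffs), []
    while len(coeffs) > 2:
        r = np.roots(coeffs)[0]                  # one root of current polynomial
        found.append(r)
        coeffs, _ = np.polydiv(coeffs, [1, -r])  # quotient of degree one less
    found.append(-coeffs[1] / coeffs[0])         # root of the final linear factor
    return found

# (z - 1)(z - 2)(z - 3) = z^3 - 6z^2 + 11z - 6
print(sorted(roots_by_deflation([1, -6, 11, -6]), key=abs))  # ~ [1, 2, 3]
```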
The next two statements are equivalent to the previous ones, although they do not involve any nonreal complex number. These statements can be proved from previous factorizations by remarking that, if z is a non-real root of a polynomial with real coefficients, its complex conjugate z̄ is also a root, and (x − z)(x − z̄) is a polynomial of degree two with real coefficients (this is the complex conjugate root theorem). Conversely, if one has a factor of degree two, the quadratic formula gives a root.
Every univariate polynomial with real coefficients of degree larger than two has a factor of degree two with real coefficients.
Every univariate polynomial with real coefficients of positive degree can be factored as c·p1(x) ··· pk(x), where c is a real number and each pi is a monic polynomial of degree at most two with real coefficients. Moreover, one can suppose that the factors of degree two do not have any real root.
Proofs
All proofs below involve some mathematical analysis, or at least the topological concept of continuity of real or complex functions. Some also use differentiable or even analytic functions. This requirement has led to the remark that the Fundamental Theorem of Algebra is neither fundamental, nor a theorem of algebra.
Some proofs of the theorem only prove that any non-constant polynomial with real coefficients has some complex root. This lemma is enough to establish the general case because, given a non-constant polynomial p(z) with complex coefficients, the polynomial
q(z) = p(z) · p̄(z)
has only real coefficients, and, if z is a root of q(z), then either z or its conjugate is a root of p(z). Here, p̄(z) is the polynomial obtained by replacing each coefficient of p(z) with its complex conjugate; the roots of p̄(z) are exactly the complex conjugates of the roots of p(z).
Many non-algebraic proofs of the theorem use the fact (sometimes called the "growth lemma") that a polynomial function p(z) of degree n whose dominant coefficient is 1 behaves like zⁿ when |z| is large enough. More precisely, there is some positive real number R such that
(1/2)|z|ⁿ ≤ |p(z)| ≤ (3/2)|z|ⁿ
when |z| > R.
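A numerical illustration of the growth lemma (the polynomial and radius below are arbitrary choices for this sketch):

```python
import numpy as np

p = np.polynomial.Polynomial([7, 2, 0, 1])  # p(z) = z^3 + 2z + 7, monic
n, R = 3, 100.0
z = R * np.exp(1j * np.linspace(0, 2 * np.pi, 1000))
ratio = np.abs(p(z)) / R**n
print(ratio.min(), ratio.max())  # both near 1, well inside [1/2, 3/2]
```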
Real-analytic proofs
Even without using complex numbers, it is possible to show that a real-valued polynomial p(x), with p(0) ≠ 0, of degree n > 2 can always be divided by some quadratic polynomial with real coefficients. In other words, for some real-valued a and b, the coefficients of the linear remainder on dividing p(x) by x² − ax − b simultaneously become zero:
p(x) = q(x)(x² − ax − b) + x·Rp(x)(a, b) + Sp(x)(a, b),
where q(x) is a polynomial of degree n − 2. The coefficients Rp(x)(a, b) and Sp(x)(a, b) are independent of x and completely defined by the coefficients of p(x). In terms of representation, Rp(x)(a, b) and Sp(x)(a, b) are bivariate polynomials in a and b. In the flavor of Gauss's first (incomplete) proof of this theorem from 1799, the key is to show that for any sufficiently large negative value of b, all the roots of both Rp(x)(a, b) and Sp(x)(a, b) in the variable a are real-valued and alternating with each other (the interlacing property). Utilizing a Sturm-like chain that contains Rp(x)(a, b) and Sp(x)(a, b) as consecutive terms, interlacing in the variable a can be shown for all consecutive pairs in the chain whenever b has a sufficiently large negative value. As Sp(a, b = 0) = p(0) has no roots, interlacing of Rp(x)(a, b) and Sp(x)(a, b) in the variable a fails at b = 0. Topological arguments can be applied on the interlacing property to show that the locus of the roots of Rp(x)(a, b) and Sp(x)(a, b) must intersect for some real-valued a and b < 0.
Complex-analytic proofs
Find a closed disk D of radius r centered at the origin such that |p(z)| > |p(0)| whenever |z| ≥ r. The minimum of |p(z)| on D, which must exist since D is compact, is therefore achieved at some point z0 in the interior of D, but not at any point of its boundary. The maximum modulus principle applied to 1/p(z) implies that p(z0) = 0. In other words, z0 is a zero of p(z).
A variation of this proof does not require the maximum modulus principle (in fact, a similar argument also gives a proof of the maximum modulus principle for holomorphic functions). Continuing from before the principle was invoked, if a := p(z0) ≠ 0, then, expanding p(z) in powers of z − z0, we can write
p(z) = a + ck(z − z0)^k + ck+1(z − z0)^(k+1) + ··· + cn(z − z0)^n.
Here, the cj are simply the coefficients of the polynomial z ↦ p(z + z0) after expansion, and k is the index of the first non-zero coefficient following the constant term. For z sufficiently close to z0 this function has behavior asymptotically similar to the simpler polynomial a + ck(z − z0)^k. More precisely, the function satisfies
|p(z) − a − ck(z − z0)^k| ≤ M|z − z0|^(k+1)
for some positive constant M in some neighborhood of z0. Therefore, if we define θ0 = (arg(a) + π − arg(ck)) / k and let z = z0 + r·e^(iθ0) trace a circle of radius r > 0 around z0, then for any sufficiently small r (so that the bound M holds), we see that
|p(z)| ≤ |a + ck r^k e^(ikθ0)| + M r^(k+1) = |a| − |ck| r^k + M r^(k+1).
When r is sufficiently close to 0 this upper bound for |p(z)| is strictly smaller than |a|, contradicting the definition of z0. Geometrically, we have found an explicit direction θ0 such that if one approaches z0 from that direction one can obtain values p(z) smaller in absolute value than |p(z0)|.
Another analytic proof can be obtained along this line of thought observing that, since |p(z)| > |p(0)| outside D, the minimum of |p(z)| on the whole complex plane is achieved at z0. If |p(z0)| > 0, then 1/p is a bounded holomorphic function in the entire complex plane since, for each complex number z, |1/p(z)| ≤ |1/p(z0)|. Applying Liouville's theorem, which states that a bounded entire function must be constant, this would imply that 1/p is constant and therefore that p is constant. This gives a contradiction, and hence p(z0) = 0.
Yet another analytic proof uses the argument principle. Let R be a positive real number large enough so that every root of p(z) has absolute value smaller than R; such a number must exist because every non-constant polynomial function of degree n has at most n zeros. For each r > R, consider the number
where c(r) is the circle centered at 0 with radius r oriented counterclockwise; then the argument principle says that this number is the number N of zeros of p(z) in the open ball centered at 0 with radius r, which, since r > R, is the total number of zeros of p(z). On the other hand, the integral of n/z along c(r) divided by 2πi is equal to n. But the difference between the two numbers is
The numerator of the rational expression being integrated has degree at most n − 1 and the degree of the denominator is n + 1. Therefore, the number above tends to 0 as r → +∞. But the number is also equal to N − n and so N = n.
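This counting integral is easy to check numerically. Below is a minimal sketch (the example cubic and the use of numpy are our own choices, not from the original text): for a radius enclosing all roots, the integral of p′/p around the circle, divided by 2πi, returns the degree.

```python
import numpy as np

p = np.poly1d([1, -2, 3, -4])   # p(z) = z^3 - 2z^2 + 3z - 4, degree n = 3
dp = p.deriv()

def zero_count(radius, samples=100_000):
    """Approximate (1/2*pi*i) * integral of p'(z)/p(z) over |z| = radius."""
    t = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
    z = radius * np.exp(1j * t)
    dz = 1j * z * (2.0 * np.pi / samples)   # dz along the circle
    return np.sum(dp(z) / p(z) * dz) / (2j * np.pi)

# A radius of 10 certainly encloses all roots here, so the count is n = 3.
print(round(zero_count(10.0).real))   # -> 3
```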
Another complex-analytic proof can be given by combining linear algebra with Cauchy's integral theorem. To establish that every complex polynomial of degree n > 0 has a zero, it suffices to show that every complex square matrix of size n > 0 has a (complex) eigenvalue. The proof of the latter statement is by contradiction.
Let A be a complex square matrix of size n > 0 and let In be the unit matrix of the same size. Assume A has no eigenvalues. Consider the resolvent function

R(z) = (z In − A)^(−1),

which is a meromorphic function on the complex plane with values in the vector space of matrices. The eigenvalues of A are precisely the poles of R(z). Since, by assumption, A has no eigenvalues, the function R(z) is an entire function and Cauchy's theorem implies that

∫c(r) R(z) dz = 0.
On the other hand, R(z) expanded as a geometric series gives:

R(z) = z^(−1) (In − z^(−1) A)^(−1) = z^(−1) (In + A/z + A^2/z^2 + ⋯).

This formula is valid outside the closed disc of radius ‖A‖ (the operator norm of A). Let r > ‖A‖. Then

∫c(r) R(z) dz = Σk≥0 ∫c(r) (A^k / z^(k+1)) dz = 2πi In
(in which only the summand k = 0 has a nonzero integral). This is a contradiction, and so A has an eigenvalue.
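The reduction between the two statements can also be made concrete in the other direction: a monic polynomial is the characteristic polynomial of its companion matrix, so a numerical eigenvalue routine exhibits its roots. A small sketch follows (the example polynomial and the use of numpy are our own choices):

```python
import numpy as np

coeffs = [1.0, 0.0, -1.0, -1.0]   # p(z) = z^3 - z - 1 (monic, highest power first)
n = len(coeffs) - 1

# Companion matrix: ones on the subdiagonal, last column -a_0, ..., -a_{n-1}.
C = np.zeros((n, n), dtype=complex)
C[1:, :-1] = np.eye(n - 1)
C[:, -1] = [-c for c in reversed(coeffs[1:])]

eigs = np.linalg.eigvals(C)        # eigenvalues of C = roots of p
print(eigs)
print(np.polyval(coeffs, eigs))    # ~0 for each eigenvalue: they are roots of p
```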
Finally, Rouché's theorem gives perhaps the shortest proof of the theorem: on a circle |z| = R large enough that the leading term of p(z) exceeds the sum of all the other terms in modulus, p(z) has the same number of zeros inside the circle as its leading term, namely n.
Topological proofs
Suppose the minimum of |p(z)| on the whole complex plane is achieved at z0; it was seen in the proof which uses Liouville's theorem that such a point must exist. We can write p(z) as a polynomial in z − z0: there is some natural number k and there are some complex numbers ck, ck+1, ..., cn such that ck ≠ 0 and:

p(z) = p(z0) + ck(z − z0)^k + ck+1(z − z0)^(k+1) + ⋯ + cn(z − z0)^n.
If p(z0) is nonzero, it follows that if a is a kth root of −p(z0)/ck and if t is positive and sufficiently small, then |p(z0 + ta)| < |p(z0)|, which is impossible, since |p(z0)| is the minimum of |p| on the whole complex plane.
For another topological proof by contradiction, suppose that the polynomial p(z) has no roots, and consequently is never equal to 0. Think of the polynomial as a map from the complex plane into the complex plane. It maps any circle |z| = R into a closed loop, a curve P(R). We will consider what happens to the winding number of P(R) at the extremes when R is very large and when R = 0. When R is a sufficiently large number, then the leading term z^n of p(z) dominates all other terms combined; in other words,

|z^n| > |an−1 z^(n−1) + ⋯ + a1 z + a0| whenever |z| = R.
When z traverses the circle once counter-clockwise, then z^n winds n times counter-clockwise around the origin (0,0), and P(R) likewise. At the other extreme, with |z| = 0, the curve P(0) is merely the single point p(0), which must be nonzero because p(z) is never zero. Thus p(0) must be distinct from the origin (0,0), which denotes 0 in the complex plane. The winding number of P(0) around the origin (0,0) is thus 0. Now changing R continuously will deform the loop continuously. At some R the winding number must change. But that can only happen if the curve P(R) includes the origin (0,0) for some R. But then for some z on that circle |z| = R we have p(z) = 0, contradicting our original assumption. Therefore, p(z) has at least one zero.
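The two extremes of this argument are easy to observe numerically. In the sketch below (the example polynomial and sampling choices are ours), the winding number of P(R) is n for a large radius and 0 for a small one, so it must change somewhere in between:

```python
import numpy as np

p = np.poly1d([1, 0, 0, 1])   # p(z) = z^3 + 1; its roots all have |z| = 1

def winding_number(radius, samples=100_000):
    """Winding number of the loop P(R) = p(R*e^{it}) around the origin."""
    t = np.linspace(0.0, 2.0 * np.pi, samples + 1)
    w = p(radius * np.exp(1j * t))
    # Accumulate the change of arg(w) along the loop, divided by 2*pi.
    return np.sum(np.angle(w[1:] / w[:-1])) / (2.0 * np.pi)

print(round(winding_number(10.0)))   # -> 3  (leading term z^3 dominates)
print(round(winding_number(0.1)))    # -> 0  (loop stays near p(0) = 1)
```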
Algebraic proofs
These proofs of the Fundamental Theorem of Algebra must make use of the following two facts about real numbers that are not algebraic but require only a small amount of analysis (more precisely, the intermediate value theorem in both cases):
every polynomial with an odd degree and real coefficients has some real root;
every non-negative real number has a square root.
The second fact, together with the quadratic formula, implies the theorem for real quadratic polynomials. In other words, algebraic proofs of the fundamental theorem actually show that if R is any real-closed field, then its extension C = R(√−1) is algebraically closed.
By induction
As mentioned above, it suffices to check the statement "every non-constant polynomial p(z) with real coefficients has a complex root". This statement can be proved by induction on the greatest non-negative integer k such that 2^k divides the degree n of p(z). Let a be the coefficient of z^n in p(z) and let F be a splitting field of p(z) over C; in other words, the field F contains C and there are elements z1, z2, ..., zn in F such that

p(z) = a(z − z1)(z − z2) ⋯ (z − zn).

If k = 0, then n is odd, and therefore p(z) has a real root. Now, suppose that n = 2^k m (with m odd and k > 0) and that the theorem is already proved when the degree of the polynomial has the form 2^(k−1) m′ with m′ odd. For a real number t, define:

qt(z) = ∏ over 1 ≤ i < j ≤ n of (z − zi − zj − t zi zj).

Then the coefficients of qt(z) are symmetric polynomials in the zi with real coefficients. Therefore, they can be expressed as polynomials with real coefficients in the elementary symmetric polynomials, that is, in −a1, a2, ..., (−1)^n an. So qt(z) has in fact real coefficients. Furthermore, the degree of qt(z) is n(n − 1)/2 = 2^(k−1) m(n − 1), and m(n − 1) is an odd number. So, using the induction hypothesis, qt has at least one complex root; in other words, zi + zj + t zi zj is complex for two distinct elements i and j from {1, ..., n}. Since there are more real numbers than pairs (i, j), one can find distinct real numbers t and s such that zi + zj + t zi zj and zi + zj + s zi zj are complex (for the same i and j). So, both zi + zj and zi zj are complex numbers. It is easy to check that every complex number has a complex square root, thus every complex polynomial of degree 2 has a complex root by the quadratic formula. It follows that zi and zj are complex numbers, since they are roots of the quadratic polynomial z^2 − (zi + zj)z + zi zj.
Joseph Shipman showed in 2007 that the assumption that odd degree polynomials have roots is stronger than necessary; any field in which polynomials of prime degree have roots is algebraically closed (so "odd" can be replaced by "odd prime" and this holds for fields of all characteristics). For axiomatization of algebraically closed fields, this is the best possible, as there are counterexamples if a single prime is excluded. However, these counterexamples rely on −1 having a square root. If we take a field where −1 has no square root, and every polynomial of degree n ∈ I has a root, where I is any fixed infinite set of odd numbers, then every polynomial f(x) of odd degree has a root (since f(x) x^k has a root, where k is chosen so that deg(f) + k ∈ I).
From Galois theory
Another algebraic proof of the fundamental theorem can be given using Galois theory. It suffices to show that C has no proper finite field extension. Let K/C be a finite extension. Since the normal closure of K over R still has a finite degree over C (or R), we may assume without loss of generality that K is a normal extension of R (hence it is a Galois extension, as every algebraic extension of a field of characteristic 0 is separable). Let G be the Galois group of this extension, and let H be a Sylow 2-subgroup of G, so that the order of H is a power of 2, and the index of H in G is odd. By the fundamental theorem of Galois theory, there exists a subextension L of K/R such that Gal(K/L) = H. As [L:R] = [G:H] is odd, and there are no nonlinear irreducible real polynomials of odd degree, we must have L = R, thus [K:R] and [K:C] are powers of 2. Assuming by way of contradiction that [K:C] > 1, we conclude that the 2-group Gal(K/C) contains a subgroup of index 2, so there exists a subextension M of K/C of degree 2 over C. However, C has no extension of degree 2, because every quadratic complex polynomial has a complex root, as mentioned above. This shows that [K:C] = 1, and therefore K = C, which completes the proof.
Geometric proofs
There exists still another way to approach the fundamental theorem of algebra, due to J. M. Almira and A. Romero: by Riemannian geometric arguments. The main idea here is to prove that the existence of a non-constant polynomial p(z) without zeros implies the existence of a flat Riemannian metric over the sphere S2. This leads to a contradiction since the sphere is not flat.
A Riemannian surface (M, g) is said to be flat if its Gaussian curvature, which we denote by Kg, is identically zero. Now, the Gauss–Bonnet theorem, when applied to the sphere S2, claims that

∫S2 Kg dA = 4π,

which proves that the sphere is not flat.
Let us now assume that n > 0 and

p(z) ≠ 0

for each complex number z. Let us define

p*(z) = z^n p(1/z).

Obviously, p*(z) ≠ 0 for all z in C. Consider the polynomial f(z) = p(z)p*(z). Then f(z) ≠ 0 for each z in C. Furthermore, the functional equation f(1/w) = f(w)/w^(2n) holds.
We can use this functional equation to prove that g, given by
for w in C, and
for w ∈ S2\{0}, is a well defined Riemannian metric over the sphere S2 (which we identify with the extended complex plane C ∪ {∞}).
Now, a simple computation shows that the Gaussian curvature Kg of this metric is a multiple of Δ log |f(w)|, which vanishes because log |f| is, locally, the real part of an analytic function (a branch of log f), and the real part of an analytic function is harmonic. This proves that Kg = 0.
Corollaries
Since the fundamental theorem of algebra can be seen as the statement that the field of complex numbers is algebraically closed, it follows that any theorem concerning algebraically closed fields applies to the field of complex numbers. Here are a few more consequences of the theorem, which are either about the field of real numbers or the relationship between the field of real numbers and the field of complex numbers:
The field of complex numbers is the algebraic closure of the field of real numbers.
Every polynomial in one variable z with complex coefficients is the product of a complex constant and polynomials of the form z + a with a complex.
Every polynomial in one variable x with real coefficients can be uniquely written as the product of a constant, polynomials of the form x + a with a real, and polynomials of the form x2 + ax + b with a and b real and a2 − 4b < 0 (which is the same thing as saying that the polynomial x2 + ax + b has no real roots). (By the Abel–Ruffini theorem, the real numbers a and b are not necessarily expressible in terms of the coefficients of the polynomial, the basic arithmetic operations and the extraction of n-th roots.) This implies that the number of non-real complex roots is always even and remains even when counted with their multiplicity.
Every rational function in one variable x, with real coefficients, can be written as the sum of a polynomial function with rational functions of the form a/(x − b)n (where n is a natural number, and a and b are real numbers), and rational functions of the form (ax + b)/(x2 + cx + d)n (where n is a natural number, and a, b, c, and d are real numbers such that c2 − 4d < 0). A corollary of this is that every rational function in one variable and real coefficients has an elementary primitive.
Every algebraic extension of the real field is isomorphic either to the real field or to the complex field.
Bounds on the zeros of a polynomial
While the fundamental theorem of algebra states a general existence result, it is of some interest, both from the theoretical and from the practical point of view, to have information on the location of the zeros of a given polynomial. The simplest result in this direction is a bound on the modulus: all zeros ζ of a monic polynomial z^n + an−1 z^(n−1) + ⋯ + a1 z + a0 satisfy an inequality |ζ| ≤ R∞, where

R∞ := 1 + max{|a0|, ..., |an−1|}.
As stated, this is not yet an existence result but rather an example of what is called an a priori bound: it says that if there are solutions then they lie inside the closed disk of center the origin and radius R∞. However, once coupled with the fundamental theorem of algebra it says that the disk contains in fact at least one solution. More generally, a bound can be given directly in terms of any p-norm ‖a‖p of the n-vector of coefficients a = (a0, a1, ..., an−1), that is |ζ| ≤ Rp, where Rp is precisely the q-norm of the 2-vector (1, ‖a‖p), q being the conjugate exponent of p (1/p + 1/q = 1), for any 1 ≤ p ≤ ∞. Thus, the modulus of any solution is also bounded by
Rp = [1 + (|a0|^p + |a1|^p + ⋯ + |an−1|^p)^(q/p)]^(1/q)

for 1 < p < ∞, and in particular

R2 = (|a0|^2 + |a1|^2 + ⋯ + |an|^2)^(1/2)

(where we define an to mean 1, which is reasonable since 1 is indeed the n-th coefficient of our polynomial). The case of a generic polynomial of degree n,

P(z) = an z^n + an−1 z^(n−1) + ⋯ + a1 z + a0,
is of course reduced to the case of a monic polynomial, dividing all coefficients by an ≠ 0. Also, in case that 0 is not a root, i.e. a0 ≠ 0, bounds from below on the roots ζ follow immediately as bounds from above on 1/ζ, that is, as bounds on the roots of

a0 z^n + a1 z^(n−1) + ⋯ + an.
Finally, the distance |ζ − ζ0| from the roots ζ to any point ζ0 can be estimated from below and above, seeing the numbers ζ − ζ0 as zeros of the polynomial P(ζ0 + z), whose coefficients are the Taylor expansion of P(z) at z = ζ0.
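As a quick illustration of the a priori bound, the following sketch (example polynomial ours; numpy's root finder is used only for checking) verifies that every zero lies within |z| ≤ R∞ = 1 + max|ak|:

```python
import numpy as np

a = [1.0, -3.0, 2.0, 5.0, -7.0]            # monic: z^4 - 3z^3 + 2z^2 + 5z - 7
R_inf = 1.0 + max(abs(c) for c in a[1:])   # 1 + max(|a_0|, ..., |a_{n-1}|)

roots = np.roots(a)
print(R_inf)                                # 8.0
print(all(abs(z) <= R_inf for z in roots))  # True: all zeros in |z| <= R_inf
```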
Let ζ be a root of the polynomial

z^n + an−1 z^(n−1) + ⋯ + a1 z + a0;

in order to prove the inequality |ζ| ≤ Rp we can assume, of course, |ζ| > 1. Writing the equation as

−ζ^n = an−1 ζ^(n−1) + ⋯ + a1 ζ + a0,

and using Hölder's inequality we find

|ζ|^n ≤ ‖a‖p ‖(ζ^(n−1), ..., ζ, 1)‖q,

where ‖a‖p denotes the p-norm of the vector of coefficients (a0, ..., an−1). Now, if p = 1, this is

|ζ|^n ≤ ‖a‖1 max{|ζ|^(n−1), ..., |ζ|, 1} = ‖a‖1 |ζ|^(n−1),

thus

|ζ| ≤ max{1, ‖a‖1} = R1.

In the case 1 < p ≤ ∞, taking into account the summation formula for a geometric progression, we have

|ζ|^n ≤ ‖a‖p (|ζ|^((n−1)q) + ⋯ + |ζ|^q + 1)^(1/q) ≤ ‖a‖p (|ζ|^(nq) / (|ζ|^q − 1))^(1/q),

thus

|ζ|^(nq) ≤ ‖a‖p^q |ζ|^(nq) / (|ζ|^q − 1),

and simplifying,

|ζ|^q ≤ 1 + ‖a‖p^q.

Therefore

|ζ| ≤ (1 + ‖a‖p^q)^(1/q) = Rp

holds, for all 1 ≤ p ≤ ∞.
| Mathematics | Algebra | null |
51420 | https://en.wikipedia.org/wiki/Carbonic%20acid | Carbonic acid | Carbonic acid is a chemical compound with the chemical formula . The molecule rapidly converts to water and carbon dioxide in the presence of water. However, in the absence of water, it is quite stable at room temperature. The interconversion of carbon dioxide and carbonic acid is related to the breathing cycle of animals and the acidification of natural waters.
In biochemistry and physiology, the name "carbonic acid" is sometimes applied to aqueous solutions of carbon dioxide. These chemical species play an important role in the bicarbonate buffer system, used to maintain acid–base homeostasis.
Terminology in biochemical literature
In chemistry, the term "carbonic acid" strictly refers to the chemical compound with the formula . Some biochemistry literature effaces the distinction between carbonic acid and carbon dioxide dissolved in extracellular fluid.
In physiology, carbon dioxide excreted by the lungs may be called volatile acid or respiratory acid.
Anhydrous carbonic acid
At ambient temperatures, pure carbonic acid is a stable gas. There are two main methods to produce anhydrous carbonic acid: reaction of hydrogen chloride and potassium bicarbonate at 100 K in methanol and proton irradiation of pure solid carbon dioxide. Chemically, it behaves as a diprotic Brønsted acid.
Carbonic acid monomers exhibit three conformational isomers: cis–cis, cis–trans, and trans–trans.
At low temperatures and atmospheric pressure, solid carbonic acid is amorphous and lacks Bragg peaks in X-ray diffraction. But at high pressure, carbonic acid crystallizes, and modern analytical spectroscopy can measure its geometry.
According to neutron diffraction of dideuterated carbonic acid () in a hybrid clamped cell (Russian alloy/copper-beryllium) at 1.85 GPa, the molecules are planar and form dimers joined by pairs of hydrogen bonds. All three C-O bonds are nearly equidistant at 1.34 Å, intermediate between typical C-O and C=O distances (respectively 1.43 and 1.23 Å). The unusual C-O bond lengths are attributed to delocalized π bonding in the molecule's center and extraordinarily strong hydrogen bonds. The same effects also induce a very short O—O separation (2.13 Å), through the 136° O-H-O angle imposed by the doubly hydrogen-bonded 8-membered rings. Longer O—O distances are observed in strong intramolecular hydrogen bonds, e.g. in oxalic acid, where the distances exceed 2.4 Å.
In aqueous solution
In even a slight presence of water, carbonic acid dehydrates to carbon dioxide and water, which then catalyzes further decomposition. For this reason, carbon dioxide can be considered the carbonic acid anhydride.
The hydration equilibrium constant at 25 °C is Kh = [H2CO3]/[CO2] ≈ 1.7×10−3 in pure water and ≈ 1.2×10−3 in seawater. Hence the majority of carbon dioxide at geophysical or biological air-water interfaces does not convert to carbonic acid, remaining dissolved gas. However, the uncatalyzed equilibrium is reached quite slowly: the rate constants are 0.039 s−1 for hydration and 23 s−1 for dehydration.
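The quoted rate constants and the hydration equilibrium constant are mutually consistent, since at equilibrium Kh equals the ratio of the hydration and dehydration rate constants; the small check below (plain Python; variable names are ours) makes this explicit:

```python
# Consistency check for the values quoted above.
k_hyd = 0.039   # s^-1, CO2 + H2O -> H2CO3 (value from the text)
k_deh = 23.0    # s^-1, H2CO3 -> CO2 + H2O (value from the text)

K_h = k_hyd / k_deh
print(f"K_h = [H2CO3]/[CO2] ~ {K_h:.2e}")            # ~1.7e-03, as stated
print(f"fraction as H2CO3 ~ {K_h / (1 + K_h):.4%}")  # ~0.17% of dissolved CO2
```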
In biological solutions
In the presence of the enzyme carbonic anhydrase, equilibrium is instead reached rapidly, and the following reaction takes precedence: HCO3− + H+ ⇌ CO2 + H2O
When the created carbon dioxide exceeds its solubility, gas evolves and a third equilibrium CO2(soln) ⇌ CO2(g) must also be taken into consideration. The equilibrium constant for this reaction is defined by Henry's law.
The two reactions can be combined for the equilibrium in solution. When Henry's law is used to calculate the denominator, care is needed with regard to units, since the Henry's law constant can be expressed with 8 different dimensionalities.
In water pH control
In wastewater treatment and agricultural irrigation, carbonic acid is used to acidify water, similarly to the sulfuric acid and sulfurous acid produced by sulfur burners.
Under high CO2 partial pressure
In the beverage industry, sparkling or "fizzy water" is usually referred to as carbonated water. It is made by dissolving carbon dioxide under a small positive pressure in water. Many soft drinks treated the same way effervesce.
Significant amounts of molecular H2CO3 exist in aqueous solutions subjected to pressures of multiple gigapascals (tens of thousands of atmospheres) in planetary interiors. Pressures of 0.6–1.6 GPa at 100 K, and 0.75–1.75 GPa at 300 K, are attained in the cores of large icy satellites such as Ganymede, Callisto, and Titan, where water and carbon dioxide are present. Pure carbonic acid, being denser, is expected to have sunk under the ice layers and to separate them from the rocky cores of these moons.
Relationship to bicarbonate and carbonate
Carbonic acid is the formal Brønsted–Lowry conjugate acid of the bicarbonate anion, stable in alkaline solution. The protonation constants have been measured to great precision, but depend on the overall ionic strength I. The two equilibria most easily measured are CO2 + H2O ⇌ HCO3− + H+ and HCO3− ⇌ CO32− + H+, with equilibrium constants Ka1 = [HCO3−][H+]/[CO2] and Ka2 = [CO32−][H+]/[HCO3−], where brackets indicate the concentration of a species. Both constants decrease with increasing I. In a solution absent other ions, these equilibria imply stepwise dissociation constants of about pKa1 ≈ 6.35 and pKa2 ≈ 10.33 at 25 °C.
To interpret these numbers, note that two chemical species in an acid equilibrium are equiconcentrated when the pH equals the pKa. In particular, the intracellular fluid (cytosol) in biological systems exhibits a pH near the first pKa, so that carbonic acid will be almost 50%-dissociated at equilibrium.
Ocean acidification
The Bjerrum plot shows typical equilibrium concentrations, in solution, in seawater, of carbon dioxide and the various species derived from it, as a function of pH. As human industrialization has increased the proportion of carbon dioxide in Earth's atmosphere, the proportion of carbon dioxide dissolved in sea- and freshwater as carbonic acid is also expected to increase. This rise in dissolved acid is also expected to acidify those waters, generating a decrease in pH. It has been estimated that the increase in dissolved carbon dioxide has already caused the ocean's average surface pH to decrease by about 0.1 from pre-industrial levels.
| Physical sciences | Specific acids | Chemistry |
51423 | https://en.wikipedia.org/wiki/P-adic%20number | P-adic number | In number theory, given a prime number , the -adic numbers form an extension of the rational numbers which is distinct from the real numbers, though with some similar properties; -adic numbers can be written in a form similar to (possibly infinite) decimals, but with digits based on a prime number rather than ten, and extending to the left rather than to the right.
For example, comparing the expansion of the rational number in base vs. the -adic expansion,
Formally, given a prime number p, a p-adic number can be defined as a series

s = av p^v + av+1 p^(v+1) + av+2 p^(v+2) + ⋯

where v is an integer (possibly negative), and each ai is an integer such that 0 ≤ ai < p. A p-adic integer is a p-adic number such that v ≥ 0.

In general the series that represents a p-adic number is not convergent in the usual sense, but it is convergent for the p-adic absolute value |s|p = p^(−w), where w is the least integer i such that ai ≠ 0 (if all ai are zero, one has the zero p-adic number, which has 0 as its p-adic absolute value).
Every rational number can be uniquely expressed as the sum of a series as above, with respect to the -adic absolute value. This allows considering rational numbers as special -adic numbers, and alternatively defining the -adic numbers as the completion of the rational numbers for the -adic absolute value, exactly as the real numbers are the completion of the rational numbers for the usual absolute value.
-adic numbers were first described by Kurt Hensel in 1897, though, with hindsight, some of Ernst Kummer's earlier work can be interpreted as implicitly using -adic numbers.
Motivation
Roughly speaking, modular arithmetic modulo a positive integer consists of "approximating" every integer by the remainder of its division by , called its residue modulo . The main property of modular arithmetic is that the residue modulo of the result of a succession of operations on integers is the same as the result of the same succession of operations on residues modulo . If one knows that the absolute value of the result is less than , this allows a computation of the result which does not involve any integer larger than .
For larger results, an old method, still in common use, consists of using several small moduli that are pairwise coprime, and applying the Chinese remainder theorem for recovering the result modulo the product of the moduli.
Another method discovered by Kurt Hensel consists of using a prime modulus , and applying Hensel's lemma for recovering iteratively the result modulo If the process is continued infinitely, this provides eventually a result which is a -adic number.
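This iterative recovery can be sketched in a few lines of code. The example below (the function name and the choice f(x) = x^2 − 2 with p = 7 are ours) lifts a root modulo 7 to a root modulo 7^6 with a Newton step, in the spirit of Hensel's lemma:

```python
def hensel_sqrt(a, p, k):
    """Lift a solution of x^2 = a (mod p) to a solution mod p^k
    (p an odd prime, a a quadratic residue mod p, found by brute force)."""
    x = next(r for r in range(p) if (r * r - a) % p == 0)
    mod = p
    for _ in range(k - 1):
        mod *= p
        # Newton step x <- x - f(x)/f'(x), computed modulo the new power of p
        # (pow(..., -1, mod) is the modular inverse, Python 3.8+).
        x = (x - (x * x - a) * pow(2 * x, -1, mod)) % mod
    return x, mod

x, mod = hensel_sqrt(2, 7, 6)
print(x, (x * x - 2) % mod == 0)   # True: x^2 = 2 holds modulo 7^6
```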
Basic lemmas
The theory of -adic numbers is fundamentally based on the two following lemmas
Every nonzero rational number r can be written r = p^v m/n, where v, m, and n are integers and neither m nor n is divisible by p. The exponent v is uniquely determined by the rational number and is called its p-adic valuation (this definition is a particular case of a more general definition, given below). The proof of the lemma results directly from the fundamental theorem of arithmetic.
Every nonzero rational number r of valuation v can be uniquely written r = a p^v + s, where s is a rational number of valuation greater than v, and a is an integer such that 0 < a < p.
The proof of this lemma results from modular arithmetic: By the above lemma, r = p^v m/n, where m and n are integers coprime with p.
By Bézout's lemma, there exist integers a and q, with 0 < a < p, such that m = a n + q p.
Setting s = p^(v+1) q/n (hence r = a p^v + s), we have
r = p^v (a n + q p)/n = a p^v + p^(v+1) q/n.
To show the uniqueness of this representation, observe that if r = a p^v + s = a′ p^v + s′, with
0 < a, a′ < p
and s, s′ both of valuation greater than v,
there holds by difference (a − a′) p^v = s′ − s, with |a − a′| < p and s′ − s of valuation greater than v.
Write s′ − s = p^(v+1) u/w, where w is coprime to p; then
w(a − a′) = p u, which is possible only if p divides a − a′, and hence a = a′.
Thus s = s′.
The above process can be iterated starting from instead of , giving the following.
Given a nonzero rational number r of valuation v and a positive integer k, there are a rational number s of nonnegative valuation and k uniquely defined nonnegative integers a0, ..., ak−1 less than p such that a0 > 0 and

r = a0 p^v + a1 p^(v+1) + ⋯ + ak−1 p^(v+k−1) + p^(v+k) s.

The p-adic numbers are essentially obtained by continuing this infinitely to produce an infinite series.
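The digit-by-digit process is easy to carry out mechanically for a rational number of nonnegative valuation. A minimal sketch (helper name ours, using Python's Fraction and modular inverses):

```python
from fractions import Fraction

def p_adic_digits(r, p, k):
    """First k digits a_0, a_1, ... of the p-adic expansion of a rational r
    of nonnegative valuation (denominator coprime to p)."""
    digits = []
    for _ in range(k):
        a = (r.numerator * pow(r.denominator, -1, p)) % p   # r mod p
        digits.append(a)
        r = (r - a) / p          # next remainder, still of valuation >= 0
    return digits

print(p_adic_digits(Fraction(1, 3), 5, 8))   # [2, 3, 1, 3, 1, 3, 1, 3]
```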
p-adic series
The p-adic numbers are commonly defined by means of p-adic series.
A p-adic series is a formal power series of the form

rv p^v + rv+1 p^(v+1) + rv+2 p^(v+2) + ⋯

where v is an integer and the ri are rational numbers that either are zero or have a nonnegative valuation (that is, the denominator of ri is not divisible by p).
Every rational number may be viewed as a p-adic series with a single nonzero term, consisting of its factorization of the form r = p^v m/n, with m and n both coprime with p.
Two p-adic series Σ ri p^i and Σ si p^i are equivalent if there is an integer N such that, for every integer n > N, the rational number

(r v p^v + ⋯ + rn p^n) − (s w p^w + ⋯ + sn p^n)

is zero or has a p-adic valuation greater than n.
A p-adic series is normalized if either all ai are integers such that 0 ≤ ai < p and av > 0, or all ai are zero. In the latter case, the series is called the zero series.
Every -adic series is equivalent to exactly one normalized series. This normalized series is obtained by a sequence of transformations, which are equivalences of series; see § Normalization of a -adic series, below.
In other words, the equivalence of -adic series is an equivalence relation, and each equivalence class contains exactly one normalized -adic series.
The usual operations of series (addition, subtraction, multiplication, division) are compatible with equivalence of -adic series. That is, denoting the equivalence with , if , and are nonzero -adic series such that one has
The -adic numbers are often defined as the equivalence classes of -adic series, in a similar way as the definition of the real numbers as equivalence classes of Cauchy sequences. The uniqueness property of normalization, allows uniquely representing any -adic number by the corresponding normalized -adic series. The compatibility of the series equivalence leads almost immediately to basic properties of -adic numbers:
Addition, multiplication and multiplicative inverse of -adic numbers are defined as for formal power series, followed by the normalization of the result.
With these operations, the -adic numbers form a field, which is an extension field of the rational numbers.
The valuation of a nonzero -adic number , commonly denoted is the exponent of in the first non zero term of the corresponding normalized series; the valuation of zero is
The -adic absolute value of a nonzero -adic number , is for the zero -adic number, one has
Normalization of a p-adic series
Starting with the series Σ ri p^i, the first above lemma allows getting an equivalent series such that the p-adic valuation of rv is zero. For that, one considers the first nonzero ri. If its p-adic valuation is zero, it suffices to change v into i, that is to start the summation from i. Otherwise, the p-adic valuation of ri is j > 0, and ri = p^j si where the valuation of si is zero; so, one gets an equivalent series by changing ri to 0 and ri+j to ri+j + si. Iterating this process, one gets eventually, possibly after infinitely many steps, an equivalent series that either is the zero series or is a series such that the valuation of its first coefficient is zero.
Then, if the series is not normalized, consider the first nonzero ri that is not an integer in the interval [0, p − 1]. The second above lemma allows writing it ri = ai + p si; one gets an equivalent series by replacing ri with ai and adding si to ri+1. Iterating this process, possibly infinitely many times, provides eventually the desired normalized p-adic series.
Definition
There are several equivalent definitions of -adic numbers. The one that is given here is relatively elementary, since it does not involve any other mathematical concepts than those introduced in the preceding sections. Other equivalent definitions use completion of a discrete valuation ring (see ), completion of a metric space (see ), or inverse limits (see ).
A -adic number can be defined as a normalized -adic series. Since there are other equivalent definitions that are commonly used, one says often that a normalized -adic series represents a -adic number, instead of saying that it is a -adic number.
One can say also that any -adic series represents a -adic number, since every -adic series is equivalent to a unique normalized -adic series. This is useful for defining operations (addition, subtraction, multiplication, division) of -adic numbers: the result of such an operation is obtained by normalizing the result of the corresponding operation on series. This well defines operations on -adic numbers, since the series operations are compatible with equivalence of -adic series.
With these operations, -adic numbers form a field called the field of -adic numbers and denoted or There is a unique field homomorphism from the rational numbers into the -adic numbers, which maps a rational number to its -adic expansion. The image of this homomorphism is commonly identified with the field of rational numbers. This allows considering the -adic numbers as an extension field of the rational numbers, and the rational numbers as a subfield of the -adic numbers.
The valuation of a nonzero -adic number , commonly denoted is the exponent of in the first nonzero term of every -adic series that represents . By convention, that is, the valuation of zero is This valuation is a discrete valuation. The restriction of this valuation to the rational numbers is the -adic valuation of that is, the exponent in the factorization of a rational number as with both and coprime with .
p-adic integers
The -adic integers are the -adic numbers with a nonnegative valuation.
A p-adic integer can be represented as a sequence

x = (x1 mod p, x2 mod p^2, x3 mod p^3, ...)

of residues xk mod p^k for each integer k, satisfying the compatibility relations xi ≡ xj (mod p^i) for i < j.
Every integer is a p-adic integer (including zero, whose valuation is +∞). The rational numbers of the form m/n with n coprime with p are also p-adic integers (for the reason that n has an inverse mod p^k for every k).
The p-adic integers form a commutative ring, denoted Zp, that has the following properties.
It is an integral domain, since it is a subring of a field, or since the first term of the series representation of the product of two nonzero p-adic series is the product of their first terms.
The units (invertible elements) of Zp are the p-adic numbers of valuation zero.
It is a principal ideal domain, such that each ideal is generated by a power of p.
It is a local ring of Krull dimension one, since its only prime ideals are the zero ideal and the ideal generated by p, the unique maximal ideal.
It is a discrete valuation ring, since this results from the preceding properties.
It is the completion of the local ring Z(p), which is the localization of Z at the prime ideal pZ.
The last property provides a definition of the p-adic numbers that is equivalent to the above one: the field of the p-adic numbers is the field of fractions of the completion of the localization of the integers at the prime ideal generated by p.
Topological properties
The -adic valuation allows defining an absolute value on -adic numbers: the -adic absolute value of a nonzero -adic number is
where is the -adic valuation of . The -adic absolute value of is This is an absolute value that satisfies the strong triangle inequality since, for every and one has
if and only if
Moreover, if one has
This makes the -adic numbers a metric space, and even an ultrametric space, with the -adic distance defined by
As a metric space, the -adic numbers form the completion of the rational numbers equipped with the -adic absolute value. This provides another way for defining the -adic numbers. However, the general construction of a completion can be simplified in this case, because the metric is defined by a discrete valuation (in short, one can extract from every Cauchy sequence a subsequence such that the differences between two consecutive terms have strictly decreasing absolute values; such a subsequence is the sequence of the partial sums of a -adic series, and thus a unique normalized -adic series can be associated to every equivalence class of Cauchy sequences; so, for building the completion, it suffices to consider normalized -adic series instead of equivalence classes of Cauchy sequences).
As the metric is defined from a discrete valuation, every open ball is also closed. More precisely, the open ball B(x, r) = {y : dp(x, y) < r} equals the closed ball B[x, p^(−v)] = {y : dp(x, y) ≤ p^(−v)}, where v is the least integer such that p^(−v) < r. Similarly, B[x, r] = B(x, p^(−w)), where w is the greatest integer such that p^(−w) > r.
This implies that the -adic numbers form a locally compact space, and the -adic integers—that is, the ball —form a compact space.
p-adic expansion of rational numbers
The decimal expansion of a positive rational number is its representation as a series
where is an integer and each is also an integer such that This expansion can be computed by long division of the numerator by the denominator, which is itself based on the following theorem: If is a rational number such that there is an integer such that and with The decimal expansion is obtained by repeatedly applying this result to the remainder which in the iteration assumes the role of the original rational number .
The p-adic expansion of a rational number is defined similarly, but with a different division step. More precisely, given a fixed prime number p, every nonzero rational number r can be uniquely written as r = p^v m/n, where v is a (possibly negative) integer, m and n are coprime integers both coprime with p, and n is positive. The integer v is the p-adic valuation of r, denoted vp(r), and p^(−v) is its p-adic absolute value, denoted |r|p (the absolute value is small when the valuation is large). The division step consists of writing

r = a p^v + r′

where a is an integer such that 0 < a < p, and r′ is either zero, or a rational number such that |r′|p < p^(−v) (that is, vp(r′) > v).
The p-adic expansion of r is the formal power series

r = a0 p^v + a1 p^(v+1) + a2 p^(v+2) + ⋯

obtained by repeating indefinitely the above division step on successive remainders. In a p-adic expansion, all ai are integers such that 0 ≤ ai < p.
If r is a nonnegative integer, the process stops eventually with a zero remainder; in this case, the series is completed by trailing terms with a zero coefficient, and is the representation of r in base p.
The existence and the computation of the p-adic expansion of a rational number results from Bézout's identity in the following way. If, as above, r = p^v m/n, and m and n are coprime, there exist integers t and u such that t n + u p = 1. So

r = p^v m/n = p^v m (t n + u p)/n = p^v (m t + m u p/n).

Then, the Euclidean division of m t by p gives

m t = q p + a,

with 0 < a < p (a cannot be zero, since m t is not divisible by p). This gives the division step as

r = a p^v + p^(v+1) (q n + m u)/n,

so that in the iteration

(q n + m u)/n

is the new rational number.
The uniqueness of the division step and of the whole p-adic expansion is easy: if a p^v + r′ = b p^v + s′, with 0 < a, b < p, one has (a − b) p^v = s′ − r′. This means p^(v+1) divides (a − b) p^v. Since 0 < a < p and 0 < b < p, the following must be true: −p < a − b < p and p divides a − b. Thus, one gets a = b, and since (a − b) p^v = s′ − r′, it must be that r′ = s′.
The -adic expansion of a rational number is a series that converges to the rational number, if one applies the definition of a convergent series with the -adic absolute value.
In the standard -adic notation, the digits are written in the same order as in a standard base- system, namely with the powers of the base increasing to the left. This means that the production of the digits is reversed and the limit happens on the left hand side.
The p-adic expansion of a rational number is eventually periodic. Conversely, a series Σ ai p^i with 0 ≤ ai < p converges (for the p-adic absolute value) to a rational number if and only if it is eventually periodic; in this case, the series is the p-adic expansion of that rational number. The proof is similar to that of the similar result for repeating decimals.
Example
Let us compute the 5-adic expansion of 1/3. Bézout's identity for 5 and the denominator 3 is 2 ⋅ 3 + (−1) ⋅ 5 = 1 (for larger examples, this can be computed with the extended Euclidean algorithm). Thus

1/3 = 2 + 5 (−1/3).

For the next step, one has to expand −1/3 (the factor 5 has to be viewed as a "shift" of the p-adic valuation, similar to the basis of any number expansion, and thus it should not be itself expanded). To expand −1/3, we start from the same Bézout's identity and multiply it by −1, giving

−1/3 = −2 + 5/3.

The "integer part" −2 is not in the right interval. So, one has to use Euclidean division by 5 for getting −2 = 3 + (−1) ⋅ 5, giving

−1/3 = 3 + 5 (−2/3),

and the expansion in the first step becomes

1/3 = 2 + 5 (3 + 5 (−2/3)) = 2 + 3 ⋅ 5 + (−2/3) ⋅ 5^2.

Similarly, one has

−2/3 = 1 + 5 (−1/3),

and

1/3 = 2 + 3 ⋅ 5 + 1 ⋅ 5^2 + (−1/3) ⋅ 5^3.

As the "remainder" −1/3 has already been found, the process can be continued easily, giving coefficients 3 for odd powers of five, and 1 for even powers.

Or in the standard 5-adic notation

1/3 = …1313132 (base 5)

with the ellipsis on the left hand side.
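One can check the expansion numerically: the partial sums Sk of the series satisfy 3 Sk ≡ 1 (mod 5^k), which is exactly convergence to 1/3 for the 5-adic absolute value. A small verification (plain Python, digit list taken from the computation above):

```python
digits = [2] + [3, 1] * 5          # 2, then 3 and 1 alternating
S, power = 0, 1
for k, d in enumerate(digits, start=1):
    S += d * power                  # partial sum S_k of the 5-adic series
    power *= 5
    assert (3 * S) % 5**k == 1      # S_k agrees with 1/3 modulo 5^k

print("all partial sums match 1/3 mod 5^k")
```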
Positional notation
It is possible to use a positional notation similar to that which is used to represent numbers in base .
Let av p^v + av+1 p^(v+1) + av+2 p^(v+2) + ⋯ be a normalized p-adic series, i.e. each ai is an integer in the interval [0, p − 1]. One can suppose that v ≤ 0 by setting ai = 0 for 0 ≤ i < v (if v > 0), and adding the resulting zero terms to the series.
If v = 0, the positional notation consists of writing the ai consecutively, ordered by decreasing values of i, often with p appearing on the right as an index:
So, the computation of the example above shows that
and
When a separating dot is added before the digits with negative index, and, if the index is present, it appears just after the separating dot. For example,
and
If a p-adic representation is finite on the left (that is, ai = 0 for large values of i), then it has the value of a nonnegative rational number of the form m p^v, with m and v integers and m ≥ 0. These rational numbers are exactly the nonnegative rational numbers that have a finite representation in base p. For these rational numbers, the two representations are the same.
Modular properties
The quotient ring Zp/p^n Zp may be identified with the ring Z/p^n Z of the integers modulo p^n. This can be shown by remarking that every p-adic integer, represented by its normalized p-adic series, is congruent modulo p^n with its partial sum a0 + a1 p + ⋯ + an−1 p^(n−1), whose value is an integer in the interval [0, p^n − 1]. A straightforward verification shows that this defines a ring isomorphism from Zp/p^n Zp to Z/p^n Z.
The inverse limit of the rings Z/p^n Z is defined as the ring formed by the sequences (x1, x2, ...) such that xn belongs to Z/p^n Z and xn+1 ≡ xn (mod p^n) for every n.
The mapping that maps a normalized p-adic series to the sequence of its partial sums is a ring isomorphism from Zp to the inverse limit of the Z/p^n Z. This provides another way for defining p-adic integers (up to an isomorphism).
This definition of p-adic integers is specially useful for practical computations, as allowing building p-adic integers by successive approximations.
For example, for computing the p-adic (multiplicative) inverse of an integer, one can use Newton's method, starting from the inverse modulo p; then, each Newton step computes the inverse modulo p^(2n) from the inverse modulo p^n.
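A minimal sketch of this Newton iteration (function name and example values ours): the recurrence x ← x(2 − ax) doubles the precision at each step, because ax ≡ 1 (mod p^n) implies ax(2 − ax) ≡ 1 (mod p^(2n)).

```python
def p_adic_inverse(a, p, k):
    """Inverse of a modulo p^k (a coprime to p), by Newton's method."""
    x = pow(a, -1, p)             # start: inverse modulo p
    prec = 1
    while prec < k:
        prec = min(2 * prec, k)   # precision (number of p-adic digits) doubles
        x = x * (2 - a * x) % p**prec
    return x

x = p_adic_inverse(3, 5, 8)
print(x, (3 * x) % 5**8 == 1)     # True: x is 1/3 in Z_5 to 8 digits
```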
The same method can be used for computing the p-adic square root of an integer that is a quadratic residue modulo p. This seems to be the fastest known method for testing whether a large integer is a square: it suffices to test whether the given integer is the square of the value found in Z/p^n Z. Applying Newton's method to find the square root requires p^n to be larger than twice the given integer, which is quickly satisfied.
Hensel lifting is a similar method that allows to "lift" the factorization modulo p of a polynomial with integer coefficients to a factorization modulo p^n for large values of n. This is commonly used by polynomial factorization algorithms.
Notation
There are several different conventions for writing -adic expansions. So far this article has used a notation for -adic expansions in which powers of increase from right to left. With this right-to-left notation the 3-adic expansion of for example, is written as
When performing arithmetic in this notation, digits are carried to the left. It is also possible to write -adic expansions so that the powers of increase from left to right, and digits are carried to the right. With this left-to-right notation the 3-adic expansion of is
p-adic expansions may be written with other sets of digits instead of {0, 1, ..., p − 1}. For example, a 3-adic expansion can be written using balanced ternary digits {−1, 0, 1}, with −1 representing negative one.
In fact any set of p integers which are in distinct residue classes modulo p may be used as p-adic digits. In number theory, Teichmüller representatives are sometimes used as digits.
Quote notation is a variant of the p-adic representation of rational numbers that was proposed in 1979 by Eric Hehner and Nigel Horspool for implementing on computers the (exact) arithmetic with these numbers.
Cardinality
Both Zp and Qp are uncountable and have the cardinality of the continuum. For Zp, this results from the p-adic representation, which defines a bijection of Zp on the set of all sequences of digits in {0, ..., p − 1}. For Qp, this results from its expression as a countably infinite union of copies of Zp.
Algebraic closure
Qp contains Q and is a field of characteristic 0.
Because −1 can be written as a sum of squares in Qp, Qp cannot be turned into an ordered field.
The field of real numbers has only a single proper algebraic extension: the complex numbers C. In other words, this quadratic extension is already algebraically closed. By contrast, the algebraic closure of Qp has infinite degree, that is, Qp has infinitely many inequivalent algebraic extensions. Also contrasting the case of real numbers, although there is a unique extension of the p-adic valuation to the algebraic closure, the latter is not (metrically) complete. Its (metric) completion is called Cp or Ωp. Here an end is reached, as Cp is algebraically closed. However unlike C this field is not locally compact.
Cp and C are isomorphic as rings, so we may regard Cp as C endowed with an exotic metric. The proof of existence of such a field isomorphism relies on the axiom of choice, and does not provide an explicit example of such an isomorphism (that is, it is not constructive).
If K is any finite Galois extension of Qp, the Galois group Gal(K/Qp) is solvable. Thus, the Galois group of the algebraic closure over Qp is prosolvable.
Multiplicative group
Qp contains the n-th cyclotomic field (n > 2) if and only if n divides p − 1. For instance, the n-th cyclotomic field is a subfield of Q13 if and only if n = 1, 2, 3, 4, 6, or 12. In particular, there is no multiplicative p-torsion in Qp if p > 2. Also, −1 is the only non-trivial torsion element in Q2.
Given a natural number k, the index of the multiplicative group of the k-th powers of the non-zero elements of Qp in the multiplicative group of Qp is finite.
The number e, defined as the sum of reciprocals of factorials, is not a member of any p-adic field; but e^p belongs to Qp for p ≠ 2. For p = 2 one must take at least the fourth power. (Thus a number with similar properties as e, namely a p-th root of e^p, is a member of the algebraic closure of Qp for all p.)
Local–global principle
Helmut Hasse's local–global principle is said to hold for an equation if it can be solved over the rational numbers if and only if it can be solved over the real numbers and over the -adic numbers for every prime . This principle holds, for example, for equations given by quadratic forms, but fails for higher polynomials in several indeterminates.
Rational arithmetic with Hensel lifting
Generalizations and related concepts
The reals and the -adic numbers are the completions of the rationals; it is also possible to complete other fields, for instance general algebraic number fields, in an analogous way. This will be described now.
Suppose D is a Dedekind domain and E is its field of fractions. Pick a non-zero prime ideal P of D. If x is a non-zero element of E, then xD is a fractional ideal and can be uniquely factored as a product of positive and negative powers of non-zero prime ideals of D. We write ordP(x) for the exponent of P in this factorization, and for any choice of number c greater than 1 we can set

|x|P = c^(−ordP(x)).
Completing with respect to this absolute value |⋅|P yields a field EP, the proper generalization of the field of p-adic numbers to this setting. The choice of c does not change the completion (different choices yield the same concept of Cauchy sequence, so the same completion). It is convenient, when the residue field D/P is finite, to take for c the size of D/P.
For example, when E is a number field, Ostrowski's theorem says that every non-trivial non-Archimedean absolute value on E arises as some |⋅|P. The remaining non-trivial absolute values on E arise from the different embeddings of E into the real or complex numbers. (In fact, the non-Archimedean absolute values can be considered as simply the different embeddings of E into the fields Cp, thus putting the description of all
the non-trivial absolute values of a number field on a common footing.)
Often, one needs to simultaneously keep track of all the above-mentioned completions when E is a number field (or more generally a global field), which are seen as encoding "local" information. This is accomplished by adele rings and idele groups.
p-adic integers can be extended to p-adic solenoids Tp. There is a map from Tp to the circle group whose fibers are the p-adic integers Zp, in analogy to how there is a map from R to the circle whose fibers are Z.
| Mathematics | Prime numbers | null |
51426 | https://en.wikipedia.org/wiki/Cantor%27s%20diagonal%20argument | Cantor's diagonal argument | Cantor's diagonal argument (among various similar names) is a mathematical proof that there are infinite sets which cannot be put into one-to-one correspondence with the infinite set of natural numbers: informally, that there are sets which in some sense contain more elements than there are positive integers. Such sets are now called uncountable sets, and the size of infinite sets is treated by the theory of cardinal numbers, which Cantor began.
Georg Cantor published this proof in 1891, but it was not his first proof of the uncountability of the real numbers, which appeared in 1874.
However, it demonstrates a general technique that has since been used in a wide range of proofs, including the first of Gödel's incompleteness theorems and Turing's answer to the Entscheidungsproblem. Diagonalization arguments are often also the source of contradictions like Russell's paradox and Richard's paradox.
Uncountable set
Cantor considered the set T of all infinite sequences of binary digits (i.e. each digit is zero or one).
He begins with a constructive proof of the following lemma:
If s1, s2, ... , sn, ... is any enumeration of elements from T, then an element s of T can be constructed that doesn't correspond to any sn in the enumeration.
The proof starts with an enumeration of elements from T, for example
s1 = (0, 0, 0, 0, 0, 0, 0, ...)
s2 = (1, 1, 1, 1, 1, 1, 1, ...)
s3 = (0, 1, 0, 1, 0, 1, 0, ...)
s4 = (1, 0, 1, 0, 1, 0, 1, ...)
s5 = (1, 1, 0, 1, 0, 1, 1, ...)
s6 = (0, 0, 1, 1, 0, 1, 1, ...)
s7 = (1, 0, 0, 0, 1, 0, 0, ...)
...
Next, a sequence s is constructed by choosing the 1st digit as complementary to the 1st digit of s1 (swapping 0s for 1s and vice versa), the 2nd digit as complementary to the 2nd digit of s2, the 3rd digit as complementary to the 3rd digit of s3, and generally for every n, the nth digit as complementary to the nth digit of sn. For the example above, this yields
s1 = (0, 0, 0, 0, 0, 0, 0, ...)
s2 = (1, 1, 1, 1, 1, 1, 1, ...)
s3 = (0, 1, 0, 1, 0, 1, 0, ...)
s4 = (1, 0, 1, 0, 1, 0, 1, ...)
s5 = (1, 1, 0, 1, 0, 1, 1, ...)
s6 = (0, 0, 1, 1, 0, 1, 1, ...)
s7 = (1, 0, 0, 0, 1, 0, 0, ...)
...

s  = (1, 0, 1, 1, 1, 0, 1, ...)
By construction, s is a member of T that differs from each sn, since their nth digits differ.
Hence, s cannot occur in the enumeration.
Based on this lemma, Cantor then uses a proof by contradiction to show that:
The set T is uncountable.
The proof starts by assuming that T is countable.
Then all its elements can be written in an enumeration s1, s2, ... , sn, ... .
Applying the previous lemma to this enumeration produces a sequence s that is a member of T, but is not in the enumeration. However, if T is enumerated, then every member of T, including this s, is in the enumeration. This contradiction implies that the original assumption is false. Therefore, T is uncountable.
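The lemma's construction is effective and can be written as a very short program. In the sketch below (the sample enumeration is our own choice), sequences are modeled as functions from the naturals to {0, 1}:

```python
def diagonal(enumeration):
    """Given n -> (the n-th sequence), return s with s(n) = 1 - s_n(n)."""
    return lambda n: 1 - enumeration(n)(n)

# A sample enumeration: s_n is the binary expansion of n, digit by digit.
enum = lambda n: (lambda k: (n >> k) & 1)
s = diagonal(enum)

print([s(n) for n in range(8)])                     # the constructed sequence
print(all(s(n) != enum(n)(n) for n in range(100)))  # True: s differs from each s_n
```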
Real numbers
The uncountability of the real numbers was already established by Cantor's first uncountability proof, but it also follows from the above result. To prove this, an injection will be constructed from the set T of infinite binary strings to the set R of real numbers. Since T is uncountable, the image of this function, which is a subset of R, is uncountable. Therefore, R is uncountable. Also, by using a method of construction devised by Cantor, a bijection will be constructed between T and R. Therefore, T and R have the same cardinality, which is called the "cardinality of the continuum" and is usually denoted by or .
An injection from T to R is given by mapping binary strings in T to decimal fractions, such as mapping t = 0111... to the decimal 0.0111.... This function, defined by reading a binary string t1t2t3... as the decimal expansion 0.t1t2t3..., is an injection because it maps different strings to different numbers.
Constructing a bijection between T and R is slightly more complicated.
Instead of mapping 0111... to the decimal 0.0111..., it can be mapped to the base b number: 0.0111...b. This leads to the family of functions: fb(t) = 0.t1t2t3...b. The functions fb are injections, except for f2 (because of double representations such as 0.0111...2 = 0.1000...2). This function will be modified to produce a bijection between T and R.
General sets
A generalized form of the diagonal argument was used by Cantor to prove Cantor's theorem: for every set S, the power set of S—that is, the set of all subsets of S (here written as P(S))—cannot be in bijection with S itself. This proof proceeds as follows:
Let f be any function from S to P(S). It suffices to prove f cannot be surjective. That means that some member T of P(S), i.e. some subset of S, is not in the image of f. As a candidate consider the set:

T = { s ∈ S : s ∉ f(s) }.

For every s in S, either s is in T or not. If s is in T, then by definition of T, s is not in f(s), so T is not equal to f(s). On the other hand, if s is not in T, then by definition of T, s is in f(s), so again T is not equal to f(s).
For a more complete account of this proof, see Cantor's theorem.
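For a finite set the diagonal set can be tabulated directly, which makes the two cases above concrete. A small sketch (the particular S and f are our own choices):

```python
S = {0, 1, 2}
f = {0: {0, 1}, 1: set(), 2: {0, 2}}     # some function S -> P(S)

T = {s for s in S if s not in f[s]}      # the diagonal set; here T = {1}
print(T)
print(any(T == f[s] for s in S))         # False: T is not in the image of f
```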
Consequences
Ordering of cardinals
With equality defined as the existence of a bijection between their underlying sets, Cantor also defines a binary predicate of cardinalities |S| and |T| in terms of the existence of injections between S and T. It has the properties of a preorder and is here written "≤". One can embed the naturals into the binary sequences, thus proving various injection existence statements explicitly, so that in this sense |N| ≤ |2^N|, where 2^N denotes the function space N → {0, 1}. But following from the argument in the previous sections, there is no surjection and so also no bijection, i.e. the set is uncountable. For this one may write |N| < |2^N|, where "<" is understood to mean the existence of an injection together with the proven absence of a bijection (as opposed to alternatives such as the negation of Cantor's preorder, or a definition in terms of assigned ordinals). Also in this sense, as has been shown, |N| < |P(N)|, and at the same time it is the case that ¬(|P(S)| ≤ |S|), for all sets S.
Assuming the law of excluded middle, characteristic functions surject onto powersets, and then |2^N| = |P(N)|. So the uncountable P(N) is also not enumerable and it can also be mapped onto N. Classically, the Schröder–Bernstein theorem is valid and says that any two sets which are in the injective image of one another are in bijection as well. Here, every unbounded subset of N is then in bijection with N itself, and every subcountable set (a property in terms of surjections) is then already countable, i.e. in the surjective image of N. In this context the possibilities are then exhausted, making "≤" a non-strict partial order, or even a total order when assuming choice. The diagonal argument thus establishes that, although both sets under consideration are infinite, there are actually more infinite sequences of ones and zeros than there are natural numbers.
Cantor's result then also implies that the notion of the set of all sets is inconsistent: If S were the set of all sets, then P(S) would at the same time be bigger than S and a subset of S.
In the absence of excluded middle
Also in constructive mathematics, there is no surjection from the full domain onto the space of functions or onto the collection of subsets , which is to say these two collections are uncountable. Again using "" for proven injection existence in conjunction with bijection absence, one has and . Further, , as previously noted. Likewise, , and of course , also in constructive set theory.
It is however harder or impossible to order ordinals and also cardinals, constructively. For example, the Schröder–Bernstein theorem requires the law of excluded middle. In fact, the standard ordering on the reals, extending the ordering of the rational numbers, is not necessarily decidable either. Neither are most properties of interesting classes of functions decidable, by Rice's theorem, i.e. the set of counting numbers for the subcountable sets may not be recursive and can thus fail to be countable. The elaborate collection of subsets of a set is constructively not exchangeable with the collection of its characteristic functions. In an otherwise constructive context (in which the law of excluded middle is not taken as axiom), it is consistent to adopt non-classical axioms that contradict consequences of the law of excluded middle. Uncountable sets such as or may be asserted to be subcountable.
This is a notion of size that is redundant in the classical context, but otherwise need not imply countability. The existence of injections from the uncountable or into is here possible as well. So the cardinal relation fails to be antisymmetric. Consequently, also in the presence of function space sets that are even classically uncountable, intuitionists do not accept this relation to constitute a hierarchy of transfinite sizes.
When the axiom of powerset is not adopted, in a constructive framework even the subcountability of all sets is then consistent. That all said, in common set theories, the non-existence of a set of all sets also already follows from Predicative Separation.
In a set theory, theories of mathematics are modeled. Weaker logical axioms mean fewer constraints and so allow for a richer class of models. A set may be identified as a model of the field of real numbers when it fulfills some axioms of real numbers or a constructive rephrasing thereof. Various models have been studied, such as the Cauchy reals or the Dedekind reals, among others. The former relate to quotients of sequences while the latter are well-behaved cuts taken from a powerset, if they exist. In the presence of excluded middle, those are all isomorphic and uncountable. Otherwise, variants of the Dedekind reals can be countable or inject into the naturals, but not jointly. When assuming countable choice, constructive Cauchy reals even without an explicit modulus of convergence are then Cauchy-complete and Dedekind reals simplify so as to become isomorphic to them. Indeed, here choice also aids diagonal constructions and when assuming it, Cauchy-complete models of the reals are uncountable.
Open questions
Motivated by the insight that the set of real numbers is "bigger" than the set of natural numbers, one is led to ask if there is a set whose cardinality is "between" that of the integers and that of the reals. This question leads to the famous continuum hypothesis. Similarly, the question of whether there exists a set whose cardinality is between |S| and |P(S)| for some infinite S leads to the generalized continuum hypothesis.
Diagonalization in broader context
Russell's paradox has shown that set theory that includes an unrestricted comprehension scheme is contradictory. Note that there is a similarity between the construction of T and the set in Russell's paradox. Therefore, depending on how we modify the axiom scheme of comprehension in order to avoid Russell's paradox, arguments such as the non-existence of a set of all sets may or may not remain valid.
Analogues of the diagonal argument are widely used in mathematics to prove the existence or nonexistence of certain objects. For example, the conventional proof of the unsolvability of the halting problem is essentially a diagonal argument. Also, diagonalization was originally used to show the existence of arbitrarily hard complexity classes and played a key role in early attempts to prove P does not equal NP.
Version for Quine's New Foundations
The above proof fails for W. V. Quine's "New Foundations" set theory (NF). In NF, the naive axiom scheme of comprehension is modified to avoid the paradoxes by introducing a kind of "local" type theory. In this axiom scheme,
{ s ∈ S: s ∉ f(s) }
is not a set — i.e., does not satisfy the axiom scheme. On the other hand, we might try to create a modified diagonal argument by noticing that
{ s ∈ S: s ∉ f({s}) }
is a set in NF. In which case, if P1(S) is the set of one-element subsets of S and f is a proposed bijection from P1(S) to P(S), one is able to use proof by contradiction to prove that |P1(S)| < |P(S)|.
The proof follows by the fact that if f were indeed a map onto P(S), then we could find r in S, such that f({r}) coincides with the modified diagonal set, above. We would conclude that if r is not in f({r}), then r is in f({r}) and vice versa.
It is not possible to put P1(S) in a one-to-one relation with S, as the two have different types, and so any function so defined would violate the typing rules for the comprehension scheme.
| Mathematics | Set theory | null |
51440 | https://en.wikipedia.org/wiki/Quaternion | Quaternion | In mathematics, the quaternion number system extends the complex numbers. Quaternions were first described by the Irish mathematician William Rowan Hamilton in 1843 and applied to mechanics in three-dimensional space. The algebra of quaternions is often denoted by H (for Hamilton), or in blackboard bold by ℍ. Quaternions are not a field, because multiplication of quaternions is not, in general, commutative. Quaternions provide a definition of the quotient of two vectors in a three-dimensional space. Quaternions are generally represented in the form

a + b i + c j + d k,

where the coefficients a, b, c, d are real numbers, and 1, i, j, k are the basis vectors or basis elements.
Quaternions are used in pure mathematics, but also have practical uses in applied mathematics, particularly for calculations involving three-dimensional rotations, such as in three-dimensional computer graphics, computer vision, robotics, magnetic resonance imaging and crystallographic texture analysis. They can be used alongside other methods of rotation, such as Euler angles and rotation matrices, or as an alternative to them, depending on the application.
In modern terms, quaternions form a four-dimensional associative normed division algebra over the real numbers, and therefore a ring, also a division ring and a domain. It is a special case of a Clifford algebra, classified as Cl0,2(ℝ) ≅ Cl+3,0(ℝ). It was the first noncommutative division algebra to be discovered.
According to the Frobenius theorem, the algebra ℍ is one of only two finite-dimensional division rings containing a proper subring isomorphic to the real numbers; the other is the complex numbers. These rings are also Euclidean Hurwitz algebras, of which the quaternions are the largest associative algebra (and hence the largest ring). Further extending the quaternions yields the non-associative octonions, which form the last normed division algebra over the real numbers. The next extension gives the sedenions, which have zero divisors and so cannot be a normed division algebra.
The unit quaternions give a group structure on the 3-sphere isomorphic to the groups Spin(3) and SU(2), i.e. the universal cover group of SO(3). The positive and negative basis vectors form the eight-element quaternion group.
History
Quaternions were introduced by Hamilton in 1843. Important precursors to this work included Euler's four-square identity (1748) and Olinde Rodrigues' parameterization of general rotations by four parameters (1840), but neither of these writers treated the four-parameter rotations as an algebra. Carl Friedrich Gauss had discovered quaternions in 1819, but this work was not published until 1900.
Hamilton knew that the complex numbers could be interpreted as points in a plane, and he was looking for a way to do the same for points in three-dimensional space. Points in space can be represented by their coordinates, which are triples of numbers, and for many years he had known how to add and subtract triples of numbers. However, for a long time, he had been stuck on the problem of multiplication and division. He could not figure out how to calculate the quotient of the coordinates of two points in space. In fact, Ferdinand Georg Frobenius later proved in 1877 that for a division algebra over the real numbers to be finite-dimensional and associative, it cannot be three-dimensional, and there are only three such division algebras: ℝ (real numbers), ℂ (complex numbers), and ℍ (quaternions), which have dimensions 1, 2, and 4 respectively.
The great breakthrough in quaternions finally came on Monday 16 October 1843 in Dublin, when Hamilton was on his way to the Royal Irish Academy to preside at a council meeting. As he walked along the towpath of the Royal Canal with his wife, the concepts behind quaternions were taking shape in his mind. When the answer dawned on him, Hamilton could not resist the urge to carve the formula for the quaternions,
i² = j² = k² = ijk = −1,
into the stone of Brougham Bridge as he paused on it. Although the carving has since faded away, there has been an annual pilgrimage since 1989 called the Hamilton Walk for scientists and mathematicians who walk from Dunsink Observatory to the Royal Canal bridge in remembrance of Hamilton's discovery.
On the following day, Hamilton wrote a letter to his friend and fellow mathematician, John T. Graves, describing the train of thought that led to his discovery. This account was later published in the London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science; Hamilton states:
Hamilton called a quadruple with these rules of multiplication a quaternion, and he devoted most of the remainder of his life to studying and teaching them. Hamilton's treatment is more geometric than the modern approach, which emphasizes quaternions' algebraic properties. He founded a school of "quaternionists", and he tried to popularize quaternions in several books. The last and longest of his books, Elements of Quaternions, was 800 pages long; it was edited by his son and published shortly after his death.
After Hamilton's death, the Scottish mathematical physicist Peter Tait became the chief exponent of quaternions. At this time, quaternions were a mandatory examination topic in Dublin. Topics in physics and geometry that would now be described using vectors, such as kinematics in space and Maxwell's equations, were described entirely in terms of quaternions. There was even a professional research association, the Quaternion Society, devoted to the study of quaternions and other hypercomplex number systems.
From the mid-1880s, quaternions began to be displaced by vector analysis, which had been developed by Josiah Willard Gibbs, Oliver Heaviside, and Hermann von Helmholtz. Vector analysis described the same phenomena as quaternions, so it borrowed some ideas and terminology liberally from the literature on quaternions. However, vector analysis was conceptually simpler and notationally cleaner, and eventually quaternions were relegated to a minor role in mathematics and physics. A side-effect of this transition is that Hamilton's work is difficult to comprehend for many modern readers. Hamilton's original definitions are unfamiliar and his writing style was wordy and difficult to follow.
However, quaternions have had a revival since the late 20th century, primarily due to their utility in describing spatial rotations. The representations of rotations by quaternions are more compact and quicker to compute than the representations by matrices. In addition, unlike Euler angles, they are not susceptible to "gimbal lock". For this reason, quaternions are used in computer graphics, computer vision, robotics, nuclear magnetic resonance image sampling, control theory, signal processing, attitude control, physics, bioinformatics, molecular dynamics, computer simulations, and orbital mechanics. For example, it is common for the attitude control systems of spacecraft to be commanded in terms of quaternions. Quaternions have received another boost from number theory because of their relationships with the quadratic forms.
Quaternions in physics
The finding of 1924 that in quantum mechanics the spin of an electron and other matter particles (known as spinors) can be described using quaternions (in the form of the famous Pauli spin matrices) furthered their interest; quaternions helped to understand how rotations of electrons by 360° can be discerned from those by 720° (the "Plate trick"). Even so, their use has not overtaken rotation groups.
Definition
A quaternion is an expression of the form
a + b i + c j + d k,
where a, b, c, d are real numbers, and i, j, k are symbols that can be interpreted as unit-vectors pointing along the three spatial axes. In practice, if one of a, b, c, d is 0, the corresponding term is omitted; if a, b, c, d are all zero, the quaternion is the zero quaternion, denoted 0; if one of b, c, d equals 1, the corresponding term is written simply i, j, or k.
Hamilton describes a quaternion q = a + b i + c j + d k as consisting of a scalar part and a vector part. The quaternion b i + c j + d k is called the vector part (sometimes imaginary part) of q, and a is the scalar part (sometimes real part) of q. A quaternion that equals its real part (that is, its vector part is zero) is called a scalar or real quaternion, and is identified with the corresponding real number. That is, the real numbers are embedded in the quaternions. (More properly, the field of real numbers is isomorphic to a subset of the quaternions. The field of complex numbers is also isomorphic to three subsets of quaternions.) A quaternion that equals its vector part is called a vector quaternion.
The set of quaternions is a 4-dimensional vector space over the real numbers, with {1, i, j, k} as a basis, by the component-wise addition
(a1 + b1 i + c1 j + d1 k) + (a2 + b2 i + c2 j + d2 k) = (a1 + a2) + (b1 + b2) i + (c1 + c2) j + (d1 + d2) k,
and the component-wise scalar multiplication
λ(a + b i + c j + d k) = λa + (λb) i + (λc) j + (λd) k.
A multiplicative group structure, called the Hamilton product, denoted by juxtaposition, can be defined on the quaternions in the following way:
The real quaternion 1 is the identity element.
The real quaternions commute with all other quaternions, that is aq = qa for every quaternion q and every real quaternion a. In algebraic terminology this is to say that the field of real quaternions is the center of this quaternion algebra.
The product is first given for the basis elements (see next subsection), and then extended to all quaternions by using the distributive property and the center property of the real quaternions. The Hamilton product is not commutative, but is associative, thus the quaternions form an associative algebra over the real numbers.
Additionally, every nonzero quaternion has an inverse with respect to the Hamilton product:
(a + b i + c j + d k)⁻¹ = (a − b i − c j − d k) / (a² + b² + c² + d²).
Thus the quaternions form a division algebra.
Multiplication of basis elements
Multiplication of the basis elements i, j, and k with 1 is defined by the fact that 1 is a multiplicative identity, that is,
i 1 = 1 i = i,  j 1 = 1 j = j,  k 1 = 1 k = k.
The products of the other basis elements are
i² = j² = k² = −1,
ij = k,  ji = −k,  jk = i,  kj = −i,  ki = j,  ik = −j.
Combining these rules,
ijk = −1.
Center
The center of a noncommutative ring is the subring of elements c such that cx = xc for every x. The center of the quaternion algebra is the subfield of real quaternions. In fact, it is a part of the definition that the real quaternions belong to the center. Conversely, if q = a + b i + c j + d k belongs to the center, then
0 = iq − qi = 2c k − 2d j,
and c = d = 0. A similar computation with j instead of i shows that one has also b = 0. Thus q = a is a real quaternion.
The quaternions form a division algebra. This means that the non-commutativity of multiplication is the only property that makes quaternions different from a field. This non-commutativity has some unexpected consequences, among them that a polynomial equation over the quaternions can have more distinct solutions than the degree of the polynomial. For example, the equation z² = −1 has infinitely many quaternion solutions, which are the quaternions z = b i + c j + d k such that b² + c² + d² = 1. Thus these "roots of −1" form a unit sphere in the three-dimensional space of vector quaternions.
Hamilton product
For two elements a1 + b1 i + c1 j + d1 k and a2 + b2 i + c2 j + d2 k, their product, called the Hamilton product, is determined by the products of the basis elements and the distributive law. The distributive law makes it possible to expand the product so that it is a sum of products of basis elements. This gives the following expression:
a1a2 + a1b2 i + a1c2 j + a1d2 k + b1a2 i + b1b2 i² + b1c2 ij + b1d2 ik + c1a2 j + c1b2 ji + c1c2 j² + c1d2 jk + d1a2 k + d1b2 ki + d1c2 kj + d1d2 k².
Now the basis elements can be multiplied using the rules given above to get:
(a1 + b1 i + c1 j + d1 k)(a2 + b2 i + c2 j + d2 k) = (a1a2 − b1b2 − c1c2 − d1d2) + (a1b2 + b1a2 + c1d2 − d1c2) i + (a1c2 − b1d2 + c1a2 + d1b2) j + (a1d2 + b1c2 − c1b2 + d1a2) k.
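The expanded product above translates directly into code. The following is a minimal sketch (the function name hamilton is our own choice) that multiplies quaternions represented as 4-tuples (a, b, c, d):

```python
def hamilton(q1, q2):
    a1, b1, c1, d1 = q1
    a2, b2, c2, d2 = q2
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,   # real part
            a1*b2 + b1*a2 + c1*d2 - d1*c2,   # i part
            a1*c2 - b1*d2 + c1*a2 + d1*b2,   # j part
            a1*d2 + b1*c2 - c1*b2 + d1*a2)   # k part

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
assert hamilton(i, j) == k                   # ij = k
assert hamilton(j, i) == (0, 0, 0, -1)       # ji = -k: not commutative
assert hamilton(i, i) == (-1, 0, 0, 0)       # i^2 = -1
```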
Scalar and vector parts
A quaternion of the form a, where a is a real number, is called scalar, and a quaternion of the form b i + c j + d k, where b, c, and d are real numbers, and at least one of b, c, or d is nonzero, is called a vector quaternion. If a + b i + c j + d k is any quaternion, then a is called its scalar part and b i + c j + d k is called its vector part. Even though every quaternion can be viewed as a vector in a four-dimensional vector space, it is common to refer to the vector part as vectors in three-dimensional space. With this convention, a vector is the same as an element of the vector space ℝ³.
Hamilton also called vector quaternions right quaternions and real numbers (considered as quaternions with zero vector part) scalar quaternions.
If a quaternion is divided up into a scalar part and a vector part, that is,
q = (r, v),  q ∈ ℍ,  r ∈ ℝ,  v ∈ ℝ³,
then the formulas for addition, multiplication, and multiplicative inverse are
(r1, v1) + (r2, v2) = (r1 + r2, v1 + v2),
(r1, v1)(r2, v2) = (r1r2 − v1 ⋅ v2, r1v2 + r2v1 + v1 × v2),
(r, v)⁻¹ = (r, −v) / (r² + v ⋅ v),
where "⋅" and "×" denote respectively the dot product and the cross product.
Conjugation, the norm, and reciprocal
Conjugation of quaternions is analogous to conjugation of complex numbers and to transposition (also known as reversal) of elements of Clifford algebras. To define it, let q = a + b i + c j + d k be a quaternion. The conjugate of q is the quaternion q* = a − b i − c j − d k. It is denoted by q*, qt, q̄, or q~. Conjugation is an involution, meaning that it is its own inverse, so conjugating an element twice returns the original element. The conjugate of a product of two quaternions is the product of the conjugates in the reverse order. That is, if p and q are quaternions, then (pq)* = q*p*, not p*q*.
The conjugation of a quaternion, in stark contrast to the complex setting, can be expressed with multiplication and addition of quaternions:
q* = −½ (q + iqi + jqj + kqk).
Conjugation can be used to extract the scalar and vector parts of a quaternion. The scalar part of q is ½(q + q*), and the vector part of q is ½(q − q*).
The square root of the product of a quaternion with its conjugate is called its norm and is denoted ‖q‖ (Hamilton called this quantity the tensor of q, but this conflicts with the modern meaning of "tensor"). In formulas, this is expressed as follows:
‖q‖ = √(qq*) = √(q*q) = √(a² + b² + c² + d²).
This is always a non-negative real number, and it is the same as the Euclidean norm on ℍ considered as the vector space ℝ⁴. Multiplying a quaternion by a real number scales its norm by the absolute value of the number. That is, if α is real, then
‖αq‖ = |α| ‖q‖.
This is a special case of the fact that the norm is multiplicative, meaning that
‖pq‖ = ‖p‖ ‖q‖
for any two quaternions and . Multiplicativity is a consequence of the formula for the conjugate of a product.
Alternatively it follows from the identity
det ( (a + ib, c + id), (−c + id, a − ib) ) = a² + b² + c² + d²
(where i denotes the usual imaginary unit) and hence from the multiplicative property of determinants of square matrices.
This norm makes it possible to define the distance d(p, q) between p and q as the norm of their difference:
d(p, q) = ‖p − q‖.
This makes ℍ a metric space.
Addition and multiplication are continuous in regard to the associated metric topology.
This follows with exactly the same proof as for the real numbers from the fact that ℍ is a normed algebra.
Unit quaternion
A unit quaternion is a quaternion of norm one. Dividing a nonzero quaternion q by its norm produces a unit quaternion Uq called the versor of q:
Uq = q / ‖q‖.
Every nonzero quaternion has a unique polar decomposition q = ‖q‖ · Uq, while the zero quaternion can be formed from any unit quaternion u, since 0 = 0 · u.
Using conjugation and the norm makes it possible to define the reciprocal of a nonzero quaternion. The product of a quaternion with its reciprocal should equal 1, and the considerations above imply that the product of q and q*/‖q‖² is 1 (for either order of multiplication). So the reciprocal of q is defined to be
q⁻¹ = q* / ‖q‖².
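The conjugate, norm, versor, and reciprocal defined above fit in a few lines of code; a minimal sketch with our own helper names:

```python
import math

def conjugate(q):                  # q = (a, b, c, d) means a + b i + c j + d k
    a, b, c, d = q
    return (a, -b, -c, -d)

def norm(q):
    return math.sqrt(sum(x * x for x in q))

def versor(q):
    n = norm(q)
    return tuple(x / n for x in q)         # unit quaternion q / ||q||

def reciprocal(q):
    n2 = sum(x * x for x in q)             # ||q||^2 = q q*
    return tuple(x / n2 for x in conjugate(q))

q = (1.0, 2.0, -1.0, 0.5)
print(norm(versor(q)))                     # 1.0 up to rounding
# Multiplying q by reciprocal(q) with the Hamilton product from the earlier
# sketch returns (1, 0, 0, 0) up to rounding, in either order.
```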
Since the multiplication is non-commutative, the quotient quantities p q⁻¹ and q⁻¹ p are different (except if p and q are scalar multiples of each other or if one is a scalar): the notation p/q is ambiguous and should not be used.
Algebraic properties
The set of all quaternions is a vector space over the real numbers with dimension 4. Multiplication of quaternions is associative and distributes over vector addition, but with the exception of the scalar subset, it is not commutative. Therefore, the quaternions are a non-commutative, associative algebra over the real numbers. Even though ℍ contains copies of the complex numbers, it is not an associative algebra over the complex numbers.
Because it is possible to divide quaternions, they form a division algebra. This is a structure similar to a field except for the non-commutativity of multiplication. Finite-dimensional associative division algebras over the real numbers are very rare. The Frobenius theorem states that there are exactly three: ℝ, ℂ, and ℍ. The norm makes the quaternions into a normed algebra, and normed division algebras over the real numbers are also very rare: Hurwitz's theorem says that there are only four: ℝ, ℂ, ℍ, and 𝕆 (the octonions). The quaternions are also an example of a composition algebra and of a unital Banach algebra.
Because the product of any two basis vectors is plus or minus another basis vector, the set {±1, ±i, ±j, ±k} forms a group under multiplication. This non-abelian group is called the quaternion group and is denoted Q8. The real group ring of Q8 is a ring ℝ[Q8] which is also an eight-dimensional vector space over ℝ. It has one basis vector for each element of Q8. The quaternions are isomorphic to the quotient ring of ℝ[Q8] by the ideal generated by the elements 1 + (−1), i + (−i), j + (−j), and k + (−k). Here the first term in each of these sums is one of the basis elements 1, i, j, and k, and the second term is one of the basis elements −1, −i, −j, and −k, not the additive inverses of 1, i, j, and k.
Quaternions and three-dimensional geometry
The vector part of a quaternion can be interpreted as a coordinate vector in ℝ³; therefore, the algebraic operations of the quaternions reflect the geometry of ℝ³. Operations such as the vector dot and cross products can be defined in terms of quaternions, and this makes it possible to apply quaternion techniques wherever spatial vectors arise. A useful application of quaternions has been to interpolate the orientations of key-frames in computer graphics.
For the remainder of this section, i, j, and k will denote both the three imaginary basis vectors of ℍ and a basis for ℝ³. Replacing i by −i, j by −j, and k by −k sends a vector to its additive inverse, so the additive inverse of a vector is the same as its conjugate as a quaternion. For this reason, conjugation is sometimes called the spatial inverse.
For two vector quaternions p = b1 i + c1 j + d1 k and q = b2 i + c2 j + d2 k, their dot product, by analogy to vectors in ℝ³, is
p ⋅ q = b1b2 + c1c2 + d1d2.
It can also be expressed in a component-free manner as
p ⋅ q = ½(p*q + q*p) = ½(pq* + qp*).
This is equal to the scalar parts of the products pq*, qp*, p*q, and q*p. Note that their vector parts are different.
The cross product of p and q relative to the orientation determined by the ordered basis i, j, and k is
p × q = (c1d2 − d1c2) i + (d1b2 − b1d2) j + (b1c2 − c1b2) k.
(Recall that the orientation is necessary to determine the sign.) This is equal to the vector part of the product pq (as quaternions), as well as the vector part of −q*p*. It also has the formula
p × q = ½(pq − qp).
For the commutator, [p, q] = pq − qp, of two vector quaternions one obtains
[p, q] = 2 p × q.
In general, let p and q be quaternions and write
p = ps + pv,  q = qs + qv,
where ps and qs are the scalar parts, and pv and qv are the vector parts of p and q. Then we have the formula
pq = (ps qs − pv ⋅ qv) + ps qv + qs pv + pv × qv.
This shows that the noncommutativity of quaternion multiplication comes from the multiplication of vector quaternions. It also shows that two quaternions commute if and only if their vector parts are collinear. Hamilton showed that this product computes the third vertex of a spherical triangle from two given vertices and their associated arc-lengths, which is also an algebra of points in Elliptic geometry.
Unit quaternions can be identified with rotations in and were called versors by Hamilton. Also see Quaternions and spatial rotation for more information about modeling three-dimensional rotations using quaternions.
See Hanson (2005) for visualization of quaternions.
Matrix representations
Just as complex numbers can be represented as matrices, so can quaternions. There are at least two ways of representing quaternions as matrices in such a way that quaternion addition and multiplication correspond to matrix addition and matrix multiplication. One is to use 2 × 2 complex matrices, and the other is to use 4 × 4 real matrices. In each case, the representation given is one of a family of linearly related representations. These are injective homomorphisms from ℍ to the matrix rings M(2, ℂ) and M(4, ℝ), respectively.
The quaternion a + b i + c j + d k can be represented using a 2 × 2 complex matrix as
( a + bi    c + di )
( −c + di   a − bi ).
This representation has the following properties (a short numerical check follows the list):
Constraining any two of b, c, and d to zero produces a representation of complex numbers. For example, setting c = d = 0 produces a diagonal complex matrix representation of complex numbers, and setting b = d = 0 produces a real matrix representation.
The norm of a quaternion (the square root of the product with its conjugate, as with complex numbers) is the square root of the determinant of the corresponding matrix.
The scalar part of a quaternion is one half of the matrix trace.
The conjugate of a quaternion corresponds to the conjugate transpose of the matrix.
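These properties can be checked numerically; the following sketch (function name ours) builds the matrix written above and verifies the basis relations, the determinant, the trace, and the conjugate-transpose property:

```python
import numpy as np

def as_matrix(a, b, c, d):
    return np.array([[a + b*1j,  c + d*1j],
                     [-c + d*1j, a - b*1j]])

one, i, j, k = (as_matrix(*e) for e in
                ((1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)))
assert np.allclose(i @ j, k) and np.allclose(j @ i, -k)   # ij = k, ji = -k
assert np.allclose(i @ i, -one)                           # i^2 = -1

p = (1.0, 2.0, 3.0, 4.0)
M = as_matrix(*p)
assert np.isclose(np.linalg.det(M).real, sum(x*x for x in p))  # det = ||p||^2
assert np.isclose(np.trace(M).real / 2, p[0])                  # trace/2 = scalar part
assert np.allclose(M.conj().T, as_matrix(p[0], -p[1], -p[2], -p[3]))  # conjugate
```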
By restriction this representation yields an isomorphism between the subgroup of unit quaternions and their image SU(2). Topologically, the unit quaternions are the 3-sphere, so the underlying space of SU(2) is also a 3-sphere. The group is important for describing spin in quantum mechanics; see Pauli matrices.
There is a strong relation between quaternion units and Pauli matrices. The 2 × 2 complex matrix above can be written as a I + b iσ3 + c iσ2 + d iσ1, so in this representation the quaternion units i, j, k correspond to iσ3, iσ2, iσ1. Multiplying any two Pauli matrices always yields a quaternion unit matrix, all of them except for −1. One obtains −1 via i² = j² = k² = ijk = −1; e.g. the last equality is
ijk = (iσ3)(iσ2)(iσ1) = i³ σ3σ2σ1 = −I.
The representation in M(2, ℂ) is not unique. A different convention, which preserves the direction of cyclic ordering between the quaternions and the Pauli matrices, is to choose i ↦ −iσ1, j ↦ −iσ2, k ↦ −iσ3. This gives an alternative representation,
( a − di   −c − bi )
( c − bi    a + di ).
Using 4 × 4 real matrices, that same quaternion can be written as
( a  −b  −c  −d )
( b   a  −d   c )
( c   d   a  −b )
( d  −c   b   a ).
However, the representation of quaternions in M(4, ℝ) is not unique. For example, the same quaternion can also be represented as
( a    b    c    d )
( −b   a   −d    c )
( −c   d    a   −b )
( −d  −c    b    a ).
There exist 48 distinct matrix representations of this form in which one of the matrices represents the scalar part and the other three are all skew-symmetric. More precisely, there are 48 sets of quadruples of matrices with these symmetry constraints such that a function sending , and to the matrices in the quadruple is a homomorphism, that is, it sends sums and products of quaternions to sums and products of matrices.
In this representation, the conjugate of a quaternion corresponds to the transpose of the matrix. The fourth power of the norm of a quaternion is the determinant of the corresponding matrix. The scalar part of a quaternion is one quarter of the matrix trace. As with the 2 × 2 complex representation above, complex numbers can again be produced by constraining the coefficients suitably; for example, as block diagonal matrices with two 2 × 2 blocks by setting c = d = 0.
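A similar numerical check for the first 4 × 4 real matrix above (function name ours):

```python
import numpy as np

def as_real_matrix(a, b, c, d):
    return np.array([[a, -b, -c, -d],
                     [b,  a, -d,  c],
                     [c,  d,  a, -b],
                     [d, -c,  b,  a]], dtype=float)

one, i, j, k = (as_real_matrix(*e) for e in
                ((1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)))
assert np.allclose(i @ j, k) and np.allclose(j @ k, i) and np.allclose(k @ i, j)
assert np.allclose(i @ i, -one)                       # i^2 = -1

p = (1.0, 2.0, 3.0, 4.0)
M = as_real_matrix(*p)
assert np.allclose(M.T, as_real_matrix(p[0], -p[1], -p[2], -p[3]))  # conjugate = transpose
assert np.isclose(np.linalg.det(M), sum(x*x for x in p) ** 2)       # det = ||p||^4
assert np.isclose(np.trace(M) / 4, p[0])                            # trace/4 = scalar part
```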
Each 4 × 4 matrix representation of quaternions corresponds to a multiplication table of unit quaternions. For example, the pattern of letters and signs in the last matrix representation given above is exactly such a table, which is isomorphic, through a relabeling of the unit quaternions, to the standard multiplication table of 1, i, j, k.
Constraining any such multiplication table to have the identity in the first row and column and for the signs of the row headers to be opposite to those of the column headers, then there are 3 possible choices for the second column (ignoring sign), 2 possible choices for the third column (ignoring sign), and 1 possible choice for the fourth column (ignoring sign); that makes 6 possibilities. Then, the second column can be chosen to be either positive or negative, the third column can be chosen to be positive or negative, and the fourth column can be chosen to be positive or negative, giving 8 possibilities for the sign. Multiplying the possibilities for the letter positions and for their signs yields 48. Then replacing 1 with a, i with b, j with c, and k with d (keeping the signs) and removing the row and column headers yields a matrix representation of a + b i + c j + d k.
Lagrange's four-square theorem
Quaternions are also used in one of the proofs of Lagrange's four-square theorem in number theory, which states that every nonnegative integer is the sum of four integer squares. As well as being an elegant theorem in its own right, Lagrange's four-square theorem has useful applications in areas of mathematics outside number theory, such as combinatorial design theory. The quaternion-based proof uses Hurwitz quaternions, a subring of the ring of all quaternions for which there is an analog of the Euclidean algorithm.
Quaternions as pairs of complex numbers
Quaternions can be represented as pairs of complex numbers. From this perspective, quaternions are the result of applying the Cayley–Dickson construction to the complex numbers. This is a generalization of the construction of the complex numbers as pairs of real numbers.
Let V be a two-dimensional vector space over the complex numbers. Choose a basis consisting of two elements 1 and j. A vector in V can be written in terms of the basis elements 1 and j as
(a + bi) 1 + (c + di) j.
If we define j² = −1 and ij = −ji, then we can multiply two vectors using the distributive law. Using k as an abbreviated notation for the product ij leads to the same rules for multiplication as the usual quaternions. Therefore, the above vector of complex numbers corresponds to the quaternion a + b i + c j + d k. If we write the elements of V as ordered pairs and quaternions as quadruples, then the correspondence is
(a + bi, c + di) ↔ (a, b, c, d).
Square roots
Square roots of −1
In the complex numbers, there are exactly two numbers, i and −i, that give −1 when squared. In ℍ there are infinitely many square roots of minus one: the quaternion solution for the square root of −1 is the unit sphere in ℝ³. To see this, let q = a + b i + c j + d k be a quaternion, and assume that its square is −1. In terms of a, b, c, and d, this means
a² − b² − c² − d² = −1,  2ab = 0,  2ac = 0,  2ad = 0.
To satisfy the last three equations, either a = 0 or b, c, and d are all 0. The latter is impossible because a is a real number and the first equation would imply that a² = −1. Therefore, a = 0 and b² + c² + d² = 1. In other words: A quaternion squares to −1 if and only if it is a vector quaternion with norm 1. By definition, the set of all such vectors forms the unit sphere.
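A quick numerical illustration of this fact (entirely our own construction): a random unit vector, viewed as a vector quaternion (0, v), squares to −1 under the scalar–vector product formula, since (0, v)² = (−v ⋅ v, v × v) = (−1, 0):

```python
import numpy as np

rng = np.random.default_rng(1)
v = rng.normal(size=3)
v /= np.linalg.norm(v)                 # random point on the unit 2-sphere
assert np.isclose(-np.dot(v, v), -1.0) # scalar part of (0, v)^2
assert np.allclose(np.cross(v, v), 0)  # vector part of (0, v)^2
```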
Only negative real quaternions have infinitely many square roots. All others have just two (or one in the case of 0).
As a union of complex planes
Each antipodal pair of square roots of −1 creates a distinct copy of the complex numbers inside the quaternions. If q² = −1, then the copy is the image of the function
a + b√(−1) ↦ a + b q.
This is an injective ring homomorphism from ℂ to ℍ, which defines a field isomorphism from ℂ onto its image. The images of the embeddings corresponding to q and −q are identical.
Every non-real quaternion generates a subalgebra of the quaternions that is isomorphic to ℂ, and is thus a planar subspace of ℍ: write q as the sum of its scalar part and its vector part:
q = qs + qv.
Decompose the vector part further as the product of its norm and its versor:
q = qs + ‖qv‖ · Uqv.
(This is not the same as qs + ‖q‖ · Uq.) The versor of the vector part of q, Uqv, is a right versor with −1 as its square. A straightforward verification shows that
a + b√(−1) ↦ a + b Uqv
defines an injective homomorphism of normed algebras from ℂ into the quaternions. Under this homomorphism, q is the image of the complex number qs + ‖qv‖ i.
As ℍ is the union of the images of all these homomorphisms, one can view the quaternions as a pencil of planes intersecting on the real line. Each of these complex planes contains exactly one pair of antipodal points of the sphere of square roots of minus one.
Commutative subrings
The relationship of quaternions to each other within the complex subplanes of ℍ can also be identified and expressed in terms of commutative subrings. Specifically, since two quaternions p and q commute (i.e., pq = qp) only if they lie in the same complex subplane of ℍ, the profile of ℍ as a union of complex planes arises when one seeks to find all commutative subrings of the quaternion ring.
Square roots of arbitrary quaternions
Any quaternion q = (r, v) (represented here in scalar–vector representation) has at least one square root √q = (x, y) which solves the equation (√q)² = (x, y)² = q. Looking at the scalar and vector parts in this equation separately yields two equations, which when solved give the solutions
√q = √((r, v)) = ±( √((‖q‖ + r)/2), (v/‖v‖) √((‖q‖ − r)/2) ),
where ‖q‖ = √(r² + v ⋅ v) is the norm of q and ‖v‖ is the norm of v. For any scalar quaternion (r, 0), this equation provides the correct square roots if v/‖v‖ is interpreted as an arbitrary unit vector.
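The formula can be implemented directly; a minimal sketch (function name ours) for the non-scalar case, checked by squaring back with the scalar–vector product formula:

```python
import numpy as np

def quat_sqrt(r, v):
    # One square root of the quaternion (r, v); v must be a nonzero 3-vector.
    v = np.asarray(v, dtype=float)
    nq = np.sqrt(r * r + v @ v)                  # ||q||
    x = np.sqrt((nq + r) / 2.0)                  # scalar part of the root
    y = (v / np.linalg.norm(v)) * np.sqrt((nq - r) / 2.0)
    return x, y                                  # the other root is (-x, -y)

r, v = 1.0, np.array([2.0, -1.0, 0.5])
x, y = quat_sqrt(r, v)
# squaring back: (x, y)^2 = (x^2 - y.y, 2xy) since y x y = 0
assert np.isclose(x * x - y @ y, r)
assert np.allclose(2 * x * y, v)
```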
Therefore, nonzero, non-scalar quaternions, or positive scalar quaternions, have exactly two roots, while 0 has exactly one root (0), and negative scalar quaternions (r, 0) with r < 0 have infinitely many roots, which are the vector quaternions (0, y) with ‖y‖ = √(−r), i.e., where the scalar part is zero and the vector part is located on the 2-sphere with radius √(−r).
Functions of a quaternion variable
Like functions of a complex variable, functions of a quaternion variable suggest useful physical models. For example, the original electric and magnetic fields described by Maxwell were functions of a quaternion variable. Examples of other functions include the extension of the Mandelbrot set and Julia sets into 4-dimensional space.
Exponential, logarithm, and power functions
Given a quaternion
q = a + b i + c j + d k = a + v,
the exponential is computed as
exp(q) = e^a ( cos ‖v‖ + (v/‖v‖) sin ‖v‖ ),
and the logarithm is
ln(q) = ln ‖q‖ + (v/‖v‖) arccos( a/‖q‖ ).
It follows that the polar decomposition of a quaternion may be written
q = ‖q‖ e^(n̂φ) = ‖q‖ ( cos φ + n̂ sin φ ),
where the angle φ satisfies
a = ‖q‖ cos φ,
and the unit vector n̂ is defined by:
v = n̂ ‖v‖, i.e., n̂ = v/‖v‖.
Any unit quaternion may be expressed in polar form as:
q = exp(n̂φ) = cos φ + n̂ sin φ.
The power of a quaternion raised to an arbitrary (real) exponent x is given by:
qˣ = ‖q‖ˣ e^(n̂xφ) = ‖q‖ˣ ( cos(xφ) + n̂ sin(xφ) ).
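These formulas translate into a short sketch (function names ours) that implements exp, log, and real powers for quaternions given as a scalar part and a 3-vector part; the power is checked by squaring a computed square root:

```python
import numpy as np

def quat_exp(a, v):
    # exp(a + v) = e^a (cos||v|| + (v/||v||) sin||v||)
    v = np.asarray(v, dtype=float)
    th = np.linalg.norm(v)
    s = np.sinc(th / np.pi)                      # sin(th)/th, safe at th = 0
    return np.exp(a) * np.cos(th), np.exp(a) * s * v

def quat_log(a, v):
    # ln(a + v) = ln||q|| + (v/||v||) arccos(a/||q||)
    v = np.asarray(v, dtype=float)
    nq = np.sqrt(a * a + v @ v)
    nv = np.linalg.norm(v)
    axis = v / nv if nv > 0 else v
    return np.log(nq), np.arccos(a / nq) * axis

def quat_pow(a, v, x):
    la, lv = quat_log(a, v)
    return quat_exp(x * la, x * lv)              # q^x = exp(x ln q)

a, v = 1.0, np.array([0.5, -0.25, 2.0])
ha, hv = quat_pow(a, v, 0.5)                     # a square root of q
# (ha, hv)^2 = (ha^2 - hv.hv, 2 ha hv) since hv x hv = 0; this must recover q:
assert np.isclose(ha * ha - hv @ hv, a)
assert np.allclose(2 * ha * hv, v)
```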
Geodesic norm
The geodesic distance d_g(p, q) between unit quaternions p and q is defined as:
d_g(p, q) = ‖ln(p⁻¹ q)‖,
and amounts to the absolute value of half the angle subtended by p and q along a great arc of the S³ sphere.
This angle can also be computed from the quaternion dot product without the logarithm as:
arccos(2(p ⋅ q)² − 1).
Three-dimensional and four-dimensional rotation groups
The word "conjugation", besides the meaning given above, can also mean taking an element to where is some nonzero quaternion. All elements that are conjugate to a given element (in this sense of the word conjugate) have the same real part and the same norm of the vector part. (Thus the conjugate in the other sense is one of the conjugates in this sense.)
Thus the multiplicative group of nonzero quaternions acts by conjugation on the copy of ℝ³ consisting of quaternions with real part equal to zero. Conjugation by a unit quaternion (a quaternion of absolute value 1) with real part cos(φ) is a rotation by an angle 2φ, the axis of the rotation being the direction of the vector part (a small numerical sketch follows the list below). The advantages of quaternions are:
Avoiding gimbal lock, a problem with systems such as Euler angles.
Faster and more compact than matrices.
Nonsingular representation (compared with Euler angles for example).
Pairs of unit quaternions represent a rotation in 4D space (see Rotations in 4-dimensional Euclidean space: Algebra of 4D rotations).
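As promised above, a minimal numerical sketch of rotation by conjugation (names ours; hamilton repeated so the block runs alone): a 90° rotation about the z-axis, built from the half-angle, sends (1, 0, 0) to (0, 1, 0).

```python
import numpy as np

def hamilton(q1, q2):
    a1, b1, c1, d1 = q1
    a2, b2, c2, d2 = q2
    return np.array([a1*a2 - b1*b2 - c1*c2 - d1*d2,
                     a1*b2 + b1*a2 + c1*d2 - d1*c2,
                     a1*c2 - b1*d2 + c1*a2 + d1*b2,
                     a1*d2 + b1*c2 - c1*b2 + d1*a2])

theta = np.pi / 2                                            # rotation angle
q = np.array([np.cos(theta / 2), 0, 0, np.sin(theta / 2)])   # unit, axis = z
q_inv = q * np.array([1, -1, -1, -1])      # conjugate = inverse for unit q
p = np.array([0, 1, 0, 0])                 # the vector (1, 0, 0) as a quaternion

rotated = hamilton(hamilton(q, p), q_inv)
assert np.allclose(rotated, [0, 0, 1, 0])  # (1, 0, 0) -> (0, 1, 0)
```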
The set of all unit quaternions (versors) forms a 3-sphere S³ and a group (a Lie group) under multiplication, double covering the group SO(3) of real orthogonal 3 × 3 matrices of determinant 1 since two unit quaternions correspond to every rotation under the above correspondence. See plate trick.
The image of a subgroup of versors is a point group, and conversely, the preimage of a point group is a subgroup of versors. The preimage of a finite point group is called by the same name, with the prefix binary. For instance, the preimage of the icosahedral group is the binary icosahedral group.
The versors' group is isomorphic to SU(2), the group of complex unitary 2 × 2 matrices of determinant 1.
Let H be the set of quaternions of the form a + b i + c j + d k where a, b, c, and d are either all integers or all half-integers. The set H is a ring (in fact a domain) and a lattice and is called the ring of Hurwitz quaternions. There are 24 unit quaternions in this ring, and they are the vertices of a regular 24-cell with Schläfli symbol {3,4,3}. They correspond to the double cover of the rotational symmetry group of the regular tetrahedron. Similarly, the vertices of a regular 600-cell with Schläfli symbol {3,3,5} can be taken as the unit icosians, corresponding to the double cover of the rotational symmetry group of the regular icosahedron. The double cover of the rotational symmetry group of the regular octahedron corresponds to the quaternions that represent the vertices of the disphenoidal 288-cell.
Quaternion algebras
The quaternions can be generalized into further algebras called quaternion algebras. Take F to be any field with characteristic different from 2, and a and b to be elements of F; a four-dimensional unitary associative algebra can be defined over F with basis 1, i, j, and ij, where i² = a, j² = b and ij = −ji (so (ij)² = −ab).
Quaternion algebras are isomorphic to the algebra of 2 × 2 matrices over F or form division algebras over F, depending on the choice of a and b.
Quaternions as the even part of Cl3,0(ℝ)
The usefulness of quaternions for geometrical computations can be generalised to other dimensions by identifying the quaternions as the even part Cl+3,0(ℝ) of the Clifford algebra Cl3,0(ℝ). This is an associative multivector algebra built up from fundamental basis elements σ1, σ2, σ3 using the product rules
σ1² = σ2² = σ3² = 1,
σi σj = −σj σi  (i ≠ j).
If these fundamental basis elements are taken to represent vectors in 3D space, then it turns out that the reflection of a vector r in a plane perpendicular to a unit vector w can be written:
r′ = −w r w.
Two reflections make a rotation by an angle twice the angle between the two reflection planes, so
r″ = σ2σ1 r σ1σ2
corresponds to a rotation of 180° in the plane containing σ1 and σ2. This is very similar to the corresponding quaternion formula,
r″ = −k r k.
Indeed, the two structures Cl+3,0(ℝ) and ℍ are isomorphic. One natural identification is
1 ↦ 1,  i ↦ −σ2σ3,  j ↦ −σ3σ1,  k ↦ −σ1σ2,
and it is straightforward to confirm that this preserves the Hamilton relations
i² = j² = k² = ijk = −1.
In this picture, so-called "vector quaternions" (that is, pure imaginary quaternions) correspond not to vectors but to bivectors – quantities with magnitudes and orientations associated with particular 2D planes rather than 1D directions. The relation to complex numbers becomes clearer, too: in 2D, with two vector directions and , there is only one bivector basis element , so only one imaginary. But in 3D, with three vector directions, there are three bivector basis elements , , , so three imaginaries.
This reasoning extends further. In the Clifford algebra Cl4,0(ℝ) there are six bivector basis elements, since with four different basic vector directions, six different pairs and therefore six different linearly independent planes can be defined. Rotations in such spaces using these generalisations of quaternions, called rotors, can be very useful for applications involving homogeneous coordinates. But it is only in 3D that the number of basis bivectors equals the number of basis vectors, and each bivector can be identified as a pseudovector.
There are several advantages for placing quaternions in this wider setting:
Rotors are a natural part of geometric algebra and easily understood as the encoding of a double reflection.
In geometric algebra, a rotor and the objects it acts on live in the same space. This eliminates the need to change representations and to encode new data structures and methods, which is traditionally required when augmenting linear algebra with quaternions.
Rotors are universally applicable to any element of the algebra, not just vectors and other quaternions, but also lines, planes, circles, spheres, rays, and so on.
In the conformal model of Euclidean geometry, rotors allow the encoding of rotation, translation and scaling in a single element of the algebra, universally acting on any element. In particular, this means that rotors can represent rotations around an arbitrary axis, whereas quaternions are limited to an axis through the origin.
Rotor-encoded transformations make interpolation particularly straightforward.
Rotors carry over naturally to pseudo-Euclidean spaces, for example, the Minkowski space of special relativity. In such spaces rotors can be used to efficiently represent Lorentz boosts, and to interpret formulas involving the gamma matrices.
For further detail about the geometrical uses of Clifford algebras, see Geometric algebra.
Brauer group
The quaternions are "essentially" the only (non-trivial) central simple algebra (CSA) over the real numbers, in the sense that every CSA over the real numbers is Brauer equivalent to either the real numbers or the quaternions. Explicitly, the Brauer group of the real numbers consists of two classes, represented by the real numbers and the quaternions, where the Brauer group is the set of all CSAs, up to equivalence relation of one CSA being a matrix ring over another. By the Artin–Wedderburn theorem (specifically, Wedderburn's part), CSAs are all matrix algebras over a division algebra, and thus the quaternions are the only non-trivial division algebra over the real numbers.
CSAs – finite dimensional rings over a field, which are simple algebras (have no non-trivial 2-sided ideals, just as with fields) whose center is exactly the field – are a noncommutative analog of extension fields, and are more restrictive than general ring extensions. The fact that the quaternions are the only non-trivial CSA over the real numbers (up to equivalence) may be compared with the fact that the complex numbers are the only non-trivial finite field extension of the real numbers.
Quotations
| Mathematics | Linear algebra | null |
51442 | https://en.wikipedia.org/wiki/Zorn%27s%20lemma | Zorn's lemma | Zorn's lemma, also known as the Kuratowski–Zorn lemma, is a proposition of set theory. It states that a partially ordered set containing upper bounds for every chain (that is, every totally ordered subset) necessarily contains at least one maximal element.
The lemma was proved (assuming the axiom of choice) by Kazimierz Kuratowski in 1922 and independently by Max Zorn in 1935. It occurs in the proofs of several theorems of crucial importance, for instance the Hahn–Banach theorem in functional analysis, the theorem that every vector space has a basis, Tychonoff's theorem in topology stating that every product of compact spaces is compact, and the theorems in abstract algebra that in a ring with identity every proper ideal is contained in a maximal ideal and that every field has an algebraic closure.
Zorn's lemma is equivalent to the well-ordering theorem and also to the axiom of choice, in the sense that within ZF (Zermelo–Fraenkel set theory without the axiom of choice) any one of the three is sufficient to prove the other two. An earlier formulation of Zorn's lemma is the Hausdorff maximal principle which states that every totally ordered subset of a given partially ordered set is contained in a maximal totally ordered subset of that partially ordered set.
Motivation
To prove the existence of a mathematical object that can be viewed as a maximal element in some partially ordered set in some way, one can try proving the existence of such an object by assuming there is no maximal element and using transfinite induction and the assumptions of the situation to get a contradiction. Zorn's lemma tidies up the conditions a situation needs to satisfy in order for such an argument to work and enables mathematicians to not have to repeat the transfinite induction argument by hand each time, but just check the conditions of Zorn's lemma.
Statement of the lemma
Preliminary notions:
A set P equipped with a binary relation ≤ that is reflexive (x ≤ x for every x), antisymmetric (if both x ≤ y and y ≤ x hold, then x = y), and transitive (if x ≤ y and y ≤ z then x ≤ z) is said to be (partially) ordered by ≤. Given two elements x and y of P with x ≤ y, y is said to be greater than or equal to x. The word "partial" is meant to indicate that not every pair of elements of a partially ordered set is required to be comparable under the order relation, that is, in a partially ordered set P with order relation ≤ there may be elements x and y with neither x ≤ y nor y ≤ x. An ordered set in which every pair of elements is comparable is called totally ordered.
Every subset S of a partially ordered set P can itself be seen as partially ordered by restricting the order relation inherited from P to S. A subset S of a partially ordered set P is called a chain (in P) if it is totally ordered in the inherited order.
An element m of a partially ordered set P with order relation ≤ is maximal (with respect to ≤) if there is no other element of P greater than m, that is, if there is no s in P with s ≠ m and m ≤ s. Depending on the order relation, a partially ordered set may have any number of maximal elements. However, a totally ordered set can have at most one maximal element.
Given a subset S of a partially ordered set P, an element u of P is an upper bound of S if it is greater than or equal to every element of S. Here, S is not required to be a chain, and u is required to be comparable to every element of S but need not itself be an element of S.
Zorn's lemma can then be stated as:
Lemma: Let P be a partially ordered set such that (1) P is nonempty and (2) every chain in P has an upper bound in P. Then P contains at least one maximal element.
In fact, property (1) is redundant, since property (2) says, in particular, that the empty chain has an upper bound in P, implying that P is nonempty. However, in practice, one often checks (1) and then verifies (2) only for nonempty chains, since the case of the empty chain is taken care of by (1).
In the terminology of Bourbaki, a partially ordered set is called inductive if each chain has an upper bound in the set (in particular, the set is then nonempty). Then the lemma can be stated as:
Lemma: Every inductive ordered set has a maximal element.
For some applications, the following variant may be useful:
Lemma: Suppose that every nonempty chain of a nonempty partially ordered set P has an upper bound in P. Then, for each element p of P, there is a maximal element of P that is greater than or equal to p.
Indeed, let P′ = { x ∈ P : p ≤ x }, with the partial ordering from P. Then, for a chain in P′, an upper bound of it in P is in P′; so P′ satisfies the hypothesis of Zorn's lemma, and a maximal element in P′ is a maximal element in P as well.
Example applications
Every vector space has a basis
Zorn's lemma can be used to show that every vector space V has a basis.
If V = {0}, then the empty set is a basis for V. Now, suppose that V ≠ {0}. Let P be the set consisting of all linearly independent subsets of V. Since V is not the zero vector space, there exists a nonzero element v of V, so P contains the linearly independent subset {v}. Furthermore, P is partially ordered by set inclusion (see inclusion order). Finding a maximal linearly independent subset of V is the same as finding a maximal element in P.
To apply Zorn's lemma, take a chain T in P (that is, T is a subset of P that is totally ordered). If T is the empty set, then {v} is an upper bound for T in P. Suppose then that T is non-empty. We need to show that T has an upper bound, that is, there exists a linearly independent subset B of V containing all the members of T.
Take B to be the union of all the sets in T. We wish to show that B is an upper bound for T in P. To do this, it suffices to show that B is a linearly independent subset of V.
Suppose otherwise, that B is not linearly independent. Then there exist vectors v1, v2, ..., vk ∈ B and scalars a1, a2, ..., ak, not all zero, such that
a1v1 + a2v2 + ⋯ + akvk = 0.
Since B is the union of all the sets in T, there are some sets S1, S2, ..., Sk ∈ T such that vi ∈ Si for every i = 1, 2, ..., k. As T is totally ordered, one of the sets S1, S2, ..., Sk must contain the others, so there is some set Si that contains all of v1, v2, ..., vk. This tells us there is a linearly dependent set of vectors in Si, contradicting that Si is linearly independent (because it is a member of P).
The hypothesis of Zorn's lemma has been checked, and thus there is a maximal element in P, in other words a maximal linearly independent subset B of V.
Finally, we show that B is indeed a basis of V. It suffices to show that B is a spanning set of V. Suppose for the sake of contradiction that B is not spanning. Then there exists some v ∈ V not covered by the span of B. This says that B ∪ {v} is a linearly independent subset of V that is larger than B, contradicting the maximality of B. Therefore, B is a spanning set of V, and thus, a basis of V.
Every nontrivial ring with unity contains a maximal ideal
Zorn's lemma can be used to show that every nontrivial ring R with unity contains a maximal ideal.
Let P be the set consisting of all proper ideals in R (that is, all ideals in R except R itself). Since R is non-trivial, the set P contains the trivial ideal {0}. Furthermore, P is partially ordered by set inclusion. Finding a maximal ideal in R is the same as finding a maximal element in P.
To apply Zorn's lemma, take a chain T in P. If T is empty, then the trivial ideal {0} is an upper bound for T in P. Assume then that T is non-empty. It is necessary to show that T has an upper bound, that is, there exists an ideal I ⊆ R containing all the members of T but still smaller than R (otherwise it would not be a proper ideal, so it is not in P).
Take I to be the union of all the ideals in T. We wish to show that I is an upper bound for T in P. We will first show that I is an ideal of R. For I to be an ideal, it must satisfy three conditions:
I is a nonempty subset of R,
For every x, y ∈ I, the sum x + y is in I,
For every r ∈ R and every x ∈ I, the product rx is in I.
#1 - I is a nonempty subset of R.
Because T contains at least one element, and that element contains at least 0, the union I contains at least 0 and is not empty. Every element of T is a subset of R, so the union I only consists of elements in R.
#2 - For every x, y ∈ I, the sum x + y is in I.
Suppose x and y are elements of I. Then there exist two ideals J, K ∈ T such that x is an element of J and y is an element of K. Since T is totally ordered, we know that J ⊆ K or K ⊆ J. Without loss of generality, assume the first case. Both x and y are members of the ideal K, therefore their sum x + y is a member of K, which shows that x + y is a member of I.
#3 - For every r ∈ R and every x ∈ I, the product rx is in I.
Suppose x is an element of I. Then there exists an ideal J ∈ T such that x is in J. If r ∈ R, then rx is an element of J and hence an element of I. Thus, I is an ideal in R.
Now, we show that I is a proper ideal. An ideal is equal to R if and only if it contains 1. (It is clear that if it is R then it contains 1; on the other hand, if it contains 1 and r is an arbitrary element of R, then r1 = r is an element of the ideal, and so the ideal is equal to R.) So, if I were equal to R, then it would contain 1, and that means one of the members of T would contain 1 and would thus be equal to R – but R is explicitly excluded from P.
The hypothesis of Zorn's lemma has been checked, and thus there is a maximal element in P, in other words a maximal ideal in R.
Proof sketch
A sketch of the proof of Zorn's lemma follows, assuming the axiom of choice. Suppose the lemma is false. Then there exists a partially ordered set, or poset, P such that every totally ordered subset has an upper bound, and that for every element in P there is another element bigger than it. For every totally ordered subset T we may then define a bigger element b(T), because T has an upper bound, and that upper bound has a bigger element. To actually define the function b, we need to employ the axiom of choice (explicitly: let B(T) = { u ∈ P : u > t for every t ∈ T }, the set of strict upper bounds for T, which is nonempty by the remarks above; the axiom of choice furnishes a choice function b with b(T) ∈ B(T)).
Using the function b, we are going to define elements a0 < a1 < a2 < a3 < ... < aω < aω+1 <…, in P. This uncountable sequence is really long: the indices are not just the natural numbers, but all ordinals. In fact, the sequence is too long for the set P; there are too many ordinals (a proper class), more than there are elements in any set (in other words, given any set of ordinals, there exists a larger ordinal), and the set P will be exhausted before long and then we will run into the desired contradiction.
The ai are defined by transfinite recursion: we pick a0 in P arbitrarily (this is possible, since P contains an upper bound for the empty set and is thus not empty) and for any other ordinal w we set aw = b({av : v < w}). Because the av are totally ordered, this is a well-founded definition.
The above proof can be formulated without explicitly referring to ordinals by considering the initial segments {av : v < w} as subsets of P. Such sets can be easily characterized as well-ordered chains S ⊆ P where each x ∈ S satisfies x = b({y ∈ S : y < x}). Contradiction is reached by noting that we can always find a "next" initial segment either by taking the union of all such S (corresponding to the limit ordinal case) or by appending b(S) to the "last" S (corresponding to the successor ordinal case).
This proof shows that actually a slightly stronger version of Zorn's lemma is true:
Lemma: If P is a poset in which every well-ordered subset has an upper bound, and if x is any element of P, then P has a maximal element greater than or equal to x. That is, there is a maximal element which is comparable to x.
Alternatively, one can use the same proof for the Hausdorff maximal principle. This is the proof given, for example, in Halmos' Naive Set Theory, or in the proof section below.
Finally, the Bourbaki–Witt theorem can also be used to give a proof.
Proof
The basic idea of the proof is to reduce the proof to proving the following weak form of Zorn's lemma:
Lemma: Let F be a set consisting of subsets of some fixed set, such that (1) F is nonempty, (2) the union of each chain in F (ordered by inclusion) is a member of F, and (3) every subset of a member of F is a member of F. Then F has a maximal element with respect to inclusion.
(Note that, strictly speaking, (1) is redundant since (2) implies the empty set is in F.) Note the above is a weak form of Zorn's lemma since Zorn's lemma says in particular that any set of subsets satisfying the above (1) and (2) has a maximal element ((3) is not needed). The point is that, conversely, Zorn's lemma follows from this weak form. Indeed, let F be the set of all chains in P. Then F satisfies all of the above properties (it is nonempty since the empty subset is a chain). Thus, by the above weak form, we find a maximal element in F; i.e., a maximal chain C in P. By the hypothesis of Zorn's lemma, C has an upper bound x in P. Then this x is a maximal element, since if x ≤ y, then C ∪ {y} is a chain, so by the maximality of C the element y is in C; hence y is smaller than or equal to x, and so x = y. Thus, x is maximal.
The proof of the weak form is given in Hausdorff maximal principle#Proof. Indeed, the existence of a maximal chain is exactly the assertion of the Hausdorff maximal principle.
The same proof also shows the following equivalent variant of Zorn's lemma:
Lemma: If P is a partially ordered set in which every chain has a least upper bound in P, then P has a maximal element.
Indeed, trivially, Zorn's lemma implies the above lemma. Conversely, the above lemma implies the aforementioned weak form of Zorn's lemma, since a union gives a least upper bound.
Zorn's lemma implies the axiom of choice
A proof that Zorn's lemma implies the axiom of choice illustrates a typical application of Zorn's lemma. (The structure of the proof is exactly the same as the one for the Hahn–Banach theorem.)
Given a set X of nonempty sets and its union U = ⋃X (which exists by the axiom of union), we want to show there is a function
f : X → U
such that f(S) ∈ S for each S ∈ X. To that end, consider the set
F = { f : f is a function whose domain is a subset of X, with f(S) ∈ S for every S in the domain of f }.
It is partially ordered by extension; i.e., f ≤ g if and only if f is the restriction of g. If T is a chain in F, then we can define the function g on the union of the domains of the functions in T by setting g(S) = f(S) when S is in the domain of f ∈ T. This is well-defined since if S is in the domains of both f and f′ in T, then one of them is the restriction of the other. The function g is also an element of F and is a common extension of all f's in T. Thus, we have shown that each chain in F has an upper bound in F. Hence, by Zorn's lemma, there is a maximal element f in F that is defined on some D ⊆ X. We want to show D = X. Suppose otherwise; then there is a set S in X \ D. As S is nonempty, it contains an element s. We can then extend f to a function f′ by setting f′(S) = s and letting f′ agree with f on D. (Note this step does not need the axiom of choice.) The function f′ is in F and f < f′, a contradiction to the maximality of f.
Essentially the same proof also shows that Zorn's lemma implies the well-ordering theorem: take P to be the set of all well-ordered subsets of a given set X (ordered by continuation, where one well-ordered subset is below another if it is an initial segment of it), and then show that a maximal element of P is a well-ordering of all of X.
History
The Hausdorff maximal principle is an early statement similar to Zorn's lemma.
Kazimierz Kuratowski proved in 1922 a version of the lemma close to its modern formulation (it applies to sets ordered by inclusion and closed under unions of well-ordered chains). Essentially the same formulation (weakened by using arbitrary chains, not just well-ordered) was independently given by Max Zorn in 1935, who proposed it as a new axiom of set theory replacing the well-ordering theorem, exhibited some of its applications in algebra, and promised to show its equivalence with the axiom of choice in another paper, which never appeared.
The name "Zorn's lemma" appears to be due to John Tukey, who used it in his book Convergence and Uniformity in Topology in 1940. Bourbaki's Théorie des Ensembles of 1939 refers to a similar maximal principle as "le théorème de Zorn". The name "Kuratowski–Zorn lemma" prevails in Poland and Russia.
Equivalent forms of Zorn's lemma
Zorn's lemma is equivalent (in ZF) to three main results:
Hausdorff maximal principle
Axiom of choice
Well-ordering theorem.
A well-known joke alluding to this equivalency (which may defy human intuition) is attributed to Jerry Bona:
"The Axiom of Choice is obviously true, the well-ordering principle obviously false, and who can tell about Zorn's lemma?"
Zorn's lemma is also equivalent to the strong completeness theorem of first-order logic.
Moreover, Zorn's lemma (or one of its equivalent forms) implies some major results in other mathematical areas. For example,
Banach's extension theorem which is used to prove one of the most fundamental results in functional analysis, the Hahn–Banach theorem
Every vector space has a basis, a result from linear algebra (to which it is equivalent). In particular, the real numbers, as a vector space over the rational numbers, possess a Hamel basis.
Every commutative unital ring has a maximal ideal, a result from ring theory known as Krull's theorem, to which Zorn's lemma is equivalent
Tychonoff's theorem in topology (to which it is also equivalent)
Every proper filter is contained in an ultrafilter, a result that yields the completeness theorem of first-order logic
In this sense, Zorn's lemma is a powerful tool, applicable to many areas of mathematics.
Analogs under weakenings of the axiom of choice
A weakened form of Zorn's lemma can be proven from ZF + DC (Zermelo–Fraenkel set theory with the axiom of choice replaced by the axiom of dependent choice). Observe that a poset having no maximal element is equivalent to the statement that the poset's ordering relation is entire, which allows us to apply the axiom of dependent choice to construct a countable chain. As a result, any partially ordered set with exclusively finite chains must have a maximal element.
More generally, strengthening the axiom of dependent choice to higher ordinals allows us to generalize the statement in the previous paragraph to higher cardinalities. In the limit where we allow arbitrarily large ordinals, we recover the proof of the full Zorn's lemma using the axiom of choice in the preceding section.
In popular culture
The 1970 film Zorns Lemma is named after the lemma.
The lemma was referenced on The Simpsons in the episode "Bart's New Friend".
| Mathematics | Axiomatic systems | null |
51462 | https://en.wikipedia.org/wiki/Machine | Machine | A machine is a physical system that uses power to apply forces and control movement to perform an action. The term is commonly applied to artificial devices, such as those employing engines or motors, but also to natural biological macromolecules, such as molecular machines. Machines can be driven by animals and people, by natural forces such as wind and water, and by chemical, thermal, or electrical power, and include a system of mechanisms that shape the actuator input to achieve a specific application of output forces and movement. They can also include computers and sensors that monitor performance and plan movement, often called mechanical systems.
Renaissance natural philosophers identified six simple machines which were the elementary devices that put a load into motion, and calculated the ratio of output force to input force, known today as mechanical advantage.
Modern machines are complex systems that consist of structural elements, mechanisms and control components and include interfaces for convenient use. Examples include: a wide range of vehicles, such as trains, automobiles, boats and airplanes; appliances in the home and office, including computers, building air handling and water handling systems; as well as farm machinery, machine tools and factory automation systems and robots.
Etymology
The English word machine comes through Middle French from Latin machina, which in turn derives from the Greek mēkhanē (Doric makhana, Ionic mēkhanē 'contrivance, machine, engine', a derivation from mēkhos 'means, expedient, remedy'). The word mechanical (Greek: mēkhanikos) comes from the same Greek roots. A wider meaning of 'fabric, structure' is found in classical Latin, but not in Greek usage. This meaning is found in late medieval French, and is adopted from the French into English in the mid-16th century.
In the 17th century, the word machine could also mean a scheme or plot, a meaning now expressed by the derived machination. The modern meaning develops out of specialized application of the term to stage engines used in theater and to military siege engines, both in the late 16th and early 17th centuries. The OED traces the formal, modern meaning to John Harris' Lexicon Technicum (1704), which has:
Machine, or Engine, in Mechanicks, is whatsoever hath Force sufficient either to raise or stop the Motion of a Body. Simple Machines are commonly reckoned to be Six in Number, viz. the Ballance, Leaver, Pulley, Wheel, Wedge, and Screw. Compound Machines, or Engines, are innumerable.
The word engine used as a (near-) synonym both by Harris and in later language derives ultimately (via Old French) from Latin 'ingenuity, an invention'.
History
The hand axe, made by chipping flint to form a wedge, in the hands of a human transforms force and movement of the tool into transverse splitting forces and movement of the workpiece. The hand axe is the first example of a wedge, the oldest of the six classic simple machines, on which most machines are based. The second oldest simple machine was the inclined plane (ramp), which has been used since prehistoric times to move heavy objects.
The other four simple machines were invented in the ancient Near East. The wheel, along with the wheel and axle mechanism, was invented in Mesopotamia (modern Iraq) during the 5th millennium BC. The lever mechanism first appeared around 5,000 years ago in the Near East, where it was used in a simple balance scale, and to move large objects in ancient Egyptian technology. The lever was also used in the shadoof water-lifting device, the first crane machine, which appeared in Mesopotamia and later in ancient Egyptian technology. The earliest evidence of pulleys dates back to Mesopotamia in the early 2nd millennium BC, and ancient Egypt during the Twelfth Dynasty (1991–1802 BC). The screw, the last of the simple machines to be invented, first appeared in Mesopotamia during the Neo-Assyrian period (911–609 BC). The Egyptian pyramids were built using three of the six simple machines: the inclined plane, the wedge, and the lever.
Three of the simple machines were studied and described by the Greek philosopher Archimedes around the 3rd century BC: the lever, pulley, and screw. Archimedes discovered the principle of mechanical advantage in the lever. Later Greek philosophers defined the classic five simple machines (excluding the inclined plane) and were able to roughly calculate their mechanical advantage. Hero of Alexandria (1st century AD) in his work Mechanics lists five mechanisms that can "set a load in motion": the lever, windlass, pulley, wedge, and screw, and describes their fabrication and uses. However, the Greeks' understanding was limited to statics (the balance of forces) and did not include dynamics (the tradeoff between force and distance) or the concept of work.
The earliest practical wind-powered machines, the windmill and wind pump, first appeared in the Muslim world during the Islamic Golden Age, in what are now Iran, Afghanistan, and Pakistan, by the 9th century AD. The earliest practical steam-powered machine was a steam jack driven by a steam turbine, described in 1551 by Taqi ad-Din Muhammad ibn Ma'ruf in Ottoman Egypt.
The cotton gin was invented in India by the 6th century AD, and the spinning wheel was invented in the Islamic world by the early 11th century, both of which were fundamental to the growth of the cotton industry. The spinning wheel was also a precursor to the spinning jenny.
The earliest programmable machines were developed in the Muslim world. A music sequencer, a programmable musical instrument, was the earliest type of programmable machine. The first music sequencer was an automated flute player invented by the Banu Musa brothers, described in their Book of Ingenious Devices, in the 9th century. In 1206, Al-Jazari invented programmable automata/robots. He described four automaton musicians, including drummers operated by a programmable drum machine, which could be made to play different rhythms and different drum patterns.
During the Renaissance, the dynamics of the Mechanical Powers, as the simple machines were called, began to be studied from the standpoint of how much useful work they could perform, leading eventually to the new concept of mechanical work. In 1586 Flemish engineer Simon Stevin derived the mechanical advantage of the inclined plane, and it was included with the other simple machines. The complete dynamic theory of simple machines was worked out by Italian scientist Galileo Galilei in 1600 in Le Meccaniche ("On Mechanics"). He was the first to understand that simple machines do not create energy, they merely transform it.
The classic rules of sliding friction in machines were discovered by Leonardo da Vinci (1452–1519), but remained unpublished in his notebooks. They were rediscovered by Guillaume Amontons (1699) and were further developed by Charles-Augustin de Coulomb (1785).
James Watt patented his parallel motion linkage in 1782, which made the double-acting steam engine practical. The Boulton and Watt steam engine and later designs powered steam locomotives, steam ships, and factories.
The Industrial Revolution was a period from 1750 to 1850 during which changes in agriculture, manufacturing, mining, transportation, and technology had a profound effect on the social, economic and cultural conditions of the times. It began in the United Kingdom, then subsequently spread throughout Western Europe, North America, Japan, and eventually the rest of the world.
Starting in the later part of the 18th century, parts of Great Britain's economy, previously based on manual labour and draft animals, began a transition towards machine-based manufacturing. It started with the mechanisation of the textile industries, the development of iron-making techniques and the increased use of refined coal.
Simple machines
The idea that a machine can be decomposed into simple movable elements led Archimedes to define the lever, pulley and screw as simple machines. By the time of the Renaissance this list increased to include the wheel and axle, wedge and inclined plane. The modern approach to characterizing machines focusses on the components that allow movement, known as joints.
Wedge (hand axe): Perhaps the first example of a device designed to manage power is the hand axe, also called biface and Olorgesailie. A hand axe is made by chipping stone, generally flint, to form a bifacial edge, or wedge. A wedge is a simple machine that transforms lateral force and movement of the tool into a transverse splitting force and movement of the workpiece. The available power is limited by the effort of the person using the tool, but because power is the product of force and movement, the wedge amplifies the force by reducing the movement. This amplification, or mechanical advantage, is the ratio of the input speed to output speed. For a wedge this is given by 1/tanα, where α is the tip angle; a worked numeric sketch of this and the lever formula follows the Wheel entry below. The faces of a wedge are modeled as straight lines to form a sliding or prismatic joint.
Lever: The lever is another important and simple device for managing power. This is a body that pivots on a fulcrum. Because the velocity of a point farther from the pivot is greater than the velocity of a point near the pivot, forces applied far from the pivot are amplified near the pivot by the associated decrease in speed. If a is the distance from the pivot to the point where the input force is applied and b is the distance to the point where the output force is applied, then a/b is the mechanical advantage of the lever. The fulcrum of a lever is modeled as a hinged or revolute joint.
Wheel: The wheel is an important early machine; the chariot is a familiar example. A wheel uses the law of the lever to reduce the force needed to overcome friction when pulling a load. To see this, notice that the friction associated with pulling a load on the ground is approximately the same as the friction in a simple bearing that supports the load on the axle of a wheel. However, the wheel forms a lever that magnifies the pulling force so that it overcomes the frictional resistance in the bearing.
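The two mechanical-advantage formulas above are simple enough to check numerically. The following Python sketch is purely illustrative; the function names are ours, not part of any standard library.

```python
import math

def wedge_mechanical_advantage(tip_angle_deg: float) -> float:
    """Ideal mechanical advantage of a wedge: MA = 1 / tan(alpha),
    where alpha is the tip angle."""
    return 1.0 / math.tan(math.radians(tip_angle_deg))

def lever_mechanical_advantage(a: float, b: float) -> float:
    """Ideal mechanical advantage of a lever: MA = a / b, where a is the
    distance from the fulcrum to the input force and b the distance to
    the output force."""
    return a / b

# A narrow 10-degree wedge multiplies the applied force about 5.7 times.
print(wedge_mechanical_advantage(10))        # ~5.67
# Pushing 0.8 m from the fulcrum against a load 0.2 m away gives MA = 4.
print(lever_mechanical_advantage(0.8, 0.2))  # 4.0
```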
The classification of simple machines to provide a strategy for the design of new machines was developed by Franz Reuleaux, who collected and studied over 800 elementary machines. He recognized that the classical simple machines can be separated into the lever, pulley and wheel and axle that are formed by a body rotating about a hinge, and the inclined plane, wedge and screw that are similarly a block sliding on a flat surface.
Simple machines are elementary examples of kinematic chains or linkages that are used to model mechanical systems ranging from the steam engine to robot manipulators. The bearings that form the fulcrum of a lever and that allow the wheel and axle and pulleys to rotate are examples of a kinematic pair called a hinged joint. Similarly, the flat surface of an inclined plane and wedge are examples of the kinematic pair called a sliding joint. The screw is usually identified as its own kinematic pair called a helical joint.
This realization shows that it is the joints, or the connections that provide movement, that are the primary elements of a machine. Starting with four types of joints, the rotary joint, sliding joint, cam joint and gear joint, and related connections such as cables and belts, it is possible to understand a machine as an assembly of solid parts that connect these joints, called a mechanism.
Two levers, or cranks, are combined into a planar four-bar linkage by attaching a link that connects the output of one crank to the input of another. Additional links can be attached to form a six-bar linkage or in series to form a robot.
Mechanical systems
A mechanical system manages power to accomplish a task that involves forces and movement. Modern machines are systems consisting of (i) a power source and actuators that generate forces and movement, (ii) a system of mechanisms that shape the actuator input to achieve a specific application of output forces and movement, (iii) a controller with sensors that compare the output to a performance goal and then directs the actuator input, and (iv) an interface to an operator consisting of levers, switches, and displays. This can be seen in Watt's steam engine in which the power is provided by steam expanding to drive the piston. The walking beam, coupler and crank transform the linear movement of the piston into rotation of the output pulley. Finally, the pulley rotation drives the flyball governor which controls the valve for the steam input to the piston cylinder.
The adjective "mechanical" refers to skill in the practical application of an art or science, as well as relating to or caused by movement, physical forces, properties or agents such as is dealt with by mechanics. Similarly Merriam-Webster Dictionary defines "mechanical" as relating to machinery or tools.
Power flow through a machine provides a way to understand the performance of devices ranging from levers and gear trains to automobiles and robotic systems. The German mechanician Franz Reuleaux wrote, "a machine is a combination of resistant bodies so arranged that by their means the mechanical forces of nature can be compelled to do work accompanied by certain determinate motion." Notice that forces and motion combine to define power.
More recently, Uicker et al. stated that a machine is "a device for applying power or changing its direction." McCarthy and Soh describe a machine as a system that "generally consists of a power source and a mechanism for the controlled use of this power."
Power sources
Human and animal effort were the original power sources for early machines.
Waterwheel: Waterwheels appeared around the world from about 300 BC, using flowing water to generate rotary motion, which was applied to milling grain and to powering lumber, machining and textile operations. Modern water turbines use water flowing through a dam to drive an electric generator.
Windmill: Early windmills captured wind power to generate rotary motion for milling operations. Modern wind turbines also drive a generator. This electricity in turn is used to drive motors forming the actuators of mechanical systems.
Engine: The word engine derives from "ingenuity" and originally referred to contrivances that may or may not be physical devices. A steam engine uses heat to boil water contained in a pressure vessel; the expanding steam drives a piston or a turbine. This principle can be seen in the aeolipile of Hero of Alexandria. This is called an external combustion engine.
An automobile engine is called an internal combustion engine because it burns fuel (an exothermic chemical reaction) inside a cylinder and uses the expanding gases to drive a piston. A jet engine uses a turbine to compress air which is burned with fuel so that it expands through a nozzle to provide thrust to an aircraft, and so is also an "internal combustion engine."
Power plant: The heat from coal and natural gas combustion in a boiler generates steam that drives a steam turbine to rotate an electric generator. A nuclear power plant uses heat from a nuclear reactor to generate steam and electric power. This power is distributed through a network of transmission lines for industrial and individual use.
Motors: Electric motors use either AC or DC electric current to generate rotational movement. Electric servomotors are the actuators for mechanical systems ranging from robotic systems to modern aircraft.
Fluid Power: Hydraulic and pneumatic systems use electrically driven pumps to drive water or air respectively into cylinders to power linear movement.
Electrochemical: Chemicals and materials can also be sources of power. They may chemically deplete or need re-charging, as is the case with batteries, or they may produce power without changing their state, which is the case for solar cells and thermoelectric generators. All of these, however, still require their energy to come from elsewhere. With batteries, it is the already existing chemical potential energy inside. In solar cells and thermoelectrics, the energy source is light and heat respectively.
Mechanisms
The mechanism of a mechanical system is assembled from components called machine elements. These elements provide structure for the system and control its movement.
The structural components are, generally, the frame members, bearings, splines, springs, seals, fasteners and covers. The shape, texture and color of covers provide a styling and operational interface between the mechanical system and its users.
The assemblies that control movement are also called "mechanisms." Mechanisms are generally classified as gears and gear trains, which includes belt drives and chain drives, cam and follower mechanisms, and linkages, though there are other special mechanisms such as clamping linkages, indexing mechanisms, escapements and friction devices such as brakes and clutches.
The number of degrees of freedom of a mechanism, or its mobility, depends on the number of links and joints and the types of joints used to construct the mechanism. The general mobility of a mechanism is the difference between the unconstrained freedom of the links and the number of constraints imposed by the joints. It is described by the Chebychev–Grübler–Kutzbach criterion.
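For planar mechanisms, the Chebychev–Grübler–Kutzbach criterion takes the standard form M = 3(N − 1) − 2j1 − j2, where N counts the links (including the fixed frame), j1 the one-degree-of-freedom joints (hinges and sliders) and j2 the two-degree-of-freedom joints. A minimal sketch, assuming this planar form (the function name is ours):

```python
def planar_mobility(n_links: int, j1: int, j2: int = 0) -> int:
    """Chebychev-Grubler-Kutzbach mobility of a planar mechanism.

    n_links: number of links, counting the fixed frame
    j1: joints with one degree of freedom (revolute, prismatic)
    j2: joints with two degrees of freedom (e.g. a pin in a slot)
    """
    return 3 * (n_links - 1) - 2 * j1 - j2

# A planar four-bar linkage: 4 links and 4 revolute joints give mobility 1,
# i.e. a single input crank fully determines the motion.
print(planar_mobility(4, 4))  # 1
```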
Gears and gear trains
The transmission of rotation between contacting toothed wheels can be traced back to the Antikythera mechanism of Greece and the south-pointing chariot of China. Illustrations by the Renaissance scientist Georgius Agricola show gear trains with cylindrical teeth. The implementation of the involute tooth yielded a standard gear design that provides a constant speed ratio. Some important features of gears and gear trains are:
The ratio of the pitch circles of mating gears defines the speed ratio and the mechanical advantage of the gear set (see the sketch after this list).
A planetary gear train provides high gear reduction in a compact package.
It is possible to design gear teeth for gears that are non-circular, yet still transmit torque smoothly.
The speed ratios of chain and belt drives are computed in the same way as gear ratios. See bicycle gearing.
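As a minimal illustration of these ratio rules (the function names are ours, not a standard API), the speed ratio of a gear pair or a compound train follows from tooth counts, which are proportional to the pitch-circle radii:

```python
def gear_pair_ratio(driver_teeth: int, driven_teeth: int) -> float:
    """Speed ratio of one meshing pair: output speed / input speed.

    Tooth counts are proportional to pitch-circle radii, so the driven
    wheel turns driver_teeth/driven_teeth times as fast as the driver.
    """
    return driver_teeth / driven_teeth

def train_ratio(pairs: list[tuple[int, int]]) -> float:
    """Overall speed ratio of a gear train: the product of pair ratios."""
    ratio = 1.0
    for driver, driven in pairs:
        ratio *= gear_pair_ratio(driver, driven)
    return ratio

# A 20-tooth pinion driving an 80-tooth wheel slows the output 4x
# (and, ideally, multiplies torque by 4 - the mechanical advantage).
print(gear_pair_ratio(20, 80))            # 0.25
# Two such stages in series give a 16:1 reduction.
print(train_ratio([(20, 80), (20, 80)]))  # 0.0625
```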
Cam and follower mechanisms
A cam and follower is formed by the direct contact of two specially shaped links. The driving link is called the cam (also see cam shaft) and the link that is driven through the direct contact of their surfaces is called the follower. The shape of the contacting surfaces of the cam and follower determines the movement of the mechanism.
Linkages
A linkage is a collection of links connected by joints. Generally, the links are the structural elements and the joints allow movement. Perhaps the single most useful example is the planar four-bar linkage. However, there are many more special linkages:
Watt's linkage is a four-bar linkage that generates an approximate straight line. It was critical to the operation of his design for the steam engine. This linkage also appears in vehicle suspensions to prevent side-to-side movement of the body relative to the wheels. Also see the article Parallel motion.
The success of Watt's linkage led to the design of similar approximate straight-line linkages, such as Hoeken's linkage and Chebyshev's linkage.
The Peaucellier linkage generates a true straight-line output from a rotary input.
The Sarrus linkage is a spatial linkage that generates straight-line movement from a rotary input.
The Klann linkage and the Jansen linkage are recent inventions that provide interesting walking movements. They are respectively a six-bar and an eight-bar linkage.
Planar mechanism
A planar mechanism is a mechanical system that is constrained so the trajectories of points in all the bodies of the system lie on planes parallel to a ground plane. The rotational axes of hinged joints that connect the bodies in the system are perpendicular to this ground plane.
Spherical mechanism
A spherical mechanism is a mechanical system in which the bodies move in a way that the trajectories of points in the system lie on concentric spheres. The rotational axes of hinged joints that connect the bodies in the system pass through the common center of these spheres.
Spatial mechanism
A spatial mechanism is a mechanical system that has at least one body that moves in a way that its point trajectories are general space curves. The rotational axes of hinged joints that connect the bodies in the system form lines in space that do not intersect and have distinct common normals.
Flexure mechanisms
A flexure mechanism consists of a series of rigid bodies connected by compliant elements (also known as flexure joints) that is designed to produce a geometrically well-defined motion upon application of a force.
Machine elements
The elementary mechanical components of a machine are termed machine elements. These elements consist of three basic types (i) structural components such as frame members, bearings, axles, splines, fasteners, seals, and lubricants, (ii) mechanisms that control movement in various ways such as gear trains, belt or chain drives, linkages, cam and follower systems, including brakes and clutches, and (iii) control components such as buttons, switches, indicators, sensors, actuators and computer controllers. While generally not considered to be a machine element, the shape, texture and color of covers are an important part of a machine that provide a styling and operational interface between the mechanical components of a machine and its users.
Structural components
A number of machine elements provide important structural functions, such as the frame, bearings, splines, springs and seals.
The recognition that the frame of a mechanism is an important machine element changed the name three-bar linkage into four-bar linkage. Frames are generally assembled from truss or beam elements.
Bearings are components designed to manage the interface between moving elements and are the source of friction in machines. In general, bearings are designed for pure rotation or straight line movement.
Splines and keys are two ways to reliably mount an axle to a wheel, pulley or gear so that torque can be transferred through the connection.
Springs provide forces that can either hold components of a machine in place or act as a suspension to support part of a machine.
Seals are used between mating parts of a machine to ensure fluids, such as water, hot gases, or lubricant do not leak between the mating surfaces.
Fasteners such as screws, bolts, spring clips, and rivets are critical to the assembly of components of a machine. Fasteners are generally considered to be removable. In contrast, joining methods, such as welding, soldering, crimping and the application of adhesives, usually require cutting the parts to disassemble the components.
Controllers
Controllers combine sensors, logic, and actuators to maintain the performance of components of a machine. Perhaps the best known is the flyball governor for a steam engine. Examples of these devices range from a thermostat that opens a valve to cooling water as temperature rises, to speed controllers such as the cruise control system in an automobile. The programmable logic controller replaced relays and specialized control mechanisms with a programmable computer. Servomotors that accurately position a shaft in response to an electrical command are the actuators that make robotic systems possible.
Computing machines
Charles Babbage designed machines to tabulate logarithms and other functions in 1837. His Difference engine can be considered an advanced mechanical calculator and his Analytical Engine a forerunner of the modern computer, though none of the larger designs were completed in Babbage's lifetime.
The Arithmometer and the Comptometer are mechanical computers that are precursors to modern digital computers. Models used to study modern computers are termed state machines and Turing machines.
Molecular machines
The biological molecule myosin reacts to ATP and ADP to alternately engage with an actin filament and change its shape in a way that exerts a force, and then disengage to reset its shape, or conformation. This acts as the molecular drive that causes muscle contraction. Similarly, the biological molecule kinesin has two sections that alternately engage and disengage with microtubules, causing the molecule to move along the microtubule and transport vesicles within the cell. A third motor protein, dynein, moves cargo inside cells towards the nucleus and produces the axonemal beating of motile cilia and flagella. "In effect, the motile cilium is a nanomachine composed of perhaps over 600 proteins in molecular complexes, many of which also function independently as nanomachines. Flexible linkers allow the mobile protein domains connected by them to recruit their binding partners and induce long-range allostery via protein domain dynamics." Other biological machines are responsible for energy production, for example ATP synthase, which harnesses energy from proton gradients across membranes to drive a turbine-like motion used to synthesise ATP, the energy currency of a cell. Still other machines are responsible for gene expression, including DNA polymerases for replicating DNA, RNA polymerases for producing mRNA, the spliceosome for removing introns, and the ribosome for synthesising proteins. These machines and their nanoscale dynamics are far more complex than any molecular machines that have yet been artificially constructed. These molecules are increasingly considered to be nanomachines.
Researchers have used DNA to construct nano-dimensioned four-bar linkages.
Impact
Mechanization and automation
Mechanization (British English: mechanisation) is providing human operators with machinery that assists them with the muscular requirements of work or displaces muscular work. In some fields, mechanization includes the use of hand tools. In modern usage, such as in engineering or economics, mechanization implies machinery more complex than hand tools and would not include simple devices such as an un-geared horse or donkey mill. Devices that cause speed changes or changes to or from reciprocating to rotary motion, using means such as gears, pulleys or sheaves and belts, shafts, cams and cranks, usually are considered machines. After electrification, when most small machinery was no longer hand powered, mechanization was synonymous with motorized machines.
Automation is the use of control systems and information technologies to reduce the need for human work in the production of goods and services. In the scope of industrialization, automation is a step beyond mechanization. Whereas mechanization provides human operators with machinery to assist them with the muscular requirements of work, automation greatly decreases the need for human sensory and mental requirements as well. Automation plays an increasingly important role in the world economy and in daily experience.
Automata
An automaton (plural: automata or automatons) is a self-operating machine. The word is sometimes used to describe a robot, more specifically an autonomous robot. A Toy Automaton was patented in 1863.
Mechanics
Usher reports that Hero of Alexandria's treatise on Mechanics focussed on the study of lifting heavy weights. Today mechanics refers to the mathematical analysis of the forces and movement of a mechanical system, and consists of the study of the kinematics and dynamics of these systems.
Dynamics of machines
The dynamic analysis of machines begins with a rigid-body model to determine reactions at the bearings, at which point the elasticity effects are included. The rigid-body dynamics studies the movement of systems of interconnected bodies under the action of external forces. The assumption that the bodies are rigid, which means that they do not deform under the action of applied forces, simplifies the analysis by reducing the parameters that describe the configuration of the system to the translation and rotation of reference frames attached to each body.
The dynamics of a rigid body system is defined by its equations of motion, which are derived using either Newton's laws of motion or Lagrangian mechanics. The solution of these equations of motion defines how the configuration of the system of rigid bodies changes as a function of time. The formulation and solution of rigid body dynamics is an important tool in the computer simulation of mechanical systems.
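As a toy illustration of how such equations of motion are solved in a computer simulation (a sketch under our own assumptions, not a production dynamics engine), the Python code below integrates the single-body pendulum equation θ'' = −(g/L)·sin θ with a semi-implicit Euler step:

```python
import math

def simulate_pendulum(theta0: float, length: float = 1.0,
                      g: float = 9.81, dt: float = 1e-3,
                      steps: int = 2000) -> float:
    """Integrate theta'' = -(g/L) sin(theta) with semi-implicit Euler.

    Returns the angle (radians) after `steps` time steps of size dt.
    """
    theta, omega = theta0, 0.0
    for _ in range(steps):
        alpha = -(g / length) * math.sin(theta)  # angular acceleration
        omega += alpha * dt                      # update velocity first
        theta += omega * dt                      # then position
    return theta

# Released from rest at 30 degrees; after roughly one period (~2 s for a
# 1 m pendulum) the angle returns near its starting value.
print(simulate_pendulum(math.radians(30)))
```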
Kinematics of machines
The dynamic analysis of a machine requires the determination of the movement, or kinematics, of its component parts, known as kinematic analysis. The assumption that the system is an assembly of rigid components allows rotational and translational movement to be modeled mathematically as Euclidean, or rigid, transformations. This allows the position, velocity and acceleration of all points in a component to be determined from these properties for a reference point, and the angular position, angular velocity and angular acceleration of the component.
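A minimal numerical sketch of this idea, using Python with NumPy (the function and variable names are ours): given a planar body's reference-point position and velocity, orientation, and angular velocity, the position and velocity of any body-fixed point follow from a rigid transformation:

```python
import numpy as np

def point_kinematics(ref_pos, ref_vel, angle, omega, local_point):
    """Position and velocity of a body-fixed point on a planar rigid body.

    ref_pos, ref_vel: position and velocity of the reference point
    angle, omega: body orientation (rad) and angular velocity (rad/s)
    local_point: coordinates of the point in the body frame
    """
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s], [s, c]])      # planar rotation matrix
    r = R @ np.asarray(local_point)      # offset in the world frame
    pos = np.asarray(ref_pos) + r
    # v_P = v_ref + omega x r; in 2D the cross product rotates r by 90 deg
    vel = np.asarray(ref_vel) + omega * np.array([-r[1], r[0]])
    return pos, vel

# A point 0.5 m ahead of the reference on a body translating at 1 m/s
# while spinning at 2 rad/s: position (0.5, 0), velocity (1, 1).
print(point_kinematics([0, 0], [1, 0], 0.0, 2.0, [0.5, 0]))
```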
Machine design
Machine design refers to the procedures and techniques used to address the three phases of a machine's lifecycle:
invention, which involves the identification of a need, development of requirements, concept generation, prototype development, manufacturing, and verification testing;
performance engineering, which involves enhancing manufacturing efficiency, reducing service and maintenance demands, adding features, improving effectiveness, and validation testing;
recycling, which is the decommissioning and disposal phase and includes the recovery and reuse of materials and components.
| Technology | Basics_8 | null |
51469 | https://en.wikipedia.org/wiki/Mold | Mold | A mold () or mould () is one of the structures that certain fungi can form. The dust-like, colored appearance of molds is due to the formation of spores containing fungal secondary metabolites. The spores are the dispersal units of the fungi. Not all fungi form molds. Some fungi form mushrooms; others grow as single cells and are called microfungi (for example yeasts).
A large and taxonomically diverse number of fungal species form molds. The growth of hyphae results in discoloration and a fuzzy appearance, especially on food. The network of these tubular branching hyphae, called a mycelium, is considered a single organism. The hyphae are generally transparent, so the mycelium appears like very fine, fluffy white threads over the surface. Cross-walls (septa) may delimit connected compartments along the hyphae, each containing one or multiple, genetically identical nuclei. The dusty texture of many molds is caused by profuse production of asexual spores (conidia) formed by differentiation at the ends of hyphae. The mode of formation and shape of these spores is traditionally used to classify molds. Many of these spores are colored, making the fungus much more obvious to the human eye at this stage in its life-cycle.
Molds are considered to be microbes and do not form a specific taxonomic or phylogenetic grouping, but can be found in the divisions Zygomycota and Ascomycota. In the past, most molds were classified within the Deuteromycota. Mold has also been used as a common name for now non-fungal groups, such as water molds or slime molds, that were once considered fungi.
Molds cause biodegradation of natural materials, which can be unwanted when it becomes food spoilage or damage to property. They also play important roles in biotechnology and food science in the production of various pigments, foods, beverages, antibiotics, pharmaceuticals and enzymes. Some diseases of animals and humans can be caused by certain molds: disease may result from allergic sensitivity to mold spores, from growth of pathogenic molds within the body, or from the effects of ingested or inhaled toxic compounds (mycotoxins) produced by molds.
Biology
There are thousands of known species of mold fungi with diverse lifestyles including saprotrophs, mesophiles, psychrophiles and thermophiles, and a very few opportunistic pathogens of humans. They all require moisture for growth and some live in aquatic environments. Like all fungi, molds derive energy not through photosynthesis but from the organic matter on which they live, utilizing heterotrophy. Typically, molds secrete hydrolytic enzymes, mainly from the hyphal tips. These enzymes degrade complex biopolymers such as starch, cellulose and lignin into simpler substances which can be absorbed by the hyphae. In this way, molds play a major role in causing decomposition of organic material, enabling the recycling of nutrients throughout ecosystems. Many molds also synthesize mycotoxins and siderophores which, together with lytic enzymes, inhibit the growth of competing microorganisms. Molds can also grow on stored food for animals and humans, making the food unpalatable or toxic, and are thus a major source of food losses and illness. Many strategies for food preservation (salting, pickling, jams, bottling, freezing, drying) are intended to prevent or slow mold growth as well as the growth of other microbes.
Molds reproduce by producing large numbers of small spores, which may contain a single nucleus or be multinucleate. Mold spores can be asexual (the products of mitosis) or sexual (the products of meiosis); many species can produce both types. Some molds produce small, hydrophobic spores that are adapted for wind dispersal and may remain airborne for long periods; in some the cell walls are darkly pigmented, providing resistance to damage by ultraviolet radiation. Other mold spores have slimy sheaths and are more suited to water dispersal. Mold spores are often spherical or ovoid single cells, but can be multicellular and variously shaped. Spores may cling to clothing or fur; some are able to survive extremes of temperature and pressure.
Although molds can grow on dead organic matter everywhere in nature, their presence is visible to the unaided eye only when they form large colonies. A mold colony does not consist of discrete organisms but is an interconnected network of hyphae called a mycelium. All growth occurs at hyphal tips, with cytoplasm and organelles flowing forwards as the hyphae advance over or through new food sources. Nutrients are absorbed at the hyphal tip. In artificial environments such as buildings, humidity and temperature are often stable enough to foster the growth of mold colonies, commonly seen as a downy or furry coating growing on food or other surfaces.
Few molds can begin growing at or below typical refrigeration temperatures, which is why food is typically refrigerated to inhibit their growth. When conditions do not enable growth to take place, molds may remain alive in a dormant state, depending on the species, within a large range of temperatures. The many different mold species vary enormously in their tolerance to temperature and humidity extremes. Certain molds can survive harsh conditions such as the snow-covered soils of Antarctica, refrigeration, highly acidic solvents, anti-bacterial soap and even petroleum products such as jet fuel.
Xerophilic molds are able to grow in relatively dry, salty, or sugary environments, where water activity (aw) is less than 0.85; other molds need more moisture.
Common molds
Common genera of molds include:
Acremonium
Alternaria
Aspergillus
Cladosporium
Fusarium
Mucor
Penicillium
Rhizopus
Stachybotrys
Trichoderma
Trichophyton
Food production
The Kōji molds are a group of Aspergillus species, notably Aspergillus oryzae, and secondarily A. sojae, that have been cultured in eastern Asia for many centuries. They are used to ferment a soybean and wheat mixture to make soybean paste and soy sauce. Koji molds break down the starch in rice, barley, sweet potatoes, etc., a process called saccharification, in the production of sake, shōchū and other distilled spirits. Koji molds are also used in the preparation of Katsuobushi.
Red rice yeast is a product of the mold Monascus purpureus grown on rice, and is common in Asian diets. The yeast contains several compounds collectively known as monacolins, which are known to inhibit cholesterol synthesis. A study has shown that red rice yeast used as a dietary supplement, combined with fish oil and healthy lifestyle changes, may help reduce "bad" cholesterol as effectively as certain commercial statin drugs. Nonetheless, other work has shown it may not be reliable (perhaps due to non-standardization) and may even be toxic to the liver and kidneys.
Some sausages, such as salami, incorporate starter cultures of molds to improve flavor and reduce bacterial spoilage during curing. Penicillium nalgiovense, for example, may appear as a powdery white coating on some varieties of dry-cured sausage.
Other molds that have been used in food production include:
Fusarium venenatum – quorn
Geotrichum candidum – cheese
Neurospora sitophila – oncom
Penicillium spp. – various cheeses including Brie and Blue cheese
Rhizomucor miehei – microbial rennet for making vegetarian and other cheeses
Rhizopus oligosporus – tempeh
Rhizopus oryzae – tempeh, jiuqu for jiuniang or precursor for making Chinese rice wine
Pharmaceuticals from molds
Alexander Fleming's accidental discovery of the antibiotic penicillin involved a Penicillium mold called Penicillium rubrum (although the species was later established to be Penicillium rubens). Fleming continued to investigate penicillin, showing that it could inhibit various types of bacteria found in infections and other ailments, but he was unable to produce the compound in the amounts necessary for the production of a medicine. His work was expanded by a team at Oxford University: Clutterbuck, Lovell, and Raistrick, who began to work on the problem in 1931. This team was also unable to produce the pure compound in any large amount, and found that the purification process diminished its effectiveness and negated its anti-bacterial properties.
Howard Florey, Ernst Chain, Norman Heatley, Edward Abraham, also all at Oxford, continued the work. They enhanced and developed the concentration technique by using organic solutions rather than water, and created the "Oxford Unit" to measure penicillin concentration within a solution. They managed to purify the solution, increasing its concentration by 45–50 times, but found that a higher concentration was possible. Experiments were conducted and the results published in 1941, though the quantities of penicillin produced were not always high enough for the treatments required. As this was during the Second World War, Florey sought US government involvement. With research teams in the UK and some in the US, industrial-scale production of crystallized penicillin was developed during 1941–1944 by the USDA and by Pfizer.
Several statin cholesterol-lowering drugs (such as lovastatin, from Aspergillus terreus) are derived from molds.
The immunosuppressant drug cyclosporine, used to suppress the rejection of transplanted organs, is derived from the mold Tolypocladium inflatum.
Health effects
Molds are ubiquitous, and mold spores are a common component of household and workplace dust; however, when mold spores are present in large quantities, they can present a health hazard to humans, potentially causing allergic reactions and respiratory problems.
Some molds also produce mycotoxins that can pose serious health risks to humans and animals. Some studies claim that exposure to high levels of mycotoxins can lead to neurological problems and, in some cases, death. Prolonged exposure, e.g. daily home exposure, may be particularly harmful. Research on the health impacts of mold has not been conclusive. The term "toxic mold" refers to molds that produce mycotoxins, such as Stachybotrys chartarum, and not to all molds in general.
Mold in the home can usually be found in damp, dark or steamy areas, e.g. bathrooms, kitchens, cluttered storage areas, recently flooded areas, basement areas, plumbing spaces, areas with poor ventilation and outdoors in humid environments. Symptoms caused by mold allergy are: watery, itchy eyes; a chronic cough; headaches or migraines; difficulty breathing; rashes; tiredness; sinus problems; nasal blockage and frequent sneezing.
Molds can also pose a hazard to human and animal health when they are consumed following the growth of certain mold species in stored food. Some species produce toxic secondary metabolites, collectively termed mycotoxins, including aflatoxins, ochratoxins, fumonisins, trichothecenes, citrinin, and patulin. These toxic properties may be used for the benefit of humans when the toxicity is directed against other organisms; for example, penicillin adversely affects the growth of Gram-positive bacteria (e.g. Clostridium species), certain spirochetes and certain fungi.
Growth in buildings and homes
Mold growth in buildings generally occurs as fungi colonize porous building materials, such as wood. Many building products commonly incorporate paper, wood products, or solid wood members, such as paper-covered drywall, wood cabinets, and insulation. Interior mold colonization can lead to a variety of health problems as microscopic airborne reproductive spores, analogous to tree pollen, are inhaled by building occupants. High quantities of indoor airborne spores as compared to exterior conditions are strongly suggestive of indoor mold growth. Determination of airborne spore counts is accomplished by way of an air sample, in which a specialized pump with a known flow rate is operated for a known period of time. To account for background levels, air samples should be drawn from the affected area, a control area, and the exterior.
The air sampler pump draws in air and deposits microscopic airborne particles on a culture medium. The medium is cultured in a laboratory and the fungal genus and species are determined by visual microscopic observation. Laboratory results also quantify fungal growth by way of a spore count for comparison among samples. The pump operation time is recorded, and when multiplied by the pump flow rate it gives the volume of air sampled. Although a small volume of air is actually analyzed, common laboratory reports extrapolate the spore count data to estimate the spores that would be present in a cubic meter of air.
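The extrapolation itself is simple arithmetic. A hedged Python sketch (the sampling parameters shown are hypothetical, typical values chosen only for illustration):

```python
def spores_per_cubic_meter(spore_count: int,
                           flow_rate_l_per_min: float,
                           minutes: float) -> float:
    """Extrapolate a laboratory spore count to spores per cubic meter.

    The sampled air volume is flow rate x time; 1 cubic meter = 1000 L.
    """
    volume_m3 = flow_rate_l_per_min * minutes / 1000.0
    return spore_count / volume_m3

# E.g. 40 spores counted from a 15 L/min pump run for 5 minutes
# (0.075 m^3 of air) extrapolates to about 533 spores per cubic meter.
print(spores_per_cubic_meter(40, 15, 5))  # ~533.3
```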
Mold spores settle everywhere, but they germinate and grow most readily in specific environments, and will usually only turn into a full-blown outbreak if certain conditions are met. Various practices can be followed to mitigate mold issues in buildings, the most important of which is to reduce moisture levels that can facilitate mold growth. Air filtration reduces the number of spores available for germination, especially when a High Efficiency Particulate Air (HEPA) filter is used. A properly functioning AC unit also reduces the relative humidity in rooms. The United States Environmental Protection Agency (EPA) currently recommends that relative humidity be maintained below 60%, ideally between 30% and 50%, to inhibit mold growth.
Eliminating the moisture source is the first step at fungal remediation. Removal of affected materials may also be necessary for remediation, if materials are easily replaceable and not part of the load-bearing structure. Professional drying of concealed wall cavities and enclosed spaces such as cabinet toekick spaces may be required. Post-remediation verification of moisture content and fungal growth is required for successful remediation. Many contractors perform post-remediation verification themselves, but property owners may benefit from independent verification. Left untreated, mold can potentially cause serious cosmetic and structural damage to a property.
Use in art
Artists have used mold in a variety of artistic fashions. Daniele Del Nero, for example, constructs scale models of houses and office buildings and then induces mold to grow on them, giving them an unsettling, reclaimed-by-nature look. Stacy Levy sandblasts enlarged images of mold onto glass, then allows mold to grow in the crevices she has made, creating a macro-micro portrait. Sam Taylor-Johnson has made a number of time-lapse films capturing the gradual decay of classically arranged still lifes.
| Biology and health sciences | Basics | Plants |
51470 | https://en.wikipedia.org/wiki/Mycelium | Mycelium | Mycelium (pl.: mycelia) is a root-like structure of a fungus consisting of a mass of branching, thread-like hyphae. Its normal form is that of branched, slender, entangled, anastomosing, hyaline threads. Fungal colonies composed of mycelium are found in and on soil and many other substrates. A typical single spore germinates into a monokaryotic mycelium, which cannot reproduce sexually; when two compatible monokaryotic mycelia join and form a dikaryotic mycelium, that mycelium may form fruiting bodies such as mushrooms. A mycelium may be minute, forming a colony that is too small to see, or may grow to span thousands of acres as in Armillaria.
Through the mycelium, a fungus absorbs nutrients from its environment. It does this in a two-stage process. First, the hyphae secrete enzymes onto or into the food source, which break down biological polymers into smaller units such as monomers. These monomers are then absorbed into the mycelium by facilitated diffusion and active transport.
Mycelia are vital in terrestrial and aquatic ecosystems for their role in the decomposition of plant material. They contribute to the organic fraction of soil, and their growth releases carbon dioxide back into the atmosphere (see carbon cycle). Ectomycorrhizal extramatrical mycelium, as well as the mycelium of arbuscular mycorrhizal fungi, increases the efficiency of water and nutrient absorption of most plants and confers resistance to some plant pathogens. Mycelium is an important food source for many soil invertebrates, is vital to agriculture, and is important to almost all species of plants, many of which form close associations with the fungi. Mycelium is a primary factor in some plants' health, nutrient intake and growth, and a major factor in plant fitness.
Networks of mycelia can transport water and spikes of electrical potential.
Sclerotia are compact or hard masses of mycelium.
Uses
Agriculture
One of the primary roles of fungi in an ecosystem is to decompose organic compounds. Petroleum products and some pesticides (typical soil contaminants) are organic molecules (i.e., they are built on a carbon structure), and thereby show a potential carbon source for fungi. Hence, fungi have the potential to eradicate such pollutants from their environment unless the chemicals prove toxic to the fungus. This biological degradation is a process known as mycoremediation.
Mycelial mats have been suggested as having potential as biological filters, removing chemicals and microorganisms from soil and water. The use of fungal mycelium to accomplish this has been termed mycofiltration.
Knowledge of the relationship between mycorrhizal fungi and plants suggests new ways to improve crop yields.
When spread on logging roads, mycelium can act as a binder, holding disturbed new soil in place thus preventing washouts until woody plants can establish roots.
Fungi are essential for converting biomass into compost, as they decompose feedstock components such as lignin, which many other composting microorganisms cannot. Turning a backyard compost pile will commonly expose visible networks of mycelia that have formed on the decaying organic material within. Compost is an essential soil amendment and fertilizer for organic farming and gardening. Composting can divert a substantial fraction of municipal solid waste from landfills.
Commercial
Alternatives to polystyrene and plastic packaging can be produced by growing mycelium in agricultural waste.
Mycelium has also been used as a material in furniture and artificial leather.
One of the main commercial uses of mycelium is its use to create artificial leather. Animal leather contributes to a significant environmental footprint, as livestock farming is associated with deforestation, greenhouse gas emissions, and grazing. In addition, the production of synthetic leathers from polyvinyl chloride and polyurethane require the use of hazardous chemicals and fossil fuels, and they are not biodegradable (like plastic). Fungal-based artificial leather is cheaper to produce, has less of an environmental footprint, and is biodegradable. It costs between 18 and 28 cents to produce a square meter of raw mycelium, while it costs between $5.81 and $6.24 to produce a square meter of raw animal hide. Fungal growth is carbon neutral and pure mycelium is 94% biodegradable. However, the use of polymeric materials such as polyester or polylactic acid to improve artificial leather’s properties can negatively affect the biodegradability of the material.
To create leather, fungal mycelium is grown either using liquid-state or solid-state fermentation. In liquid-state fermentation, companies typically use laboratory media or agricultural byproducts to grow fungal biomass. The fungal biomass is then separated into fibers and processed using fiber suspension, filtration, pressing, and drying. These techniques are also commonly utilized in traditional papermaking processes. In solid-state fermentation, mycelium is grown on forestry bioproducts, like sawdust, in an environment with high carbon dioxide concentrations and controlled humidity and temperature. The mycelium mat formed on top of the particle bed is dehydrated, chemically treated, and then compressed to a desired thickness and engraved with a pattern.
Construction material
Mycelium is a strong candidate for sustainable construction primarily due to its lightweight biodegradable structure and its capacity to be grown from waste sources. In addition to this, mycelium has a relatively high strength-to-weight ratio and a much lower embodied energy compared to traditional building materials. Because mycelium takes the form of any mold it is grown in, it can also be advantageous for customization purposes, especially if it is employed as an architectural or aesthetic feature. Current research has also indicated that mycelium does not release toxic resins in the event of a fire because it has a charring effect similar to mass timber. Mycelium also performs well in acoustic insulation, with an absorbance of 70–75% for frequencies of 1500 Hz or less.
Strengths and weaknesses
Mycelium bio-composites have shown strong potential for structural applications, with much higher strength-to-weight ratios than that of conventional materials due primarily to its low density. Compared to conventional building materials, mycelium also has a number of desirable properties that make it an attractive alternative. For example, it has low thermal conductivity and can provide high acoustic insulation. It is biodegradable, has much lower embodied energy, and can serve as a carbon sink, which makes mycelium bio-composites a possible solution to the emissions, energy, and waste associated with building construction.
While mycelium has interesting implications as a structural material, there are several significant disadvantages that make it difficult to implement practically in large-scale projects. For one, mycelium does not have particularly high compressive strength on its own, ranging from 0.1–0.2 MPa. This is in stark contrast to traditional concrete, which typically has a compressive strength of 17–28 MPa. Moreover, because mycelium is considered a living material, it has specific requirements that make it susceptible to environmental conditions. For instance, it requires a constant source of air in order to stay alive, needs a relatively humid habitat to grow, and cannot be exposed to large amounts of water without risk of contamination and decay.
Mechanical properties
Three separate fungi species (Coriolus versicolor, Trametes ochracea, and Ganoderma sessile) were mixed independently with two substrates (apple and vine) and tested under separate incubation conditions in order to quantify certain mechanical properties of mycelium. To do this, samples were grown in molds, incubated, and dried over the course of 12 days. Samples were tested for water absorption using ASTM C272 guidelines and compared against an EPS material. Tiles of uniform size were cut from the fabricated mold and compressed in an Instron 3345 machine at 1 mm/min, up to 20% deformation.
Throughout a four-stage process, the impact of various substrate and fungal mixes was investigated along with properties of mycelium such as density, water absorption, and compressive strength. Samples were separated into two incubation methods and inspected for differences in color, texture, and growth. For the same fungus within each incubation method, minimal differences were recorded. However, across disparate substrate mixtures within the same fungus, colorization and external growth varied between the test samples. While loss of organic matter was calculated, no uniform correlation was found between the substrate used and the chemical properties of the material. For each of the substrate-fungi mixtures, average densities ranged from 174.1 kg/m3 to 244.9 kg/m3, with the Ganoderma sessile and apple substrate combination being the most dense. Compression tests revealed the Ganoderma sessile and vine substrate combination to have the highest strength of the samples tested, but no numerical value was provided. For reference, the surrounding literature gives a ballpark range of 1–72 kPa. Beyond this, mycelium has a thermal conductivity of 0.05–0.07 W/m·K, which is less than that of typical concrete.
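For readers unfamiliar with how such compressive-strength figures are obtained, the conversion from the load recorded by the testing machine to stress is simply force divided by cross-sectional area. A minimal Python sketch (the sample dimensions and load below are hypothetical, chosen only for illustration):

```python
def compressive_stress_kpa(load_n: float, width_mm: float,
                           depth_mm: float) -> float:
    """Compressive stress in kPa from a load (N) on a rectangular sample."""
    area_m2 = (width_mm / 1000.0) * (depth_mm / 1000.0)
    return load_n / area_m2 / 1000.0  # Pa -> kPa

# A 50 mm x 50 mm mycelium tile carrying 100 N at 20% deformation is
# under 40 kPa, within the 1-72 kPa range reported in the literature.
print(compressive_stress_kpa(100, 50, 50))  # 40.0
```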
Construction
The construction of mycelium structures is primarily categorized into three approaches: growing blocks in molds, growing monolithic structures in place, and bio-welding units together. The first approach cultivates mycelium and its substrate in forms, after which it is dried in ovens and then transported and assembled on site. The second approach uses existing formwork and adapts cast-in-place concrete techniques to grow monolithic mycelium structures in place. The third approach is a hybrid of the previous two, referred to as myco-welding, where individual pre-grown units are grown together into a larger monolithic structure.
Studies using grow-in-place methods and myco-welding have explored how to cultivate mycelium and re-use formwork in construction, and have investigated post-tensioning and friction connections. Research in fabrication has revealed some common challenges faced in the construction of mycelium structures, mostly related to the growth of the fungi. It can be difficult to cultivate living material into formwork, and it is susceptible to contamination if not properly sterilized. The fungus needs to be kept refrigerated to prevent hardening and to properly manage growth and substrate consumption. Additionally, the thickness of fungal growth is limited by the availability of oxygen; without oxygen, the center of the growth can die or be contaminated.
Environmental impact
Researchers have performed life-cycle assessments to evaluate the environmental impact of mycelium bio-composites. Life cycle analysis showed the viability of mycelium as a carbon sink material and as a sustainable alternative to conventional building materials. Use of mycelium as a natural adhesive material may provide environmental benefits, as the fungal-based composites that mycelium is used to create are low cost, low emission, and sustainable. These composites also have a wide range of applications and uses, many of which are in industries responsible for significant environmental pollution, like construction and packaging.
Modern construction and packaging materials are industrially fabricated, non-recyclable, and pollutive: wood products lead to severe deforestation and weather fluctuation; cement is nonbiodegradable and causes high emissions both in production and demolition. Mycelium appears to be cheaper and more sustainable than its counterparts.
Mycelium's adhesive properties are largely responsible for its diverse array of applications, as they allow it to bind certain substances together. These properties are products of its biological processes: mycelia secrete corrosive enzymes that allow them to degrade and colonize organic substrates. During degradation, mycelium develops a dense network of thin strands that fuse together within the organic substrate, creating a solid material that can hold multiple substrates together. This self-assembly property is quite unique, and allows mycelium to grow on a wide range of organic material, including organic waste.
Potential ecological role
Plants appear to communicate within an ecosystem using mycelium, the fungal network produced by mycorrhizal fungi. Mycelial networks constitute 20–30% of soil biomass, though traditional biomass measures fail to detect them. Some 83% of plants appear to exhibit mutualistic association with mycelium as an extension of their root systems, with varying levels of reliance. By some estimates, mycelial networks receive well over 10% of the photosynthesis output of their host plants.
This mutualism is initiated by hyphal connections in which mycelial strands infect and attach themselves to plant roots, penetrating the cell wall but not entering through the membrane into the plant cytoplasm. Mycelium interacts with the cell at the periarbuscular membrane, which behaves as a sort of exchange medium for nutrients and can produce electrical gradients allowing electrophysiological signals to be sent and received. In modeling studies, different fungi supply different levels of nutrients and growth-promoting materials, with plants tending to root towards (and thus being infected by) fungi supplying the most mineral phosphorus and nitrogen (both essential for plant growth).
Mycorrhizal mycelial associations may intensify competition between individuals of the same species, while alleviating competition between species, via the promotion of inferior competitors, thus promoting plant diversity within its network. In doing so, mycorrhizal fungi promote community ecology, with an added complexity of niche differentiation of different networks and types of mycorrhizal fungi that root at different depths, disperse different organic compounds and nutrients, and have unique interactions with specific species of plants.
Mycelial biology and memory
Several studies have documented the memory capacity of mycelial networks and their adaptability to specific environmental conditions. Mycelia have been specialized for different functions in various climates and develop symbiotic or pathogenic relationships with other organisms, such as the human pathogen Candida auris, which has developed a unique approach of evading detection by human neutrophils through adaptive selection, a process of fungal learning and memory. Additionally, these functions can change based on the scale of the mycelia and the nature of the symbiotic relationship; commensal and mutualistic relationships between fungi and plants form through a separate process known as mycorrhizal association, and the resulting structures are called mycorrhizae. Furthermore, hyphal organization into mycelial networks can be deterministic for a variety of functions, including biomass retention, water recycling, resource-efficient expansion of future hyphae towards desired nutrient gradients, and the subsequent distribution of these resources across the hyphal network. On a macroscopic scale, many mycelia operate with a sort of hierarchy, having a "trunk" or main mycelium with smaller "branches" branching off. Some saprotrophic basidiomycetes are able to remember past decisions about directional nutrition gradients and will build future mycelium in that direction.
Mycelial memory and intelligence
Current research on collective mycelial intelligence is limited, and while many studies have observed memory and the exchange of electric charge across mycelial networks, this is insufficient evidence to draw conclusions about how sensory data is processed in these networks. However, some examples of increased thermal resistance in filamentous fungi suggest a power-law relationship between memory and exposure to a stimulus. Mycelia have also demonstrated the ability to edit their genetic structures within a lifetime in response to antibiotic or other extracellular stressors, which can cause rapid acquisition of resistance genes, like those in C. auris. Additionally, plasmodial slime molds demonstrate a similar method of information sharing, as both mycelia and slime molds make use of cAMP molecules for aggregation and signaling.
Sclerotium
Sclerotium is a compact mass of hardened mycelium. For many years, sclerotia were mistaken for individual organisms and described as separate species. However, in the mid-19th century, it was shown that sclerotia are simply a stage in the life cycle of many fungi. Sclerotia are composed of thick, dense shells with dark cells. They are rich in emergency reserves for the hyphae, such as oil, and they contain small amounts of water. They can survive in dry environments for many years without losing the ability to grow. The size of sclerotia can range from less than a millimeter to tens of centimeters in diameter.
| Biology and health sciences | Fungus | null |
51472 | https://en.wikipedia.org/wiki/Spore | Spore | In biology, a spore is a unit of sexual (in fungi) or asexual reproduction that may be adapted for dispersal and for survival, often for extended periods of time, in unfavourable conditions. Spores form part of the life cycles of many plants, algae, fungi and protozoa. They were thought to have appeared as early as the mid-late Ordovician period as an adaptation of early land plants.
Bacterial spores are not part of a sexual cycle, but are resistant structures used for survival under unfavourable conditions. Myxozoan spores release amoeboid infectious germs ("amoebulae") into their hosts for parasitic infection, but also reproduce within the hosts through the pairing of two nuclei within the plasmodium, which develops from the amoebula.
In plants, spores are usually haploid and unicellular and are produced by meiosis in the sporangium of a diploid sporophyte. In rare cases, diploid spores are produced by some algae or fungi. Under favourable conditions, the spore can develop into a new organism using mitotic division, producing a multicellular gametophyte, which eventually goes on to produce gametes. Two gametes fuse to form a zygote, which develops into a new sporophyte. This cycle is known as alternation of generations.
The spores of seed plants are produced internally, and the megaspores (formed within the ovules) and the microspores are involved in the formation of more complex structures that form the dispersal units, the seeds and pollen grains.
Definition
The term spore derives from the ancient Greek word σπορά spora, meaning "seed, sowing", related to σπόρος , "sowing", and σπείρειν , "to sow".
In common parlance, the difference between a "spore" and a "gamete" is that a spore will germinate and develop into a sporeling, while a gamete needs to combine with another gamete to form a zygote before developing further.
The main difference between spores and seeds as dispersal units is that spores are unicellular, the first cell of a gametophyte, while seeds contain within them a developing embryo (the multicellular sporophyte of the next generation), produced by the fusion of the male gamete of the pollen tube with the female gamete formed by the megagametophyte within the ovule. Spores germinate to give rise to haploid gametophytes, while seeds germinate to give rise to diploid sporophytes.
Classification of spore-producing organisms
Plants
Vascular plant spores are always haploid. Vascular plants are either homosporous (or isosporous) or heterosporous. Plants that are homosporous produce spores of the same size and type.
Heterosporous plants, such as seed plants, spikemosses, quillworts, and ferns of the order Salviniales, produce spores of two different sizes: the larger spore (megaspore) in effect functioning as a "female" spore and the smaller (microspore) functioning as a "male". Such plants typically give rise to the two kinds of spores within separate sporangia, either a megasporangium that produces megaspores or a microsporangium that produces microspores. In flowering plants, these sporangia occur within the carpel and anthers, respectively.
Fungi
Fungi commonly produce spores during sexual and asexual reproduction. Spores are usually haploid and grow into mature haploid individuals through mitotic division of cells (Urediniospores and Teliospores among rusts are dikaryotic). Dikaryotic cells result from the fusion of two haploid gamete cells. Among sporogenic dikaryotic cells, karyogamy (the fusion of the two haploid nuclei) occurs to produce a diploid cell. Diploid cells undergo meiosis to produce haploid spores.
Classification of spores
Spores can be classified in several ways, such as by their spore-producing structure, function, origin during the life cycle, and mobility.
External anatomy
Under high magnification, spores often have complex patterns or ornamentation on their exterior surfaces. A specialized terminology has been developed to describe features of such patterns. Some markings represent apertures, places where the tough outer coat of the spore can be penetrated when germination occurs. Spores can be categorized based on the position and number of these markings and apertures. Alete spores show no lines. Monolete spores bear a single narrow line (laesura), indicating the prior contact of two spores that eventually separated. Trilete spores show three narrow lines radiating from a central pole, indicating that four spores shared a common origin and were initially in contact with one another, forming a tetrahedron. A wider aperture in the shape of a groove may be termed a colpus. The number of colpi distinguishes major groups of plants; eudicots, for example, have tricolpate spores (i.e. spores with three colpi).
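As an illustration, the scheme just described amounts to a simple counting rule, sketched below in Python (the function name and encoding are only an illustrative rendering of the categories above, not an established tool):

    def classify_spore(laesurae=0, colpi=0):
        # Classify a spore by its surface markings, per the scheme above.
        # laesurae: narrow contact lines; colpi: wider groove-shaped apertures.
        if colpi == 3:
            return "tricolpate"  # characteristic of eudicots
        if laesurae == 0:
            return "alete"       # no lines visible
        if laesurae == 1:
            return "monolete"    # single laesura from two separated spores
        if laesurae == 3:
            return "trilete"     # three lines from a tetrahedral tetrad
        return "unclassified"

    print(classify_spore(laesurae=3))  # -> trilete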
Spore tetrads and trilete spores
Envelope-enclosed spore tetrads are taken as the earliest evidence of plant life on land, dating from the mid-Ordovician (early Llanvirn, ~), a period from which no macrofossils have yet been recovered.
Individual trilete spores resembling those of modern cryptogamic plants first appeared in the fossil record at the end of the Ordovician period.
Dispersal
In fungi, both asexual and sexual spores or sporangiospores of many fungal species are actively dispersed by forcible ejection from their reproductive structures. This ejection ensures that the spores exit the reproductive structures and can travel through the air over long distances. Many fungi accordingly possess specialized mechanical and physiological mechanisms, as well as spore-surface structures such as hydrophobins, for spore ejection. These mechanisms include, for example, forcible discharge of ascospores enabled by the structure of the ascus and the accumulation of osmolytes in the fluids of the ascus, which leads to explosive discharge of the ascospores into the air.
The forcible discharge of single spores termed ballistospores involves formation of a small drop of water (Buller's drop), which upon contact with the spore leads to its projectile release with an initial acceleration of more than 10,000 g. Other fungi rely on alternative mechanisms for spore release, such as external mechanical forces, exemplified by puffballs. Attracting insects, such as flies, to fruiting structures, by virtue of their having lively colours and a putrid odour, for dispersal of fungal spores is yet another strategy, most prominently used by the stinkhorns.
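To put the quoted acceleration in perspective, a back-of-the-envelope kinematic estimate is possible; the ~5 μm launch distance below is an assumed, illustrative figure (roughly a spore length), not a value from the source:

    import math

    g = 9.81          # m/s^2, standard gravity
    a = 10_000 * g    # initial acceleration quoted above
    s = 5e-6          # assumed launch distance of ~5 micrometres (illustrative)

    # Constant-acceleration kinematics from rest: v^2 = 2*a*s
    v = math.sqrt(2 * a * s)
    print(f"launch speed ~ {v:.2f} m/s")  # ~1 m/s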
In the common smoothcap moss (Atrichum undulatum), the vibration of the sporophyte has been shown to be an important mechanism for spore release.
In the case of spore-shedding vascular plants such as ferns, wind distribution of very light spores provides great capacity for dispersal. Also, spores are less subject to animal predation than seeds because they contain almost no food reserve; however they are more subject to fungal and bacterial predation. Their chief advantage is that, of all forms of progeny, spores require the least energy and materials to produce.
In the spikemoss Selaginella lepidophylla, dispersal is achieved in part by an unusual type of diaspore, a tumbleweed.
Origin
Spores have been found in microfossils dating back to the mid-late Ordovician period. Two hypothesized initial functions of spores relate to whether they appeared before or after land plants. The more heavily studied hypothesis is that spores were an adaptation of early land plants, such as embryophytes, that allowed plants to disperse easily while adapting to a non-aquatic environment. This is particularly supported by the observation of a thick spore wall in cryptospores; such walls would have protected potential offspring from novel weather elements. The second, more recent hypothesis is that spores were an early predecessor of land plants and formed during errors in the meiosis of algae, a hypothesized early ancestor of land plants.
Whether spores arose before or after land plants, their contributions to fields such as paleontology and plant phylogenetics have been useful. The spores found in microfossils, also known as cryptospores, are well preserved owing to the material in which they are fixed and to how abundant and widespread they were during their respective time periods. These microfossils are especially helpful when studying the early periods of Earth, as macrofossils such as plants are neither common nor well preserved from those times. Both cryptospores and modern spores have diverse morphologies that indicate possible environmental conditions of earlier periods of Earth and evolutionary relationships among plant species.
| Biology and health sciences | Biological reproduction | null |
51474 | https://en.wikipedia.org/wiki/Seat%20belt | Seat belt | A seat belt, also known as a safety belt or spelled seatbelt, is a vehicle safety device designed to secure the driver or a passenger of a vehicle against harmful movement that may result during a collision or a sudden stop. A seat belt reduces the likelihood of death or serious injury in a traffic collision by reducing the force of secondary impacts with interior strike hazards, by keeping occupants positioned correctly for maximum effectiveness of the airbag (if equipped), and by preventing occupants being ejected from the vehicle in a crash or if the vehicle rolls over.
When in motion, the driver and passengers are traveling at the same speed as the vehicle. If the vehicle suddenly stops or crashes, the occupants continue at the same speed the vehicle was going before it stopped.
A seat belt applies an opposing force to the driver and passengers to prevent them from falling out or making contact with the interior of the car (especially preventing contact with, or going through, the windshield). Seat belts are considered primary restraint systems (PRSs), because of their vital role in occupant safety.
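The restraining force involved can be estimated with the impulse-momentum relation F = m·Δv/Δt; the occupant mass, speed, and stopping time below are assumed, illustrative values, not figures from the source:

    # Illustrative (assumed) numbers: a 75 kg occupant in a 50 km/h crash
    # brought to rest by the belt over 0.1 s.
    mass = 75.0        # kg
    v0 = 50 / 3.6      # 50 km/h converted to m/s
    stop_time = 0.1    # s, assumed duration of the restrained deceleration

    avg_force = mass * v0 / stop_time  # F = m * dv / dt
    print(f"average restraining force ~ {avg_force / 1000:.1f} kN")  # ~10.4 kN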
Effectiveness
An analysis conducted in the United States in 1984 compared a variety of seat belt types alone and in combination with air bags. The range of fatality reduction for front seat passengers was broad, from 20% to 55%, as was the range of major injury reduction, from 25% to 60%. More recently, the Centers for Disease Control and Prevention has summarized these data by stating that "seat belts reduce serious crash-related injuries and deaths by about half." Most seat belt malfunctions result from too much slack in the belt at the time of the accident.
It has been suggested that although seat belt usage reduces the probability of death in any given accident, mandatory seat belt laws have little or no effect on the overall number of traffic fatalities because seat belt usage also disincentivizes safe driving behaviors, thereby increasing the total number of accidents. This idea, known as compensating-behavior theory, is not supported by the evidence.
In the case of vehicle rollover in a U.S. passenger car or SUV, data for 1994 and 2014 show that wearing a seat belt reduced the risk of fatalities and incapacitating injuries and increased the probability of escaping without injury, compared with unrestrained occupants.
History
Seat belts were invented by English engineer George Cayley, to use on his glider, in the mid-19th century.
In 1946, C. Hunter Shelden opened a neurological practice at Huntington Memorial Hospital in Pasadena, California. In the early 1950s, Shelden made a major contribution to the automotive industry with his idea of retractable seat belts. This came about from his care of the high number of head injuries coming through the emergency room. He investigated the early seat belts with primitive designs that were implicated in these injuries and deaths.
Nash was the first American car manufacturer to offer seat belts as a factory option, in its 1949 models. They were installed in 40,000 cars, but buyers did not want them and requested that dealers remove them. The feature was "met with insurmountable sales resistance" and Nash reported that after one year "only 1,000 had been used" by customers.
Ford offered seat belts as an option in 1955. These were not popular, with only 2% of Ford buyers choosing to pay for seat belts in 1956.
To reduce the high level of injuries Shelden was seeing, he proposed, in late 1955, retractable seat belts, recessed steering wheels, reinforced roofs, roll bars, automatic door locks, and passive restraints such as air bags be made mandatory.
Glenn W. Sheren, of Mason, Michigan, submitted a patent application on March 31, 1955, for an automotive seat belt; the patent was awarded in 1958. This was a continuation of an earlier patent application that Sheren had filed on September 22, 1952.
The first modern three-point seat belt (the so-called CIR-Griswold restraint) commonly used in consumer vehicles was patented in 1955 by the Americans Roger W. Griswold and Hugh DeHaven.
Saab introduced seat belts as standard equipment in 1958. After the Saab GT 750 was introduced at the New York Motor Show in 1958 with safety belts fitted as standard, the practice became commonplace.
Vattenfall, the Swedish national electric utility, did a study of all fatal, on-the-job accidents among their employees. The study revealed that the majority of fatalities occurred while the employees were on the road on company business. In response, two Vattenfall safety engineers, Bengt Odelgard and Per-Olof Weman, started to develop a seat belt. Their work was presented to Swedish manufacturer Volvo in the late 1950s and set the standard for seat belts in Swedish cars. The three-point seat belt was developed to its modern form by Swedish inventor Nils Bohlin for Volvo, which introduced it in 1959 as standard equipment. In addition to designing an effective three-point belt, Bohlin demonstrated its effectiveness in a study of 28,000 accidents in Sweden. Unbelted occupants sustained fatal injuries throughout the whole speed scale, whereas none of the belted occupants was fatally injured at accident speeds below 60 mph, and no belted occupant was fatally injured if the passenger compartment remained intact. Bohlin was granted a patent for the device.
Subsequently, in 1966, Congress passed the National Traffic and Motor Vehicle Safety Act, requiring all automobiles to comply with certain safety standards.
The first compulsory seat belt law was put in place in 1970, in the state of Victoria, Australia, requiring their use by drivers and front-seat passengers. This legislation was enacted after trialing Hemco seat belts, designed by Desmond Hemphill (1926–2001), in the front seats of police vehicles, lowering the incidence of officer injury and death. Mandatory seat belt laws in the United States began to be introduced in the 1980s and faced opposition, with some consumers going to court to challenge the laws. Some cut seat belts out of their cars.
Material
The 'belt' part of the typical seatbelt seen in vehicles worldwide is referred to as the 'webbing'. Modern seat belt webbing has a high tensile strength, about 3,000–6,000 lbf, to resist tearing at high loads, such as during high-speed collisions or while restraining larger passengers.
While nylon was used in some early seat belts (and is still used for lap belts), it was replaced by 100% polyester, which offers better UV resistance, lower extensibility, and higher stiffness. Nylon also stretches much more than polyester and is prone to wear, with tiny abrasions drastically reducing its tensile strength and hence the reliability of one of the most important safety devices in a vehicle. Seat belts are commonly 46 or 48 mm wide with a 2/2 herringbone twill weave to maximize thread density. Modern seatbelt weaves also feature snag-proof selvedges reinforced with strong polyester threads to resist wear while remaining flexible. The weave features about 300 warp threads for every 46 mm of webbing width, leading to around 150 ends per inch of webbing.
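For readers working in SI units, the quoted webbing strength converts as follows (a plain unit conversion, with no additional data assumed):

    # Convert the quoted webbing tensile strength from pounds-force to kN.
    LBF_TO_N = 4.44822  # newtons per pound-force
    for lbf in (3000, 6000):
        print(f"{lbf} lbf ~ {lbf * LBF_TO_N / 1000:.1f} kN")
    # -> 13.3 kN and 26.7 kN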
Accident investigators often examine the webbing of a seatbelt to determine if an occupant of a vehicle was wearing their seatbelt during a collision. The material of the webbing may contain traces of the occupant's clothing. Certain materials such as nylons may become permanently affixed or melted onto the fabric as a result of heat produced by friction, whereas fiber based clothing leaves no remains on modern webbing.
Types
Two-point
A two-point belt attaches at its two endpoints. A simple strap was first used March 12, 1910, by pilot Benjamin Foulois, a pioneering aviator with the Aeronautical Division, U.S. Signal Corps, so he might remain at the controls during turbulence.
The Irvin Air Chute Company made the seat belt for use by professional race car driver Barney Oldfield when his team decided the daredevil should have a "safety harness" for the 1923 Indianapolis 500.
Lap
A lap belt is a strap that goes over the waist. This was the most common type of belt prior to legislation requiring three-point belts and is found in older cars. Coaches are equipped with lap belts (although many newer coaches have three-point belts), as are passenger aircraft seats.
University of Minnesota professor James J. (Crash) Ryan was the inventor of, and held the patent for, the automatic retractable lap safety belt. Ralph Nader cited Ryan's work in Unsafe at Any Speed and, following hearings led by Senator Abraham Ribicoff, President Lyndon Johnson signed two bills in 1966 requiring safety belts in all passenger vehicles starting in 1968.
Until the 1980s, three-point belts were commonly available only in the front outboard seats of cars; the back seats were often only fitted with lap belts. Evidence of the potential of lap belts to cause separation of the lumbar vertebrae and the sometimes-associated paralysis, or "seat belt syndrome" led to the progressive revision of passenger safety regulations in nearly all developed countries to require three-point belts, first in all outboard seating positions, and eventually in all seating positions in passenger vehicles. Since September 1, 2007, all new cars sold in the U.S. require a lap and shoulder belt in the center rear seat. In addition to regulatory changes, "seat belt syndrome" has led to a liability for vehicle manufacturers. One Los Angeles case resulted in a $45 million jury verdict against Ford; the resulting $30 million judgment (after deductions for another defendant who settled prior to trial) was affirmed on appeal in 2006.
While lap belts are exceedingly rare in modern cars, they remain the standard in commercial airliners. The lift-lever style of commercial aircraft buckle allows the seatbelt to be clasped and unclasped easily, accessed quickly in an emergency evacuation, and fulfills the minimum safety requirements set by the FAA while remaining low-cost to produce. Furthermore, in a collision, a passenger in economy class has only around 9 inches for their head to travel forward, so restraining the torso and head is relatively unnecessary, as the head has little room to accelerate before impact.
Sash
A "sash" or shoulder harness is a strap that goes diagonally over the vehicle occupant's outboard shoulder and is buckled inboard of their lap. The shoulder harness may attach to the lap belt tongue, or it may have a tongue and buckle completely separate from those of the lap belt. Shoulder harnesses of this separate or semi-separate type were installed in conjunction with lap belts in the outboard front seating positions of many vehicles in the North American market starting at the inception of the shoulder belt requirement of the U.S. National Highway Traffic Safety Administration's (NHTSA) Federal Motor Vehicle Safety Standard 208 on January 1, 1968. However, if the shoulder strap is used without the lap belt, the vehicle occupant is likely to "submarine", or slide forward in the seat and out from under the belt, in a frontal collision. In the mid-1970s, three-point belt systems such as Chrysler's "Uni-Belt" began to supplant the separate lap and shoulder belts in American-made cars, though such three-point belts had already been supplied in European vehicles such as Volvo, Mercedes-Benz, and Saab for some years.
Three-point
A three-point belt is a Y-shaped arrangement, similar to the separate lap and sash belts, but unified. Like the separate lap-and-sash belt, in a collision, the three-point belt spreads out the energy of the moving body over the chest, pelvis, and shoulders. Volvo introduced the first production three-point belt in 1959. The first car with a three-point belt was a Volvo PV 544 that was delivered to a dealer in Kristianstad on August 13, 1959. The first car model to have the three-point seat belt as a standard item was the 1959 Volvo 122, first outfitted with a two-point belt at initial delivery in 1958, replaced with the three-point seat belt the following year. The three-point belt was developed by Nils Bohlin, who had earlier also worked on ejection seats at Saab. Volvo then made the new seat belt design patent open in the interest of safety and made it available to other car manufacturers for free.
Belt-in-Seat
The Belt-in-Seat (BIS) is a three-point harness with the shoulder belt attached to the seat itself, rather than to the vehicle structure. The first car using this system was the Range Rover Classic, which offered BIS as standard on the front seats from 1970. Some cars like the Renault Vel Satis use this system for the front seats. A General Motors assessment concluded seat-mounted three-point belts offer better protection especially to smaller vehicle occupants, though GM did not find a safety performance improvement in vehicles with seat-mounted belts versus belts mounted to the vehicle body.
Belt-in-Seat type belts have been used by automakers in convertibles and pillarless hardtops, where there is no "B" pillar to affix the upper mount of the belt. Chrysler and Cadillac are well known for using this design. Antique auto enthusiasts sometimes replace original seats in their cars with BIS-equipped front seats, providing a measure of safety not available when these cars were new. However, modern BIS systems typically use electronics that must be installed and connected with the seats and the vehicle's electrical system in order to function properly.
4-, 5-, and 6-point
Five-point harnesses are typically found in child safety seats and in racing cars. The lap portion is connected to a belt between the legs, and there are two shoulder belts, making a total of five points of attachment to the seat. A 4-point harness is similar but lacks the strap between the legs, while a 6-point harness has two belts between the legs. In NASCAR, the 6-point harness became popular after the death of Dale Earnhardt, who was wearing a five-point harness when he suffered his fatal crash; because it was initially thought that his belt had broken and that his neck was broken at impact, some teams ordered six-point harnesses in response.
Seven-point
Aerobatic aircraft frequently use a combination harness consisting of a five-point harness with a redundant lap belt attached to a different part of the aircraft. While providing redundancy for negative-g maneuvers (which lift the pilot out of the seat), they also require the pilot to unlatch two harnesses if it is necessary to parachute from a failed aircraft.
Technology
Locking retractors
The purpose of locking retractors (sometimes called ELR belts, for "Emergency Locking Retractors") is to give the seated occupant the convenience of some free movement of the upper torso within the compartment while providing a way to limit that movement in the event of a crash. Starting in 1996, all passenger vehicles were required to have belts that lock pre-crash, by means of a locking mechanism in the retractor or in the latch plate. Seat belts are stowed on spring-loaded reels called "retractors" equipped with inertial locking mechanisms that stop the belt from extending off the reel during severe deceleration.
There are two main types of inertial seat belt locks. A webbing-sensitive lock is based on a centrifugal clutch activated by the rapid acceleration of the strap (webbing) from the reel. The belt can be pulled from the reel only slowly and gradually, as when the occupant extends the belt to fasten it. A sudden rapid pull of the belt—as in a sudden braking or collision event—causes the reel to lock, restraining the occupant in position. The first automatic locking retractor for seat belts and shoulder harnesses in the U.S. was the Irving "Dynalock" safety device. These "Auto-lock" front lap belts were optional on AMC cars with bucket seats in 1967.
A vehicle-sensitive lock is based on a pendulum swung away from its plumb position by rapid deceleration or rollover of the vehicle. In the absence of rapid deceleration or rollover, the reel is unlocked and the belt strap may be pulled from the reel against the spring tension of the reel. The vehicle occupant can move around with relative freedom while the spring tension of the reel keeps the belt taut against the occupant. When the pendulum swings away from its normal plumb position due to sudden deceleration or rollover, a pawl is engaged, the reel locks and the strap restrains the belted occupant in position. Dual-sensing locking retractors use both vehicle G-loading and webbing payout rate to initiate the locking mechanism.
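A minimal Python sketch of the dual-sensing behaviour described above; the threshold values are invented placeholders for illustration, not real engineering figures:

    def retractor_locked(payout_rate, vehicle_decel, pendulum_tilt_deg):
        # Webbing-sensitive trigger: the strap leaves the reel too fast.
        WEBBING_RATE_LIMIT = 0.3    # m/s, assumed placeholder
        # Vehicle-sensitive trigger: deceleration or tilt moves the pendulum.
        DECEL_LIMIT = 0.45 * 9.81   # m/s^2, assumed placeholder
        TILT_LIMIT = 15.0           # degrees from plumb, assumed placeholder

        webbing_trigger = payout_rate > WEBBING_RATE_LIMIT
        vehicle_trigger = (vehicle_decel > DECEL_LIMIT
                           or abs(pendulum_tilt_deg) > TILT_LIMIT)
        # A dual-sensing retractor locks on either trigger.
        return webbing_trigger or vehicle_trigger

    print(retractor_locked(0.05, 1.0, 2.0))   # gentle pull while cruising -> False
    print(retractor_locked(1.2, 8.0, 25.0))   # crash-like event -> True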
Pretensioners and webclamps
Seat belts in many newer vehicles are also equipped with "pretensioners" or "web clamps", or both.
Pretensioners preemptively tighten the belt to prevent the occupant from jerking forward in a crash. Mercedes-Benz first introduced pretensioners on the 1981 S-Class. In the event of a crash, a pretensioner will tighten the belt almost instantaneously. This reduces the motion of the occupant in a violent crash. Like airbags, pretensioners are triggered by sensors in the car's body, and many pretensioners have used explosively expanding gas to drive a piston that retracts the belt. Pretensioners also lower the risk of "submarining", which occurs when a passenger slides forward under a loosely fitted seat belt.
Some systems also pre-emptively tighten the belt during fast accelerations and strong decelerations, even if no crash has happened. This has the advantage that it may help prevent the driver from sliding out of position during violent evasive maneuvers, which could cause loss of control of the vehicle. These pre-emptive safety systems may prevent some collisions from happening, as well as reduce injuries in the event an actual collision occurs. Pre-emptive systems generally use electric pretensioners, which can operate repeatedly and for a sustained period, rather than pyrotechnic pretensioners, which can only operate a single time.
Webclamps stop the webbing in the event of an accident and limit the distance the webbing can spool out (caused by the unused webbing tightening on the central drum of the mechanism). These belts also often incorporate an energy management loop ("rip stitching") in which a section of the webbing is looped and stitched with special stitching. The function of this is to "rip" at a predetermined load, which reduces the maximum force transmitted through the belt to the occupant during a violent collision, reducing injuries to the occupant.
A study demonstrated that standard automotive three-point restraints fitted with pyrotechnic or electric pretensioners were not able to eliminate all interior passenger compartment head strikes in rollover test conditions. Electric pretensioners are often incorporated on vehicles equipped with precrash systems; they are designed to reduce seat belt slack in a potential collision and assist in placing the occupants in a more optimal seating position. The electric pretensioners also can operate on a repeated or sustained basis, providing better protection in the event of an extended rollover or a multiple collision accident.
Inflatable
The inflatable seat belt was invented by Donald Lewis and tested at the Automotive Products Division of Allied Chemical Corporation. Inflatable seat belts have tubular inflatable bladders contained within an outer cover. When a crash occurs, the bladder inflates with gas, increasing the area of the restraint contacting the occupant and shortening the length of the restraint to tighten the belt around the occupant, improving protection. The inflatable sections may be shoulder-only or lap and shoulder. The system supports the head during the crash better than a web-only belt, and it also provides side impact protection. In 2013, Ford began offering rear-seat inflatable seat belts on a limited set of models, such as the Explorer and Flex.
Automatic
Seat belts that automatically move into position around a vehicle occupant once the adjacent door is closed and/or the engine is started were developed as a countermeasure against low usage rates of manual seat belts, particularly in the United States. The 1972 Volkswagen ESVW1 Experimental Safety Vehicle presented passive seat belts. Volvo tried to develop a passive three point seat belt. In 1973, Volkswagen announced they had a functional passive seat belt. The first commercial car to use automatic seat belts was the 1975 Volkswagen Golf.
Automatic seat belts received a boost in the United States in 1977 when Brock Adams, United States Secretary of Transportation in the Carter Administration, mandated that by 1983 every new car should have either airbags or automatic seat belts. There was strong lobbying against the passive restraint requirement by the auto industry. Adams was criticized by Ralph Nader, who said that the 1983 deadline was too late. The Volkswagen Rabbit also had automatic seat belts, and VW said that by early 1978, 90,000 cars had sold with them.
General Motors introduced a three-point non-motorized passive belt system in 1980 to comply with the passive restraint requirement. However, many occupants used it as an active lap-shoulder belt, unlatching the belt to exit the vehicle. Despite this common practice, field studies of belt use still showed an increase in wearing rates with this door-mounted system. General Motors began offering automatic seat belts on the Chevrolet Chevette, but the company reported disappointing sales because of this feature. For the 1981 model year, the new Toyota Cressida became the first car to offer motorized automatic passive seat belts.
A study released in 1978 by the United States Department of Transportation said that cars with automatic seat belts had a fatality rate of 0.78 per 100 million miles, compared with 2.34 for cars with regular, manual belts.
In 1981, Drew Lewis, the first Transportation Secretary of the Reagan Administration, influenced by studies done by the auto industry, dropped the mandate; the decision was overruled in a federal appeals court the following year, and then by the Supreme Court. In 1984, the Reagan Administration reversed its course, though in the meantime the original deadline had been extended; Elizabeth Dole, then Transportation Secretary, proposed that the two passive safety restraints be phased into vehicles gradually, from vehicle model year 1987 to vehicle model year 1990, when all vehicles would be required to have either automatic seat belts or driver side air bags. Though more awkward for vehicle occupants, most manufacturers opted to use less expensive automatic belts rather than airbags during this time period.
When driver side airbags became mandatory on all passenger vehicles in model year 1995, most manufacturers stopped equipping cars with automatic seat belts. Exceptions include the 1995–96 Ford Escort/Mercury Tracer and the Eagle Summit Wagon, which had automatic safety belts along with dual airbags.
Systems
Manual lap belt with automatic motorized shoulder belt: When the door is opened, the shoulder belt moves from a fixed point near the seat back on a track mounted in the door frame of the car to a point at the other end of the track near the windshield. Once the door is closed and the car is started, the belt moves rearward along the track to its original position, thus securing the passenger. The lap belt must be fastened manually.
Manual lap belt with automatic non-motorized shoulder belt: This system was used in American-market vehicles such as the Hyundai Excel and Volkswagen Jetta. The shoulder belt is fixed to the aft upper corner of the vehicle door and is not motorized. The lap belt must be fastened manually.
Automatic shoulder and lap belts: This system was mainly used in General Motors vehicles, though it was also used on some Honda Civic hatchbacks and Nissan Sentra coupes. When the door is opened, the belts go from a fixed point in the middle of the car by the floor to the retractors on the door. Passengers must slide into the car under the belts. When the door closes, the seat belt retracts into the door. The belts have normal release buttons that are supposed to be used only in an emergency, but in practice are routinely used in the same manner as manual seat belt clasps. This system also found use by American Specialty Cars when they created the 1991-1994 convertible special edition of the Nissan 240SX, a car that traditionally had a motorized shoulder belt.
Disadvantages
Automatic belt systems generally offer inferior occupant crash protection. In systems with belts attached to the door rather than a sturdier fixed portion of the vehicle body, a crash that causes the vehicle door to open leaves the occupant without belt protection. In such a scenario, the occupant may be thrown from the vehicle and suffer greater injury or death.
Many automatic belt system designs compliant with the U.S. passive-restraint mandate did not meet the anchorage requirements of Canada (CMVSS 210), which were not weakened to accommodate automatic belts. As a result, vehicle models that had been eligible for easy importation in either direction across the U.S.-Canada border when equipped with manual belts became ineligible once the U.S. variants obtained automatic belts while the Canadian versions retained manual belts, although some Canadian versions also had automatic seat belts. Two particular models affected were the Dodge Spirit and Plymouth Acclaim.
Automatic belt systems also present several operational disadvantages. Motorists who would normally wear seat belts must still fasten the manual lap belt, thus rendering redundant the automation of the shoulder belt. Those who do not fasten the lap belt wind up inadequately protected only by the shoulder belt. In a crash, without a lap belt, such a vehicle occupant is likely to "submarine" (be thrown forward under the shoulder belt) and be seriously injured. Motorized or door-affixed shoulder belts hinder access to the vehicle, making it difficult to enter and exit—particularly if the occupant is carrying items such as a box or a purse. Vehicle owners tend to disconnect the motorized or door-affixed shoulder belt to relieve the nuisance when entering and exiting the vehicle, leaving only a lap belt for crash protection. Also, many automatic seat belt systems are incompatible with child safety seats, or only compatible with special modifications.
Homologation and testing
Between 1971 and 1972, the United States conducted a research project on seat belt effectiveness covering a total of 40,000 vehicle occupants, using car accident reports collected during that time. Of these 40,000 occupants, 18% were reported wearing lap belts (two-point safety belts), 2% were reported wearing a three-point safety belt, and the remaining 80% were reported as wearing no safety belt. The results showed that users of the two-point lap belt had a 73% lower fatality rate, a 53% lower serious injury rate, and a 38% lower injury rate than the occupants reported as unrestrained. Similarly, users of the three-point safety belt had a 60% lower serious injury rate and a 41% lower rate of all other injuries; among the 2% wearing a three-point safety belt, no fatalities were reported.
This study and others led to the Restraint Systems Evaluation Program (RSEP), started by the NHTSA in 1975 to increase the reliability and authenticity of past studies. A study as part of this program used data taken from 15,000 tow-away accidents that involved only car models made between 1973 and 1975. The study found that for injuries considered “moderate” or worse, individuals wearing a three-point safety belt had a 56.5% lower injury rate than those wearing no safety belt. The study also concluded that the effectiveness of the safety belt did not differ with the size of a car. It was determined that the variation among results of the many studies conducted in the 1960s and 70s was due to the use of different methodologies, and could not be attributed to any significant variation in the effectiveness of safety belts.
Wayne State University's Automotive Safety Research Group, as well as other researchers, are testing ways to improve seat belt effectiveness and general vehicle safety apparatuses. Wayne State's Bioengineering Center uses human cadavers in their crash test research. The center's director, Albert King, wrote in 1995 that the vehicle safety improvements made possible since 1987 by the use of cadavers in research had saved nearly 8,500 lives each year, and indicated that improvements made to three-point safety belts save an average of 61 lives every year.
The New Car Assessment Program (NCAP) was put in place by the United States National Highway Traffic Safety Administration in 1979. The NCAP is a government program that evaluates vehicle safety designs and sets standards for foreign and domestic automobile companies. The agency developed a rating system and requires access to safety test results; manufacturers are now required to place an NCAP star rating on the automobile price sticker.
In 2004, The European New Car Assessment Program (Euro NCAP), started testing seat belts and whiplash safety on all test cars at the Thatcham Research Centre with crash test dummies.
Experimental
Research and development efforts are ongoing to improve the safety performance of vehicle seat belts. Some experimental designs include:
Criss-cross: Experimental safety belt presented in the Volvo SCC. It forms a cross-brace across the chest.
3+2 Point: Experimental safety belt from Autoliv similar to the criss-cross. The 3+2 improves protection against rollovers and side impacts.
Four point "belt and suspenders": An experimental design from Ford where the "suspenders" are attached to the backrest, not to the frame of the car.
3-point Adjustable: Experimental safety belt from GWR Safety Systems that allowed the car Hiriko, designed by MIT, to fold without compromising the safety and comfort of the occupants.
In rear seats
In 1955 (as a 1956 package), Ford offered lap-only seat belts in the rear seats as an option within the Lifeguard safety package. In 1967, Volvo started to install lap belts in the rear seats. In 1972, Volvo upgraded the rear seat belts to a three-point belt.
In crashes, unbelted rear passengers increase the risk of belted front seat occupants' death by nearly five times.
Child occupants
As with adult drivers and passengers, the advent of seat belts was accompanied by calls for their use by child occupants, including legislation requiring such use. Generally, children using adult seat belts suffer significantly lower injury risk when compared to non-buckled children.
The UK extended compulsory seat belt wearing to child passengers under the age of 14 in 1989. It was observed that this measure was accompanied by a 10% increase in fatalities and a 12% increase in injuries among the target population. In crashes, small children who wear adult seat belts can suffer "seat-belt syndrome" injuries including severed intestines, ruptured diaphragms, and spinal damage. There is also research suggesting that children in inappropriate restraints are at significantly increased risk of head injury. One of the authors of this research said, "The early graduation of kids into adult lap and shoulder belts is a leading cause of child-occupant injuries and deaths."
As a result of such findings, many jurisdictions now advocate or require child passengers to use specially designed child restraints. Such systems include separate child-sized seats with their own restraints and booster cushions for children using adult restraints. In some jurisdictions, children below a certain size are forbidden to travel in front car seats.
Automated reminders and engine start interlocks
In Europe, the U.S., and some other parts of the world, most modern cars include a seat-belt reminder light for the driver and some also include a reminder for the passenger, when present, activated by a pressure sensor under the passenger seat. Some cars will intermittently flash the reminder light and sound the chime until the driver (and sometimes the front passenger, if present) fasten their seat belts.
In 2005, in Sweden, 70% of all cars that were newly registered were equipped with seat belt reminders for the driver. Since November 2014, seat belt reminders are mandatory for the driver's seat on new cars sold in Europe.
Two specifications define the standard of seat belt reminder: UN Regulation 16, Section 8.4 and the Euro NCAP assessment protocol (Euro NCAP, 2013).
European Union seat belt reminder
In the European Union, seat belt reminders are mandatory in all new passenger cars for the driver seat. In 2014, EC Regulation 661/2009 made UN Regulation 16 applicable.
Amendment of UN Regulation 16 made seat belt reminders mandatory in
all front and rear seats of passenger cars and vans,
all front seats of buses and trucks.
This improvement applies from 1 September 2019 for new types of motor vehicles and from 1 September 2021 for all new motor vehicles.
U.S. regulation history
The Federal Motor Vehicle Safety Standard No. 208 (FMVSS 208) was amended by the NHTSA to require a seat belt/starter interlock system to prevent passenger cars from being started with an unbelted front-seat occupant. This mandate applied to passenger cars built after August 1973, i.e., starting with the 1974 model year. The specifications required the system to permit the car to be started only if the belt of an occupied seat were fastened after the occupant sat down, so pre-buckling the belts would not defeat the system.
The interlock systems used logic modules complex enough to require special diagnostic computers, and were not entirely dependable—an override button was provided under the hood of equipped cars, permitting one (but only one) "free" starting attempt each time it was pressed. However, the interlock system spurred severe backlash from an American public who largely rejected seat belts. In 1974, Congress acted to prohibit NHTSA from requiring or permitting a system that prevents a vehicle from starting or operating with an unbelted occupant, or that gives an audible warning of an unfastened belt for more than 8 seconds after the ignition is turned on. This prohibition took effect on 27 October 1974, shortly after the 1975 model year began.
In response to the Congressional action, NHTSA once again amended FMVSS 208, requiring vehicles to come with a seat belt reminder system that gives an audible signal for 4 to 8 seconds and a warning light for at least 60 seconds after the ignition is turned on if the driver's seat belt is not fastened. This is called a seat belt reminder (SBR) system. In the mid-1990s, the Swedish insurance company Folksam worked with Saab and Ford to determine the requirements for the most efficient seat belt reminder. One characteristic of the optimal SBR, according to the research, is that the audible warning becomes increasingly penetrating the longer the seat belt remains unfastened.
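A minimal sketch of the reminder timing FMVSS 208 requires, as described above; this models only the minimum compliant durations (8 s chime, 60 s lamp) and is not drawn from any real vehicle's firmware:

    def sbr_signals(t_since_ignition, driver_belted):
        # FMVSS 208 seat belt reminder, minimum compliant timing as above:
        # an audible signal for 4-8 s (8 s modelled here) and a warning
        # light for at least 60 s after ignition if the driver is unbelted.
        if driver_belted:
            return {"chime": False, "lamp": False}
        return {"chime": t_since_ignition <= 8.0,
                "lamp": t_since_ignition <= 60.0}

    print(sbr_signals(5, driver_belted=False))   # chime and lamp both on
    print(sbr_signals(30, driver_belted=False))  # chime off, lamp still on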
Efficacy
In 2001, Congress directed NHTSA to study the benefits of technology meant to increase the use of seat belts. NHTSA found that seat belt usage had increased to 73% since the initial introduction of the SBR system. In 2002, Ford demonstrated that seat belts were used more in Fords with seat belt reminders than in those without: 76% and 71% respectively. In 2007, Honda conducted a similar study and found that 90% of people who drove Hondas with seat belt reminders used a seat belt, while 84% of people who drove Hondas without seat belt reminders used a seat belt.
In 2003, the Transportation Research Board Committee, chaired by two psychologists, reported that "Enhanced SBRs" (ESBRs) could save an additional 1,000 lives a year. Research by the Insurance Institute for Highway Safety found that Ford's ESBR, which provides an intermittent chime for up to five minutes if the driver is unbelted, sounding for 6 seconds then pausing for 30, increased seat belt use by 5 percentage points. Farmer and Wells found that driver fatality rates were 6% lower for vehicles with ESBR compared with otherwise-identical vehicles without.
Delayed start
Starting with the 2020 model year, some Chevrolet cars refuse to shift from Park to Drive for 20 seconds if the driver is unbuckled and the car is in "teen driver" mode. A similar feature was previously available on some General Motors fleet cars.
Regulation by country
International regulations
Several countries apply UN-ECE vehicle regulations 14 and 16:
UN Regulation No. 14: safety belt anchorages
UN Regulation No. 16:
Safety belts, restraint systems, child restraint systems, and ISOFIX child restraint systems for occupants of power-driven vehicles
Vehicles equipped with safety belts, safety belt reminders, restraint systems, child restraint systems and ISOFIX child restraint systems and i-Size child restraint systems
UN Regulation No. 44: restraining devices for child occupants of power-driven vehicles ("Child Restraint Systems")
UN Regulation No. 129: Enhanced Child Restraint Systems
Local regulations
Legislation
In observational studies of car crash morbidity and mortality, experiments using both crash test dummies and human cadavers indicate that wearing seat belts greatly reduces the risk of death and injury in the majority of car crashes.
This has led many countries to adopt mandatory seat belt wearing laws. It is generally accepted that, in comparing like-for-like accidents, a vehicle occupant not wearing a properly fitted seat belt has a significantly and substantially higher chance of death and serious injury. One large observational study using U.S. data showed that the odds ratio of crash death is 0.46 with a three-point belt when compared with no belt. Another study, which examined injuries presenting to the emergency room before and after seat belt law introduction, found that 40% more people escaped injury and 35% more escaped mild and moderate injuries.
The effects of seat belt laws are disputed by those who observe that their passage did not reduce road fatalities. There has also been concern that instead of legislating for a general protection standard for vehicle occupants, laws that required a particular technical approach would rapidly become dated, as motor manufacturers would tool up for a particular standard that could not easily be changed. For example, in 1969 there were competing designs for lap and three-point seat belts, rapidly tilting seats, and airbags under development. As countries started to mandate seat belt restraints, the global auto industry invested in the tooling, standardized exclusively on seat belts, and ignored other restraint designs, such as airbags, for several decades.
As of 2016, seat belt laws can be divided into two categories: primary and secondary. A primary seat belt law allows an officer to issue a citation for lack of seat belt use without any other citation, whereas a secondary seat belt law allows an officer to issue a seat belt citation only in the presence of a different violation. In the United States, fifteen states enforce secondary laws, while 34 states, as well as the District of Columbia, American Samoa, Guam, the Northern Mariana Islands, Puerto Rico, and the Virgin Islands enforce primary seat belt laws. New Hampshire lacks either a primary or secondary seat belt law.
Risk compensation
Some have proposed that the number of deaths was influenced by risk compensation, the theory that drivers adjust their behavior in response to the increased sense of personal safety wearing a seat belt provides.
In one trial subjects were asked to drive go-karts around a track under various conditions. It was found that subjects who started driving unbelted drove consistently faster when subsequently belted. Similarly, a study of habitual non-seat belt wearers driving in freeway conditions found evidence that they had adapted to use by adopting higher driving speeds and closer following distances.
A 2001 analysis of U.S. crash data aimed to establish the effects of legislation on driving fatalities and found that previous estimates of seat belt effectiveness had been significantly overstated.
According to the analysis, seat belts decreased fatalities by 1.35% for each 10% increase in seat belt use. The study controlled for endogenous motivations of seat belt use, because endogeneity creates an artificial correlation between seat belt use and fatalities that could lead to the spurious conclusion that seat belts cause fatalities. For example, drivers in high-risk areas are more likely to use seat belts and are more likely to be in accidents, creating a non-causal correlation between seat belt use and mortality. After accounting for the endogeneity of seat belt usage, Cohen and Einav found no evidence that the risk compensation effect makes seat belt-wearing drivers more dangerous, a finding at variance with other research.
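A worked example of that estimate; the baseline fatality count and usage figures below are invented for illustration only:

    # Per the analysis above: each 10-percentage-point rise in belt use
    # cuts fatalities by 1.35%. Assumed illustrative baseline: 40,000
    # deaths/year, with belt use rising from 50% to 70%.
    baseline_deaths = 40_000
    usage_rise = 70 - 50                    # percentage points
    reduction = 0.0135 * (usage_rise / 10)  # 1.35% per 10-point rise
    print(f"expected deaths: {baseline_deaths * (1 - reduction):,.0f}")
    # -> about 38,920, a ~2.7% reduction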
Increased traffic
Other statistical analyses have included adjustments for factors such as increased traffic and age, and with these adjustments they still find a reduction in morbidity and mortality due to seat belt use. However, Smeed's law predicts a fall in accident rate with increasing car ownership and has been demonstrated independently of seat belt legislation.
Mass transit considerations
Buses
School buses
In the U.S., six states—California, Florida, Louisiana, New Jersey, New York, and Texas—require seat belts on school buses.
Pros and cons have been debated about the use of seat belts in school buses. School buses, which are much bigger than the average vehicle, allow for the mass transportation of students from place to place. The American School Bus Council states in a brief article: "The children are protected like eggs in an egg carton—compartmentalized, and surrounded with padding and structural integrity to secure the entire container." Although school buses are considered safe for the mass transit of students, this does not guarantee that students will be injury-free if an impact occurs. Seat belts in buses are sometimes believed to make recovering from a roll or tip harder for passengers, who could be trapped by their own safety belts.
In 2015, for the first time, NHTSA endorsed seat belts on school buses.
Motor coaches
In the European Union, all new long-distance buses and coaches must be fitted with seat belts.
Australia has required lap/sash seat belts in new coaches since 1994. These must comply with Australian Design Rule 68, which requires the seat belt, seat and seat anchorage to withstand 20g deceleration and an impact by an unrestrained occupant to the rear.
In the United States, NHTSA now requires lap-shoulder seat belts in new "over-the-road" buses (which include most coaches), starting in 2016.
Trains
The use of seat belts in trains has been investigated. Concerns about survival space intrusion in train crashes and increased injuries to unrestrained or incorrectly restrained passengers led researchers to discourage the use of seat belts in trains.
"It has been shown that there is no net safety benefit for passengers who choose to wear 3-point restraints on passenger-carrying rail vehicles. Generally, passengers who choose not to wear restraints in a vehicle modified to accept 3-point restraints receive marginally more severe injuries."
Airplanes
All aerobatic aircraft and gliders (sailplanes) are fitted with four or five-point harnesses, as are many types of light aircraft and many types of military aircraft. The seat belts in these aircraft have the dual function of crash protection and keeping the pilot(s) and crew in their seat(s) during turbulence and aerobatic maneuvers. Airliners are fitted with lap belts. Unlike road vehicles, airliner seat belts are not primarily designed for crash protection. Their main purpose is to keep passengers in their seats during events such as turbulence. Many civil aviation authorities require a "fasten seat belt" sign in the cabin that can be activated by a pilot during taxiing, takeoff, turbulence, and landing. The International Civil Aviation Organization recommends the use of child restraints. Some airline authorities, including the UK Civil Aviation Authority (CAA), permit the use of airline infant lap belts (sometimes known as an infant loop or belly belt) to secure an infant under age two sitting on an adult's lap.
| Technology | Basics_7 | null |
51480 | https://en.wikipedia.org/wiki/Hypha | Hypha | A hypha (; : hyphae) is a long, branching, filamentous structure of a fungus, oomycete, or actinobacterium. In most fungi, hyphae are the main mode of vegetative growth, and are collectively called a mycelium.
Structure
A hypha consists of one or more cells surrounded by a tubular cell wall. In most fungi, hyphae are divided into cells by internal cross-walls called "septa" (singular septum). Septa are usually perforated by pores large enough for ribosomes, mitochondria, and sometimes nuclei to flow between cells. The major structural polymer in fungal cell walls is typically chitin, in contrast to plants and oomycetes that have cellulosic cell walls. Some fungi have aseptate hyphae, meaning their hyphae are not partitioned by septa.
Hyphae have an average diameter of 4–6 μm.
Growth
Hyphae grow at their tips. During tip growth, cell walls are extended by the external assembly and polymerization of cell wall components, and the internal production of new cell membrane. The Spitzenkörper is an intracellular organelle associated with tip growth. It is composed of an aggregation of membrane-bound vesicles containing cell wall components. The Spitzenkörper is part of the endomembrane system of fungi, holding and releasing vesicles it receives from the Golgi apparatus. These vesicles travel to the cell membrane via the cytoskeleton and release their contents (including various cysteine-rich proteins including cerato-platanins and hydrophobins) outside the cell by the process of exocytosis, where they can then be transported to where they are needed. Vesicle membranes contribute to growth of the cell membrane while their contents form new cell wall. The Spitzenkörper moves along the apex of the hyphal strand and generates apical growth and branching; the apical growth rate of the hyphal strand parallels and is regulated by the movement of the Spitzenkörper.
As a hypha extends, septa may be formed behind the growing tip to partition each hypha into individual cells. Hyphae can branch through the bifurcation of a growing tip, or by the emergence of a new tip from an established hypha.
Behaviour
The direction of hyphal growth can be controlled by environmental stimuli, such as the application of an electric field. Hyphae can also sense reproductive units from some distance, and grow towards them. Hyphae can weave through a permeable surface to penetrate it.
Modifications
Hyphae may be modified in many different ways to serve specific functions. Some parasitic fungi form haustoria that function in absorption within the host cells. The arbuscules of mutualistic mycorrhizal fungi serve a similar function in nutrient exchange, so are important in assisting nutrient and water absorption by plants. Ectomycorrhizal extramatrical mycelium greatly increases the soil area available for exploitation by plant hosts by funneling water and nutrients to ectomycorrhizas, complex fungal organs on the tips of plant roots. Hyphae are found enveloping the gonidia in lichens, making up a large part of their structure. In nematode-trapping fungi, hyphae may be modified into trapping structures such as constricting rings and adhesive nets. Mycelial cords can be formed to transfer nutrients over larger distances. Bulk fungal tissues, cords, and membranes, such as those of mushrooms and lichens, are mainly composed of felted and often anastomosed hyphae.
Types
Classification based on cell division
Septate (with septa)
Aspergillus and many other species have septate hyphae.
Aseptate (non-septate) or coenocytic (without septa)
Non-septate hyphae are associated with Mucor, some zygomycetes, and other fungi.
Pseudohyphae are distinguished from true hyphae by their method of growth, relative frailty and lack of cytoplasmic connection between the cells.
Yeasts form pseudohyphae. They are the result of a sort of incomplete budding where the cells elongate but remain attached after division. Some yeasts can also form true septate hyphae.
Classification based on cell wall and overall form
Characteristics of hyphae can be important in fungal classification. In basidiomycete taxonomy, hyphae that comprise the fruiting body can be identified as generative, skeletal, or binding hyphae.
Generative hyphae are relatively undifferentiated and can develop reproductive structures. They are typically thin-walled, occasionally developing slightly thickened walls, usually have frequent septa, and may or may not have clamp connections. They may be embedded in mucilage or gelatinized materials.
Skeletal hyphae are of two basic types. The classical form is thick-walled and very long in comparison to the frequently septate generative hyphae; it is unbranched or rarely branched, with little cell content, few septa, and no clamp connections. Fusiform skeletal hyphae are the second form. Unlike typical skeletal hyphae, these are swollen centrally and often exceedingly broad, giving the hypha a fusiform shape.
Binding hyphae are thick-walled and frequently branched. They often resemble deer antlers or defoliated trees because of their many tapering branches.
Based on the generative, skeletal and binding hyphal types, in 1932 E. J. H. Corner applied the terms monomitic, dimitic, and trimitic to hyphal systems, in order to improve the classification of polypores.
Every fungus must contain generative hyphae. A fungus which only contains this type, as do fleshy mushrooms such as agarics, is referred to as monomitic.
If a fungus contains generative hyphae (which, as noted above, are always present) and just one of the other two types (either skeletal or binding hyphae), it is called dimitic. In practice, dimitic fungi almost always contain generative and skeletal hyphae; there is one exceptional genus, Laetiporus, that includes only generative and binding hyphae.
Skeletal and binding hyphae give leathery and woody fungi such as polypores their tough consistency. If a fungus contains all three types (example: Trametes), it is called trimitic.
Fungi that form fusiform skeletal hyphae bound by generative hyphae are said to have sarcodimitic hyphal systems. A few fungi form fusiform skeletal hyphae, generative hyphae, and binding hyphae, and these are said to have sarcotrimitic hyphal systems. These terms were introduced as a later refinement by E. J. H. Corner in 1966.
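For readers who find decision rules easier to follow in code, the terminology above reduces to simple set membership. The following toy Python sketch (an illustration of the classification described here, not standard mycological software) encodes it:

```python
# Toy encoding of Corner's hyphal-system terminology (1932, refined 1966)
# as described above. Input: the set of hyphal types in a fruiting body.

def hyphal_system(types: set[str]) -> str:
    """Return the hyphal-system term for a set of hyphal types."""
    if "fusiform skeletal" in types:
        # Corner's 1966 refinement for fusiform skeletal hyphae
        return "sarcotrimitic" if "binding" in types else "sarcodimitic"
    if types == {"generative"}:
        return "monomitic"          # e.g. fleshy mushrooms such as agarics
    if types == {"generative", "skeletal", "binding"}:
        return "trimitic"           # e.g. Trametes
    if len(types) == 2 and "generative" in types:
        return "dimitic"            # generative plus skeletal OR binding
    raise ValueError("generative hyphae must always be present")

print(hyphal_system({"generative", "skeletal"}))  # -> dimitic
```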
Classification based on refractive appearance
Hyphae are described as "gloeoplerous" ("gloeohyphae") if their high refractive index gives them an oily or granular appearance under the microscope. These cells may be yellowish or clear (hyaline). They can sometimes selectively be coloured by sulphovanillin or other reagents. The specialized cells termed cystidia can also be gloeoplerous.
Classification based on growth location
Hyphae may be categorized as 'vegetative' or 'aerial.' Aerial hyphae of fungi produce asexual reproductive spores.
| Biology and health sciences | Fungus | null |
51483 | https://en.wikipedia.org/wiki/Mutualism%20%28biology%29 | Mutualism (biology) | Mutualism describes the ecological interaction between two or more species where each species has a net benefit. Mutualism is a common type of ecological interaction. Prominent examples are:
the nutrient exchange between vascular plants and mycorrhizal fungi,
the fertilization of flowering plants by pollinators,
the ways plants use fruits and edible seeds to encourage animal aid in seed dispersal, and
the way corals become photosynthetic with the help of the microorganism zooxanthellae.
Mutualism can be contrasted with interspecific competition, in which each species experiences reduced fitness, and with exploitation and parasitism, in which one species benefits at the expense of the other. However, mutualism may evolve from interactions that began with imbalanced benefits, such as parasitism.
The term mutualism was introduced by Pierre-Joseph van Beneden in his 1876 book Animal Parasites and Messmates to mean "mutual aid among species".
Mutualism is often conflated with two other types of ecological phenomena: cooperation and symbiosis. Cooperation most commonly refers to increases in fitness through within-species (intraspecific) interactions, although it has been used (especially in the past) to refer to mutualistic interactions, and it is sometimes used to refer to mutualistic interactions that are not obligate. Symbiosis involves two species living in close physical contact over a long period of their existence and may be mutualistic, parasitic, or commensal; symbiotic relationships are therefore not always mutualistic, and mutualistic interactions are not always symbiotic. Although mutualism and symbiosis are defined differently, the terms have largely been used interchangeably in the past, and confusion over their use persists.
Mutualism plays a key part in ecology and evolution. For example, mutualistic interactions are vital for terrestrial ecosystem function as:
about 80% of land plant species rely on mycorrhizal relationships with fungi to provide them with inorganic compounds and trace elements.
estimates of the proportion of tropical rainforest plants that have seed-dispersal mutualisms with animals range from at least 70% to 93.5%. In addition, mutualism is thought to have driven the evolution of much of the biological diversity we see, such as flower forms (important for pollination mutualisms) and co-evolution between groups of species.
A prominent example of pollination mutualism is that between bees and flowering plants. Bees feed on the plants' pollen and nectar, and in turn transfer pollen to other nearby flowers, inadvertently enabling cross-pollination. Cross-pollination is essential to plant reproduction and fruit and seed production. The bees get their nutrients from the plants and enable successful fertilization of the plants, demonstrating a mutualistic relationship between two very different species.
Mutualism has also been linked to major evolutionary events, such as the evolution of the eukaryotic cell (symbiogenesis) and the colonization of land by plants in association with mycorrhizal fungi.
Types
Resource-resource relationships
Mutualistic relationships can be thought of as a form of "biological barter" in mycorrhizal associations between plant roots and fungi, with the plant providing carbohydrates to the fungus in return for primarily phosphate but also nitrogenous compounds. Other examples include rhizobia bacteria that fix nitrogen for leguminous plants (family Fabaceae) in return for energy-containing carbohydrates. Metabolite exchange between multiple mutualistic species of bacteria has also been observed in a process known as cross-feeding.
Service-resource relationships
Service-resource relationships are common. Three important types are pollination, cleaning symbiosis, and zoochory.
In pollination, a plant trades food resources in the form of nectar or pollen for the service of pollen dispersal. However, daciniphilous Bulbophyllum orchid species trade sex pheromone precursor or booster components via floral synomones/attractants in a true mutualistic interaction with males of Dacini fruit flies (Diptera: Tephritidae: Dacinae).
Phagophiles feed (resource) on ectoparasites, thereby providing anti-pest service, as in cleaning symbiosis.
Elacatinus and Gobiosoma, genera of gobies, feed on ectoparasites of their clients while cleaning them.
Zoochory is the dispersal of the seeds of plants by animals. This is similar to pollination in that the plant produces food resources (for example, fleshy fruit, overabundance of seeds) for animals that disperse the seeds (service). Plants may advertise these resources using colour and a variety of other fruit characteristics, e.g., scent. Fruit of the aardvark cucumber (Cucumis humifructus) is buried so deeply that the plant is solely reliant upon the aardvark's keen sense of smell to detect, extract, and consume its ripened fruit and then scatter its seeds; C. humifructus's geographical range is thus restricted to that of the aardvark.
Another type is ant protection of aphids, where the aphids trade sugar-rich honeydew (a by-product of their mode of feeding on plant sap) in return for defense against predators such as ladybugs.
Service-service relationships
Strict service-service interactions are very rare, for reasons that are far from clear. One example is the relationship between sea anemones and anemone fish in the family Pomacentridae: the anemones provide the fish with protection from predators (which cannot tolerate the stings of the anemone's tentacles) and the fish defend the anemones against butterflyfish (family Chaetodontidae), which eat anemones. However, in common with many mutualisms, there is more than one aspect to it: in the anemonefish-anemone mutualism, waste ammonia from the fish feeds the symbiotic algae that are found in the anemone's tentacles. Therefore, what appears to be a service-service mutualism in fact has a service-resource component. A second example is that of the relationship between some ants in the genus Pseudomyrmex and trees in the genus Acacia, such as the whistling thorn and bullhorn acacia. The ants nest inside the plant's thorns. In exchange for shelter, the ants protect acacias from attack by herbivores (which the ants frequently eat when those are small enough, introducing a resource component to this service-service relationship) and competition from other plants by trimming back vegetation that would shade the acacia. In addition, another service-resource component is present, as the ants regularly feed on lipid-rich food-bodies called Beltian bodies that are on the Acacia plant.
In the neotropics, the ant Myrmelachista schumanni makes its nest in special cavities in Duroia hirsuta. Plants in the vicinity that belong to other species are killed with formic acid. This selective gardening can be so aggressive that small areas of the rainforest are dominated by Duroia hirsuta. These peculiar patches are known by local people as "devil's gardens".
In some of these relationships, the cost of the ant's protection can be quite expensive. Cordia sp. trees in the Amazonian rainforest have a kind of partnership with Allomerus sp. ants, which make their nests in modified leaves. To increase the amount of living space available, the ants will destroy the tree's flower buds. The flowers die and leaves develop instead, providing the ants with more dwellings. Another type of Allomerus sp. ant lives with the Hirtella sp. tree in the same forests, but in this relationship, the tree has turned the tables on the ants. When the tree is ready to produce flowers, the ant abodes on certain branches begin to wither and shrink, forcing the occupants to flee, leaving the tree's flowers to develop free from ant attack.
The term "species group" can be used to describe the manner in which individual organisms group together. In this non-taxonomic context one can refer to "same-species groups" and "mixed-species groups." While same-species groups are the norm, examples of mixed-species groups abound. For example, zebra (Equus burchelli) and wildebeest (Connochaetes taurinus) can remain in association during periods of long distance migration across the Serengeti as a strategy for thwarting predators. Cercopithecus mitis and Cercopithecus ascanius, species of monkey in the Kakamega Forest of Kenya, can stay in close proximity and travel along exactly the same routes through the forest for periods of up to 12 hours. These mixed-species groups cannot be explained by the coincidence of sharing the same habitat. Rather, they are created by the active behavioural choice of at least one of the species in question.
Evolution
Mutualistic symbiosis can sometimes evolve from parasitism or commensalism. Symbiogenesis, a leading theory on the evolution of eukaryotes, states that the mitochondria and cell nucleus emerged from a parasitic relationship between ancient archaea and bacteria. Fungi's relationship to plants in the form of mycelium evolved from parasitism and commensalism; under certain conditions, fungal species previously in a state of mutualism can turn parasitic on weak or dying plants. Likewise, the symbiotic relationship of clownfish and sea anemones emerged from a commensal relationship. Once a mutualistic relationship emerges, both symbionts are pushed towards co-evolution with each other.
Mathematical modeling
Mathematical treatments of mutualisms, like the study of mutualisms in general, have lagged behind those for predation (predator–prey and consumer–resource interactions). In models of mutualisms, the terms "type I" and "type II" functional responses refer to the linear and saturating relationships, respectively, between the benefit provided to an individual of species 1 (dependent variable) and the density of species 2 (independent variable).
Type I functional response
One of the simplest frameworks for modeling species interactions is the Lotka–Volterra equations. In this model, the changes in population densities of the two mutualists are quantified as:

$$\frac{dN_i}{dt} = r_i N_i - \alpha_i N_i^2 + \beta_{ij} N_i N_j, \qquad i, j \in \{1, 2\},\ i \neq j,$$

where
$N_i$ = the population density of species i.
$r_i$ = the intrinsic growth rate of the population of species i.
$\alpha_i$ = the negative effect of within-species crowding on species i.
$\beta_{ij}$ = the beneficial effect of the density of species j on species i.
This is in essence the logistic growth equation modified for mutualistic interaction. The mutualistic interaction term represents the increase in population growth of one species as a result of the presence of greater numbers of another species. As the mutualistic interaction term β is always positive, this simple model can lead to unrealistic unbounded growth, so it may be more realistic to include a further term in the formula, representing a saturation mechanism, to avoid this occurring.
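For concreteness, here is a minimal numerical sketch of the type I model for a symmetric pair of species; the `simulate` helper, the Euler integration, and all parameter values are illustrative assumptions, not from the source:

```python
# Euler integration of the symmetric type I (Lotka-Volterra) mutualism
# model above. Parameter values are arbitrary illustrative choices.

def simulate(r=0.5, alpha=0.01, beta=0.005, n0=10.0, dt=0.01, steps=20_000):
    """Integrate dNi/dt = r*Ni - alpha*Ni**2 + beta*Ni*Nj for i, j = 1, 2."""
    n1 = n2 = n0
    for _ in range(steps):
        d1 = (r * n1 - alpha * n1**2 + beta * n1 * n2) * dt
        d2 = (r * n2 - alpha * n2**2 + beta * n2 * n1) * dt
        n1, n2 = n1 + d1, n2 + d2
    return n1, n2

# With beta < alpha, densities settle at r/(alpha - beta) = 100 here,
# above the single-species carrying capacity r/alpha = 50. With
# beta >= alpha, the crowding term can no longer contain the mutualism
# term and the model predicts unbounded growth.
print(simulate())  # approx (100.0, 100.0)
```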
Type II functional response
In 1989, David Hamilton Wright modified the above Lotka–Volterra equations by adding a new term, βM/K, to represent a mutualistic relationship. Wright also considered the concept of saturation, which means that with higher densities, there is a decrease in the benefits of further increases of the mutualist population. Without saturation, depending on the size of parameter α, species densities would increase indefinitely. Because that is not possible due to environmental constraints and carrying capacity, a model that includes saturation would be more accurate. Wright's mathematical theory is based on the premise of a simple two-species mutualism model in which the benefits of mutualism become saturated due to limits posed by handling time. Wright defines handling time as the time needed to process a food item, from the initial interaction to the start of a search for new food items and assumes that processing of food and searching for food are mutually exclusive. Mutualists that display foraging behavior are exposed to the restrictions on handling time. Mutualism can be associated with symbiosis.
Handling time interactions
In 1959, C. S. Holling performed his classic disc experiment that assumed that
the number of food items captured is proportional to the allotted searching time; and
that there is a handling time variable that exists separately from the notion of search time. He then developed an equation for the Type II functional response, which showed that the feeding rate is equivalent to

$$f(x) = \frac{ax}{1 + aT_H x}$$

where
$a$ = the instantaneous discovery rate
$x$ = food item density
$T_H$ = handling time
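As a quick numerical illustration with arbitrarily chosen values: taking $a = 0.5$, $x = 4$, and $T_H = 0.25$ gives

$$f(4) = \frac{0.5 \times 4}{1 + 0.5 \times 0.25 \times 4} = \frac{2}{1.5} \approx 1.33,$$

below the handling-free rate $ax = 2$; as $x \to \infty$, the feeding rate saturates at $1/T_H = 4$.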
The equation that incorporates Type II functional response and mutualism is:

$$\frac{dN}{dt} = N\left(r - cN + \frac{baM}{1 + aT_H M}\right)$$

where
$N$ and $M$ = densities of the two mutualists
$r$ = intrinsic rate of increase of N
$c$ = coefficient measuring negative intraspecific interaction. This is equivalent to the inverse of the carrying capacity, 1/K, of N in the logistic equation.
$a$ = instantaneous discovery rate
$b$ = coefficient converting encounters with M to new units of N
or, equivalently,

$$\frac{dN}{dt} = N\left(r - cN + \frac{\beta M}{X + M}\right)$$

where
$X = 1/(aT_H)$
$\beta = b/T_H$
This model is most effectively applied to free-living species that encounter a number of individuals of the mutualist part in the course of their existences. Wright notes that models of biological mutualism tend to be similar qualitatively, in that the featured isoclines generally have a positive decreasing slope, and by and large similar isocline diagrams. Mutualistic interactions are best visualized as positively sloped isoclines, which can be explained by the fact that the saturation of benefits accorded to mutualism or restrictions posed by outside factors contribute to a decreasing slope.
The type II functional response is visualized as the graph of the saturating benefit term, $baM/(1 + aT_H M)$, vs. M.
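The saturation is easy to verify numerically. In the sketch below (symbols follow the definitions above; the parameter values are illustrative assumptions, not from the source), the per-capita benefit approaches the ceiling $b/T_H$ as the partner density M grows:

```python
# Sketch of the type II (saturating) mutualism model described above.
# Symbols follow the text's definitions; values are illustrative only.

def dN_dt(N, M, r=0.5, c=0.01, a=0.5, b=0.2, T_H=0.25):
    """Growth rate dN/dt = N * (r - c*N + b*a*M / (1 + a*T_H*M))."""
    return N * (r - c * N + b * a * M / (1 + a * T_H * M))

# The benefit term b*a*M/(1 + a*T_H*M) saturates at b/T_H (= 0.8 here),
# so N's density is bounded by (r + b/T_H)/c rather than growing without
# limit as in the type I model.
for M in (0, 10, 100, 1000):
    print(M, round(dN_dt(50.0, M), 2))   # 0.0, 22.22, 37.04, 39.68
```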
Structure of networks
Mutualistic networks made up of the interactions between plants and pollinators have been found to have a similar structure in very different ecosystems on different continents, even though they consist of entirely different species. The structure of these mutualistic networks may have large consequences for the way in which pollinator communities respond to increasingly harsh conditions and for the community's carrying capacity.
Mathematical models that examine the consequences of this network structure for the stability of pollinator communities suggest that the specific way in which plant–pollinator networks are organized minimizes competition between pollinators, reduces the spread of indirect effects, and thus enhances ecosystem stability; it may even lead to strong indirect facilitation between pollinators when conditions are harsh. This means that pollinator species can survive together under harsh conditions. But it also means that pollinator species may collapse simultaneously when conditions pass a critical point, because under difficult conditions the pollinator species depend on one another for survival.
Such a community-wide collapse, involving many pollinator species, can occur suddenly when increasingly harsh conditions pass a critical point and recovery from such a collapse might not be easy. The improvement in conditions needed for pollinators to recover could be substantially larger than the improvement needed to return to conditions at which the pollinator community collapsed.
Humans
Humans are involved in mutualisms with other species: their gut flora is essential for efficient digestion. Infestations of head lice might have been beneficial for humans by fostering an immune response that helps to reduce the threat of lethal diseases borne by body lice.
Some relationships between humans and domesticated animals and plants are to different degrees mutualistic. For example, agricultural varieties of maize provide food for humans and are unable to reproduce without human intervention because the leafy sheath does not fall open, and the seedhead (the "corn on the cob") does not shatter to scatter the seeds naturally.
In traditional agriculture, some plants act as mutualists when grown as companion plants, providing each other with shelter, soil fertility, and/or natural pest control. For example, beans may grow up cornstalks as a trellis while fixing nitrogen in the soil for the corn, a phenomenon that is used in Three Sisters farming.
One researcher has proposed that the key advantage Homo sapiens had over Neanderthals in competing over similar habitats was the former's mutualism with dogs.
Intestinal microbiota
The microbiota in the human intestine coevolved with the human species, and this relationship is considered to be a mutualism that is beneficial both to the human host and the bacteria in the gut population. The mucous layer of the intestine contains commensal bacteria that produce bacteriocins, modify the pH of the intestinal contents, and compete for nutrition to inhibit colonization by pathogens. The gut microbiota, containing trillions of microorganisms, possesses the metabolic capacity to produce and regulate multiple compounds that reach the circulation and act to influence the function of distal organs and systems. Breakdown of the protective mucosal barrier of the gut can contribute to the development of colon cancer.
Evolution of mutualism
Evolution by type
Every generation of every organism needs nutrients, and needs them more consistently than it needs particular defensive characteristics, whose fitness benefits vary heavily with the environment. This may be why hosts are more likely to evolve dependence on vertically transmitted bacterial mutualists that provide nutrients than on those providing defensive benefits. The pattern extends beyond bacteria: Yamada et al. (2015) demonstrated that undernourished Drosophila depend heavily on their fungal symbiont Issatchenkia orientalis for amino acids.
Mutualism breakdown
Mutualisms are not static, and can be lost by evolution. Sachs and Simms (2006) suggest that this can occur via four main pathways:
One mutualist shifts to parasitism and no longer benefits its partner, as may be the case with head lice
One partner abandons the mutualism and lives autonomously
One partner may go extinct
A partner may be switched to another species
There are many examples of mutualism breakdown. For example, plant lineages inhabiting nutrient-rich environments have evolutionarily abandoned mycorrhizal mutualisms many times independently. Evolutionarily, head lice may have been mutualistic, as they allowed for early immunity to various diseases borne by body lice; however, as these diseases were eradicated, the relationship became less mutualistic and more parasitic.
Measuring and defining mutualism
Measuring the exact fitness benefit to the individuals in a mutualistic relationship is not always straightforward, particularly when the individuals can receive benefits from a variety of species, for example most plant-pollinator mutualisms. It is therefore common to categorise mutualisms according to the closeness of the association, using terms such as obligate and facultative. Defining "closeness", however, is also problematic. It can refer to mutual dependency (the species cannot live without one another) or the biological intimacy of the relationship in relation to physical closeness (e.g., one species living within the tissues of the other species).
| Biology and health sciences | Ecology | Biology |
51510 | https://en.wikipedia.org/wiki/Silk | Silk | Silk is a natural protein fiber, some forms of which can be woven into textiles. The protein fiber of silk is composed mainly of fibroin and is most commonly produced by certain insect larvae to form cocoons. The best-known silk is obtained from the cocoons of the larvae of the mulberry silkworm Bombyx mori reared in captivity (sericulture). The shimmering appearance of silk is due to the triangular prism-like structure of the silk fibre, which allows silk cloth to refract incoming light at different angles, thus producing different colors.
Harvested silk is produced by several insects, but generally only the silk of various moth caterpillars has been used for textile manufacturing. There has been some research into other types of silk, which differ at the molecular level. Silk is mainly produced by the larvae of insects undergoing complete metamorphosis, but some insects, such as webspinners and raspy crickets, produce silk throughout their lives. Silk production also occurs in hymenoptera (bees, wasps, and ants), silverfish, caddisflies, mayflies, thrips, leafhoppers, beetles, lacewings, fleas, flies, and midges. Other types of arthropods produce silk, most notably various arachnids, such as spiders.
Etymology
The word silk came into English through Old English, from an earlier term meaning "silken", and derives ultimately from the Chinese word "sī" and other Asian sources; compare the corresponding words for silk in Mandarin, Manchurian, and Mongolian.
History
The production of silk originated in China in the Neolithic period (Yangshao culture, 4th millennium BC), although it would eventually spread to other places of the world. Silk production remained confined to China until the Silk Road opened at some point during the latter part of the 1st millennium BC, though China maintained its virtual monopoly over silk production for another thousand years.
Wild silk
Several kinds of wild silk, produced by caterpillars other than the mulberry silkworm, have been known and spun in China, South Asia, and Europe since ancient times. However, the scale of production was always far smaller than for cultivated silks. There are several reasons for this: first, they differ from the domesticated varieties in colour and texture and are therefore less uniform; second, cocoons gathered in the wild have usually had the pupa emerge from them before being discovered, so the silk thread that makes up the cocoon has been torn into shorter lengths; and third, many wild cocoons are covered in a mineral layer that prevents long strands of silk from being reeled from them. Thus, the only way to obtain silk suitable for spinning into textiles in areas where commercial silks are not cultivated was by tedious and labor-intensive carding.
Some natural silk structures have been used without being unwound or spun. Spider webs were used as a wound dressing in ancient Greece and Rome, and as a base for painting from the 16th century. Butterfly caterpillar nests were pasted together to make a fabric in the Aztec Empire.
Commercial silks originate from reared silkworm pupae, which are bred to produce a white-colored silk thread with no mineral on the surface. The pupae are killed by either dipping them in boiling water before the adult moths emerge or by piercing them with a needle. These factors all contribute to the ability of the whole cocoon to be unravelled as one continuous thread, permitting a much stronger cloth to be woven from the silk.
Wild silks also tend to be more difficult to dye than silk from the cultivated silkworm. A technique known as demineralizing allows the mineral layer around the cocoon of wild silk moths to be removed, leaving only variability in color as a barrier to creating a commercial silk industry based on wild silks in the parts of the world where wild silk moths thrive, such as in Africa and South America.
China
Silk use in fabric was first developed in ancient China. The earliest evidence for silk is the presence of the silk protein fibroin in soil samples from two tombs at the neolithic site Jiahu in Henan, which date back about 8,500 years. The earliest surviving example of silk fabric dates from about 3630 BC, and was used as the wrapping for the body of a child at a Yangshao culture site in Qingtaicun near Xingyang, Henan.
Legend gives credit for developing silk to a Chinese empress, Leizu (Hsi-Ling-Shih, Lei-Tzu). Silks were originally reserved for the emperors of China for their own use and gifts to others, but spread gradually through Chinese culture and trade both geographically and socially, and then to many regions of Asia. Because of its texture and lustre, silk rapidly became a popular luxury fabric in the many areas accessible to Chinese merchants. Silk was in great demand, and became a staple of pre-industrial international trade. Silk was also used as a surface for writing, especially during the Warring States period (475–221 BCE). The fabric was light, it survived the damp climate of the Yangtze region, absorbed ink well, and provided a white background for the text. In July 2007, archaeologists discovered intricately woven and dyed silk textiles in a tomb in Jiangxi province, dated to the Eastern Zhou dynasty roughly 2,500 years ago. Although historians have suspected a long history of a formative textile industry in ancient China, this find of silk textiles employing "complicated techniques" of weaving and dyeing provides direct evidence for silks dating before the Mawangdui-discovery and other silks dating to the Han dynasty (202 BC – 220 AD).
Silk is described in a chapter of the Fan Shengzhi shu from the Western Han (202 BC – 9 AD). There is a surviving calendar for silk production in an Eastern Han (25–220 AD) document. The two other known works on silk from the Han period are lost. The first evidence of the long-distance silk trade is the finding of silk in the hair of an Egyptian mummy of the 21st dynasty, c. 1070 BC. The silk trade reached as far as the Indian subcontinent, the Middle East, Europe, and North Africa. This trade was so extensive that the major set of trade routes between Europe and Asia came to be known as the Silk Road.
The emperors of China strove to keep knowledge of sericulture secret to maintain the Chinese monopoly. Nonetheless, sericulture reached Korea with technological aid from China around 200 BC, the ancient Kingdom of Khotan by AD 50, and India by AD 140.
In the ancient era, silk from China was the most lucrative and sought-after luxury item traded across the Eurasian continent, and many civilizations, such as the ancient Persians, benefited economically from trade.
Japan
Archaeological evidence indicates that sericulture has been practiced since the Yayoi period. The silk industry was dominant from the 1930s to 1950s, but is less common now.
Silk from East Asia had declined in importance after silkworms were smuggled from China to the Byzantine Empire. However, in 1845, an epidemic of flacherie among European silkworms devastated the silk industry there. This led to a demand for silk from China and Japan, where as late as the nineteenth and early twentieth centuries, Japanese exports competed directly with Chinese in the international market in such low value-added, labor-intensive products as raw silk.
Between 1850 and 1930, raw silk ranked as the leading export for both countries, accounting for 20%–40% of Japan's total exports and 20%–30% of China's. Between the 1890s and the 1930s, Japanese silk exports quadrupled, making Japan the largest silk exporter in the world. This increase in exports was mostly due to the economic reforms during the Meiji period and the decline of the Qing dynasty in China, which led to rapid industrialization of Japan whilst the Chinese industries stagnated.
During World War II, embargoes against Japan had led to adoption of synthetic materials such as nylon, which led to the decline of the Japanese silk industry and its position as the lead silk exporter of the world. Today, China exports the largest volume of raw silk in the world.
India
Silk has a long history in India. It is known as Resham in eastern and north India, and Pattu in southern parts of India. Recent archaeological discoveries in Harappa and Chanhu-daro suggest that sericulture, employing wild silk threads from native silkworm species, existed in South Asia during the time of the Indus Valley civilisation (now in Pakistan and India) dating between 2450 BC and 2000 BC. Shelagh Vainker, a silk expert at the Ashmolean Museum in Oxford, who sees evidence for silk production in China "significantly earlier" than 2500–2000 BC, suggests, "people of the Indus civilization either harvested silkworm cocoons or traded with people who did, and that they knew a considerable amount about silk."
India is the second largest producer of silk in the world after China. About 97% of the raw mulberry silk comes from six Indian states, namely Andhra Pradesh, Karnataka, Jammu and Kashmir, Tamil Nadu, Bihar, and West Bengal. North Bangalore (the upcoming site of a $20 million "Silk City"), Ramanagara, and Mysore contribute a majority of silk production in Karnataka.
In Tamil Nadu, mulberry cultivation is concentrated in the Coimbatore, Erode, Bhagalpuri, Tiruppur, Salem, and Dharmapuri districts. Hyderabad, Andhra Pradesh, and Gobichettipalayam, Tamil Nadu, were the first locations to have automated silk reeling units in India.
In the northeastern state of Assam, three different types of indigenous silk are produced, collectively called Assam silk: Muga silk, Eri silk, and Pat silk. Muga, the golden silk, and Eri are produced by silkworms that are native only to Assam. They have been reared since ancient times, as in other East and South-East Asian countries.
Thailand
Silk is produced year-round in Thailand by two types of silkworms, the cultured Bombycidae and wild Saturniidae. Most production is after the rice harvest in the southern and northeastern parts of the country. Women traditionally weave silk on hand looms and pass the skill on to their daughters, as weaving is considered to be a sign of maturity and eligibility for marriage. Thai silk textiles often use complicated patterns in various colours and styles. Most regions of Thailand have their own typical silks. A single thread filament is too thin to use on its own so women combine many threads to produce a thicker, usable fiber. They do this by hand-reeling the threads onto a wooden spindle to produce a uniform strand of raw silk. The process takes around 40 hours to produce a half kilogram of silk. Many local operations use a reeling machine for this task, but some silk threads are still hand-reeled. The difference is that hand-reeled threads produce three grades of silk: two fine grades that are ideal for lightweight fabrics, and a thick grade for heavier material.
The silk fabric is soaked in extremely cold water and bleached before dyeing to remove the natural yellow coloring of Thai silk yarn. To do this, skeins of silk thread are immersed in large tubs of hydrogen peroxide. Once washed and dried, the silk is woven on a traditional hand-operated loom.
Bangladesh
The Rajshahi Division of northern Bangladesh is the hub of the country's silk industry. There are three types of silk produced in the region: mulberry, endi, and tassar. Bengali silk was a major item of international trade for centuries. It was known as Ganges silk in medieval Europe. Bengal was the leading exporter of silk between the 16th and 19th centuries.
Central Asia
The 7th century CE murals of Afrasiyab in Samarkand, Sogdiana, show a Chinese Embassy carrying silk and a string of silkworm cocoons to the local Sogdian ruler.
Middle East
In the Torah, a scarlet cloth item called in Hebrew "sheni tola'at" שני תולעת – literally "crimson of the worm" – is described as being used in purification ceremonies, such as those following a leprosy outbreak (Leviticus 14), alongside cedar wood and hyssop (za'atar). Eminent scholar and leading medieval translator of Jewish sources and books of the Bible into Arabic, Rabbi Saadia Gaon, translates this phrase explicitly as "crimson silk" – חריר קרמז حرير قرمز.
In Islamic teachings, Muslim men are forbidden to wear silk. Many religious jurists believe the reasoning behind the prohibition lies in avoiding clothing for men that can be considered feminine or extravagant. There are disputes regarding the amount of silk a fabric can consist of (e.g., whether a small decorative silk piece on a cotton caftan is permissible or not) for it to be lawful for men to wear, but the dominant opinion of most Muslim scholars is that the wearing of silk by men is forbidden. Modern attire has raised a number of issues, including, for instance, the permissibility of wearing silk neckties, which are masculine articles of clothing.
Ancient Mediterranean
In the Odyssey, 19.233, when Odysseus, while pretending to be someone else, is questioned by Penelope about her husband's clothing, he says that he wore a shirt "gleaming like the skin of a dried onion" (varies with translations, literal translation here) which could refer to the lustrous quality of silk fabric.
Aristotle wrote of Coa vestis, a wild silk textile from Kos.
Sea silk from certain large sea shells was also valued.
The Roman Empire knew of and traded in silk, and Chinese silk was the most highly priced luxury good imported by them. During the reign of emperor Tiberius, sumptuary laws were passed that forbade men from wearing silk garments, but these proved ineffectual. The Historia Augusta mentions that the third-century emperor Elagabalus was the first Roman to wear garments of pure silk, whereas it had been customary to wear fabrics of silk/cotton or silk/linen blends. Despite the popularity of silk, the secret of silk-making only reached Europe around AD 550, via the Byzantine Empire. Contemporary accounts state that monks working for the emperor Justinian I smuggled silkworm eggs to Constantinople from China inside hollow canes. All top-quality looms and weavers were located inside the Great Palace complex in Constantinople, and the cloth produced was used in imperial robes or in diplomacy, as gifts to foreign dignitaries. The remainder was sold at very high prices.
Medieval and modern Europe
Italy was the most important producer of silk during the Medieval age. The first center to introduce silk production to Italy was the city of Catanzaro during the 11th century in the region of Calabria. The silk of Catanzaro supplied almost all of Europe and was sold in a large market fair in the port of Reggio Calabria, to Spanish, Venetian, Genovese, and Dutch merchants. Catanzaro became the lace capital of the world with a large silkworm breeding facility that produced all the laces and linens used in the Vatican. The city was world-famous for its fine fabrication of silks, velvets, damasks, and brocades.
Another notable center was the Italian city-state of Lucca which largely financed itself through silk-production and silk-trading, beginning in the 12th century. Other Italian cities involved in silk production were Genoa, Venice, and Florence. The Piedmont area of Northern Italy became a major silk producing area when water-powered silk throwing machines were developed.
The Silk Exchange in Valencia, dating from the 15th century, illustrates the power and wealth of one of the great Mediterranean mercantile cities; as early as 1348, perxal (percale) had also been traded there as a kind of silk.
Silk was produced in and exported from the province of Granada, Spain, especially the Alpujarras region, until the Moriscos, whose industry it was, were expelled from Granada in 1571.
Since the 15th century, silk production in France has been centered on the city of Lyon, where many mechanical tools for mass production were first introduced in the 17th century.
James I attempted to establish silk production in England, purchasing and planting 100,000 mulberry trees, some on land adjacent to Hampton Court Palace, but they were of a species unsuited to the silk worms, and the attempt failed. In 1732 John Guardivaglio set up a silk throwing enterprise at Logwood mill in Stockport; in 1744, Burton Mill was erected in Macclesfield; and in 1753 Old Mill was built in Congleton. These three towns remained the centre of the English silk throwing industry until silk throwing was replaced by silk waste spinning. British enterprise also established silk filature in Cyprus in 1928. In England in the mid-20th century, raw silk was produced at Lullingstone Castle in Kent. Silkworms were raised and reeled under the direction of Zoe Lady Hart Dyke, later moving to Ayot St Lawrence in Hertfordshire in 1956.
During World War II, supplies of silk for UK parachute manufacture were secured from the Middle East by Peter Gaddum.
North America
Wild silk taken from the nests of native butterfly and moth caterpillars was used by the Aztecs to make containers and as paper. Silkworms were introduced to Oaxaca from Spain in the 1530s and the region profited from silk production until the early 17th century, when the king of Spain banned export to protect Spain's silk industry. Silk production for local consumption has continued until the present day, sometimes spinning wild silk.
King James I introduced silk-growing to the British colonies in America around 1619, ostensibly to discourage tobacco planting. The Shakers in Kentucky adopted the practice.
The history of industrial silk in the United States is largely tied to several smaller urban centers in the Northeast region. Beginning in the 1830s, Manchester, Connecticut emerged as the early center of the silk industry in America, when the Cheney Brothers became the first in the United States to properly raise silkworms on an industrial scale; today the Cheney Brothers Historic District showcases their former mills. With the mulberry tree craze of that decade, other smaller producers began raising silkworms. This economy particularly gained traction in the vicinity of Northampton, Massachusetts and its neighboring Williamsburg, where a number of small firms and cooperatives emerged. Among the most prominent of these was the cooperative utopian Northampton Association for Education and Industry, of which Sojourner Truth was a member. Following the destructive Mill River Flood of 1874, one manufacturer, William Skinner, relocated his mill from Williamsburg to the then-new city of Holyoke. Over the next 50 years he and his sons would maintain relations between the American silk industry and its counterparts in Japan, and expanded their business to the point that by 1911, the Skinner Mill complex contained the largest silk mill under one roof in the world, and the brand Skinner Fabrics had become the largest manufacturer of silk satins internationally. Other efforts later in the 19th century would also bring the new silk industry to Paterson, New Jersey, with several firms hiring European-born textile workers and granting it the nickname "Silk City" as another major center of production in the United States.
World War II interrupted the silk trade from Asia, and silk prices increased dramatically. U.S. industry began to look for substitutes, which led to the use of synthetics such as nylon. Synthetic silks have also been made from lyocell, a type of cellulose fiber, and are often difficult to distinguish from real silk (see spider silk for more on synthetic silks).
Malaysia
In Terengganu, which is now part of Malaysia, a second generation of silkworm was being imported as early as 1764 for the country's silk textile industry, especially songket. However, since the 1980s, Malaysia is no longer engaged in sericulture but does plant mulberry trees.
Vietnam
In Vietnamese legend, silk appeared in the first millennium AD and is still being woven today.
Production process
The process of silk production is known as sericulture. The entire production process of silk can be divided into several steps, which are typically handled by different entities. Extracting raw silk starts by cultivating the silkworms on mulberry leaves. Once the worms start pupating, their cocoons are immersed in boiling water, which softens the sericin gum so that the individual long fibres can be extracted and fed into the spinning reel.
To produce 1 kg of silk, 104 kg of mulberry leaves must be eaten by 3000 silkworms. It takes about 5000 silkworms to make a pure silk kimono. The major silk producers are China (54%) and India (14%).
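Combining these figures (a rough back-of-the-envelope estimate, assuming the quantities scale linearly): a kimono's 5000 silkworms correspond to about 5000/3000 ≈ 1.7 kg of silk, which would in turn require roughly 1.7 × 104 ≈ 173 kg of mulberry leaves.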
The environmental impact of silk production is potentially large when compared with other natural fibers. A life-cycle assessment of Indian silk production shows that the production process has a large carbon and water footprint, mainly because silk is an animal-derived fiber and more inputs, such as fertilizer and water, are needed per unit of fiber produced.
Properties
Physical properties
Silk fibers from the Bombyx mori silkworm have a triangular cross section with rounded corners, 5–10 μm wide. The fibroin-heavy chain is composed mostly of beta-sheets, due to a 59-mer amino acid repeat sequence with some variations. The flat surfaces of the fibrils reflect light at many angles, giving silk a natural sheen. The cross-section from other silkworms can vary in shape and diameter: crescent-like for Anaphe and elongated wedge for tussah. Silkworm fibers are naturally extruded from two silkworm glands as a pair of primary filaments (brin), which are stuck together, with sericin proteins that act like glue, to form a bave. Bave diameters for tussah silk can reach 65 μm. See cited reference for cross-sectional SEM photographs.
Silk has a smooth, soft texture that is not slippery, unlike many synthetic fibers.
Silk is one of the strongest natural fibers, but it loses up to 20% of its strength when wet. It has a good moisture regain of 11%. Its elasticity is moderate to poor: if elongated even a small amount, it remains stretched. It can be weakened if exposed to too much sunlight. It may also be attacked by insects, especially if left dirty.
One example of the durable nature of silk over other fabrics is demonstrated by the recovery in 1840 of silk garments from a wreck of 1782: 'The most durable article found has been silk; for besides pieces of cloaks and lace, a pair of black satin breeches, and a large satin waistcoat with flaps, were got up, of which the silk was perfect, but the lining entirely gone ... from the thread giving way ... No articles of dress of woollen cloth have yet been found.'
Silk is a poor conductor of electricity and thus susceptible to static cling. Silk has a high emissivity for infrared light, making it feel cool to the touch.
Unwashed silk chiffon may shrink up to 8% due to a relaxation of the fiber macrostructure, so silk should either be washed prior to garment construction, or dry cleaned. Dry cleaning may still shrink the chiffon up to 4%. Occasionally, this shrinkage can be reversed by a gentle steaming with a press cloth. There is almost no gradual shrinkage nor shrinkage due to molecular-level deformation.
Natural and synthetic silk is known to manifest piezoelectric properties in proteins, probably due to its molecular structure.
Silkworm silk was used as the standard for the denier, a measurement of linear density in fibers. Silkworm silk therefore has a linear density of approximately 1 den, or 1.1 dtex.
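As a unit-conversion check (these are the standard definitions of the units, not figures specific to this article): denier is mass in grams per 9,000 m of fiber and dtex is grams per 10,000 m, so

$$1\ \text{den} = \frac{1\ \text{g}}{9000\ \text{m}} = \frac{10000}{9000}\ \text{dtex} \approx 1.11\ \text{dtex},$$

consistent with the ≈1.1 dtex figure above.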
Chemical properties
Silk emitted by the silkworm consists of two main proteins, sericin and fibroin, fibroin being the structural center of the silk and sericin being the sticky material surrounding it. Fibroin is largely made up of the repeating amino acid sequence Gly-Ser-Gly-Ala-Gly-Ala and forms beta-pleated sheets. Hydrogen bonds form between chains, and side chains are oriented above and below the plane of the hydrogen bond network.
The high proportion (50%) of glycine allows tight packing, because glycine has no side chain and is therefore unencumbered by steric constraints. The addition of alanine and serine makes the fibres strong and resistant to breaking. This tensile strength is due to the many hydrogen bonds between adjacent chains: when the fibre is stretched, the load is distributed across these numerous bonds, so they do not break.
Silk resists most mineral acids, except for sulfuric acid, which dissolves it. It is yellowed by perspiration. Chlorine bleach will also destroy silk fabrics.
Variants
Regenerated silk fiber
Regenerated silk fiber (RSF) is produced by chemically dissolving silkworm cocoons, leaving their molecular structure intact. The silk fibers dissolve into tiny thread-like structures known as microfibrils. The resulting solution is extruded through a small opening, causing the microfibrils to reassemble into a single fiber. The resulting material is reportedly twice as stiff as silk.
Applications
Clothing
Silk's absorbency makes it comfortable to wear in warm weather and while active. Its low conductivity keeps warm air close to the skin during cold weather. It is often used for clothing such as shirts, ties, blouses, formal dresses, high-fashion clothes, lining, lingerie, pajamas, robes, dress suits, sun dresses, and traditional Asian clothing. Silk is also excellent for insect-proof clothing, protecting the wearer from mosquitoes and horseflies.
Fabrics that are often made from silk include satin, charmeuse, habutai, chiffon, taffeta, crêpe de chine, dupioni, noil, tussah, and shantung, among others.
Furniture
Silk's attractive lustre and drape makes it suitable for many furnishing applications. It is used for upholstery, wall coverings, window treatments (if blended with another fiber), rugs, bedding, and wall hangings.
Industry
Silk had many industrial and commercial uses, such as in parachutes, bicycle tires, comforter filling, and artillery gunpowder bags.
Medicine
A special manufacturing process removes the outer sericin coating of the silk, which makes it suitable as non-absorbable surgical sutures. Sometimes wearing silk is suggested for people with atopic dermatitis but, even though it is safe for the skin, it does not improve symptoms of the condition.
Biomaterial
Silk began to serve as a biomedical material for sutures in surgeries as early as the second century CE. In the past 30 years, it has been widely studied and used as a biomaterial due to its mechanical strength, biocompatibility, tunable degradation rate, ease of loading cellular growth factors (for example, BMP-2), and its ability to be processed into several other formats such as films, gels, particles, and scaffolds. Silks from Bombyx mori, a kind of cultivated silkworm, are the most widely investigated silks.
Silks derived from Bombyx mori are generally made of two parts: the silk fibroin fiber, which contains a light chain of 25 kDa and a heavy chain of 350 kDa (or 390 kDa) linked by a single disulfide bond, and a glue-like protein, sericin, comprising 25 to 30 percent by weight. Silk fibroin contains hydrophobic beta sheet blocks, interrupted by small hydrophilic groups. The beta-sheets contribute much to the high mechanical strength of silk fibers, which achieves 740 MPa, tens of times that of poly(lactic acid) and hundreds of times that of collagen. This impressive mechanical strength has made silk fibroin very competitive for applications in biomaterials. Indeed, silk fibers have found their way into tendon tissue engineering, where mechanical properties matter greatly. In addition, mechanical properties of silks from various kinds of silkworms vary widely, which provides more choices for their use in tissue engineering.
Most products fabricated from regenerated silk are weak and brittle, with only ≈1–2% of the mechanical strength of native silk fibers, due to the absence of appropriate secondary and hierarchical structure.
Biocompatibility
Biocompatibility, i.e., the degree to which silk provokes an immune response, is a critical issue for biomaterials. The issue arose during its increasing clinical use. Wax or silicone is usually used as a coating to avoid fraying and potential immune responses when silk fibers serve as suture materials. Detailed characterization of silk fibers, such as the extent of sericin removal, the surface chemical properties of the coating material, and the process used, is often lacking in the literature, which makes the real immune response of silk fibers difficult to determine; nevertheless, it is generally believed that sericin is the major cause of immune response. Thus, the removal of sericin is an essential step to assure biocompatibility in biomaterial applications of silk. However, further research has failed to prove clearly the contribution of sericin to inflammatory responses based on isolated sericin and sericin-based biomaterials. In addition, silk fibroin exhibits an inflammatory response similar to that of tissue culture plastic in vitro when assessed with human mesenchymal stem cells (hMSCs), and lower than that of collagen and PLA when silk fibroin films are implanted with rat MSCs in vivo. Thus, appropriate degumming and sterilization assure the biocompatibility of silk fibroin, which is further validated by in vivo experiments on rats and pigs. Despite these promising results, concerns remain about the long-term safety of silk-based biomaterials in the human body. Even though silk sutures serve well, they exist and interact with tissue only for a limited period that depends on wound recovery (several weeks), much shorter than the timescales involved in tissue engineering. Another concern arises from biodegradation, because the biocompatibility of silk fibroin does not necessarily assure the biocompatibility of its decomposed products. In fact, different levels of immune responses and diseases have been triggered by the degraded products of silk fibroin.
Biodegradability
Biodegradability (also known as biodegradation)—the ability to be disintegrated by biological approaches, including bacteria, fungi, and cells—is another significant property of biomaterials. Biodegradable materials can minimize the pain of patients from surgeries, especially in tissue engineering, since there is no need for surgery in order to remove the implanted scaffold. Wang et al. showed the in vivo degradation of silk via aqueous 3D scaffolds implanted into Lewis rats. Enzymes are the means used to achieve degradation of silk in vitro. Protease XIV from Streptomyces griseus and α-chymotrypsin from bovine pancreases are two popular enzymes for silk degradation. In addition, gamma radiation, as well as cell metabolism, can also regulate the degradation of silk.
Compared with synthetic biomaterials such as polyglycolides and polylactides, silk is advantageous in some aspects of biodegradation. The acidic degraded products of polyglycolides and polylactides decrease the pH of the ambient environment and thus adversely influence the metabolism of cells, which is not an issue for silk. In addition, silk materials can retain strength over a desired period, from weeks to months, on an as-needed basis, by modulating the content of beta sheets.
Genetic modification
Genetic modification of domesticated silkworms has been used to alter the composition of the silk. As well as possibly facilitating the production of more useful types of silk, this may allow other industrially or therapeutically useful proteins to be made by silkworms.
Cultivation
Silk moths lay eggs on specially prepared paper. The eggs hatch and the caterpillars (silkworms) are fed fresh mulberry leaves. After about 35 days and 4 moltings, the caterpillars are 10,000 times heavier than when hatched and are ready to begin spinning a cocoon. A straw frame is placed over the tray of caterpillars, and each caterpillar begins spinning a cocoon by moving its head in a pattern. Two glands produce liquid silk and force it through openings in the head called spinnerets. Liquid silk is coated in sericin, a water-soluble protective gum, and solidifies on contact with the air. Within 2–3 days, the caterpillar spins about of filament and is completely encased in a cocoon. The silk farmers then heat the cocoons to kill them, leaving some to metamorphose into moths to breed the next generation of caterpillars. Harvested cocoons are then soaked in boiling water to soften the sericin holding the silk fibers together in a cocoon shape. The fibers are then unwound to produce a continuous thread. Since a single thread is too fine and fragile for commercial use, anywhere from three to ten strands are spun together to form a single thread of silk.
Animal rights
As the process of harvesting the silk from the cocoon kills the larvae by boiling, sericulture has been criticized by animal welfare activists, including People for the Ethical Treatment of Animals (PETA), who urge people not to buy silk items.
Mahatma Gandhi was critical of silk production because of his Ahimsa (non-violent) philosophy, which led to the promotion of cotton and Ahimsa silk, a type of wild silk made from the cocoons of wild and semi-wild silk moths.
| Technology | Fabrics and fibers | null |
51513 | https://en.wikipedia.org/wiki/Arrow | Arrow | An arrow is a fin-stabilized projectile launched by a bow. A typical arrow usually consists of a long, stiff, straight shaft with a weighty (and usually sharp and pointed) arrowhead attached to the front end, multiple fin-like stabilizers called fletchings mounted near the rear, and a slot at the rear end called a nock for engaging the bowstring. A container or bag carrying additional arrows for convenient reloading is called a quiver.
The use of bows and arrows by humans predates recorded history and is common to most cultures. A craftsman who makes arrows is a fletcher, and one who makes arrowheads is an arrowsmith.
History
The oldest evidence of likely arrowheads, dating to c. 64,000 years ago, was found in Sibudu Cave, in present-day South Africa. Likely arrowheads made from animal bones, discovered in the Fa Hien Cave in Sri Lanka and dating to c. 48,000 years ago, are the oldest evidence for the use of arrows outside Africa. The oldest evidence of the use of bows to shoot arrows dates to about 10,000 years ago; it is based on pinewood arrows found in the Ahrensburg valley north of Hamburg. They had shallow grooves on the base, indicating that they were shot from a bow. The oldest bow so far recovered is about 8,000 years old, found in the Holmegård swamp in Denmark. Archery seems to have arrived in the Americas with the Arctic small tool tradition, about 4,500 years ago.
Size
Arrow sizes vary greatly across cultures, ranging from eighteen inches to five feet (45 cm to 152 cm). However, most modern arrows are to in length. Arrows recovered from the Mary Rose, an English warship that sank in 1545 whose remains were raised in 1982, were mostly long. Very short arrows have been used, shot through a guide attached either to the bow (an "overdraw") or to the archer's wrist (the Turkish "siper"). These may fly farther than heavier arrows, and an enemy without suitable equipment may find himself unable to return them.
Components
Shaft
The shaft is the primary structural element of the arrow, to which the other components are attached. Traditional arrow shafts are made from strong, lightweight wood, bamboo, or reeds, while modern shafts may be made from aluminium, carbon-fibre-reinforced plastic, or a combination of materials; such combination shafts are typically an aluminium core wrapped with a carbon-fibre outer layer. A traditional premium material is Port Orford Cedar.
Spine
The stiffness of the shaft is known as its spine, referring to how little the shaft bends when compressed, hence an arrow which bends less is said to have more spine. In order to strike consistently, a group of arrows must be similarly spined. "Center-shot" bows, in which the arrow passes through the central vertical axis of the bow riser, may obtain consistent results from arrows with a wide range of spines. However, most traditional bows are not center-shot and the arrow has to deflect around the handle in the archer's paradox; such bows tend to give most consistent results with a narrower range of arrow spine that allows the arrow to deflect correctly around the bow. Bows with higher draw weight will generally require stiffer arrows, with more spine (less flexibility) to give the correct amount of flex when shot.
GPI rating
The weight of an arrow shaft can be expressed in GPI (grains per inch). The length of a shaft in inches multiplied by its GPI rating gives the weight of the shaft in grains. For example, a shaft that is long and has a GPI of 9.5 weighs . This does not include the other elements of a finished arrow, so a complete arrow will be heavier than the shaft alone.
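As an illustrative aside, this relationship can be expressed in a few lines of Python. This is a minimal sketch: the 9.5 GPI figure comes from the example above, but the 30-inch shaft length is a hypothetical value, since actual shaft specifications vary by manufacturer.

    # Bare-shaft weight from the GPI rating described above.
    # The 30-inch length is a hypothetical example value.

    def shaft_weight_grains(length_inches: float, gpi: float) -> float:
        """Weight of a bare shaft in grains: length (inches) times GPI."""
        return length_inches * gpi

    weight = shaft_weight_grains(30.0, 9.5)
    print(f"Bare shaft: {weight} grains")  # 285.0 grains

A complete arrow adds the weight of the point, nock, fletchings, and any insert to this figure.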
Footed arrows
Sometimes a shaft will be made of two different types of wood fastened together, resulting in what is known as a footed arrow. Known by some as the finest of wood arrows, footed arrows were used both by early Europeans and Native Americans. Footed arrows will typically consist of a short length of hardwood near the head of the arrow, with the remainder of the shaft consisting of softwood. By reinforcing the area most likely to break, the arrow is more likely to survive impact, while maintaining overall flexibility and lighter weight.
Barreled arrow shafts
A barreled arrow shaft is one that tapers in diameter bi-directionally. This allows for an arrow that has an optimum weight yet retains enough strength to resist flex.
Barreled arrow shafts are considered the zenith of pre-industrial archery technology, reaching their peak design among the Ottomans.
Arrowhead
The arrowhead or projectile point is the primary functional part of the arrow, and plays the largest role in determining its purpose. Some arrows may simply use a sharpened tip of the solid shaft, but it is far more common for separate arrowheads to be made, usually from metal, horn, or some other hard material. Arrowheads are usually separated by function:
Bodkin points are short, rigid points with a small cross-section. They were made of unhardened iron and may have been used for better or longer flight, or for cheaper production. It has been mistakenly suggested that the bodkin came into its own as a means of penetrating armour, but research has found no hardened bodkin points, so it is likely that it was first designed either to extend range or as a cheaper and simpler alternative to the broadhead. In a modern test, a direct hit from a hard steel bodkin point penetrated Damascus chain armour. However, archery was not effective against plate armour, which became available to knights of fairly modest means by the late 14th century.
Blunts are unsharpened arrowheads occasionally used for types of target shooting, for shooting at stumps or other targets of opportunity, or hunting small game when the goal is to concuss the target without penetration. Blunts are commonly made of metal or hard rubber. They may stun, and occasionally, the arrow shaft may penetrate the head and the target; safety is still important with blunt arrows.
Judo points have spring wires extending sideways from the tip. These catch on grass and debris to prevent the arrow from being lost in the vegetation. Used for practice and for small game.
Broadheads were used for war and are still used for hunting. Medieval broadheads could be made from steel, sometimes with hardened edges. They usually have two to four sharp blades that cause massive bleeding in the victim. Their function is to deliver a wide cutting edge so as to kill as quickly as possible by cleanly cutting major blood vessels, and cause further trauma on removal. They are expensive, damage most targets, and are usually not used for practice. Broadheads are illegal in the UK.
There are two main types of broadheads used by hunters: the fixed-blade and the mechanical types. While the fixed-blade broadhead keeps its blades rigid and unmovable on the broadhead at all times, the mechanical broadhead deploys its blades upon contact with the target, its blades swinging out to wound the target. The mechanical head flies better because it is more streamlined, but has less penetration as it uses some of the kinetic energy in the arrow to deploy its blades. Preferences for fixed or mechanical broadheads vary widely in the hunting community, typically varying by species. In general, both types are widely used on game up to the size of white-tailed deer; as game size increases, fixed blades become more common. Broadheads used for hunting dangerous game such as African buffalo are almost always fixed-blade.
Field tips or field points are similar to target points but have a distinct shoulder, so that missed outdoor shots do not become stuck in obstacles such as tree stumps. They are also used for shooting practice by hunters, offering flight characteristics and weights similar to broadheads without lodging in target materials or causing excessive damage upon removal.
Target points are bullet-shaped with a conical point, designed to penetrate target butts easily without causing excessive damage to them.
Safety arrows are designed to be used in various forms of reenactment combat, to reduce the risk when shot at people. These arrows may have heads that are very wide or padded, such as the large foam ball tip used in archery tag. In combination with bows of restricted draw weight and draw length, these heads may reduce to acceptable levels the risks of shooting arrows at suitably armoured people. The parameters will vary depending on the specific rules being used and on the levels of risk felt acceptable to the participants. For instance, SCA combat rules require a padded head at least in diameter, with bows not exceeding and of draw for use against well-armoured individuals.
Arrowheads may be attached to the shaft with a cap, a socketed tang, or inserted into a split in the shaft and held by a process called hafting. Points attached with caps are simply slid snugly over the end of the shaft, or may be held on with hot glue. Split-shaft construction involves splitting the arrow shaft lengthwise, inserting the arrowhead, and securing it using a ferrule, sinew, or wire.
Fletchings
Fletchings are found at the back of the arrow and act as airfoils providing a small amount of force that stabilizes the arrow's flight. They are designed to keep the arrow pointed in the direction of travel by strongly damping any tendency to pitch or yaw. Some cultures, for example most of those in New Guinea, did not use fletching on their arrows. Arrows without fletching (called bare shafts) are also used for training purposes, because they make certain errors by the archer more visible.
Fletchings are traditionally made from feathers (often from a goose or turkey) bound to the arrow's shaft, but are now often made of plastic (known as "vanes"). Historically, some arrows used for the proofing of armour were fitted with copper vanes. Flight archers may use razor blades for fletching, in order to reduce air resistance. With conventional three-feather fletching, one feather, called the "cock" feather, is at a right angle to the nock, and is normally nocked so that it will not contact the bow when the arrow is shot. Four-feather fletching is usually symmetrical and there is no preferred orientation for the nock; this makes nocking the arrow slightly easier.
Natural feathers are usually prepared by splitting and sanding the quill before gluing. Further, the feather may be trimmed to shape, die-cut, or burned to shape by an electrically heated wire. It is crucial that all the feathers of an arrow have the same drag, so manual trimming is rarely used by modern fletchers. The burning-wire method is popular because different shapes are possible by bending the wire, and the fletching can be symmetrically trimmed after gluing by rotating the arrow on a fixture.
Some fletchings are dyed. Two-toned fletchings are usually made by knitting two feathers together to form each fletching. The front fletching is often camouflaged and the rear fletching bright, so that the archer can easily track the arrow.
Artisans who make arrows by hand are known as "fletchers", a word related to the French word for arrow, flèche. This is the same derivation as the verb "fletch", meaning to provide an arrow with its feathers. Glue and thread are the traditional methods of attaching fletchings. A "fletching jig" is often used in modern times, to hold the fletchings in exactly the right orientation on the shaft while the glue hardens.
Whenever natural fletching is used, the feathers on any one arrow must come from the same wing of the bird, the most common being the right-wing flight feathers of turkeys. The slight cupping of natural feathers requires them to be fletched with a right-twist for right-wing feathers and a left-twist for left-wing feathers. This rotation, through a combination of gyroscopic stabilization and increased drag on the rear of the arrow, helps the arrow to fly straight. Artificial helical fletchings have the same effect. Most arrows will have three fletches, but some have four or even more. Fletchings generally range from in length; flight arrows intended to travel the maximum possible distance typically have very low fletching, while hunting arrows with broadheads require long and high fletching to stabilize them against the aerodynamic effect of the head. Fletchings may also be cut in different ways, the two most common being parabolic (i.e. a smooth curved shape) and shield (i.e. shaped as one-half of a very narrow shield) cut.
In modern archery with screw-in points, right-hand rotation is generally preferred as it makes the points self-tighten. In traditional archery, some archers prefer a left rotation because it gets the hard (and sharp) quill of the feather farther away from the arrow-shelf and the shooter's hand.
A flu-flu is a type of fletching normally made by using long sections of full length feathers taken from a turkey; in most cases, six or more sections are used rather than the traditional three. Alternatively two long feathers can be spiraled around the end of the arrow shaft. The extra fletching generates more drag and slows the arrow down rapidly after a short distance of about or so. Flu-flu arrows are often used for hunting birds, or for children's archery, and can also be used to play flu-flu golf.
Wraps
Wraps are thin pre-cut sheets of material, often vinyl or plastic, used to wrap the nock end of an arrow, primarily as an aid in bonding vanes and feather fletchings to the shaft. Wraps can also make the eventual removal of vanes and vane-glue easier. Additionally, they add a decorative aspect to arrow building, which can provide archers an opportunity to personalize their arrows. Brightly colored wraps can also make arrows much easier to find in the brush, and to see in downrange targets.
Nocks
In English it is common to say "nock an arrow" when one readies a shot. A nock is a notch in the rearmost end of an arrow. It helps keep the arrow correctly rotated, keeps the arrow from slipping sideways during the draw or after the release, and helps maximize the arrow's energy (i.e. its range and lethality) by helping an archer place the arrow at the fastest-moving place on the bowstring. Some archers mark the nock position with beads, knots or wrappings of thread. Most compound bow shooters use a D-loop, a length of string material (or sometimes a metal bracket) attached to the string above and below the nocking point. A release aid is typically attached to the D-loop in preparation for a shot.
The main purpose of a nock is to control the rotation of the arrow. Arrows bend when released, and if the bend hits the bowstave, the arrow's aim will be thrown off. Wooden arrows have a preferred bending plane, usually determined by the grain of the wood; synthetic arrows have a designed bending plane determined by their structure. The nock's slot should be rotated to an angle chosen so that when the arrow bends, it avoids or slides along the bowstave. Almost always this means that the slot of the nock must be perpendicular to the wood's grain, viewed from behind.
Self nocks are slots cut in the back of the arrow. These are simple, but can break at the base of the slot. Self nocks are often reinforced with glued servings of fiber near the base of the slot. The sturdiest nocks are separate pieces made from wood, plastic, or horn that are then attached to the end of the arrow.
Modern nocks, and traditional Turkish nocks, are often constructed so as to curve around the string or even pinch it slightly, so that the arrow is unlikely to slip off.
Ancient Arab archery sometimes used "nockless arrows". When shooting at enemies, Arab archers saw their own arrows picked up and shot back at them. So they developed bowstrings with a small ring tied where the nock would normally sit. The rear end of the arrow was sharpened to a point, rather than slit for a nock, and slipped into the ring. The arrow could be drawn and released as usual, but the enemy could collect the arrows yet not shoot them back with a conventional bow. Also, since there was no nock, the nock could not break, and the arrow was less expensive. A piece of battle advice was to have several rings tied to the bowstring in case one broke. A practical disadvantage compared with a nock was preserving the optimal rotation of the arrow, so that when it flexed it did not hit the bowstave; the bend direction of the arrow might have been indicated by its fletching.
"Some arrow materials like hollow cane/bamboo/reed shafting lend themselves to nock inserts. Softer woods like pine or cedar also required some sort of reinforcement of hardwood, bone or horn which kept the string from splitting their shaft upon release. Hardwood such as oak and ash did not need additional reinforcement. To reinforce a nock, most often a slit was cut into the end of the shaft, and a sliver of harder material, the same width as the shaft, was glued into the slot. The arrow was then rotated 90 degrees, and a shallower slot was cut for the string. When made in this manner, the string actually pushed the wood or bone insert rather than the soft wood itself, preventing the shaft from splitting. Another method of preventing nocks from splitting was to bind the arrow between the nock and the back of the fletch with sinew and hide glue or a rough cord such as silk attached with adhesive, whether it be fish glue or birch tar."
Finishes and cresting
Arrows are usually finished so that they are not softened by rain, fog or condensation. Traditional finishes are varnishes or lacquers. Arrows sometimes need to be repaired, so it is important that the paints be compatible with the glues used to attach arrowheads, fletchings, and nocks. For this reason, arrows are rarely protected by waxing.
Crests are rings or bands of paint, often brightly colored, applied to arrows on a lathe-like tool called a cresting machine, usually for the purpose of personalization. Like wraps, cresting may also be done to make arrows easier to see.
Symbolism
An arrow symbol (→) is a simple graphical or typographical representation of an arrow, consisting of a triangle or chevron at the end of a straight line. It is used to indicate a direction, such as on signs and as road surface markings.
Arrows are often used as a symbol by aromantic people, because the word "arrow" sounds like "aro", the shortened term aromantic people use to refer to themselves.
Ancient Indian astronomers often associated the interdependent trigonometric components with the picture of a bow and arrow, the "arrow" (utkrama-jyā) being equivalent to the present-day versine.
| Technology | Archery | null |
51518 | https://en.wikipedia.org/wiki/Dam | Dam | A dam is a barrier that stops or restricts the flow of surface water or underground streams. Reservoirs created by dams not only suppress floods but also provide water for activities such as irrigation, human consumption, industrial use, aquaculture, and navigability. Hydropower is often used in conjunction with dams to generate electricity. A dam can also be used to collect or store water which can be evenly distributed between locations. Dams generally serve the primary purpose of retaining water, while other structures such as floodgates or levees (also known as dikes) are used to manage or prevent water flow into specific land regions.
The word dam can be traced back to Middle English, and before that, from Middle Dutch, as seen in the names of many old cities, such as Amsterdam and Rotterdam.
Ancient dams were built in Mesopotamia and the Middle East for water control. The earliest known dam is the Jawa Dam in Jordan, dating to 3,000 BC. Egyptians also built dams, such as Sadd-el-Kafara Dam for flood control. In modern-day India, Dholavira had an intricate water-management system with 16 reservoirs and dams. The Great Dam of Marib in Yemen, built between 1750 and 1700 BC, was an engineering wonder, and Eflatun Pinar, a Hittite dam and spring temple in Turkey, dates to the 15th and 13th centuries BC. The Kallanai Dam in South India, built in the 2nd century AD, is one of the oldest water regulating structures still in use.
Roman engineers built dams with advanced techniques and materials, such as hydraulic mortar and Roman concrete, which allowed for larger structures. They introduced reservoir dams, arch-gravity dams, arch dams, buttress dams, and multiple arch buttress dams. In Iran, bridge dams were used for hydropower and water-raising mechanisms.
During the Middle Ages, dams were built in the Netherlands to regulate water levels and prevent sea intrusion. In the 19th century, large-scale arch dams were constructed around the British Empire, marking advances in dam engineering techniques. The era of large dams began with the construction of the Aswan Low Dam in Egypt in 1902. The Hoover Dam, a massive concrete arch-gravity dam, was built between 1931 and 1936 on the Colorado River. By 1997, there were an estimated 800,000 dams worldwide, with some 40,000 of them over 15 meters high.
History
Ancient dams
Early dam building took place in Mesopotamia and the Middle East. Dams were used to control water levels, for Mesopotamia's weather affected the Tigris and Euphrates Rivers.
The earliest known dam is the Jawa Dam in Jordan, northeast of the capital Amman. This gravity dam featured a stone wall supported by an earthen rampart. The structure is dated to 3000 BC. However, the oldest continuously operational dam is the Lake Homs Dam, built in Syria between 1319 and 1304 BC.
The Ancient Egyptian Sadd-el-Kafara Dam at Wadi Al-Garawi, about south of Cairo, was long at its base and wide. The structure was built around 2800 or 2600 BC as a diversion dam for flood control, but was destroyed by heavy rain during construction or shortly afterwards. During the Twelfth Dynasty in the 19th century BC, the Pharaohs Senosert III, Amenemhat III, and Amenemhat IV dug a canal long linking the Fayum Depression to the Nile in Middle Egypt. Two dams called Ha-Uar running east–west were built to retain water during the annual flood and then release it to surrounding lands. The lake called Mer-wer or Lake Moeris covered and is known today as Birket Qarun.
By the mid-late third millennium BC, an intricate water-management system in Dholavira in modern-day India was built. The system included 16 reservoirs, dams and various channels for collecting water and storing it.
One of the engineering wonders of the ancient world was the Great Dam of Marib in Yemen. Initiated sometime between 1750 and 1700 BC, it was made of packed earth – triangular in cross-section, in length and originally high – running between two groups of rocks on either side, to which it was linked by substantial stonework. Repairs were carried out during various periods, most importantly around 750 BC, and 250 years later the dam height was increased to . After the end of the Kingdom of Saba, the dam fell under the control of the Ḥimyarites (c. 115 BC) who undertook further improvements, creating a structure high, with five spillways, two masonry-reinforced sluices, a settling pond, and a canal to a distribution tank. These works were not finished until 325 AD when the dam permitted the irrigation of .
Eflatun Pınar is a Hittite dam and spring temple near Konya, Turkey. It is thought to date from the Hittite empire between the 15th and 13th centuries BC.
The Kallanai is constructed of unhewn stone, over long, high and wide, across the main stream of the Kaveri River in Tamil Nadu, South India. The basic structure dates to the 2nd century AD and is considered one of the oldest water diversion or water regulating structures still in use. The purpose of the dam was to divert the waters of the Kaveri across the fertile delta region for irrigation via canals.
Du Jiang Yan is the oldest surviving irrigation system in China to include a dam that directed waterflow; it was finished in 251 BC. A large earthen dam, built by Sunshu Ao, prime minister of the state of Chu, flooded a valley in modern-day northern Anhui Province, creating an enormous irrigation reservoir ( in circumference) that is still present today.
Roman engineering
Roman dam construction was characterized by "the Romans' ability to plan and organize engineering construction on a grand scale." Roman planners introduced the then-novel concept of large reservoir dams which could secure a permanent water supply for urban settlements over the dry season. Their pioneering use of water-proof hydraulic mortar and particularly Roman concrete allowed for much larger dam structures than previously built, such as the Lake Homs Dam, possibly the largest water barrier to that date, and the Harbaqa Dam, both in Roman Syria. The highest Roman dam was the Subiaco Dam near Rome; its record height of remained unsurpassed until its accidental destruction in 1305.
Roman engineers made routine use of ancient standard designs like embankment dams and masonry gravity dams. Apart from that, they displayed a high degree of inventiveness, introducing most of the other basic dam designs which had been unknown until then. These include arch-gravity dams, arch dams, buttress dams and multiple arch buttress dams, all of which were known and employed by the 2nd century AD (see List of Roman dams). Roman workforces also were the first to build dam bridges, such as the Bridge of Valerian in Iran.
In Iran, bridge dams such as the Band-e Kaisar were used to provide hydropower through water wheels, which often powered water-raising mechanisms. One of the first was the Roman-built dam bridge in Dezful, which could raise water 50 cubits (c. 23 m) to supply the town. Diversion dams were also known. Muslim engineers introduced milling dams, which they called the Pul-i-Bulaiti. The first was built at Shustar on the River Karun, Iran, and many more were later built in other parts of the Islamic world. Water was conducted from the back of the dam through a large pipe to drive a water wheel and watermill. In the 10th century, Al-Muqaddasi described several dams in Persia. He reported that one in Ahwaz was more than long, and that it had many water-wheels raising the water into aqueducts through which it flowed into reservoirs of the city. Another, the Band-i-Amir Dam, provided irrigation for 300 villages.
Middle Ages
Shāh Abbās Arch (Persian: طاق شاه عباس), also known as the Kurit Dam, is the thinnest arch dam in the world and one of the oldest arch dams in Asia. It was constructed some 700 years ago in Tabas county, South Khorasan Province, Iran. It stands 60 meters tall but is only one meter wide at the crest. Some historians believe the dam was built by Shāh Abbās I, whereas others believe that he merely repaired it.
In the Netherlands, a low-lying country, dams were often built to block rivers to regulate the water level and to prevent the sea from entering the marshlands. Such dams often marked the beginning of a town or city because it was easy to cross the river at such a place, and often influenced Dutch place names. The present Dutch capital, Amsterdam (old name Amstelredam), started with a dam on the river Amstel in the late 12th century, and Rotterdam began with a dam on the river Rotte, a minor tributary of the Nieuwe Maas. The central square of Amsterdam, covering the original site of the 800-year-old dam, still carries the name Dam Square.
Industrial revolution
The Romans were the first to build arch dams, in which the reaction forces from the abutments stabilize the structure against the external hydrostatic pressure, but it was only in the 19th century that the engineering skills and construction materials available were capable of building the first large-scale arch dams.
Three pioneering arch dams were built around the British Empire in the early 19th century. Henry Russel of the Royal Engineers oversaw the construction of the Mir Alam dam in 1804 to supply water to the city of Hyderabad (it is still in use today). It had a height of and consisted of 21 arches of variable span.
In the 1820s and 30s, Lieutenant-Colonel John By supervised the construction of the Rideau Canal in Canada near modern-day Ottawa and built a series of curved masonry dams as part of the waterway system. In particular, the Jones Falls Dam, built by John Redpath, was completed in 1832 as the largest dam in North America and an engineering marvel. In order to keep the water in control during construction, two sluices, artificial channels for conducting water, were kept open in the dam. The first was near the base of the dam on its east side. A second sluice was put in on the west side of the dam, about above the base. To make the switch from the lower to upper sluice, the outlet of Sand Lake was blocked off.
Hunts Creek near the city of Parramatta, Australia, was dammed in the 1850s, to cater to the demand for water from the growing population of the city. The masonry arch dam wall was designed by Lieutenant Percy Simpson who was influenced by the advances in dam engineering techniques made by the Royal Engineers in India. The dam cost £17,000 and was completed in 1856 as the first engineered dam built in Australia, and the second arch dam in the world built to mathematical specifications.
The first such dam was opened two years earlier in France. It was the first French arch dam of the industrial era, and it was built by François Zola in the municipality of Aix-en-Provence to improve the supply of water after the 1832 cholera outbreak devastated the area. After royal approval was granted in 1844, the dam was constructed over the following decade. Its construction was carried out on the basis of the mathematical results of scientific stress analysis.
The 75-miles dam near Warwick, Australia, was possibly the world's first concrete arch dam. Designed by Henry Charles Stanley in 1880 with an overflow spillway and a special water outlet, it was eventually heightened to .
In the latter half of the nineteenth century, significant advances in the scientific theory of masonry dam design were made. This transformed dam design from an art based on empirical methodology to a profession based on a rigorously applied scientific theoretical framework. This new emphasis was centered around the engineering faculties of universities in France and in the United Kingdom. William John Macquorn Rankine at the University of Glasgow pioneered the theoretical understanding of dam structures in his 1857 paper On the Stability of Loose Earth. Rankine theory provided a good understanding of the principles behind dam design. In France, J. Augustin Tortene de Sazilly explained the mechanics of vertically faced masonry gravity dams, and Zola's dam was the first to be built on the basis of these principles.
Modern era
The era of large dams was initiated with the construction of the Aswan Low Dam in Egypt in 1902, a gravity masonry buttress dam on the Nile River. Following their 1882 invasion and occupation of Egypt, the British began construction in 1898. The project was designed by Sir William Willcocks and involved several eminent engineers of the time, including Sir Benjamin Baker and Sir John Aird, whose firm, John Aird & Co., was the main contractor. Capital and financing were furnished by Ernest Cassel. When initially constructed between 1899 and 1902, nothing of its scale had ever before been attempted; on completion, it was the largest masonry dam in the world.
The Hoover Dam is a massive concrete arch-gravity dam, constructed in the Black Canyon of the Colorado River, on the border between the US states of Arizona and Nevada between 1931 and 1936 during the Great Depression. In 1928, Congress authorized the project to build a dam that would control floods, provide irrigation water and produce hydroelectric power. The winning bid to build the dam was submitted by a consortium called Six Companies, Inc. Such a large concrete structure had never been built before, and some of the techniques were unproven. The torrid summer weather and the lack of facilities near the site also presented difficulties. Nevertheless, Six Companies turned over the dam to the federal government on 1 March 1936, more than two years ahead of schedule.
By 1997, there were an estimated 800,000 dams worldwide, some 40,000 of them over high. In 2014, scholars from the University of Oxford published a study of the cost of large dams – based on the largest existing dataset – documenting significant cost overruns for a majority of dams and questioning whether benefits typically offset costs for such dams.
Types
Dams can be formed by human agency, natural causes, or even by the intervention of wildlife such as beavers. Man-made dams are typically classified according to their size (height), intended purpose or structure.
By structure
Based on the structure and materials used, dams are classified as arch-gravity dams, embankment dams, or masonry dams, with several subtypes.
Arch dams
In the arch dam, stability is obtained by a combination of arch and gravity action. If the upstream face is vertical the entire weight of the dam must be carried to the foundation by gravity, while the distribution of the normal hydrostatic pressure between vertical cantilever and arch action will depend upon the stiffness of the dam in a vertical and horizontal direction. When the upstream face is sloped the distribution is more complicated. The normal component of the weight of the arch ring may be taken by the arch action, while the normal hydrostatic pressure will be distributed as described above. For this type of dam, firm reliable supports at the abutments (either buttress or canyon side wall) are more important. The most desirable place for an arch dam is a narrow canyon with steep side walls composed of sound rock. The safety of an arch dam is dependent on the strength of the side wall abutments, hence not only should the arch be well seated on the side walls but also the character of the rock should be carefully inspected.
Two types of single-arch dams are in use, namely the constant-angle and the constant-radius dam. The constant-radius type employs the same face radius at all elevations of the dam, which means that as the channel grows narrower towards the bottom of the dam the central angle subtended by the face of the dam becomes smaller. Jones Falls Dam, in Canada, is a constant radius dam. In a constant-angle dam, also known as a variable radius dam, this subtended angle is kept constant and the variation in distance between the abutments at various levels is taken care of by varying the radii. Constant-radius dams are much less common than constant-angle dams. Parker Dam on the Colorado River is a constant-angle arch dam.
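The geometric trade-off between the two designs can be made concrete with the chord relation for a circular arch: a canyon of width w spanned at radius r subtends a central angle θ satisfying w = 2r·sin(θ/2). The following Python sketch uses a hypothetical radius, design angle, and set of canyon widths purely for illustration.

    import math

    # For a circular arch spanning a canyon of width w (the chord),
    # radius r and central angle theta are linked by w = 2*r*sin(theta/2).

    def central_angle_deg(width: float, radius: float) -> float:
        """Angle subtended by a constant-radius arch at a given canyon width."""
        return math.degrees(2 * math.asin(width / (2 * radius)))

    def radius_for_angle(width: float, angle_deg: float) -> float:
        """Radius a constant-angle arch needs at a given canyon width."""
        return width / (2 * math.sin(math.radians(angle_deg) / 2))

    for w in (180.0, 120.0, 60.0):  # canyon narrowing toward the base
        print(f"width {w:5.1f} m: constant-radius (r=100 m) angle = "
              f"{central_angle_deg(w, 100.0):5.1f} deg; "
              f"constant-angle (110 deg) radius = {radius_for_angle(w, 110.0):6.1f} m")

As the output shows, holding the radius fixed shrinks the subtended angle as the canyon narrows, while holding the angle fixed requires the radius to shrink instead, which is exactly the distinction drawn above.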
A similar type is the double-curvature or thin-shell dam. Wildhorse Dam near Mountain City, Nevada, in the United States is an example of the type. This method of construction minimizes the amount of concrete necessary for construction but transmits large loads to the foundation and abutments. The appearance is similar to a single-arch dam but with a distinct vertical curvature to it as well lending it the vague appearance of a concave lens as viewed from downstream.
The multiple-arch dam consists of a number of single-arch dams with concrete buttresses as the supporting abutments, as for example the Daniel-Johnson Dam, Québec, Canada. The multiple-arch dam does not require as many buttresses as the hollow gravity type but requires a good rock foundation because the buttress loads are heavy.
Gravity dams
In a gravity dam, the force that holds the dam in place against the push from the water is Earth's gravity pulling down on the mass of the dam. The water presses laterally (downstream) on the dam, tending to overturn the dam by rotating about its toe (a point at the bottom downstream side of the dam). The dam's weight counteracts that force, tending to rotate the dam the other way about its toe. The designer ensures that the dam is heavy enough that the dam's weight wins that contest. In engineering terms, that is true whenever the resultant of the forces of gravity acting on the dam and water pressure on the dam acts in a line that passes upstream of the toe of the dam. The designer tries to shape the dam so if one were to consider the part of the dam above any particular height to be a whole dam itself, that dam also would be held in place by gravity, i.e., there is no tension in the upstream face of the dam holding the top of the dam down. The designer does this because it is usually more practical to make a dam of material essentially just piled up than to make the material stick together against vertical tension. The shape that prevents tension in the upstream face also eliminates a balancing compression stress in the downstream face, providing additional economy.
For this type of dam, it is essential to have an impervious foundation with high bearing strength. Permeable foundations have a greater likelihood of generating uplift pressures under the dam. Uplift pressures are hydrostatic pressures caused by the water pressure of the reservoir pushing up against the bottom of the dam. If large enough uplift pressures are generated there is a risk of destabilizing the concrete gravity dam.
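A minimal numerical sketch of this stability argument follows, assuming an idealized rectangular cross-section analysed per metre of dam length; all dimensions and densities below are hypothetical illustration values, and real gravity dams are designed with far more detailed analyses.

    # Per-metre overturning check about the downstream toe, combining the
    # moment balance and the uplift pressure discussed above.

    RHO_WATER = 1000.0  # kg/m^3
    G = 9.81            # m/s^2

    def overturning_safety(height_m: float, base_m: float,
                           water_depth_m: float,
                           rho_concrete: float = 2400.0) -> float:
        """Ratio of resisting moment to overturning moment (>1 is stable)."""
        weight = rho_concrete * G * base_m * height_m           # N per metre of dam
        resisting = weight * base_m / 2                         # weight acts at the centroid
        thrust = 0.5 * RHO_WATER * G * water_depth_m ** 2       # lateral hydrostatic force
        overturning = thrust * water_depth_m / 3                # resultant acts h/3 above base
        uplift = 0.5 * RHO_WATER * G * water_depth_m * base_m   # triangular uplift distribution
        overturning += uplift * (2 * base_m / 3)                # uplift centroid lies nearer the heel
        return resisting / overturning

    print(f"Safety factor: {overturning_safety(30.0, 20.0, 28.0):.2f}")  # about 1.95

Note how the uplift term materially reduces the safety factor, which is why an impervious foundation matters so much for this type of dam.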
On a suitable site, a gravity dam can prove to be a better alternative to other types of dams. When built on a solid foundation, the gravity dam probably represents the best-developed example of dam building. Since the fear of flood is a strong motivator in many regions, gravity dams are built in some instances where an arch dam would have been more economical.
Gravity dams are classified as "solid" or "hollow" and are generally made of either concrete or masonry. The solid form is the more widely used of the two, though the hollow dam is frequently more economical to construct. Grand Coulee Dam is a solid gravity dam and Braddock Locks & Dam is a hollow gravity dam.
Arch-gravity dams
A gravity dam can be combined with an arch dam into an arch-gravity dam for areas with massive amounts of water flow but less material available for a pure gravity dam. The inward compression of the dam by the water reduces the lateral (horizontal) force acting on the dam. Thus, the gravitational force required by the dam is lessened, i.e., the dam does not need to be so massive. This enables thinner dams and saves resources.
Barrages
A barrage dam is a special kind of dam that consists of a line of large gates that can be opened or closed to control the amount of water passing the dam. The gates are set between flanking piers which are responsible for supporting the water load, and are often used to control and stabilize water flow for irrigation systems. An example of this type of dam is the now-decommissioned Red Bluff Diversion Dam on the Sacramento River near Red Bluff, California.
Barrages that are built at the mouths of rivers or lagoons to prevent tidal incursions or use the tidal flow for tidal power are known as tidal barrages.
Embankment dams
Embankment dams are made of compacted earth, and are of two main types: rock-fill and earth-fill. Like concrete gravity dams, embankment dams rely on their weight to hold back the force of water.
Fixed-crest dams
A fixed-crest dam is a concrete barrier across a river. Fixed-crest dams are designed to maintain depth in the channel for navigation. They pose risks to boaters who may travel over them, as they are hard to spot from the water and create induced currents that are difficult to escape.
By size
There is variability, both worldwide and within individual countries, such as in the United States, in how dams of different sizes are categorized. Dam size influences construction, repair, and removal costs and affects the dams' potential range and magnitude of environmental disturbances.
Large dams
The International Commission on Large Dams (ICOLD) defines a "large dam" as "A dam with a height of or greater from lowest foundation to crest or a dam between metres and 15 metres impounding more than ". "Major dams" are over in height. The Report of the World Commission on Dams also includes in the "large" category dams which are between high with a reservoir capacity of more than . Hydropower dams can be classified as either "high-head" (greater than 30 m in height) or "low-head" (less than 30 m in height).
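Only two thresholds survive in this text with explicit figures: the 15-metre height mentioned in the ICOLD definition and the 30-metre split between high-head and low-head hydropower dams. The sketch below encodes just those two; the capacity-based criterion for lower dams is deliberately omitted because its figures are not given here.

    # Size classification using only the thresholds stated in the text.

    def classify_dam(height_m: float) -> dict:
        return {
            "large": height_m >= 15.0,  # ICOLD height criterion, lowest foundation to crest
            "head_class": "high-head" if height_m > 30.0 else "low-head",
        }

    print(classify_dam(45.0))  # {'large': True, 'head_class': 'high-head'}
    print(classify_dam(12.0))  # {'large': False, 'head_class': 'low-head'}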
ICOLD's World Register of Dams contains 58,700 large dam records. The tallest dam in the world is the Jinping-I Dam in China.
Small dams
As with large dams, small dams have multiple uses, such as, but not limited to, hydropower production, flood protection, and water storage. Small dams can be particularly useful on farms to capture runoff for later use, for example during the dry season. Small-scale dams also have the potential to generate benefits without displacing people, and small, decentralised hydroelectric dams can aid rural development in developing countries. In the United States alone, there are approximately 2,000,000 or more "small" dams that are not included in the Army Corps of Engineers National Inventory of Dams. Records of small dams are kept by state regulatory agencies, and therefore information about small dams is dispersed and uneven in geographic coverage.
Countries worldwide consider small hydropower plants (SHPs) important for their energy strategies, and there has been a notable increase in interest in SHPs. Couto and Olden (2018) conducted a global study and found 82,891 small hydropower plants (SHPs) operating or under construction. Technical definitions of SHPs, such as their maximum generation capacity, dam height, reservoir area, etc., vary by country.
Non-jurisdictional dams
A dam is non-jurisdictional when its size (usually "small") excludes it from being subject to certain legal regulations. The technical criteria for categorising a dam as "jurisdictional" or "non-jurisdictional" vary by location. In the United States, each state defines what constitutes a non-jurisdictional dam. The state of Colorado defines a non-jurisdictional dam as one creating a reservoir with a capacity of 100 acre-feet or less, a surface area of 20 acres or less, and a height (as measured under Rules 4.2.5.1 and 4.2.19) of 10 feet or less. In contrast, the state of New Mexico defines a jurisdictional dam as one 25 feet or greater in height that stores more than 15 acre-feet, or one that stores 50 acre-feet or more and is six feet or more in height (section 72-5-32 NMSA), implying that dams not meeting these requirements are non-jurisdictional. Most US dams, 2.41 million of a total of 2.5 million, are not under the jurisdiction of any public agency (i.e., they are non-jurisdictional), nor are they listed on the National Inventory of Dams (NID).
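Because the two state definitions quoted above are explicit numeric tests, they can be restated as simple predicates. The sketch below is only an illustration of the rules as stated; the underlying statutes contain measurement details that are not captured here.

    # Colorado: a dam is NON-jurisdictional only if all three limits are met.
    def colorado_non_jurisdictional(capacity_acre_ft: float,
                                    surface_acres: float,
                                    height_ft: float) -> bool:
        return (capacity_acre_ft <= 100
                and surface_acres <= 20
                and height_ft <= 10)

    # New Mexico: a dam IS jurisdictional if either statutory test is met.
    def new_mexico_jurisdictional(height_ft: float, storage_acre_ft: float) -> bool:
        return ((height_ft >= 25 and storage_acre_ft > 15)
                or (storage_acre_ft >= 50 and height_ft >= 6))

    print(colorado_non_jurisdictional(80, 15, 9))  # True: under all three limits
    print(new_mexico_jurisdictional(10, 60))       # True: meets the second test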
Small dams incur risks similar to large dams. However, the absence of regulation (unlike more regulated large dams) and of an inventory of small dams (i.e., those that are non-jurisdictional) can lead to significant risks for both humans and ecosystems. For example, according to the US National Park Service (NPS), "Non-jurisdictional—means a structure which does not meet the minimum criteria, as listed in the Federal Guidelines for Dam Safety, to be included in dam safety programs. The non-jurisdictional structure does not receive a hazard classification and is not considered for any further requirements or activities under the NPS dam safety program." Small dams can be dangerous individually (i.e., they can fail), but also collectively, as an aggregation of small dams along a river or within a geographic area can multiply risks. Graham's 1999 study of US dam failures resulting in fatalities from 1960 to 1998 concluded that the failure of dams between 6.1 and 15 m high (typical height range of smaller dams) caused 86% of the deaths, and the failure of dams less than 6.1 m high caused 2% of the deaths. Non-jurisdictional dams may pose hazards because their design, construction, maintenance, and surveillance is unregulated. Scholars have noted that more research is needed to better understand the environmental impact of small dams (e.g., their potential to alter the flow, temperature, sediment and plant and animal diversity of a river).
By use
Saddle dam
A saddle dam is an auxiliary dam constructed to confine the reservoir created by a primary dam either to permit a higher water elevation and storage or to limit the extent of a reservoir for increased efficiency. An auxiliary dam is constructed in a low spot or "saddle" through which the reservoir would otherwise escape. On occasion, a reservoir is contained by a similar structure called a dike to prevent inundation of nearby land. Dikes are commonly used for reclamation of arable land from a shallow lake, similar to a levee, which is a wall or embankment built along a river or stream to protect adjacent land from flooding.
Weir
A weir (sometimes called an "overflow dam") is a small dam that is often used in a river channel to create an impoundment lake for water abstraction purposes. It can also be used for flow measurement or retardation.
Check dam
A check dam is a small dam designed to reduce flow velocity and control soil erosion. Conversely, a wing dam is a structure that only partly restricts a waterway, creating a faster channel that resists the accumulation of sediment.
Dry dam
A dry dam, also known as a flood retarding structure, is designed to control flooding. It normally holds back no water and allows the channel to flow freely, except during periods of intense flow that would otherwise cause flooding downstream.
Diversionary dam
A diversionary dam is designed to divert all or a portion of the flow of a river from its natural course. The water may be redirected into a canal or tunnel for irrigation and/or hydroelectric power production.
Underground dam
Underground dams are used to trap groundwater and store all or most of it below the surface for extended use in a localized area. In some cases, they are also built to prevent saltwater from intruding into a freshwater aquifer. Underground dams are typically constructed in areas where water resources are minimal and need to be efficiently stored, such as in deserts and on islands like the Fukuzato Dam in Okinawa, Japan. They are most common in northeastern Africa and the arid areas of Brazil while also being used in the southwestern United States, Mexico, India, Germany, Italy, Greece, France and Japan.
There are two types of underground dam: the "sub-surface" and the "sand-storage" dam. A sub-surface dam is built across an aquifer or drainage route, from an impervious layer (such as solid bedrock) up to just below the surface. They can be constructed from a variety of materials, including bricks, stones, concrete, steel, or PVC. Once built, the water stored behind the dam raises the water table and is then extracted with wells. A sand-storage dam is a weir built in stages across a stream or wadi. It must be strong, as floods will wash over its crest. Over time, sand accumulates in layers behind the dam, which helps store water and, most importantly, prevents evaporation. The stored water can be extracted with a well, through the dam body, or by means of a drain pipe.
Tailings dam
A tailings dam is typically an earth-fill embankment dam used to store tailings, which are produced during mining operations after separating the valuable fraction from the uneconomic fraction of an ore. Conventional water retention dams can serve this purpose, but due to cost, a tailings dam is more viable. Unlike water retention dams, a tailings dam is raised in succession throughout the life of the particular mine. Typically, a base or starter dam is constructed, and as it fills with a mixture of tailings and water, it is raised. Material used to raise the dam can include the tailings (depending on their size) along with soil.
There are three raised tailings dam designs, the "upstream", "downstream", and "centerline", named according to the movement of the crest during raising. The specific design used depends upon topography, geology, climate, the type of tailings, and cost. An upstream tailings dam consists of trapezoidal embankments constructed toe-to-crest on top of one another, moving the crest further upstream. This creates a relatively flat downstream side and a jagged upstream side that is supported by tailings slurry in the impoundment. The downstream design refers to the successive raising of the embankment so that the fill and crest move further downstream. A centerline dam has sequential embankments constructed directly on top of one another, with fill placed on the downstream side for support and slurry supporting the upstream side.
Because tailings dams often store toxic chemicals from the mining process, modern designs incorporate an impervious geomembrane liner to prevent seepage. Water/slurry levels in the tailings pond must be managed for stability and environmental purposes as well.
By material
Steel dams
A steel dam is a type of dam briefly experimented with around the start of the 20th century, which uses steel plating (at an angle) and load-bearing beams as the structure. Intended as permanent structures, steel dams were a failed experiment to determine whether a construction technique could be devised that was cheaper than masonry, concrete or earthworks, but sturdier than timber crib dams.
Timber dams
Timber dams were widely used in the early part of the industrial revolution and in frontier areas due to ease and speed of construction. Rarely built in modern times because of their relatively short lifespan and the limited height to which they can be built, timber dams must be kept constantly wet in order to maintain their water retention properties and limit deterioration by rot, similar to a barrel. The locations where timber dams are most economical to build are those where timber is plentiful, cement is costly or difficult to transport, and either a low head diversion dam is required or longevity is not an issue. Timber dams were once numerous, especially in the North American West, but most have failed, been hidden under earth embankments, or been replaced with entirely new structures. Two common variations of timber dams were the "crib" and the "plank".
Timber crib dams were erected of heavy timbers or dressed logs in the manner of a log house and the interior filled with earth or rubble. The heavy crib structure supported the dam's face and the weight of the water. Splash dams were timber crib dams used to help float logs downstream in the late 19th and early 20th centuries.
"Timber plank dams" were more elegant structures that employed a variety of construction methods using heavy timbers to support a water retaining arrangement of planks.
Other types
Cofferdams
A cofferdam is a barrier, usually temporary, constructed to exclude water from an area that is normally submerged. Made commonly of wood, concrete, or steel sheet piling, cofferdams are used to allow construction on the foundation of permanent dams, bridges, and similar structures. When the project is completed, the cofferdam will usually be demolished or removed unless the area requires continuous maintenance. | Technology | Structures | null |
51628 | https://en.wikipedia.org/wiki/Sweet%20potato | Sweet potato | The sweet potato or sweetpotato (Ipomoea batatas) is a dicotyledonous plant that belongs to the bindweed or morning glory family, Convolvulaceae. Its large, starchy, sweet-tasting tuberous roots are used as a root vegetable. The young shoots and leaves are sometimes eaten as greens. Cultivars of the sweet potato have been bred to bear tubers with flesh and skin of various colors. Sweet potato is only distantly related to the common potato (Solanum tuberosum), both being in the order Solanales. Although darker sweet potatoes are often referred to as "yams" in parts of North America, the species is even more distant from the true yams, which are monocots in the order Dioscoreales.
The sweet potato is native to the tropical regions of South America in what is present-day Ecuador. Of the approximately 50 genera and more than 1,000 species of Convolvulaceae, I. batatas is the only crop plant of major importance; some others are used locally (e.g., I. aquatica "kangkong" as a green vegetable), but many are poisonous. The genus Ipomoea that contains the sweet potato also includes several garden flowers called morning glories, but that term is not usually extended to I. batatas. Some cultivars of I. batatas are grown as ornamental plants under the name tuberous morning glory, and used in a horticultural context. Sweet potatoes can also be called yams in North America: when soft varieties were first grown commercially there, a need arose to differentiate them from firmer varieties. Enslaved Africans had already been calling the 'soft' sweet potatoes 'yams' because they resembled the unrelated yams of Africa. Thus, 'soft' sweet potatoes came to be called 'yams' to distinguish them from the 'firm' varieties.
Description
The plant is a herbaceous perennial vine, bearing alternate triangular or palmately lobed leaves and medium-sized sympetalous flowers. The stems usually crawl along the ground and form adventitious roots at the nodes. The leaves are arranged spirally along the stems. The leaf stalk is long. The leaf blades are very variable, long, ranging from heart-, kidney- or egg-shaped to rounded, triangular or spear-shaped; the edge can be entire, toothed, or often three to seven times lobed, cut or divided. Most leaf surfaces are bare, rarely hairy, and the tip is rounded to pointed. The leaves are mostly green, but the accumulation of anthocyanins, especially along the leaf veins, can make them purple. Depending on the variety, the total length of a stem can be between . Some cultivars also form shoots up to in length; however, these do not form underground storage organs.
The hermaphroditic, five-parted, short-stalked flowers are borne singly or in small numbers in stalked, cymose inflorescences that arise from the leaf axils and stand upright. The plant produces flowers when the day is short. The small sepals are elongated, tapering to a spiky point, (rarely only 7) long, and usually finely haired or ciliate; the inner three are a little longer. The long, funnel-shaped, folded corolla, with a shorter hem, can be lavender to purple-lavender, with a usually darker throat, though white corollas can also appear. The enclosed stamens are of unequal length, with glandular filaments. The two-chambered ovary is superior, with a relatively short style. Seeds are produced only by cross-pollination.
The flowers open before sunrise and stay open for a few hours. They close again in the morning and begin to wither. The edible tuberous root is long and tapered, with a smooth skin whose color ranges between yellow, orange, red, brown, purple, and beige. Its flesh ranges from beige through white, red, pink, violet, yellow, orange, and purple. Sweet potato cultivars with white or pale yellow flesh are less sweet and moist than those with red, pink or orange flesh.
Taxonomy
The sweet potato originates in South America in what is present-day Ecuador. The domestication of sweet potato occurred in either Central or South America. In Central America, domesticated sweet potatoes were present at least 5,000 years ago, with the origin of I. batatas possibly between the Yucatán Peninsula of Mexico and the mouth of the Orinoco River in Venezuela. The cultigen was most likely spread by local people to the Caribbean and South America by 2500 BCE.
I. trifida, a diploid, is the closest wild relative of the sweet potato, which originated from an initial cross between a tetraploid and another diploid parent, followed by a second complete genome duplication event. The oldest radiocarbon-dated remains of the sweet potato known today were discovered in caves in the Chilca Canyon, in the south-central zone of Peru, and date to 8080 ± 170 BC.
Transgenicity
The genome of cultivated sweet potatoes contains sequences of DNA from Agrobacterium (sensu lato; specifically, one related to Rhizobium rhizogenes), with genes actively expressed by the plants. The T-DNA transgenes were not observed in closely related wild relatives of the sweet potato. Studies indicated that the sweet potato genome evolved over millennia, with eventual domestication of the crop taking advantage of natural genetic modifications. These observations make sweet potatoes the first known example of a naturally transgenic food crop.
Cultivation
Dispersal history
Before the arrival of Europeans to the Americas, sweet potato was grown in Polynesia, generally spread by vine cuttings rather than by seeds. Sweet potato has been radiocarbon-dated in the Cook Islands to 1210–1400 CE. A common hypothesis is that a vine cutting was brought to central Polynesia by Polynesians who had traveled to South America and back, and spread from there across Polynesia to Easter Island, Hawaii and New Zealand. Genetic similarities have been found between Polynesian peoples and indigenous Americans including the Zenú, a people inhabiting the Pacific coast of present-day Colombia, indicating that Polynesians could have visited South America and taken sweet potatoes prior to European contact. Dutch linguists and specialists in Amerindian languages Willem Adelaar and Pieter Muysken have suggested that the word for sweet potato is shared by Polynesian languages and languages of South America: Proto-Polynesian * (compare Rapa Nui , Hawaiian , Māori ) may be connected with Quechua and Aymara ~ . Adelaar and Muysken assert that the similarity in the word for sweet potato is proof of either incidental contact or sporadic contact between the Central Andes and Polynesia.
Some researchers, citing divergence time estimates, suggest that sweet potatoes might have been present in Polynesia thousands of years before humans arrived there. However, the present scholarly consensus favours the pre-Columbian contact model.
The sweet potato arrived in Europe with the Columbian exchange. It is recorded, for example, in Elinor Fettiplace's Receipt Book, compiled in England in 1604.
Sweet potatoes were first introduced to the Philippines during the Spanish colonial period (1521–1898) via the Manila galleons, along with other New World crops. They were introduced to Fujian, China, in about 1594 from Luzon, in response to a major crop failure. The growing of sweet potatoes was encouraged by the governor Chin Hsüeh-tseng (Jin Xuezeng).
Sweet potatoes were also introduced to the Ryukyu Kingdom, present-day Okinawa, Japan, in the early 1600s by the Portuguese. Sweet potatoes became a staple in Japan because they were important in preventing famine when rice harvests were poor. Aoki Konyō helped popularize the cultivation of the sweet potato in Japan, and the Tokugawa bakufu sponsored, published, and disseminated a vernacular Japanese translation of his research monograph on sweet potatoes to encourage their growth more broadly. Sweet potatoes were planted in Shōgun Tokugawa Yoshimune's private garden. It was first introduced to Korea in 1764. Kang P'il-ri and Yi Kwang-ryŏ embarked on a project to grow sweet potatoes in Seoul in 1766, using the knowledge of Japanese cultivators they learned in Tongnae starting in 1764. The project succeeded for a year but ultimately failed in winter 1767 after Kang's unexpected death.
Names
Although the soft, orange sweet potato is often called a "yam" in parts of North America, the sweet potato is very distinct from the botanical yam (Dioscorea), which has a cosmopolitan distribution, and belongs to the monocot family Dioscoreaceae. A different crop plant, the oca (Oxalis tuberosa, a species of wood sorrel), is called a "yam" in many parts of the world.
Although the sweet potato is not closely related botanically to the common potato, they have a shared etymology. The first Europeans to taste sweet potatoes were members of Christopher Columbus's expedition in 1492. Later explorers found many cultivars under an assortment of local names, but the name which stayed was the indigenous Taíno name of batata. The Spanish combined this with the Quechua word for potato, , to create the word for the common potato.
Though the sweet potato is also called () in Hebrew, this is not a direct loan of the Taíno word. Rather, the Spanish was loaned into Arabic as (), owing to the lack of a sound in Arabic, while the sweet potato was called (); literally ('sweet potato'). The Arabic was loaned into Hebrew as designating the sweet potato only, as Hebrew had its own word for the common potato, (, literally 'earth apple'; compare French pomme de terre).
Some organizations and researchers advocate for the styling of the name as one word—sweetpotato—instead of two, to emphasize the plant's genetic uniqueness from both common potatoes and yams and to avoid confusion of it being classified as a type of common potato. In its current usage in American English, the styling of the name as two words is still preferred.
In Argentina, Colombia, Venezuela, Puerto Rico, and the Dominican Republic, the sweet potato is called . In Brazil, the sweet potato is called . In Mexico, Bolivia, Peru, Chile, Central America, and the Philippines, the sweet potato is known as (alternatively spelled in the Philippines), derived from the Nahuatl word .
In Peru and Bolivia, the general word in Quechua for the sweet potato is , but there are variants used such as , (Ayacucho Quechua), and (Bolivian Quechua), strikingly similar to the Polynesian name and its regional Oceanic cognates (, , , etc.), which has led some scholars to suspect an instance of pre-Columbian trans-oceanic contact. This theory is also supported by genetic evidence.
In Australia, about 90% of production is devoted to the orange cultivar 'Beauregard', which was originally developed by the Louisiana Agricultural Experiment Station in 1981.
In New Zealand, the Māori varieties bore elongated tubers with white skin and a whitish flesh, which points to pre-European cross-Pacific travel. Known as kumara (from the Māori language ), the most common cultivar now is the red 'Owairaka', but orange ('Beauregard'), gold, purple and other cultivars are also grown.
Habitat
The plant does not tolerate frost. It grows best at an average temperature of , with abundant sunshine and warm nights. Annual rainfalls of are considered most suitable, with a minimum of in the growing season. The crop is sensitive to drought at the tuber initiation stage 50–60 days after planting, and it is not tolerant to waterlogging, which may cause tuber rots and reduce the growth of storage roots if aeration is poor.
Depending on the cultivar and conditions, tuberous roots mature in two to nine months. With care, early-maturing cultivars can be grown as an annual summer crop in temperate areas, such as the Eastern United States and China. Sweet potatoes rarely flower when the daylight is longer than 11 hours, as is normal outside of the tropics. They are mostly propagated by stem or root cuttings or by adventitious shoots called "slips" that grow out from the tuberous roots during storage. True seeds are used for breeding only.
They grow well in many farming conditions and have few natural enemies; pesticides are rarely needed. Sweet potatoes are grown on a variety of soils, but well-drained, light- and medium-textured soils with a pH range of 4.5–7.0 are more favorable for the plant. They can be grown in poor soils with little fertilizer. However, sweet potatoes are very sensitive to aluminium toxicity and will die about six weeks after planting if lime is not applied at planting in this type of soil. As they are sown by vine cuttings rather than seeds, sweet potatoes are relatively easy to plant. As the rapidly growing vines shade out weeds, little weeding is needed. A commonly used herbicide to rid the soil of any unwelcome plants that may interfere with growth is DCPA, also known as Dacthal. In the tropics, the crop can be maintained in the ground and harvested as needed for market or home consumption. In temperate regions, sweet potatoes are most often grown on larger farms and are harvested before first frosts.
Sweet potatoes are cultivated throughout tropical and warm temperate regions wherever there is sufficient water to support their growth. Sweet potatoes became common as a food crop in the islands of the Pacific Ocean, South India, Uganda and other African countries.
A cultivar of the sweet potato called the boniato is grown in the Caribbean; its flesh is cream-colored, unlike the more common orange hue seen in other cultivars. Boniatos are not as sweet and moist as other sweet potatoes, but their consistency and delicate flavor are different from the common orange-colored sweet potato.
Sweet potatoes have been a part of the diet in the U.S. for most of its history, especially in the Southeast. The average per capita consumption of sweet potatoes in the United States is only about per year, down from in 1920. "Orange sweet potatoes (the most common type encountered in the US) received higher appearance liking scores compared with yellow or purple cultivars." Purple and yellow sweet potatoes were not as well liked by consumers compared to orange sweet potatoes "possibly because of the familiarity of orange color that is associated with sweet potatoes."
In the Southeastern U.S., sweet potatoes are traditionally cured to improve storage, flavor, and nutrition, and to allow wounds on the periderm of the harvested root to heal. Proper curing requires drying the freshly dug roots on the ground for two to three hours, then storage at with 90 to 95% relative humidity for five to fourteen days. Cured sweet potatoes can keep for thirteen months when stored at with >90% relative humidity. Colder temperatures injure the roots.
Production
In 2020, global production of sweet potatoes was 89 million tonnes, led by China with 55% of the world total (table). Secondary producers were Malawi, Tanzania, and Nigeria.
Diseases
Sweet potato suffers from Sweet potato chlorotic stunt virus (SPCSV, a Crinivirus). In synergy with any of a large number of other viruses, SPCSV produces even more severe symptoms, as found by Untiveros et al., 2007. I. batatas also suffers from soft rots caused by several Pectobacterium species, including P. carotovorum, P. odoriferum, and P. wasabiae.
Uses
Nutrition
Cooked sweet potato (baked in skin) is 76% water, 21% carbohydrates, 2% protein, and contains negligible fat (table). In a 100 gram reference amount, baked sweet potato provides 90 calories, and rich contents (20% or more of the Daily Value, DV) of vitamin A (120% DV), vitamin C (24% DV), manganese (24% DV), and vitamin B6 (20% DV). It is a moderate source (10–19% DV) of some B vitamins and potassium. Between 50% and 90% of the sugar content is sucrose. Maltose content is very low, but baking can increase the maltose content to between 10% and 20%.
Sweet potato cultivars with dark orange flesh have more beta-carotene (converted to a higher vitamin A content once digested) than those with light-colored flesh, and their increased cultivation is being encouraged in Africa where vitamin A deficiency is a serious health problem. Sweet potato leaves are edible and can be prepared like spinach or turnip greens.
Comparison to other food staples
Comparisons of sweet potato with other staple foods are made on a dry weight basis to account for their different water contents. While sweet potato provides less edible energy and protein per unit weight than cereals, it has a higher nutrient density than cereals; a sketch of the dry-weight conversion follows.
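A minimal sketch of the dry-weight conversion in Python (the function name is invented for illustration; the inputs reuse the baked sweet potato figures from the Nutrition section above, roughly 76% water and 2 g protein per 100 g):

def dry_basis(nutrient_per_100g_fresh, water_pct):
    # Grams of dry matter in 100 g of the fresh food.
    dry_matter = 100.0 - water_pct
    # Rescale the nutrient figure to 100 g of dry matter.
    return nutrient_per_100g_fresh * 100.0 / dry_matter

print(round(dry_basis(2.0, 76.0), 1))  # about 8.3 g protein per 100 g dry matter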
According to a study by the United Nations Food and Agriculture Organization, sweet potatoes are the most efficient staple food to grow in terms of farmland, yielding approximately / day.
Culinary
The starchy tuberous roots of the sweet potato are by far the most important product of the plant. In some tropical areas, the tubers are a staple food crop. The tuber is often cooked before consumption, as this improves its nutritional value and digestibility, although the American colonists in the Southeast ate raw sweet potatoes as a staple food.
The vines' tips and young leaves are edible as a green vegetable with a characteristic flavor. Older growths may be used as animal fodder.
Africa
Amukeke (sun-dried slices of root) and inginyo (sun-dried crushed root) are a staple food for people in northeastern Uganda. Amukeke is mainly served for breakfast, eaten with peanut sauce. Inginyo is mixed with cassava flour and tamarind to make atapa. People eat atapa with smoked fish cooked in peanut sauce or with dried cowpea leaves cooked in peanut sauce. Emukaru (earth-baked root) is eaten as a snack anytime and is mostly served with tea or with peanut sauce. Similar uses are also found in South Sudan.
The young leaves and vine tips of the sweet potato are widely consumed as a vegetable in West African countries (Guinea, Sierra Leone and Liberia, for example), as well as in northeastern Uganda, East Africa. According to FAO leaflet No. 13 – 1990, sweet potato leaves and shoots are a good source of vitamins A, C, and B2 (riboflavin), and according to research done by A. Khachatryan, are an excellent source of lutein.
In Kenya, Rhoda Nungo of the home economics department of the Ministry of Agriculture has written a guide to using sweet potatoes in modern recipes. This includes uses both in the mashed form and as flour from the dried tubers to replace part of the wheat flour and sugar in baked products such as cakes, chapatis, mandazis, bread, buns and cookies. A nutritious juice drink is made from the orange-fleshed cultivars, and deep-fried snacks are also included.
In Egypt, sweet potato tubers are known as () and are a common street food in winter, when street vendors with carts fitted with ovens sell them to people passing time by the Nile or the sea. The cultivars used are an orange-fleshed one as well as a white/cream-fleshed one. They are also baked at home as a snack or dessert, drenched with honey.
In Ethiopia, the commonly found cultivars are black-skinned, cream-fleshed and called bitatis or mitatis. They are cultivated in the eastern and southern lower highlands and harvested during the rainy season (June/July). In recent years, better yielding orange-fleshed cultivars were released for cultivation by Haramaya University as a less sugary sweet potato with higher vitamin A content. Sweet potatoes are widely eaten boiled as a favored snack.
In South Africa, sweet potatoes are often eaten as a side dish such as soetpatats.
Asia
In East Asia, roasted sweet potatoes are popular street food. In China, sweet potatoes, typically yellow cultivars, are baked in a large iron drum and sold as street food during winter. In Korea, sweet potatoes, known as , are roasted in a drum can, baked in foil or on an open fire, typically during winter. In Japan, a dish similar to the Korean preparation is called yaki-imo (roasted sweet potato), which typically uses either the yellow-fleshed "Japanese sweet potato" or the purple-fleshed "Okinawan sweet potato", which is known as .
Sweet potato soup, served during winter, consists of sweet potato boiled in water with rock sugar and ginger. In Fujian cuisine and Taiwanese cuisine, sweet potato is often cooked with rice to make congee. Steamed and dried sweet potato is a delicacy from Liancheng County. Sweet potato greens are a common side dish in Taiwanese cuisine, often boiled or sautéed and served with a garlic and soy sauce mixture, or simply salted before serving. They, as well as dishes featuring the sweet potato root, are commonly found at bento () restaurants. In northeastern Chinese cuisine, sweet potatoes are often cut into chunks and fried, then drenched in boiling syrup.
In some regions of India, sweet potato is roasted slowly over kitchen coals at night and eaten with some dressing, while the easier way in the south is simply boiling or pressure cooking before peeling, cubing and seasoning for a vegetable dish as part of the meal. In the Indian state of Tamil Nadu, it is known as . It is boiled and consumed as an evening snack. In some parts of India, fresh sweet potato is chipped, dried and then ground into flour; this is then mixed with wheat flour and baked into chapatti (bread). Between 15 and 20 percent of the sweet potato harvest is converted by some Indian communities into pickles and snack chips. A part of the tuber harvest is used in India as cattle fodder.
In Pakistan, sweet potato is known as and is cooked as a vegetable dish and also with meat dishes (chicken, mutton or beef). The ash-roasted sweet potatoes are sold as a snack and street food in Pakistani bazaars especially during the winter months.
In Sri Lanka, it is called , and tubers are used mainly for breakfast (boiled sweet potato is commonly served with sambal or grated coconut) or as a supplementary curry dish for rice.
The tubers of this plant, known as in Dhivehi, have been used in the traditional diet of the Maldives. The leaves were finely chopped and used in dishes such as mas huni.
In Japan, both sweet potatoes (called satsuma-imo) and true purple yams (called or ) are grown. Boiling, roasting and steaming are the most common cooking methods. Also, the use in vegetable tempura is common. Daigaku-imo (大学芋) is a baked and caramel-syruped sweet potato dessert. As it is sweet and starchy, it is used in imo-kinton and some other traditional sweets, such as ofukuimo. What is commonly called "sweet potato" (スイートポテト) in Japan is a cake made by baking mashed sweet potatoes.
Shōchū, a Japanese spirit normally made from the fermentation of rice, can also be made from sweet potato, in which case it is called . Imo-gohan, sweet potato cooked with rice, is popular in Guangdong, Taiwan and Japan. It is also served in nimono or nitsuke, boiled and typically flavored with soy sauce, mirin and dashi.
In Korean cuisine, sweet potato starch is used to produce (cellophane noodles). Sweet potatoes are also boiled, steamed, or roasted, and young stems are eaten as namul. Pizza restaurants such as Pizza Hut and Domino's in Korea are using sweet potatoes as a popular topping. Sweet potatoes are also used in the distillation of a variety of Soju. A popular Korean side dish or snack, , also known as Korean candied sweet potato, is made by deep-frying sweet potatoes that were cut into big chunks and coating them with caramelized sugar.
In Malaysia and Singapore, sweet potato is often cut into small cubes and cooked with taro and coconut milk () to make a sweet dessert called bubur cha cha. A favorite way of cooking sweet potato is deep-frying slices of sweet potato in batter, served as a tea-time snack. In homes, sweet potatoes are usually boiled. The leaves of sweet potatoes are usually stir-fried with only garlic or with and dried shrimp by Malaysians.
In the Philippines, sweet potatoes (locally known as or ) are an important food crop in rural areas. They are often a staple among impoverished families in provinces, as they are easier to cultivate and cost less than rice. The tubers are boiled or baked in coals and may be dipped in sugar or syrup. Young leaves and shoots (locally known as or tops) are eaten fresh in salads with shrimp paste () or fish sauce. They can be cooked in vinegar and soy sauce and served with fried fish (a dish known as ), or with recipes such as sinigang. The stew obtained from boiling tops is purple-colored, and is often mixed with lemon as juice. Sweet potatoes are also sold as street food in suburban and rural areas. Fried sweet potatoes coated with caramelized sugar and served in skewers (camote cue) or as French fries are popular afternoon snacks. Sweet potatoes are also used in a variant of halo-halo called , where they are cooked in coconut milk and sugar and mixed with a variety of rootcrops, sago, jackfruit, and (glutinous rice balls). Bread made from sweet potato flour is also gaining popularity. Sweet potato is relatively easy to propagate, and in rural areas can be seen abundantly at canals and dikes. The uncultivated plant is usually fed to pigs.
In Indonesia, sweet potatoes are locally known as (lit: "spreading tuber") or simply and are frequently fried with batter and served as snacks with spicy condiments, along with other kinds of fritters such as fried bananas, tempeh, tahu, breadfruit, or cassava. In the mountainous regions of West Papua, sweet potatoes are the staple food among the natives there. In one traditional cooking method, rocks heated in a nearby bonfire are thrown into a pit lined with leaves. Layers of sweet potatoes, an assortment of vegetables, and pork are piled on top of the rocks. The top of the pile is then insulated with more leaves, trapping heat and steam inside, which cook all the food in the pile after several hours.
In Vietnamese cuisine sweet potatoes are known as and they are commonly cooked with a sweetener such as corn syrup, honey, sugar, or molasses.
Young sweet potato leaves are also used as baby food, particularly in Southeast Asia and East Asia. Mashed sweet potato tubers are used similarly throughout the world.
United States
Candied sweet potatoes are a side dish consisting mainly of sweet potatoes prepared with brown sugar, marshmallows, maple syrup, molasses, orange juice, marron glacé, or other sweet ingredients. It is often served in the US on Thanksgiving. Sweet potato casserole is a side dish of mashed sweet potatoes in a casserole dish, topped with a brown sugar and pecan topping.
The sweet potato became a favorite food item of the French and Spanish settlers, thus beginning a long history of cultivation in Louisiana. Sweet potatoes are recognized as the state vegetable of Alabama, Louisiana, and North Carolina. Sweet potato pie is also a traditional favorite dish in Southern U.S. cuisine. Another variation on the typical sweet potato pie is the Okinawan sweet potato haupia pie, which is made with purple sweet potatoes.
The tradition of frying sweet potatoes dates to the early nineteenth century in the United States. Sweet potato fries or chips are a common preparation, made by julienning and deep-frying sweet potatoes in the fashion of French fried potatoes. Roasting sliced or chopped sweet potatoes lightly coated in animal or vegetable oil at high heat, a dish also called "sweet potato fries", became common in the United States at the start of the 21st century. Sweet potato mash is served as a side dish, often at Thanksgiving dinner or with barbecue.
John Buttencourt Avila is called the "father of the sweet potato industry" in North America.
Oceania
Māori grew several varieties of small, yellow-skinned, finger-sized kūmara (with names including , , , , and ) that they had brought with them from east Polynesia. Modern trials have shown that these smaller varieties were capable of producing well, but when American whalers, sealers and trading vessels introduced larger cultivars in the early 19th century, they quickly predominated.
Prior to 2021, archaeologists believed that the sweet potato failed to flourish in New Zealand south of Christchurch due to the colder climate, forcing Māori in those latitudes to become (along with the Moriori of the Chatham Islands) the only Polynesian people who subsisted solely on hunting and gathering. However, a 2021 analysis of material excavated from a site near Dunedin, some distance further south, revealed that sweet potatoes were grown and stored there during the 15th century, before cultivation was disrupted by factors speculated to include the Little Ice Age.
Māori traditionally cooked kūmara in a hāngī (earth oven). This is still a common practice when there are large gatherings on marae.
In 1947, black rot (Ceratocystis fimbriata) appeared in kūmara around Auckland and increased in severity through the 1950s. A disease-free strain was developed by Joe and Fay Gock. They gave the strain to the nation, earning them the Bledisloe Cup in 2013.
There are three main cultivars of kūmara sold in New Zealand: 'Owairaka Red' ("red"), 'Toka Toka Gold' ("gold"), and 'Beauregard' ("orange"). The country grows around 24,000 metric tons of kūmara annually, with nearly all of it (97%) grown in the Northland Region. Kūmara are widely available throughout New Zealand year-round, where they are a popular alternative to potatoes.
Kūmara are often included in roast meals, and served with sour cream and sweet chili sauce. They are served alongside such vegetables as potatoes and pumpkin and as such, are generally prepared in a savory manner. They are ubiquitous in supermarkets, roast meal takeaway shops and hāngī.
Among the Urapmin people of Papua New Guinea, taro (known in Urap as ) and the sweet potato (Urap: ) are the main sources of sustenance, and in fact the word for 'food' in Urap is a compound of these two words.
Europe
In the Veneto (northeast Italy), sweet potato is known as in the Venetian language ( in Italian, meaning "American potato"), and it is cultivated mainly in the southern area of the region.
In Spain, sweet potato is called . On the evening of All Souls' Day, in Catalonia (northeastern Spain) it is traditional to serve roasted sweet potato and chestnuts, panellets and sweet wine. The occasion is called . As of 2023 Spain is the largest sweet potato producer in Europe.
South America
In Peru, sweet potatoes are called and are frequently served alongside ceviche. Sweet potato chips are also a commonly sold snack, be it on the street or in packaged foods.
Dulce de batata is a traditional Argentine, Paraguayan and Uruguayan dessert made of sweet potatoes. It is a sweet jelly that resembles marmalade in its color and sweetness, but it has a firmer texture and has to be sliced into thin portions with a knife, as if it were a pie.
Globally
Globally, sweet potatoes are now a staple ingredient of modern sushi cuisine, specifically used in maki rolls. The advent of sweet potato as a sushi ingredient is credited to chef Bun Lai of Miya's Sushi, who first introduced sweet potato rolls in the 1990s as a plant-based alternative to traditional fish-based sushi rolls.
Molecular gastronomy
Freezing a sweet potato until solid, baking at a low temperature, then increasing to a high temperature brings out the sweetness by caramelizing converted sugars.
Ceramics
Ceramics modeled after sweet potatoes or are often found in the Moche culture.
Dyes
In South America, the juice of red sweet potatoes is combined with lime juice to make a dye for cloth. By varying the proportions of the juices, every shade from pink to black can be obtained. Purple sweet potato color is also used as a natural food coloring.
Aquariums
Cuttings of sweet potato vine, either edible or ornamental cultivars, will rapidly form roots in water and will grow in it, indefinitely, in good lighting with a steady supply of nutrients. For this reason, sweet potato vine is ideal for use in home aquariums, trailing out of the water with its roots submerged, as its rapid growth is fueled by toxic ammonia and nitrates, a waste product of aquatic life, which it removes from the water. This improves the living conditions for fish, which also find refuge in the extensive root systems.
Ornamentals
Ornamental sweet potatoes are popular landscape, container, and bedding plants. Grown as an annual in zones up to USDA hardiness Zone 9, they grow rapidly and spread quickly. Cultivars are available in many colors, such as green, yellow, and purple. Some ornamental varieties, like 'Blackie', flower more than others. These ornamental cultivars are not poisonous, and although the leaves are edible, the tubers do not have a good taste.
| Biology and health sciences | Solanales | null |
51632 | https://en.wikipedia.org/wiki/Puffball | Puffball | Puffballs are a type of fungus featuring a ball-shaped fruit body that (when mature) bursts on contact or impact, releasing a cloud of dust-like spores into the surrounding area. Puffballs belong to the division Basidiomycota and encompass several genera, including Calvatia, Calbovista and Lycoperdon. The puffballs were previously treated as a taxonomic group called the Gasteromycetes or Gasteromycetidae, but they are now known to be a polyphyletic assemblage.
The distinguishing feature of all puffballs is that they do not have an open cap with spore-bearing gills. Instead, spores are produced internally, in a spheroidal fruit body called a gasterothecium (gasteroid 'stomach-like' basidiocarp). As the spores mature, they form a mass called a gleba in the centre of the fruitbody that is often of a distinctive color and texture. The basidiocarp remains closed until after the spores have been released from the basidia. Eventually, it develops an aperture, or dries, becomes brittle, and splits, and the spores escape. The spores of puffballs are statismospores rather than ballistospores, meaning they are not forcibly extruded from the basidium. Puffballs and similar forms are thought to have evolved convergently (that is, in numerous independent events) from Hymenomycetes by gasteromycetation, through secotioid stages. Thus, 'Gasteromycetes' and 'Gasteromycetidae' are now considered to be descriptive, morphological terms (more properly gasteroid or gasteromycetes, to avoid taxonomic implications) but not valid cladistic terms.
True puffballs do not have a visible stalk or stem, while stalked puffballs do have a stalk that supports the gleba. None of the stalked puffballs are edible as they are tough and woody mushrooms. The Hymenogastrales and Enteridium lycoperdon, a slime mold, are the false puffballs. A gleba which is powdery on maturity is a feature of true puffballs, stalked puffballs and earthstars. False puffballs are hard like rock or brittle. All false puffballs are inedible, as they are tough and bitter to taste. The genus Scleroderma, which has a young purple gleba, should also be avoided.
Puffballs were traditionally used in Tibet for making ink: they were burned, the ash was ground and put in water with glue liquid and "a decoction", and the mixture, when pressed for a long time, made a dark black substance that was used as ink. Rural Americans burned the common puffball with some kind of bee smoker to anesthetize honey bees as a means to safely procure honey; the practice later inspired experimental medicinal application of the puffball smoke as a surgical general anesthetic in 1853.
Edibility and identification
While most puffballs are not poisonous, some often look similar to young agarics, especially the deadly Amanitas such as the death cap or destroying angel mushrooms, which can be very toxic. Young puffballs in the edible stage, before maturation of the gleba, have undifferentiated white flesh within, whereas the gills of immature Amanita mushrooms can be seen if they are closely examined.
The giant puffball, Calvatia gigantea (earlier classified as Lycoperdon giganteum), reaches or more in diameter, and is difficult to mistake for any other fungus. It has been estimated that, when mature, a large specimen of this fungus will produce around 7 × 10^12 spores.
Not all true puffball mushrooms are without stalks. Some may also be stalked, such as the Podaxis pistillaris, which is also called the "false shaggy mane". There are also a number of false puffballs that look similar to the true ones.
Stalked
Stalked puffballs species:
Battarrea phalloides
Calostoma cinnabarina (Stalked Puffball-in-Aspic)
Pisolithus tinctorius
Tulostoma (genus)
True
True puffballs genera and species:
Bovista – various species, including:
Bovista aestivalis
Bovista dermoxantha
Bovista nigrescens
Bovista plumbea
Calvatia – various species, including:
Calvatia bovista
Calvatia craniiformis
Calvatia cyathiformis
Calvatia gigantea
Calvatia booniana
Calvatia fumosa
Calvatia lepidophora
Calvatia pachyderma
Calvatia sculpta
Calvatia subcretacea – edible
Calbovista subsculpta
Handkea – various species, including:
Handkea utriformis
Lycoperdon – various species, including:
Lycoperdon candidum
Lycoperdon echinatum
Lycoperdon fusillum
Lycoperdon umbrinum
Scleroderma – various species, including:
Scleroderma aurantium
Scleroderma geaster – not edible
False
False puffballs species:
Endoptychum agaricoides
Nivatogastrium nubigenum
Podaxis pistillaris
Rhizopogon rubescens
Truncocolumella citrina
Classification
Major orders:
Agaricales (including now-obsolete orders Lycoperdales, Tulostomatales, and Nidulariales)
Basidiomycetes: Agaricales: Lycoperdaceae: Calvatia
Calvatia booniana
Calvatia bovista (Handkea utriformis)
Calvatia craniiformis
Calvatia cyathiformis
Calvatia fumosa (Handkea fumosa)
Calvatia gigantea
Calvatia lepidophora
Calvatia rubroflava
Calvatia sculpta
Calvatia subcretacea (Handkea subcretacea)
Basidiomycetes: Agaricales: Lycoperdaceae: Lycoperdon
Lycoperdon foetidum (Lycoperdon nigrescens)
Lycoperdon perlatum
Lycoperdon pulcherrimum
Lycoperdon pusillum
Lycoperdon pyriforme
Basidiomycetes: Agaricales: Lycoperdaceae: Vascellum
Vascellum curtisii
Vascellum pratense – edible when interior is white
Geastrales and Phallales (related to Cantharellales),
Basidiomycetes: Phallales: Geastraceae: Geastrum
Geastrum coronatum
Geastrum fornicatum
Geastrum saccatum
Sclerodermatales (related to Boletales)
Basidiomycetes: Boletales: Sclerodermataceae: Scleroderma
Scleroderma areolatum
Scleroderma bovista
Scleroderma cepa
Scleroderma citrinum
Scleroderma meridionale
Scleroderma michiganense
Scleroderma polyrhizum
Scleroderma septentrionale
Various false-truffles (hypogaeic gasteromycetes) related to different hymenomycete orders
Similarly, the true truffles (Tuberales) are gasteroid Ascomycota. Their ascocarps are called tuberothecia.
| Biology and health sciences | Basics | Plants |
51698 | https://en.wikipedia.org/wiki/Extended%20real%20number%20line | Extended real number line | In mathematics, the extended real number system is obtained from the real number system $\mathbb{R}$ by adding two elements denoted $+\infty$ and $-\infty$ that are respectively greater than and less than every real number. This allows for treating the potential infinities of infinitely increasing sequences and infinitely decreasing sequences as actual infinities. For example, the infinite sequence of the natural numbers increases without bound and has no upper bound in the real number system (a potential infinity); in the extended real number line, the sequence has $+\infty$ as its least upper bound and as its limit (an actual infinity). In calculus and mathematical analysis, the use of $+\infty$ and $-\infty$ as actual limits significantly extends the possible computations. It is the Dedekind–MacNeille completion of the real numbers.
The extended real number system is denoted $\overline{\mathbb{R}}$, $[-\infty, +\infty]$, or $\mathbb{R} \cup \{-\infty, +\infty\}$. When the meaning is clear from context, the symbol $+\infty$ is often written simply as $\infty$.
There is also a distinct projectively extended real line where $+\infty$ and $-\infty$ are not distinguished, i.e., there is a single actual infinity for both infinitely increasing and infinitely decreasing sequences, denoted as just $\infty$ or as $\pm\infty$.
Motivation
Limits
The extended number line is often useful to describe the behavior of a function $f$ when either the argument $x$ or the function value $f(x)$ gets "infinitely large" in some sense. For example, consider the function defined by
$$f(x) = \frac{1}{x^2}.$$
The graph of this function has a horizontal asymptote at $y = 0$. Geometrically, when moving increasingly farther to the right along the $x$-axis, the value of $\frac{1}{x^2}$ approaches 0. This limiting behavior is similar to the limit of a function $\lim_{x \to x_0} f(x)$ in which the real number $x$ approaches $x_0$, except that there is no real number that $x$ approaches when $x$ increases infinitely. Adjoining the elements $+\infty$ and $-\infty$ to $\mathbb{R}$ enables a definition of "limits at infinity" which is very similar to the usual definition of limits, except that the condition $0 < |x - a| < \delta$ is replaced by $x > N$ (for $+\infty$) or $x < -N$ (for $-\infty$). This allows proving and writing
$$\lim_{x \to +\infty} \frac{1}{x^2} = 0.$$
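Stated in full, a limit at infinity has the same shape as an ordinary epsilon-style limit; for a real number $L$, a standard formulation is
$$\lim_{x \to +\infty} f(x) = L \quad \Longleftrightarrow \quad \text{for every } \varepsilon > 0 \text{ there exists } N \in \mathbb{R} \text{ such that } |f(x) - L| < \varepsilon \text{ whenever } x > N.$$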
Measure and integration
In measure theory, it is often useful to allow sets that have infinite measure and integrals whose value may be infinite.
Such measures arise naturally out of calculus. For example, in assigning a measure to $\mathbb{R}$ that agrees with the usual length of intervals, this measure must be larger than any finite real number. Also, when considering improper integrals, such as $\int_1^{\infty} \frac{dx}{x},$
the value "infinity" arises. Finally, it is often useful to consider the limit of a sequence of functions, such as
.
Without allowing functions to take on infinite values, such essential results as the monotone convergence theorem and the dominated convergence theorem would not make sense.
Order and topological properties
The extended real number system $\overline{\mathbb{R}}$, defined as $\mathbb{R} \cup \{-\infty, +\infty\}$ or $[-\infty, +\infty]$, can be turned into a totally ordered set by defining $-\infty \leq a \leq +\infty$ for all $a$. With this order topology, $\overline{\mathbb{R}}$ has the desirable property of compactness: Every subset of $\overline{\mathbb{R}}$ has a supremum and an infimum (the infimum of the empty set is $+\infty$, and its supremum is $-\infty$). Moreover, with this topology, $\overline{\mathbb{R}}$ is homeomorphic to the unit interval $[0, 1]$. Thus the topology is metrizable, corresponding (for a given homeomorphism) to the ordinary metric on this interval. There is no metric, however, that is an extension of the ordinary metric on $\mathbb{R}$.
In this topology, a set $U$ is a neighborhood of $+\infty$ if and only if it contains a set $\{x : x > a\}$ for some real number $a$. The notion of the neighborhood of $-\infty$ can be defined similarly. Using this characterization of extended-real neighborhoods, limits with $x$ tending to $+\infty$ or $-\infty$, and limits "equal" to $+\infty$ and $-\infty$, reduce to the general topological definition of limits, instead of having a special definition in the real number system.
Arithmetic operations
The arithmetic operations of $\mathbb{R}$ can be partially extended to $\overline{\mathbb{R}}$ as follows:
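Writing $a$ for an arbitrary real number, the standard conventions are:
$$a + \infty = +\infty + a = +\infty \quad (a \neq -\infty),$$
$$a - \infty = -\infty + a = -\infty \quad (a \neq +\infty),$$
$$a \cdot (\pm\infty) = \pm\infty \cdot a = \pm\infty \quad (a \in (0, +\infty]),$$
$$a \cdot (\pm\infty) = \pm\infty \cdot a = \mp\infty \quad (a \in [-\infty, 0)),$$
$$\frac{a}{\pm\infty} = 0 \quad (a \in \mathbb{R}),$$
$$\frac{\pm\infty}{a} = \pm\infty \quad (a \in (0, +\infty)),$$
$$\frac{\pm\infty}{a} = \mp\infty \quad (a \in (-\infty, 0)).$$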
For exponentiation, see . Here, $a + \infty$ means both $a + (+\infty)$ and $a - (-\infty)$, while $a - \infty$ means both $a - (+\infty)$ and $a + (-\infty)$.
The expressions $\infty - \infty$, $0 \times (\pm\infty)$, and $\pm\infty / \pm\infty$ (called indeterminate forms) are usually left undefined. These rules are modeled on the laws for infinite limits. However, in the context of probability or measure theory, $0 \times (\pm\infty)$ is often defined as 0.
When dealing with both positive and negative extended real numbers, the expression $1/0$ is usually left undefined, because, although it is true that for every real nonzero sequence $f$ that converges to 0, the reciprocal sequence $1/f$ is eventually contained in every neighborhood of $\{-\infty, +\infty\}$, it is not true that the sequence $1/f$ must itself converge to either $-\infty$ or $+\infty$. Said another way, if a continuous function $f$ achieves a zero at a certain value $x_0$, then it need not be the case that $1/f$ tends to either $-\infty$ or $+\infty$ in the limit as $x$ tends to $x_0$. This is the case for the limits of the identity function $f(x) = x$ when $x$ tends to 0, and of $f(x) = x^2 \sin(1/x)$ (for the latter function, neither $-\infty$ nor $+\infty$ is a limit of $1/f(x)$, even if only positive values of $x$ are considered).
However, in contexts where only non-negative values are considered, it is often convenient to define $1/0 = +\infty$. For example, when working with power series, the radius of convergence of a power series with coefficients $a_n$ is often defined as the reciprocal of the limit-supremum of the sequence $\left(|a_n|^{1/n}\right)$. Thus, if one allows $1/0$ to take the value $+\infty$, then one can use this formula regardless of whether the limit-supremum is 0 or not.
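These conventions are mirrored closely by IEEE 754 floating-point arithmetic, in which defined operations on signed infinities follow the rules above while the indeterminate forms evaluate to NaN ("not a number") instead of raising errors. A minimal sketch in Python:

import math

inf = math.inf

# Defined extended-real operations carry over directly:
assert inf + 1.0 == inf
assert -2.0 * inf == -inf
assert 1.0 / inf == 0.0

# The indeterminate forms yield NaN rather than any extended real number:
assert math.isnan(inf - inf)
assert math.isnan(0.0 * inf)
assert math.isnan(inf / inf)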
Algebraic properties
With the arithmetic operations defined above, $\overline{\mathbb{R}}$ is not even a semigroup, let alone a group, a ring or a field as in the case of $\mathbb{R}$. However, it has several convenient properties:
$a + (b + c)$ and $(a + b) + c$ are either equal or both undefined.
$a + b$ and $b + a$ are either equal or both undefined.
$a \cdot (b \cdot c)$ and $(a \cdot b) \cdot c$ are either equal or both undefined.
$a \cdot b$ and $b \cdot a$ are either equal or both undefined.
$a \cdot (b + c)$ and $a \cdot b + a \cdot c$ are equal if both are defined.
If $a \leq b$ and if both $a + c$ and $b + c$ are defined, then $a + c \leq b + c$.
If $a \leq b$ and $c > 0$ and if both $a \cdot c$ and $b \cdot c$ are defined, then $a \cdot c \leq b \cdot c$.
In general, all laws of arithmetic are valid in $\overline{\mathbb{R}}$ as long as all occurring expressions are defined.
Miscellaneous
Several functions can be continuously extended to $\overline{\mathbb{R}}$ by taking limits. For instance, one may define the extremal points of the following functions as:
$\exp(-\infty) = 0$,
$\ln(0) = -\infty$,
$\tanh(\pm\infty) = \pm 1$,
$\arctan(\pm\infty) = \pm\tfrac{\pi}{2}$.
Some singularities may additionally be removed. For example, the function $1/x^2$ can be continuously extended to $\overline{\mathbb{R}}$ (under some definitions of continuity), by setting the value to $+\infty$ for $x = 0$, and 0 for $x = +\infty$ and $x = -\infty$. On the other hand, the function $1/x$ cannot be continuously extended, because it approaches $-\infty$ as $x$ approaches 0 from below and $+\infty$ as $x$ approaches 0 from above; that is, the function does not converge to a single value as its argument approaches 0 from the two sides.
A similar but different real-line system, the projectively extended real line, does not distinguish between $+\infty$ and $-\infty$ (i.e. infinity is unsigned). As a result, a function may have the limit $\infty$ on the projectively extended real line, while in the extended real number system only the absolute value of the function has a limit, e.g. in the case of the function $1/x$ at $x = 0$. On the other hand, on the projectively extended real line, $\lim_{x \to -\infty} f(x)$ and $\lim_{x \to +\infty} f(x)$ correspond to only a limit from the right and one from the left, respectively, with the full limit only existing when the two are equal. Thus, the functions $e^x$ and $\arctan(x)$ cannot be made continuous at $x = \infty$ on the projectively extended real line.
| Mathematics | Real analysis | null |
51706 | https://en.wikipedia.org/wiki/Very%20long%20instruction%20word | Very long instruction word | Very long instruction word (VLIW) refers to instruction set architectures that are designed to exploit instruction-level parallelism (ILP). A VLIW processor allows programs to explicitly specify instructions to execute in parallel, whereas conventional central processing units (CPUs) mostly allow programs to specify instructions to execute in sequence only. VLIW is intended to allow higher performance without the complexity inherent in some other designs.
The traditional means to improve performance in processors include dividing instructions into sub steps so the instructions can be executed partly at the same time (termed pipelining), dispatching individual instructions to be executed independently, in different parts of the processor (superscalar architectures), and even executing instructions in an order different from the program (out-of-order execution). These methods all complicate hardware (larger circuits, higher cost and energy use) because the processor must make all of the decisions internally for these methods to work.
In contrast, the VLIW method depends on the programs providing all the decisions regarding which instructions to execute simultaneously and how to resolve conflicts. As a practical matter, this means that the compiler (software used to create the final programs) becomes more complex, but the hardware is simpler than in many other means of parallelism.
History
The concept of VLIW architecture, and the term VLIW, were invented by Josh Fisher in his research group at Yale University in the early 1980s. His original development of trace scheduling as a compiling method for VLIW occurred when he was a graduate student at New York University. Before VLIW, the notion of prescheduling execution units and instruction-level parallelism in software was well established in the practice of developing horizontal microcode. Before Fisher, the theoretical aspects of what would later be called VLIW were developed by the Soviet computer scientist Mikhail Kartsev based on his 1960s work on the military-oriented M-9 and M-10 computers. His ideas were later developed and published as part of a textbook two years before Fisher's seminal paper, but because of the Iron Curtain, and because Kartsev's work was mostly military-related, it remained largely unknown in the West.
Fisher's innovations involved developing a compiler that could target horizontal microcode from programs written in an ordinary programming language. He realized that to get good performance and target a wide-issue machine, it would be necessary to find parallelism beyond that generally within a basic block. He also developed region scheduling methods to identify parallelism beyond basic blocks. Trace scheduling is such a method, and involves scheduling the most likely path of basic blocks first, inserting compensating code to deal with speculative motions, scheduling the second most likely trace, and so on, until the schedule is complete.
Fisher's second innovation was the notion that the target CPU architecture should be designed to be a reasonable target for a compiler; that the compiler and the architecture for a VLIW processor must be codesigned. This was inspired partly by the difficulty Fisher observed at Yale of compiling for architectures like Floating Point Systems' FPS164, which had a complex instruction set computing (CISC) architecture that separated instruction initiation from the instructions that saved the result, needing very complex scheduling algorithms. Fisher developed a set of principles characterizing a proper VLIW design, such as self-draining pipelines, wide multi-port register files, and memory architectures. These principles made it easier for compilers to emit fast code.
The first VLIW compiler was described in a Ph.D. thesis by John Ellis, supervised by Fisher. The compiler was named Bulldog, after Yale's mascot.
Fisher left Yale in 1984 to found a startup company, Multiflow, along with cofounders John O'Donnell and John Ruttenberg. Multiflow produced the TRACE series of VLIW minisupercomputers, shipping their first machines in 1987. Multiflow's VLIW could issue 28 operations in parallel per instruction. The TRACE system was implemented in a mix of medium-scale integration (MSI), large-scale integration (LSI), and very large-scale integration (VLSI), packaged in cabinets, a technology obsoleted as it grew more cost-effective to integrate all of the components of a processor (excluding memory) on one chip.
Multiflow was too early to catch the following wave, when chip architectures began to allow multiple-issue CPUs. The major semiconductor companies recognized the value of Multiflow technology in this context, so the compiler and architecture were subsequently licensed to most of these firms.
Motivation
A processor that executes every instruction one after the other (i.e., a non-pipelined scalar architecture) may use processor resources inefficiently, yielding potentially poor performance. The performance can be improved by executing different substeps of sequential instructions simultaneously (termed pipelining), or even executing multiple instructions entirely simultaneously as in superscalar architectures. Further improvement can be achieved by executing instructions in an order different from that in which they occur in a program, termed out-of-order execution.
These three methods all raise hardware complexity. Before executing any operations in parallel, the processor must verify that the instructions have no interdependencies. For example, if a first instruction's result is used as a second instruction's input, then they cannot execute at the same time and the second instruction cannot execute before the first. Modern out-of-order processors have increased the hardware resources which schedule instructions and determine interdependencies.
In contrast, VLIW executes operations in parallel, based on a fixed schedule, determined when programs are compiled. Since determining the order of execution of operations (including which operations can execute simultaneously) is handled by the compiler, the processor does not need the scheduling hardware that the three methods described above require. Thus, VLIW CPUs offer more computing with less hardware complexity (but greater compiler complexity) than do most superscalar CPUs. This is also complementary to the idea that as many computations as possible should be done before the program is executed, at compile time.
Design
In superscalar designs, the number of execution units is invisible to the instruction set. Each instruction encodes one operation only. For most superscalar designs, the instruction width is 32 bits or fewer.
In contrast, one VLIW instruction encodes multiple operations, at least one operation for each execution unit of a device. For example, if a VLIW device has five execution units, then a VLIW instruction for the device has five operation fields, each field specifying what operation should be done on that corresponding execution unit. To accommodate these operation fields, VLIW instructions are usually at least 64 bits wide, and far wider on some architectures.
For example, the following is an instruction for the Super Harvard Architecture Single-Chip Computer (SHARC). In one cycle, it does a floating-point multiply, a floating-point add, and two autoincrement loads. All of this fits in one 48-bit instruction:
f12 = f0 * f4, f8 = f8 + f12, f0 = dm(i0, m3), f4 = pm(i8, m9);
Since the earliest days of computer architecture, some CPUs have added several arithmetic logic units (ALUs) to run in parallel. Superscalar CPUs use hardware to decide which operations can run in parallel at runtime, while VLIW CPUs use software (the compiler) to decide which operations can run in parallel in advance. Because the complexity of instruction scheduling is moved into the compiler, complexity of hardware can be reduced substantially.
A similar problem occurs when the result of a parallelizable instruction is used as input for a branch. Most modern CPUs guess which branch will be taken even before the calculation is complete, so that they can load the instructions for the branch, or (in some architectures) even start to compute them speculatively. If the CPU guesses wrong, all of these instructions and their context need to be flushed and the correct ones loaded, which takes time.
This has led to increasingly complex instruction-dispatch logic that attempts to guess correctly, and the simplicity of the original reduced instruction set computing (RISC) designs has been eroded. VLIW lacks this logic, and thus lacks its energy use, possible design defects, and other negative aspects.
In a VLIW, the compiler uses heuristics or profile information to guess the direction of a branch. This allows it to move and preschedule operations speculatively before the branch is taken, favoring the most likely path it expects through the branch. If the branch takes an unexpected path, the compiler has already generated compensating code to discard speculative results to preserve program semantics.
Vector processor cores (designed for large one-dimensional arrays of data called vectors) can be combined with the VLIW architecture such as in the Fujitsu FR-V microprocessor, further increasing throughput and speed.
Implementations
Cydrome was a company producing VLIW numeric processors using emitter-coupled logic (ECL) integrated circuits in the same timeframe (late 1980s). This company, like Multiflow, failed after a few years.
One of the licensees of the Multiflow technology is Hewlett-Packard, which Josh Fisher joined after Multiflow's demise. Bob Rau, founder of Cydrome, also joined HP after Cydrome failed. These two would lead computer architecture research at Hewlett-Packard during the 1990s.
Along with the above systems, during the same time (1989–1990), Intel implemented VLIW in the Intel i860, their first 64-bit microprocessor, and the first processor to implement VLIW on one chip. This processor could operate in both simple RISC mode and VLIW mode:
In the early 1990s, Intel introduced the i860 RISC microprocessor. This simple chip had two modes of operation: a scalar mode and a VLIW mode. In the VLIW mode, the processor always fetched two instructions and assumed that one was an integer instruction and the other floating-point.
The i860's VLIW mode was used extensively in embedded digital signal processor (DSP) applications, since the application execution and datasets were simple, well ordered and predictable, allowing designers to fully exploit the parallel execution advantages enabled by VLIW. In VLIW mode, the i860 could maintain floating-point performance in the range of 20–40 double-precision MFLOPS; a very high value for its time and for a processor running at 25–50 MHz.
In the 1990s, Hewlett-Packard researched this problem as a side effect of ongoing work on their PA-RISC processor family. They found that the CPU could be greatly simplified by removing the complex dispatch logic from the CPU and placing it in the compiler. Compilers of the day were far more complex than those of the 1980s, so the added complexity in the compiler was considered to be a small cost.
VLIW CPUs are usually made of multiple RISC-like execution units that operate independently. Contemporary VLIWs usually have four to eight main execution units. Compilers generate initial instruction sequences for the VLIW CPU in roughly the same manner as for traditional CPUs, generating a sequence of RISC-like instructions. The compiler analyzes this code for dependence relationships and resource requirements. It then schedules the instructions according to those constraints. In this process, independent instructions can be scheduled in parallel. Because VLIWs typically represent instructions scheduled in parallel with a longer instruction word that incorporates the individual instructions, this results in a much longer opcode (termed very long) to specify what executes on a given cycle.
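As a rough illustration of this compile-time scheduling (a minimal sketch, not any production compiler's algorithm; the operation names and the four-wide machine are invented for the example), the following Python packs a sequence of RISC-like operations greedily into fixed-width bundles, starting a new bundle whenever an operation uses a value produced in the current one:

from dataclasses import dataclass

@dataclass
class Op:
    name: str
    reads: frozenset
    writes: str

def schedule(ops, width=4):
    # Greedy list scheduling: fill the current bundle until a dependency
    # or the machine's issue width forces a new cycle.
    bundles, current, produced = [], [], set()
    for op in ops:
        dependent = bool(op.reads & produced) or op.writes in produced
        if dependent or len(current) == width:
            bundles.append(current)
            current, produced = [], set()
        current.append(op)
        produced.add(op.writes)
    if current:
        bundles.append(current)
    return bundles

prog = [
    Op("load a", frozenset(), "a"),
    Op("load b", frozenset(), "b"),
    Op("c = a*b", frozenset({"a", "b"}), "c"),
    Op("d = c+1", frozenset({"c"}), "d"),
]
for cycle, group in enumerate(schedule(prog)):
    print(f"cycle {cycle}: " + " | ".join(op.name for op in group))

The two independent loads share one bundle (one very long instruction), while the dependent multiply and add each force a new cycle, mirroring how the SHARC instruction above packs independent operations together.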
Examples of contemporary VLIW CPUs include the TriMedia media processors by NXP (formerly Philips Semiconductors), the Super Harvard Architecture Single-Chip Computer (SHARC) DSP by Analog Devices, the ST200 family by STMicroelectronics based on the Lx architecture (designed in Josh Fisher's HP lab by Paolo Faraboschi), the FR-V from Fujitsu, the BSP15/16 from Pixelworks, the CEVA-X DSP from CEVA, the Jazz DSP from Improv Systems, the HiveFlex series from Silicon Hive, and the MPPA Manycore family by Kalray. The Texas Instruments TMS320 DSP line has evolved, in its C6000 family, to look more like a VLIW, in contrast to the earlier C5000 family. These contemporary VLIW CPUs are mainly successful as embedded media processors for consumer electronic devices.
VLIW features have also been added to configurable processor cores for system-on-a-chip (SoC) designs. For example, Tensilica's Xtensa LX2 processor incorporates a technology named Flexible Length Instruction eXtensions (FLIX) that allows multi-operation instructions. The Xtensa C/C++ compiler can freely intermix 32- or 64-bit FLIX instructions with the Xtensa processor's one-operation RISC instructions, which are 16 or 24 bits wide. By packing multiple operations into a wide 32- or 64-bit instruction word and allowing these multi-operation instructions to intermix with shorter RISC instructions, FLIX allows SoC designers to realize VLIW's performance advantages while eliminating the code bloat of early VLIW architectures.
The Infineon Carmel DSP is another VLIW processor core intended for SoC. It uses a similar code density improvement method called configurable long instruction word (CLIW).
Outside embedded processing markets, Intel's Itanium IA-64 explicitly parallel instruction computing (EPIC) and Elbrus 2000 appear to be the only examples of widely used VLIW CPU architectures. However, the EPIC architecture is sometimes distinguished from a pure VLIW architecture, since EPIC advocates full instruction predication, rotating register files, and a very long instruction word that can encode non-parallel instruction groups. VLIWs also gained significant consumer penetration in the graphics processing unit (GPU) market, though both Nvidia and AMD have since moved to RISC architectures to improve performance on non-graphics workloads.
ATI Technologies' (ATI) and Advanced Micro Devices' (AMD) TeraScale microarchitecture for graphics processing units (GPUs) is a VLIW microarchitecture.
In December 2015, the first shipment of PCs based on VLIW CPU Elbrus-4s was made in Russia.
The Neo by REX Computing is a processor consisting of a 2D mesh of VLIW cores aimed at power efficiency.
The Elbrus 2000 () and its successors are Russian 512-bit wide VLIW microprocessors developed by Moscow Center of SPARC Technologies (MCST) and fabricated by TSMC.
Backward compatibility
When silicon technology allowed for wider implementations (with more execution units) to be built, the compiled programs for the earlier generation would not run on the wider implementations, as the encoding of binary instructions depended on the number of execution units of the machine.
Transmeta addressed this issue by including a binary-to-binary software compiler layer (termed code morphing) in their Crusoe implementation of the x86 architecture. This mechanism was advertised to basically recompile, optimize, and translate x86 opcodes at runtime into the CPU's internal machine code. Thus, the Transmeta chip is internally a VLIW processor, effectively decoupled from the x86 CISC instruction set that it executes.
Intel's Itanium architecture (among others) solved the backward-compatibility problem with a more general mechanism. Within each of the multiple-opcode instructions, a bit field is allocated to denote dependency on the prior VLIW instruction within the program instruction stream. These bits are set at compile time, thus relieving the hardware from calculating this dependency information. Having this dependency information encoded in the instruction stream allows wider implementations to issue multiple non-dependent VLIW instructions in parallel per cycle, while narrower implementations would issue a smaller number of VLIW instructions per cycle.
Another perceived deficiency of VLIW designs is the code bloat that occurs when one or more execution units have no useful work to do and thus must execute no-operation (NOP) instructions. This occurs when there are dependencies in the code and the instruction pipelines must be allowed to drain before later operations can proceed.
Since the number of transistors on a chip has grown, the perceived disadvantages of the VLIW have diminished in importance. VLIW architectures are growing in popularity, especially in the embedded system market, where it is possible to customize a processor for an application in a system-on-a-chip.
| Technology | Computer architecture concepts | null |
51714 | https://en.wikipedia.org/wiki/Taylor%27s%20theorem | Taylor's theorem | In calculus, Taylor's theorem gives an approximation of a $k$-times differentiable function around a given point by a polynomial of degree $k$, called the $k$-th-order Taylor polynomial. For a smooth function, the Taylor polynomial is the truncation at the order $k$ of the Taylor series of the function. The first-order Taylor polynomial is the linear approximation of the function, and the second-order Taylor polynomial is often referred to as the quadratic approximation. There are several versions of Taylor's theorem, some giving explicit estimates of the approximation error of the function by its Taylor polynomial.
Taylor's theorem is named after the mathematician Brook Taylor, who stated a version of it in 1715, although an earlier version of the result was already mentioned in 1671 by James Gregory.
Taylor's theorem is taught in introductory-level calculus courses and is one of the central elementary tools in mathematical analysis. It gives simple arithmetic formulas to accurately compute values of many transcendental functions such as the exponential function and trigonometric functions.
It is the starting point of the study of analytic functions, and is fundamental in various areas of mathematics, as well as in numerical analysis and mathematical physics. Taylor's theorem also generalizes to multivariate and vector valued functions. It provided the mathematical basis for some landmark early computing machines: Charles Babbage's Difference Engine calculated sines, cosines, logarithms, and other transcendental functions by numerically integrating the first 7 terms of their Taylor series.
Motivation
If a real-valued function f(x) is differentiable at the point x = a, then it has a linear approximation near this point. This means that there exists a function h1(x) such that

f(x) = f(a) + f′(a)(x − a) + h1(x)(x − a), with lim_{x→a} h1(x) = 0.
Here

P1(x) = f(a) + f′(a)(x − a)

is the linear approximation of f(x) for x near the point a, whose graph y = P1(x) is the tangent line to the graph of f at x = a. The error in the approximation is:

R1(x) = f(x) − P1(x) = h1(x)(x − a).
As x tends to a, this error goes to zero much faster than x − a, making f(x) ≈ P1(x) a useful approximation.
For a better approximation to f(x), we can fit a quadratic polynomial instead of a linear function:

P2(x) = f(a) + f′(a)(x − a) + (f″(a)/2)(x − a)².
Instead of just matching one derivative of f(x) at x = a, this polynomial has the same first and second derivatives, as is evident upon differentiation.
Taylor's theorem ensures that the quadratic approximation is, in a sufficiently small neighborhood of x = a, more accurate than the linear approximation. Specifically,

f(x) = P2(x) + h2(x)(x − a)², with lim_{x→a} h2(x) = 0.
Here the error in the approximation is

R2(x) = f(x) − P2(x) = h2(x)(x − a)²,

which, given the limiting behavior of h2, goes to zero faster than (x − a)² as x tends to a.
Similarly, we might get still better approximations to f if we use polynomials of higher degree, since then we can match even more derivatives with f at the selected base point.
In general, the error in approximating a function by a polynomial of degree k will go to zero much faster than (x − a)^k as x tends to a. However, there are functions, even infinitely differentiable ones, for which increasing the degree of the approximating polynomial does not increase the accuracy of approximation: we say such a function fails to be analytic at x = a: it is not (locally) determined by its derivatives at this point.
Taylor's theorem is of asymptotic nature: it only tells us that the error Rk in an approximation by a k-th order Taylor polynomial Pk tends to zero faster than any nonzero k-th degree polynomial as x → a. It does not tell us how large the error is in any concrete neighborhood of the center of expansion, but for this purpose there are explicit formulas for the remainder term (given below) which are valid under some additional regularity assumptions on f. These enhanced versions of Taylor's theorem typically lead to uniform estimates for the approximation error in a small neighborhood of the center of expansion, but the estimates do not necessarily hold for neighborhoods which are too large, even if the function f is analytic. In that situation one may have to select several Taylor polynomials with different centers of expansion to have reliable Taylor approximations of the original function.
There are several ways we might use the remainder term:
Estimate the error for a polynomial Pk(x) of degree k estimating f(x) on a given interval (a − r, a + r). (Given the interval and degree, we find the error.)
Find the smallest degree k for which the polynomial Pk(x) approximates f(x) to within a given error tolerance on a given interval (a − r, a + r). (Given the interval and error tolerance, we find the degree.)
Find the largest interval (a − r, a + r) on which Pk(x) approximates f(x) to within a given error tolerance. (Given the degree and error tolerance, we find the interval.)
Taylor's theorem in one real variable
Statement of the theorem
The precise statement of the most basic version of Taylor's theorem is as follows. Let k ≥ 1 be an integer and let the function f : R → R be k times differentiable at the point a ∈ R. Then there exists a function hk : R → R such that

f(x) = f(a) + f′(a)(x − a) + (f″(a)/2!)(x − a)² + ⋯ + (f^{(k)}(a)/k!)(x − a)^k + hk(x)(x − a)^k

and lim_{x→a} hk(x) = 0. This is called the Peano form of the remainder.
The polynomial appearing in Taylor's theorem is the k-th order Taylor polynomial

Pk(x) = f(a) + f′(a)(x − a) + (f″(a)/2!)(x − a)² + ⋯ + (f^{(k)}(a)/k!)(x − a)^k

of the function f at the point a. The Taylor polynomial is the unique "asymptotic best fit" polynomial in the sense that if there exists a function hk and a k-th order polynomial p such that

f(x) = p(x) + hk(x)(x − a)^k, with lim_{x→a} hk(x) = 0,
then p = Pk. Taylor's theorem describes the asymptotic behavior of the remainder term

Rk(x) = f(x) − Pk(x),

which is the approximation error when approximating f with its Taylor polynomial. Using the little-o notation, the statement in Taylor's theorem reads as

Rk(x) = o(|x − a|^k) as x → a.
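As an illustrative sketch of these definitions (using the sympy library; the choices f = sin x, a = 0 and k = 3 are arbitrary), one can assemble Pk from the derivatives of f at a and confirm the little-o behavior of the remainder:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.sin(x)       # arbitrary smooth example function
a, k = 0, 3

# k-th order Taylor polynomial: sum of f^(j)(a)/j! * (x - a)^j, j = 0..k
P_k = sum(f.diff(x, j).subs(x, a) / sp.factorial(j) * (x - a)**j
          for j in range(k + 1))
print(sp.expand(P_k))          # x - x**3/6

# The remainder R_k(x) = f(x) - P_k(x) is o(|x - a|^k): the ratio -> 0.
print(sp.limit((f - P_k) / (x - a)**k, x, a))   # 0
```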
Explicit formulas for the remainder
Under stronger regularity assumptions on f there are several precise formulas for the remainder term Rk of the Taylor polynomial, the most common ones being the following.
These refinements of Taylor's theorem are usually proved using the mean value theorem, whence the name mean-value forms of the remainder. Additionally, notice that this is precisely the mean value theorem when k = 0. Also other similar expressions can be found. For example, if G(t) is continuous on the closed interval and differentiable with a non-vanishing derivative on the open interval between a and x, then

Rk(x) = (f^{(k+1)}(ξ)/k!)(x − ξ)^k · (G(x) − G(a))/G′(ξ)

for some number ξ between a and x. This version covers the Lagrange and Cauchy forms of the remainder as special cases, and is proved below using Cauchy's mean value theorem. The Lagrange form is obtained by taking G(t) = (x − t)^{k+1} and the Cauchy form is obtained by taking G(t) = t − a.
The statement for the integral form of the remainder,

Rk(x) = ∫_a^x (f^{(k+1)}(t)/k!)(x − t)^k dt,

is more advanced than the previous ones, and requires understanding of Lebesgue integration theory for the full generality. However, it holds also in the sense of the Riemann integral provided the (k + 1)-th derivative of f is continuous on the closed interval [a, x].
Due to the absolute continuity of f^{(k)} on the closed interval between a and x, its derivative f^{(k+1)} exists as an L¹-function, and the result can be proven by a formal calculation using the fundamental theorem of calculus and integration by parts.
Estimates for the remainder
It is often useful in practice to be able to estimate the remainder term appearing in the Taylor approximation, rather than having an exact formula for it. Suppose that f is (k + 1)-times continuously differentiable in an interval I containing a. Suppose that there are real constants q and Q such that

q ≤ f^{(k+1)}(x) ≤ Q

throughout I. Then the remainder term satisfies the inequality

q (x − a)^{k+1}/(k + 1)! ≤ Rk(x) ≤ Q (x − a)^{k+1}/(k + 1)!

if x > a, and a similar estimate if x < a. This is a simple consequence of the Lagrange form of the remainder. In particular, if

|f^{(k+1)}(x)| ≤ M

on an interval I = (a − r, a + r) with some r > 0, then

|Rk(x)| ≤ M |x − a|^{k+1}/(k + 1)! ≤ M r^{k+1}/(k + 1)!

for all x ∈ (a − r, a + r). The second inequality is called a uniform estimate, because it holds uniformly for all x on the interval (a − r, a + r).
Example
Suppose that we wish to find the approximate value of the function f(x) = e^x on the interval [−1, 1] while ensuring that the error in the approximation is no more than 10^−5. In this example we pretend that we only know the following properties of the exponential function:

e^0 = 1, d/dx e^x = e^x, and e^x > 0 for all x ∈ R.

From these properties it follows that f^{(k)}(x) = e^x for all k, and in particular, f^{(k)}(0) = 1. Hence the k-th order Taylor polynomial of f at 0 and its remainder term in the Lagrange form are given by

Pk(x) = 1 + x + x²/2! + ⋯ + x^k/k!,  Rk(x) = (e^ξ/(k + 1)!) x^{k+1},
where ξ is some number between 0 and x. Since e^x is increasing (its derivative is e^x > 0), we can simply use e^ξ ≤ e^0 = 1 for x ∈ [−1, 0] to estimate the remainder on the subinterval [−1, 0]. To obtain an upper bound for the remainder on [0, 1], we use the property e^ξ < e^x for 0 < ξ < x to estimate

e^x = 1 + x + (e^ξ/2) x² < 1 + x + (e^x/2) x², 0 < x ≤ 1,

using the second order Taylor expansion. Then we solve for e^x to deduce that

e^x ≤ (1 + x)/(1 − x²/2) = 2(1 + x)/(2 − x²) ≤ 4, 0 ≤ x ≤ 1,

simply by maximizing the numerator and minimizing the denominator. Combining these estimates for e^x we see that

|Rk(x)| ≤ 4|x|^{k+1}/(k + 1)! ≤ 4/(k + 1)!, −1 ≤ x ≤ 1,

so the required precision is certainly reached when

4/(k + 1)! < 10^−5, i.e. (k + 1)! > 4 × 10^5, which holds for k = 9.
(See factorial or compute by hand the values 9! = 362,880 and 10! = 3,628,800.) As a conclusion, Taylor's theorem leads to the approximation

e^x = 1 + x + x²/2! + ⋯ + x⁹/9! + R9(x), |R9(x)| < 10^−5, −1 ≤ x ≤ 1.

For instance, this approximation provides a decimal expression e ≈ 2.71828, correct up to five decimal places.
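A quick numerical check of this conclusion (a sketch that assumes the order k = 9 derived above and samples the interval on a grid):

```python
import math

k = 9
# Worst error of the k-th order Taylor polynomial of exp over [-1, 1]
worst = max(
    abs(math.exp(t) - sum(t**j / math.factorial(j) for j in range(k + 1)))
    for t in (i / 1000 - 1 for i in range(2001))
)
print(worst)                      # ~3e-7, comfortably below 1e-5
print(4 / math.factorial(k + 1))  # the a priori bound 4/10! ~ 1.1e-6
```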
Relationship to analyticity
Taylor expansions of real analytic functions
Let I ⊂ R be an open interval. By definition, a function f : I → R is real analytic if it is locally defined by a convergent power series. This means that for every a ∈ I there exists some r > 0 and a sequence of coefficients ck ∈ R such that (a − r, a + r) ⊂ I and

f(x) = Σ_{k=0}^∞ ck (x − a)^k = c0 + c1(x − a) + c2(x − a)² + ⋯, |x − a| < r.

In general, the radius of convergence R of a power series can be computed from the Cauchy–Hadamard formula

1/R = limsup_{k→∞} |ck|^{1/k}.
This result is based on comparison with a geometric series, and the same method shows that if the power series based on a converges for some b ∈ R, it must converge uniformly on the closed interval [a − rb, a + rb], where rb = |b − a|. Here only the convergence of the power series is considered, and it might well be that (a − R, a + R) extends beyond the domain I of f.
The Taylor polynomials of the real analytic function f at a are simply the finite truncations

Pk(x) = Σ_{j=0}^k cj (x − a)^j, cj = f^{(j)}(a)/j!,

of its locally defining power series, and the corresponding remainder terms are locally given by the analytic functions

Rk(x) = Σ_{j=k+1}^∞ cj (x − a)^j = (x − a)^{k+1} hk(x), |x − a| < r.

Here the functions

hk(x) = Σ_{j=0}^∞ c_{k+1+j} (x − a)^j

are also analytic, since their defining power series have the same radius of convergence as the original series. Assuming that [a − r, a + r] ⊂ I and r < R, all these series converge uniformly on (a − r, a + r). Naturally, in the case of analytic functions one can estimate the remainder term Rk(x) by the tail of the sequence of derivatives f^{(j)}(a) at the center of the expansion, but using complex analysis also another possibility arises, which is described below.
Taylor's theorem and convergence of Taylor series
The Taylor series of f will converge in some interval in which all its derivatives are bounded and do not grow too fast as k goes to infinity. (However, even if the Taylor series converges, it might not converge to f, as explained below; f is then said to be non-analytic.)
One might think of the Taylor series

Tf(x) = Σ_{k=0}^∞ (f^{(k)}(a)/k!)(x − a)^k

of an infinitely many times differentiable function f : R → R as its "infinite order Taylor polynomial" at a. Now the estimates for the remainder imply that if, for any r, the derivatives of f are known to be bounded over (a − r, a + r), then for any order k and for any r > 0 there exists a constant Mk,r > 0 such that

|Rk(x)| ≤ Mk,r |x − a|^{k+1}/(k + 1)!

for every x ∈ (a − r, a + r). Sometimes the constants Mk,r can be chosen in such a way that Mk,r is bounded above, for fixed r and all k. Then the Taylor series of f converges uniformly to some analytic function Tf. (One also gets convergence even if Mk,r is not bounded above as long as it grows slowly enough.)
The limit function Tf is by definition always analytic, but it is not necessarily equal to the original function f, even if f is infinitely differentiable. In this case, we say f is a non-analytic smooth function, for example a flat function:

f(x) = e^{−1/x²} for x > 0, f(x) = 0 for x ≤ 0.

Using the chain rule repeatedly by mathematical induction, one shows that for any order k,

f^{(k)}(x) = (pk(x)/x^{3k}) e^{−1/x²} for x > 0, and f^{(k)}(x) = 0 for x ≤ 0,

for some polynomial pk of degree 2(k − 1). The function e^{−1/x²} tends to zero faster than any polynomial as x → 0, so f is infinitely many times differentiable and f^{(k)}(0) = 0 for every positive integer k. The above results all hold in this case:
The Taylor series of f converges uniformly to the zero function Tf(x) = 0, which is analytic with all coefficients equal to zero.
The function f is unequal to this Taylor series, and hence non-analytic.
For any order k ∈ N and radius r > 0 there exists Mk,r > 0 satisfying the remainder bound above.
However, as k increases for fixed r, the value of Mk,r grows more quickly than r^k, and the error does not go to zero.
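A short symbolic sketch of this failure of analyticity (using sympy on the branch x > 0 of the flat function above):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
f = sp.exp(-1 / x**2)   # the x > 0 branch of the flat function

# Every derivative tends to 0 as x -> 0+, so all Taylor coefficients
# of the flat function at 0 vanish ...
for k in range(4):
    print(k, sp.limit(f.diff(x, k), x, 0, '+'))   # prints 0 each time

# ... yet f itself is nonzero for x > 0, so f is not equal to its
# (identically zero) Taylor series: the function is not analytic at 0.
print(f.subs(x, sp.Rational(1, 2)))               # exp(-4), not 0
```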
Taylor's theorem in complex analysis
Taylor's theorem generalizes to functions f : C → C which are complex differentiable in an open subset U ⊂ C of the complex plane. However, its usefulness is dwarfed by other general theorems in complex analysis. Namely, stronger versions of related results can be deduced for complex differentiable functions f : U → C using Cauchy's integral formula as follows.
Let r > 0 such that the closed disk B(z, r) ∪ S(z, r) is contained in U. Then Cauchy's integral formula with a positive parametrization γ(t) = z + r e^{it}, t ∈ [0, 2π], of the circle S(z, r) gives

f(z) = (1/2πi) ∮_γ f(w)/(w − z) dw, f′(z) = (1/2πi) ∮_γ f(w)/(w − z)² dw, …, f^{(k)}(z) = (k!/2πi) ∮_γ f(w)/(w − z)^{k+1} dw.
Here all the integrands are continuous on the circle S(z, r), which justifies differentiation under the integral sign. In particular, if f is once complex differentiable on the open set U, then it is actually infinitely many times complex differentiable on U. One also obtains Cauchy's estimate

|f^{(k)}(z)| ≤ (k!/2π) ∮_γ Mr/r^{k+1} |dw| = k! Mr/r^k, where Mr = max_{|w−z|=r} |f(w)|,

for any z ∈ U and r > 0 such that B(z, r) ∪ S(z, r) ⊂ U. The estimate implies that the complex Taylor series
Tf(z) = Σ_{k=0}^∞ (f^{(k)}(c)/k!)(z − c)^k

of f converges uniformly on any open disk B(c, r) ⊂ U with S(c, r) ⊂ U into some function Tf. Furthermore, using the contour integral formulas for the derivatives f^{(k)}(c), one sees that

Tf(z) = f(z), z ∈ B(c, r),

so any complex differentiable function f in an open set U ⊂ C is in fact complex analytic. All that is said for real analytic functions here holds also for complex analytic functions, with the open interval I replaced by an open subset U ⊂ C and a-centered intervals (a − r, a + r) replaced by c-centered disks B(c, r). In particular, the Taylor expansion holds in the form

f(z) = Pk(z) + Rk(z), z ∈ B(c, r),
where the remainder term Rk is complex analytic. Methods of complex analysis provide some powerful results regarding Taylor expansions. For example, using Cauchy's integral formula for any positively oriented Jordan curve γ which parametrizes the boundary ∂W of a region W ⊂ U, one obtains expressions for the derivatives f^{(j)}(c) as above, and modifying slightly the computation for Tf(z) = f(z), one arrives at the exact formula

Rk(z) = ((z − c)^{k+1}/2πi) ∮_γ f(w)/((w − c)^{k+1}(w − z)) dw, z ∈ W.

The important feature here is that the quality of the approximation by a Taylor polynomial on the region W is dominated by the values of the function f itself on the boundary ∂W. Similarly, applying Cauchy's estimates to the series expression for the remainder, one obtains the uniform estimates

|Rk(z)| ≤ Σ_{j>k} Mr |z − c|^j/r^j = Mr β^{k+1}/(1 − β), β = |z − c|/r < 1.
Example
The function

f(x) = 1/(1 + x²)

is real analytic, that is, locally determined by its Taylor series. This function illustrates the fact that some elementary functions cannot be approximated by Taylor polynomials in neighborhoods of the center of expansion which are too large. This kind of behavior is easily understood in the framework of complex analysis. Namely, the function f extends into a meromorphic function

f(z) = 1/(1 + z²)

on the compactified complex plane. It has simple poles at z = i and z = −i, and it is analytic elsewhere. Now its Taylor series centered at z0 converges on any disc B(z0, r) with r less than the distance from z0 to the nearest pole. Therefore, the Taylor series of f centered at 0 converges on B(0, 1) and does not converge for any z ∈ C with |z| > 1 due to the poles at i and −i. For the same reason the Taylor series of f centered at 1 converges on B(1, √2) and does not converge for any z ∈ C with |z − 1| > √2.
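A numeric sketch of this convergence boundary, using partial sums of the Taylor series of 1/(1 + x²) at 0, namely Σ (−1)^k x^{2k}:

```python
# Partial sums of the Taylor series of f(x) = 1/(1 + x^2) centered at 0.
def partial_sum(x, n):
    return sum((-1) ** k * x ** (2 * k) for k in range(n + 1))

for x in (0.5, 0.9, 1.1):   # inside, near, and outside the unit disc
    exact = 1 / (1 + x ** 2)
    print(x, [abs(partial_sum(x, n) - exact) for n in (5, 20, 50)])
# For |x| < 1 the error shrinks with n; for |x| > 1 it blows up,
# reflecting the poles of f at i and -i at distance 1 from the origin.
```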
Generalizations of Taylor's theorem
Higher-order differentiability
A function f : R^n → R is differentiable at a ∈ R^n if and only if there exists a linear functional L : R^n → R and a function h : R^n → R such that

f(x) = f(a) + L(x − a) + h(x)|x − a|, with lim_{x→a} h(x) = 0.

If this is the case, then L = df(a) is the (uniquely defined) differential of f at the point a. Furthermore, then the partial derivatives of f exist at a and the differential of f at a is given by

df(a)(v) = (∂f/∂x1)(a) v1 + ⋯ + (∂f/∂xn)(a) vn.
Introduce the multi-index notation

|α| = α1 + ⋯ + αn, α! = α1! ⋯ αn!, x^α = x1^{α1} ⋯ xn^{αn}

for α ∈ N^n and x ∈ R^n. If all the k-th order partial derivatives of f : R^n → R are continuous at a ∈ R^n, then, by Clairaut's theorem, one can change the order of mixed derivatives at a, so the short-hand notation

D^α f = ∂^{|α|} f/(∂x1^{α1} ⋯ ∂xn^{αn})

for the higher order partial derivatives is justified in this situation. The same is true if all the (k − 1)-th order partial derivatives of f exist in some neighborhood of a and are differentiable at a. Then we say that f is k times differentiable at the point a.
Taylor's theorem for multivariate functions
Using notations of the preceding section, one has the following theorem. Let f : R^n → R be k-times continuously differentiable at the point a ∈ R^n. Then there exist functions hα : R^n → R, where |α| = k, such that

f(x) = Σ_{|α| ≤ k} (D^α f(a)/α!)(x − a)^α + Σ_{|α| = k} hα(x)(x − a)^α, with lim_{x→a} hα(x) = 0.
If the function f : R^n → R is (k + 1)-times continuously differentiable in a closed ball B = {y ∈ R^n : |a − y| ≤ r} for some r > 0, then one can derive an exact formula for the remainder in terms of (k + 1)-th order partial derivatives of f in this neighborhood. Namely,

f(x) = Σ_{|α| ≤ k} (D^α f(a)/α!)(x − a)^α + Σ_{|β| = k+1} Rβ(x)(x − a)^β,
Rβ(x) = (|β|/β!) ∫_0^1 (1 − t)^{|β|−1} (D^β f)(a + t(x − a)) dt.

In this case, due to the continuity of (k + 1)-th order partial derivatives in the compact set B, one immediately obtains the uniform estimates

|Rβ(x)| ≤ (1/β!) max_{|α| = |β|} max_{y ∈ B} |D^α f(y)|, x ∈ B.
Example in two dimensions
For example, the third-order Taylor polynomial of a smooth function f : R² → R is, denoting x − a = v,

P3(x) = f(a) + (∂f/∂x1)(a) v1 + (∂f/∂x2)(a) v2 + (∂²f/∂x1²)(a) v1²/2! + (∂²f/∂x1∂x2)(a) v1 v2 + (∂²f/∂x2²)(a) v2²/2! + (∂³f/∂x1³)(a) v1³/3! + (∂³f/∂x1²∂x2)(a) v1² v2/2! + (∂³f/∂x1∂x2²)(a) v1 v2²/2! + (∂³f/∂x2³)(a) v2³/3!.
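A sketch computing such a multivariate Taylor polynomial symbolically (using sympy; the choice f(x, y) = e^x sin y and center (0, 0) is arbitrary):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.exp(x) * sp.sin(y)   # arbitrary smooth example function
a = (0, 0)                  # center of expansion

# Third-order Taylor polynomial: sum over multi-indices alpha = (i, j)
# with |alpha| = i + j <= 3 of D^alpha f(a) / alpha! * (x - a)^alpha.
P3 = sum(
    f.diff(x, i, y, j).subs({x: a[0], y: a[1]})
    / (sp.factorial(i) * sp.factorial(j)) * x**i * y**j
    for i in range(4) for j in range(4) if i + j <= 3
)
print(sp.expand(P3))   # x**2*y/2 + x*y + y - y**3/6
```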
Proofs
Proof for Taylor's theorem in one real variable
Let

hk(x) = (f(x) − Pk(x))/(x − a)^k for x ≠ a, and hk(a) = 0,

where, as in the statement of Taylor's theorem,

Pk(x) = f(a) + f′(a)(x − a) + (f″(a)/2!)(x − a)² + ⋯ + (f^{(k)}(a)/k!)(x − a)^k.

It is sufficient to show that

lim_{x→a} hk(x) = 0.
The proof here is based on repeated application of L'Hôpital's rule. Note that, for each j = 0, 1, …, k − 1, f^{(j)}(a) = Pk^{(j)}(a). Hence each of the first k − 1 derivatives of the numerator in hk(x) vanishes at x = a, and the same is true of the denominator. Also, since the condition that the function f be k times differentiable at a point requires differentiability up to order k − 1 in a neighborhood of said point (this is true, because differentiability requires a function to be defined in a whole neighborhood of a point), the numerator and its k − 2 derivatives are differentiable in a neighborhood of a. Clearly, the denominator also satisfies said condition, and additionally, doesn't vanish unless x = a, therefore all conditions necessary for L'Hôpital's rule are fulfilled, and its use is justified. So

lim_{x→a} (f(x) − Pk(x))/(x − a)^k = lim_{x→a} (d^{k−1}/dx^{k−1})(f(x) − Pk(x)) / ((d^{k−1}/dx^{k−1})(x − a)^k) = (1/k!) lim_{x→a} (f^{(k−1)}(x) − Pk^{(k−1)}(x))/(x − a) = (1/k!)(f^{(k)}(a) − f^{(k)}(a)) = 0,

where the second-to-last equality follows by the definition of the derivative at x = a.
Alternate proof for Taylor's theorem in one real variable
Let f(x) be any real-valued continuous function to be approximated by the Taylor polynomial.
Step 1: Let F and G be functions. Set F and G to be

F(t) = f(t) − Σ_{j=0}^{n−1} (f^{(j)}(a)/j!)(t − a)^j, G(t) = (t − a)^n.

Step 2: Properties of F and G:

F(a) = f(a) − f(a) − 0 − ⋯ − 0 = 0, G(a) = 0.

Similarly,

F′(a) = 0, …, F^{(n−1)}(a) = 0 and G′(a) = 0, …, G^{(n−1)}(a) = 0,

since each derivative of order j < n of the subtracted polynomial agrees with f^{(j)}(a) at t = a, and G^{(j)}(t) = n(n − 1)⋯(n − j + 1)(t − a)^{n−j} vanishes at t = a for j < n.
Step 3: Use the Cauchy mean value theorem

Let f1 and g1 be continuous functions on [a, b], differentiable on (a, b), with g1′(t) ≠ 0 for all t ∈ (a, b). Then there exists c ∈ (a, b) such that

f1′(c)/g1′(c) = (f1(b) − f1(a))/(g1(b) − g1(a)).

Since a < x we can work with the interval [a, x]. Applying this to F and G, and noting that F(a) = G(a) = 0, we get

F(x)/G(x) = F′(c1)/G′(c1)

for some c1 ∈ (a, x). This can also be performed for (F′, G′), since F′(a) = G′(a) = 0:

F′(c1)/G′(c1) = F″(c2)/G″(c2)

for some c2 ∈ (a, c1). This can be continued to c_n. This gives a partition in (a, x):

a < c_n < c_{n−1} < ⋯ < c_1 < x

with

F(x)/G(x) = F′(c_1)/G′(c_1) = ⋯ = F^{(n)}(c_n)/G^{(n)}(c_n).

Set c = c_n:
Step 4: Substitute back
By the power rule, repeated differentiation of G(t) = (t − a)^n gives G^{(n)}(c) = n(n − 1)⋯1 = n!, so:

F^{(n)}(c)/G^{(n)}(c) = F^{(n)}(c)/n! = f^{(n)}(c)/n!,

where the last equality holds because the polynomial subtracted from f in the definition of F has degree n − 1, so its n-th derivative vanishes. This leads to:

(f(x) − Σ_{j=0}^{n−1} (f^{(j)}(a)/j!)(x − a)^j)/(x − a)^n = F(x)/G(x) = f^{(n)}(c)/n!.

By rearranging, we get:

f(x) = Σ_{j=0}^{n−1} (f^{(j)}(a)/j!)(x − a)^j + (f^{(n)}(c)/n!)(x − a)^n,

which is Taylor's theorem with the remainder in the mean value (Lagrange) form, for some c between a and x.
Derivation for the mean value forms of the remainder
Let G be any real-valued function, continuous on the closed interval between a and x and differentiable with a non-vanishing derivative on the open interval between a and x, and define

F(t) = f(t) + f′(t)(x − t) + (f″(t)/2!)(x − t)² + ⋯ + (f^{(k)}(t)/k!)(x − t)^k

for t between a and x. Then, by Cauchy's mean value theorem,

F′(ξ)/G′(ξ) = (F(x) − F(a))/(G(x) − G(a))  (∗)
for some ξ on the open interval between a and x. Note that here the numerator F(x) − F(a) = Rk(x) is exactly the remainder of the Taylor polynomial for f(x). Compute

F′(t) = f′(t) + (f″(t)(x − t) − f′(t)) + ((f‴(t)/2!)(x − t)² − f″(t)(x − t)) + ⋯ + ((f^{(k+1)}(t)/k!)(x − t)^k − (f^{(k)}(t)/(k − 1)!)(x − t)^{k−1}) = (f^{(k+1)}(t)/k!)(x − t)^k,

plug it into (∗) and rearrange terms to find that

Rk(x) = (f^{(k+1)}(ξ)/k!)(x − ξ)^k (G(x) − G(a))/G′(ξ).
This is the form of the remainder term mentioned after the actual statement of Taylor's theorem with remainder in the mean value form.
The Lagrange form of the remainder is found by choosing G(t) = (x − t)^{k+1} and the Cauchy form by choosing G(t) = t − a.
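Spelling out those two substitutions in the general mean-value form above (a routine computation, included for convenience):

```latex
% General form, with \xi between a and x:
%   R_k(x) = \frac{f^{(k+1)}(\xi)}{k!}(x-\xi)^k \,
%            \frac{G(x)-G(a)}{G'(\xi)}

% Lagrange: G(t) = (x-t)^{k+1}, so G'(\xi) = -(k+1)(x-\xi)^k and
R_k(x) = \frac{f^{(k+1)}(\xi)}{(k+1)!}\,(x-a)^{k+1}

% Cauchy: G(t) = t - a, so G'(\xi) = 1 and
R_k(x) = \frac{f^{(k+1)}(\xi)}{k!}\,(x-\xi)^k\,(x-a)
```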
Remark. Using this method one can also recover the integral form of the remainder by choosing

G(t) = ∫_a^t (f^{(k+1)}(s)/k!)(x − s)^k ds,

but the requirements for f needed for the use of the mean value theorem are too strong, if one aims to prove the claim in the case that f^{(k)} is only absolutely continuous. However, if one uses the Riemann integral instead of the Lebesgue integral, the assumptions cannot be weakened.
Derivation for the integral form of the remainder
Due to the absolute continuity of f^{(k)} on the closed interval between a and x, its derivative f^{(k+1)} exists as an L¹-function, and we can use the fundamental theorem of calculus and integration by parts. This same proof applies for the Riemann integral assuming that f^{(k)} is continuous on the closed interval and differentiable on the open interval between a and x, and this leads to the same result as using the mean value theorem.
The fundamental theorem of calculus states that

f(x) = f(a) + ∫_a^x f′(t) dt.

Now we can integrate by parts and use the fundamental theorem of calculus again to see that

f(x) = f(a) + [(t − x) f′(t)]_{t=a}^{t=x} − ∫_a^x (t − x) f″(t) dt = f(a) + f′(a)(x − a) + ∫_a^x f″(t)(x − t) dt,
which is exactly Taylor's theorem with remainder in the integral form in the case k = 1. The general statement is proved using induction. Suppose that

f(x) = f(a) + (f′(a)/1!)(x − a) + ⋯ + (f^{(k)}(a)/k!)(x − a)^k + ∫_a^x (f^{(k+1)}(t)/k!)(x − t)^k dt.  (∗∗)

Integrating the remainder term by parts we arrive at

∫_a^x (f^{(k+1)}(t)/k!)(x − t)^k dt = (f^{(k+1)}(a)/(k + 1)!)(x − a)^{k+1} + ∫_a^x (f^{(k+2)}(t)/(k + 1)!)(x − t)^{k+1} dt.
Substituting this into the formula (∗∗) shows that if it holds for the value k, it must also hold for the value k + 1. Therefore, since it holds for k = 1, it must hold for every positive integer k.
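For reference, the integration-by-parts step used in the induction can be written out as follows:

```latex
\int_a^x \frac{f^{(k+1)}(t)}{k!}(x-t)^k \, dt
  = \Bigl[-\frac{f^{(k+1)}(t)}{(k+1)!}(x-t)^{k+1}\Bigr]_a^x
    + \int_a^x \frac{f^{(k+2)}(t)}{(k+1)!}(x-t)^{k+1} \, dt
  = \frac{f^{(k+1)}(a)}{(k+1)!}(x-a)^{k+1}
    + \int_a^x \frac{f^{(k+2)}(t)}{(k+1)!}(x-t)^{k+1} \, dt
```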
Derivation for the remainder of multivariate Taylor polynomials
We prove the special case, where f : R^n → R has continuous partial derivatives up to the order k + 1 in some closed ball B with center a. The strategy of the proof is to apply the one-variable case of Taylor's theorem to the restriction of f to the line segment adjoining x and a. Parametrize the line segment between a and x by u(t) = a + t(x − a). We apply the one-variable version of Taylor's theorem to the function g(t) = f(u(t)):

f(x) = g(1) = g(0) + Σ_{j=1}^k (1/j!) g^{(j)}(0) + ∫_0^1 ((1 − t)^k/k!) g^{(k+1)}(t) dt.
Applying the chain rule for several variables gives

g^{(j)}(t) = (d^j/dt^j) f(a + t(x − a)) = Σ_{|α| = j} (j!/α!) (D^α f)(a + t(x − a)) (x − a)^α,

where j!/α! = j!/(α1! ⋯ αn!) is the multinomial coefficient. Since (1/j!) g^{(j)}(0) = Σ_{|α| = j} (1/α!)(D^α f)(a)(x − a)^α, we get:

f(x) = Σ_{|α| ≤ k} (1/α!)(D^α f)(a)(x − a)^α + Σ_{|α| = k+1} ((k + 1)/α!)(x − a)^α ∫_0^1 (1 − t)^k (D^α f)(a + t(x − a)) dt.
| Mathematics | Differential calculus | null |
51727 | https://en.wikipedia.org/wiki/Moped | Moped | A moped is a type of small motorcycle, generally having a less stringent licensing requirement than full motorcycles or automobiles. Historically, the term exclusively meant a similar vehicle with both bicycle pedals and a motorcycle engine. Mopeds typically travel only slightly faster than bicycles on public roads.
Traditional mopeds are distinguishable by their pedals, similar to a bicycle. Some mopeds have a step-through frame design, while others have motorcycle frame designs, including a backbone and a raised fuel tank, mounted directly between the saddle and the head tube. Some resemble motorized bicycles, similar to modern ebikes. Most are similar to a regular motorcycle but with pedals and a crankset that may be used with or instead of motor drive. Although mopeds usually have two wheels, some jurisdictions classify low-powered three- or four-wheeled vehicles (including ATVs and go-karts) as mopeds.
In some countries, a moped can be any motorcycle with an engine capacity below a certain limit (most commonly 50 cc or lower).
Etymology
The word moped was coined by the Swedish journalist Harald Nielsen in 1952, as a portmanteau of the Swedish words motor and pedaler. The claimed derivation from the term motor-velocipede is incorrect. According to Douglas Harper, the Swedish terms originated from "(trampcykel med) mo(tor och) ped(aler)", which means "pedal cycle with engine and pedals" (the earliest versions had auxiliary pedals). Like some of the earliest two-wheeled motorcycles, all mopeds were once equipped with bicycle pedals.
History
The term "moped" now only applies to low-power (often super-economy) vehicles, but pedals were fitted to some early motorcycles, such as the 1912 Douglas. Pedaling away from stationary was a great improvement over "run and jump" and light pedal assistance (LPA) was valuable for climbing hills. Better transmissions with wider ranges, better clutches and much better engine performance made pedals obsolete on most motorcycles by 1918 but the pedals on mopeds remained valuable for their original purposes as late as the 1990s.
The earliest mopeds were bicycles with a helper motor in various locations, for example on top of the front wheel; they were also called cyclemotors. An example of that type is the VéloSoleX brand, which simply has a roller driving the front tire.
A more innovative design was known in the UK as the Cyclemaster. This had a complete powered rear wheel which was simply substituted for the bicycle rear wheel; it originated from a design by two DKW engineers in Germany. Slightly larger machines, commonly with a larger engine, were known as autocycles. On the other hand, some mopeds, such as the Czech-made Jawa, were derived from motorcycles.
A further category of low-powered two-wheelers exists today in some jurisdictions for bicycles with helper motors; these are often defined as power-assisted bicycles or motorized bicycles. Other jurisdictions may categorize the same machines as mopeds, creating a certain amount of confusion. In many countries three-wheelers and microcars are classified as mopeds or variations thereof. This practice is not restricted to the third world; France and Belgium classify microcars such as the Aixam similarly, or as "light quadricycles". The Ariel 3, a motorised three-wheeler, is classed as a moped.
As of 1977, the Vienna Convention on Road Traffic considers a moped to be any two-wheeled or three-wheeled vehicle fitted with an internal combustion engine having a cylinder capacity not exceeding 50 cc.
Emissions
Mopeds can achieve very high fuel economy. The emissions of mopeds have been the subject of multiple studies. Studies have found that two-stroke 50 cc mopeds, with and without catalytic converters, emit ten to thirty times the hydrocarbons and particulate emissions of the outdated Euro 3 automobile standards. In the same study, four-stroke mopeds, with and without catalytic converters, emitted three to eight times the hydrocarbons and particulate emissions of the Euro 3 automobile standards. Approximate parity with automobiles was achieved for NOx emissions in these studies. Emissions performance was tested on a g/km basis and was unaffected by fuel economy. Currently in the United States, the EPA allows motorcycles, scooters, and mopeds below a certain engine displacement to emit ten times the NOx and six times the CO of the median Tier II bin 5 automobile regulations. An additional air quality problem can arise from the use of moped and scooter transportation instead of automobiles, as existing transportation infrastructure can support a higher density of such motorized vehicles.
Safety
Safely riding a moped mostly requires the same considerations as safely riding a motorcycle; however, the lower speeds reduce some dangers and increase others. The biggest danger is that other traffic may not notice the presence of a moped; bright clothes and reflective fittings help. Drivers may even see the moped, recognize it as harmless to them and simply forget it is there, pulling out of side-turnings into its path. Similarly, a car approaching a moped from behind will close on it more quickly than the driver expects, and the driver's attention may be more attuned to other automobile traffic than to the moped, increasing the likelihood of an accident. This is a particular problem for mopeds used on high-speed roads on which they were never intended to travel.
Mopeds are often tuned for higher speed, power or engine displacement. For this to be legally allowed in most jurisdictions, such vehicles must be re-registered as motorcycles, and their driver's license requirements, taxes, insurance costs, and minimum driver age may be higher. A tuned vehicle, not designed for higher speeds, is not as safe as a purpose-designed motorcycle. A survey of Finnish vocational school and gymnasium students found that 80% and 70% of their respective mopeds were tuned; only 10% of trade school students had a moped that conformed to legislation. The average maximum speed was 72 km/h (45 mph), far higher than the legally allowable 45 km/h (28 mph). Another study reported that of school-age moped owners, 50% of boys and 15% of girls had an illegally tuned moped.
Individual countries/regions
Sports moped
In the United Kingdom during the 1970s, a high-performance derivation of the moped concept was developed, aimed at 16-year-olds. It was created in order to circumvent governmental legislation aimed at forcing young motorcycle riders off the road. This new law, called the "Sixteener Law", was introduced by John Peyton, the then Conservative Party Minister for Transport, in 1971. It forbade 16-year-olds from riding the larger-capacity motorcycles they had ridden before, limiting them to 50 cc machines until they were 17. The law provoked motorcycle manufacturers to develop a new class of motorcycle, then called "sports mopeds" or, colloquially, "sixteener specials", which was subject to much criticism. The market for these was primarily young males.
Sports mopeds were ostensibly motorcycles, in some cases capable of considerably higher speeds than ordinary mopeds, with bicycle-style pedals added to them which the law required to be capable of propelling the vehicle. Models were produced by the Japanese manufacturers Honda, Yamaha and Suzuki, and European companies such as Puch, Fantic, Gilera, Gitane and Garelli from 1972 onwards, the most famous of which was the Yamaha FS1-E. They included roadsters, enduros and motocrossers, cafe racers, choppers and scooters, and led to a boom in interest in motorcycling similar to the early-1960s rocker period. The government responded again by bringing in even more restrictive legislation in 1977, which limited mopeds by weight and top speed. The move contributed to the decline of the UK motorcycle market. In Continental Europe no such restrictions existed, and such vehicles could be ridden by 14-year-olds.
| Technology | Motorized road transport | null |
51734 | https://en.wikipedia.org/wiki/Conveyor%20belt | Conveyor belt | A conveyor belt is the carrying medium of a belt conveyor system (often shortened to belt conveyor). A belt conveyor system is one of many types of conveyor systems. A belt conveyor system consists of two or more pulleys (sometimes referred to as drums), with a closed loop of carrying medium, the conveyor belt, that rotates about them. One or both of the pulleys are powered, moving the belt and the material on the belt forward. The powered pulley is called the drive pulley while the unpowered pulley is called the idler pulley. There are two main industrial classes of belt conveyors: those used in general material handling, such as moving boxes along inside a factory, and those used in bulk material handling, such as transporting large volumes of resources and agricultural materials, such as grain, salt, coal, ore, sand, overburden and more.
Overview
Conveyors are durable and reliable components used in automated distribution and warehousing, as well as manufacturing and production facilities. In combination with computer-controlled pallet handling equipment, this allows for more efficient retail, wholesale, and manufacturing distribution. It is considered a labor-saving system that allows large volumes to move rapidly through a process, allowing companies to ship or receive higher volumes with smaller storage space and lower labor expense.
Belt conveyors are the most commonly used powered conveyors because they are the most versatile and the least expensive. Products are conveyed directly on the belt so both regular and irregular shaped objects, large or small, light and heavy, can be transported successfully. Belt conveyors are also manufactured with curved sections that use tapered rollers and curved belting to convey products around a corner. These conveyor systems are commonly used in postal sorting offices and airport baggage handling systems.
Belt conveyors are generally fairly similar in construction, consisting of a metal frame with rollers at either end of a flat metal bed. Rubber conveyor belts are commonly used to convey items with irregular bottom surfaces, small items that would fall in between rollers (e.g. a sushi conveyor bar), or bags of product that would sag between rollers. The belt is looped around each of the rollers, and when one of the rollers is powered (by an electric motor) the belting slides across the solid metal frame bed, moving the product. In heavy-use applications, the beds over which the belting is pulled are replaced with rollers. The rollers allow weight to be conveyed, as they reduce the friction generated by the heavier loading on the belting. The exception to the standard belt conveyor construction is the sandwich belt conveyor, which uses two conveyor belts instead of one. These two conventional conveyor belts are positioned face to face, to firmly contain the items being carried in a "sandwich-like" hold.
Belt conveyors can be used to transport products in a straight line or through changes in elevation or direction. For conveying bulk materials, over gentle slopes or gentle curvatures, a troughed belt conveyor is used. The trough of the belt ensures that the flowable material is contained within the edges of the belt. The trough is achieved by keeping the idler rollers in an angle to the horizontal at the sides of the idler frame. A pipe conveyor is used for material travel paths that require sharper bends and inclines up to 35 degrees. A pipe conveyor features the edges of the belt being rolled together to form a circular section like a pipe. Like a troughed belt conveyor, a pipe conveyor also uses idler rollers. However, in this case, the idler frame completely surrounds the conveyor belt helping it to retain the pipe section while pushing it forward. In the case of travel paths requiring high angles and snake-like curvatures, a sandwich belt is used. The sandwich belt design enables materials carried to travel along a path of high inclines up to 90-degree angles, enabling a vertical path as opposed to a horizontal one. This transport option is also powered by idlers.
Other important components of the belt conveying system, apart from the pulleys and idler rollers, include the drive arrangement (reducer gearboxes, drive motors, and associated couplings), scrapers to clean the belt, chutes for controlling the discharge direction, skirts for containing the discharge on the receiving belt, the take-up assembly for tensioning the belt, safety switches for personnel safety, and technological structures (stringers, short posts, drive frames, and pulley frames) that complete the belt conveying system. In certain applications, belt conveyors can also be used for static accumulation of cartons.
History
Primitive conveyor belts have been in use since the 19th century. In 1868, the English shipwright Joseph Thomas Parlour of Pimlico patented a grain elevator with a conveyor belt, while the Illinoisan Charles Denton of Ames Plow Co. patented a reaper with a belt "conveyer". By the 1880s conveyor belts were used in American elevators, sugarcane mills and sawmills, as well as British maltings.
In 1892, Thomas Robins began a series of inventions which led to the development of a conveyor belt used for carrying coal, ores and other products. In 1901, Sandvik invented and started the production of steel conveyor belts. In 1905, Richard Sutcliffe invented the first conveyor belts for use in coal mines which revolutionized the mining industry. In 1913, Henry Ford introduced conveyor-belt assembly lines at Ford Motor Company's Highland Park, Michigan factory.
In 1972, the French society REI created in New Caledonia what was then the longest straight-belt conveyor in the world. Hyacynthe Marcel Bocchetti was the concept designer. The longest conveyor belt is that of the Bou Craa phosphate mine in Western Sahara (1973, 98 km in 11 sections). The longest single-span conveyor belt is at the Boddington bauxite mine in Western Australia (31 km).
In 1957, the B. F. Goodrich Company patented a Möbius strip conveyor belt, which it went on to produce as the "Turnover Conveyor Belt System". Incorporating a half-twist, it had the advantage over conventional belts of a longer life because it could expose all of its surface area to wear and tear. Such Möbius strip belts are no longer manufactured because untwisted modern belts can be made more durable by constructing them from several layers of different materials. In 1970, Intralox, a Louisiana-based company, registered the first patent for all-plastic, modular belting.
Structure
The belt consists of one or more layers of material. It is common for belts to have three layers: a top cover, a carcass and a bottom cover. The purpose of the carcass is to provide linear strength and shape. The carcass is often a woven or metal fabric having a warp & weft. The warp refers to longitudinal cords whose characteristics of resistance and elasticity define the running properties of the belt. The weft represents the whole set of transversal cables allowing to the belt specific resistance against cuts, tears and impacts and at the same time high flexibility. The most common carcass materials are steel, polyester, nylon, cotton and aramid (class of heat-resistant and strong synthetic fibers, with Twaron or Kevlar as brand names). The covers are usually various rubber or plastic compounds specified by use of the belt.
Steel conveyor belts are used when a high strength class is required. For example, the highest-strength-class conveyor belt installed is made of steel cords and operates at the Chuquicamata mine in Chile. Polyester, nylon and cotton are popular in low strength classes, while aramid is used for intermediate strength classes. The advantages of using aramid are energy savings, enhanced lifetimes and improved productivity. As an example, an underground aramid belt installed at Baodian Coal Mine, part of Yanzhou Coal Mining Company, China, was reported to provide energy savings of over 15%, and Shenhua Group has installed several aramid conveyor belts.
Applications
Today there are many types of conveyor belt, created for conveying different kinds of material and commonly made of PVC or rubber.
Material flowing over the belt may be weighed in transit using a beltweigher. Belts with regularly spaced partitions, known as elevator belts, are used for transporting loose materials up steep inclines. Belt conveyors are used in self-unloading bulk freighters and in live-bottom trucks. Belt conveyor technology is also used in conveyor transport such as moving sidewalks or escalators, as well as on many manufacturing assembly lines. Stores often have conveyor belts at the check-out counter to move shopping items, and may use checkout dividers in this process. Ski areas also use conveyor belts to transport skiers up the hill. Industrial and manufacturing applications for belt conveyors include package handling, trough belt conveyors, trash handling, bag handling, coding conveyors, and more. Integration of human-machine interfaces (HMI) to operate conveyor systems is still in development and promises to be an efficient innovation.
Long belt conveyors
The longest belt conveyor system in the world is in Western Sahara. It was built in 1972 by Friedrich Krupp GmbH (now thyssenkrupp) and runs from the phosphate mines of Bu Craa to the coast south of El-Aaiun.
The longest conveyor system in an airport is the Dubai International Airport baggage handling system. It was installed by Siemens and commissioned in 2008, and has a combination of traditional belt conveyors and tray conveyors.
Boddington Bauxite Mine in Western Australia is officially recognized as having the world's longest single-flight conveyor. Single flight means the load is not transferred; it is a single continuous system for the entire length. This conveyor is a cable belt conveyor system, with one conveyor feeding a second. Cable belt conveyors are a variation on the more conventional idler belt system. Instead of running on top of idlers, cable belt conveyors are supported by two endless steel cables (steel wire rope) which are in turn supported by idler pulley wheels. This system feeds bauxite through the difficult terrain of the Darling Ranges to the Worsley Alumina refinery.
The second longest single trough belt conveyor is the Impumelelo conveyor near Secunda, South Africa. It was designed by Conveyor Dynamics, Inc. based in Bellingham, Washington, USA and constructed by ELB Engineering based in Johannesburg South Africa. The conveyor transports coal from a mine to a refinery that converts the coal to diesel fuel. The third longest trough belt conveyor in the world is the Curragh conveyor near Westfarmers, QLD, Australia. Conveyor Dynamics, Inc. supplied the basic engineering, control system and commissioning. Detail engineering and Construction was completed by Laing O'Rourke.
The longest single-belt international conveyor runs from Meghalaya in India to a cement factory at Chhatak, Bangladesh. It conveys limestone and shale from the quarry in India to the cement factory, crossing the border between the two countries. The conveyor was engineered by AUMUND France and Larsen & Toubro, and is actuated by three synchronized drive units for a total power of about 1.8 MW supplied by ABB (two drives at the head end in Bangladesh and one drive at the tail end in India). The conveyor belt was manufactured in sections on both the Indian and Bangladeshi sides. The idlers, or rollers, of the system are unique in that they are designed to accommodate both horizontal and vertical curves along the terrain. Dedicated vehicles were designed for the maintenance of the conveyor, which is kept at a minimum height above the ground to avoid being flooded during monsoon periods.
Belt conveyor safety system
Conveyors used in industrial settings include tripping mechanisms such as trip cords along the length of the conveyor. This allows for workers to immediately shut down the conveyor when a problem arises. Warning alarms are included to notify employees that a conveyor is about to turn on. In the United States, the Occupational Safety and Health Administration has issued regulations for conveyor safety, as OSHA 1926.555.
Some other systems used to safeguard the conveyor are belt-sway switches, speed switches, belt-rip switches, and emergency stops. The belt-sway switch will stop the conveyor if the belt starts losing its alignment along the structure. The speed switch will stop the belt if the switch is not registering that the belt is running at the required speed. The belt-rip switch will stop the belt when there is a cut, or a flap indicating that the belt is in danger of further damage. An emergency stop may be located on the conveyor control box in case of trip cord malfunctions.
Reuse
Worn rubber or elastomer belts can be reused in many ways. Applications for the material include toolbox liners, anti-fatigue floor mats, dock bumpers, landscape edging, livestock fencing, and water diversion.
| Technology | Industry: General | null |
51736 | https://en.wikipedia.org/wiki/Blimp | Blimp | A non-rigid airship, commonly called a blimp (/blɪmp/), is an airship (dirigible) without an internal structural framework or a keel. Unlike semi-rigid and rigid airships (e.g. Zeppelins), blimps rely on the pressure of their lifting gas (usually helium, rather than flammable hydrogen) and the strength of the envelope to maintain their shape. Blimps are known for their use in advertising, surveillance, and observation due to their maneuverability, slow speeds and steady flight capabilities.
Principle
Since blimps keep their shape with internal overpressure, typically the only solid parts are the passenger car (gondola) and the tail fins; sometimes there are battens near the bow, which assist with the higher forces there from a mooring attachment or from the greater aerodynamic pressures. A non-rigid airship that uses heated air instead of a light gas (such as helium) as a lifting medium is called a hot-air airship.
Volume changes of the lifting gas due to temperature changes or to changes of altitude are compensated for by pumping air into internal ballonets (air bags) to maintain the overpressure. Without sufficient overpressure, the blimp loses its ability to be steered and is slowed due to increased drag and distortion. The propeller air stream can be used to inflate the ballonets and so the hull. In some models, such as the Skyship 600, differential ballonet inflation can provide a measure of pitch trim control.
The engines driving the propellers are usually directly attached to the gondola, and in some models are partly steerable.
Blimps are the most commonly built airships because they are relatively easy to build and easy to transport once deflated. However, because of their unstable hull, their size is limited. A blimp with too long a hull may kink in the middle when the overpressure is insufficient or when maneuvered too fast (this has also happened with semi-rigid airships with weak keels). This led to the development of semi-rigids and rigid airships.
Modern blimps are launched somewhat heavier than air (overweight), in contrast to historic blimps. The missing lift is provided by lifting the nose and using engine power, or by angling the engine thrust. Some types also use steerable propellers or ducted fans. Operating in a state heavier than air avoids the need to dump ballast at lift-off and also avoids the need to lose costly helium lifting gas on landing (most of the Zeppelins achieved lift with very inexpensive hydrogen, which could be vented without concern to decrease altitude).
Etymology
The origin of the word "blimp" has been the subject of some confusion. Lennart Ege notes two possible derivations:
A 1943 etymology, published in The New York Times, supports a British origin during the First World War when the British were experimenting with lighter-than-air craft. The initial non-rigid aircraft was called the A-limp; and a second version called the B-limp was deemed more satisfactory.
Yet a third derivation is given by Barnes and James in Shorts Aircraft since 1900:
Dr. A. D. Topping researched the origins of the word and concluded that the British had never had a "Type B, limp" designation, and that Cunningham's coinage appeared to be the correct explanation.
The Oxford English Dictionary notes its use in print in 1916: "Visited the Blimps ... this afternoon at Capel". In 1918, the Illustrated London News said that it was "an onomatopœic name invented by that genius for apposite nomenclature, the late Horace Short".
Use
The B-class blimps were patrol airships operated by the United States Navy during and shortly after World War I. The Navy learned a great deal from the DN-1 fiasco. The result was the very successful B-type airships. Dr. Jerome Hunsaker was asked to develop a theory of airship design. This was followed by then-Lieutenant John H. Towers, USN, returning from Europe having inspected British designs, and the U.S. Navy subsequently sought bids for 16 blimps from American manufacturers. On 4 February 1917 the Secretary of the Navy directed that 16 nonrigid airships of Class B be procured. Ultimately Goodyear built nine envelopes, Goodrich built five, and Curtiss built the gondolas for all of those 14 ships. Connecticut Aircraft contracted with U.S. Rubber for its two envelopes and with Pigeon Fraser for its gondolas. The Curtiss-built gondolas were modified JN-4 fuselages and were powered by OX-5 engines. The Connecticut Aircraft blimps were powered by Hall-Scott engines.
In 1930, a former German airship officer, Captain Anton Heinen, working in the US for the US Navy on its dirigible fleet, attempted to design and build a four-place blimp called the "family air yacht" for private fliers, which he claimed would be priced below $10,000 and would be easier to fly than a fixed-wing aircraft if placed in production. It was unsuccessful.
In 2021, Reader's Digest said that "consensus is that there are about 25 blimps still in existence and only about half of them are still in use for advertising purposes". The Airsign Airship Group is the owner and operator of 8 of these active ships, including the Hood Blimp, DirecTV blimp, and the MetLife blimp.
Surveillance blimp
A surveillance blimp is a type of airborne early warning and control aircraft, typically serving as the active part of a system which includes a mooring platform, communications, and information processing. Example systems include the U.S. JLENS and the Israeli Aeronautics Defense Skystar 300.
Surveillance blimps known as aerostats have been used extensively in the Middle East by the United States military, the United Arab Emirates and Kuwait.
Examples of non-rigid airships
Manufacturers in many countries have built blimps in many designs. Some examples include:
ADB-3-X01, the largest lightship ever manufactured by Airship do Brasil, the only blimp manufacturing company in Latin America
AVIC AS700 Airship
Astra-Torres airship, non-rigid airships manufactured by Société Astra and used in World War I by France and UK
British Army airship Beta
Coastal class airship, C* class airship UK coastal blimps used in WW I
SS, SSP, SST, SSZ and NS class airships, convoy escort blimps used by the UK in WW I
G class blimp and L class blimp, US training blimps built by Goodyear during World War II
K class blimp and M class blimp, US anti-submarine blimps operated during World War II
Mantainer Ardath, an Australian blimp, in use during the mid-1970s
N class blimp (the "Nan ship"), used for anti-submarine and as a radar early-warning platform during the 1950s
Goodyear Blimps, a fleet of blimps operated for advertising purposes and as a television camera platform
Skyship 600, a private blimp used by advertising companies
P-791, an experimental aerostatic/aerodynamic hybrid airship developed by Lockheed-Martin corporation
SVAM CA-80, an airship manufactured by the Shanghai Vantage Airship Manufacture Co in China
TC-3 and TC-7, two US Army Corps non-rigid blimps used for parasite fighter trials during 1923–24
UConn Lumpy, an airship built and flown in 1975 by students at the University of Connecticut
WDL 2, airship for aerial advertising manufactured and used by WDL Group, Germany
Willows airships
| Technology | Types of aircraft | null |
51742 | https://en.wikipedia.org/wiki/Drawing%20board | Drawing board | A drawing board (also drawing table, drafting table or architect's table) is, in its antique form, a kind of multipurpose desk which can be used for any kind of drawing, writing or impromptu sketching on a large sheet of paper; for reading a large-format book or other oversized document; or for drafting precise technical illustrations (such as engineering drawings or architectural drawings). The drawing table used to be a frequent companion to a pedestal desk in a study or private library during the pre-industrial and early industrial era.
During the Industrial Revolution, draftsmanship gradually became a specialized trade and drawing tables slowly moved out of the libraries and offices of most gentlemen. They became more utilitarian and were built of steel and plastic instead of fine woods and brass.
More recently, engineers and draftsmen have used the drawing board for making and modifying drawings on paper with ink or pencil. Different drawing instruments (set square, protractor, etc.) are used on it to draw parallel, perpendicular or oblique lines. There are instruments for drawing circles, arcs, other curves and symbols too (compass, French curve, stencil, etc.). However, with the gradual introduction of computer-aided drafting and design (CADD or CAD) in the last decades of the 20th century and the first decades of the 21st, the drawing board has become less common.
A drawing table is also sometimes called a mechanical desk because, for several centuries, most mechanical desks were drawing tables. Unlike the gadgety mechanical desks of the second part of the 18th century, however, the mechanical parts of drawing tables were usually limited to notches, ratchets, and perhaps a few simple gears, or levers or cogs to elevate and incline the working surface.
Very often a drawing table could look like a writing table or even a pedestal desk when the working surface was set at the horizontal and the height adjusted to 29 inches, in order to use it as a "normal" desk. The only giveaway was usually a lip on one of the sides of the desktop. This lip or edge stopped paper or books from sliding when the surface was given an angle. It was also sometimes used to hold writing implements. When the working surface was extended at its full height, a drawing table could be used as a standing desk.
Many reproductions have been made and are still being produced of drawing tables, copying the period styles they were originally made in during the 18th and 19th centuries.
History
In the 18th and 19th centuries, drawing paper was dampened and then its edges glued to the drawing board. After drying the paper would be flat and smooth. The completed drawing was then cut free. Paper could also be secured to the drawing board with drawing pins or even C-clamps. More recent practice is to use self-adhesive drafting tape to secure paper to the board, including the sophisticated use of individualized adhesive dots from a dispensing roll. Some drawing boards are magnetized, allowing paper to be held down by long steel strips. Boards used for overlay drafting or animation may include registration pins or peg bars to ensure alignment of multiple layers of drawing media.
Contemporary drafting tables
Despite the prevalence of computer aided drafting, many older architects and even some structural designers still rely on paper and pencil graphics produced on a drafting table.
Modern drafting tables typically rely on a steel frame. Steel provides as much strength as the old oak drafting table frames and much easier portability. Typically the drafting board surface is a thick sheet of compressed fibreboard with sheets of Formica laminated to all its surfaces. The drafting board surface is usually secured to the frame by screws which can easily be removed for drafting table transportation.
The steel frame allows mechanical linkages to be installed that control both the height and angle of the drafting board surface. Typically, a single foot pedal is used to control a clutch which clamps the board in the desired position. A heavy counterweight full of lead shot is installed in the steel linkage so that if the pedal is accidentally released, the drafting board will not spring into the upright position and injure the user. Drafting table linkages and clutches have to be maintained to ensure that this safety mechanism counterbalances the weight of the table surface.
The drafting table surface is usually covered with a thin vinyl sheet called a board cover. This provides an optimum surface for pen and pencil drafting. It allows compasses and dividers to be used without damaging the wooden surface of the board. A board cover must be frequently cleaned to prevent graphite buildup from making new drawings dirty. At the bottom edge of the table, a single strip of aluminum or steel may serve as a place to rest drafting pencils. More purpose-built trays are also used which hold pencils even while the board is being adjusted.
Various types of drafting machine may be attached to the board surface to assist the draftsperson or artist. Parallel rules often span the entire width of the board and are so named because they remain parallel to the top edge of the board as they are moved up and down. Drafting machines use pre-calibrated scales and built in protractors to allow accurate drawing measurement.
Some drafting tables incorporate electric motors to provide the up and down and angle adjustment of the drafting table surface. These tables are at least as heavy as the original oak and brass drafting tables and so sacrifice portability for the convenience of push button table adjustment.
Modern-day idiom
The expression "back to the drawing board" is used when a plan or course of action needs to be changed, often drastically; usually due to a very unsuccessful result; e.g., "The battle plan, the result of months of conferences, failed because the enemy retreated too far back. It was back to the drawing board for the army captains."
The phrase was coined in the caption to a Peter Arno cartoon of The New Yorker of March 1, 1941.
| Technology | Artist's and drafting tools | null |
51776 | https://en.wikipedia.org/wiki/Hydrostatic%20equilibrium | Hydrostatic equilibrium | In fluid mechanics, hydrostatic equilibrium (hydrostatic balance, hydrostasy) is the condition of a fluid or plastic solid at rest, which occurs when external forces, such as gravity, are balanced by a pressure-gradient force. In the planetary physics of Earth, the pressure-gradient force prevents gravity from collapsing the planetary atmosphere into a thin, dense shell, whereas gravity prevents the pressure-gradient force from diffusing the atmosphere into outer space. In general, it is what causes objects in space to be spherical.
Hydrostatic equilibrium is the distinguishing criterion between dwarf planets and small solar system bodies, and features in astrophysics and planetary geology. This qualification of equilibrium indicates that the shape of the object is symmetrically rounded, mostly due to rotation, into an ellipsoid, where any irregular surface features are consequent to a relatively thin solid crust. In addition to the Sun, there are a dozen or so equilibrium objects confirmed to exist in the Solar System.
Mathematical consideration
For a hydrostatic fluid on Earth:

dP/dh = −ρ(h) g(h).
Derivation from force summation
Newton's laws of motion state that a volume of a fluid that is not in motion or that is in a state of constant velocity must have zero net force on it. This means the sum of the forces in a given direction must be opposed by an equal sum of forces in the opposite direction. This force balance is called a hydrostatic equilibrium.
The fluid can be split into a large number of cuboid volume elements; by considering a single element, the action of the fluid can be derived.
There are three forces: the force downwards onto the top of the cuboid from the pressure, P, of the fluid above it is, from the definition of pressure,

F_top = −P_top · A.

Similarly, the force on the volume element from the pressure of the fluid below pushing upwards is

F_bottom = P_bottom · A.

Finally, the weight of the volume element causes a force downwards. If the density is ρ, the volume is V and g the standard gravity, then:

F_weight = −ρ · g · V.

The volume of this cuboid is equal to the area of the top or bottom, times the height (the formula for finding the volume of a cuboid):

V = A · h.
By balancing these forces, the total force on the fluid is

F_total = F_bottom + F_top + F_weight = P_bottom · A − P_top · A − ρ · g · A · h.

This sum equals zero if the fluid's velocity is constant. Dividing by A,

0 = P_bottom − P_top − ρ · g · h.

Or,

P_top − P_bottom = −ρ · g · h.

P_top − P_bottom is a change in pressure, and h is the height of the volume element, a change in the distance above the ground. By saying these changes are infinitesimally small, the equation can be written in differential form:

dP = −ρ · g · dh.

Density changes with pressure, and gravity changes with height, so the equation would be:

dP = −ρ(h) · g(h) · dh.
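As a numeric sketch of this differential form, one can integrate dP = −ρ g dh for an isothermal ideal-gas atmosphere, where ρ = P M/(R T); the constants below are illustrative Earth-like values and the step size is an arbitrary choice:

```python
import math

g, M, R, T = 9.81, 0.029, 8.314, 288.0   # SI units; illustrative values
P, h, dh = 101325.0, 0.0, 1.0            # sea-level start, 1 m steps

while h < 5000.0:
    rho = P * M / (R * T)   # ideal-gas density at the current pressure
    P  -= rho * g * dh      # hydrostatic equation dP = -rho * g * dh
    h  += dh

# Compare with the closed-form barometric formula P0 * exp(-M g h / (R T)).
print(P, 101325.0 * math.exp(-M * g * 5000.0 / (R * T)))  # both ~5.6e4 Pa
```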
Derivation from Navier–Stokes equations
Note finally that this last equation can be derived by solving the three-dimensional Navier–Stokes equations for the equilibrium situation where

u = v = ∂p/∂x = ∂p/∂y = 0.

Then the only non-trivial equation is the z-equation, which now reads

∂p/∂z + ρg = 0.

Thus, hydrostatic balance can be regarded as a particularly simple equilibrium solution of the Navier–Stokes equations.
Derivation from general relativity
By plugging the energy–momentum tensor for a perfect fluid,

T^{μν} = (ρ + P/c²)·u^μ·u^ν + P·g^{μν},

into the Einstein field equations,

G_{μν} = (8πG/c⁴)·T_{μν},

and using the conservation condition

∇_μ T^{μν} = 0,

one can derive the Tolman–Oppenheimer–Volkoff equation for the structure of a static, spherically symmetric relativistic star in isotropic coordinates:

dP/dr = −(G·M(r)·ρ(r)/r²)·[1 + P(r)/(ρ(r)·c²)]·[1 + 4πr³·P(r)/(M(r)·c²)]·[1 − 2G·M(r)/(r·c²)]⁻¹.
In practice, P and ρ are related by an equation of state of the form f(P, ρ) = 0, with f specific to the makeup of the star. M(r) is a foliation of spheres weighted by the mass density ρ(r), with the largest sphere having radius r:

M(r) = 4π·∫₀^r ρ(r′)·r′²·dr′.
Per standard procedure in taking the nonrelativistic limit, we let c → ∞, so that the factor

[1 + P(r)/(ρ(r)·c²)]·[1 + 4πr³·P(r)/(M(r)·c²)]·[1 − 2G·M(r)/(r·c²)]⁻¹ → 1.
Therefore, in the nonrelativistic limit the Tolman–Oppenheimer–Volkoff equation reduces to Newton's hydrostatic equilibrium:

dP/dr = −G·M(r)·ρ(r)/r² = −g(r)·ρ(P).
(we have made the trivial notation change h = r and have used f(P, ρ) = 0 to express ρ in terms of P). A similar equation can be computed for rotating, axially symmetric stars, which in its gauge-independent form reads:
Unlike the TOV equilibrium equation, these are two equations (for instance, if as usual when treating stars one chooses spherical coordinates as basis coordinates (r, θ, φ), the index i runs over the coordinates r and θ).
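To illustrate how the TOV equation is used in practice, the sketch below integrates it outward from the centre for a star of uniform energy density in geometrized units (G = c = 1); the density, central pressure, and step size are arbitrary assumptions:

```python
import math

# Minimal sketch (an illustration, not the article's own code): integrate
# the TOV equation outward until the pressure drops to zero, which marks
# the stellar surface. Units G = c = 1; all numbers are assumptions.

rho = 1.0      # uniform energy density (assumed)
P = 0.3        # central pressure (assumed)
dr = 1e-5      # radial step
r, m = 1e-8, 0.0

while P > 0:
    # TOV: dP/dr = -(rho + P) (m + 4 pi r^3 P) / (r (r - 2 m))
    dP = -(rho + P) * (m + 4 * math.pi * r**3 * P) / (r * (r - 2 * m)) * dr
    m += 4 * math.pi * r**2 * rho * dr   # dm/dr = 4 pi r^2 rho
    P += dP
    r += dr

print(f"Surface R = {r:.4f}, mass M = {m:.4f}, compactness 2M/R = {2*m/r:.3f}")
```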
Applications
Fluids
Hydrostatic equilibrium pertains to hydrostatics and the principles of equilibrium of fluids. A hydrostatic balance is a particular balance for weighing substances in water; weighing a sample in air and again in water allows its specific gravity to be determined. This equilibrium is strictly applicable when an ideal fluid is in steady horizontal laminar flow, and when any fluid is at rest or in vertical motion at constant speed. It can also be a satisfactory approximation when flow speeds are low enough that acceleration is negligible.
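A minimal sketch of the hydrostatic-balance weighing just described, using Archimedes' principle (the sample weights are made-up illustrative values):

```python
# By Archimedes' principle, the apparent loss of weight in water equals the
# weight of the displaced water, so two weighings give the specific gravity.

def specific_gravity(weight_in_air: float, weight_in_water: float) -> float:
    """Specific gravity = sample density / water density."""
    buoyant_loss = weight_in_air - weight_in_water  # weight of displaced water
    return weight_in_air / buoyant_loss

# Example: a sample weighing 7.75 N in air and 6.75 N in water
print(specific_gravity(7.75, 6.75))  # -> 7.75, e.g. close to iron's ~7.87
```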
Astrophysics and planetary science
From the time of Isaac Newton much work has been done on the subject of the equilibrium attained when a fluid rotates in space. This has application to both stars and objects like planets, which may have been fluid in the past or in which the solid material deforms like a fluid when subjected to very high stresses.
In any given layer of a star, there is a hydrostatic equilibrium between the outward-pushing pressure gradient and the weight of the material above pressing inward. One can also study planets under the assumption of hydrostatic equilibrium. A rotating star or planet in hydrostatic equilibrium is usually an oblate spheroid, an ellipsoid in which two of the principal axes are equal and longer than the third.
An example of this phenomenon is the star Vega, which has a rotation period of 12.5 hours. Consequently, Vega is about 20% larger at the equator than from pole to pole.
In his 1687 Philosophiæ Naturalis Principia Mathematica, Newton correctly stated that a rotating fluid of uniform density under the influence of gravity would take the form of a spheroid, and that the gravity (including the effect of centrifugal force) would be weaker at the equator than at the poles by an amount equal (at least asymptotically) to five fourths the centrifugal force at the equator. In 1742, Colin Maclaurin published his treatise on fluxions, in which he showed that the spheroid was an exact solution. If we designate the equatorial radius by r_e, the polar radius by r_p, and the eccentricity by ε, with

ε = √(1 − r_p²/r_e²),
he found that the gravity at the poles is

g_p = (4πGρ·r_p/ε²)·[1 − (√(1 − ε²)/ε)·arcsin ε],
where G is the gravitational constant, ρ is the (uniform) density, and M is the total mass. The ratio of this to the gravity if the fluid is not rotating is asymptotic to

1 + (2/15)·f

as the eccentricity goes to zero, where f is the flattening:

f = (r_e − r_p)/r_e.
The gravitational attraction on the equator (not including centrifugal force) is

g_e = (2πGρ·r_e/ε²)·[(√(1 − ε²)/ε)·arcsin ε − (1 − ε²)].
Asymptotically, as ε → 0, we have:

g_p ≈ (4/3)·πGρ·r_p·(1 + (2/5)·ε²),  g_e ≈ (4/3)·πGρ·r_e·(1 − (1/5)·ε²).
Maclaurin showed (still in the case of uniform density) that the component of gravity toward the axis of rotation depended only on the distance from the axis and was proportional to that distance, and the component in the direction toward the plane of the equator depended only on the distance from that plane and was proportional to that distance. Newton had already pointed out that the gravity felt on the equator (including the lightening due to centrifugal force) has to be g_p·(r_p/r_e) in order to have the same pressure at the bottom of channels from the pole or from the equator to the centre, so the centrifugal force at the equator must be g_e − g_p·(r_p/r_e).
Defining the latitude λ to be the angle between a tangent to the meridian and the axis of rotation, the total gravity felt at latitude λ (including the effect of centrifugal force) is

g(λ) = g_p·r_p/√(r_e²·cos²λ + r_p²·sin²λ).
This spheroid solution is stable up to a certain (critical) angular momentum (normalized by ), but in 1834, Carl Jacobi showed that it becomes unstable once the eccentricity reaches 0.81267 (or reaches 0.3302).
Above the critical value, the solution becomes a Jacobi, or scalene, ellipsoid (one with all three axes different). Henri Poincaré in 1885 found that at still higher angular momentum it will no longer be ellipsoidal but piriform or oviform. The symmetry drops from the 8-fold D2h point group to the 4-fold C2v, with its axis perpendicular to the axis of rotation. Other shapes satisfy the equations beyond that, but are not stable, at least not near the point of bifurcation. Poincaré was unsure what would happen at higher angular momentum, but concluded that eventually the blob would split into two.
The assumption of uniform density may apply more or less to a molten planet or a rocky planet, but does not apply to a star or to a planet like the Earth, which has a dense metallic core. In 1737, Alexis Clairaut studied the case of density varying with depth. Clairaut's theorem states that the variation of the gravity (including centrifugal force) is proportional to the square of the sine of the latitude, with the proportionality depending linearly on the flattening (f) and the ratio at the equator of centrifugal force to gravitational attraction. (Compare with the exact relation above for the case of uniform density.) Clairaut's theorem is a special case, for an oblate spheroid, of a connexion found later by Pierre-Simon Laplace between the shape and the variation of gravity.
If the star has a massive nearby companion object, tidal forces come into play as well, which distort the star into a scalene shape if rotation alone would make it a spheroid. An example of this is Beta Lyrae.
Hydrostatic equilibrium is also important for the intracluster medium, where it restricts the amount of fluid that can be present in the core of a cluster of galaxies.
We can also use the principle of hydrostatic equilibrium to estimate the velocity dispersion of dark matter in clusters of galaxies. Only baryonic matter (or, rather, the collisions thereof) emits X-ray radiation. The absolute X-ray luminosity per unit volume takes the form L_X = Λ(T_B)·ρ_B², where T_B and ρ_B are the temperature and density of the baryonic matter, and Λ(T_B) is some function of temperature and fundamental constants. The baryonic density satisfies the above equation:

p_B(r) = ∫_r^∞ dr′ ρ_B(r′)·g(r′).
The integral is a measure of the total mass of the cluster, with r being the proper distance to the center of the cluster. Using the ideal gas law p_B = k·T_B·ρ_B/m_B (where k is the Boltzmann constant and m_B is a characteristic mass of the baryonic gas particles) and rearranging, we arrive at
Multiplying by and differentiating with respect to yields
If we make the assumption that cold dark matter particles have an isotropic velocity distribution, the same derivation applies to these particles, and their density satisfies the non-linear differential equation
With perfect X-ray and distance data, we could calculate the baryon density at each point in the cluster and thus the dark matter density. We could then calculate the velocity dispersion of the dark matter, which is given by
The central density ratio is dependent on the redshift of the cluster and is given by
where is the angular width of the cluster and the proper distance to the cluster. Values for the ratio range from 0.11 to 0.14 for various surveys.
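A common working form of this argument is the X-ray hydrostatic mass estimate: combining dP/dr = −ρ·g with the ideal gas law gives M(<r) = −(k·T·r)/(G·μ·m_p)·(d ln ρ/d ln r + d ln T/d ln r). The sketch below evaluates this for an isothermal beta-model gas profile; the profile shape and every parameter value are illustrative assumptions, not data from any survey:

```python
# Sketch of the standard X-ray hydrostatic mass estimate,
#   M(<r) = -(k_B T r / (G mu m_p)) * (dln(rho)/dln(r) + dln(T)/dln(r)),
# which follows from dP/dr = -rho g plus the ideal gas law.

G = 6.674e-11      # m^3 kg^-1 s^-2
k_B = 1.381e-23    # J/K
m_p = 1.673e-27    # kg
mu = 0.6           # mean molecular weight of ionized gas (assumed)
kpc = 3.086e19     # m

def cluster_mass(r, T, beta=0.7, r_core=200 * kpc):
    """Enclosed mass for gas with rho ~ (1 + (r/r_core)**2) ** (-1.5*beta)."""
    dlnrho_dlnr = -3 * beta * (r / r_core) ** 2 / (1 + (r / r_core) ** 2)
    dlnT_dlnr = 0.0                      # isothermal assumption
    return -(k_B * T * r) / (G * mu * m_p) * (dlnrho_dlnr + dlnT_dlnr)

T = 5.8e7                                # K, roughly a 5 keV cluster (assumed)
M = cluster_mass(1000 * kpc, T)
print(f"M(<1 Mpc) ~ {M:.1e} kg ~ {M / 1.989e30:.1e} solar masses")
```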
Planetary geology
The concept of hydrostatic equilibrium has also become important in determining whether an astronomical object is a planet, dwarf planet, or small Solar System body. According to the definition of planet adopted by the International Astronomical Union in 2006, one defining characteristic of planets and dwarf planets is that they are objects that have sufficient gravity to overcome their own rigidity and assume hydrostatic equilibrium. Such a body often has the differentiated interior and geology of a world (a planemo), but near-hydrostatic or formerly hydrostatic bodies such as the proto-planet 4 Vesta may also be differentiated, and some hydrostatic bodies (notably Callisto) have not thoroughly differentiated since their formation. Often the equilibrium shape is an oblate spheroid, as is the case with Earth. However, in the case of moons in synchronous orbit, nearly unidirectional tidal forces create a scalene ellipsoid. Also, the purported dwarf planet Haumea is scalene because of its rapid rotation, though it may not currently be in equilibrium.
Icy objects were previously believed to need less mass to attain hydrostatic equilibrium than rocky objects. The smallest object that appears to have an equilibrium shape is the icy moon Mimas at 396 km, but the largest icy object known to have an obviously non-equilibrium shape is the icy moon Proteus at 420 km, and the largest rocky bodies with an obviously non-equilibrium shape are the asteroids Pallas and Vesta at about 520 km. However, Mimas is not actually in hydrostatic equilibrium for its current rotation. The smallest body confirmed to be in hydrostatic equilibrium is the dwarf planet Ceres, which is icy, at 945 km, and the largest known body to have a noticeable deviation from hydrostatic equilibrium is Iapetus, which is made mostly of permeable ice and almost no rock. At 1,469 km, Iapetus is neither spherical nor ellipsoidal; instead, it has a strange walnut-like shape owing to its unique equatorial ridge. Some icy bodies may be in equilibrium at least partly due to a subsurface ocean, which is not the definition of equilibrium used by the IAU (gravity overcoming internal rigid-body forces). Even larger bodies deviate from hydrostatic equilibrium, although they are ellipsoidal: examples are Earth's Moon at 3,474 km (mostly rock), and the planet Mercury at 4,880 km (mostly metal).
In 2024, Kiss et al. found that Quaoar has an ellipsoidal shape incompatible with hydrostatic equilibrium for its current spin. They hypothesised that Quaoar originally had a rapid rotation and was in hydrostatic equilibrium, but that its shape became "frozen in" and did not change as it spun down because of tidal forces from its moon Weywot. If so, this would resemble the situation of Iapetus, which is too oblate for its current spin. Iapetus is nonetheless generally, though not universally, still considered a planetary-mass moon.
Solid bodies have irregular surfaces, but local irregularities may be consistent with global equilibrium. For example, the massive base of the tallest mountain on Earth, Mauna Kea, has deformed and depressed the level of the surrounding crust and so the overall distribution of mass approaches equilibrium.
Atmospheric modeling
In the atmosphere, the pressure of the air decreases with increasing altitude. This pressure difference causes an upward force called the pressure-gradient force. The force of gravity balances this, keeping the atmosphere bound to Earth and maintaining pressure differences with altitude.
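A minimal sketch of this balance in an atmospheric model: integrating dP/dz = −ρ·g for an isothermal ideal-gas atmosphere reproduces the barometric formula. The temperature and other parameter values below are rough assumptions:

```python
import math

# Integrate the hydrostatic equation dP/dz = -rho*g for an isothermal
# ideal-gas atmosphere, where the ideal gas law gives rho = P*M/(R*T).

g = 9.81       # m/s^2, gravity
M = 0.02896    # kg/mol, mean molar mass of dry air
R = 8.314      # J/(mol*K), gas constant
T = 260.0      # K, crude mean temperature of the lower atmosphere (assumed)

P = 101325.0   # Pa, sea-level pressure
dz = 1.0       # m, integration step
for _ in range(10_000):
    rho = P * M / (R * T)      # ideal gas law
    P -= rho * g * dz          # hydrostatic balance

analytic = 101325.0 * math.exp(-M * g * 10_000 / (R * T))
print(f"P(10 km): numeric {P:.0f} Pa, barometric formula {analytic:.0f} Pa")
```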
| Physical sciences | Fluid mechanics | Physics |
51829 | https://en.wikipedia.org/wiki/Rodinia | Rodinia | Rodinia (from the Russian родина, rodina, meaning "motherland, birthplace") was a Mesoproterozoic and Neoproterozoic supercontinent that assembled 1.26–0.90 billion years ago (Ga) and broke up 750–633 million years ago (Ma). Valentine and Moores (1970) were probably the first to recognise a Precambrian supercontinent, which they named "Pangaea I." It was renamed "Rodinia" by McMenamin and McMenamin (1990), who were also the first to produce a plate reconstruction and propose a temporal framework for the supercontinent.
Rodinia formed at c. 1.23 Ga by accretion and collision of fragments produced by breakup of an older supercontinent, Columbia, assembled by global-scale 2.0–1.8 Ga collisional events. Rodinia broke up in the Neoproterozoic, with its continental fragments reassembled to form Pannotia 633–573 Ma. In contrast with Pannotia, little is known about Rodinia's configuration and geodynamic history. Paleomagnetic evidence provides some clues to the paleolatitude of individual pieces of the Earth's crust, but not to their longitude, which geologists have pieced together by comparing similar geologic features, often now widely dispersed.
The extreme cooling of the global climate around 717–635 Ma (the so-called Snowball Earth of the Cryogenian period) and the rapid evolution of primitive life during the subsequent Ediacaran and Cambrian periods are thought to have been triggered by the breakup of Rodinia or by a slowing down of tectonic processes.
Geodynamics
Paleogeographic reconstructions
The idea that a supercontinent existed in the early Neoproterozoic arose in the 1970s, when geologists determined that orogens of this age exist on virtually all cratons. Examples are the Grenville orogeny in North America and the Dalslandian orogeny in Europe. Since then, many alternative reconstructions have been proposed for the configuration of the cratons in this supercontinent. Most of these reconstructions are based on the correlation of the orogens on different cratons. Though the configuration of the core cratons in Rodinia is now reasonably well known, recent reconstructions still differ in many details. Geologists try to decrease the uncertainties by collecting geological and paleomagnetic data.
Most reconstructions show Rodinia's core formed by the North American Craton (the later paleocontinent of Laurentia), surrounded in the southeast by the East European Craton (the later paleocontinent of Baltica), the Amazonian Craton, and the West African Craton; in the south by the Río de la Plata and São Francisco cratons; in the southwest by the Congo and Kalahari cratons; and in the northeast by Australia, India and eastern Antarctica. The positions of Siberia and North and South China north of the North American craton differ strongly depending on the reconstruction:
SWEAT-Configuration (Southwest US-East Antarctica craton): Antarctica is southwest of Laurentia, and Australia is north of Antarctica.
AUSWUS-Configuration (Australia-western US): Australia is west of Laurentia.
AUSMEX-Configuration (Australia-Mexico): Australia is at the location of current day Mexico relative to Laurentia.
The "missing-link" model, which places South China between Australia and the west coast of Laurentia. A revised "missing-link" model has been proposed in which the Tarim Block serves as an extended or alternative missing link between Australia and Laurentia.
Siberia attached to the western US (via the Belt Supergroup), as in some reconstructions.
Little is known about the paleogeography before the formation of Rodinia. Paleomagnetic and geologic data are only definite enough to form reconstructions from the breakup of Rodinia onwards. Rodinia is considered to have formed between 1.3 and 1.23 Ga and to have broken up again before 750 Ma. Rodinia was surrounded by the superocean Mirovia.
According to J.D.A. Piper, Rodinia is one of two models for the configuration and history of the continental crust in the latter part of Precambrian times; the other is Paleopangea, Piper's own concept. Piper proposes an alternative hypothesis for this era and earlier ones. This idea rejects the notion that Rodinia ever existed as a transient supercontinent subject to progressive break-up in the late Proterozoic, holding instead that this time and earlier times were dominated by a single, persistent "Paleopangaea" supercontinent. As evidence, he cites the observation that the palaeomagnetic poles from the continental crust assigned to this time conform to a single path between 825 and 633 Ma, and latterly to a near-static position between 750 and 633 Ma. This solution predicts that break-up was confined to the Ediacaran period and produced the dramatic environmental changes that characterised the transition between the Precambrian and the Phanerozoic. However, this theory has been widely criticised, as incorrect applications of paleomagnetic data have been pointed out.
Breakup
In 2009 UNESCO's International Geoscience Programme project 440, named "Rodinia Assembly and Breakup," concluded that Rodinia broke up in four stages between 825 and 550 Ma:
The breakup was initiated by a superplume around 825–800 Ma whose influence—such as crustal arching, intense bimodal magmatism, and accumulation of thick rift-type sedimentary successions—has been recorded in South Australia, South China, Tarim, Kalahari, India, and the Arabian-Nubian Craton.
Rifting progressed in the same cratons 800–750 Ma and spread into Laurentia and perhaps Siberia. India (including Madagascar) and the Congo–São Francisco Craton were either detached from Rodinia during this period or simply never were part of the supercontinent.
As the central part of Rodinia reached the Equator around 750–700 Ma, a new pulse of magmatism and rifting continued the disassembly in western Kalahari, West Australia, South China, Tarim, and most margins of Laurentia.
Between 650 and 550 Ma, several events coincided: the opening of the Iapetus Ocean; the closure of the Braziliano, Adamastor, and Mozambique oceans; and the Pan-African orogeny. The result was the formation of Gondwana.
The Rodinia hypothesis assumes that rifting did not start everywhere simultaneously. Extensive lava flows and volcanic eruptions of Neoproterozoic age are found on most continents, evidence for large scale rifting about 750 Ma. As early as 850 to 800 Ma, a rift developed between the continental masses of present-day Australia, East Antarctica, India and the Congo and Kalahari cratons on one side and later Laurentia, Baltica, Amazonia and the West African and Rio de la Plata cratons on the other. This rift developed into the Adamastor Ocean during the Ediacaran.
Around 550 Ma, near the boundary between the Ediacaran and Cambrian, the first group of cratons fused again with Amazonia, West Africa and the Rio de la Plata cratons during the Pan-African orogeny, which caused the development of Gondwana.
In a separate rifting event about 610 Ma, the Iapetus Ocean formed. The eastern part of this ocean formed between Baltica and Laurentia, the western part between Amazonia and Laurentia. Because the timeframe of this separation and the partially contemporaneous Pan-African orogeny are difficult to correlate, it might be that all continental mass was again joined in one supercontinent between roughly 600 and 550 Ma. This hypothetical supercontinent is called Pannotia.
Influence on paleoclimate and life
Unlike later supercontinents, Rodinia was entirely barren: it existed before complex life colonized dry land. Based on sedimentary rock analysis, Rodinia formed when the ozone layer was not as extensive as it is now, and ultraviolet light discouraged organisms from inhabiting its interior. Nevertheless, its existence significantly influenced the marine life of its time.
In the Cryogenian, Earth experienced large glaciations, and temperatures were at least as cool as today. Substantial parts of Rodinia may have been covered by glaciers or the southern polar ice cap. Low temperatures may have been accentuated during the early stages of continental rifting: geothermal heating peaks in crust that is about to be rifted, and since warmer rocks are less dense, the crustal rocks rise relative to their surroundings. This rising creates areas of higher altitude, where the air is cooler and ice is less likely to melt with changes in season, which may explain the evidence of abundant glaciation in the Ediacaran.
The rifting of the continents created new oceans and seafloor spreading, which produces warmer, less dense oceanic crust. Lower-density, hot oceanic crust will not lie as deep as older, cool oceanic lithosphere. In periods with relatively large areas of new lithosphere, the ocean floors come up, causing the sea level to rise. The result was a greater number of shallower seas.
The increased evaporation from the oceans' larger water area may have increased rainfall, which in turn increased the weathering of exposed rock. By inputting data on the ratio of stable isotopes 18O:16O into computer models, it has been shown that in conjunction with quick weathering of volcanic rock, the increased rainfall may have reduced greenhouse gas levels to below the threshold required to trigger the period of extreme glaciation known as Snowball Earth. Increased volcanic activity also introduced into the marine environment biologically active nutrients, which may have played an important role in the earliest animals' development.
| Physical sciences | Paleogeography | Earth science |
5540732 | https://en.wikipedia.org/wiki/Microsauria | Microsauria | Microsauria ("small lizards") is an extinct, possibly polyphyletic order of tetrapods from the late Carboniferous and early Permian periods. It is the most diverse and species-rich group of lepospondyls. Recently, Microsauria has been considered paraphyletic, as several other non-microsaur lepospondyl groups such as Lysorophia seem to be nested in it. Microsauria is now commonly used as a collective term for the grade of lepospondyls that were originally classified as members of Microsauria.
The microsaurs all had short tails and small legs, but were otherwise quite varied in form. The group included lizard-like animals that were relatively well-adapted to living on dry land, burrowing forms, and others that, like the modern axolotl, retained their gills into adult life, and so presumably never left the water. Their skeleton was heavily ossified, and their development was likely gradual with no metamorphosis.
Distribution
Microsaur remains have been found in Europe and North America in Late Carboniferous and Early Permian localities. Most North American microsaurs have been found in the United States, in Arizona, Texas, Oklahoma, Ohio, Illinois, Kansas, and Nebraska, although remains have also been found in Nova Scotia. In Europe, microsaurs are known from Germany and the Czech Republic. Possible microsaur remains have also been found in strata at the town of Vyazniki in the Vladimir Oblast of Russia. These strata are Late Permian in age, near the Permo-Triassic boundary. The microsaur material at Vyazniki may be the youngest record of microsaurs and would extend their range by around 20 million years. However, fossil remains from Gansu Province suggest a possible Triassic record of microsaurs.
Classification
Cladogram modified from Anderson (2001), with microsaur taxa marked:
Cladogram from Ruta and Coates (2007):
Cladistic analysis by Pardo et al. (2017) places recumbirostran microsaurs and lysorophians as members of Amniota.
| Biology and health sciences | Prehistoric amphibians | Animals |
5543603 | https://en.wikipedia.org/wiki/Tripedalism | Tripedalism | Tripedalism (from the Latin tri = three + ped = foot) is locomotion by the use of three limbs. Real-world tripedalism is rare, in contrast to the common bipedalism of two-legged animals and quadrupedalism of four-legged animals. Bilateral symmetry seems to have become entrenched very early in evolution, appearing even before appendages like legs, fins or flippers had evolved.
In nature
It has been said that parrots (birds of the order Psittaciformes) display tripedalism during climbing gaits, a claim tested and confirmed in a 2022 paper on the subject, making parrots the only creatures known to truly use tripedal locomotion. Tripedal gaits were also observed by K. Hunt in primates, usually when the animal is using one limb to grasp a carried object; this is thus a non-standard gait. Apart from climbing in parrots, there are no known animal behaviours in which the same three extremities are routinely used to contact environmental supports, although the movement of some macropods such as kangaroos, which can alternate between resting their weight on their muscular tails and their two hind legs and can hop on all three, may be an example of tripedal locomotion in animals. There are also the tripod fish: several species of these fish rest on the ocean bottom on two rays from their pelvic fins and one ray from their caudal fin.
Quadrupedal amputees and mutations
There are some three-legged creatures in the world today, namely four-legged animals (such as pet dogs and cats) which have had one limb amputated. With proper medical treatment most of these injured animals can go on to live fairly normal lives, despite being artificially tripedal. There are also cases of mutations or birth abnormalities in animals (including humans) which have resulted in three legs.
| Biology and health sciences | Ethology | Biology |
2196648 | https://en.wikipedia.org/wiki/Lithium%20hydride | Lithium hydride | Lithium hydride is an inorganic compound with the formula LiH. This alkali metal hydride is a colorless solid, although commercial samples are grey. Characteristic of a salt-like (ionic) hydride, it has a high melting point, and it is insoluble in, but reactive with, all protic organic solvents. It is soluble in, and nonreactive with, certain molten salts such as lithium fluoride, lithium borohydride, and sodium hydride. With a molar mass of 7.95 g/mol, it is the lightest ionic compound.
Physical properties
LiH is diamagnetic and an ionic conductor, with a conductivity gradually increasing from a low value at 443 °C to 0.18 Ω−1cm−1 at 754 °C; there is no discontinuity in this increase through the melting point. The dielectric constant of LiH decreases from 13.0 (static, low frequencies) to 3.6 (at visible-light frequencies). LiH is a soft material, with a Mohs hardness of 3.5. Its compressive creep (per 100 hours) rapidly increases from less than 1% at 350 °C to more than 100% at 475 °C, meaning that LiH cannot provide mechanical support when heated.
The thermal conductivity of LiH decreases with temperature and depends on morphology: the corresponding values are 0.125 W/(cm·K) for crystals and 0.0695 W/(cm·K) for compacts at 50 °C, and 0.036 W/(cm·K) for crystals and 0.0432 W/(cm·K) for compacts at 500 °C. The linear thermal expansion coefficient is 4.2/°C at room temperature.
Synthesis and processing
LiH is produced by treating lithium metal with hydrogen gas:

2 Li + H2 → 2 LiH
This reaction is especially rapid at temperatures above 600 °C. Addition of 0.001–0.003% carbon, and/or increasing temperature/pressure, increases the yield up to 98% at 2-hour residence time. However, the reaction proceeds at temperatures as low as 29 °C. The yield is 60% at 99 °C and 85% at 125 °C, and the rate depends significantly on the surface condition of LiH.
Less common ways of LiH synthesis include thermal decomposition of lithium aluminium hydride (200 °C), lithium borohydride (300 °C), n-butyllithium (150 °C), or ethyllithium (120 °C), as well as several reactions involving lithium compounds of low stability and available hydrogen content.
Chemical reactions yield LiH in the form of lumped powder, which can be compressed into pellets without a binder. More complex shapes can be produced by casting from the melt. Large single crystals (about 80 mm long and 16 mm in diameter) can then be grown from molten LiH powder in a hydrogen atmosphere by the Bridgman–Stockbarger technique. They often have a bluish color owing to the presence of colloidal Li. This color can be removed by post-growth annealing at lower temperatures (~550 °C) and lower thermal gradients. The major impurities in these crystals are Na (20–200 ppm), O (10–100 ppm), Mg (0.5–6 ppm), Fe (0.5–2 ppm) and Cu (0.5–2 ppm).
Bulk cold-pressed LiH parts can be easily machined using standard techniques and tools to micrometer precision. However, cast LiH is brittle and easily cracks during processing.
A more energy efficient route to form lithium hydride powder is by ball milling lithium metal under high hydrogen pressure. A problem with this method is the cold welding of lithium metal due to the high ductility. By adding small amounts of lithium hydride powder the cold welding can be avoided.
Reactions
LiH powder reacts rapidly with air of low humidity, forming LiOH, Li2O, and Li2CO3. In moist air the powder ignites spontaneously, forming a mixture of products including some nitrogenous compounds. The lump material reacts with humid air, forming a superficial coating of a viscous fluid. This inhibits further reaction, although the appearance of a film of "tarnish" is quite evident. Little or no nitride is formed on exposure to humid air. The lump material, contained in a metal dish, may be heated in air to slightly below 200 °C without igniting, although it ignites readily when touched by an open flame. The surface condition of the LiH, the presence of oxides on the metal dish, etc., have a considerable effect on the ignition temperature. Dry oxygen does not react with crystalline LiH unless heated strongly, when an almost explosive combustion occurs.
LiH is highly reactive towards water and other protic reagents:

LiH + H2O → LiOH + H2
LiH is less reactive with water than Li and thus is a much less powerful reducing agent for water, alcohols, and other media containing reducible solutes. This is true for all the binary saline hydrides.
LiH pellets slowly expand in moist air, forming LiOH; however, the expansion rate is below 10% within 24 hours at a water-vapor pressure of 2 Torr. If the moist air contains carbon dioxide, then the product is lithium carbonate. LiH reacts with ammonia, slowly at room temperature, but the reaction accelerates significantly above 300 °C. LiH reacts slowly with higher alcohols and phenols, but vigorously with lower alcohols.
LiH reacts with sulfur dioxide to give the dithionite:

2 LiH + 2 SO2 → Li2S2O4 + H2
though above 50 °C the product is lithium sulfide instead.
LiH reacts with acetylene to form lithium carbide and hydrogen. With anhydrous organic acids, phenols and acid anhydrides, LiH reacts slowly, producing hydrogen gas and the lithium salt of the acid. With water-containing acids, LiH reacts faster than with water. Many reactions of LiH with oxygen-containing species yield LiOH, which in turn irreversibly reacts with LiH at temperatures above 300 °C:

LiH + LiOH → Li2O + H2
Lithium hydride is rather unreactive at moderate temperatures with O2 or Cl2. It is, therefore, used in the synthesis of other useful hydrides; a representative example is given below.
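One standard example, offered here as a textbook illustration rather than as the specific example originally intended, is the preparation of lithium aluminium hydride from aluminium chloride:

4 LiH + AlCl3 → LiAlH4 + 3 LiCl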
Applications
Hydrogen storage and fuel
With a hydrogen content in proportion to its mass three times that of NaH, LiH has the highest hydrogen content of any hydride. LiH is periodically of interest for hydrogen storage, but applications have been thwarted by its stability against decomposition: removal of H2 requires temperatures above the 700 °C used for its synthesis, and such temperatures are expensive to create and maintain. The compound was once tested as a fuel component in a model rocket.
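The arithmetic behind the threefold comparison with NaH can be checked directly (standard molar masses assumed):

```python
# Hydrogen mass fraction of LiH vs NaH, using standard molar masses.
masses = {"LiH": (6.94, 1.008), "NaH": (22.99, 1.008)}

for name, (metal, hydrogen) in masses.items():
    fraction = hydrogen / (metal + hydrogen)
    print(f"{name}: {fraction:.1%} hydrogen by mass")

# LiH: ~12.7%, NaH: ~4.2% -- roughly the threefold ratio cited above.
```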
Precursor to complex metal hydrides
LiH is not usually a hydride-reducing agent, except in the synthesis of hydrides of certain metalloids. For example, silane is produced in the reaction of lithium hydride and silicon tetrachloride by the Sundermeyer process:

SiCl4 + 4 LiH → SiH4 + 4 LiCl
Lithium hydride is used in the production of a variety of reagents for organic synthesis, such as lithium aluminium hydride (LiAlH4) and lithium borohydride (LiBH4). Triethylborane reacts to give superhydride (lithium triethylborohydride, LiEt3BH).
In nuclear chemistry and physics
Lithium hydride containing the isotope lithium-6 (Li-6) is sometimes a desirable material for the shielding of nuclear reactors, and it can be fabricated by casting.
Lithium deuteride
Lithium deuteride, in the form of lithium-7 deuteride (7LiD), is a good moderator for nuclear reactors, because deuterium (2H or D) has a lower neutron absorption cross-section than ordinary hydrogen or protium (1H) does, and the cross-section for 7Li is also low, decreasing the absorption of neutrons in a reactor. 7Li is preferred for a moderator because it has a lower neutron capture cross-section, and it also forms less tritium (3H or T) under bombardment with neutrons.
The corresponding lithium-6 deuteride (6LiD) is the primary fusion fuel in thermonuclear weapons. In hydrogen warheads of the Teller–Ulam design, a nuclear fission trigger explodes to heat and compress the lithium-6 deuteride, and to bombard the 6LiD with neutrons to produce tritium in an exothermic reaction:

6Li + n → 4He + 3H + 4.78 MeV
The deuterium and tritium then fuse to produce helium, one neutron, and 17.59 MeV of free energy in the form of gamma rays, kinetic energy, etc. Tritium has a favorable reaction cross section. The helium is an inert byproduct.
2H + 3H → 4He + n
Before the Castle Bravo nuclear weapons test in 1954, it was thought that only the less common isotope 6Li would breed tritium when struck with fast neutrons. The Castle Bravo test showed (accidentally) that the more plentiful 7Li also does so under extreme conditions, albeit by an endothermic reaction.
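A rough energy bookkeeping for the two-step 6LiD process described above (an illustration; the 4.78 MeV figure for the 6Li + n reaction is a standard nuclear-data value, not taken from this article):

```python
# Approximate energy released per kilogram of 6LiD, counting both the
# tritium-breeding step (6Li + n -> 4He + 3H, ~4.78 MeV) and the D-T
# fusion step (~17.59 MeV, as cited above).

AVOGADRO = 6.022e23
MEV_TO_J = 1.602e-13

molar_mass_6LiD = 6.015 + 2.014          # g/mol: 6Li plus 2H
energy_per_pair_MeV = 4.78 + 17.59       # both steps combined

pairs_per_kg = 1000 / molar_mass_6LiD * AVOGADRO
energy_per_kg = pairs_per_kg * energy_per_pair_MeV * MEV_TO_J
print(f"~{energy_per_kg:.1e} J per kg of 6LiD")  # on the order of 2.7e14 J/kg
```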
Safety
LiH reacts violently with water to give hydrogen gas and LiOH, which is caustic. Consequently, LiH dust can explode in humid air, or even in dry air, due to static electricity. At sufficient concentrations in air, the dust is extremely irritating to the mucous membranes and skin and may cause an allergic reaction. Because of the irritation, LiH is normally rejected rather than accumulated by the body.
Some lithium salts, which can be produced in LiH reactions, are toxic. A LiH fire should not be extinguished using carbon dioxide, carbon tetrachloride, or aqueous fire extinguishers; it should be smothered by covering with a metal object or with graphite or dolomite powder. Sand is less suitable, as it can explode when mixed with burning LiH, especially if not dry. LiH is normally transported in oil, using containers made of ceramic, certain plastics, or steel, and is handled in an atmosphere of dry argon or helium. Nitrogen can be used, but not at elevated temperatures, as it reacts with lithium. LiH normally contains some metallic lithium, which corrodes steel or silica containers at elevated temperatures.
| Physical sciences | Hydride salts | Chemistry |
2197089 | https://en.wikipedia.org/wiki/Lithostratigraphy | Lithostratigraphy | Lithostratigraphy is a sub-discipline of stratigraphy, the geological science associated with the study of strata or rock layers. Major focuses include geochronology, comparative geology, and petrology.
In general, strata are primarily igneous or sedimentary, depending on how the rock was formed. Sedimentary layers are laid down by the deposition of sediment associated with weathering processes, decaying organic matter (biogenic), or chemical precipitation. These layers are often distinguishable as having many fossils and are important for the study of biostratigraphy. Igneous layers occur as stacks of lava flows, layers of lava fragments (called tephra), both erupted onto the Earth's surface by volcanoes, and in layered intrusions formed deep underground. Igneous layers are generally devoid of fossils and represent magmatic or volcanic activity that occurred during the geologic history of an area.
There are a number of principles that are used to explain relationships between strata. When an igneous rock cuts across a formation of sedimentary rock, the igneous intrusion must be younger than the sedimentary rock (the principle of cross-cutting relationships). The principle of superposition states that a sedimentary rock layer in a tectonically undisturbed stratum is younger than the one beneath it and older than the one above it. The principle of original horizontality states that the deposition of sediments occurs as essentially horizontal beds.
Types of lithostratigraphic units
The principles of lithostratigraphy were first established by the Danish naturalist Nicolas Steno in his 1669 Dissertationis prodromus. A lithostratigraphic unit conforms to the law of superposition, which in its modern form states that in any succession of strata, not disturbed or overturned since deposition, younger rocks lie above older rocks. The principle of lateral continuity states that a set of beds extends laterally and can be traced over a large area.
Lithostratigraphic units are recognized and defined on the basis of observable physical rock characteristics. The lithology of a unit includes characteristics such as chemical and mineralogical composition, texture, color, primary depositional structures, fossils regarded as rock-forming particles, or other organic materials such as coal or kerogen. The taxonomy of fossils is not a valid lithological basis for defining a lithostratigraphic unit. The descriptions of strata based on physical appearance define facies.
The formal description of a lithostratigraphic unit includes a stratotype, which is usually a type section. A type section is ideally a good exposure of the unit that shows its entire thickness. If the unit is nowhere entirely exposed, or if it shows considerable lateral variation, additional reference sections may be defined. Long-established lithostratigraphic units dating to before the modern codification of stratigraphy, or which lack tabular form (such as volcanic domes), may substitute a type locality for a type section as their stratotype. The geologist defining the unit is expected to describe the stratotype in sufficient detail that other geologists can unequivocally recognize the unit.
Lithosome: a mass of rock of essentially uniform character, having interchanging relationships with adjacent masses of different lithology, e.g. a shale lithosome or a limestone lithosome.
The fundamental lithostratigraphic unit is the formation: a lithologically distinctive stratigraphic unit that is large enough to be mappable and traceable. Formations may be subdivided into members and beds, and aggregated with other formations into groups and supergroups.
Stratigraphic relationship
Two types of contact: conformable and unconformable.
Conformable: unbroken deposition, with no break or hiatus (an interruption in the continuity of the geological record). The resulting surface is called a conformity.
Two types of contact between conformable strata: abrupt contacts (which directly separate beds of distinctly different lithology and represent minor depositional breaks, called diastems) and gradational contacts (a gradual change in deposition, with a mixing zone).
Unconformable: a period of erosion or non-deposition. The resulting surface is called an unconformity.
Four types of unconformity:
Angular unconformity: younger sediment lies upon an eroded surface of tilted or folded older rocks. The older rock dips at a different angle from the younger.
Disconformity: the contact between younger and older beds is marked by visible, irregular erosional surfaces. A paleosol might develop right above the disconformity surface because of the non-depositional setting.
Paraconformity: the bedding planes below and above the unconformity are parallel. A time gap is present, as shown by a faunal break, but there is no erosion, just a period of non-deposition.
Nonconformity: relatively young sediments are deposited right above older igneous or metamorphic rocks.
Lithostratigraphic correlation
To correlate lithostratigraphic units, geologists define facies, and look for key beds or key sequences that can be used as a datum.
Direct correlation: based on lithology, color, structure, thickness...
Indirect correlation: electric log correlation (gamma-ray, density, resistivity...), as sketched below.
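A minimal sketch of indirect correlation: slide one gamma-ray log against another and pick the lag with the highest normalized cross-correlation. The synthetic logs, offset, and noise levels below are made-up illustrations:

```python
import numpy as np

# Two synthetic gamma-ray logs sharing a stratigraphic signal, with the
# second well offset by 15 samples; cross-correlation recovers the shift.
rng = np.random.default_rng(0)
signature = rng.normal(size=200)                 # shared stratigraphic signal

log_a = signature + 0.2 * rng.normal(size=200)
log_b = np.roll(signature, 15) + 0.2 * rng.normal(size=200)

def best_lag(a: np.ndarray, b: np.ndarray) -> int:
    """Lag (in samples) that maximizes the normalized cross-correlation."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    corr = np.correlate(b, a, mode="full")
    return int(np.argmax(corr)) - (len(a) - 1)

print(best_lag(log_a, log_b))   # -> 15: log_b is shifted 15 samples deeper
```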
Geological correlation is the main tool for reconstructing the geometry of layering in sedimentary basins. Lithological correlation is the procedure of deciding which layers (strata) in geological cross-sections located in different places belong to the same geological body now (or belonged in the past). The identification is based on comparison of the physical and mineralogical characteristics of the rocks, and on general assumptions known as Steno's principles:
1. The sedimentary strata were deposited sequentially in time, the youngest at the top.
2. The strata are originally horizontal.
3. The stratum extends in all directions until it thins out or encounters a barrier.
The results are presented as a correlation scheme. Practical correlation faces many difficulties: fuzzy boundaries between layers, variations in the composition and structure of the rocks within a layer, unconformities in the sequence of layers, etc. This is why errors in correlation schemes are not rare. As the distances between available cross-sections decrease (for example, through the drilling of new wells), the quality of correlation improves, but in the meantime wrong geological decisions may already have been made, increasing the cost of geological projects.
Lithodemic stratigraphy
The law of superposition is inapplicable to intrusive, highly deformed, or metamorphic bodies of rock lacking discernible stratification. Such bodies of rock are described as lithodemic and are determined and delimited based on rock characteristics. The 1983 North American Stratigraphic Code adopted the formal terms lithodeme, which is comparable to a formation; a suite, which is analogous to a group, and a supersuite, similar to a supergroup. A lithodeme is the fundamental unit and should possess distinctive and consistent lithological features, comprising a single rock type or a mixture of two or more types that distinguishes the unit from those around it. As with formations, a lithodemic unit is given a geographical name combined with either a rock name or some term describing its form. The term suite is deprecated. Also formalized is the term complex, which applies to a body of rock of two or more genetic classes (sedimentary, metamorphic, or igneous). This establishes two hierarchies of lithodemic units:
Similar rules have been adopted in Sweden. However, the 1994 International Stratigraphic Guide regards plutons and non-layered metamorphic rocks of undetermined origin as special cases within lithostratigraphy.
| Physical sciences | Stratigraphy | Earth science |
2198661 | https://en.wikipedia.org/wiki/Trypanosoma%20brucei | Trypanosoma brucei | Trypanosoma brucei is a species of parasitic kinetoplastid belonging to the genus Trypanosoma that is present in sub-Saharan Africa. Unlike other protozoan parasites that normally infect blood and tissue cells, it is exclusively extracellular and inhabits the blood plasma and body fluids. It causes deadly vector-borne diseases: African trypanosomiasis or sleeping sickness in humans, and animal trypanosomiasis or nagana in cattle and horses. It is a species complex grouped into three subspecies: T. b. brucei, T. b. gambiense and T. b. rhodesiense. The first is a parasite of non-human mammals and causes nagana, while the latter two are zoonotic, infecting both humans and animals, and cause African trypanosomiasis.
T. brucei is transmitted between mammal hosts by an insect vector belonging to different species of tsetse fly (Glossina). Transmission occurs by biting during the insect's blood meal. The parasites undergo complex morphological changes as they move between insect and mammal over the course of their life cycle. The mammalian bloodstream forms are notable for their cell surface proteins, variant surface glycoproteins, which undergo remarkable antigenic variation, enabling persistent evasion of host adaptive immunity leading to chronic infection. T. brucei is one of only a few pathogens known to cross the blood brain barrier. There is an urgent need for the development of new drug therapies, as current treatments can have severe side effects and can prove fatal to the patient.
Whilst not historically regarded as T. brucei subspecies due to their different means of transmission, clinical presentation, and loss of kinetoplast DNA, genetic analyses reveal that T. equiperdum and T. evansi evolved from parasites very similar to T. b. brucei, and they are thought to be members of the brucei clade.
The parasite was discovered in 1894 by Sir David Bruce, after whom the scientific name was given in 1899.
History and discovery
Early records
Sleeping sickness in animals was described in ancient Egyptian writings. During the Middle Ages, Arabian traders noted the prevalence of sleeping sickness among Africans and their dogs. It was one of the major infectious diseases in southern and eastern Africa in the 19th century. The Zulu Kingdom (now part of South Africa) was severely struck by the disease, which became known to the British as nagana, a Zulu word meaning to be low or depressed in spirit. In other parts of Africa, Europeans called it the "fly disease."
John Atkins, an English naval surgeon, gave the first medical description of human sleeping sickness in 1734, attributing deaths in Guinea to what he called the "sleepy distemper." Another English physician, Thomas Masterman Winterbottom, gave a clearer description of the symptoms from Sierra Leone in 1803. Winterbottom described a key feature of the disease as swollen posterior cervical lymph nodes, and slaves who developed such swellings were ruled unfit for trade. The symptom is eponymously known as "Winterbottom's sign."
Discovery of the parasite
In 1894, the Royal Army Medical Corps appointed David Bruce, at the time assistant professor of pathology at the Army Medical School in Netley with the rank of Captain, to investigate a disease known as nagana in South Africa. The disease caused severe problems among the local cattle and British Army horses. On 27 October 1894, Bruce and his microbiologist wife Mary Elizabeth Bruce (née Steele) moved to Ubombo Hill, where the disease was most prevalent.
On the sixth day of the investigation, Bruce identified parasites in the blood of diseased cows. He initially noted them as a kind of filaria (tiny roundworms), but by the end of the year established that the parasites were "haematozoa" (protozoans) and were the cause of nagana. It was the discovery of Trypanosoma brucei. The scientific name was created by British zoologists Henry George Plimmer and John Rose Bradford in 1899, as Trypanosoma brucii due to a printer's error. The genus Trypanosoma had already been introduced by Hungarian physician David Gruby in his 1843 description of T. sanguinis, a species he discovered in frogs.
Outbreaks
In Uganda, the first case of human infection was reported in 1898. It was followed by an outbreak in 1900. By 1901, it had become severe, with the death toll estimated at about 20,000. More than 250,000 people died in the epidemic, which lasted for two decades. The disease was commonly popularised as "negro lethargy." It was not known whether human sleeping sickness and nagana were the same, or whether the two diseases were caused by similar parasites. Even the observations of Forde and Dutton did not indicate that the trypanosome was related to sleeping sickness.
Sleeping Sickness Commission
The Royal Society constituted a three-member Sleeping Sickness Commission on 10 May 1902 to investigate the epidemic in Uganda. The Commission comprised George Carmichael Low from the London School of Hygiene and Tropical Medicine as the leader, his colleague Aldo Castellani, and Cuthbert Christy, a medical officer on duty in Bombay, India. At the time, the etiology was debated: some favoured a bacterial infection, while others suspected a helminth infection. The first investigation focussed on Filaria perstans (later renamed Mansonella perstans), a small roundworm transmitted by flies, and on bacteria as possible causes, only to discover that the epidemic was not related to these pathogens. The team was described as an "ill-assorted group" and a "queer lot," and the expedition "a failure." Low, whose conduct was described as "truculent and prone to take offence," left the Commission and Africa after three months.
In February 1902, the British War Office, following a request from the Royal Society, appointed David Bruce to lead the second Sleeping Sickness Commission. With David Nunes Nabarro (from the University College Hospital), Bruce and his wife joined Castellani and Christy on 16 March. In November 1902, Castellani found trypanosomes in the cerebrospinal fluid of an infected person and was convinced that the trypanosome was the causative parasite of sleeping sickness. Like Low's, his conduct has been criticised, and the Royal Society refused to publish his report. He was further infuriated when Bruce advised him not to draw rash conclusions without further evidence, as there were many other parasites to consider. Castellani left Africa in April and published his report as "On the discovery of a species of Trypanosoma in the cerebrospinal fluid of cases of sleeping sickness" in The Lancet. By then the Royal Society had already published the report. By August 1903, Bruce and his team had established that the disease was transmitted by the tsetse fly Glossina palpalis. However, Bruce did not understand the trypanosome life cycle and believed that the parasites were simply transmitted from one person to another.
Around the same time, Germany sent an expeditionary team led by Robert Koch to investigate the epidemic in Togo and East Africa. In 1909, one of the team members, Friedrich Karl Kleine, discovered that the parasite had developmental stages in the tsetse flies. Bruce, in the third Sleeping Sickness Commission (1908–1912), which included Albert Ernest Hamerton, H.R. Bateman and Frederick Percival Mackie, established the basic developmental cycle through which the trypanosome must pass in the tsetse fly. An open question, noted by Bruce at this stage, was how the trypanosome finds its way to the salivary glands. Muriel Robertson, in experiments carried out between 1911 and 1912, established how ingested trypanosomes finally reach the salivary glands of the fly.
Discovery of human trypanosomes
British Colonial Surgeon Robert Michael Forde was the first to find the parasite in a human. He found it in an English steamboat captain who was admitted to a hospital at Bathurst, Gambia, in 1901. His report in 1902 indicates that he believed it to be a kind of filarial worm. From the same patient, Forde's colleague Joseph Everett Dutton identified it as a protozoan belonging to the genus Trypanosoma. Recognising its distinct features, Dutton proposed a new species name, Trypanosoma gambiense, in 1902.
Another human trypanosome (now called T. brucei rhodesiense) was discovered by British parasitologists John William Watson Stephens and Harold Benjamin Fantham. In 1910, Stephens noted in his experimental infections in rats that the trypanosome, obtained from an individual from Northern Rhodesia (later Zambia), was not the same as T. gambiense. The source of the parasite, an Englishman travelling in Rhodesia, had been found with the blood parasites in 1909 and was transported to and admitted at the Royal Southern Hospital in Liverpool under the care of Ronald Ross. Fantham described the parasite's morphology and found that it was a different trypanosome.
Species
T. brucei is a species complex that includes:
T. brucei gambiense which causes slow onset chronic trypanosomiasis in humans. It is most common in central and western Africa, where humans are thought to be the primary reservoir. In 1973, David Hurst Molyneux was the first to find infection of this strain in wildlife and domestic animals. Since 2002, there are several reports showing that animals, including cattle, are also infected. It is responsible for 98% of all human African trypanosomiasis, and is roughly 100% fatal.
T. brucei rhodesiense which causes fast onset acute trypanosomiasis in humans. A highly zoonotic parasite, it is prevalent in southern and eastern Africa, where game animals and livestock are thought to be the primary reservoir.
T. brucei brucei which causes animal trypanosomiasis, along with several other species of Trypanosoma. T. b. brucei is not infective to humans due to its susceptibility to lysis by trypanosome lytic factor-1 (TLF-1). However, it is closely related to, and shares fundamental features with, the human-infective subspecies. Only rarely can T. b. brucei infect a human.
The subspecies cannot be distinguished by their structure, as they are identical under the microscope; geographical location is the main distinction. Molecular markers have been developed for individual identification. The serum resistance-associated (SRA) gene is used to differentiate T. b. rhodesiense from the other subspecies. The TgsGP gene, found only in type 1 T. b. gambiense, likewise specifically distinguishes T. b. gambiense strains.
The subspecies lack many of the features commonly considered necessary to constitute monophyly. As such, Lukeš et al. (2022) propose treating the group as a polyphyletic set of ecotypes.
Etymology
The genus name is derived from two Greek words: τρυπανον (trypanon or trupanon), which means "borer" or "auger", referring to the corkscrew-like movement; and σῶμα (sôma), meaning "body." The specific name is after David Bruce, who discovered the parasites in 1894. The subspecies, the human strains, are named after the regions in Africa where they were first identified: T. brucei gambiense was described from an Englishman in Gambia in 1901; T. brucei rhodesiense was found from another Englishman in Northern Rhodesia in 1909.
Structure
T. brucei is a typical unicellular eukaryotic cell, and measures 8 to 50 μm in length. It has an elongated body with a streamlined and tapered shape. Its cell membrane (called the pellicle) encloses the cell organelles, including the nucleus, mitochondria, endoplasmic reticulum, Golgi apparatus, and ribosomes. In addition, there is an unusual organelle called the kinetoplast, a complex of thousands of interlinked circles of mitochondrial DNA known as mini- and maxicircles. The kinetoplast lies near the basal body, from which it is indistinguishable under the microscope. From the basal body arises a single flagellum that runs towards the anterior end. Along the body surface, the flagellum is attached to the cell membrane, forming an undulating membrane; only the tip of the flagellum is free at the anterior end. The cell surface of the bloodstream form features a dense coat of variant surface glycoproteins (VSGs), which is replaced by an equally dense coat of procyclins when the parasite differentiates into the procyclic phase in the tsetse fly midgut.
Trypanosomatids show several different classes of cellular organisation of which two are adopted by T. brucei at different stages of the life cycle:
Epimastigote, which is found in tsetse fly. Its kinetoplast and basal body lie anterior to the nucleus, with a long flagellum attached along the cell body. The flagellum starts from the centre of the body.
Trypomastigote, which is found in mammalian hosts. The kinetoplast and basal body are posterior to the nucleus. The flagellum arises from the posterior end of the body.
These names are derived from the Greek mastig- meaning whip, referring to the trypanosome's whip-like flagellum. The trypanosome flagellum has two main structures. It is made up of a typical flagellar axoneme, which lies parallel to the paraflagellar rod, a lattice structure of proteins unique to the kinetoplastids, euglenoids and dinoflagellates.
The microtubules of the flagellar axoneme lie in the normal 9+2 arrangement, orientated with the + at the anterior end and the − in the basal body. The cytoskeletal structure extends from the basal body to the kinetoplast. The flagellum is bound to the cytoskeleton of the main cell body by four specialised microtubules, which run parallel and in the same direction to the flagellar tubulin.
The flagellar function is twofold: locomotion via oscillations along the attached flagellum and cell body in the mammalian bloodstream and the tsetse fly gut, and attachment to the salivary gland epithelium of the fly during the epimastigote stage. The flagellum propels the body in such a way that the axoneme generates the oscillation and a flagellar wave is created along the undulating membrane. As a result, the body moves in a corkscrew pattern. In the flagella of other organisms, the movement starts from the base towards the tip, while in T. brucei and other trypanosomatids the beat originates from the tip and progresses towards the base, forcing the body to move towards the direction of the tip of the flagellum.
Life cycle
T. brucei completes its life cycle between the tsetse fly (of the genus Glossina) and mammalian hosts, including humans, cattle, horses, and wild animals. In stressful environments, T. brucei produces exosomes containing the spliced leader RNA and uses the endosomal sorting complexes required for transport (ESCRT) system to secrete them as extracellular vesicles (EVs). When absorbed by other trypanosomes, these EVs trigger repulsive movement away from the area, and so away from bad environments.
In mammalian host
Infection occurs when a vector tsetse fly bites a mammalian host. The fly injects the metacyclic trypomastigotes into the skin tissue, from where they enter the lymphatic system and then the bloodstream. The initial trypomastigotes are short and stumpy (SS). They are protected from the host's immune system by the variant surface glycoproteins on their body surface, which undergo antigenic variation. Once inside the bloodstream, they grow into long and slender (LS) forms and multiply by binary fission. Some of the daughter cells then become short and stumpy again, while some remain as intermediate forms, representing a transitional stage between the long and short forms. The long slender forms are able to penetrate the blood vessel endothelium and invade extravascular tissues, including the central nervous system (CNS) and, in pregnant women, the placenta.
Wild animals can also be infected by the tsetse fly and act as reservoirs. In these animals the parasites do not produce disease, but can be transmitted back to the usual hosts. Besides preparing the parasite to be taken up and vectored to another host by a tsetse fly, the transition from the LS to the SS form in the mammal also prolongs the host's lifespan: by controlling parasitemia, it increases the total period during which any particular infected host can transmit the parasite.
In the tsetse fly
Unlike the anopheline mosquitoes and sandflies that transmit other protozoan infections, in which only the females feed on blood, both sexes of tsetse flies are blood feeders and transmit trypanosomes equally. The short and stumpy (SS) trypomastigotes are taken up by tsetse flies during a blood meal; survival in the tsetse midgut is one reason for the particular adaptations of the SS stage. The trypomastigotes enter the midgut of the fly, where they become procyclic trypomastigotes as they replace their VSG coat with other protein coats called procyclins. Because the fly faces digestive damage from immune factors in the bloodmeal, it produces serpins to suppress these factors. The serpins, including GmmSRPN3, GmmSRPN5, GmmSRPN9, and especially GmmSRPN10, are then hijacked by the parasite to aid its own midgut infection, being used to inactivate bloodmeal trypanolytic factors that would otherwise make the fly host inhospitable.
The procyclic trypomastigotes cross the peritrophic matrix, undergo slight elongation, and migrate to the anterior part of the midgut as non-proliferative, long mesocyclic trypomastigotes. On reaching the proventriculus, they become thinner and undergo cytoplasmic rearrangement to give rise to proliferative epimastigotes. The epimastigotes divide asymmetrically to produce long and short epimastigotes. The long epimastigotes cannot migrate further and simply die off by apoptosis. The short epimastigotes migrate from the proventriculus via the foregut and proboscis to the salivary glands, where they attach to the salivary gland epithelium. Not even all the short forms complete the migration to the salivary glands; most perish on the way, and only up to five may survive.
In the salivary glands, the survivors undergo two phases of reproduction. The first cycle is an equal mitosis, by which a mother cell produces two similar daughter epimastigotes that remain attached to the epithelium. This is the main mode of reproduction in first-stage infection, ensuring a sufficient number of parasites in the salivary gland. The second cycle, which usually occurs in late-stage infection, involves an unequal mitosis that produces two different daughter cells from the mother epimastigote: one is an epimastigote that remains non-infective, and the other is a trypomastigote. The trypomastigotes detach from the epithelium and transform into short and stumpy trypomastigotes; their surface procyclins are replaced with VSGs, and they become the infective metacyclic trypomastigotes. Complete development in the fly takes about 20 days. The metacyclic trypomastigotes are injected into the mammalian host along with the saliva during biting, for which reason the parasites are known as salivarian.
In the case of T. b. brucei infecting Glossina palpalis gambiensis, the parasite changes the proteome contents of the fly's head and causes behavioral changes such as an unnecessarily increased feeding frequency, which increases transmission opportunities. This is related to altered glucose metabolism that creates a perceived need for more calories; the metabolic change, in turn, is due to the complete absence of glucose-6-phosphate 1-dehydrogenase in infected flies. Monoamine neurotransmitter synthesis is also altered: production of aromatic L-amino acid decarboxylase, which is involved in dopamine and serotonin synthesis, and of α-methyldopa hypersensitive protein is induced. This is similar to the alterations seen in the head proteomes of other dipteran vectors infected by other eukaryotic parasites of mammals.
Reproduction
Binary fission
The reproduction of T. brucei is unusual compared to most eukaryotes. The nuclear membrane remains intact and the chromosomes do not condense during mitosis. The basal body, unlike the centrosome of most eukaryotic cells, does not play a role in the organisation of the spindle and instead is involved in division of the kinetoplast. The events of reproduction are:
The basal body duplicates and both remain associated with the kinetoplast. Each basal body forms a separate flagellum.
Kinetoplast DNA undergoes synthesis, then the kinetoplast divides, coupled with separation of the two basal bodies.
Nuclear DNA undergoes synthesis while a new flagellum extends from the younger, more posterior, basal body.
The nucleus undergoes mitosis.
Cytokinesis progresses from the anterior to posterior.
Division completes with abscission.
Meiosis
In the 1980s, DNA analyses of the developmental stages of T. brucei began to indicate that the trypomastigote in the tsetse fly undergoes meiosis, i.e., a sexual reproduction stage, though meiosis is not necessary to complete the life cycle. The existence of meiosis-specific proteins was reported in 2011, and haploid gametes (daughter cells produced after meiosis) were discovered in 2014. The haploid trypomastigote-like gametes can interact with each other via their flagella and undergo cell fusion, a process called syngamy. Thus, in addition to binary fission, T. brucei can multiply by sexual reproduction. Trypanosomes belong to the supergroup Excavata and are one of the earliest diverging lineages among eukaryotes. The discovery of sexual reproduction in T. brucei supports the hypothesis that meiosis and sexual reproduction are ancestral and ubiquitous features of eukaryotes.
Infection and pathogenicity
The insect vectors for T. brucei are various species of tsetse fly (genus Glossina). The major vectors of T. b. gambiense, which causes West African sleeping sickness, are G. palpalis, G. tachinoides, and G. fuscipes, while the principal vectors of T. b. rhodesiense, which causes East African sleeping sickness, are G. morsitans, G. pallidipes, and G. swynnertoni. Animal trypanosomiasis is transmitted by a dozen species of Glossina.
In later stages of a T. brucei infection of a mammalian host, the parasite may migrate from the bloodstream to infect the lymph and cerebrospinal fluid. It is this tissue invasion that produces the symptoms of sleeping sickness.
In addition to the major form of transmission via the tsetse fly, T. brucei may be transferred between mammals via bodily fluid exchange, such as by blood transfusion or sexual contact, although this is thought to be rare. Newborn babies can be infected (vertical or congenital transmission) from infected mothers.
Chemotherapy
There are four drugs generally recommended for the first-line treatment of African trypanosomiasis: suramin (developed in 1921), pentamidine (1941), melarsoprol (1949), and eflornithine (1990). These drugs are not fully effective and are toxic to humans, and the parasites have developed resistance against all of them. The drugs are also of limited application, since each is effective only against specific subspecies of T. brucei and specific life cycle stages of the parasite. Suramin is used only for first-stage infection with T. b. rhodesiense, pentamidine for first-stage infection with T. b. gambiense, and eflornithine for second-stage infection with T. b. gambiense. Melarsoprol is the only drug effective against both subspecies in both infection stages, but it is highly toxic: about 5% of treated individuals die of brain damage (reactive encephalopathy). Another drug, nifurtimox, recommended for Chagas disease (American trypanosomiasis), is weak on its own, but in combination with eflornithine it is used as the first-line medication against second-stage infection with T. b. gambiense.
Historically, arsenical and mercurial compounds were introduced in the early 20th century, with success particularly in animal infections. The German physician Paul Ehrlich and his Japanese associate Kiyoshi Shiga developed the first specific trypanocidal drug in 1904 from a dye, trypan red, which they named Trypanroth. These chemical preparations were effective only at high, toxic dosages and were not suitable for clinical use.
Animal trypanosomiasis is treated with six drugs: diminazene aceturate, homidium (homidium bromide and homidium chloride), isometamidium chloride, melarsomine, quinapyramine, and suramin. All are highly toxic to animals, and drug resistance is prevalent. Homidium was the first prescription anti-trypanosomal drug. It was developed as a modified compound of phenanthridine, which was found in 1938 to have trypanocidal activity against the bovine parasite T. congolense. Among the phenanthridine products, dimidium bromide and its derivatives were first used in 1948 in animal cases in Africa and became known as homidium (or, in molecular biology, as ethidium bromide).
Drug development
The major challenge in treating the human disease has been to find drugs that readily pass the blood–brain barrier. The latest drug to come into clinical use is fexinidazole, but promising results have also been obtained with the benzoxaborole drug acoziborole (SCYX-7158). This drug is under evaluation as a single-dose oral treatment, which would be a great advantage over currently used drugs. Another extensively studied research field in Trypanosoma brucei is the targeting of its nucleotide metabolism. These studies have led both to the development of adenosine analogues that look promising in animal studies, and to the finding that downregulation of the P2 adenosine transporter is a common way for the parasite to acquire partial resistance against the melaminophenyl arsenical and diamidine drug families (which contain melarsoprol and pentamidine, respectively). This is particularly a problem with the veterinary drug diminazene aceturate. Drug uptake and degradation are two major issues to consider in avoiding the development of drug resistance: nucleoside analogues need to be taken up by the P1 nucleoside transporter (instead of P2), and they also need to resist cleavage inside the parasite.
Phytochemicals. Some phytochemicals have shown promise in research against T. b. brucei. Aderbauer et al. (2008) and Umar et al. (2010) found Khaya senegalensis effective in vitro, and Ibrahim et al. (2008, 2013) found it effective in vivo in rats. Ibrahim et al. (2013) found that a lower dose reduces parasitemia caused by this subspecies, while a higher dose is curative and prevents injury.
Distribution
T. brucei is found where its tsetse fly vectors are prevalent: the tropical rainforest (Af), tropical monsoon (Am), and tropical savannah (Aw) areas of continental Africa. Hence, the equatorial region of Africa is called the "sleeping sickness" belt. The specific type of trypanosome, however, differs according to geography. T. b. rhodesiense is found primarily in East Africa (Botswana, the Democratic Republic of the Congo, Ethiopia, Kenya, Malawi, Tanzania, Uganda, and Zimbabwe), while T. b. gambiense is found in Central and West Africa.
Impact
T. brucei is a major cause of livestock disease in sub-Saharan Africa. It is thus of tremendous veterinary concern and one of the greatest limitations on agriculture in Africa and the economic life of sub-Saharan Africa.
Evolution
Trypanosoma brucei gambiense evolved from a single progenitor around 10,000 years ago. It is evolving asexually, and its genome shows the Meselson effect, the independent accumulation of mutations in the two alleles of each gene that is expected under long-term asexual evolution.
Genetics
T. b. gambiense comprises two subpopulations, distinct groups that differ in genotype and phenotype. Group 2 is more akin to T. b. brucei than to group 1 T. b. gambiense.
All T. b. gambiense are resistant to killing by a serum component, trypanosome lytic factor (TLF), of which there are two types: TLF-1 and TLF-2. Group 1 T. b. gambiense parasites avoid uptake of the TLF particles, while those of group 2 are able to either neutralize or compensate for the effects of TLF.
In contrast, resistance in T. b. rhodesiense is dependent upon the expression of a serum resistance associated (SRA) gene. This gene is not found in T. b. gambiense.
Genome
The genome of T. brucei is made up of:
11 pairs of large chromosomes of 1 to 6 megabase pairs.
3–5 intermediate chromosomes of 200 to 500 kilobase pairs.
Around 100 minichromosomes of around 50 to 100 kilobase pairs. These may be present in multiple copies per haploid genome.
Most genes are held on the large chromosomes, with the minichromosomes carrying only VSG genes. The genome has been sequenced and is available on GeneDB.
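The rough scale of the nuclear genome implied by these figures can be tallied directly. The sketch below is a back-of-the-envelope calculation using only the counts and size ranges quoted above; it counts each of the 11 large chromosomes once per haploid set, ignores the possible multiple copies of minichromosomes, and does not represent a measured genome size.

```python
# Back-of-the-envelope tally of the haploid nuclear genome size of
# T. brucei, using only the chromosome counts and size ranges quoted
# above. Multiple copies of minichromosomes would raise the total.
classes = {
    # name: (min_count, max_count, min_size_bp, max_size_bp)
    "large chromosomes": (11, 11, 1_000_000, 6_000_000),
    "intermediate chromosomes": (3, 5, 200_000, 500_000),
    "minichromosomes": (100, 100, 50_000, 100_000),
}

low = sum(cmin * smin for cmin, _, smin, _ in classes.values())
high = sum(cmax * smax for _, cmax, _, smax in classes.values())
print(f"Implied haploid genome size: {low / 1e6:.1f} to {high / 1e6:.1f} Mbp")
# Prints roughly 16.6 to 78.5 Mbp; the wide range mostly reflects the
# 1-6 Mbp spread quoted for the large chromosomes.
```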
The mitochondrial genome is condensed into the kinetoplast, an unusual feature unique to the kinetoplastid protozoans. The kinetoplast and the basal body of the flagellum are strongly associated via a cytoskeletal structure.
In 1993, a new base, β-D-glucopyranosyloxymethyluracil (base J), was identified in the nuclear DNA of T. brucei.
VSG coat
The surface of T. brucei and other species of trypanosomes is covered by a dense external coat called variant surface glycoprotein (VSG). VSGs are 60-kDa proteins that are densely packed (~5 × 10⁶ molecules per cell) to form a 12–15 nm surface coat. VSG dimers make up about 90% of all cell surface proteins in trypanosomes and ~10% of total cell protein.
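As a quick consistency check on these figures (an illustrative calculation, not a value from the source), the quoted molecule count and molecular mass can be converted into a coat mass, from which the total protein content they imply follows:

```python
# Consistency check of the VSG coat figures quoted above:
# ~5e6 VSG molecules per cell, ~60 kDa each, coat ~10% of total protein.
DALTON_IN_GRAMS = 1.66054e-24

vsg_molecules = 5e6      # molecules per cell (from the text)
vsg_mass_da = 60e3       # 60 kDa per VSG (from the text)

coat_mass_g = vsg_molecules * vsg_mass_da * DALTON_IN_GRAMS
total_protein_g = coat_mass_g / 0.10   # coat is ~10% of total cell protein

print(f"VSG coat mass:      ~{coat_mass_g * 1e12:.1f} pg")
print(f"Total cell protein: ~{total_protein_g * 1e12:.0f} pg")
# Prints ~0.5 pg and ~5 pg, a plausible protein content for a cell of
# this size, so the quoted percentages are mutually consistent.
```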
This VSG coat enables an infecting T. brucei population to persistently evade the host's immune system, allowing chronic infection. VSG is highly immunogenic, and an immune response raised against a specific VSG coat rapidly kills trypanosomes expressing this variant. Antibody-mediated trypanosome killing can also be observed in vitro by a complement-mediated lysis assay. However, with each cell division there is a possibility that one or both of the progeny will switch expression to change the VSG that is being expressed. The frequency of VSG switching has been measured to be approximately 0.1% per division. As T. brucei populations can peak at a size of 10¹¹ within a host, this rapid rate of switching ensures that the parasite population is typically highly diverse. Because host immunity against a specific VSG does not develop immediately, some parasites will have switched to an antigenically distinct VSG variant by the time immunity arrives, and can go on to multiply and continue the infection. The clinical effect of this cycle is successive 'waves' of parasitemia (trypanosomes in the blood).
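The interplay of growth, switching, and delayed immunity that produces these waves can be illustrated with a deliberately minimal simulation. This is a sketch, not a published model: only the 0.1% per-division switching rate comes from the text, while the clearance delay, carrying capacity, starting population, and the rule that each variant's switching progeny found one new variant per generation are assumptions chosen purely for illustration.

```python
# Minimal illustrative model of antigenic variation in T. brucei.
SWITCH_RATE = 0.001   # ~0.1% of divisions switch VSG (measured value)
DELAY = 8             # generations before immunity clears a variant (assumed)
CAP = 1e9             # within-host carrying capacity (assumed)

counts = {0: 1000.0}  # parasites per VSG variant; one founding variant
first_seen = {0: 0}   # generation at which each variant arose
next_variant = 1

for gen in range(1, 61):
    new_counts = {}
    for variant, n in counts.items():
        if gen - first_seen[variant] >= DELAY:
            continue                     # antibodies clear this variant
        grown = 2 * n                    # binary fission each generation
        switched = grown * SWITCH_RATE   # progeny expressing a new VSG
        new_counts[variant] = new_counts.get(variant, 0.0) + grown - switched
        new_counts[next_variant] = switched
        first_seen[next_variant] = gen
        next_variant += 1
    total = sum(new_counts.values())
    if total > CAP:                      # crude stand-in for resource limits
        new_counts = {v: c * CAP / total for v, c in new_counts.items()}
    counts = {v: c for v, c in new_counts.items() if c >= 1.0}
    print(f"gen {gen:2d}: {sum(counts.values()):12.4g} parasites, "
          f"{len(counts):3d} variants")
```

Run over 60 generations, the total population repeatedly climbs, crashes as dominant variants are cleared after the immune delay, and recovers from recently switched minor variants, qualitatively reproducing the successive waves of parasitemia described above.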
Expression of VSG genes occurs through a number of mechanisms yet to be fully understood. The expressed VSG can be switched either by activating a different expression site (and thus changing to express the VSG in that site), or by changing the VSG gene in the active site to a different variant. The genome contains many hundreds, if not thousands, of VSG genes, both on minichromosomes and in repeated sections ('arrays') in the interior of the chromosomes. These are transcriptionally silent, typically with omitted sections or premature stop codons, but are important in the evolution of new VSG genes. It is estimated that up to 10% of the T. brucei genome may be made up of VSG genes or pseudogenes. It is thought that any of these genes can be moved into the active site by recombination for expression. VSG silencing is largely due to the effects of the histone variants H3.V and H4.V, which cause changes in the three-dimensional structure of the T. brucei genome that result in a lack of expression. VSG genes are typically located in the subtelomeric regions of the chromosomes, which makes it easier for them to be silenced when they are not being used. It remains unproven whether the regulation of VSG switching is purely stochastic or whether environmental stimuli affect switching frequency. Switching is linked to two factors: variation in the activation of individual VSG genes, and differentiation to the "short stumpy" stage, triggered by conditions of high population density, which is the nonreproductive, interhost transmission stage. It also remains unexplained how this transition is timed and how the next surface protein gene is chosen. These questions of antigenic variation in T. brucei and other parasites are among the most interesting in the study of infection.
Killing by human serum and resistance to human serum killing
Trypanosoma brucei brucei (as well as the related species T. equiperdum and T. evansi) is not human-infective because it is susceptible to innate immune system 'trypanolytic' factors present in the serum of some primates, including humans. These trypanolytic factors have been identified as two serum complexes, designated TLF-1 and TLF-2, both of which contain haptoglobin-related protein (Hpr) and apolipoprotein L-1 (ApoL1). TLF-1 is a member of the high-density lipoprotein family of particles, while TLF-2 is a related high-molecular-weight serum protein-binding complex. The protein components of TLF-1 are haptoglobin-related protein (Hpr), apolipoprotein L-1 (ApoL1), and apolipoprotein A-I (ApoA-I); these three proteins are colocalized within spherical particles containing phospholipids and cholesterol. The protein components of TLF-2 include IgM and apolipoprotein A-I.
Trypanolytic factors are found in only a few species, including humans, gorillas, mandrills, baboons, and sooty mangabeys. This appears to be because haptoglobin-related protein and apolipoprotein L-1 are unique to primates, suggesting these genes originated in the primate genome.
Human infective subspecies T. b. gambiense and T. b. rhodesiense have evolved mechanisms of resisting the trypanolytic factors, described below.
ApoL1
ApoL1 is a member of a six gene family, ApoL1-6, that have arisen by tandem duplication. These proteins are normally involved in host apoptosis or autophagic death and possess a Bcl-2 homology domain 3. ApoL1 has been identified as the toxic component involved in trypanolysis. ApoLs have been subject to recent selective evolution possibly related to resistance to pathogens.
The gene encoding ApoL1 is found on the long arm of chromosome 22 (22q12.3). Variants of this gene, termed G1 and G2, provide protection against T. b. rhodesiense. These benefits are not without a downside, as a specific ApoL1 glomerulopathy has been identified; this glomerulopathy may help to explain the greater prevalence of hypertension in African populations.
The gene encodes a protein of 383 residues, including a typical signal peptide of 12 amino acids. The plasma protein is a single chain polypeptide with an apparent molecular mass of 42 kilodaltons. ApoL1 has a membrane pore forming domain functionally similar to that of bacterial colicins. This domain is flanked by the membrane addressing domain and both these domains are required for parasite killing.
Within the kidney, ApoL1 is found in the podocytes in the glomeruli, the proximal tubular epithelium and the arteriolar endothelium. It has a high affinity for phosphatidic acid and cardiolipin and can be induced by interferon gamma and tumor necrosis factor alpha.
Hpr
Hpr is 91% identical to haptoglobin (Hp), an abundant acute-phase serum protein that possesses a high affinity for hemoglobin (Hb). When Hb is released from erythrocytes undergoing intravascular hemolysis, Hp forms a complex with the Hb, and the complex is removed from circulation by the CD163 scavenger receptor. In contrast to Hp–Hb, the Hpr–Hb complex does not bind CD163, and the Hpr serum concentration appears to be unaffected by hemolysis.
Killing mechanism
The association of Hpr with hemoglobin allows TLF-1 binding and uptake via the trypanosome haptoglobin-hemoglobin receptor (TbHpHbR). TLF-2 enters trypanosomes independently of TbHpHbR. TLF-1 uptake increases when haptoglobin levels are low, as TLF-1 then outcompetes haptoglobin in binding free hemoglobin in the serum. However, the complete absence of haptoglobin is associated with a decreased rate of killing by serum.
The trypanosome haptoglobin-hemoglobin receptor is an elongated three-α-helical bundle with a small membrane-distal head. This protein extends above the variant surface glycoprotein layer that surrounds the parasite.
The first step in the killing mechanism is the binding of TLF to high-affinity receptors, the haptoglobin-hemoglobin receptors, located in the flagellar pocket of the parasite. The bound TLF is endocytosed via coated vesicles and then trafficked to the parasite's lysosomes. ApoL1 is the main lethal factor in the TLFs and kills trypanosomes after insertion into endosomal and lysosomal membranes. After ingestion by the parasite, the TLF-1 particle is trafficked to the lysosome, where ApoL1 is activated by a pH-mediated conformational change: after fusion with the lysosome, the pH drops from ~7 to ~5, inducing a conformational change in the ApoL1 membrane-addressing domain that in turn causes a salt-bridge-linked hinge to open. This releases ApoL1 from the HDL particle to insert into the lysosomal membrane, where the ApoL1 protein creates anionic pores. The pores lead to depolarization of the membrane, a continuous influx of chloride, and subsequent osmotic swelling of the lysosome; this influx in turn leads to rupture of the lysosome and the death of the parasite.
Resistance mechanisms: T. b. gambiense
Trypanosoma brucei gambiense causes 97% of human cases of sleeping sickness. Resistance to ApoL1 is principally mediated by the hydrophobic β-sheet of the T. b. gambiense-specific glycoprotein. Other factors involved in resistance appear to be a change in cysteine protease activity, and inactivation of TbHpHbR by a leucine-to-serine substitution (L210S) at codon 210, caused by a thymidine-to-cytosine mutation at the second codon position.
These mutations may have evolved because of the coexistence of the parasite with malaria in the same regions. Haptoglobin levels are low in malaria because of the hemolysis that occurs when merozoites are released into the blood: the rupture of the erythrocytes releases free hemoglobin, which is bound by haptoglobin, and the complex is then removed from the blood by the reticuloendothelial system.
Resistance mechanisms: T. b. rhodesiense
Trypanosoma brucei rhodesiense relies on a different mechanism of resistance: the serum resistance-associated protein (SRA). The SRA gene is a truncated version of the parasite's major and variable surface antigen, the variant surface glycoprotein, but it has little similarity with the VSG genes (sequence homology of less than 25%). SRA is an expression site-associated gene in T. b. rhodesiense, located upstream of the VSGs in the active telomeric expression site. The protein is largely localized to small cytoplasmic vesicles between the flagellar pocket and the nucleus. In T. b. rhodesiense, the TLF is directed to SRA-containing endosomes, though some dispute remains as to its presence in the lysosome. Within the trypanosome lysosome, SRA binds to ApoL1 through a coiled-coil interaction at the SRA-interacting domain of ApoL1. This interaction prevents the release of the ApoL1 protein and thus the lysis of the lysosome and death of the parasite.
Baboons are known to be resistant to T. b. rhodesiense. The baboon version of the ApoL1 gene differs from the human gene in a number of respects, including two critical lysines near the C-terminus that are necessary and sufficient to prevent baboon ApoL1 from binding to SRA. Experimental mutations that protect ApoL1 from neutralization by SRA have been shown to confer trypanolytic activity against T. b. rhodesiense. These mutations resemble those found in baboons, but also resemble natural human mutations that confer protection against T. b. rhodesiense and are linked to kidney disease.
| Biology and health sciences | Excavata | Plants |
2198678 | https://en.wikipedia.org/wiki/Wet%20wipe | Wet wipe | A wet wipe, also known as a wet towel, wet one, moist towelette, disposable wipe, disinfecting wipe, or baby wipe (in specific circumstances), is a small to medium-sized moistened piece of plastic or cloth that either comes folded and individually wrapped for convenience or, in the case of dispensers, as a large roll from which individual wipes can be torn off. Wet wipes are used for cleaning purposes such as personal hygiene and household cleaning; each is a separate product depending on the chemicals added, and medical or office cleaning wipes are not intended for skin hygiene.
In 2013, amid increasing sales of the product in affluent countries, Consumer Reports reported that, according to its tests, efforts to make the wipes "flushable" had not entirely succeeded.
Invention
American Arthur Julius is seen as the inventor of wet wipes. Julius worked in the cosmetics industry and in 1957, adjusted a soap portioning machine, putting it in a loft in Manhattan. Julius trademarked the name Wet-Nap in 1958, a name for the product that is still being used. After fine tuning his new hand-cleaning aid together with a mechanic, he unveiled his invention at the 1960 National Restaurant Show in Chicago and in 1963 started selling Wet-Nap products to Colonel Harland Sanders to be distributed to customers of Kentucky Fried Chicken.
Production
Ninety percent of wet wipes on the market are produced from nonwoven fabrics made of polyester or polypropylene.
The material is moistened with water or other liquids (e.g., isopropyl alcohol) depending on the applications. The material may be treated with softeners, lotions, or perfume to adjust the tactile and olfactory properties. Preservatives such as methylisothiazolinone are used to prevent bacterial or fungal growth in the package. The finished wet wipes are folded and put in pocket size package or a box dispenser.
Uses
Wet wipes can serve a number of personal and household purposes. Although the product is marketed primarily for wiping infants' bottoms during diaper changes, it is not uncommon for consumers to also use it to clean floors, toilet seats, and other surfaces around the home. Parents also use wet wipes, called baby wipes in the context of baby care, for wiping up baby vomit and for cleaning babies' hands and faces.
Baby wipes
Baby wipes are wet wipes used to cleanse the sensitive skin of infants. They are saturated with solutions ranging from gentle cleansing ingredients to alcohol-based cleansers. Baby wipes are typically sold in packs of varying counts (up to 80 or more sheets per pack) and often come with dispensing mechanisms. Baby wipes most likely originated in the mid-1950s, as more people were travelling and needed a way to clean up on the go. One of the first companies to produce them was Nice-Pak, which made napkin-sized paper cloths saturated with a scented skin cleanser.
The first wet-wipe products specifically marketed as baby wipes, such as Kimberly-Clark's Huggies wipes and Procter & Gamble's Pampers wipes, appeared on the market in 1990. As the technology to produce wipes matured and became more affordable, smaller brands began to appear. By the 1990s, most superstores, such as Kmart and Wal-Mart, had their own private-label brand of wipes made by other manufacturers. After this period there was a boom in the industry, and many local brands started manufacturing because of low entry barriers.
In December 2018, a New Zealand company launched the country's first ever wet and baby wipe alternative, the BDÉT Foam Wash.
Toilet wet wipes
Toilet wet wipes are sometimes preferred to standard toilet paper. Many brands sell toilet wet wipes claimed to be "flushable"; however, the wipes do not decompose in septic tanks, as they are made of polyester or polypropylene. In 2013, a Consumer Reports article said that none of the leading brands could pass its test.
Personal hygiene
Wet wipes are often included as part of a standard sealed cutlery package offered in restaurants or along with airline meals.
Wet wipes began to be marketed as a luxury alternative to toilet paper by 2005 by companies such as Kimberly-Clark and Procter & Gamble. They are dispensed in the toilets of restaurants, service stations, doctors' offices, and other places with public use.
Wet wipes have also found a use among visitors to outdoor music festivals, particularly those who camp, as an alternative to communal showers.
Cleansing pads
Cleansing pads are fibre sponges pre-soaked with water, alcohol, and other active ingredients for a specific intended use. They are ready-to-use hygiene products that offer a simple and convenient way to remove dirt and other undesirable residues.
There are different types of cleansing pads offered by the beauty industry: make-up removing pads, anti-spot treatments, and anti-acne pads, which usually contain salicylic acid, vitamins, menthol, and other treatments.
Cleansing pads for preventing infection are usually saturated with alcohol and bundled in sterile packages. Hands and instruments may be disinfected with these pads while treating wounds, and disinfecting cleansing pads are often included in first aid kits for this purpose. Since the H1N1 outbreak, sales of individually packaged impregnated wet wipes and gels in sachets and flowpacks have increased dramatically in the UK, following the Government's advice to keep hands and surfaces clean to prevent the spread of germs.
Industrial wipes
Industrial wipes are pre-impregnated, industrial-strength cleaning wipes whose cleaning fluid cuts through dirt while the high-performance fabric absorbs the residue. They can clean a wide range of tough substances from hands, tools, and surfaces, including grime, grease, oil- and water-based paints and coatings, adhesives, silicone and acrylic sealants, poly foam, epoxy, oil, tar, and more.
Pain relief
There are also pain relief pads saturated with alcohol and benzocaine. These pads are suitable for treating minor scrapes, burns, and insect bites: they disinfect the injury and also ease pain and itching.
Pet care
Wet wipes are produced specifically for pet care, for example eye, ear, or dental cleansing pads (with boric acid, potassium chloride, zinc sulfate, sodium borate) for dogs, cats, horses, and birds.
Healthcare
Medical wet wipes are available for various applications. These include alcohol wet wipes, chlorhexidine wipes (for disinfection of surfaces and noninvasive medical devices), and sporicidal wipes. Medical wipes can be used to prevent the spread of pathogens such as norovirus and Clostridioides difficile.
Effect on sewage systems
Water management companies ask people not to flush wet wipes down toilets, as their failure to break apart or dissolve in water can cause sewer blockages known as fatbergs.
Since the mid-2000s, wet wipes such as baby wipes have become more common as an alternative to toilet paper in affluent countries, including the United States and the United Kingdom. This usage has in some cases been encouraged by manufacturers, who have labelled some wet wipe brands as "flushable". Wet wipes, when flushed down the toilet, have been reported to clog internal plumbing, septic systems, and public sewer systems. The tendency of fat and wet wipes to cling together allegedly encourages the growth of the problematic sewer obstructions known as "fatbergs". In addition, some brands of wipes contain alcohol, which can kill bacteria and denature the enzymes responsible for breaking down solid waste in septic tanks. In the late 2010s, other alternatives such as gel wipes also came onto the market.
In 2014, a class action suit was filed in the U.S. District Court for the Northern District of Ohio against Target Corporation and Nice-Pak Products Inc. on behalf of consumers in Ohio who purchased Target-brand flushable wipes. The lawsuit alleged the retailer misled consumers by marking the packaging on its Up & Up brand wipes as flushable and safe for sewer and septic systems. The lawsuit also alleged that the products were a public health hazard because they clogged pumps at municipal waste-treatment facilities. Target and Nice-Pak agreed to settle the case in 2018.
In 2015, the city of Wyoming, Minnesota, launched a class action suit against six companies, including Procter & Gamble, Kimberly-Clark, and Nice-Pak, alleging they were fraudulently promoting their products as "flushable". The city dropped the lawsuit in 2018 after concluding that the city had not experienced damage to its sewer systems or a rise in maintenance costs. Upon announcement of the withdrawal of the suit, an industry trade group representing the manufacturers of the wipes released a statement that disputed the claims that the products are harmful to sewer systems.
In 2019, the industry body Water UK announced a new standard for flushable wet wipes. Wipes will need to pass rigorous testing in order to gain a new and approved "Fine to Flush" logo. As of January 2019, only one product had been confirmed to meet the standard, although there were about seven others in the process of being tested.
| Biology and health sciences | Hygiene products | Health |
2198997 | https://en.wikipedia.org/wiki/Acacia%20pycnantha | Acacia pycnantha | Acacia pycnantha, most commonly known as the golden wattle, is a tree of the family Fabaceae. It grows to a height of and has phyllodes (flattened leaf stalks) instead of true leaves. The profuse fragrant, golden flowers appear in late winter and spring, followed by long seed pods. Explorer Thomas Mitchell collected the type specimen, from which George Bentham wrote the species description in 1842. The species is native to southeastern Australia as an understorey plant in eucalyptus forest. Plants are cross-pollinated by several species of honeyeater and thornbill, which visit nectaries on the phyllodes and brush against flowers, transferring pollen between them.
A. pycnantha has become a weed in areas of Australia, as well as in Africa and Eurasia. Its bark produces more tannin than any other wattle species, resulting in its commercial cultivation for production of this compound. It has been widely grown as an ornamental garden plant and for cut flower production. A. pycnantha was made the official floral emblem of Australia in 1988, and has been featured on the country's postal stamps.
Description
Acacia pycnantha generally grows as a small tree to between in height, though trees of up to high have been reported in Morocco. The bark is generally dark brown to grey—smooth in younger plants though it can be furrowed and rough in older plants. Branchlets may be bare and smooth or covered with a white bloom. The mature trees do not have true leaves but have phyllodes—flat and widened leaf stems—that hang down from the branches. Shiny and dark green, these are between long, wide and falcate (sickle-shaped) to oblanceolate in shape. New growth has a bronze colouration. Field observations at Hale Conservation Park show the bulk of new growth occurs over spring and summer from October to January.
Floral buds are produced year-round on the tips of new growth, but only those initiated between November and May go on to flower several months later. Flowering usually takes place from July to November (late winter to early summer) in the golden wattle's native range, and because the later buds develop faster, flowering peaks over July and August. The bright yellow inflorescences occur in groups of 40 to 80 on -long racemes that arise from axillary buds. Each inflorescence is a ball-like structure covered by 40 to 100 small flowers that have five tiny petals (pentamerous) and long erect stamens, which give the flower head a fluffy appearance.
Developing after flowering has finished, the seed pods are flattish, straight or slightly curved, long and 5–8 mm wide. They are initially bright green, maturing to dark brown and have slight constrictions between the seeds, which are arranged in a line in the pod. The oblong seeds themselves are 5.5 to 6 mm long, black and shiny, with a clavate (club-shaped) aril. They are released in December and January, when the pods are fully ripe.
Similar species
Species similar in appearance include mountain hickory wattle (A. obliquinervia), coast golden wattle (A. leiophylla) and golden wreath wattle (A. saligna). Acacia obliquinervia has grey-green phyllodes, fewer flowers in its flower heads, and broader (-wide) seed pods. A. leiophylla has paler phyllodes. A. saligna has longer, narrower phyllodes.
Taxonomy
Acacia pycnantha was first formally described by botanist George Bentham in the London Journal of Botany in 1842. The type specimen was collected by the explorer Thomas Mitchell in present-day northern Victoria between Pyramid Hill and the Loddon River. Bentham thought it was related to A. leiophylla, which he described in the same paper. The specific epithet pycnantha is derived from the Greek words pyknos (dense) and anthos (flower), a reference to the dense cluster of flowers that make up the globular inflorescences. Queensland botanist Les Pedley reclassified the species as Racosperma pycnanthum in 2003, when he proposed placing almost all Australian members of the genus into the new genus Racosperma. However, this name is treated as a synonym of its original name.
Johann Georg Christian Lehmann described Acacia petiolaris in 1851 from a plant grown at Hamburg Botanic Gardens from seed said to be from the Swan River Colony (Perth). Carl Meissner described A. falcinella from material from Port Lincoln in 1855. Bentham classified both as A. pycnantha in his 1864 Flora Australiensis, though he did categorise a possible subspecies angustifolia based on material from Spencer Gulf with narrower phyllodes and fewer inflorescences. However, no subspecies are currently recognised, though an informal classification distinguishes wetland and dryland forms, the latter with narrower phyllodes.
In 1921 Joseph Maiden described Acacia westonii from the northern and western slopes of Mount Jerrabomberra near Queanbeyan in New South Wales. He felt it was similar to, but distinct from, A. pycnantha and was uncertain whether it warranted species rank. His colleague Richard Hind Cambage grew seedlings and reported they had much longer internodes than those of A. pycnantha, and that the phyllodes appeared to have three nectaries rather than the single one of the latter species. It is now regarded as a synonym of A. pycnantha.
Common names recorded include golden wattle, green wattle, black wattle, and broad-leaved wattle. At Ebenezer Mission in the Wergaia country of north-western Victoria the Aboriginal people referred to it as witch.
Hybrids of the species are known in nature and cultivation. In the Whipstick forest near Bendigo in Victoria, putative hybrids with Whirrakee wattle (Acacia williamsonii) have been identified; these resemble hakea wattle (Acacia hakeoides). Garden hybrids with Queensland silver wattle (Acacia podalyriifolia) raised in Europe have been given the names Acacia x siebertiana and Acacia x deneufvillei.
Distribution and habitat
Golden wattle occurs in south-eastern Australia from South Australia's southern Eyre Peninsula and Flinders Ranges across Victoria and northwards into inland areas of southern New South Wales and the Australian Capital Territory. It is found in the understorey of open eucalypt forests on dry, shallow soils.
The species has become naturalised beyond its original range in Australia. In New South Wales it is especially prevalent around Sydney and the Central Coast region. In Tasmania it has spread in the east of the state and become weedy in bushland near Hobart. In Western Australia, it is found in the Darling Range and western wheatbelt as well as Esperance and Kalgoorlie.
Outside Australia it has become naturalised in Tanzania, Italy, Portugal, Sardinia, India, Indonesia, New Zealand, and South Africa, where it is considered an invasive alien plant and is uprooted to prevent water depletion and protect local flora. It is present in California as a garden escapee, but is not considered to be naturalised there. In South Africa, where it was introduced between 1858 and 1865 for dune stabilisation and tannin production, it has spread along waterways into forest, mountain and lowland fynbos, and borderline areas between fynbos and karoo. The gall-forming wasp Trichilogaster signiventris has been introduced there for biological control and has reduced the capacity of trees to reproduce throughout their range. The eggs are laid by adult wasps into buds of the flower heads in the summer; when they hatch in May and June, the larvae induce the formation of grape-like galls that prevent flower development. The galls can be so heavy that branches break under their weight. In addition, the introduction in 2001 of the acacia seed weevil Melanterius compactus has also proved effective.
Ecology
Though plants are usually killed by a severe fire, mature specimens are able to resprout. Seeds are able to persist in the soil for more than five years, germinating after fire.
Like other wattles, Acacia pycnantha fixes nitrogen from the atmosphere. It hosts bacteria known as rhizobia that form root nodules, where they make nitrogen available in organic form and thus help the plant grow in poor soils. A field study across Australia and South Africa found that the microbes are genetically diverse, belonging to various strains of the species Bradyrhizobium japonicum and genus Burkholderia in both countries. It is unclear whether the golden wattle was accompanied by the bacteria to the African continent or encountered new populations there.
Self-incompatible, A. pycnantha cannot fertilise itself and requires cross-pollination between plants to set seed. Birds facilitate this, and field experiments in which birds were excluded from flowers showed greatly reduced seed production. Nectaries are located on the phyllodes; those near open flowers become active, producing nectar that birds feed upon just before or during flowering. While feeding, birds brush against the flower heads, dislodging pollen, and they often visit multiple trees. Several species of honeyeater, including the white-naped, yellow-faced, New Holland, and occasionally white-plumed and crescent honeyeaters, as well as eastern spinebills, have been observed foraging. Other bird species include the silvereye and the striated, buff-rumped, and brown thornbills. As well as taking nectar, birds often pick insects off the foliage. Honeybees, native bees, ants, and flies also visit the nectaries, but generally do not come into contact with the flowers during this activity. The presence of A. pycnantha is positively correlated with numbers of swift parrots overwintering in box–ironbark forest in central Victoria, though it is not clear whether the parrots are feeding on the trees or some other factor is at play.
The wood serves as food for larvae of the jewel beetle species Agrilus assimilis, A. australasiae and A. hypoleucus. The larvae of a number of butterfly species feed on the foliage, including the fiery jewel, icilius blue, lithocroa blue and wattle blue. Trichilogaster wasps form galls in the flowerheads, disrupting seed set, and Acizzia acaciaepycnanthae, a psyllid, sucks sap from the leaves.
Acacia pycnantha is a host to rust fungus species in the genus Uromycladium that affect the phyllodes and branches. These include Uromycladium simplex that forms pustules and U. tepperianum that causes large swollen brown to black galls, which eventually lead to the death of the host plant. Two fungal species have been isolated from leaf spots on A. pycnantha: Seimatosporium arbuti, which is found on a wide range of plant hosts, and Monochaetia lutea.
Cultivation
Golden wattle is cultivated in Australia and was introduced to the northern hemisphere in the mid-1800s. Although it has a relatively short lifespan of 15 to 30 years, it is widely grown for its bright yellow, fragrant flowers. As well as being an ornamental plant, it has been used as a windbreak or in controlling erosion. Trees are sometimes planted with the taller sugar gum (Eucalyptus cladocalyx) to make a two-layered windbreak. One form widely cultivated was originally collected on Mount Arapiles in western Victoria. It is floriferous, with fragrant flowers appearing from April to July. The species has a degree of frost tolerance and is adaptable to a wide range of soil conditions, but it prefers good drainage. It tolerates heavy soils in dry climates, as well as mild soil salinity. It can suffer yellowing (chlorosis) in limestone-based (alkaline) soils. Highly drought-tolerant, it needs winter rainfall for cultivation. It is vulnerable to gall attack in cultivation. Propagation is from seed which has been pre-soaked in hot water to soften the hard seed coating.
Uses
Golden wattle has been grown in temperate regions around the world for the tannin in its bark, as it provides the highest yield of all wattles. Trees can be harvested for tannin from seven to ten years of age. Commercial use of its timber is limited by the small size of trees, but it has high value as a fuel wood. The scented flowers have been used for perfume making, and honey production in humid areas. However, the pollen is too dry to be collected by bees in dry climates. In southern Europe, it is one of several species grown for the cut-flower trade and sold as "mimosa". Like many other species of wattle, Acacia pycnantha exudes gum when stressed. Eaten by indigenous Australians, the gum has been investigated as a possible alternative to gum arabic, commonly used in the food industry.
In culture
Wattles, and in particular the golden wattle, have been an informal floral emblem of Australia for many years (for instance, it represented Australia on the Coronation gown of Queen Elizabeth II in 1953). While some advocates forcefully argued for the adoption of the waratah, during Australia's bicentenary in 1988 the golden wattle was formally adopted as the floral emblem of Australia. This was proclaimed by governor-general Sir Ninian Stephen (on the advice of the Hawke government) in the Commonwealth gazette published on 1 September. The day was marked by a ceremony at the Australian National Botanic Gardens, which included the planting of a golden wattle by Hazel Hawke, the prime minister's wife. In 1992, 1 September was formally declared "National Wattle Day".
The Australian Coat of Arms includes a wreath of wattle; this does not, however, accurately represent a golden wattle. Similarly, the green and gold colours used by Australian international sporting teams were inspired by the colours of wattles in general, rather than the golden wattle specifically.
The species was depicted on a stamp captioned "wattle" as part of a 1959–60 Australian stamp set featuring Australian native flowers. In 1970, a 5c stamp labelled "Golden Wattle" was issued to complement an earlier set depicting the floral emblems of Australia. To mark Australia Day in 1990, a 41c stamp labelled "Acacia pycnantha" was issued. Another stamp labelled "Golden Wattle", with a value of 70c, was issued in 2014.
Clare Waight Keller included golden wattles to represent Australia in Meghan Markle's wedding veil, which included the distinctive flora of each Commonwealth country.
The 1970 Monty Python's Flying Circus Bruces sketch includes a reference, by one of the stereotyped Australian characters, to "the wattle" as being "the emblem of our land", with suggested methods of display, including "stick[ing] it in a bottle or hold[ing] it in your hand" — despite the wattle prop itself being a large, forked branch with sparse patches of leaves and generic yellow flowers.
| Biology and health sciences | Fabales | Plants |
2199066 | https://en.wikipedia.org/wiki/Paraceratherium | Paraceratherium | Paraceratherium is an extinct genus of hornless rhinocerotoids belonging to the family Paraceratheriidae. It is one of the largest terrestrial mammals that has ever existed and lived from the early to late Oligocene epoch (34–23 million years ago). The first fossils were discovered in what is now Pakistan, and remains have been found across Eurasia between China and the Balkans. Paraceratherium means "near the hornless beast", in reference to Aceratherium, the genus in which the type species P. bugtiense was originally placed.
The exact size of Paraceratherium is unknown because of the incompleteness of the fossils. The shoulder height was about , and the length about . Its weight is estimated to have been about . The long neck supported a skull that was about long. It had large, tusk-like incisors and a nasal incision that suggests it had a prehensile upper lip or proboscis (trunk). The legs were long and pillar-like. The lifestyle of Paraceratherium may have been similar to that of modern large mammals such as the elephants and extant rhinoceroses. Because of its size, it would have had few predators and a long gestation period. It was a browser, eating mainly leaves, soft plants, and shrubs. It lived in habitats ranging from arid deserts with a few scattered trees to subtropical forests. The reasons for the animal's extinction are unknown, but various factors have been proposed.
The taxonomy of the genus and the species within has a long and complicated history. Other genera of Oligocene indricotheres, such as Baluchitherium, Indricotherium, and Pristinotherium, have been named, but no complete specimens exist, making comparison and classification difficult. Most modern scientists consider these genera to be junior synonyms of Paraceratherium, which is thought to contain the following species: P. bugtiense, P. transouralicum, P. huangheense, and P. linxiaense. The most completely known species is P. transouralicum, so most reconstructions of the genus are based on it. Differences between P. bugtiense and P. transouralicum may be due to sexual dimorphism, which would make them the same species.
Taxonomy
The taxonomic history of Paraceratherium is complex due to the fragmentary nature of the known fossils and because Western, Soviet, and Chinese scientists worked in isolation from each other for much of the 20th century and published research mainly in their respective languages. Scientists from different parts of the world tried to compare their finds to get a more complete picture of these animals, but were hindered by politics and wars. The opposing taxonomic tendencies of "lumping and splitting" have also contributed to the problem. Inaccurate geological dating previously led scientists to believe various geological formations that are now known to be contemporaneous were of different ages. Many genera were named on the basis of subtle differences in molar tooth characteristics, features that vary within populations of other rhinoceros taxa, and are therefore not accepted by most scientists for distinguishing species.
Early discoveries of indricotheres were made through various colonial links to Asia. The first known indricothere fossils were collected from Balochistan (in modern-day Pakistan) in 1846 by a soldier named Vickary, but these fragments were unidentifiable at the time. The first fossils now recognised as Paraceratherium were discovered by the British geologist Guy Ellcock Pilgrim in Balochistan in 1907–1908. His material consisted of an upper jaw, lower teeth, and the back of a jaw. The fossils were collected in the Chitarwata Formation of Dera Bugti, where Pilgrim had previously been exploring. In 1908, he used the fossils as basis for a new species of the extinct rhinoceros genus Aceratherium; A. bugtiense. Aceratherium was by then a wastebasket taxon; it included several unrelated species of hornless rhinoceros, many of which have since been moved to other genera. Fossil incisors that Pilgrim had previously assigned to the unrelated genus Bugtitherium were later shown to belong to the new species.
In 1910, more partial fossils were discovered in Dera Bugti during an expedition by the British palaeontologist Clive Forster-Cooper. Based on these remains, Foster-Cooper moved A. bugtiense to the new genus Paraceratherium, meaning "near the hornless beast", in reference to Aceratherium. His rationale for this reclassification was the species' distinctly down-turned lower tusks. In 1913, Forster-Cooper named a new genus and species, Thaumastotherium ("wonderful beast") osborni, based on larger fossils from the same excavations (some of which he had earlier suggested to belong to male P. bugtiense), but he renamed the genus Baluchitherium later that year because the former name was preoccupied, having already been used for a hemipteran insect. The fossils of Baluchitherium were so fragmentary that Foster-Cooper was only able to identify it as a kind of odd-toed ungulate, but he mentioned the possibility of confusion with Paraceratherium. The American palaeontologist Henry Fairfield Osborn, after whom B. osborni was named, suggested it may have been a titanothere.
A Russian Academy of Sciences expedition later found fossils in the Aral Formation near the Aral Sea in Kazakhstan; it was the most complete indricothere skeleton known, but it lacked the skull. It is mounted in the Moscow Paleontological Museum. In 1916, based on these remains, Aleksei Alekseeivich Borissiak erected the genus Indricotherium named for a mythological monster, the "Indrik beast". He did not assign a species name, I. asiaticum, until 1923, but the Russian palaeontologist Maria Pavlova had already named it I. transouralicum in 1922. Also in 1923, Borissiak created the subfamily Indricotheriinae to include the various related forms known by then.
In 1922, the American explorer Roy Chapman Andrews led a well-documented expedition to China and Mongolia sponsored by the American Museum of Natural History. Various indricothere remains were found in formations of the Mongolian Gobi Desert, including the legs of a specimen standing in an upright position, indicating that it had died while trapped in quicksand, as well as a very complete skull. These remains became the basis of Baluchitherium grangeri, named by Osborn in 1923.
In 2017, a new species, P. huangheense, was named by the Chinese palaeontologist Yong-Xiang Li and colleagues based on jaw elements from the Hanjiajing Formation in the Gansu Province of China; the name refers to the nearby Huanghe River. In 2021, the Chinese palaeontologist Tao Deng and colleagues described the new species P. linxiaense, based on a complete skull with an associated mandible and atlas bone from the Jiaozigou Formation of the Linxia Basin (to which the name refers) of northwestern China. A multitude of other species and genus names, mostly based on differences in size, snout shape, and front tooth arrangement, have been coined for various indricothere remains. Fossils attributable to Paraceratherium continue to be discovered across Eurasia, but the political situation in Pakistan has become too unstable for further excavations to occur there.
Species and synonyms
In 1922 Forster-Cooper named the new species Metamynodon bugtiensis based on a palate and other fragments from Dera Bugti, thought to belong to a giant member of that genus. These fossils are now thought to have belonged to an aberrant Paraceratherium bugtiense specimen that lacked the M3 molar. In 1936, the American palaeontologists Walter Granger and William K. Gregory proposed that Forster-Cooper's Baluchitherium osborni was likely a junior synonym (an invalid name for the same taxon) of Paraceratherium bugtiense, because these specimens were collected at the same locality and were possibly part of the same morphologically variable species. The American palaeontologist William Diller Matthew and Forster-Cooper himself had expressed similar doubts a few years earlier. Although it had already been declared a junior synonym, the genus name Baluchitherium remained popular in various media because of the publicity surrounding Osborn's B. grangeri.
In 1989, the American palaeontologists Spencer G. Lucas and Jay C. Sobus published a revision of indricothere taxa, which was subsequently followed by western scientists. They concluded that Paraceratherium, as the oldest name, was the only valid indricothere genus from the Oligocene, and contained four valid species: P. bugtiense, P. transouralicum (originally in Indricotherium), P. prohorovi (originally in Aralotherium), and P. orgosensis (originally in Dzungariotherium). They considered most other names to be junior synonyms of those taxa, or as dubious names based on remains too fragmentary to identify properly. By analysing alleged differences between named genera and species, Lucas and Sobus found that these most likely represented variation within populations, and that most features were indistinguishable between specimens, as had been pointed out in the 1930s. The fact that the single skull assigned to P. transouralicum or Indricotherium was domed, while others were flat at the top, was attributed to sexual dimorphism; it is possible that P. bugtiense fossils represent the female, while P. transouralicum represents the male of the same species.
According to Lucas and Sobus, the type species P. bugtiense from the late Oligocene of Pakistan included junior synonyms such as B. osborni and P. zhajremensis. P. transouralicum from the late Oligocene of Kazakhstan, Mongolia, and northern China included B. grangeri and I. minus. By this scheme, P. orgosensis from the middle and late Oligocene of northwest China included D. turfanensis and P. lipidus. In 2013, the American palaeontologist Donald Prothero suggested that P. orgosensis may be distinct enough to warrant its original genus name Dzungariotherium, though its exact position requires evaluation. P. prohorovi from the late Oligocene of Kazakhstan may be too incomplete for its position to be resolved in relation to the other species; the same applies to proposed species such as I. intermedium and P. tienshanensis, as well as the genus Benaratherium. Though the genus name Indricotherium is now a junior synonym of Paraceratherium, the subfamily name Indricotheriinae is still in use because genus name synonymy does not affect the names of higher level taxa that are derived from these. Members of the subfamily are therefore still commonly referred to as indricotheres.
In contrast to the revision by Lucas and Sobus, a 2003 paper by Chinese palaeontologist Jie Ye and colleagues suggested that Indricotherium and Dzungariotherium were valid genera, and that P. prohorovi did not belong in Paraceratherium. They also recognised the validity of species such as P. lipidus, P. tienshanensis, and P. sui. A 2004 paper by Deng and colleagues also recognised three distinct genera. Some western writers have similarly used names otherwise considered invalid since the 1989 revision, but without providing detailed analysis and justification. Deng and colleagues recognised six Paraceratherium species in 2021, including some that had previously been declared synonyms, P. grangeri, P. asiaticum, and P. lepidum, while keeping Indricotherium and Baluchitherium as synonyms of the genus.
Evolution
The superfamily Rhinocerotoidea, which includes modern rhinoceroses, can be traced back to the early Eocene, about 50 million years ago, with early precursors such as Hyrachyus. Rhinocerotoidea contains three families: Amynodontidae, Rhinocerotidae ("true rhinoceroses"), and Hyracodontidae. The diversity within the rhinocerotoid group was much larger in prehistoric times; they ranged from dog-sized to the size of Paraceratherium. There were long-legged, cursorial forms adapted for running and squat, semiaquatic forms. Most species did not have horns. Rhinoceros fossils are identified as such mainly by characteristics of their teeth, which are the parts of the animal most likely to be preserved. The upper molars of most rhinoceroses have a pi-shaped (π) pattern on the crown, and each lower molar has paired L-shapes. Various skull features are also used for identification of fossil rhinoceroses.
The subfamily Indricotheriinae, to which Paraceratherium belongs, was first classified as part of the family Hyracodontidae by the American palaeontologist Leonard B. Radinsky in 1966. Previously, they had been regarded as a subfamily within Rhinocerotidae, or even a full family, Indricotheriidae. In a 1999 cladistic study of tapiromorphs, the American palaeontologist Luke Holbrook found indricotheres to be outside the hyracodontid clade, and wrote that they may not be a monophyletic (natural) grouping. Radinsky's scheme is the prevalent hypothesis today. The hyracodont family contains long-legged members adapted to running, such as Hyracodon, and were distinguished by incisor characteristics. Indricotheres are distinguished from other hyracodonts by their larger size and the derived structure of their snouts, incisors and canines. The earliest known indricothere is the dog-sized Forstercooperia from the middle and late Eocene of western North America and Asia. The cow-sized Juxia is known from the middle Eocene; by the late Eocene the genus Urtinotherium of Asia had almost reached the size of Paraceratherium. Paraceratherium itself lived in Eurasia during the Oligocene epoch, 34 to 23 million years ago. The genus is distinguished from other indricotheres by its large size, nasal incision that would have supported a muscular snout, and its down-turned premaxillae. It had also lost the second and third lower incisors, lower canines, and lower first premolars.
The cladogram below follows the 1989 analysis of Indricotheriinae by Lucas and Sobus, and shows the closest relatives of Paraceratherium:
Lucas and colleagues had reached similar conclusions in a previous 1981 analysis of Forstercooperia, wherein they still retained Paraceratherium and Indricotherium as separate genera. In 2016, the Chinese researchers Haibing Wang and colleagues used the name Paraceratheriidae for the family and Paraceratheriinae for the subfamily, and placed them outside of Hyracodontidae. Deng and colleagues confirmed previous studies with their 2021 analysis, suggesting that Juxia evolved from a clade consisting of Forstercooperia and Pappaceras 40 million years ago, with the resulting stock evolving into Urtinotherium in the late Eocene and Paraceratherium in the Oligocene. These researchers did not find Hyracodontidae to form a natural group, and found Paraceratheriidae to be closer to Rhinocerotidae, unlike previous studies.
Description
Paraceratherium is one of the largest land mammals known to have existed, but its precise size is unclear because of the lack of complete specimens. Its total body length was estimated as from front to back by Granger and Gregory in 1936, and by the palaeontologist Vera Gromova in 1959, but the former estimate is now considered exaggerated. The weight of Paraceratherium was similar to that of some extinct proboscideans, with the largest complete skeleton known belonging to the steppe mammoth (Mammuthus trogontherii). Despite its roughly equivalent mass, Paraceratherium might have been taller than any proboscidean. Its height at the shoulder was estimated as by Granger and Gregory, but as by the palaeontologist Gregory S. Paul in 1997. The neck was estimated at long by the palaeontologists Michael P. Taylor and Mathew J. Wedel in 2013.
Early estimates of are now considered exaggerated; it may have been in the range of at maximum, and as low as on average. Calculations have mainly been based on fossils of P. transouralicum because this species is known from the most complete remains. Estimates have been based on skull, teeth, and limb bone measurements, but the known bone elements are represented by individuals of different sizes, so all skeletal reconstructions are composite extrapolations, resulting in several weight ranges.
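The arithmetic behind such composite extrapolations can be sketched with a generic allometric power law, mass = a·C^b, of the kind fitted to limb-bone measurements of living animals. The sketch below is illustrative only: the coefficients and circumferences are hypothetical placeholders, not values taken from Granger and Gregory, Gromova, or any later study.

# Illustrative allometric mass estimation from limb-bone circumferences.
# Coefficients and measurements are hypothetical placeholders, not values
# from the studies cited in the text.

def allometric_mass(circumference_cm: float, a: float, b: float) -> float:
    """Estimated body mass (kg) from one bone circumference (cm)."""
    return a * circumference_cm ** b

coeffs = {"humerus": (0.078, 2.73), "femur": (0.078, 2.73)}  # assumed (a, b)
bones = {"humerus": 90.0, "femur": 82.0}  # assumed circumferences, cm

# The bones sample individuals of different sizes, so per-bone estimates
# disagree; reporting a range mirrors the published practice.
estimates = {k: allometric_mass(c, *coeffs[k]) for k, c in bones.items()}
lo, hi = min(estimates.values()), max(estimates.values())
print(f"composite estimate: {lo / 1000:.1f}-{hi / 1000:.1f} tonnes")

Because each element samples a different individual, the output is a range rather than a single figure, which is why several weight ranges coexist in the literature.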
There are no indications of the colour and skin texture of the animal because no skin impressions or mummies are known. Most life restorations show the creature's skin as thick, folded, grey, and hairless, based on modern rhinoceroses. Because hair retains body heat, modern large mammals such as elephants and rhinoceroses are largely hairless. Prothero has proposed that, contrary to most depictions, Paraceratherium had large elephant-like ears that it used for thermoregulation. The ears of elephants enlarge the body's surface area and are filled with blood vessels, making the dissipation of excess heat easier. According to Prothero, this would have been true for Paraceratherium, as indicated by the robust bones around the ear openings. The palaeontologists Pierre-Olivier Antoine and Darren Naish have expressed scepticism towards this idea.
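The scaling argument behind the hair and ear observations can be stated in one line. Under the crude assumption that metabolic heat production grows with body volume while dissipation grows with skin area, geometric similarity for a body of linear size L and mass M gives

\[ \frac{\text{heat produced}}{\text{heat dissipated}} \propto \frac{L^{3}}{L^{2}} \propto M^{1/3}, \]

so the heat burden per unit of radiating skin rises with body mass; very large mammals therefore benefit from bare skin, and elephant-like ears would add radiator area on top of that.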
Due to the fragmentary nature of known Paraceratherium fossils, the skeleton of the animal has been reconstructed in several different ways since its discovery. In 1923, Matthew supervised an artist to draw a reconstruction of the skeleton based on the even less complete P. transouralicum specimens known by then, using the proportions of a modern rhinoceros as a guide. The result was too squat and compact, and Osborn had a more slender version drawn later the same year. Some later life restorations have made the animal too slender, with little regard to the underlying skeleton. Gromova published a more complete skeletal reconstruction in 1959, based on the P. transouralicum skeleton from the Aral Formation, but this also lacked several neck vertebrae.
Skull
The largest skulls of Paraceratherium are around long, at the back of the skull, and wide across the zygomatic arches. Paraceratherium had a long forehead, which was smooth and lacked the roughened area that serves as attachment point for the horns of other rhinoceroses. The bones above the nasal region are long and the nasal incision goes far into the skull. This indicates that Paraceratherium had a prehensile upper lip similar to that of the black rhinoceros and the Indian rhinoceros, or a short proboscis (trunk) as in tapirs. A distinguishing feature was that the nasal incision was retracted to the level of the P2-P3 premolars.
The back of the skull was low and narrow, without the large lambdoid crests at the top and along the sagittal crest, which are otherwise found in horned and tusked animals that need strong muscles to push and fight. It also had a deep pit for the attachment of nuchal ligaments, which hold up the skull automatically. The occipital condyle was very wide and Paraceratherium appears to have had large, strong neck muscles, which allowed it to sweep its head strongly downwards while foraging from branches. The upper profile of the skull was arched, a distinguishing feature of the genus. One skull of P. transouralicum has a domed forehead, whereas others have flat foreheads, possibly because of sexual dimorphism. A brain endocast of P. transouralicum shows it was only 8 percent of the skull length, while the brain of the Indian rhinoceros is 17.7 percent of its skull length.
The species of Paraceratherium are mainly discernible through skull characteristics. P. bugtiense had features such as relatively slender maxillae and premaxillae, shallow skull roofs, mastoid-paroccipital processes that were relatively thin and placed back on the skull, a lambdoid crest that extended less far back, and an occipital condyle with a horizontal orientation, which it shared with Dzungariotherium. P. transouralicum had robust maxillae and premaxillae, upturned zygomata, domed frontal bones, thick mastoid-paroccipital processes, a lambdoid crest that extended further back, and occipital condyles with a vertical orientation. P. huangheense differed from P. bugtiense only in the anatomy of the rear portion of the jaw, as well as its larger size. P. linxiaense differed from the other species in its deeper nasal notch, with the bottom placed above the middle of molar M2; a proportionally higher occipital condyle relative to the height of the occipital surface; short muzzle bones and a short diastema in front of the cheek teeth; a high zygomatic arch with a prominent hind end; and a smaller upper incisor I1.
Unlike those of most primitive rhinocerotoids, the front teeth of Paraceratherium were reduced to a single pair of incisors in either jaw, which were large and conical, and have been described as tusks. The upper incisors pointed downwards; the lower ones were shorter and pointed forwards. Among known rhinoceroses, this arrangement is unique to Paraceratherium and the related Urtinotherium. The incisors may have been larger in males. The canine teeth otherwise found behind the incisors were lost. The incisors were separated from the row of cheek teeth by a large diastema (gap). This feature is found in mammals where the incisors and cheek teeth have different specialisations. The upper molars, except for the third upper molar that was V-shaped, had a pi-shaped (π) pattern and a reduced metastyle. The premolars only partially formed the pi pattern. Each molar was the size of a human fist; among mammals they were only exceeded in size by proboscideans, though they were small relative to the size of the skull. The lower cheek teeth were L-shaped, which is typical of rhinoceroses.
Postcranial skeleton
No complete set of vertebrae and ribs of Paraceratherium has yet been found and the tail is completely unknown. The atlas and axis vertebrae of the neck were wider than in most modern rhinoceroses, with space for strong ligaments and muscles that would be needed to hold up the large head. The rest of the vertebrae were also very wide, and had large zygapophyses with much room for muscles, tendons, ligaments, and nerves, to support the head, neck, and spine. The neural spines were long and formed a long "hump" along the back, where neck muscles and nuchal ligaments for holding up the skull were attached. The ribs were similar to those of modern rhinoceroses, but the ribcage would have looked smaller in proportion to the long legs and large body, because modern rhinoceroses are comparatively short-limbed. The last vertebra of the lower back was fused to the sacrum, a feature found in advanced rhinoceroses. Like sauropod dinosaurs, Paraceratherium had pleurocoel-like openings (hollow parts of the bone) in its pre-sacral vertebrae, which probably helped to lighten the skeleton.
The limbs were large and robust to support the animal's large weight, and were in some ways similar to and convergent with those of elephants and sauropod dinosaurs with their likewise graviportal (heavy and slow moving) builds. Unlike such animals, which tend to lengthen the upper limb bones while shortening, fusing and compressing the lower limb, hand, and foot bones, Paraceratherium had short upper limb bones and long hand and foot bones (except for the disc-shaped phalanges), similar to the running rhinoceroses from which it descended. Some foot bones were almost long. The thigh bones typically measured , a size only exceeded by those of some elephants and dinosaurs. The thigh bones were pillar-like and much thicker and more robust than those of other rhinoceroses, and the three trochanters on the sides were much reduced, as this robustness diminished their importance. The limbs were held in a column-like posture instead of bent, as in smaller animals, which reduced the need for large limb muscles. The front limbs had three toes.
Palaeobiology
The zoologist Robert M. Alexander suggested in 1988 that overheating may have been a serious problem in Paraceratherium due to its size. According to Prothero, the best living analogues for Paraceratherium may be large mammals such as elephants, rhinoceroses and hippopotamuses. To aid in thermoregulation, these animals cool down during the day by resting in the shade or by wallowing in water and mud. They also forage and move mainly at night. Because of its large size, Paraceratherium would not have been able to run and move quickly, but it would have been able to cross large distances, which would be necessary in an environment with a scarcity of food. It may therefore have had a large home range and been migratory. Prothero suggests that animals as big as indricotheres would need very large home ranges or territories of at least and that, because of a scarcity of resources, there would have been little room in Asia for many populations or a multitude of nearly identical species and genera. This principle is called competitive exclusion; it is used to explain how the black rhinoceros (a browser) and white rhinoceros (a grazer) exploit different niches in the same areas of Africa.
Most terrestrial predators in their habitat were no bigger than a modern wolf and were not a threat to Paraceratherium. Adult individuals would have been too large for any land predator to attack, but the young would have been vulnerable. Bite marks on bones from the Bugti beds indicate that even adults may have been preyed on by -long crocodiles, Astorgosuchus. As in elephants, the gestation period of Paraceratherium may have been lengthy and individuals may have had long lifespans. Paraceratherium may have lived in small herds, perhaps consisting of females and their calves, which they protected from predators. It has been proposed that may be the maximum weight possible for land mammals, and Paraceratherium was close to this limit. The reasons mammals cannot reach the much larger size of sauropod dinosaurs are unknown; the cause may be ecological rather than biomechanical, and perhaps related to reproduction strategies. Movement, sound, and other behaviours seen in CGI documentaries such as Walking with Beasts are entirely conjectural.
Diet and feeding
The simple, low-crowned teeth indicate that Paraceratherium was a browser with a diet consisting of relatively soft leaves and shrubs. Later rhinoceroses were grazers, with high-crowned teeth because their diets contained grit that quickly wore down their teeth. Studies of mesowear on Paraceratherium teeth confirm the creatures had a diet of soft leaves; microwear studies have yet to be conducted. Isotope analysis shows that Paraceratherium fed chiefly on C3 plants, which are mainly leaves. Like its perissodactyl relatives, the horses, tapirs, and other rhinoceroses, Paraceratherium would have been a hindgut fermenter; it would extract relatively little nutrition from its food and would have to eat large volumes to survive. Like other large herbivores, Paraceratherium would have had a large digestive tract.
Granger and Gregory argued that the large incisors were used for defence or for loosening shrubs by moving the neck downwards, thereby acting as picks and levers. Tapirs use their proboscis to wrap around branches while stripping off bark with the front teeth; this ability would have been helpful to Paraceratherium. Some Russian authors suggested that the tusks were probably used for breaking twigs, stripping bark and bending high branches and that, because species from the early Oligocene had larger tusks than later ones, they probably had a diet based more on bark than on leaves. Since the species involved are now known to have been contemporaneous, and the differences in tusks are now thought to be sexually dimorphic, the latter idea is not accepted today. Herds of Paraceratherium may have migrated while continuously foraging from tall trees, which smaller mammals could not reach. Osborn suggested that its mode of foraging would have been similar to that of the high-browsing giraffe and okapi, rather than to that of modern rhinoceroses, whose heads are carried close to the ground.
Distribution and habitat
Remains assignable to Paraceratherium have been found in early to late Oligocene (34–23 million years ago) formations across Eurasia, in modern-day China, Mongolia, India, Pakistan, Kazakhstan, Georgia, Turkey, Romania, Bulgaria, and the Balkans. Their distribution may be correlated with the palaeogeographic development of the Alpine-Himalayan mountain belt. The range of Paraceratherium finds implies that the animals inhabited a continuous landmass with a similar environment throughout, yet palaeogeographic maps show that this area was interrupted by various marine barriers, so the genus evidently achieved its wide distribution in spite of them. The fauna which coexisted with Paraceratherium included other rhinocerotoids, artiodactyls, rodents, amphicyonids, mustelids, hyaenodonts, nimravids and felids.
The habitat of Paraceratherium appears to have varied across its range, based on the types of geological formations it has been found in. The Hsanda Gol Formation of Mongolia represents an arid desert basin, and the environment is thought to have had few tall trees and limited brush cover, as the fauna consisted mainly of animals that fed from tree tops or close to the ground. A study of fossil pollen showed that much of China was woody shrubland, with plants such as saltbush, Mormon tea (Ephedra), and nitre bush (Nitraria), all adapted to arid environments. Trees were rare, and concentrated near groundwater. The parts of China where Paraceratherium lived had dry lakes and abundant sand dunes, and the most common plant fossils are leaves of the desert-adapted Palibinia. Trees in Mongolia and China included birch, elm, oaks, and other deciduous trees, while Siberia and Kazakhstan also had walnut trees. Dera Bugti in Pakistan had dry, temperate to subtropical forest.
Deng and colleagues speculated about the palaeobiogeography of Paraceratherium based on their phylogenetic analysis in 2021. They found that P. bugtiense was the only species of the genus represented in the Oligocene of western Pakistan, while the genus was highly diversified across the Mongolian Plateau, northwestern China, and Kazakhstan north to the Tibetan Plateau. They hypothesised that P. asiaticum dispersed westward to Kazakhstan during the early Oligocene from the ancestral area of Mongolia, where the most primitive member of the genus, P. grangeri, lived, and descendants may have continued to South Asia as P. bugtiense, dispersing through the Tibetan region. P. lepidum existed in Xinjiang and Kazakhstan and P. linxiaense in Linxia during the late Oligocene, and it is possible that these sister species of P. bugtiense had been able to migrate back north to Central Asia during this time when that area had become tropical (it was arid during the early Oligocene). This implies the Tibetan region was not yet a high-elevation plateau that could act as a barrier, and large animals may therefore have been able to move freely along the eastern coast of the Tethys sea, and through lowlands in the area, some of which were possibly under in elevation at the time.
Extinction
The reasons Paraceratherium and its relatives became extinct after surviving for about 11 million years are unknown, but it is unlikely that there was a single cause. Proposed explanations include their large size (linked to the now-outdated concept of inadaptive evolution), climate change, vegetational change, and a low reproduction rate. Prothero and the zoologist Pavel V. Putshkov have considered these causes unlikely, since these animals managed to survive for millions of years under the harsh conditions of their environment regardless of these issues, and were not much larger than the biggest proboscideans, extinct as well as extant, which faced similar challenges.
Putshkov and Andrzej H. Kulczicki instead suggested in 1995 and 2001 that invading gomphothere proboscideans from Africa in the late Oligocene (between 28 and 23 million years ago) may have considerably changed the habitats they entered, like African elephants do today. This would have made food scarcer for Paraceratherium, and as their numbers dwindled, they would have become more vulnerable to other threats. Prothero has pointed out that gomphotheres are not known to have generally coexisted with paraceratheres, and there are no known co-occurrences between paraceratheres and the large deinotheres, which would have been their most likely competitors. While cautioning that the true cause of their extinction will never be known for certain, Prothero found it to be more than a coincidence that paraceratheres disappeared just as large predators and other large herbivores entered Asia during the early Miocene (between 23 and 16 million years ago).
| Biology and health sciences | Perissodactyla | Animals |
2200368 | https://en.wikipedia.org/wiki/Geology%20of%20Venus | Geology of Venus | The geology of Venus is the scientific study of the surface, crust, and interior of the planet Venus. Within the Solar System, it is the planet nearest to Earth and the most like it in terms of mass, but it has no magnetic field or recognizable plate tectonic system. Much of the ground surface is exposed volcanic bedrock, some with thin and patchy layers of soil covering, in marked contrast with Earth, the Moon, and Mars. Some impact craters are present, but, like Earth, Venus has fewer craters than the other rocky planets, whose surfaces are largely covered by them. This is due in part to the thickness of the Venusian atmosphere disrupting small impactors before they strike the ground, but the paucity of large craters may be due to volcanic re-surfacing, possibly of a catastrophic nature. Volcanism appears to be the dominant agent of geological change on Venus. Some of the volcanic landforms appear to be unique to the planet. There are shield and composite volcanoes similar to those found on Earth, although these volcanoes are significantly shorter than those found on Earth or Mars. Given that Venus has approximately the same size, density, and composition as Earth, it is plausible that volcanism may be continuing on the planet today, as demonstrated by recent studies.
Most of the Venusian surface is relatively flat; it is divided into three topographic units: lowlands, highlands, and plains. In the early days of radar observation the highlands drew comparisons to the continents of Earth, but modern research has shown that this is superficial and the absence of plate tectonics makes this comparison misleading. Tectonic features are present to a limited extent, including linear "deformation belts" composed of folds and faults. These may be caused by mantle convection. Many of the tectonic features such as tesserae (large regions of highly deformed terrain, folded and fractured in two or three dimensions), and arachnoids (those features resembling a spider's web) are associated with volcanism.
Eolian landforms are not widespread on the planet's surface, but there is considerable evidence the planet's atmosphere causes the chemical weathering of rock, especially at high elevations. The planet is remarkably dry, with only a chemical trace of water vapor (20 ppm) in the Venusian atmosphere. No landforms indicative of past water or ice are visible in radar images of the surface. The atmosphere shows isotopic evidence of having been stripped of volatile elements by off-gassing and solar wind erosion over time, implying the possibility that Venus may have had liquid water at some point in the distant past; no direct evidence for this has been found. Much speculation about the geological history of Venus continues today.
The surface of Venus is not easily accessible because of the extremely thick atmosphere (with a surface pressure some 90 times that of Earth) and the high surface temperature. Much of what is known about it stems from orbital radar observations, because the surface is permanently obscured at visible wavelengths by cloud cover. In addition, a number of landers have returned data from the surface, including images.
Studies reported in October 2023 suggest for the first time that Venus may have had plate tectonics during ancient times and, as a result, may have had a more habitable environment, possibly once capable of harboring life forms.
Topography
The surface of Venus is comparatively flat. When 93% of the topography was mapped by Pioneer Venus Orbiter, scientists found that the total distance from the lowest point to the highest point on the entire surface was about , about the same as the vertical distance between the Earth's ocean floor and the higher summits of the Himalayas. This similarity is to be expected, as the maximum attainable elevation contrasts on a planet are largely dictated by the strength of the planet's gravity and the mechanical strength of its lithosphere, both of which are similar for Earth and Venus.
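A rough order-of-magnitude version of this argument: the maximum relief a crust can support scales with the stress its rocks can bear divided by the weight of a rock column,

\[ h_{\max} \sim \frac{\sigma}{\rho g}, \qquad \frac{h_{\max}^{\mathrm{Venus}}}{h_{\max}^{\mathrm{Earth}}} \approx \frac{g_{\mathrm{Earth}}}{g_{\mathrm{Venus}}} \approx \frac{9.81}{8.87} \approx 1.1, \]

assuming comparable rock strength \( \sigma \) and crustal density \( \rho \) on the two planets. Since surface gravity differs by only about 10%, similar total relief is expected.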
According to data from the Pioneer Venus Orbiter altimeters, nearly 51% of the surface is located within of the median radius of ; only 2% of the surface is located at elevations greater than from the median radius.
The altimetry experiment of Magellan confirmed the general character of the landscape. According to the Magellan data, 80% of the topography is within of the median radius. The most important elevations are in the mountain chains that surround Lakshmi Planum: Maxwell Montes (11 km, 6.8 mi), Akna Montes (7 km, 4.3 mi) and Freya Montes (7 km, 4.3 mi). Despite the relatively flat landscape of Venus, the altimetry data also found large inclined plains. Such is the case on the southwest side of Maxwell Montes, which in some parts seems to be inclined some 45°. Inclinations of 30° were registered in Danu Montes and Themis Regio.
About 75% of the surface is composed of bare rock.
Based on altimeter data from the Pioneer Venus Orbiter probe, supported by Magellan data, the topography of the planet is divided into three provinces: lowlands, deposition plains, and highlands.
Highlands
This unit covers about 10% of the planet's surface, with elevations greater than . The largest provinces of the highlands are Aphrodite Terra, Ishtar Terra, and Lada Terra, as well as the regions Beta Regio, Phoebe Regio and Themis Regio. The regions Alpha Regio, Bell Regio, Eistla Regio and Tholus Regio are smaller regions of highlands.
Some of the terrain in these areas is particularly efficient at reflecting radar signals. This is possibly analogous to snow lines on Earth and is likely related to temperatures and pressures there being lower than in the other provinces due to the higher elevation, which allows for distinct mineralogy to occur. It is thought that high-elevation rock formations may contain or be coated by minerals that have high dielectric constants. The high dielectric minerals would be stable at the ambient temperatures in the highlands, but not on the plains that comprise the rest of the planet's surface. Pyrite, an iron sulfide, matches these criteria and is widely suspected as a possible cause; it would be produced by chemical weathering of the volcanic highlands after long-term exposure to the sulfur-bearing Venusian atmosphere. The presence of pyrite on Venus has been contested, with atmospheric modeling showing that it might not be stable under Venusian atmospheric conditions. Other hypotheses have been put forward to explain the higher radar reflectivity in the highlands, including the presence of a ferroelectric material whose dielectric constant changes with temperature (with Venus having a changing temperature gradient with elevation). It has been observed that the character of the radar-bright highlands is not consistent across the surface of Venus. For example, Maxwell Montes shows the sharp, snow line-like change in reflectivity that is consistent with a change in mineralogy, whereas Ovda Regio shows a more gradual brightening upwards trend. The brightening upwards trend on Ovda Regio is consistent with a ferroelectric signature, and has been suggested to indicate the presence of chlorapatite.
Deposition plains
Deposition plains have elevations averaging 0 to 2 km and cover more than half of the planet's surface.
Lowlands
The rest of the surface is lowlands and generally lies below zero elevation. Radar reflectivity data suggest that at a centimeter scale these areas are smooth, as a result of gradation (accumulation of fine material eroded from the highlands).
Surface observations
Ten spacecraft have successfully landed on Venus and returned data; all were flown by the Soviet Union. Venera 9, 10, 13, and 14 had cameras and returned images of soil and rock. Spectrophotometry results showed that these four missions kicked up dust clouds on landing, which means that some of the dust particles must be smaller than about 0.02 mm. The rocks at all four sites showed fine layers; some layers were more reflective than others. Experiments on rocks at the Venera 13 and 14 sites found that they were porous and easily crushed (bearing maximum loads of 0.3 to 1 MPa); these rocks may be weakly lithified sediments or volcanic tuff. Spectrometry found that the surface materials at the Venera 9, 10, and 14 and Vega 1 and 2 landing sites had chemical compositions similar to tholeiitic basalts, while the Venera 8 and 13 sites chemically resembled alkaline basalts.
Impact craters and age estimates of the surface
Earth-based radar surveys made it possible to identify some topographic patterns related to craters, and the Venera 15 and Venera 16 probes identified almost 150 such features of probable impact origin. Global coverage from Magellan subsequently made it possible to identify nearly 900 impact craters.
Compared to Mercury, the Moon and other such bodies, Venus has very few craters. In part, this is because Venus's dense atmosphere burns up smaller meteorites before they hit the surface. The Venera and Magellan data are in agreement: there are very few impact craters with a diameter less than , and data from Magellan show an absence of any craters less than in diameter. The small craters are irregular and appear in groups, thus pointing to the deceleration and the breakup of impactors. However, there are also fewer of the large craters, and those appear relatively young; they are rarely filled with lava, showing that they were formed after volcanic activity in the area ceased, and radar data indicates that they are rough and have not had time to be eroded down.
Compared to the situation on bodies such as the Moon, it is more difficult to determine the ages of different areas of the surface on Venus on the basis of crater counts, due to the small number of craters at hand. However, the crater distribution is consistent with complete spatial randomness, implying that the surface of the entire planet is roughly the same age, or at least that very large areas differ little in age from the average.
Taken together, this evidence suggests that the surface of Venus is geologically young. The impact crater distribution appears to be most consistent with models that call for a near-complete resurfacing of the planet. Subsequent to this period of extreme activity, process rates declined and impact craters began to accumulate, with only minor modification and resurfacing since.
A young surface, all created at roughly the same time, is a situation unlike that of any of the other terrestrial planets.
Global resurfacing event
Age estimates based on crater counts indicate a young surface, in contrast to the much older surfaces of Mars, Mercury, and the Moon. For this to be the case on a planet without crustal recycling by plate tectonics requires explanation. One hypothesis is that Venus underwent some sort of global resurfacing about 300–500 million years ago that erased the evidence of older craters.
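The logic of such an estimate reduces to one line of arithmetic: N accumulated craters over a surface area A at a steady production rate r imply a retention age of roughly t ≈ N/(rA). In the sketch below the production rate is an assumed placeholder, not a published value; real models also fold in the crater size-frequency distribution and atmospheric screening of small impactors.

# Order-of-magnitude crater retention age, t = N / (r * A).
# The production rate r is an assumed placeholder, not a published value.

N = 900        # impact craters identified by Magellan (per the text above)
A = 4.6e8      # surface area of Venus, km^2 (4 * pi * (6052 km)^2)
r = 4e-15      # assumed production rate, craters per km^2 per year

t_years = N / (r * A)
print(f"crater retention age ~ {t_years / 1e6:.0f} million years")

With the assumed rate the arithmetic lands at roughly 490 million years, inside the window quoted above; the point is the structure of the estimate, not the particular numbers.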
One possible explanation for this event is that it is part of a cyclic process on Venus. On Earth, plate tectonics allows heat to escape from the mantle by advection, the transport of mantle material to the surface and the return of old crust to the mantle. But Venus has no evidence of plate tectonics, so this theory states that the interior of the planet heats up (due to the decay of radioactive elements) until material in the mantle is hot enough to force its way to the surface. The subsequent resurfacing event covers most or all of the planet with lava, until the mantle is cool enough for the process to start over.
Volcanoes
The surface of Venus is dominated by volcanism. Although Venus is superficially similar to Earth, it seems that the tectonic plates so active in Earth's geology do not exist on Venus. About 80% of the planet consists of a mosaic of volcanic lava plains, dotted with more than a hundred large isolated shield volcanoes, and many hundreds of smaller volcanoes and volcanic constructs such as coronae. These are geological features believed to be almost unique to Venus: huge, ring-shaped structures across and rising hundreds of meters above the surface. The only other place they have been discovered is on Uranus's moon Miranda. It is believed that they are formed when plumes of rising hot material in the mantle push the crust upwards into a dome shape, which then collapses in the centre as the molten lava cools and leaks out at the sides, leaving a crown-like structure: the corona.
Differences can be seen in volcanic deposits. In many cases, volcanic activity is localized to a fixed source, and deposits are found in the vicinity of this source. This kind of volcanism is called "centralized volcanism," in that volcanoes and other geographic features form distinct regions. The second type of volcanic activity is not radial or centralized; flood basalts cover wide expanses of the surface, similar to features such as the Deccan Traps on Earth. These eruptions result in "flow type" volcanoes.
Volcanoes less than in diameter are very abundant on Venus and they may number hundreds of thousands or even millions. Many appear as flattened domes or 'pancakes', thought to be formed in a similar way to shield volcanoes on Earth. These pancake dome volcanoes are fairly round features that are less than in height and many times that in width. It is common to find groups of hundreds of these volcanoes in areas called shield fields. The domes of Venus are between 10 and 100 times larger than those formed on Earth. They are usually associated with "coronae" and tesserae. The pancakes are thought to be formed by highly viscous, silica-rich lava erupting under Venus's high atmospheric pressure. Domes called scalloped margin domes (commonly called ticks because they appear as domes with numerous legs), are thought to have undergone mass wasting events such as landslides on their margins. Sometimes deposits of debris can be seen scattered around them.
On Venus, volcanoes are mainly of the shield type. Nevertheless, the morphology of the shield volcanoes of Venus is different from shield volcanoes on Earth. On the Earth, shield volcanoes can be a few tens of kilometers wide and up to 10 kilometers high (6.2 mi) in the case of Mauna Kea, measured from the sea floor. On Venus, these volcanoes can cover hundreds of kilometers in area, but they are relatively flat, with an average height of .
Other unique features of Venus's surface are novae (radial networks of dikes or grabens) and arachnoids. A nova is formed when large quantities of magma are extruded onto the surface to form radiating ridges and trenches which are highly reflective to radar. These dikes form a symmetrical network around the central point where the lava emerged, where there may also be a depression caused by the collapse of the magma chamber.
Arachnoids are so named because they resemble a spider's web, featuring several concentric ovals surrounded by a complex network of radial fractures similar to those of a nova. It is not known whether the 250 or so features identified as arachnoids actually share a common origin, or are the result of different geological processes.
Tectonic activity
Despite the fact that Venus appears to have no global plate tectonic system as such, the planet's surface shows various features associated with local tectonic activity. Features such as faults, folds, and volcanoes are present there and may be driven largely by processes in the mantle.
The active volcanism of Venus has generated chains of folded mountains, rift valleys, and terrain known as tesserae, a word meaning "floor tiles" in Greek. Tesserae exhibit the effects of eons of compression and tensional deformation.
Unlike those on Earth, the deformations on Venus are directly related to regional dynamic forces within the planet's mantle. Gravitational studies suggest that Venus differs from Earth in lacking an asthenosphere—a layer of lower viscosity and mechanical weakness that allows Earth's crustal tectonic plates to move. The apparent absence of this layer on Venus suggests that the deformation of the Venusian surface must be explained by convective movements within the planet's mantle.
The tectonic deformations on Venus occur on a variety of scales, the smallest of which are related to linear fractures or faults. In many areas these faults appear as networks of parallel lines. Small, discontinuous mountain crests are found which resemble those on the Moon and Mars. The effects of extensive tectonism are shown by the presence of normal faults, where the crust has sunk in one area relative to the surrounding rock, and superficial fractures. Radar imaging shows that these types of deformation are concentrated in belts located in the equatorial zones and at high southern latitudes. These belts are hundreds of kilometers wide and appear to interconnect across the whole of the planet, forming a global network associated with the distribution of volcanoes.
The rifts of Venus, formed by the expansion of the lithosphere, are groups of depressions tens to hundreds of meters wide and extending up to in length. The rifts are mostly associated with large volcanic elevations in the form of domes, such as those at Beta Regio, Atla Regio and the western part of Eistla Regio. These highlands seem to be the result of enormous mantle plumes (rising currents of magma) which have caused elevation, fracturing, faulting, and volcanism.
The highest mountain chain on Venus, Maxwell Montes in Ishtar Terra, was formed by processes of compression, expansion, and lateral movement. Another type of geographical feature, found in the lowlands, consists of ridge belts elevated several meters above the surface, hundreds of kilometers wide and thousands of kilometers long. Two major concentrations of these belts exist: one in Lavinia Planitia near the southern pole, and the second adjacent to Atalanta Planitia near the northern pole.
Tesserae are found mainly in Aphrodite Terra, Alpha Regio, Tellus Regio and the eastern part of Ishtar Terra (Fortuna Tessera). These regions contain the superimposition and intersection of grabens of different geological units, indicating that these are the oldest parts of the planet. It was once thought that the tesserae were continents associated with tectonic plates like those of the Earth; in reality they are probably the result of floods of basaltic lava forming large plains, which were then subjected to intense tectonic fracturing.
Nonetheless, studies reported on 26 October 2023 suggest for the first time that Venus may have had plate tectonics during ancient times. As a result, Venus may have had a more habitable environment, possibly once capable of harboring life forms.
Magnetic field and internal structure
Venus's crust appears to be thick on average, and composed of mafic silicate rocks. Venus's mantle is approximately thick; its chemical composition is probably similar to that of chondrites. Since Venus is a terrestrial planet, it is presumed to have a core made of semisolid iron and nickel with a radius of approximately .
The unavailability of seismic data from Venus severely limits what can be definitely known about the structure of the planet's mantle, but models of Earth's mantle have been modified to make predictions. It is expected that the uppermost mantle, from about deep, is mostly made of the mineral olivine. Descending through the mantle, the chemical composition remains largely the same, but at somewhere between about , the increasing pressure causes the crystal structure of olivine to change to the more densely packed structure of spinel. Another transition occurs between deep, where the material takes on the progressively more compact crystal structures of ilmenite and perovskite, and gradually becomes more like perovskite until the core boundary is reached.
Venus is similar to Earth in size and density, and so probably also in bulk composition, but it does not have a significant magnetic field. Earth's magnetic field is produced by what is known as the core dynamo: an electrically conducting liquid, the nickel-iron outer core, that rotates and convects. Venus is expected to have an electrically conductive core of similar composition, and although its rotation period is very long (243.7 Earth days), simulations show that even this slow rotation is adequate to produce a dynamo, provided the core convects. The absence of a field therefore implies that Venus lacks convection in its outer core. Convection occurs when there is a large difference in temperature between the inner and outer parts of the core, but since Venus has no plate tectonics to let heat out of the mantle, it is possible that outer core convection is being suppressed by a warm mantle. It is also possible that Venus lacks a solid inner core for the same reason, if the core is either too hot or not under enough pressure to allow molten nickel-iron to freeze there.
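A standard way to frame the convection argument is the magnetic Reynolds number of dynamo theory, which weighs field generation by the moving conductor against ohmic decay; self-sustaining dynamos require it to exceed a critical value, conventionally of order 10 to 100:

\[ R_m = \frac{U L}{\eta} \gtrsim R_m^{\mathrm{crit}}, \]

where \( U \) is the convective flow speed in the core, \( L \) the core scale, and \( \eta \) the magnetic diffusivity. Rotation rate does not enter directly; if convection stops, \( U \to 0 \) and \( R_m \) falls below critical however the planet spins, which is the sense in which the missing field points to a non-convecting outer core.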
Lava flows and channels
Lava flows on Venus are often much larger than Earth's, up to several hundred kilometers long and tens of kilometers wide. It is still unknown why these lava fields or lobate flows reach such sizes, but it is suggested that they are the result of very large eruptions of basaltic, low-viscosity lava spreading out to form wide, flat plains.
On Earth, there are two known types of basaltic lava: aa and pāhoehoe. Aa lava presents a rough texture in the shape of broken blocks (clinkers). Pāhoehoe lava is recognized by its pillowy or ropy appearance. Rough surfaces appear bright in radar images, which can be used to determine the differences between aa and pāhoehoe lavas. These variations can also reflect differences in lava age and preservation. Channels and lava tubes (channels that have cooled down and over which a dome has formed) are very common on Venus. Two planetary astronomers from the University of Wollongong in Australia, Dr Graeme Melville and Prof. Bill Zealey, researched these lava tubes, using data supplied by NASA, over a number of years and concluded that they were widespread and up to ten times the size of those on the Earth. Melville and Zealey said that the gigantic size of the Venusian lava tubes (tens of meters wide and hundreds of kilometers long) may be explained by the very fluid lava flows together with the high temperatures on Venus, allowing the lava to cool slowly.
For the most part, lava flow fields are associated with volcanoes. The central volcanoes are surrounded by extensive flows that form the core of the volcano. They are also related to fissure craters, coronae, dense clusters of volcanic domes, cones, wells and channels.
Thanks to Magellan, more than 200 channels and valley complexes have been identified. The channels were classified as simple, complex, or compound. Simple channels are characterized by a single, long main channel. This category includes rills similar to those found on the Moon, and a new type, called canali, consisting of long, distinct channels which maintain their width throughout their entire course. The longest such channel identified (Baltis Vallis) has a length of more than , about one-sixth of the circumference of the planet.
Complex channels include anastomosing networks as well as distributary networks. This type of channel has been observed in association with several impact craters and with major lava floods related to large lava flow fields. Compound channels are made up of both simple and complex segments. The largest of these channels shows an anastomosing web and modified hills similar to those present on Mars.
Although the shape of these channels is highly suggestive of fluid erosion, there is no evidence that they were formed by water. In fact, there is no evidence of water anywhere on Venus in the last 600 million years. While the most popular theory for the channels' formation is that they are the result of thermal erosion by lava, there are other hypotheses, including that they were formed by heated fluids formed and ejected during impacts.
Surface processes
Wind
Liquid water and ice are nonexistent on Venus, and thus the only agent of physical erosion to be found (apart from thermal erosion by lava flows) is wind. Wind tunnel experiments have shown that the density of the atmosphere allows the transport of sediments with even a small breeze. Therefore, the seeming rarity of eolian land forms must have some other cause. This implies that transportable sand-size particles are relatively scarce on the planet; which would be a result of very slow rates of mechanical erosion. The process that is most important for the production of sediment on Venus may be crater-forming impact events, which is bolstered by the seeming association between impact craters and downwind eolian land forms.
This process is manifest in the ejecta of impact craters expelled onto the surface of Venus. The material ejected during a meteorite impact is lifted to the atmosphere, where winds transport the material toward the west. As the material is deposited on the surface, it forms parabola-shaped patterns. This type of deposit can be established on top of various geologic features or lava flows. Therefore, these deposits are the youngest structures on the planet. Images from Magellan reveal the existence of more than 60 of these parabola-shaped deposits that are associated with crater impacts.
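The parabolic planform can be rationalised with a toy settling model: a grain lofted to height H settles at speed w while a zonal wind u carries it downwind a distance d ≈ (H/w)·u, so finer grains (smaller w) land farther west and the deposit fans out sorted by grain size. Every number in the sketch below is an assumed value for illustration, not a measurement.

# Toy model of wind transport of impact ejecta: range d = (H / w) * u.
# H, u, and the settling speeds w are assumed values, not measurements.

H = 50_000.0   # assumed injection height, m
u = 50.0       # assumed westward wind speed aloft, m/s

for w in (5.0, 1.0, 0.2):           # settling speeds, m/s (coarse -> fine)
    d_km = (H / w) * u / 1000.0     # downwind range, km
    print(f"grains settling at {w} m/s travel ~{d_km:,.0f} km west")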
Ejecta transported by the wind are responsible for a continual renewal of the surface; according to measurements by the Venera landers, near-surface winds move at approximately one metre per second. Given the density of the lower Venusian atmosphere, such winds are more than sufficient to erode the surface and transport fine-grained material. In the regions covered by ejecta deposits one may find wind lines, dunes, and yardangs. Wind lines form when the wind blows ejecta and volcanic ash, depositing it on top of topographic obstacles such as domes. As a consequence, the leeward sides of domes are exposed to the impact of small grains that remove the surface cap. Such processes expose the material beneath, which has a different roughness, and thus different characteristics under radar, compared to formed sediment.
The dunes are formed by the depositing of particulates that are the size of grains of sand and have wavy shapes. Yardangs are formed when the wind-transported material carves the fragile deposits and produces deep furrows.
The wind lines associated with impact craters follow a trajectory in the direction of the equator. This tendency suggests the presence of a system of Hadley cell circulation between the mid-latitudes and the equator. Magellan radar data confirm the existence of strong winds that blow toward the west above the surface of Venus, and meridional winds at the surface.
Chemical erosion
Chemical and mechanical erosion of the old lava flows is caused by reactions of the surface with the atmosphere in the presence of carbon dioxide and sulfur dioxide (see carbonate–silicate cycle for details). These two gases are the planet's first and third most abundant gases, respectively; the second most abundant gas is inert nitrogen. The reactions probably include the deterioration of silicates by carbon dioxide to produce carbonates and quartz, as well as the deterioration of silicates by sulfur dioxide to produce anhydrite (calcium sulfate) and quartz.
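Schematically, with calcium silicate standing in for the actual mineral assemblage, the two pathways can be written as the classic wollastonite (Urey-type) equilibrium and a sulfatisation reaction; the effective oxidant for the latter on Venus is debated, with CO2 itself (yielding CO) one candidate:

\[ \mathrm{CaSiO_3 + CO_2 \rightleftharpoons CaCO_3 + SiO_2} \]

\[ \mathrm{CaSiO_3 + SO_2 + \tfrac{1}{2}\,O_2 \longrightarrow CaSO_4 + SiO_2} \]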
Ancient liquid water
NASA's Goddard Institute for Space Studies and others have postulated that Venus may have had a shallow ocean in the past for up to 2 billion years, with as much water as Earth. Depending on the parameters used in their theoretical model, the last liquid water could have evaporated as recently as 715 million years ago. Currently, the only known water on Venus is in the form of a tiny amount of atmospheric vapor (20 ppm). Hydrogen, a component of water, is still being lost to space today as detected by ESA's Venus Express spacecraft.
| Physical sciences | Solar System | Astronomy |
2201190 | https://en.wikipedia.org/wiki/Billfish | Billfish | The billfish are a group (Xiphioidea) of saltwater predatory fish characterised by prominent pointed bills (rostra), and by their large size; some are longer than . Extant billfish include sailfish and marlin, which make up the family Istiophoridae, and swordfish, the sole member of the family Xiphiidae. They are often apex predators which feed on a wide variety of smaller fish, crustaceans and cephalopods. These two families are sometimes classified as belonging to the order Istiophoriformes, a group which originated around 71 million years ago in the Late Cretaceous, with the two families diverging around 15 million years ago in the Late Miocene. They have also been classified as close relatives of the mackerels and tuna within the suborder Scombroidei of the order Perciformes. The 5th edition of Fishes of the World does, however, recognise the Istiophoriformes as a valid order, albeit one also including the Sphyraenidae, the barracudas.
Billfish are pelagic and highly migratory, and are found in all oceans. Although they usually inhabit tropical and subtropical waters, swordfish are also found in temperate waters. Billfish use their long spear/sword-like upper beaks to slash at and stun prey during feeding. Their bills have been known to impale prey, and have sometimes even accidentally impaled boats and people, but they are not intentionally used for this purpose. They are highly valued as game fish by sports fishermen.
Evolution
Several extinct families of smaller billfish are known from the early Paleogene, including the Blochiidae, Palaeorhynchidae, and Hemingwayidae; all of these already have the elongated rostrum present in modern billfish. The earliest fossil billfishes are a Blochius-like fish from Peru and Hemingwaya from Turkmenistan, both of which are known from the Late Paleocene or earliest Eocene.
The enigmatic Cylindracanthus, known from the Late Cretaceous to the Eocene, is sometimes considered a "billfish" related to blochiids on the basis of its presumed rostral spines, but no other fossils are known of it aside from its rostral spines, leading to the suggestion that it had a cartilaginous body and may even be a relative of sturgeons. Similarly, the pachycormid fish Protosphyraena and the plethodid fish Rhamphoichthys from the Late Cretaceous had both convergently evolved a highly billfish-like body plan, but are known to be very distantly related to actual billfish; these genera may have instead served as a Cretaceous ecological analogue to billfish.
Species
The term billfish refers to the fishes of the families Xiphiidae and Istiophoridae. These large fishes are "characterized by the prolongation of the upper jaw, much beyond the lower jaw into a long rostrum which is flat and sword-like (swordfish) or rounded and spear-like (sailfishes, spearfishes, and marlins)."
True billfish
The 12 species of true billfish are divided into two families and five genera. One family, Xiphiidae, contains only one species, the swordfish Xiphias gladius, and the other family, Istiophoridae, contains 11 species in four genera, including marlin, spearfish, and sailfish. Controversy exists about whether the Indo-Pacific blue marlin, Makaira mazara, is the same species as the Atlantic blue marlin, M. nigricans. FishBase follows Nakamura (1985) in recognizing M. mazara as a distinct species, "chiefly because of differences in the pattern of the lateral line system".
Billfish-like fish
A number of other fishes have pronounced bills or beaks, and are sometimes referred to as billfish, despite not being true billfish. Halfbeaks look somewhat like miniature billfish, as do the sawfishes and sawsharks, cartilaginous fishes with long, serrated rostrums. Needlefish are sometimes confused with billfish, but they are "easily distinguished from the true billfish by having both jaws prolonged, the dorsal and anal fins both single and similar in size and shape, and the pelvic fins inserted far behind the pectorals." Paddlefish have elongated rostrums containing electroreceptors that can detect weak electrical fields. Paddlefish are filter feeders and may use their rostrum to detect zooplankton.
Structure and function of the bill
Billfish have a long, bony, spear-shaped bill, sometimes called a snout, beak or rostrum. The swordfish has the longest bill, about one-third its body length. Like a true sword, it is smooth, flat, pointed and sharp. The bills of other billfish are shorter and rounder, more like spears.
Billfish normally use their bills to slash at schooling fish. They swim through the fish school at high speed, slashing left and right, and then circle back to eat the fish they stunned. Adult swordfish have no teeth, and other billfish have only small file-like teeth. They swallow their catch whole, head-first. Billfish do not normally spear with their bills, though occasionally a marlin will flip a fish into the air and bayonet it. Given the speed and power of these fish, when they do spear things the results can be dramatic. Predators of billfish, such as great white and mako sharks, have been found with billfish spears embedded in them. Pelagic fish generally are fascinated by floating objects, and congregate about them. Billfish can accidentally impale boats and other floating objects when they pursue the small fish that aggregate around them. Care is needed when attempting to land a hooked billfish. Many fishermen have been injured, some seriously, by a billfish thrashing its bill about.
Other characteristics
Billfish are large swift predators which spend most of their time in the epipelagic zone of the open ocean. They feed voraciously on smaller pelagic fish, crustaceans and small squid. Some billfish species also hunt demersal fish on the seafloor, while others descend periodically to mesopelagic depths. They may come closer to the coast when they spawn in the summer. Their eggs and larvae are pelagic, that is they float freely in the water column. Many grow over three metres (10 feet) long, and the blue marlin can grow to five metres (16 feet). Females are usually larger than males.
Like scombroids (tuna, bonito and mackerel), billfish have both the ability to migrate over long distances, efficiently cruising at slow speeds, and the ability to generate rapid bursts of speed. These speed bursts can be quite astonishing, and the Indo-Pacific sailfish has been recorded making a burst of 68 miles per hour (110 km/h), nearly top speed for a cheetah and the highest speed ever recorded for a fish.
Some billfish also descend to considerable mesopelagic depths. They have sophisticated swim bladders which allow them to rapidly compensate for pressure changes as the depth changes. This means that when they are swimming deep, they can return swiftly to the surface without problems. "Like the large tuna, some billfish maintain their body temperature several degrees above ambient water temperatures; this elevated body temperature increases the efficiency of the swimming muscles, especially during excursions into the cold water below the thermocline." See heater cells for more information about these specialized modified muscle cells.
In 1936 the British zoologist James Gray posed a conundrum which has come to be known as Gray's paradox. The problem he posed was how dolphins can swim and accelerate so fast when it seemed their muscles lacked the needed power. If this is a problem with dolphins it is an even greater problem with billfish such as swordfish, which swim and accelerate faster than dolphins. In 2009, Taiwanese researchers from the National Chung Hsing University introduced new concepts of "kidnapped airfoils and circulating horsepower" to explain the swimming capabilities of swordfish. The researchers claim this analysis also "solves the perplexity of dolphin's Gray paradox". They also assert that swordfish "use sensitive rostrum/lateral-line sensors to detect upcoming/ambient water pressure and attain the best attack angle to capture the body lift power aided by the forward-biased dorsal fin to compensate for most of the water resistance power."
Billfish have prominent dorsal fins. Like tuna, mackerel and other scombroids, billfish streamline themselves by retracting their dorsal fins into a groove in their body when they swim. The shape, size, position and colour of the dorsal fin vary with the type of billfish and can be a simple way to identify a billfish species. For example, the white marlin has a dorsal fin with a curved front edge that is covered with black spots. The huge dorsal fin, or sail, of the sailfish is kept retracted most of the time. Sailfish raise it when they want to herd a school of small fish, and also after periods of high activity, presumably to cool down.
Distribution and migration
Billfish occur worldwide in temperate and tropical waters. They are highly migratory oceanic fish, spending much of their time in the epipelagic zone of international waters following major ocean currents. Migrations are linked to seasonal patterns of sea surface temperatures. They are sometimes referred to as "rare event species" because the areas they roam over in the open seas are so large that researchers have difficulty locating them. Little is known about their movements and life histories, so assessing how they can be sustainably managed is not easy.
Unlike coastal fish, billfish usually avoid inshore waters unless there is a deep dropoff close to the land. Instead, they swim along the edge of the continental shelf, where cold, nutrient-rich upwellings can fuel large schools of forage fish. Billfish can be found here, cruising and feeding "above the craggy bottom like hawks soaring along a ridge line".
Commercial fishing
In parts of the Pacific and Indian Ocean such as the Maldives, billfishing, particularly for swordfish, is an important component of subsistence fishing.
Recreational fishing
Billfish are among the most coveted of big gamefish, and major recreational fisheries cater to the demand. In North America, "the apex of the salt water pursuits is billfishing, the quest for elusive blue marlin and sailfish in the deep blue water about 60 miles out." Substantial resources are committed to the activity, particularly in the construction of private and charter billfishing boats to participate in the billfishing tournament circuit. These are expensive, purpose-built offshore vessels with powerfully driven deep sea hulls. They are often built to luxury standards and equipped with many technologies to ease the life of the deep sea recreational fisherman, including outriggers, flying bridges, fighting chairs, and state-of-the-art fishfinders and navigation electronics.
The boats cruise along the edge of the continental shelf where billfish can be found down to 200 metres (600 ft), sometimes near weed lines at the surface and submarine canyons and ridges deeper down. Commercial fishermen usually use drift nets or longlines to catch billfish, but recreational fishermen usually drift with bait fish or troll a bait or lure. Billfish are caught deeper down the water column by drifting with live bait fish such as ballyhoo, striped mullet or bonito. Alternatively, they can be caught by trolling at the surface with dead bait or trolling lures designed to imitate bait fish.
Most recreational fishermen now tag and release billfish. A 2003 study surveyed 317,000 billfish known to have been tagged and released since 1954. Of these, 4122 were recovered. The study concluded that, while tag and release programs have limitations, they provided important information about billfish that cannot currently be obtained by other methods.
As food
Billfish make good eating fish, and are high in omega-3 oils. Blue marlin has a particularly high oil content.
Billfish are primarily marketed in Japan, where they are eaten raw as sashimi. They are marketed fresh, frozen, canned, cooked and smoked. Billfish are not well suited to frying: swordfish and marlin are best grilled or broiled, or eaten raw as in sashimi, while sailfish and spearfish are somewhat tough and are better cooked over charcoal or smoked.
Mercury
However, because billfish have high trophic levels, near the top of the food web, they also contain significant levels of mercury and other toxins. According to the United States Food and Drug Administration, swordfish is one of four fishes, along with tilefish, shark, and king mackerel, that children and pregnant women should avoid due to high levels of methylmercury found in these fish and the consequent risk of mercury poisoning.
Conservation
Billfish are exploited both as food and as game fish. Marlin and sailfish are eaten in many parts of the world, and many sport fisheries target these species. Swordfish are subject to particularly intense fishing pressure, and although their survival is not threatened worldwide, they are now comparatively rare in many places where they were once abundant. The istiophorid billfishes (marlin and spearfish) also suffer from intense fishing pressure. High mortality levels occur when they are caught incidentally by longline fisheries targeting other fish. Overfishing continues to "push these declines further in some species". Because of these concerns about declining populations, sport fishermen and conservationists now work together to gather information on billfish stocks and implement programs such as catch and release, where fish are returned to the sea after they have been caught. However, the process of catching them can leave them too traumatised to recover. Studies have shown that circle fishing hooks do much less damage to billfish than traditional J-hooks, yet are just as effective for catching billfish. This benefits conservation, since it improves survival rates after release.
The stocks for individual species in billfish longline fisheries can "boom and bust" in linked and compensatory ways. For example, the Atlantic catch of blue marlin declined in the 1960s. This was accompanied by an increase in sailfish catch. The sailfish catch then declined from the end of the 1970s to the end of the 1980s, compensated by an increase in swordfish catch. As a result, overall billfish catches remained fairly stable.
"Many of the world's fisheries operate in a data poor environment that precludes predictions about how different management actions will affect individual species and the ecosystem as a whole." In recently years pop-up satellite archival tags have been used to monitor billfish. The capability of these tags to recover useful data is improving, and their use should result in more accurate stock assessments. In 2011, a group of researchers claimed they have, for the first time, standardized all available data about scombrids and billfishes so it is in a form suitable for assessing threats to these species. The synthesis shows that those species which combine a long life with a high economic value, such as the Atlantic blue marlin and the white marlin, are generally threatened. The combination puts such species in "double jeopardy".
| Biology and health sciences | Acanthomorpha | Animals |
331731 | https://en.wikipedia.org/wiki/Plaster | Plaster | Plaster is a building material used for the protective or decorative coating of walls and ceilings and for moulding and casting decorative elements. In English, "plaster" usually means a material used for the interiors of buildings, while "render" commonly refers to external applications. The term stucco refers to plasterwork that is worked in some way to produce relief decoration, rather than flat surfaces.
The most common types of plaster mainly contain either gypsum, lime, or cement, but all work in a similar way. The plaster is manufactured as a dry powder and is mixed with water to form a stiff but workable paste immediately before it is applied to the surface. The reaction with water liberates heat through crystallization and the hydrated plaster then hardens.
Plaster can be relatively easily worked with metal tools and sandpaper and can be moulded, either on site or in advance, and worked pieces can be put in place with adhesive. Plaster is suitable for finishing rather than load-bearing, and when thickly applied for decoration may require a hidden supporting framework.
Forms of plaster have several other uses. In medicine, plaster orthopedic casts are still often used for supporting set broken bones. In dentistry, plaster is used to make dental models by pouring the material into dental impressions. Various types of models and moulds are made with plaster. In art, lime plaster is the traditional matrix for fresco painting; the pigments are applied to a thin wet top layer of plaster and fuse with it so that the painting is actually in coloured plaster. In the ancient world, as well as the sort of ornamental designs in plaster relief that are still used, plaster was also widely used to create large figurative reliefs for walls, though few of these have survived.
History
Plaster was first used as a building material and for decoration in the Middle East at least 7,000 years ago. In Egypt, gypsum was burned in open fires, crushed into powder, and mixed with water to create plaster, used as a mortar between the blocks of pyramids and to provide a smooth wall facing. In Jericho, a cult arose where human skulls were decorated with plaster and painted to appear lifelike. The Romans brought plaster-work techniques to Europe.
Types
Clay plaster
Clay plaster is a mixture of clay, sand and water often with the addition of plant fibers for tensile strength over wood lath.
Clay plaster has been used around the world at least since antiquity. Settlers in the American colonies used clay plaster on the interiors of their houses: "Interior plastering in the form of clay antedated even the building of houses of frame, and must have been visible in the inside of wattle filling in those earliest frame houses in which … wainscot had not been indulged. Clay continued in use long after the adoption of laths and brick filling for the frame." Where lime was not easily accessible it was rationed and usually substituted with clay as a binder. In Martin E. Weaver's seminal work he says, "Mud plaster consists of clay or earth which is mixed with water to give a 'plastic' or workable consistency. If the clay mixture is too plastic it will shrink, crack and distort on drying. Sand, fine gravels and fibres were added to reduce the concentrations of fine clay particles which were the cause of the excessive shrinkage." Manure was often added for its fibre content. In some building techniques straw or grass was used as reinforcement.
In the earliest European settlers' plasterwork, a mud plaster was used. McKee wrote of a circa 1675 Massachusetts contract that specified the plasterer, "Is to lath and siele the four rooms of the house betwixt the joists overhead with a coat of lime and haire upon the clay; also to fill the gable ends of the house with ricks and plaister them with clay. 5. To lath and plaster partitions of the house with clay and lime, and to fill, lath, and plaister them with lime and haire besides; and to siele and lath them overhead with lime; also to fill, lath, and plaster the kitchen up to the wall plate on every side. 6. The said Daniel Andrews is to find lime, bricks, clay, stone, haire, together with laborers and workmen." Records of the New Haven colony in 1641 likewise mention clay and hay as well as lime and hair. In the German houses of Pennsylvania the use of clay persisted.
Old Economy Village is one such German settlement. The early nineteenth-century utopian village in present-day Ambridge, Pennsylvania, used clay plaster substrate exclusively in the brick and wood frame high architecture of the Feast Hall, Great House and other large and commercial structures, as well as in the brick, frame and log dwellings of the society members. The use of clay in plaster and in laying brickwork appears to have been a common practice at the time, not just in the construction of Economy village when the settlement was founded in 1824. Specifications for the construction of "lock keepers houses on the Chesapeake and Ohio Canal, written about 1828, require stone walls to be laid with clay mortar, excepting 3 inches on the outside of the walls … which (are) to be good lime mortar and well pointed." Clay was chosen not only for its low cost but also for its availability: at Economy, root cellars dug under the houses yielded clay and sand (stone), the nearby Ohio River yielded washed sand from its sand bars, and lime outcroppings and oyster shell supplied the lime kiln.
The surrounding forests of the new village of Economy provided straight-grained, old-growth oak trees for lath. Hand-split lath starts with a log of straight-grained wood of the required length. The log is split into quarters and then into smaller and smaller bolts with wedges and a sledge. When small enough, a froe and mallet were used to split away narrow strips of lath. Farm animals provided hair and manure for the float coat of plaster. Fields of wheat and grains provided straw and hay to reinforce the clay plaster. But there was no uniformity in clay plaster recipes.
Manure provides fiber for tensile strength as well as protein adhesive. Unlike casein used with lime plaster, hydrogen bonds of manure proteins are weakened by moisture. With braced timber-framed structures clay plaster was used on interior walls and ceilings as well as exterior walls as the wall cavity and exterior cladding isolated the clay plaster from moisture penetration. Application of clay plaster in brick structures risked water penetration from failed mortar joints on the exterior brick walls. In Economy Village, the rear and middle wythes of brick dwelling walls are laid in a clay and sand mortar with the front wythe bedded in a lime and sand mortar to provide a weather proof seal to protect from water penetration. This allowed a rendering of clay plaster and setting coat of thin lime and fine sand on exterior-walled rooms.
Split lath was nailed with square cut lath nails, one into each framing member. With hand-split lath the plasterer had the luxury of making lath to fit the cavity being plastered. Lengths of lath from two to six feet are not uncommon at Economy Village. Hand-split lath is not uniform like sawn lath. The straightness or waviness of the grain affected the thickness or width of each lath, and thus the spacing of the lath. The clay plaster rough coat varied to cover the irregular lath. Window and door trim as well as the mudboard (baseboard) acted as screeds. With the variation in lath thickness and the use of coarse straw and manure, the clay coat of plaster was thick in comparison to later lime-only and gypsum plasters. In Economy Village, the lime top coats are thin veneers, often an eighth of an inch or less, attesting to the scarcity of limestone supplies there.
Clay plasters with their lack of tensile and compressive strength fell out of favor as industrial mining and technology advances in kiln production led to the exclusive use of lime and then gypsum in plaster applications. However, clay plasters still exist after hundreds of years clinging to split lath on rusty square nails. The wall variations and roughness reveal a hand-made and pleasing textured alternative to machine-made modern substrate finishes. But clay plaster finishes are rare and fleeting. According to Martin Weaver, "Many of North America's historic building interiors … are all too often … one of the first things to disappear in the frenzy of demolition of interiors which has unfortunately come to be a common companion to 'heritage preservation' in the guise of building rehabilitation."
Gypsum plaster (plaster of Paris)
Gypsum plaster, also known as plaster of Paris, is a white powder consisting of calcium sulfate hemihydrate. The natural form of the compound is the mineral bassanite.
Etymology
The name "plaster of Paris" was given because it was originally made by heating gypsum from a large deposit at Montmartre, a hill in the north end of Paris.
Chemistry
Gypsum plaster, gypsum powder, or plaster of Paris, is produced by heating gypsum to about 120–180 °C (248–356 °F) in a kiln:
CaSO4·2H2O → CaSO4·½H2O + 1½ H2O (released as steam)
Plaster of Paris has the remarkable property of setting into a hard mass on wetting with water:
CaSO4·½H2O + 1½ H2O → CaSO4·2H2O
Plaster of Paris is stored in moisture-proof containers, because the presence of moisture can cause slow setting of plaster of Paris by bringing about its hydration, which will make it useless after some time.
When the dry plaster powder is mixed with water, it rehydrates over time into gypsum. The setting of plaster slurry starts about 10 minutes after mixing and is complete in about 45 minutes. The setting of plaster of Paris is accompanied by a slight expansion of volume. It is used in making casts for statues, toys, and more. The initial matrix consists mostly of orthorhombic crystals: the kinetic product. Over the next 72 hours, the rhombic crystals give way to an interlocking mass of monoclinic crystal needles, and the plaster increases in hardness and strength. If plaster or gypsum is heated to between 130 °C (266 °F) and 180 °C (350 °F), hemihydrate is formed, which will also re-form as gypsum if mixed with water.
On heating to 180 °C (350 °F), the nearly water-free form, called γ-anhydrite (CaSO4·nH2O where n = 0 to 0.05) is produced. γ-anhydrite reacts slowly with water to return to the dihydrate state, a property exploited in some commercial desiccants. On heating above 250 °C (480 °F), the completely anhydrous form called β-anhydrite or dead burned plaster is formed.
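These dehydration and rehydration equations fix the water budget of plaster production and setting. As a rough illustration, the following minimal Python sketch (using standard atomic masses; the figures are rounded approximations, not values from any cited source) computes how much water is driven off in calcining and how much is needed stoichiometrically for setting:

```python
# Water budget of the two reactions above, from standard atomic masses.
# A minimal sketch; figures are rounded approximations.

M_H2O = 18.02                          # g/mol
M_CASO4 = 136.14                       # g/mol
M_DIHYDRATE = M_CASO4 + 2.0 * M_H2O    # CaSO4·2H2O, ~172.2 g/mol
M_HEMIHYDRATE = M_CASO4 + 0.5 * M_H2O  # CaSO4·½H2O, ~145.2 g/mol

water_per_mol = 1.5 * M_H2O            # driven off in calcining, restored on setting

print(f"mass lost as steam on calcining: {water_per_mol / M_DIHYDRATE:.1%}")  # ~15.7%
print(f"stoichiometric water per kg of dry plaster: "
      f"{1000 * water_per_mol / M_HEMIHYDRATE:.0f} g")                        # ~186 g
```

In practice more water than the stoichiometric amount is mixed in so that the slurry flows; the excess simply evaporates as the cast dries.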
Uses of gypsum plaster
for making surfaces like the walls of a house smooth before painting them and for making ornamental designs on the ceilings of houses and other buildings. (see Plaster In decorative architecture)
for making toys, decorative materials, cheap ornaments, cosmetics, and black-board chalk.
a fire-proofing material. (see Plaster in Fire protection)
an orthopedic cast is used in hospitals for setting fractured bones in the right position to ensure correct healing and avoid nonunion. It keeps the fractured bone straight. It is used in this way, because when plaster of Paris is mixed with a proper quantity of water and applied around the fractured limb, it sets into a hard mass, thereby keeping the bones in a fixed position. It is also used for making casts in dentistry. (see Plaster in Medicine)
in chemistry laboratories, for sealing air gaps in apparatus when an airtight arrangement is required.
Lime plaster
Lime plaster is a mixture of calcium hydroxide and sand (or other inert fillers). Carbon dioxide in the atmosphere causes the plaster to set by transforming the calcium hydroxide into calcium carbonate (limestone). Whitewash is based on the same chemistry.
To make lime plaster, limestone (calcium carbonate) is heated above approximately 850 °C (1600 °F) to produce quicklime (calcium oxide). Water is then added to produce slaked lime (calcium hydroxide), which is sold as a wet putty or a white powder. Additional water is added to form a paste prior to use. The paste may be stored in airtight containers. When exposed to the atmosphere, the calcium hydroxide very slowly turns back into calcium carbonate through reaction with atmospheric carbon dioxide, causing the plaster to increase in strength.
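The chemistry described in this paragraph is the classic lime cycle, which can be summarized as:

CaCO3 → CaO + CO2 (calcining limestone above ~850 °C)
CaO + H2O → Ca(OH)2 (slaking quicklime)
Ca(OH)2 + CO2 → CaCO3 + H2O (carbonation as the plaster sets)

The cycle ends where it began, with calcium carbonate, which is why set lime plaster is chemically akin to the limestone it came from.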
Lime plaster was a common building material for wall surfaces in a process known as lath and plaster, whereby a series of wooden strips on a studwork frame was covered with a semi-dry plaster that hardened into a surface. The plaster used in most lath and plaster construction was mainly lime plaster, with a cure time of about a month. To stabilize the lime plaster during curing, small amounts of plaster of Paris were incorporated into the mix. Because plaster of Paris sets quickly, "retardants" were used to slow setting time enough to allow workers to mix large working quantities of lime putty plaster. A modern form of this method uses expanded metal mesh over wood or metal structures, which allows a great freedom of design as it is adaptable to both simple and compound curves. Today this building method has been partly replaced with drywall, also composed mostly of gypsum plaster. In both these methods, a primary advantage of the material is that it is resistant to a fire within a room and so can assist in reducing or eliminating structural damage or destruction provided the fire is promptly extinguished.
Lime plaster is used for frescoes, where pigments, diluted in water, are applied to the still wet plaster.
The USA and Iran are the main plaster producers in the world.
Cement plaster
Cement plaster is a mixture of suitable plaster, sand, Portland cement and water which is normally applied to masonry interiors and exteriors to achieve a smooth surface. Interior surfaces sometimes receive a final layer of gypsum plaster. Walls constructed with stock bricks are normally plastered while face brick walls are not plastered. Various cement-based plasters are also used as proprietary spray fireproofing products. These usually use vermiculite as lightweight aggregate. Heavy versions of such plasters are also in use for exterior fireproofing, to protect LPG vessels, pipe bridges and vessel skirts.
Cement plaster was first introduced in America around 1909 and was often called by the generic name adamant plaster after a prominent manufacturer of the time. The advantages of cement plaster noted at that time were its strength, hardness, quick setting time and durability.
Heat-resistant plaster
Heat-resistant plaster is a building material used for coating walls and chimney breasts and for use as a fire barrier in ceilings. Its purpose is to replace conventional gypsum plasters in cases where the temperature can get too high for gypsum plaster to stay on the wall or ceiling.
An example of a heat-resistant plaster composition is a mixture of Portland cement, gypsum, lime, exfoliated insulating aggregate (perlite and vermiculite or mica), phosphate shale, and small amounts of adhesive binder (such as Gum karaya), and a detergent agent (such as sodium dodecylbenzene sulfonate).
Applications
In decorative architecture
Plaster may also be used to create complex detailing for use in room interiors. Such detailing may be geometric (simulating wood or stone) or naturalistic (simulating leaves, vines, and flowers), and is often used to simulate the wood or stone detailing found in more substantial buildings.
Plaster is also used in modern construction for false ceilings, in which the powder is formed into sheets that are attached to the structural ceiling with fasteners; these are made in various designs incorporating combinations of lights and colours. Plaster is commonly used in house construction as well. After construction, walls may be painted directly (as is common in French architecture), but elsewhere they are first coated with a plaster which, in some countries, is simply calcium carbonate. After drying, the calcium carbonate plaster turns white, and the wall is ready to be painted. Elsewhere in the world, such as the UK, ever finer layers of plaster are added on top of the plasterboard (or sometimes directly onto the brick wall) to give a smooth brown polished texture ready for painting.
Art
Mural paintings are commonly painted onto a plaster secondary support. Some, like Michelangelo's Sistine Chapel ceiling, are executed in fresco, meaning they are painted on a thin layer of wet plaster, called intonaco; the pigments sink into this layer so that the plaster itself becomes the medium holding them, which accounts for the excellent durability of fresco. Additional work may be added a secco on top of the dry plaster, though this is generally less durable.
Plaster (often called stucco in this context) is a far easier material for making reliefs than stone or wood, and was widely used for large interior wall-reliefs in Egypt and the Near East from antiquity into Islamic times (latterly for architectural decoration, as at the Alhambra), Rome, and Europe from at least the Renaissance, as well as probably elsewhere. However, it needs very good conditions to survive long in unmaintained buildings – Roman decorative plasterwork is mainly known from Pompeii and other sites buried by ash from Mount Vesuvius.
Plaster may be cast directly into a damp clay mold. In creating this, piece molds (molds designed for making multiple copies) or waste molds (for single use) would be made of plaster. This "negative" image, if properly designed, may be used to produce clay reproductions, which when fired in a kiln become terra cotta building decorations, or it may be used to create cast concrete sculptures. If a plaster positive was desired, it would be constructed or cast to form a durable image artwork. As a model for stonecutters this would be sufficient. If intended for producing a bronze casting, the plaster positive could be further worked to produce smooth surfaces. An advantage of this plaster image is that it is relatively cheap; should a patron approve of the durable image and be willing to bear further expense, subsequent molds could be made for the creation of a wax image to be used in lost wax casting, a far more expensive process. In lieu of producing a bronze image suitable for outdoor use, the plaster image may be painted to resemble a metal image; such sculptures are suitable only for presentation in a weather-protected environment.
Plaster expands while hardening, then contracts slightly just before hardening completely. This makes plaster excellent for use in molds, and it is often used as an artistic material for casting. Plaster is also commonly spread over an armature (form) made of wire mesh, cloth, or other materials, as a way of adding raised details. For these processes, limestone- or acrylic-based plaster may be employed, known as stucco.
Products composed mainly of plaster of Paris and a small amount of Portland cement are used for casting sculptures and other art objects as well as molds. Considerably harder and stronger than straight plaster of Paris, these products are for indoor use only as they degrade in moist conditions.
Medicine
Plaster is widely used as a support for broken bones; a bandage impregnated with plaster is moistened and then wrapped around the damaged limb, setting into a close-fitting yet easily removed tube, known as an orthopedic cast.
Plaster is also used in preparation for radiotherapy when fabricating individualized immobilization shells for patients. Plaster bandages are used to construct an impression of a patient's head and neck, and liquid plaster is used to fill the impression and produce a plaster bust. The transparent material polymethyl methacrylate (Plexiglas, Perspex) is then vacuum formed over this bust to create a clear face mask which will hold the patient's head steady while radiation is being delivered.
In dentistry, plaster is used for mounting casts or models of oral tissues. These diagnostic and working models are usually made from dental stone, a stronger, harder and denser derivative of plaster which is manufactured from gypsum under pressure. Plaster is also used to invest and flask wax dentures, the wax being subsequently removed by "burning out" and replaced with flowable denture base material. The typically acrylic denture base then cures in the plaster investment mold. Plaster investments can withstand the high heat and pressure needed to ensure a rigid denture base. Moreover, in dentistry there are five types of gypsum products, classified by their consistency and uses: impression plaster (type 1), model plaster (type 2), and dental stones (types 3, 4 and 5).
In orthotics and prosthetics, plaster bandages traditionally were used to create impressions of the patient's limb (or residuum). This negative impression was then, itself, filled with plaster of Paris, to create a positive model of the limb and used in fabricating the final medical device.
In addition, dentures (false teeth) are made by first taking a dental impression using a soft, pliable material that can be removed from around the teeth and gums without loss of fidelity, and using the impression to create a wax model of the teeth and gums. The model is used to create a plaster mold (which is heated so the wax melts and flows out), and the denture materials are injected into the mold. After a curing period, the mold is opened and the dentures are cleaned up and polished.
Fire protection
Plasters have been in use in passive fire protection, as fireproofing products, for many decades.
Gypsum plaster releases water vapor when exposed to flame, acting to slow the spread of the fire for as much as an hour or two, depending on thickness. Plaster also provides some insulation to retard heat flow into structural steel elements, which would otherwise lose their strength and collapse in a fire. Early versions of protective plasters often contained asbestos fibres, which have since been outlawed in many industrialized nations.
Recent plasters for fire protection either contain cement or gypsum as binding agents as well as mineral wool or glass fiber to add mechanical strength.
Vermiculite, polystyrene beads or chemical expansion agents are often added to decrease the density of the finished product and increase thermal insulation.
One differentiates between interior and exterior fireproofing. Interior products are typically less substantial, with lower densities and lower cost. Exterior products have to withstand harsher environmental conditions. A rough surface is typically forgiven inside buildings, as dropped ceilings often hide it. Fireproofing plasters are losing ground to more costly intumescent and endothermic products, simply on technical merit. Trade jurisdiction on unionized construction sites in North America remains with the plasterers, regardless of whether the plaster is decorative in nature or is used in passive fire protection. Cementitious and gypsum-based plasters tend to be endothermic. Fireproofing plasters are closely related to firestop mortars. Most firestop mortars can be sprayed and tooled very well, due to the fine detail work that is required of firestopping.
3D printing
Powder bed and inkjet head 3D printing is commonly based on the reaction of gypsum plaster with water, where the water is selectively applied by the inkjet head.
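As an illustration of how this process sequences powder and water, here is a schematic sketch; every function and parameter below is a hypothetical illustration, not an actual printer API:

```python
# Schematic of binder-jet ("powder bed and inkjet head") 3D printing on gypsum.
# Everything here is a hypothetical illustration, not a real printer API.

def spread_powder(thickness_mm):
    print(f"spread {thickness_mm} mm of dry hemihydrate powder")

def jet_water(mask):
    # The inkjet head deposits water (plus binder) only where the slice is solid;
    # the wetted plaster rehydrates to gypsum and hardens in place.
    print(f"wet {sum(mask)} voxels of this layer")

def lower_platform(thickness_mm):
    print(f"lower build platform by {thickness_mm} mm")

def print_part(slices, layer_height_mm=0.1):
    for mask in slices:          # each slice: 1 = solid, 0 = loose support powder
        spread_powder(layer_height_mm)
        jet_water(mask)
        lower_platform(layer_height_mm)
    # Unbound dry powder supports overhangs and is brushed off afterwards.

print_part([[1, 1, 0], [1, 1, 1]])
```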
Safety issues
The chemical reaction that occurs when plaster is mixed with water is exothermic. When plaster sets, it can reach temperatures of more than 60 °C (140 °F) and, in large volumes, can burn the skin. In January 2007, a secondary school student in Lincolnshire, England sustained third-degree burns after encasing her hands in a bucket of plaster as part of a school art project.
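A rough estimate shows why such temperatures are plausible. The sketch below assumes a hydration enthalpy of roughly 17 kJ per mole of hemihydrate and typical heat capacities, figures taken from general chemistry references rather than from this article, and ignores heat lost to the surroundings:

```python
# Rough adiabatic estimate of the temperature rise as plaster of Paris sets.
# Assumed figures are approximate textbook values, not taken from this article:
H_HYDRATION = 17_200   # J released per mol of hemihydrate hydrated (assumed)
CP_GYPSUM = 1.09       # J/(g*K), set gypsum (assumed)
CP_WATER = 4.18        # J/(g*K), liquid water
M_HEMI, M_DI, M_H2O = 145.15, 172.17, 18.02  # g/mol

def adiabatic_rise(water_per_g_plaster):
    """Temperature rise (K) for 1 g of plaster at the given water-to-plaster
    ratio, ignoring heat lost to the surroundings."""
    mols = 1.0 / M_HEMI                        # moles of hemihydrate in 1 g
    heat = mols * H_HYDRATION                  # J released
    gypsum = mols * M_DI                       # g of dihydrate formed
    leftover_water = water_per_g_plaster - mols * 1.5 * M_H2O
    return heat / (gypsum * CP_GYPSUM + leftover_water * CP_WATER)

print(f"wet mix   (0.7 water ratio): ~{adiabatic_rise(0.7):.0f} K rise")  # ~34 K
print(f"stiff mix (0.3 water ratio): ~{adiabatic_rise(0.3):.0f} K rise")  # ~67 K
```

Under these assumptions a stiff mix starting at room temperature can pass 60 °C, and a large volume, which sheds heat slowly, can hold that temperature long enough to burn skin, consistent with the incident described above.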
Plaster that contains powdered silica or asbestos presents health hazards if inhaled repeatedly. Asbestos is a known irritant when inhaled and can cause cancer, especially in people who smoke, and inhalation can also cause asbestosis. Inhaled silica can cause silicosis and (in very rare cases) can encourage the development of cancer. Persons working regularly with plaster containing these additives should take precautions to avoid inhaling powdered plaster, cured or uncured.
People can be exposed to plaster of Paris in the workplace by breathing it in, swallowing it, skin contact, and eye contact. The Occupational Safety and Health Administration (OSHA) has set the legal limit (permissible exposure limit) for plaster of Paris exposure in the workplace as 15 mg/m3 total exposure and 5 mg/m3 respiratory exposure over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a Recommended exposure limit (REL) of 10 mg/m3 total exposure and 5 mg/m3 respiratory exposure over an 8-hour workday.
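For illustration, a measured shift can be compared against these limits by computing the 8-hour time-weighted average; the sampler readings in this sketch are invented:

```python
# Comparing a measured shift against the exposure limits quoted above.
# The sampler readings here are invented for illustration.

PEL_TOTAL = 15.0   # OSHA permissible exposure limit, total dust, mg/m3 (8-h TWA)
REL_TOTAL = 10.0   # NIOSH recommended exposure limit, total dust, mg/m3 (8-h TWA)

def twa(samples, workday_h=8.0):
    """8-hour time-weighted average from (concentration mg/m3, duration h) pairs."""
    return sum(c * t for c, t in samples) / workday_h

shift = [(22.0, 1.5), (8.0, 4.0), (3.0, 2.5)]    # hypothetical readings
exposure = twa(shift)
print(f"TWA = {exposure:.1f} mg/m3")             # 9.1 mg/m3
print("over OSHA PEL" if exposure > PEL_TOTAL else "within OSHA PEL")
print("over NIOSH REL" if exposure > REL_TOTAL else "within NIOSH REL")
```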
| Technology | Building materials | null |
331755 | https://en.wikipedia.org/wiki/Transitional%20fossil | Transitional fossil | A transitional fossil is any fossilized remains of a life form that exhibits traits common to both an ancestral group and its derived descendant group. This is especially important where the descendant group is sharply differentiated by gross anatomy and mode of living from the ancestral group. These fossils serve as a reminder that taxonomic divisions are human constructs that have been imposed in hindsight on a continuum of variation. Because of the incompleteness of the fossil record, there is usually no way to know exactly how close a transitional fossil is to the point of divergence. Therefore, it cannot be assumed that transitional fossils are direct ancestors of more recent groups, though they are frequently used as models for such ancestors.
In 1859, when Charles Darwin's On the Origin of Species was first published, the fossil record was poorly known. Darwin described the perceived lack of transitional fossils as "the most obvious and gravest objection which can be urged against my theory," but he explained it by relating it to the extreme imperfection of the geological record. He noted the limited collections available at the time but described the available information as showing patterns that followed from his theory of descent with modification through natural selection. Indeed, Archaeopteryx was discovered just two years later, in 1861, and represents a classic transitional form between earlier, non-avian dinosaurs and birds. Many more transitional fossils have been discovered since then, and there is now abundant evidence of how all classes of vertebrates are related, including many transitional fossils. Specific examples of class-level transitions are: tetrapods and fish, birds and dinosaurs, and mammals and "mammal-like reptiles".
The term "missing link" has been used extensively in popular writings on human evolution to refer to a perceived gap in the hominid evolutionary record. It is most commonly used to refer to any new transitional fossil finds. Scientists, however, do not use the term, as it refers to a pre-evolutionary view of nature.
Evolutionary and phylogenetic taxonomy
Transitions in phylogenetic nomenclature
In evolutionary taxonomy, the prevailing form of taxonomy during much of the 20th century and still used in non-specialist textbooks, taxa based on morphological similarity are often drawn as "bubbles" or "spindles" branching off from each other, forming evolutionary trees. Transitional forms are seen as falling between the various groups in terms of anatomy, having a mixture of characteristics from inside and outside the newly branched clade.
With the establishment of cladistics in the 1990s, relationships commonly came to be expressed in cladograms that illustrate the branching of the evolutionary lineages in stick-like figures. The different so-called "natural" or "monophyletic" groups form nested units, and only these are given phylogenetic names. While in traditional classification tetrapods and fish are seen as two different groups, phylogenetically tetrapods are considered a branch of fish. Thus, with cladistics there is no longer a transition between established groups, and the term "transitional fossils" is a misnomer. Differentiation occurs within groups, represented as branches in the cladogram.
In a cladistic context, transitional organisms can be seen as representing early examples of a branch, where not all of the traits typical of the previously known descendants on that branch have yet evolved. Such early representatives of a group are usually termed "basal taxa" or "sister taxa," depending on whether the fossil organism belongs to the daughter clade or not.
Transitional versus ancestral
A source of confusion is the notion that a transitional form between two different taxonomic groups must be a direct ancestor of one or both groups. The difficulty is exacerbated by the fact that one of the goals of evolutionary taxonomy is to identify taxa that were ancestors of other taxa. However, because evolution is a branching process that produces a complex bush pattern of related species rather than a linear process producing a ladder-like progression, and because of the incompleteness of the fossil record, it is unlikely that any particular form represented in the fossil record is a direct ancestor of any other. Cladistics deemphasizes the concept of one taxonomic group being an ancestor of another, and instead emphasizes the identification of sister taxa that share a more recent common ancestor with one another than they do with other groups. There are a few exceptional cases, such as some marine plankton microfossils, where the fossil record is complete enough to suggest with confidence that certain fossils represent a population that was actually ancestral to a later population of a different species. But, in general, transitional fossils are considered to have features that illustrate the transitional anatomical features of actual common ancestors of different taxa, rather than to be actual ancestors.
Prominent examples
Archaeopteryx
Archaeopteryx is a genus of theropod dinosaur closely related to the birds. Since the late 19th century, it has been accepted by palaeontologists, and celebrated in lay reference works, as being the oldest known bird, though a study in 2011 has cast doubt on this assessment, suggesting instead that it is a non-avialan dinosaur closely related to the origin of birds.
It lived in what is now southern Germany in the Late Jurassic period around 150 million years ago, when Europe was an archipelago in a shallow warm tropical sea, much closer to the equator than it is now. Similar in shape to a European magpie, with the largest individuals possibly attaining the size of a raven, Archaeopteryx could grow to about 0.5 metres (1.6 ft) in length. Despite its small size, broad wings, and inferred ability to fly or glide, Archaeopteryx has more in common with other small Mesozoic dinosaurs than it does with modern birds. In particular, it shares the following features with the deinonychosaurs (dromaeosaurs and troodontids): jaws with sharp teeth, three fingers with claws, a long bony tail, hyperextensible second toes ("killing claw"), feathers (which suggest homeothermy), and various skeletal features. These features make Archaeopteryx a clear candidate for a transitional fossil between dinosaurs and birds, making it important in the study both of dinosaurs and of the origin of birds.
The first complete specimen was announced in 1861, and ten more Archaeopteryx fossils have been found since then. Most of the eleven known fossils include impressions of feathers—among the oldest direct evidence of such structures. Moreover, because these feathers take the advanced form of flight feathers, Archaeopteryx fossils are evidence that feathers began to evolve before the Late Jurassic.
Australopithecus afarensis
The hominid Australopithecus afarensis represents an evolutionary transition between modern bipedal humans and their quadrupedal ape ancestors. A number of traits of the A. afarensis skeleton strongly reflect bipedalism, to the extent that some researchers have suggested that bipedality evolved long before A. afarensis. In overall anatomy, the pelvis is far more human-like than ape-like. The iliac blades are short and wide, the sacrum is wide and positioned directly behind the hip joint, and there is clear evidence of a strong attachment for the knee extensors, implying an upright posture.
While the pelvis is not entirely like that of a human (being markedly wide, or flared, with laterally orientated iliac blades), these features point to a structure radically remodelled to accommodate a significant degree of bipedalism. The femur angles in toward the knee from the hip. This trait allows the foot to fall closer to the midline of the body, and strongly indicates habitual bipedal locomotion. Present-day humans, orangutans and spider monkeys possess this same feature. The feet feature adducted big toes, making it difficult if not impossible to grasp branches with the hindlimbs. Besides locomotion, A. afarensis also had a slightly larger brain than a modern chimpanzee (the closest living relative of humans) and had teeth that were more human than ape-like.
Pakicetids, Ambulocetus
The cetaceans (whales, dolphins and porpoises) are marine mammal descendants of land mammals. The pakicetids are an extinct family of hoofed mammals that are the earliest whales, whose closest sister group is Indohyus from the family Raoellidae. They lived in the Early Eocene, around 53 million years ago. Their fossils were first discovered in North Pakistan in 1979, at a river not far from the shores of the former Tethys Sea. Pakicetids could hear under water, using enhanced bone conduction, rather than depending on tympanic membranes like most land mammals. This arrangement does not give directional hearing under water.
Ambulocetus natans, which lived about 49 million years ago, was discovered in Pakistan in 1994. It was probably amphibious, and looked like a crocodile. In the Eocene, ambulocetids inhabited the bays and estuaries of the Tethys Ocean in northern Pakistan. The fossils of ambulocetids are always found in near-shore shallow marine deposits associated with abundant marine plant fossils and littoral molluscs. Although they are found only in marine deposits, their oxygen isotope values indicate that they consumed water with a range of degrees of salinity, some specimens showing no evidence of sea water consumption and others none of fresh water consumption at the time when their teeth were fossilized. It is clear that ambulocetids tolerated a wide range of salt concentrations. Their diet probably included land animals that approached water for drinking, or freshwater aquatic organisms that lived in the river. Hence, ambulocetids represent the transition phase of cetacean ancestors between freshwater and marine habitat.
Tiktaalik
Tiktaalik is a genus of extinct sarcopterygian (lobe-finned fish) from the Late Devonian period, with many features akin to those of tetrapods (four-legged animals). It is one of several lines of ancient sarcopterygians to develop adaptations to the oxygen-poor shallow water habitats of its time—adaptations that led to the evolution of tetrapods. Well-preserved fossils were found in 2004 on Ellesmere Island in Nunavut, Canada.
Tiktaalik lived approximately 375 million years ago. Paleontologists suggest that it is representative of the transition between non-tetrapod vertebrates such as Panderichthys, known from fossils 380 million years old, and early tetrapods such as Acanthostega and Ichthyostega, known from fossils about 365 million years old. Its mixture of primitive fish and derived tetrapod characteristics led one of its discoverers, Neil Shubin, to characterize Tiktaalik as a "fishapod." Unlike many previous, more fish-like transitional fossils, the "fins" of Tiktaalik have basic wrist bones and simple rays reminiscent of fingers. They may have been weight-bearing. Like all modern tetrapods, it had rib bones, a mobile neck with a separate pectoral girdle, and lungs, though it had the gills, scales, and fins of a fish. However, in a 2008 paper, Boisvert et al. noted that Panderichthys, due to its more derived distal portion, might be closer to tetrapods than Tiktaalik, which might have developed its similarities to tetrapods independently by convergent evolution.
Tetrapod footprints found in Poland and reported in Nature in January 2010 were "securely dated" at 10 million years older than the oldest known elpistostegids (of which Tiktaalik is an example), implying that animals like Tiktaalik, possessing features that evolved around 400 million years ago, were "late-surviving relics rather than direct transitional forms, and they highlight just how little we know of the earliest history of land vertebrates."
Amphistium
Pleuronectiformes (flatfish) are an order of ray-finned fish. The most obvious characteristic of the modern flatfish is their asymmetry, with both eyes on the same side of the head in the adult fish. In some families the eyes are always on the right side of the body (dextral or right-eyed flatfish) and in others they are always on the left (sinistral or left-eyed flatfish). The primitive spiny turbots include equal numbers of right- and left-eyed individuals, and are generally less asymmetrical than the other families. Other distinguishing features of the order are the presence of protrusible eyes, another adaptation to living on the seabed (benthos), and the extension of the dorsal fin onto the head.
Amphistium is a 50-million-year-old fossil fish identified as an early relative of the flatfish, and as a transitional fossil. In Amphistium, the transition from the typical symmetric head of a vertebrate is incomplete, with one eye placed near the top-center of the head. Paleontologists concluded that "the change happened gradually, in a way consistent with evolution via natural selection—not suddenly, as researchers once had little choice but to believe."
Amphistium is among the many fossil fish species known from the Monte Bolca Lagerstätte of Lutetian Italy. Heteronectes is a related and very similar fossil from slightly earlier strata in France.
Runcaria
A Middle Devonian precursor to seed plants has been identified from Belgium, predating the earliest seed plants by about 20 million years. Runcaria, small and radially symmetrical, is an integumented megasporangium surrounded by a cupule. The megasporangium bears an unopened distal extension protruding above the multilobed integument. It is suspected that the extension was involved in anemophilous pollination. Runcaria sheds new light on the sequence of character acquisition leading to the seed, having all the qualities of seed plants except for a solid seed coat and a system to guide the pollen to the seed.
Fossil record
Not every transitional form appears in the fossil record, because the fossil record is not complete. Organisms are only rarely preserved as fossils in the best of circumstances, and only a fraction of such fossils have been discovered. Paleontologist Donald Prothero noted that this is illustrated by the fact that the number of species known through the fossil record was less than 5% of the number of known living species, suggesting that the number of species known through fossils must be far less than 1% of all the species that have ever lived.
Because of the specialized and rare circumstances required for a biological structure to fossilize, logic dictates that known fossils represent only a small percentage of all life-forms that ever existed—and that each discovery represents only a snapshot of evolution. The transition itself can only be illustrated and corroborated by transitional fossils, which never demonstrate an exact half-way point between clearly divergent forms.
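Prothero's arithmetic can be made concrete with a toy calculation; only the "less than 5%" ratio comes from the text above, while the absolute counts below are purely hypothetical:

```python
# Toy version of Prothero's estimate. Only the "less than 5%" ratio comes
# from the text above; the absolute counts here are invented placeholders.

known_living = 1_500_000              # hypothetical count of described living species
fossil_known = 0.05 * known_living    # "less than 5%" of known living species
species_ever = 100 * known_living     # hypothetical; most species ever are extinct

share = fossil_known / species_ever
print(f"Fossil species as a share of all species ever: {share:.3%}")  # 0.050%
# Any plausible multiplier for extinct diversity keeps this share far below 1%.
```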
The fossil record is very uneven and, with few exceptions, is heavily slanted toward organisms with hard parts, leaving most groups of soft-bodied organisms with little to no fossil record. The groups considered to have a good fossil record, including a number of transitional fossils between traditional groups, are the vertebrates, the echinoderms, the brachiopods and some groups of arthropods.
History
Post-Darwin
The idea that animal and plant species were not constant, but changed over time, was suggested as far back as the 18th century. Darwin's On the Origin of Species, published in 1859, gave it a firm scientific basis. A weakness of Darwin's work, however, was the lack of palaeontological evidence, as pointed out by Darwin himself. While it is easy to imagine natural selection producing the variation seen within genera and families, the transmutation between the higher categories was harder to imagine. The dramatic find of the London specimen of Archaeopteryx in 1861, only two years after the publication of Darwin's work, offered for the first time a link between the class of the highly derived birds, and that of the more basal reptiles. In a letter to Darwin, the palaeontologist Hugh Falconer wrote:
Had the Solnhofen quarries been commissioned—by august command—to turn out a strange being à la Darwin—it could not have executed the behest more handsomely—than in the Archaeopteryx.
Thus, transitional fossils like Archaeopteryx came to be seen as not only corroborating Darwin's theory, but as icons of evolution in their own right. For example, the Swedish encyclopedic dictionary Nordisk familjebok of 1904 showed an inaccurate Archaeopteryx reconstruction (see illustration) of the fossil, "ett af de betydelsefullaste paleontologiska fynd, som någonsin gjorts" ("one of the most significant paleontological discoveries ever made").
The rise of plants
Transitional fossils are not only those of animals. With the increasing mapping of the divisions of plants at the beginning of the 20th century, the search began for the ancestor of the vascular plants. In 1917, Robert Kidston and William Henry Lang found the remains of an extremely primitive plant in the Rhynie chert in Aberdeenshire, Scotland, and named it Rhynia.
The Rhynia plant was small and stick-like, with simple dichotomously branching stems without leaves, each tipped by a sporangium. The simple form echoes that of the sporophyte of mosses, and it has been shown that Rhynia had an alternation of generations, with a corresponding gametophyte in the form of crowded tufts of diminutive stems only a few millimetres in height. Rhynia thus falls midway between mosses and early vascular plants like ferns and clubmosses. From a carpet of moss-like gametophytes, the larger Rhynia sporophytes grew much like simple clubmosses, spreading by means of horizontally growing stems that put down rhizoids to anchor the plant to the substrate. The unusual mix of moss-like and vascular traits and the extreme structural simplicity of the plant had huge implications for botanical understanding.
Missing links
The idea of all living things being linked through some sort of transmutation process predates Darwin's theory of evolution. Jean-Baptiste Lamarck envisioned that life was generated constantly in the form of the simplest creatures, and strove towards complexity and perfection (i.e. humans) through a progressive series of lower forms. In his view, lower animals were simply newcomers on the evolutionary scene.
After On the Origin of Species, the idea of "lower animals" representing earlier stages in evolution lingered, as demonstrated in Ernst Haeckel's figure of the human pedigree. While the vertebrates were then seen as forming a sort of evolutionary sequence, the various classes were distinct, the undiscovered intermediate forms being called "missing links."
The term was first used in a scientific context by Charles Lyell in the third edition (1851) of his book Elements of Geology in relation to missing parts of the geological column, but it was popularized in its present meaning by its appearance on page xi of his book Geological Evidences of the Antiquity of Man of 1863. By that time, it was generally thought that the end of the last glacial period marked the first appearance of humanity; Lyell drew on new findings in his Antiquity of Man to put the origin of human beings much further back. Lyell wrote that it remained a profound mystery how the huge gulf between man and beast could be bridged. Lyell's vivid writing fired the public imagination, inspiring Jules Verne's Journey to the Center of the Earth (1864) and Louis Figuier's 1867 second edition of La Terre avant le déluge ("Earth before the Flood"), which included dramatic illustrations of savage men and women wearing animal skins and wielding stone axes, in place of the Garden of Eden shown in the 1863 edition.
The search for a fossil showing transitional traits between apes and humans, however, was fruitless until the young Dutch geologist Eugène Dubois found a skullcap, a molar and a femur on the banks of Solo River, Java in 1891. The find combined a low, ape-like skull roof with a brain estimated at around 1000 cc, midway between that of a chimpanzee and an adult human. The single molar was larger than any modern human tooth, but the femur was long and straight, with a knee angle showing that "Java Man" had walked upright. Given the name Pithecanthropus erectus ("erect ape-man"), it became the first in what is now a long list of human evolution fossils. At the time it was hailed by many as the "missing link," helping set the term as primarily used for human fossils, though it is sometimes used for other intermediates, like the dinosaur-bird intermediary Archaeopteryx.
While "missing link" is still a popular term, well-recognized by the public and often used in the popular media, the term is avoided in scientific publications. Some bloggers have called it "inappropriate"; both because the links are no longer "missing", and because human evolution is no longer believed to have occurred in terms of a single linear progression.
Punctuated equilibrium
The theory of punctuated equilibrium developed by Stephen Jay Gould and Niles Eldredge and first presented in 1972 is often mistakenly drawn into the discussion of transitional fossils. This theory, however, pertains only to well-documented transitions within taxa or between closely related taxa over a geologically short period of time. These transitions, usually traceable in the same geological outcrop, often show small jumps in morphology between extended periods of morphological stability. To explain these jumps, Gould and Eldredge envisaged comparatively long periods of genetic stability separated by periods of rapid evolution. Gould objected to the creationist misuse of his work as a denial of transitional fossils, noting that while transitional forms are generally lacking at the species level, they are abundant between larger groups.
| Biology and health sciences | Paleontology | Biology |
331784 | https://en.wikipedia.org/wiki/Drywall | Drywall | Drywall (also called plasterboard, dry lining, wallboard, sheet rock, gib board, gypsum board, buster board, turtles board, slap board, custard board, gypsum panel and gyprock) is a panel made of calcium sulfate dihydrate (gypsum), with or without additives, typically extruded between thick sheets of facer and backer paper, used in the construction of interior walls and ceilings. The plaster is mixed with fiber (typically paper, glass wool, or a combination of these materials); plasticizer, foaming agent; and additives that can reduce mildew, flammability, and water absorption.
In the mid-20th century, drywall construction became prevalent in North America as a time- and labor-saving alternative to lath and plaster.
History
Sackett Board was invented in 1890 by New York Coal Tar Chemical Company employees Augustine Sackett and Fred L. Kane, graduates of Rensselaer Polytechnic Institute. It was made by layering plaster within four plies of wool felt paper. Sheets were thick with open (untaped) edges.
Gypsum board evolved between 1910 and 1930, beginning with wrapped board edges and the elimination of the two inner layers of felt paper in favor of paper-based facings. In 1910 United States Gypsum Corporation bought Sackett Plaster Board Company and by 1917 introduced Sheetrock. In addition to providing installation efficiency, it was developed as a measure of fire resistance. Later air entrainment technology made boards lighter and less brittle, and joint treatment materials and systems also evolved. Gypsum lath was an early substrate for plaster. An alternative to traditional wood or metal lath was a panel made up of compressed gypsum plaster board that was sometimes grooved or punched with holes to allow wet plaster to key into its surface. As it evolved, it was faced with paper impregnated with gypsum crystals that bonded with the applied facing layer of plaster. In 1936, US Gypsum trademarked ROCKLATH for their gypsum lath product.
In 2002, the European Commission imposed fines totaling €420 million on the companies Lafarge, BPB, Knauf and Gyproc Benelux, which had operated a cartel on the market which affected 80% of consumers in France, the UK, Germany and the Benelux countries.
Manufacture
A wallboard panel consists of a layer of gypsum plaster sandwiched between two layers of paper. The raw gypsum, CaSO4·2H2O, is heated to drive off the water and then slightly rehydrated to produce the hemihydrate of calcium sulfate (CaSO4·½H2O). The plaster is mixed with fiber (typically paper and/or glass fiber), plasticizer, foaming agent, finely ground gypsum crystal as an accelerator, EDTA, starch or other chelate as a retarder, and various additives that may increase mildew and fire resistance, lower water absorption (wax emulsion or silanes), and reduce creep (tartaric or boric acid). The board is then formed by sandwiching a core of the wet mixture between two sheets of heavy paper or fiberglass mats. When the core sets, it is dried in a large drying chamber, and the sandwich becomes rigid and strong enough for use as a building material.
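The calcination arithmetic behind this step can be sketched in a few lines of Python; this is an illustrative check using standard molar masses, not a description of any particular plant's process:

```python
# Stoichiometry of calcination: CaSO4·2H2O -> CaSO4·(1/2)H2O + (3/2) H2O.
# Molar masses (g/mol) are standard reference values.
M = {"Ca": 40.08, "S": 32.06, "O": 16.00, "H": 1.008}

m_caso4 = M["Ca"] + M["S"] + 4 * M["O"]   # anhydrous calcium sulfate
m_h2o = 2 * M["H"] + M["O"]               # water
m_gypsum = m_caso4 + 2 * m_h2o            # the dihydrate, CaSO4·2H2O
m_lost = 1.5 * m_h2o                      # water driven off per mole of gypsum

# About 15.7% of the raw gypsum's mass leaves as water vapor.
print(f"mass lost on calcination: {100 * m_lost / m_gypsum:.1f}%")
```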
Drying chambers typically use natural gas today. To dry of wallboard, between is required. Organic dispersants and plasticizers are used so that the slurry will flow during manufacture and to reduce the water and hence the drying time. Coal-fired power stations include devices called scrubbers to remove sulfur from their exhaust emissions. The sulfur is absorbed by powdered limestone in a process called flue-gas desulfurization (FGD), which produces several new substances. One is called "FGD gypsum". This is commonly used in drywall construction in the United States and elsewhere.
In 2020, 8.4 billion square meters of drywall were sold around the world.
Construction techniques
As an alternative to a week-long plaster application, an entire house can be drywalled in one or two days by two experienced drywallers, and drywall is easy enough for many amateur home carpenters to install. In large-scale commercial construction, the work of installing and finishing drywall is often split between drywall mechanics, or hangers, who install the wallboard, and tapers (also known as finishers, mud men, or float crew), who finish the joints and cover the fastener heads with drywall compound. Drywall can be finished anywhere from a level 0 to a level 5, where level 0 is not finished in any fashion and level 5 is the most pristine. Depending on how significant the finish is to the customer, the extra steps in the finish may or may not be necessary, though priming and painting of drywall are recommended in any location where it may be exposed to wear.
Drywall is cut to size by scoring the paper on the finished side (usually white) with a utility knife, breaking the sheet along the cut, and cutting the paper backing. Small features such as holes for outlets and light switches are usually cut using a keyhole saw, oscillating multi-tool or a tiny high-speed bit in a rotary tool. Drywall is then fixed to the structure with nails or drywall screws and often glue. Drywall fasteners, also referred to as drywall clips or stops, are gaining popularity in residential and commercial construction. Drywall fasteners are used for supporting interior drywall corners and replacing the non-structural wood or metal blocking that traditionally was used to install drywall. Their function saves material and labor costs, minimizes call-backs due to truss uplift, increases energy efficiency, and makes plumbing and electrical installation simpler.
When driven fully home, drywall screws countersink their heads slightly into the drywall. They use a 'bugle head', a concave taper, rather than the conventional conical countersunk head; this compresses the drywall surface rather than cutting into it and so avoids tearing the paper. Screws for light-gauge steel framing have a sharp point and finely spaced threads. If the steel framing is heavier than 20-gauge, self-drilling screws with finely spaced threads must be used. In some applications, the drywall may be attached to the wall with adhesives.
After the sheets are secured to the wall studs or ceiling joists, the installer conceals the seams between drywall sheets with joint tape or fiber mesh. Layers of joint compound, sometimes called mud, are typically spread with a drywall trowel or knife. This compound is also applied to any screw holes or defects. The compound is allowed to air dry and then typically sanded smooth before painting. Alternatively, for a better finish, the entire wall may be given a skim coat, a thin layer (about ) of finishing compound, to minimize the visual differences between the paper and mudded areas after painting.
Another similar skim coating process is called veneer plastering, although it is done slightly thicker (about ). Veneering uses a slightly different specialized setting compound ("finish plaster") that contains gypsum and lime putty. This application uses blueboard, which has specially treated paper to accelerate the setting of the gypsum plaster component. This setting compound has far less shrinkage than the air-dry compounds normally used in drywall, so it requires only one coat. Blueboard also has square edges rather than the tapered edges of standard drywall boards. The tapered drywall boards are used to countersink the tape in taped jointing, whereas the tape in veneer plastering is buried beneath a level surface. One-coat veneer plaster over dry board is an intermediate step between full multi-coat "wet" plaster and the joint-treatment-only finish given to "dry" wall.
Properties
Sound control
The method of installation and type of drywall can reduce sound transmission through walls and ceilings. Several builders' books state that thicker drywall reduces sound transmission, but engineering manuals recommend using multiple layers of drywall, sometimes of different thicknesses and glued together, or special types of drywall designed to reduce noise. Also important are the construction details of the framing with steel studs, wider stud spacing, double studding, insulation, and other details reducing sound transmission. Sound transmission class (STC) ratings can be increased from 33 for an ordinary stud-wall to as high as 59 with double drywall on both sides of a wood stud wall with resilient channels on one side and glass wool batt insulation between the studs.
Sound transmission may be slightly reduced using regular panels (with or without light-gauge resilient metal channels and/or insulation), but it is more effective to use two layers of drywall, sometimes in combination with other factors, or specially designed, sound-resistant drywall.
Water damage and molding
Drywall is highly vulnerable to moisture due to the inherent properties of the materials that constitute it: gypsum, paper, and organic additives and binders. Gypsum will soften with exposure to moisture and eventually turn into a gooey paste with prolonged immersion, such as during a flood. During such incidents, some, or all, of the drywall in an entire building will need to be removed and replaced. Furthermore, the paper facings and organic additives mixed with the gypsum core are food for mold.
The porosity of the board—introduced during manufacturing to reduce the board's weight, lowering construction time and transportation costs—enables water to rapidly reach the core through capillary action, where mold can grow inside. Water that enters a room from overhead may saturate the grooves immediately behind the joint tape where the drywall pieces meet, causing ceiling tape to separate from the ceiling. The drywall may also soften around the screws holding it in place, and with the aid of gravity, the weight of the water may cause the drywall to sag and eventually collapse, requiring replacement.
Drywall's paper facings are edible to termites, which can eat the paper if they infest a wall cavity covered with drywall. Once the paper backing has been eaten away, the painted surface crumbles to the touch. In addition to the need to patch the damaged surface and repaint, if enough of the paper has been eaten, the gypsum core can easily crack or crumble, and the drywall must be removed and replaced.
In many circumstances, especially when the drywall has been exposed to water or moisture for less than 48 hours, professional restoration experts can avoid the cost, inconvenience, and difficulty of removing and replacing the affected drywall. They use rapid drying techniques that eliminate the elements required to support microbial activity while restoring most or all of the drywall.
It is for these reasons that greenboard, a type of drywall with an outer face that is wax- and/or chemically coated to resist mold growth, and ideally cement board are used for rooms expected to have high humidity, primarily kitchens, bathrooms, and laundry rooms.
Other damage risks
Foam insulation and the gypsum core of sheetrock are easily chewed out by honeybees that have set up a stray nest in a building and want to enlarge their nest area.
Fire resistance
Some fire barrier walls are constructed of Type X drywall as a passive fire protection item. Gypsum contains the water of crystallization bound in the form of hydrates. When exposed to heat or fire, the resulting decomposition reaction releases water vapor and is endothermic (it absorbs thermal energy), which retards heat transfer until the water in the gypsum is gone. The fire-resistance rating of the fire barrier assembly is increased with additional layers of drywall, up to four hours for walls and three hours for floor/ceiling assemblies. Fire-rated assemblies constructed of drywall are documented in design or certification listing catalogues, including DIN 4102 Part 4 and the Canadian Building Code, Underwriters Laboratories and Underwriters Laboratories of Canada (ULC).
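A back-of-the-envelope estimate shows why this dehydration retards heat transfer. The sketch below counts only the heat needed to boil off the chemically bound water; the board mass per square metre is an assumed round number, so the result is a rough lower bound, not a rated value:

```python
# Lower-bound estimate of heat absorbed by the bound water in gypsum board.
GYPSUM_PER_M2 = 10.0    # kg of gypsum core per m^2 of board (assumed round value)
WATER_FRACTION = 0.209  # mass fraction of bound water in CaSO4·2H2O (36.03/172.17)
LATENT_HEAT = 2.26e6    # J/kg to vaporize water at 100 degrees C

bound_water = GYPSUM_PER_M2 * WATER_FRACTION  # kg of bound water per m^2 of board
energy_j = bound_water * LATENT_HEAT          # J/m^2, ignoring the heat of dehydration itself
print(f"~{bound_water:.1f} kg/m^2 of bound water absorbs at least {energy_j / 1e6:.1f} MJ/m^2")
```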
Tests result in code-recognized designs with assigned fire-resistance ratings. The resulting designs become part of the code and are not limited to use by any manufacturer. However, individual manufacturers may also have proprietary designs that they have had third-party tested and approved, provided that the material used in the field configuration can be demonstrated to meet the minimum requirements of Type X drywall and that sufficient layers and thicknesses are used.
Type X drywall
In the Type X gypsum board, special glass fibers are intermixed with the gypsum to reinforce the core of the panels. These fibers reduce the size of the cracks that form as the water is driven off, thereby extending the length of time the gypsum panels resist fire without failure.
Type C drywall
Type C gypsum panels provide stronger fire resistance than Type X. The core of Type C panels contains a higher density of glass fibers. The core of Type C panels also contains vermiculite which acts as a shrinkage-compensating additive that expands when exposed to elevated temperatures of a fire. This expansion occurs at roughly the same temperature as the calcination of the gypsum in the core, allowing the core of the Type C panels to remain dimensionally stable in a fire.
Waste
Because up to 12% of drywall is wasted during the manufacturing and installation processes and the drywall material is frequently not reused, disposal can become a problem. Some landfill sites have banned the dumping of drywall. Some manufacturers take back waste wallboard from construction sites and recycle it into new wallboard. Recycled paper is typically used during manufacturing. More recently, recycling at the construction site itself has been researched. There is potential for using crushed drywall to amend certain soils at building sites, such as sodic clay and silt mixtures (bay mud), as well as using it in compost. As of 2016, industry standards are being developed to ensure that when and if wallboard is taken back for recycling, quality and composition are maintained.
Market
North America
North America is one of the largest gypsum board users in the world, with a total wallboard plant capacity of per year, roughly half of the worldwide annual production capacity of . Moreover, the homebuilding and remodeling markets in North America in the late 1990s and early 2000s increased demand. The gypsum board market was one of the biggest beneficiaries of the housing boom as "an average new American home contains more than 7.31 metric tons of gypsum."
The introduction in March 2005 of the Clean Air Interstate Rule by the United States Environmental Protection Agency requires fossil-fuel power plants to "cut sulfur dioxide emissions by 73%" by 2018. The Clean Air Interstate Rule also requested that the power plants install new scrubbers (industrial pollution control devices) to remove sulfur dioxide present in the output waste gas. Scrubbers use the technique of flue-gas desulfurization (FGD), which produces synthetic gypsum as a usable by-product. In response to the new supply of this raw material, the gypsum board market was predicted to shift significantly. However, issues such as mercury release during calcining need to be resolved.
Controversies
High-sulfur drywall illness and corrosion issues
A substantial amount of defective drywall was imported into the United States from China and incorporated into tens of thousands of homes during rebuilding in 2006 and 2007 following Hurricane Katrina and in other places. Complaints included foul odour in the structure, health effects, and metal corrosion, all caused by the emission of sulfurous gases. The same drywall was sold in Asia without resulting problems, but US homes are built much more tightly than homes in China, with less ventilation. Volatile sulfur compounds, including hydrogen sulfide, have been detected as emissions from the imported drywall and may be linked to health problems. These compounds are emitted from many different types of drywall.
Several lawsuits are underway in many jurisdictions, but many of the sheets of drywall are simply marked "Made in China", making identification of the manufacturer difficult. An investigation by the Consumer Product Safety Commission (CPSC) was underway in 2009. In November 2009, the CPSC reported a "strong association" between Chinese drywall and corrosion of pipes and wires reported by thousands of homeowners in the United States. The issue was resolved in 2011: all drywall must now be tested for volatile sulfur, and any containing more than ten ppm cannot be sold in the US.
Variants
The following variants are available in Canada or the United States:
Regular white board, from thickness
Fire-resistant ("Type X"), different thicknesses and multiple layers of wallboard provide increased fire rating based on the time a specific wall assembly can withstand a standardized fire test. Often perlite, vermiculite, and boric acid are added to improve fire resistance.
Greenboard, the drywall containing an oil-based additive in the green-colored paper covering, provides moisture resistance. It is commonly used in washrooms and other areas expected to experience elevated humidity levels.
Blueboard, blue face paper forms a strong bond with a skim coat or a built-up plaster finish, providing water and mold resistance.
Cement board, which is more water-resistant than greenboard, for use in showers or sauna rooms, and as a base for ceramic tile.
Soundboard is made from wood fibers to increase the sound transmission class.
Soundproof drywall is a laminated drywall made with gypsum and other materials such as damping polymers to significantly increase the sound transmission class rating.
Mold-resistant, paperless drywall with fiberglass face
Enviroboard, a board made from recycled agricultural materials
Lead-lined drywall, a drywall used around radiological equipment.
Foil-backed drywall used as a vapor barrier.
Controlled density (CD), also called ceiling board, which is available only in thickness and is significantly stiffer than the regular whiteboard.
EcoRock, a drywall that uses a combination of 20 materials including recycled fly ash, slag, kiln dust and fillers and no starch cellulose; it is advertised as being environmentally friendly due to the use of recycled materials and an energy efficient process.
Gypsum "Firecode C". This board is similar in composition to Type X, except for more glass fibres and a form of the vermiculite used to reduce shrinkage. When exposed to high heat, the gypsum core shrinks, but this additive expands at about the same rate, so the gypsum core is more stable in a fire and remains in place even after the gypsum dries up.
Specifications
Australia and New Zealand
The term plasterboard is used in Australia and New Zealand. In Australia, the product is often called Gyprock, the name of the largest plasterboard manufacturer. In New Zealand it is also called Gibraltar board or Gib board, genericised from the registered trademark ("GIB") of the locally made product that dominates the local market. A specific type of Gibraltar board for use in wet conditions (such as bathrooms and kitchens) is known as AquaGib.
It is made in thicknesses of 10 mm, 13 mm, and 16 mm, and sometimes other thicknesses up to 25 mm. Panels are commonly sold in 1200 mm-wide sheets, which may be 1800, 2400, 3000, 4800, or 6000 mm in length. Sheets are usually secured to either timber or cold-formed steel frames anywhere from 150 to 300 mm centres along the beam and 400 to 600 mm across members.
In both countries, plasterboard has become a widely used replacement for scrim and sarking walls in renovating 19th- and early 20th-century buildings.
Canada and the United States
Drywall panels in Canada and the United States are made in widths of and varying lengths to suit the application. The most common width is 48 inches; however, 54-inch-wide panels are becoming more popular as taller ceilings become more common. Lengths up to are common; the most common is . Common thicknesses are ; thicknesses of are used in specific applications. In many parts of Canada, drywall is commonly referred to as Gyproc.
Europe
In Europe, most plasterboard is made in sheets wide; sheets wide are also made. Plasterboard wide is most commonly made in lengths; sheets of and longer also are common. Thicknesses of plasterboard available are .
Plasterboard is commonly made with one of three edge treatments: tapered edge, where the long edges of the board are tapered with a wide bevel at the front to allow jointing materials to be finished flush with the main board face; plain edge, used where the whole surface will receive a thin coating (skim coat) of finishing plaster; and beveled on all four sides, used in products specialized for roofing. Major UK manufacturers do not offer four-sided chamfered drywall for general use.
| Technology | Building materials | null |
331824 | https://en.wikipedia.org/wiki/Wrasse | Wrasse | The wrasses are a family, Labridae, of marine ray-finned fish, many of which are brightly colored. The family is large and diverse, with over 600 species in 81 genera, which are divided into nine subgroups or tribes.
They are typically small, most of them less than long, although the largest, the humphead wrasse, can measure up to . They are efficient carnivores, feeding on a wide range of small invertebrates. Many smaller wrasses follow the feeding trails of larger fish, picking up invertebrates disturbed by their passing. Juveniles of some representatives of the genera Bodianus, Epibulus, Cirrhilabrus, Oxycheilinus, and Paracheilinus hide among the tentacles of free-living mushroom corals such as Heliofungia actiniformis.
Taxonomy
Parrotfish were traditionally regarded as comprising their own family (Scaridae), but are now often treated as a subfamily (Scarinae) or tribe (Scarini) of the wrasses (Labridae), being nested deep within the wrasse phylogenetic tree. The odacine wrasses, traditionally classified as forming their own family, were found nested deep within the wrasse tribe Hypsigenyini, and most closely related to the tuskfishes.
Etymology
The word "wrasse" comes from the Cornish word wragh, a lenited form of gwragh, meaning an old woman or hag, via Cornish dialect wrath. It is related to the Welsh gwrach and Breton gwrac'h.
Genera
Description
Wrasses have protractile mouths, usually with separate jaw teeth that jut outwards. Many species can be readily recognized by their thick lips, the inside of which is sometimes curiously folded, a peculiarity which gave rise to the German name of "lip-fishes" (Lippfische), and the Dutch name of lipvissen. The dorsal fin has eight to 21 spines and six to 21 soft rays, usually running most of the length of the back. Wrasses are sexually dimorphic. Many species are capable of changing sex. Juveniles are a mix of males and females (known as initial-phase individuals), but the largest adults become territory-holding (terminal-phase) males.
The wrasses have become a primary study species in fish-feeding biomechanics due to their jaw structures. The nasal and mandibular bones are connected at their posterior ends to the rigid neurocranium, and the superior and inferior articulations of the maxilla are joined to the anterior tips of these two bones, respectively, creating a loop of four rigid bones connected by moving joints. This "four-bar linkage" has the property of allowing numerous arrangements to achieve a given mechanical result (fast jaw protrusion or a forceful bite), thus decoupling morphology from function. The actual morphology of wrasses reflects this, with many lineages displaying different jaw morphology that results in the same functional output in a similar or identical ecological niche.
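Since the jaw is modelled as a planar four-bar linkage, the decoupling of morphology from function is easy to demonstrate numerically. The following sketch is a generic textbook four-bar solver with hypothetical link lengths, not measured wrasse anatomy:

```python
# Planar four-bar linkage: fixed pivots at (0,0) and (L1,0); input link L2,
# coupler L3, output link L4. Returns the output-link angle (open configuration).
import numpy as np

def output_angle(L1, L2, L3, L4, theta2):
    ax, ay = L2 * np.cos(theta2), L2 * np.sin(theta2)  # end of the input link
    d = np.hypot(ax - L1, ay)                          # diagonal of the linkage
    # Law of cosines in the triangle (diagonal, coupler, output link).
    cos_g = (L4 ** 2 + d ** 2 - L3 ** 2) / (2 * L4 * d)
    gamma = np.arccos(np.clip(cos_g, -1.0, 1.0))
    delta = np.arctan2(ay, ax - L1)                    # diagonal direction at the output pivot
    return delta + gamma

# Instantaneous kinematic transmission (output rotation per unit input rotation),
# estimated numerically for two hypothetical link geometries.
for links in [(1.0, 0.4, 0.9, 0.6), (1.0, 0.5, 1.0, 0.75)]:
    t2, dt = np.radians(100.0), 1e-6
    kt = (output_angle(*links, t2 + dt) - output_angle(*links, t2)) / dt
    print(f"links {links}: transmission ratio ~ {kt:.2f}")
```

Searching over link lengths with a solver like this is how one finds distinct geometries that produce the same transmission ratio, which is the sense in which the four-bar decouples form from function.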
Distribution and habitat
Most wrasses inhabit the tropical and subtropical waters of the Atlantic, Indian, and Pacific Oceans, though some species live in temperate waters: the Ballan wrasse is found as far north as Norway. Wrasses are usually found in shallow-water habitats such as coral reefs and rocky shores, where they live close to the substrate.
Reproductive behavior
Most labrids are protogynous hermaphrodites within a haremic mating system. A good example of this reproductive behavior is seen in the California sheephead. Hermaphroditism allows for complex mating systems. Labroids exhibit three different mating systems: polygynous, lek-like, and promiscuous. Group spawning and pair spawning occur within mating systems. The type of spawning that occurs depends on male body size. Labroids typically exhibit broadcast spawning, releasing high numbers of planktonic eggs, which are broadcast by tidal currents; adult labroids have no interaction with offspring. Wrasses of a particular subgroup of the family Labridae, Labrini, do not exhibit broadcast spawning.
Sex change in wrasses is generally female-to-male, but experimental conditions have allowed for male-to-female sex change. Placing two male Labroides dimidiatus wrasses in the same tank results in the smaller of the two becoming female again. Additionally, while the individual to change sex is generally the largest female, evidence also exists of the largest female instead "choosing" to remain female in situations in which she can maximize her evolutionary fitness by refraining from changing sex.
Broodcare behavior of the tribe Labrini
The subgroup Labrini arose from a basal split within the family Labridae during the Eocene. Subgroup Labrini is composed of eight genera, wherein 15 of 23 species exhibit broodcare behavior, which ranges from simple to complex parental care of spawn; males build algae nests or crude cavities, ventilate eggs, and defend nests against conspecific males and predators. In species that express this behavior, eggs cannot survive without parental care. Species of the genera Symphodus, Centrolabrus, and Labrus exhibit broodcare behavior.
Sexual developmental systems
Wrasses exhibit three types of sexual development, depending on the species. Sex in this context refers to functional sex, i.e., the individual's role when mating. Some species show functional gonochorism, meaning that they are born functionally either male or female and remain so for their entire life; there is no sex change. Meanwhile, functionally hermaphroditic species exhibit sex change and are protogynous, meaning that individuals that are functionally female can become functionally male. These protogynous species are either monandric (all individuals are born functionally female, but can become functionally male) or diandric (individuals can be born either female or male, and individuals that are born female can become male).
Evolutionarily, wrasse lineages trend towards developing monandry. Monandric lineages rarely transition directly to diandry, instead passing through functional gonochorism first.
Potential tool use
Many species of wrasses have been recorded using large rocks or hard coral as "anvils", upon which they smash open hard-shelled prey items. At least some of these species can remember to use a particular rock or coral repeatedly for this purpose. This behaviour usually involves invertebrate prey such as clams, sea urchins, and crabs, but on one occasion, a blue tuskfish was filmed smashing a young green sea turtle on an anvil.
Twenty-one species in eight genera have been documented exhibiting this behaviour, including Choerodon (C. anchorago, C. cyanodus, C. graphicus, C. schoenleinii), Coris (C. aygula, C. bulbifrons, C. julis, C. sandeyeri), Cheilinus (C. fasciatus, C. lunulatus, C. trilobatus), Thalassoma (T. hardwicke, T. jansenii, T. lunare, T. lutescens, T. pavo), Symphodus (S. mediterraneus), Halichoeres (H. garnoti, H. hortulanus), Bodianus (B. pulcher), and Pseudolabrus (P. luculentus).
Cleaner wrasse
Cleaner wrasses are the best-known of the cleaner fish. They live in a cleaning symbiosis with larger, often predatory, fish, grooming them and benefiting by consuming what they remove. "Client" fish congregate at wrasse "cleaning stations" and wait for the cleaner fish to remove gnathiid parasites, the cleaners even swimming into their open mouths and gill cavities to do so.
Cleaner wrasses are best known for feeding on dead tissue, scales, and ectoparasites, although they are also known to 'cheat', consuming healthy tissue and mucus, which is energetically costly for the client fish to produce. The bluestreak cleaner wrasse, Labroides dimidiatus, is one of the most common cleaners found on tropical reefs. Few cleaner wrasses have been observed being eaten by predators, possibly because parasite removal is more important for predator survival than the short-term gain of eating the cleaner.
In a 2019 study, cleaner wrasses passed the mirror test, the first fish to do so. However, the test's inventor, American psychologist Gordon G. Gallup, has said that the fish were most likely trying to scrape off a perceived parasite on another fish and that they did not demonstrate self-recognition. The authors of the study retorted that because the fish checked themselves in the mirror before and after the scraping, the fish had self-awareness and recognized that their reflections belonged to their own bodies. In a 2024 study, "mirror-naive" bluestreak cleaner wrasse were reported to initially show aggression to wrasse photographs whether sized 10% larger or 10% smaller than themselves. However, after viewing their reflections in a mirror, they avoided confronting photographs 10% larger than they were.
Significance to humans
In the Western Atlantic coastal region of North America, the most common food species for indigenous humans was the tautog, a species of wrasse. Wrasses today are commonly found in both public and home aquaria. Some species are small enough to be considered reef safe. They may also be employed as cleaner fish to combat sea-lice infestations in salmon farms. Commercial farming of cleaner wrasse as "lice busters" for sea-lice pest control in salmon farming has developed in Scotland, with apparent commercial benefit and viability.
Parasites
Like all fish, labrids are hosts to a number of parasites. A list of 338 parasite taxa from 127 labrid fish species was provided by Muñoz and Diaz in 2015. An example is the nematode Huffmanela ossicola.
| Biology and health sciences | Acanthomorpha | null |
331826 | https://en.wikipedia.org/wiki/Prime-counting%20function | Prime-counting function | In mathematics, the prime-counting function is the function counting the number of prime numbers less than or equal to some real number $x$. It is denoted by $\pi(x)$ (unrelated to the number $\pi$).
A symmetric variant seen sometimes is $\pi_0(x)$, which is equal to $\pi(x) - \tfrac{1}{2}$ if $x$ is exactly a prime number, and equal to $\pi(x)$ otherwise. That is, the number of prime numbers less than $x$, plus half if $x$ equals a prime.
Growth rate
Of great interest in number theory is the growth rate of the prime-counting function. It was conjectured in the end of the 18th century by Gauss and by Legendre to be approximately
$$\frac{x}{\log x}$$
where $\log$ is the natural logarithm, in the sense that
$$\lim_{x\to\infty} \frac{\pi(x)}{x / \log x} = 1.$$
This statement is the prime number theorem. An equivalent statement is
$$\lim_{x\to\infty} \frac{\pi(x)}{\operatorname{li}(x)} = 1,$$
where $\operatorname{li}$ is the logarithmic integral function. The prime number theorem was first proved in 1896 by Jacques Hadamard and by Charles de la Vallée Poussin independently, using properties of the Riemann zeta function introduced by Riemann in 1859. Proofs of the prime number theorem not using the zeta function or complex analysis were found around 1948 by Atle Selberg and by Paul Erdős (for the most part independently).
More precise estimates
In 1899, de la Vallée Poussin proved that
$$\pi(x) = \operatorname{li}(x) + O\left( x e^{-a\sqrt{\log x}} \right)$$
for some positive constant $a$. Here, $O(\cdot)$ is the big O notation.
More precise estimates of $\pi(x)$ are now known. For example, in 2002, Kevin Ford proved that
$$\pi(x) = \operatorname{li}(x) + O\left( x \exp\left( -0.2098 (\log x)^{3/5} (\log \log x)^{-1/5} \right) \right).$$
Mossinghoff and Trudgian proved an explicit upper bound for the difference between $\pi(x)$ and $\operatorname{li}(x)$:
$$\left| \pi(x) - \operatorname{li}(x) \right| \le 0.2593 \, \frac{x}{(\log x)^{3/4}} \exp\left( -\sqrt{\frac{\log x}{6.315}} \right) \quad \text{for } x \ge 229.$$
For values of $x$ that are not unreasonably large, $\operatorname{li}(x)$ is greater than $\pi(x)$. However, $\pi(x) - \operatorname{li}(x)$ is known to change sign infinitely many times. For a discussion of this, see Skewes' number.
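A short numerical illustration (not part of the original article) compares the three quantities discussed above at powers of ten; primepi is sympy's prime counter and li is mpmath's logarithmic integral:

```python
from sympy import primepi
from mpmath import li, log

# Compare pi(x) with x/log(x) and li(x); note that li(x) stays above pi(x)
# throughout this modest range, as described above.
for k in range(1, 9):
    x = 10 ** k
    print(f"10^{k}: pi(x) = {int(primepi(x)):>9}   "
          f"x/log x = {float(x / log(x)):>12.1f}   li(x) = {float(li(x)):>12.1f}")
```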
Exact form
For $x > 1$ let $\pi_0(x) = \pi(x) - \tfrac{1}{2}$ when $x$ is a prime number, and $\pi_0(x) = \pi(x)$ otherwise. Bernhard Riemann, in his work On the Number of Primes Less Than a Given Magnitude, proved that $\pi_0(x)$ is equal to
$$\pi_0(x) = \operatorname{R}(x) - \sum_{\rho} \operatorname{R}(x^{\rho})$$
where
$$\operatorname{R}(x) = \sum_{n=1}^{\infty} \frac{\mu(n)}{n} \operatorname{li}\left( x^{1/n} \right),$$
$\mu(n)$ is the Möbius function, $\operatorname{li}(x)$ is the logarithmic integral function, $\rho$ indexes every zero of the Riemann zeta function, and $\operatorname{li}(x^{\rho/n})$ is not evaluated with a branch cut but instead considered as $\operatorname{Ei}(\tfrac{\rho}{n} \log x)$ where $\operatorname{Ei}$ is the exponential integral. If the trivial zeros are collected and the sum is taken only over the non-trivial zeros $\rho$ of the Riemann zeta function, then $\pi_0(x)$ may be approximated by
$$\pi_0(x) \approx \operatorname{R}(x) - \sum_{\rho} \operatorname{R}(x^{\rho}) - \frac{1}{\log x} + \frac{1}{\pi} \arctan\frac{\pi}{\log x}.$$
The Riemann hypothesis suggests that every such non-trivial zero lies along $\operatorname{Re}(s) = \tfrac{1}{2}$.
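The leading term R(x) of Riemann's formula can be evaluated numerically. The sketch below is an illustrative implementation that truncates the series once x^(1/n) drops below 2 (a common computational shortcut, since li misbehaves near 1) and omits the oscillatory terms from the zeta zeros:

```python
from sympy import factorint, primepi
from mpmath import li, mpf

def mobius(n):
    """Moebius function: 0 if n has a squared prime factor,
    else (-1) to the number of distinct prime factors."""
    factors = factorint(n)
    if any(e > 1 for e in factors.values()):
        return 0
    return -1 if len(factors) % 2 else 1

def riemann_R(x):
    """Truncated R(x) = sum_n mu(n)/n * li(x^(1/n)), stopping once x^(1/n) < 2."""
    x, total, n = mpf(x), mpf(0), 1
    while x ** (mpf(1) / n) >= 2:
        if mobius(n):
            total += mpf(mobius(n)) / n * li(x ** (mpf(1) / n))
        n += 1
    return total

for k in range(2, 7):
    x = 10 ** k
    print(f"10^{k}: pi(x) = {int(primepi(x)):>6}   R(x) ~ {float(riemann_R(x)):>9.1f}")
```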
Table of $\pi(x)$, $x / \log x$, and $\operatorname{li}(x)$
The table shows how the three functions $\pi(x)$, $x / \log x$, and $\operatorname{li}(x)$ compare at powers of 10. | Mathematics | Specific functions | null
331884 | https://en.wikipedia.org/wiki/Particle%20horizon | Particle horizon | The particle horizon (also called the cosmological horizon, the comoving horizon (in Scott Dodelson's text), or the cosmic light horizon) is the maximum distance from which light from particles could have traveled to the observer in the age of the universe. Much like the concept of a terrestrial horizon, it represents the boundary between the observable and the unobservable regions of the universe, so its distance at the present epoch defines the size of the observable universe. Due to the expansion of the universe, it is not simply the age of the universe times the speed of light (approximately 13.8 billion light-years), but rather the speed of light times the conformal time. The existence, properties, and significance of a cosmological horizon depend on the particular cosmological model.
Conformal time and the particle horizon
In terms of comoving distance, the particle horizon is equal to the conformal time $\eta$ that has passed since the Big Bang, times the speed of light $c$. In general, the conformal time at a certain time $t$ is given by
$$\eta = \int_0^{t} \frac{dt'}{a(t')},$$
where $a(t)$ is the (dimensionless) scale factor of the Friedmann–Lemaître–Robertson–Walker metric, and we have taken the Big Bang to be at $t = 0$. By convention, a subscript 0 indicates "today", so that the conformal time today is $\eta(t_0) = \eta_0$. Note that the conformal time is not the age of the universe, which is estimated at around 13.8 billion years. Rather, the conformal time is the amount of time it would take a photon to travel from where we are located to the furthest observable distance, provided the universe ceased expanding. As such, $\eta_0$ is not a physically meaningful time (this much time has not yet actually passed); though, as we will see, the particle horizon with which it is associated is a conceptually meaningful distance.
The particle horizon recedes constantly as time passes and the conformal time grows. As such, the observed size of the universe always increases. Since proper distance at a given time is just comoving distance times the scale factor (with comoving distance normally defined to be equal to proper distance at the present time, so $a(t_0) = 1$ at present), the proper distance to the particle horizon at time $t$ is given by
$$H_p(t) = a(t) \int_0^{t} \frac{c\, dt'}{a(t')} = a(t)\, c\, \eta(t)$$
and for today
$$H_p(t_0) = c\, \eta_0.$$
Evolution of the particle horizon
In this section we consider the FLRW cosmological model. In that context, the universe can be approximated as composed by non-interacting constituents, each one being a perfect fluid with density $\rho_i$, partial pressure $p_i$ and state equation $p_i = w_i \rho_i$, such that they add up to the total density $\rho$ and total pressure $p$. Let us now define the following functions:
Hubble function $H = \frac{\dot{a}}{a}$
The critical density $\rho_c = \frac{3 H^2}{8 \pi G}$
The i-th dimensionless energy density $\Omega_i = \frac{\rho_i}{\rho_c}$
The dimensionless energy density $\Omega = \frac{\rho}{\rho_c} = \sum_i \Omega_i$
The redshift $z$ given by the formula $1 + z = \frac{a_0}{a(t)}$
Any function with a zero subscript denotes the function evaluated at the present time $t_0$ (or equivalently $z = 0$). The last term can be taken to be $\Omega_k$, the curvature contribution, with state equation $w_k = -1/3$. It can be proved that the Hubble function is given by
$$H(z) = H_0 \sqrt{\sum_i \Omega_{i0} (1+z)^{n_i}}$$
where the dilution exponent $n_i = 3(1 + w_i)$. Notice that the addition ranges over all possible partial constituents and in particular there can be countably infinitely many. With this notation we have:
$$H_p(z) = \frac{c}{1+z} \int_z^{z_{max}} \frac{dz'}{H(z')}$$
where $z_{max}$ is the largest redshift (possibly infinite). The evolution of the particle horizon for an expanding universe ($\dot{a} > 0$) is:
$$\frac{dH_p}{dt} = H_p(z)\, H(z) + c$$
where $c$ is the speed of light and can be taken to be 1 (natural units). Notice that the derivative is taken with respect to the FLRW time $t$, while the functions are evaluated at the redshift $z$, which are related as stated before. We have an analogous but slightly different result for the event horizon.
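As a numerical illustration of the formulas above (not part of the original article), the particle horizon today can be evaluated in a flat Lambda-CDM model; H0 and the density parameters are assumed round values, and radiation is neglected since it matters only very near the Big Bang:

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299_792.458            # speed of light, km/s
H0 = 67.7                       # Hubble constant, km/s/Mpc (assumed)
OMEGA_M, OMEGA_L = 0.31, 0.69   # matter and dark-energy densities (assumed)

def hubble(z):
    # H(z) = H0 * sqrt(sum_i Omega_i0 (1+z)^n_i): n = 3 for matter, 0 for Lambda.
    return H0 * np.sqrt(OMEGA_M * (1.0 + z) ** 3 + OMEGA_L)

# Particle horizon today: c times the integral of dz / H(z) from 0 to infinity.
integral, _ = quad(lambda z: 1.0 / hubble(z), 0.0, np.inf)
horizon_mpc = C_KM_S * integral
print(f"particle horizon today: ~{horizon_mpc:,.0f} Mpc (~{horizon_mpc * 3.2616e-3:.0f} Gly)")
```

Under these assumptions the result lands near the commonly quoted figure of roughly 46 billion light-years for the radius of the observable universe.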
Horizon problem
The concept of a particle horizon can be used to illustrate the famous horizon problem, which is an unresolved issue associated with the Big Bang model. Extrapolating back to the time of recombination when the cosmic microwave background (CMB) was emitted, we obtain a particle horizon of about
which corresponds to a proper size at that time of:
Since we observe the CMB to be emitted essentially from our particle horizon (), our expectation is that parts of the cosmic microwave background (CMB) that are separated by about a fraction of a great circle across the sky of
(an angular size of ) should be out of causal contact with each other. That the entire CMB is in thermal equilibrium and approximates a blackbody so well is therefore left unexplained by the standard picture of how the expansion of the universe proceeds. The most popular resolution to this problem is cosmic inflation.
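The mismatch can be made concrete with the same machinery as the previous sketch, now with an assumed radiation term (which dominates near the Big Bang); all parameter values are illustrative:

```python
import numpy as np
from scipy.integrate import quad

H0, OM, OL, OR = 67.7, 0.31, 0.69, 9e-5  # assumed flat Lambda-CDM plus radiation
Z_REC = 1100                             # approximate redshift of recombination

def hubble(z):
    return H0 * np.sqrt(OR * (1 + z) ** 4 + OM * (1 + z) ** 3 + OL)

inv_h = lambda z: 1.0 / hubble(z)
horizon_at_rec, _ = quad(inv_h, Z_REC, np.inf)  # comoving horizon at recombination
distance_to_cmb, _ = quad(inv_h, 0.0, Z_REC)    # comoving distance to the CMB

# Angle subtended today by a causally connected patch at recombination.
print(f"causal patch spans ~{np.degrees(horizon_at_rec / distance_to_cmb):.1f} degrees")
```

On these assumptions a causal patch spans only on the order of a degree, while the CMB temperature is uniform to about one part in 100,000 across the whole sky.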
| Physical sciences | Physical cosmology | Astronomy |
331907 | https://en.wikipedia.org/wiki/Archerfish | Archerfish | The archerfish (also known as spinner fish or archer fish) or Toxotidae are a monotypic family (although some include a second genus) of perciform tropical fish known for their unique predation technique of "shooting down" land-based insects and other small prey with jets of water spit from their specialized mouths. The family is small, consisting of about ten species, most of them placed in the single genus Toxotes. Most archerfish live in freshwater streams, ponds and wetlands, but two or three species are euryhaline, inhabiting both fresh and brackish water habitats such as estuaries and mangroves. They can be found from India, Bangladesh and Sri Lanka, through Southeast Asia, to Melanesia and Northern Australia.
Archerfish have deep and laterally compressed bodies, with the dorsal fin set far back and the profile a straight line from dorsal fin to mouth. The mouth is protractile, and the lower jaw juts out. Sizes are fairly small, typically up to about , but T. chatareus can reach .
Archerfish are popular exotic fish for aquaria, but are difficult to feed and maintain by average fishkeepers since they prefer live prey over typical fish foods.
Capture of prey
Archerfish are remarkably accurate in their shooting; an adult fish almost always hits the target on the first shot. Although it is presumed that all archerfish species do this, it has only been confirmed from T. blythii, T. chatareus and T. jaculatrix. They can bring down insects and other prey up to above the water's surface. This is partially due to their good eyesight, but also to their ability to compensate for the refraction of light as it passes through the air-water interface when aiming at their prey. They typically spit at prey at a mean angle of about 74° from the horizontal but can still aim accurately when spitting at angles between 45° and 110°.
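The refraction correction can be illustrated with Snell's law. This is a minimal geometric sketch, not a model of the fish's actual neural computation:

```python
import numpy as np

N_WATER = 1.33  # refractive index of water; air taken as 1.0

def true_air_angle(apparent_angle_deg):
    """Map the underwater viewing angle (measured from the vertical) to the
    prey's true angle above the surface, using n_w * sin(w) = n_air * sin(a)."""
    s = N_WATER * np.sin(np.radians(apparent_angle_deg))
    if s >= 1.0:
        raise ValueError("beyond the critical angle: total internal reflection")
    return np.degrees(np.arcsin(s))

# Prey sighted 30 degrees from the vertical underwater actually sits about
# 41.7 degrees from the vertical above the surface.
print(f"{true_air_angle(30.0):.1f} degrees")
```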
When an archerfish selects its prey, it rotates its eye so that the image of the prey falls on a particular portion of the eye in the ventral temporal periphery of the retina, and its lips just break the surface, squirting a jet of water at its victim. The archerfish does this by pressing its tongue against a small groove in the roof of its mouth to form a narrow channel. It then fires by contracting its gill covers and forcing water through the channel, shooting a stream that, shaped by its mouth parts, travels faster at the rear than at the front. This speed differential causes the stream to become a blob directly before impact as the slower leading water is overtaken by the faster trailing water, and it is varied by the fish to account for differences in range. This also makes the archerfish one of the few animals that both make and use tools, as it not only utilises the water but shapes it to make it more useful. They are persistent and will make multiple shots if the first one fails.
Young archerfish start shooting when they are about long but are inaccurate at first and must learn from experience. During this learning period, they hunt in small schools. This way, the probability is enhanced that at least one jet will hit its target. A 2006 experimental study found that archerfish appear to benefit from observational learning by watching a performing group member shoot, without having to practice. However, little of their social behaviour is currently known beyond the fact that archerfish are sensitive to, and make changes to, their shooting behaviour when conspecifics are visible to them. This is probably a result of the potential threat of kleptoparasitism that other archerfish represent to a shooting fish.
An archerfish will often leap out of the water and grab an insect in its mouth if it happens to be within reach. Individuals typically prefer to remain close to the surface of the water.
New research has found that archerfish also use jets to hunt underwater prey, such as those embedded in silt. It is not known whether they learned aerial or underwater shooting first, but the two techniques may have evolved in parallel, as improvements in one can be adapted to the other. This makes it an example of exaptation.
Species
There are 9 valid species, 8 in the genus Toxotes:
Protoxotes lorentzi Weber, 1910 - primitive archerfish
Toxotes blythii Boulenger, 1892 - clouded archerfish, zebra archerfish
Toxotes carpentariensis Castelnau, 1878
Toxotes chatareus (Hamilton, 1822) - largescale archerfish, common archerfish
Toxotes jaculatrix (Pallas, 1767) - banded archerfish
Toxotes kimberleyensis Allen, 2004 - Kimberley archerfish, western archerfish
Toxotes microlepis Günther, 1860 - smallscale archerfish
Toxotes oligolepis Bleeker, 1876 - big scale archerfish
Toxotes sundaicus Kottelat & Tan, 2018
| Biology and health sciences | Acanthomorpha | Animals |
331951 | https://en.wikipedia.org/wiki/Fish%20anatomy | Fish anatomy | Fish anatomy is the study of the form or morphology of fish. It can be contrasted with fish physiology, which is the study of how the component parts of fish function together in the living fish. In practice, fish anatomy and fish physiology complement each other, the former dealing with the structure of a fish, its organs or component parts and how they are put together, such as might be observed on the dissecting table or under the microscope, and the latter dealing with how those components function together in living fish.
The anatomy of fish is often shaped by the physical characteristics of water, the medium in which fish live. Water is much denser than air, holds a relatively small amount of dissolved oxygen, and absorbs more light than air does. The body of a fish is divided into a head, trunk and tail, although the divisions between the three are not always externally visible. The skeleton, which forms the support structure inside the fish, is either made of cartilage (cartilaginous fish) or bone (bony fish). The main skeletal element is the vertebral column, composed of articulating vertebrae which are lightweight yet strong. The ribs attach to the spine and there are no limbs or limb girdles. The main external features of the fish, the fins, are composed of either bony or soft spines called rays which, with the exception of the caudal fins, have no direct connection with the spine. They are supported by the muscles which compose the main part of the trunk.
The heart has two chambers and pumps the blood through the respiratory surfaces of the gills and then around the body in a single circulatory loop. The eyes are adapted for seeing underwater and have only local vision. There is an inner ear but no external or middle ear. Low-frequency vibrations are detected by the lateral line system of sense organs that run along the length of the sides of fish, which responds to nearby movements and to changes in water pressure.
Sharks and rays are basal fish with numerous primitive anatomical features similar to those of ancient fish, including skeletons composed of cartilage. Their bodies tend to be dorso-ventrally flattened, and they usually have five pairs of gill slits and a large mouth set on the underside of the head. The dermis is covered with separate dermal placoid scales. They have a cloaca into which the urinary and genital passages open, but not a swim bladder. Cartilaginous fish produce a small number of large yolky eggs. Some species are ovoviviparous, having the young develop internally, but others are oviparous and the larvae develop externally in egg cases.
The bony fish lineage shows more derived anatomical traits, often with major evolutionary changes from the features of ancient fish. They have a bony skeleton, are generally laterally flattened, have five pairs of gills protected by an operculum, and a mouth at or near the tip of the snout. The dermis is covered with overlapping scales. Bony fish have a swim bladder which helps them maintain a constant depth in the water column, but not a cloaca. They mostly spawn a large number of small eggs with little yolk which they broadcast into the water column.
Body
In many respects, fish anatomy is different from mammalian anatomy. However, it still shares the same basic body plan from which all vertebrates have evolved: a notochord, rudimentary vertebrae, and a well-defined head and tail.
Fish have a variety of different body plans. At the broadest level, their body is divided into the head, trunk, and tail, although the divisions are not always externally visible. The body is often fusiform, a streamlined body plan often found in fast-moving fish. Some species may be filiform (eel-shaped) or vermiform (worm-shaped). Fish are often either compressed (laterally thin and tall) or depressed (dorso-ventrally flattened).
Skeleton
There are two different skeletal types: the exoskeleton, which is the stable outer shell of an organism, and the endoskeleton, which forms the support structure inside the body. The skeleton of the fish is made of either cartilage (cartilaginous fishes) or bone (bony fishes). The endoskeleton of the fish is made up of two main components: the axial skeleton consisting of the skull and vertebral column, and the appendicular skeleton supporting the fins. The fins are made up of bony fin rays and, except for the caudal fin, have no direct connection with the spine. They are supported only by the muscles. The ribs attach to the spine.
Bones are rigid organs that form part of the endoskeleton of vertebrates. They function to move, support, and protect the various organs of the body, produce red and white blood cells and store minerals. Bone tissue is a type of dense connective tissue. Bones come in a variety of shapes and have a complex internal and external structure. They are lightweight, yet strong and hard, in addition to fulfilling their many other biological functions.
Vertebrae
Fish are vertebrates. All vertebrates are built along the basic chordate body plan: a stiff rod running through the length of the animal (vertebral column or notochord), with a hollow tube of nervous tissue (the spinal cord) above it and the gastrointestinal tract below. In all vertebrates, the mouth is found at, or right below, the anterior end of the animal, while the anus opens to the exterior before the end of the body. The remaining part of the body beyond the anus forms a tail with vertebrae and the spinal cord, but no gut.
The defining characteristic of a vertebrate is the vertebral column, in which the notochord (a stiff rod of uniform composition) found in all chordates has been replaced by a segmented series of stiffer elements (vertebrae) separated by mobile joints (intervertebral discs, derived embryonically and evolutionarily from the notochord). However, a few fish have secondarily lost this anatomy, retaining the notochord into adulthood, such as the sturgeon.
The vertebral column consists of a centrum (the central body or spine of the vertebra), vertebral arches which protrude from the top and bottom of the centrum, and various processes which project from the centrum or arches. An arch extending from the top of the centrum is called a neural arch, while the haemal arch or chevron is found underneath the centrum in the caudal vertebrae of fish. The centrum of a fish is usually concave at each end (amphicoelous), which limits the motion of the fish. In contrast, the centrum of a mammal is flat at each end (acoelous), a shape that can support and distribute compressive forces.
The vertebrae of lobe-finned fishes consist of three discrete bony elements. The vertebral arch surrounds the spinal cord, and is broadly similar in form to that found in most other vertebrates. Just beneath the arch lies the small plate-like pleurocentrum, which protects the upper surface of the notochord. Below that, a larger arch-shaped intercentrum protects the lower border. Both of these structures are embedded within a single cylindrical mass of cartilage. A similar arrangement was found in primitive tetrapods, but in the evolutionary line that led to reptiles, mammals and birds, the intercentrum became partially or wholly replaced by an enlarged pleurocentrum, which in turn became the bony vertebral body.
In most ray-finned fishes, including all teleosts, these two structures are fused with and embedded within a solid piece of bone superficially resembling the vertebral body of mammals. In living amphibians, there is simply a cylindrical piece of bone below the vertebral arch, with no trace of the separate elements present in the early tetrapods.
In cartilaginous fish such as sharks, the vertebrae consist of two cartilaginous tubes. The upper tube is formed from the vertebral arches, but also includes additional cartilaginous structures filling in the gaps between the vertebrae, enclosing the spinal cord in an essentially continuous sheath. The lower tube surrounds the notochord and has a complex structure, often including multiple layers of calcification.
Lampreys have vertebral arches, but nothing resembling the vertebral bodies found in all higher vertebrates. Even the arches are discontinuous, consisting of separate pieces of arch-shaped cartilage around the spinal cord in most parts of the body, changing to long strips of cartilage above and below in the tail region. Hagfishes lack a true vertebral column, but a few tiny neural arches are present in the tail. Hagfishes do, however, possess a cranium. For this reason, hagfishes have sometimes been excluded from Vertebrata in the past, and instead placed as a sister group of vertebrates within the taxon "Craniata". Molecular analyses since 1992 have shown that hagfishes are the sister group of lampreys within the clade Cyclostomi, and therefore are vertebrates in a phylogenetic sense.
Head
The head or skull includes the skull roof (a set of bones covering the brain, eyes and nostrils), the snout (from the eye to the forward-most point of the upper jaw), the operculum or gill cover (absent in sharks and jawless fish), and the cheek, which extends from the eye to the preopercle. The operculum and preopercle may or may not have spines. In sharks and some primitive bony fish the spiracle, a small extra gill opening, is found behind each eye.
The skull in fishes is formed from a series of only loosely connected bones. Jawless fish and sharks only possess a cartilaginous endocranium, with the upper and lower jaws of cartilaginous fish being separate elements not attached to the skull. Bony fishes have additional dermal bone, forming a more or less coherent skull roof in lungfish and holostean fish. The lower jaw defines a chin.
In lampreys, the mouth is formed into an oral disk. In most jawed fish, however, there are three general configurations. The mouth may be on the forward end of the head (terminal), may be upturned (superior), or may be turned downwards or on the bottom of the fish (subterminal or inferior). The mouth may be modified into a suckermouth adapted for clinging onto objects in fast-moving water.
The simpler structure is found in jawless fish, in which the cranium is represented by a trough-like basket of cartilaginous elements only partially enclosing the brain and associated with the capsules for the inner ears and the single nostril. Distinctively, these fish have no jaws.
Cartilaginous fish such as sharks also have simple, and presumably primitive, skull structures. The cranium is a single structure forming a case around the brain, enclosing the lower surface and the sides, but always at least partially open at the top as a large fontanelle. The most anterior part of the cranium includes a forward plate of cartilage, the rostrum, and capsules to enclose the olfactory organs. Behind these are the orbits, and then an additional pair of capsules enclosing the structure of the inner ear. Finally, the skull tapers towards the rear, where the foramen magnum lies immediately above a single condyle, articulating with the first vertebra. Smaller foramina for the cranial nerves can be found at various points throughout the cranium. The jaws consist of separate hoops of cartilage, almost always distinct from the cranium proper.
In the ray-finned fishes, there has also been considerable modification from the primitive pattern. The roof of the skull is generally well formed, and although the exact relationship of its bones to those of tetrapods is unclear, they are usually given similar names for convenience. Other elements of the skull, however, may be reduced; there is little cheek region behind the enlarged orbits, and little if any bone in between them. The upper jaw is often formed largely from the premaxilla, with the maxilla itself located further back, and an additional bone, the symplectic, linking the jaw to the rest of the cranium.
Although the skulls of fossil lobe-finned fish resemble those of the early tetrapods, the same cannot be said of those of the living lungfishes. The skull roof is not fully formed, and consists of multiple, somewhat irregularly shaped bones with no direct relationship to those of tetrapods. The upper jaw is formed from the pterygoid bones and vomers alone, all of which bear teeth. Much of the skull is formed from cartilage, and its overall structure is reduced.
The head may have several fleshy structures known as barbels, which may be very long and resemble whiskers. Many fish species also have a variety of protrusions or spines on the head. The nostrils or nares of almost all fishes do not connect to the oral cavity, but are pits of varying shape and depth.
External organs
Jaw
The vertebrate jaw probably originally evolved in the Silurian period and appeared in the Placoderm fish which further diversified in the Devonian. Jaws are thought to derive from the pharyngeal arches that support the gills in fish. The two most anterior of these arches are thought to have become the jaw itself (see hyomandibula) and the hyoid arch, which braces the jaw against the braincase and increases mechanical efficiency. While there is no fossil evidence directly to support this theory, it makes sense in light of the numbers of pharyngeal arches that are visible in extant jawed animals (the gnathostomes), which have seven arches, and primitive jawless vertebrates (the Agnatha), which have nine.
It is thought that the original selective advantage garnered by the jaw was not related to feeding, but to increase respiration efficiency. The jaws were used in the buccal pump (observable in modern fish and amphibians) that pumps water across the gills of fish or air into the lungs of amphibians. Over evolutionary time, the more familiar use of jaws in feeding was selected for and became a very important function in vertebrates.
Linkage systems are widely distributed in animals. The most thorough overview of the different types of linkages in animals has been provided by M. Muller, who also designed a new classification system which is especially well suited for biological systems. Linkage mechanisms are especially frequent and various in the head of bony fishes, such as wrasses, which have evolved many specialized aquatic feeding mechanisms. Especially advanced are the linkage mechanisms of jaw protrusion. For suction feeding a system of connected four-bar linkages is responsible for the coordinated opening of the mouth and 3-D expansion of the buccal cavity. Other linkages are responsible for protrusion of the premaxilla.
Eyes
Fish eyes are similar to terrestrial vertebrates like birds and mammals, but have a more spherical lens. Their retinas generally have both rod cells and cone cells (for scotopic and photopic vision), and most species have colour vision. Some fish can see ultraviolet and some can see polarized light. Amongst jawless fish, the lamprey has well-developed eyes, while the hagfish has only primitive eyespots. The ancestors of modern hagfish, thought to be protovertebrate, were evidently pushed to very deep, dark waters, where they were less vulnerable to sighted predators and where it is advantageous to have a convex eyespot, which gathers more light than a flat or concave one. Unlike humans, fish normally adjust focus by moving the lens closer to or further from the retina.
Gills
Skin
The skin of a fish is part of the integumentary system, which contains two layers: the epidermis and the dermis. The epidermis is derived from the ectoderm and is the most superficial layer, consisting entirely of live cells with only minimal quantities of keratin. It is generally permeable. The dermis is derived from the mesoderm and consists largely of connective tissue composed mostly of collagen fibers, as found in bony fish. Some fish species have scales that emerge from the dermis, penetrate the thin basement membrane that lies between the epidermis and dermis, and become externally visible, covering the epidermis layer.
The sweat glands and sebaceous glands found in mammalian skin are both unique to mammals, but other types of skin gland are found in fish. The epidermis of fish typically contains numerous individual mucus-secreting skin cells called goblet cells that release a slimy substance onto the surface of the skin; this aids in insulation and protection from bacterial infection. Whereas the skin colour of many mammals is due to melanin in the epidermis, in fish the colour of the skin is largely due to chromatophores in the dermis, which, in addition to melanin, may contain guanine or carotenoid pigments. Many species, such as flounders, change the colour of their skin by adjusting the relative size of their chromatophores. Some fishes may also have venom glands, photophores, or cells that produce a more watery serous fluid in the dermis.
Scales
Also part of the fish's integumentary system are the scales that cover the outer body of many jawed fish. The most familiar scales originate from the dermis (mesoderm) and may be similar in structure to teeth. Some species are covered by scutes instead, while others have no scales covering the outer body at all.
There are four principal types of fish scales that originate from the dermis.
Placoid scales, also called dermal denticles, are pointed scales. They are similar in structure to teeth, being made of dentin and covered by enamel. They are typical of cartilaginous fish (although chimaeras have them only on their claspers).
Ganoid scales are flat, basal-looking scales. Derived from placoid scales, they have a thick coat of enamel but lack the underlying layer of dentin. These scales cover the fish's body with little overlapping. They are typical of gars and bichirs.
Cycloid scales are small, oval-shaped scales with growth rings like the rings of a tree. They lack enamel, dentin, and a vascular bone layer. Bowfin and remora have cycloid scales.
Ctenoid scales are similar to cycloid scales, also having growth rings and lacking enamel, dentin, and a vascular bone layer. They are distinguished by spines or projections along one edge. Halibut have this type of scale.
Lateral line
The lateral line is a sense organ used to detect movement and vibration in the surrounding water. For example, fish can use their lateral line system to follow the vortices produced by fleeing prey. In most species, it consists of a line of receptors running along each side of the fish.
Photophores
Fins
Fins are the most distinctive features of fish. They are composed of bony spines or rays protruding from the body, with skin covering them and joining them together, either in a webbed fashion, as seen in most bony fish, or similar to a flipper, as seen in sharks. Apart from the tail or caudal fin, fins have no direct connection with the spine and are supported only by muscles. Their principal function is to help the fish swim, though fins can also be used for gliding or crawling, as seen in the flying fish and frogfish. Fins located in different places on the fish serve different purposes, such as moving forward, turning, and keeping an upright position. For every fin, there are a number of fish species in which that particular fin has been lost during evolution.
Spines and rays
Spines have a variety of uses. In catfish, they are used as a form of defense; many catfish can lock their spines outwards. Triggerfish also use spines to lock themselves into crevices to prevent being pulled out.
Lepidotrichia are bony, bilaterally paired, segmented fin rays found in bony fishes. They develop around actinotrichia as part of the dermal exoskeleton and may contain some cartilage or bone as well. The rays are segmented, appearing as a series of disks stacked one on top of another. The genetic basis for the formation of the fin rays is thought to be genes coding for the proteins actinodin 1 and actinodin 2.
Types of fin
Dorsal fins: Located on the back of the fish, dorsal fins serve to prevent the fish from rolling and assist in sudden turns and stops. Most fishes have one dorsal fin, but some fishes have two or three. In anglerfish, the anterior of the dorsal fin is modified into an illicium and esca, a biological equivalent to a fishing rod and lure. The two to three bones that support the dorsal fin are called the proximal, middle, and distal pterygiophores. In spinous fins, the distal pterygiophore is often fused to the middle or not present at all.
Caudal/Tail fins: Attached to the end of the caudal peduncle (the narrow rear part of the fish's body) and used for propulsion. The hypural joint is the joint between the caudal fin and the last of the vertebrae; the hypural is often fan-shaped. The tail may be heterocercal, reversed heterocercal, protocercal, diphycercal, or homocercal:
Heterocercal: vertebrae extend into the upper lobe of the tail, making it longer (as in sharks)
Reversed heterocercal: vertebrae extend into the lower lobe of the tail, making it longer (as in the Anaspida)
Protocercal: vertebrae extend to the tip of the tail; the tail is symmetrical but not expanded (as in cyclostomes, the ancestral vertebrates, and lancelets).
Diphycercal: vertebrae extend to the tip of the tail; the tail is symmetrical and expanded (as in the bichir, lungfish, lamprey and coelacanth). Most Palaeozoic fishes had a diphycercal tail.
Homocercal: vertebrae extend a very short distance into the upper lobe of the tail; tail still appears superficially symmetric. Most fish have a homocercal tail, but it can be expressed in a variety of shapes. The tail fin can be rounded at the end, truncated (almost vertical edge, as in salmon), forked (ending in two prongs), emarginate (with a slight inward curve), or continuous (dorsal, caudal, and anal fins attached, as in eels).
Anal fins: Located on the ventral surface behind the anus, this fin is used to stabilize the fish while swimming.
Pectoral fins: Found in pairs on each side, usually just behind the operculum. Pectoral fins are homologous to the forelimbs of tetrapods, and aid walking in several fish species such as some anglerfish and the mudskipper. A peculiar function of pectoral fins, highly developed in some fish, is the creation of the dynamic lifting force that assists some fish such as sharks in maintaining depth and also enables the "flight" for flying fish. Certain rays of the pectoral fins may be adapted into finger-like projections, such as in sea robins and flying gurnards.
"Cephalic fins": The "horns" of manta rays and their relatives, sometimes called cephalic fins, are actually a modification of the anterior portion of the pectoral fin.
Pelvic/Ventral fins: Found in pairs on each side ventrally below the pectoral fins, pelvic fins are homologous to the hindlimbs of tetrapods. They assist the fish in going up or down through the water, turning sharply, and stopping quickly. In gobies, the pelvic fins are often fused into a single sucker disk that can be used to attach to objects.
Adipose fin: A soft, fleshy fin found on the back behind the dorsal fin and just in front of the caudal fin. It is absent in many fish families but is found in Salmonidae, characins, and catfishes. Its function has remained a mystery, and the fin is frequently clipped off to mark hatchery-raised fish, though data from 2005 showed that trout with their adipose fin removed have an 8% higher tailbeat frequency. Research published in 2011 has suggested that the fin may be vital for detecting and responding to stimuli such as touch, sound, and changes in pressure. Canadian researchers identified a neural network in the fin, indicating that it likely has a sensory function, but the consequences of removing it remain unclear.
Internal organs
Intestines
As with other vertebrates, the intestines of fish consist of two segments, the small intestine and the large intestine. In most higher vertebrates, the small intestine is further divided into the duodenum and other parts. In fish, the divisions of the small intestine are not as clear, and the terms anterior intestine or proximal intestine may be used instead of duodenum.
In bony fish, the intestine is relatively short, typically around one and a half times the length of the fish's body. It commonly has a number of pyloric caeca, small pouch-like structures along its length that help to increase the overall surface area of the organ for digesting food. There is no ileocaecal valve in teleosts, with the boundary between the small intestine and the rectum being marked only by the end of the digestive epithelium. There is no small intestine as such in non-teleost fish, such as sharks, sturgeons, and lungfish. Instead, the digestive part of the gut forms a spiral intestine, connecting the stomach to the rectum. In this type of gut, the intestine itself is relatively straight, but has a long fold running along the inner surface in a spiral fashion, sometimes for dozens of turns. This fold creates a valve-like structure that greatly increases both the surface area and the effective length of the intestine. The lining of the spiral intestine is similar to that of the small intestine in teleosts and non-mammalian tetrapods. In lampreys, the spiral valve is extremely small, possibly because their diet requires little digestion. Hagfish have no spiral valve at all, with digestion occurring for almost the entire length of the intestine, which is not subdivided into different regions.
Pyloric caeca
Many fish have a number of small outpocketings, called pyloric caeca, along their intestine. The purpose of the caeca is to increase the overall surface area of the intestines, thereby increasing the absorption of nutrients.
The number of pyloric caeca varies widely between species, and in some species of fish no caeca are present at all. Species with few or no caeca compensate by having longer intestines or by having taller or more convoluted intestinal villi, thereby achieving similar levels of absorptive surface area.
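A back-of-the-envelope calculation (all dimensions hypothetical, chosen only for illustration) shows how a modest set of caeca adds absorptive area, and how much extra intestine length a caeca-less species would need to match it:

    import math

    def lateral_area(radius, length):
        # Absorptive (lateral) surface area of a cylindrical tube.
        return 2 * math.pi * radius * length

    gut = lateral_area(radius=0.5, length=30.0)        # plain intestine, cm^2
    caeca = 60 * lateral_area(radius=0.1, length=1.5)  # 60 small pouches
    extra_length = caeca / (2 * math.pi * 0.5)         # equivalent added gut
    print(round(gut), round(gut + caeca), round(extra_length, 1))

With these made-up numbers, sixty small pouches add roughly 60% more surface area, equivalent to lengthening the main intestine by about 18 cm.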
Lungfish also have a pouch located at the beginning of their intestine, also called a pyloric caecum, but it has a different structure and function than the pyloric caeca of other fish species. The lungfish caecum is homologous (due to common descent) with the caecum present in most amniotes (the tetrapod vertebrates that include all mammals, reptiles, and birds). In most herbivores the caecum receives partially digested food from the small intestine and serves as a fermentation chamber to break down cellulose (such as grass or leaves) in the diet. In carnivores the caecum is often greatly reduced or missing.
Stomach
As with other vertebrates, the relative positions of the esophageal and duodenal openings to the stomach remain relatively constant. As a result, the stomach always curves somewhat to the left before curving back to meet the pyloric sphincter. However, lampreys, hagfishes, chimaeras, lungfishes, and some teleost fish have no stomach at all, with the esophagus opening directly into the intestine. These fish consume diets that either require little storage of food, no pre-digestion with gastric juices, or both.
Kidneys
The kidneys of fish are typically narrow, elongated organs, occupying a significant portion of the trunk. They are similar to the mesonephros of higher vertebrates (reptiles, birds, and mammals). The kidneys contain clusters of nephrons, serviced by collecting ducts which usually drain into a mesonephric duct. However, the situation is not always so simple. In cartilaginous fish, there is also a shorter duct which drains the posterior (metanephric) parts of the kidney, and joins with the mesonephric duct at the bladder or cloaca. Indeed, in many cartilaginous fish, the anterior portion of the kidney may degenerate or cease to function altogether in the adult. Hagfish and lamprey kidneys are unusually simple. They consist of a row of nephrons, each emptying directly into the mesonephric duct.
In some fish, such as the Nile tilapia, the kidney visibly comprises three parts: the head, trunk, and tail kidneys.
Fish do not have the discrete adrenal gland, with its distinct cortex and medulla, found in mammals; instead, the interrenal and chromaffin cells are located within the head kidney.
Urinary bladder
Spleen
The spleen is found in nearly all vertebrates. It is a non-vital organ, similar in structure to a large lymph node, acting primarily as a blood filter and playing important roles with regard to red blood cells and the immune system. In cartilaginous and bony fish it consists primarily of red pulp and is normally a somewhat elongated organ, as it lies inside the serosal lining of the intestine. The only vertebrates lacking a spleen are the lampreys and hagfishes; even in these animals, there is a diffuse layer of haematopoietic tissue within the gut wall, which has a similar structure to red pulp and is presumed to be homologous to the spleen of higher vertebrates.
Liver
The liver is a large vital organ present in all fish. It has a wide range of functions, including detoxification, protein synthesis, and production of biochemicals necessary for digestion. It is very susceptible to contamination by organic and inorganic compounds because they can accumulate over time and cause potentially life-threatening conditions. Because of the liver's capacity for detoxification and storage of harmful components, it is often used as an environmental biomarker.
Heart
Fish have what is often described as a two-chambered heart, consisting of one atrium to receive blood and one ventricle to pump it, in contrast to the three chambers (two atria, one ventricle) of amphibian and most reptile hearts and the four chambers (two atria, two ventricles) of mammal and bird hearts. However, the fish heart has entry and exit compartments that may be called chambers, so it is also sometimes described as three-chambered or four-chambered, depending on what is counted as a chamber. The atrium and ventricle are sometimes considered "true chambers", while the others are considered "accessory chambers". As in humans, fish have a closed circulatory system in which the blood is contained in a circuit of blood vessels and never leaves them. Deoxygenated blood is carried from different parts of the body through the veins to the heart; from the heart it is pumped to the gills to be oxygenated, and is then circulated through the rest of the body.
The four compartments are arranged sequentially:
Sinus venosus: A thin-walled sac or reservoir with some cardiac muscle that collects deoxygenated blood through the incoming hepatic and cardinal veins.
Atrium: A thicker-walled, muscular chamber that sends blood to the ventricle.
Ventricle: A thick-walled, muscular chamber that pumps the blood to the fourth part, the outflow tract. The shape of the ventricle varies considerably, usually tubular in fish with elongated bodies, pyramidal with a triangular base in others, or sometimes sac-like in some marine fish.
Outflow tract (OFT): Leads to the ventral aorta and consists of the tubular conus arteriosus, the bulbus arteriosus, or both. The conus arteriosus, typically found in more primitive species of fish, contracts to assist blood flow to the aorta, while the bulbus arteriosus does not.
Ostial valves, consisting of flap-like connective tissues, prevent blood from flowing backward through the compartments. The ostial valve between the sinus venosus and atrium is called the sino-atrial valve, which closes during ventricular contraction. Between the atrium and ventricle is an ostial valve called the atrioventricular valve, and between the bulbus arteriosus and ventricle is an ostial valve called the bulbo-ventricular valve. The conus arteriosus has a variable number of semilunar valves.
The ventral aorta delivers blood to the gills where it is oxygenated and flows, through the dorsal aorta, into the rest of the body. (In tetrapods, the ventral aorta is divided in two; one half forms the ascending aorta, while the other forms the pulmonary artery).
The circulatory systems of all vertebrates are closed. Fish have the simplest circulatory system, consisting of only one circuit, with the blood being pumped through the capillaries of the gills and on to the capillaries of the body tissues. This is known as single cycle circulation.
In the adult fish, the four compartments are not arranged in a straight row but form an S-shape, with the latter two compartments lying above the former two. This relatively simple pattern is found in cartilaginous fish and in ray-finned fish. In teleosts, the conus arteriosus is very small and can more accurately be described as part of the aorta rather than of the heart proper. The conus arteriosus is not present in any amniotes, presumably having been absorbed into the ventricles over the course of evolution. Similarly, while the sinus venosus is present as a vestigial structure in some reptiles and birds, it is otherwise absorbed into the right atrium and is no longer distinguishable.
Swim bladder
Weberian apparatus
Fishes of the superorder Ostariophysi possess a structure called the Weberian apparatus, a modification that allows them to hear better. This ability may explain the marked success of ostariophysian fishes. The apparatus is made up of a set of bones known as the Weberian ossicles, a chain of small bones that connect the auditory system to the swim bladder. The ossicles connect the gas bladder wall with a Y-shaped lymph sinus that lies next to the lymph-filled transverse canal joining the saccules of the right and left ears, allowing the transmission of vibrations to the inner ear. A fully functioning Weberian apparatus consists of the swim bladder, the Weberian ossicles, a portion of the anterior vertebral column, and some muscles and ligaments.
Reproductive organs
Fish reproductive organs include testes and ovaries. In most species, gonads are paired organs of similar size, which can be partially or totally fused. There may also be a range of secondary organs that increase reproductive fitness. The genital papilla is a small, fleshy tube behind the anus in teleost fishes from which the sperm or eggs are released; the sex of a fish can often be determined by the shape of its papilla. Sex determination in fish, which depends on intrinsic genetic factors, is followed by sex differentiation through gene expression and feedback mechanisms that ensure stable levels of particular hormones and cellular profiles. Hermaphroditic species are an exception in that they are able to alter the course of sex differentiation in order to maximize their fitness. There are various determination mechanisms for gonadal sex in fish, and processes that aid development of gonadal function. Gonadal sex is influenced by a number of factors, including cell-autonomous genetic mechanisms and endocrine, paracrine, behavioral, or environmental signals; these enable the primordial germ cells (PGCs) to interpret internal or external stimuli and develop into spermatogonia or oogonia.

Spermatogenesis in the testes is a process in which spermatogonia differentiate into spermatocytes through mitosis and meiosis, which halves the number of chromosomes, creating haploid spermatids. During spermiogenesis, the last stage of spermatogenesis, the haploid spermatids develop into spermatozoa. In the ovaries, oogonia also undergo mitosis and meiosis during oogenesis, giving rise to primary oocytes and eventually the ovum. The primary oocyte divides to produce the secondary oocyte as well as a polar body, before the secondary oocyte develops into the haploid ootid.
Testes
Most male fish have two testes of similar size. In the case of sharks, the testis on the right side is usually larger. The primitive jawless fish have only a single testis located in the midline of the body, although even this forms from the fusion of paired structures in the embryo.
Under a tough membranous shell, the tunica albuginea, the testis of some teleost fish contains very fine coiled tubes called seminiferous tubules. The tubules are lined with a layer of germ cells that, from puberty into old age, develop into sperm cells (also known as spermatozoa or male gametes). The developing sperm travel through the seminiferous tubules to the rete testis located in the mediastinum testis, to the efferent ducts, and then to the epididymides (depending on the species), where newly created sperm cells mature (see spermatogenesis). The sperm move into the vasa deferentia and are eventually expelled through the urethra and out of the urethral orifice by muscular contractions.
However, most fish do not possess seminiferous tubules. Instead, the sperm are produced in spherical structures called sperm ampullae. These are seasonal structures, releasing their contents during the breeding season and then being reabsorbed by the body. Before the next breeding season, new sperm ampullae begin to form and ripen. The ampullae are otherwise essentially identical to the seminiferous tubules in higher vertebrates, including the same range of cell types.
In terms of spermatogonia distribution, teleost testes have two types of structure: in the most common type, spermatogonia occur all along the seminiferous tubules, while in Atherinomorpha they are confined to the distal portion of these structures. Fish can exhibit cystic or semi-cystic spermatogenesis, depending on the phase at which germ cells are released from cysts into the lumen of the seminiferous tubules.
Ovaries
Many of the features found in ovaries are common to all vertebrates, including the presence of follicular cells and a tunica albuginea. There may be hundreds or even millions of fertile eggs present in the ovary of a fish at any given time, and fresh eggs may be developing from the germinal epithelium throughout life. Corpora lutea are found only in mammals and in some elasmobranch fish; in other species, the remnants of the follicle are quickly resorbed by the ovary. The ovary of teleosts often contains a hollow, lymph-filled space which opens into the oviduct, and into which the eggs are shed. Most normal female fish have two ovaries, though in some elasmobranchs only the right ovary develops fully. In the primitive jawless fish and some teleosts, there is only one ovary, formed by the fusion of the paired organs in the embryo.
Fish ovaries may be of three types: gymnovarian, secondary gymnovarian or cystovarian. In the first type, the oocytes are released directly into the coelomic cavity and then enter the ostium, then through the oviduct and are eliminated. Secondary gymnovarian ovaries shed ova into the coelom from which they go directly into the oviduct. In the third type, the oocytes are conveyed to the exterior through the oviduct. Gymnovaries are the primitive condition found in lungfish, sturgeon, and bowfin. Cystovaries characterize most teleosts, where the ovary lumen has continuity with the oviduct. Secondary gymnovaries are found in salmonids and a few other teleosts.
Nervous system
Central nervous system
Fish typically have quite small brains relative to body size compared with other vertebrates, typically one-fifteenth the brain mass of a similarly sized bird or mammal. However, some fish have relatively large brains, most notably mormyrids and sharks, which have brains about as massive relative to body weight as birds and marsupials.
Fish brains are divided into several regions. At the front are the olfactory lobes, a pair of structures that receive and process signals from the nostrils via the two olfactory nerves. Similar to the way humans smell chemicals in the air, fish smell chemicals in the water by tasting them. The olfactory lobes are very large in fish that hunt primarily by smell, such as hagfish, sharks, and catfish. Behind the olfactory lobes is the two-lobed telencephalon, the structural equivalent to the cerebrum in higher vertebrates. In fish the telencephalon is concerned mostly with olfaction. Together these structures form the forebrain.
The forebrain is connected to the midbrain via the diencephalon, which performs functions associated with hormones and homeostasis. The pineal body lies just above the diencephalon; this structure detects light, maintains circadian rhythms, and controls colour changes. The midbrain or mesencephalon contains the two optic lobes, which are very large in species that hunt by sight, such as rainbow trout and cichlids.
The hindbrain or metencephalon is particularly involved in swimming and balance. The cerebellum is a single-lobed structure that is typically the biggest part of the brain. Hagfish and lampreys have relatively small cerebella, while the mormyrid cerebellum is massive and apparently involved in their electrical sense.
The brain stem or myelencephalon is the brain's posterior. As well as controlling some muscles and body organs, in bony fish at least, the brain stem governs respiration and osmoregulation.
Vertebrates are the only chordate group to exhibit a proper brain. A slight swelling of the anterior end of the dorsal nerve cord is found in the lancelet, though it lacks eyes and the other complex sense organs comparable to those of vertebrates; other chordates do not show any trends towards cephalisation. The central nervous system is based on a hollow nerve tube running along the length of the animal, from which the peripheral nervous system branches out to innervate the various systems. The front end of the nerve tube is expanded by a thickening of the walls and expansion of the central canal of the spinal cord into three primary brain vesicles: the prosencephalon (forebrain), mesencephalon (midbrain) and rhombencephalon (hindbrain), which are further differentiated in the various vertebrate groups. Two laterally placed eyes form around outgrowths of the midbrain, except in hagfish, though this may be a secondary loss. The forebrain is well developed and subdivided in most tetrapods, while the midbrain dominates in many fish and some salamanders. Vesicles of the forebrain are usually paired, giving rise to hemispheres like the cerebral hemispheres in mammals. The resulting anatomy of the central nervous system, with a single, hollow dorsal nerve cord topped by a series of (often paired) vesicles, is unique to vertebrates.
Cerebellum
The circuits in the cerebellum are similar across all classes of vertebrates, including fish, reptiles, birds, and mammals. There is also an analogous brain structure in cephalopods with well-developed brains, such as octopuses. This has been taken as evidence that the cerebellum performs functions important to all animal species with a brain.
There is considerable variation in the size and shape of the cerebellum in different vertebrate species. In amphibians, lampreys, and hagfish, the cerebellum is little developed; in the latter two groups, it is barely distinguishable from the brain-stem. Although the spinocerebellum is present in these groups, the primary structures are small paired nuclei corresponding to the vestibulocerebellum.
The cerebellum of cartilaginous and bony fishes is extraordinarily large and complex. In at least one important respect, it differs in internal structure from the mammalian cerebellum: The fish cerebellum does not contain discrete deep cerebellar nuclei. Instead, the primary targets of Purkinje cells are a distinct type of cell distributed across the cerebellar cortex, a type not seen in mammals. In mormyrids (a family of weakly electrosensitive freshwater fish), the cerebellum is considerably larger than the rest of the brain put together. The largest part of it is a special structure called the valvula, which has an unusually regular architecture and receives much of its input from the electrosensory system.
Most species of fish and amphibians possess a lateral line system that senses pressure waves in water. One of the brain areas that receives primary input from the lateral line organ, the medial octavolateral nucleus, has a cerebellum-like structure, with granule cells and parallel fibers. In electrosensitive fish, the input from the electrosensory system goes to the dorsal octavolateral nucleus, which also has a cerebellum-like structure. In ray-finned fishes (by far the largest group), the optic tectum has a layer—the marginal layer—that is cerebellum-like.
Identified neurons
A neuron is "identified" if it has properties that distinguish it from every other neuron in the same animal—properties such as location, neurotransmitter, gene expression pattern, and connectivity—and if every individual organism belonging to the same species has one and only one neuron with the same set of properties. In vertebrate nervous systems, very few neurons are "identified" in this sense (in humans, there are believed to be none). In simpler nervous systems, some or all neurons may be thus unique.
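The defining criterion can be restated as a simple uniqueness check: a property profile marks an identified neuron only if it occurs exactly once in every individual of the species. The toy Python sketch below expresses this; the property labels are made up, and it is a conceptual model rather than a description of any real dataset:

    from collections import Counter

    def identified_profiles(animals):
        # Property profiles occurring exactly once in every animal.
        counts = [Counter(a) for a in animals]
        return {p for p in animals[0] if all(c[p] == 1 for c in counts)}

    fish_a = ["mauthner_L", "mauthner_R", "generic", "generic"]
    fish_b = ["mauthner_L", "mauthner_R", "generic"]
    print(identified_profiles([fish_a, fish_b]))  # Mauthner cells only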
In vertebrates, the best known identified neurons are the gigantic Mauthner cells of fish. Every fish has two Mauthner cells, located in the bottom part of the brainstem, one on the left side and one on the right. Each Mauthner cell has an axon that crosses over, innervating neurons at the same brain level and then travelling down through the spinal cord, making numerous connections as it goes. The synapses generated by a Mauthner cell are so powerful that a single action potential gives rise to a major behavioral response: within milliseconds the fish curves its body into a C-shape, then straightens, thereby propelling itself rapidly forward. Functionally, this is a fast escape response, triggered most easily by a strong sound wave or pressure wave impinging on the lateral line organ of the fish. Mauthner cells are not the only identified neurons in fish—there are about 20 more types, including pairs of "Mauthner cell analogs" in each spinal segmental nucleus. Although a Mauthner cell is capable of bringing about an escape response all by itself, in the context of ordinary behavior, other types of cells usually contribute to shaping the amplitude and direction of the response.
Mauthner cells have been described as command neurons. A command neuron is a special type of identified neuron, defined as a neuron that is capable of driving a specific behavior all by itself. Such neurons appear most commonly in the fast escape systems of various species—the squid giant axon and squid giant synapse, used for pioneering experiments in neurophysiology because of their enormous size, both participate in the fast escape circuit of the squid. The concept of a command neuron has, however, become controversial, because of studies showing that some neurons that initially appeared to fit the description were really only capable of evoking a response in a limited set of circumstances.
Immune system
Immune organs vary by type of fish. In the jawless fish (lampreys and hagfish), true lymphoid organs are absent; these fish rely on regions of lymphoid tissue within other organs to produce immune cells. For example, erythrocytes, macrophages and plasma cells are produced in the anterior kidney (or pronephros) and in some areas of the gut (where granulocytes mature); in hagfish, these tissues resemble primitive bone marrow.
Cartilaginous fish (sharks and rays) have a more advanced immune system. They have three specialized organs that are unique to Chondrichthyes: the epigonal organs (lymphoid tissues similar to mammalian bone marrow) that surround the gonads, the Leydig's organ within the walls of the esophagus, and a spiral valve in the intestine. These organs house typical immune cells (granulocytes, lymphocytes and plasma cells). They also possess an identifiable thymus and a well-developed spleen (their most important immune organ) where various lymphocytes, plasma cells and macrophages develop and are stored.
Chondrostean fish (sturgeons, paddlefish and bichirs) possess a major site for the production of granulocytes within a mass that is associated with the meninges, the membranes surrounding the central nervous system. Their heart is frequently covered with tissue that contains lymphocytes, reticular cells and a small number of macrophages. The chondrostean kidney is an important hemopoietic organ; it is where erythrocytes, granulocytes, lymphocytes and macrophages develop.
Like chondrostean fish, the major immune tissues of bony fish (Teleostei) include the kidney (especially the anterior kidney), which houses many different immune cells. In addition, teleost fish possess a thymus, spleen and scattered immune areas within mucosal tissues (e.g. in the skin, gills, gut and gonads). Much like in the mammalian immune system, teleost erythrocytes, neutrophils and granulocytes are believed to reside in the spleen, whereas lymphocytes are the major cell type found in the thymus. In 2006, a lymphatic system similar to that of mammals was described in one teleost species, the zebrafish. Although not yet confirmed, this system is presumably where unstimulated naive T cells accumulate while waiting to encounter an antigen.